Handbook of Photonics for Biomedical Engineering [1 ed.] 9789400750517, 9789400750524


English · XXI, 947 pages · 2017



Aaron Ho-Pui Ho Donghyun Kim Michael G. Somekh Editors


Handbook of Photonics for Biomedical Engineering
With 491 Figures and 17 Tables

Editors Aaron Ho-Pui Ho Department of Electronic Engineering The Chinese University of Hong Kong Shatin, Hong Kong

Donghyun Kim School of Electrical and Electronic Engineering Yonsei University Seoul, Republic of Korea

Michael G. Somekh Department of Electronic and Information Engineering The Hong Kong Polytechnic University Hung Hom, Kowloon, Hong Kong

ISBN 978-94-007-5051-7
ISBN 978-94-007-5052-4 (eBook)
ISBN 978-94-007-5053-1 (print and electronic bundle)
DOI 10.1007/978-94-007-5052-4
Library of Congress Control Number: 2017930702

© Springer Science+Business Media Dordrecht 2017

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made.

Printed on acid-free paper

This Springer imprint is published by Springer Nature
The registered company is Springer Science+Business Media B.V.
The registered company address is: Van Godewijckstraat 30, 3311 GX Dordrecht, The Netherlands

Preface

Let there be light. Light is so fundamental to the human condition that numerous light-based techniques have emerged for applications touching nearly every corner of human life, and light technology is critical to all aspects of biomedical engineering. For this reason, the term biophotonics has emerged to represent light-based science and technology for biomedical engineering. It includes passive applications, such as the various biosensing and imaging techniques, which monitor biomolecular and macroscopic processes with minimal disruption to the system under study. Biophotonics also proves highly useful for active applications, in which light interacts with matter and active photon engagement with biomolecules, cells, and tissue is strongly desired. Our collaboration to summarize recent trends in biophotonics started in 2013. Although this volume is by no means exhaustive at a time when novel ideas keep emerging almost daily, we have tried to address some of the hottest areas in biophotonics. This handbook is organized into sections on passive and active biophotonic applications. The sections on passive biophotonics cover biophotonic sensing techniques and applications, in vivo biomedical imaging techniques, and novel optical microscopy techniques; the sections on active biophotonics cover light manipulation and therapeutic applications. Finally, we have also included a section addressing emerging biophotonic materials and devices. Throughout the volume, we have tried to maintain topical and geographical balance. Biophotonic sensing techniques and applications cover developments in flow cytometry, surface-enhanced Raman spectroscopy, and biosensors based on surface plasmon resonance and microcavity resonators. The section also covers biosensors using photonic crystal optical fibers and lab-on-a-chip systems for point-of-care applications.
The section on in vivo biomedical imaging techniques addresses various imaging modalities for the diagnosis of tissues and organs, for example, methods based on diffuse optics and optical coherence tomography. Photoacoustics has recently drawn significant attention and is therefore discussed in three chapters covering different aspects of the technology. Microscopy has also been an increasingly hot topic: diverse techniques are discussed for achieving super-resolution and other desired imaging characteristics, using fluorescence lifetime imaging and approaches based on plasmonic super-localization, nonlinear multimodality, evanescent waves, adaptive optics, and resonant waveguides.


Active biophotonics is extremely important for clinical applications; light manipulation and therapeutic applications are addressed in chapters on photodynamic therapy and on the manipulation of cells or particles using optical tweezers in various environments. Single-particle tracking photonics is also included. Finally, materials and devices for emerging biophotonic applications are the topic of the final section, with chapters discussing, for example, nanocrystals and quantum dots, and devices exploiting extraordinary optical transmission and fluidics. We hope that this handbook will give readers interested in photonics for bioengineering applications an overview of what is happening now and provide insight into future directions of the field. February 2017

Aaron Ho-Pui Ho Donghyun Kim Michael G. Somekh

Contents

Part I  Biophotonic Sensing Techniques and Applications ..... 1

1  In Vivo Flow Cytometry Combined with Confocal Microscopy to Study Cancer Metastasis ..... 3
   Xun-Bin Wei, Zhi-Chao Fan, Dan Wei, Rongrong Liu, Yuanzhen Suo, and Xiao-Fu Weng

2  SERS for Sensitive Biosensing and Imaging ..... 29
   U. S. Dinish and Malini Olivo

3  Photonic Crystal Fiber-Based Biosensors ..... 61
   Xia Yu, Derrick Yong, and Yating Zhang

4  Lab-on-a-Chip Device and System for Point-of-Care Applications ..... 87
   Tsung-Feng Wu, Sung Hwan Cho, Yu-Jui Chiu, and Yu-Hwa Lo

5  SPR Biosensors ..... 123
   Aaron Ho-Pui Ho, Shu-Yuen Wu, Siu-Kai Kong, Shuwen Zeng, and Ken-Tye Yong

6  Highly Sensitive Sensing with High-Q Whispering Gallery Microcavities ..... 147
   Bei-Bei Li, Xiao-Chong Yu, Yi-Wen Hu, William Clements, and Yun-Feng Xiao

Part II  In Vivo Biomedical Imaging Techniques ..... 177

7  Monitoring Cancer Therapy with Diffuse Optical Methods ..... 179
   Ulas Sunar and Daniel J. Rohrbach

8  Optical and Optoacoustic Imaging in the Diffusive Regime ..... 221
   Adrian Taruttis and Vasilis Ntziachristos

9  Multifunctional Photoacoustic Tomography ..... 247
   Changho Lee, Sungjo Park, Jeesu Kim, and Chulhong Kim

10  Exploiting Complex Media for Biomedical Applications ..... 271
    Youngwoon Choi, Moonseok Kim, and Wonshik Choi

11  Probing Different Biological Length Scales Using Photoacoustics: From 1 to 1000 MHz ..... 303
    Eno Hysi, Eric M. Strohm, and Michael C. Kolios

12  Spectral-Domain Optical Coherence Phase Microscopy: A New Optical Imaging Tool for Quantitative Biology ..... 325
    Suho Ryu and Chulmin Joo

Part III  Novel Optical Microscopy Techniques ..... 351

13  Fluorescence Lifetime Imaging ..... 353
    Klaus Suhling, Liisa M. Hirvonen, James A. Levitt, Pei-Hua Chung, Carolyn Tregidgo, Dmitri A. Rusakov, Kaiyu Zheng, Simon Ameer-Beg, Simon P. Poland, Simao Coelho, Robert Henderson, and Nikola Krstajic

14  High-Resolution Optical Microscopy for Biological Applications ..... 407
    Yoshimasa Kawata and Wataru Inami

15  Novel Plasmonic Microscopy: Principle and Applications ..... 429
    Xiaocong Yuan and Changjun Min

16  Nonlinear Multimodal Optical Imaging ..... 461
    Yan Zeng, Qiqi Sun, and Jianan Y. Qu

17  Surface Plasmon, Surface Wave, and Enhanced Evanescent Wave Microscopy ..... 503
    Michael G. Somekh and Suejit Pechprasarn

18  Surface Plasmon-Enhanced Super-Localization Microscopy ..... 545
    Youngjin Oh, Jong-ryul Choi, Wonju Lee, and Donghyun Kim

19  Adaptive Optics for Aberration Correction in Optical Microscopy ..... 585
    Amanda J. Wright and Simon P. Poland

20  Resonant Waveguide Imaging of Living Systems: From Evanescent to Propagative Light ..... 613
    F. Argoul, L. Berguiga, J. Elezgaray, and A. Arneodo

Part IV  Light Manipulation and Therapeutic Applications ..... 655

21  Photodynamic Therapy ..... 657
    Wing-Ping Fong, Hing-Yuen Yeung, Pui-Chi Lo, and Dennis K. P. Ng

22  Fiber Optical Tweezers for Manipulation and Sensing of Bioparticles ..... 683
    Yuxiang Liu and Miao Yu

23  Application of Ultrashort-Pulsed Lasers for Optical Manipulation of Biological Functions ..... 717
    Jonghee Yoon and Chulhee Choi

24  Optical-Tweezers-Based Microrheology of Soft Materials and Living Cells ..... 731
    Ming-Tzo Wei, Olga Latinovic, Lawrence A. Hough, Yin-Quan Chen, H. Daniel Ou-Yang, and Arthur Chiou

25  3-D Single Particle Tracking Using Dual Images Divided by Prism: Method and Application to Optical Trapping ..... 755
    Takanobu A. Katoh, Shoko Fujimura, and Takayuki Nishizaka

26  Optical Manipulation and Sensing in a Microfluidic Device ..... 767
    Daniel Day, Stephen Weber, and Min Gu

Part V  Emerging Biophotonic Materials and Devices ..... 807

27  Functional Metal Nanocrystals for Biomedical Applications ..... 809
    Lei Shao and Jianfang Wang

28  Cadmium-Free Quantum Dots for Biophotonic Imaging and Sensing ..... 841
    Butian Zhang, Yucheng Wang, Rui Hu, Indrajit Roy, and Ken-Tye Yong

29  Development of Extraordinary Optical Transmission-Based Techniques for Biomedical Applications ..... 871
    Seunghun Lee, Hyerin Song, Seonhee Hwang, Jong-ryul Choi, and Kyujung Kim

30  Miniaturized Fluidic Devices and Their Biophotonic Applications ..... 893
    Alana Mauluidy Soehartono, Liying Hong, Guang Yang, Peiyi Song, Hui Kit Stephanie Yap, Kok Ken Chan, Peter Han Joo Chong, and Ken-Tye Yong

Index ..... 941

About the Editors

Professor Aaron Ho-Pui Ho received his B.Eng. and Ph.D. in Electrical and Electronic Engineering from the University of Nottingham in 1986 and 1990, respectively. He has held academic positions as Associate Dean of Engineering, CUHK (2007–2010), and Assistant Professor in the Department of Physics and Materials Science, City University of Hong Kong (1996–2002). Prior to returning to Hong Kong, Aaron was with Hewlett-Packard (1994–1996), where he was responsible for process development for high-volume production of InGaAs PIN diodes and InGaAsP buried multiquantum well heterostructure 1300/1550 nm lasers. His industrial experience covers metalorganic vapor phase epitaxy (MOVPE), wafer-scale InP device fabrication, and packaging of telecom photonic products. After completing his Ph.D. thesis, entitled “Zinc Diffusion Enhanced Disordering in AlAs-GaAs Superlattices,” Aaron spent five years as a postdoctoral researcher (1989–1994) at the University of Nottingham and the University of Leeds, UK. He was involved in two research projects: (i) giant magnetoresistance of Co/Cu superlattices prepared by molecular beam epitaxy (MBE) and (ii) laser ultrasound generation and detection for nondestructive evaluation of ceramic coatings (sponsored by the Rolls-Royce aircraft engine division). This intensive exposure to solid-state physics, thin-film materials science, and laser optics was instrumental in preparing Aaron for his subsequent academic career. Aaron’s publications include 120 peer-reviewed journal papers, 120 conference presentations, and 4 book chapters, and he holds 5 United States and 16 Chinese patents.


Dr. Donghyun Kim received a B.S. summa cum laude and an M.S. from Seoul National University in 1993 and 1995, respectively, both in electronics engineering. He received a Ph.D. in electrical engineering from the Massachusetts Institute of Technology (MIT), MA, USA, in 2001, working on novel multidimensional display technologies at the MIT Media Laboratory and on smart optical filters for military applications. He then worked on next-generation fiber-optic access communication systems as a senior research scientist at the Photonic Research and Test Center of Corning Inc., Somerset, NJ, and investigated cellular biophotonic sensors for in vitro cell culture devices as a postdoctoral fellow at the Department of Chemical and Biomolecular Engineering of Cornell University, Ithaca, NY. Since 2004, he has been in charge of the Biophotonics Engineering Laboratory of Yonsei University, Seoul, Korea. He served as program chair for the Information Technology Program of Underwood International College, Yonsei University, and has been the director of the Yonsei Institute of Medical Instruments Technology since 2011. He has also been involved with the Optical Society of Korea as an academic director since 2015. The main theme of his research has been fundamental studies of nanophotonic technology and its applications in biomedical engineering, with an emphasis on plasmonic techniques. Plasmonics has rapidly emerged as a novel toolbox enabling highly sensitive nanosensors as well as super-resolution imaging platforms. He has given 60+ invited lectures on related topics and written more than 100 peer-reviewed journal and conference publications on nano- and biophotonics, many of which were the results of collaboration with researchers of diverse backgrounds across the world. He also holds 30+ international patents and works closely with many renowned industrial partners in the area.
In recognition of these research achievements, he was awarded a Korean Research Foundation Young Investigator Award in 2005, an LG Scholar Fellowship in 2009, and, more recently, the Leap Research Award, one of the most prestigious funding awards of the National Research Foundation of Korea, in consecutive terms. He has organized many local and international conferences in the field of nano/biophotonics, including the Asian and Pacific Rim Symposium on Biophotonics, Surface Plasmon Photonics 2011, the SPIE Global Congress on Nanosystems in Engineering and Medicine 2012, the International Conference on Nano-Bio Sensing, Imaging, and Spectroscopy 2015, and the 5th Asia-Pacific Optical Sensors Conference. He has also held visiting researcher appointments at Rutgers, The State University of New Jersey, and the University of California, Irvine.


Professor Michael G. Somekh took his first degree, in Metallurgy and Materials Science, from Oxford University. He completed his Ph.D. in microwave electronics in the Department of Physics, University of Lancaster, in 1981, and then returned to Oxford to work on contrast mechanisms in acoustic microscopy, first as a research associate and later as an EPSRC Research Fellow. He subsequently joined University College London as Lecturer and Director of the Wolfson Unit for microNDE. In 1989 he joined the University of Nottingham as Senior Lecturer and was promoted to Reader (1992) and Professor of Optical Engineering (1994). While at Nottingham he founded the Applied Optics Group, now one of the largest in the UK, and became director of IBIOS, the Institute of Biophysics, Imaging and Optical Science. His research interests are novel microscopy, imaging sensors, and laser ultrasonics. He joined the Hong Kong Polytechnic University as Chair Professor in Biophotonics in 2014. He holds honorary professorships at Zhejiang University and at his former institution, the University of Nottingham, where he maintains strong research collaborations. Mike was elected a Fellow of the Royal Academy of Engineering (the UK's national academy of engineering) in 2012 in recognition of his interdisciplinary work.

Contributors

Simon Ameer-Beg Randall Division of Cell and Molecular Biophysics, King’s College London, London, UK Richard Dimbleby Department of Cancer Research, Division of Cancer Studies, New Hunt’s House, King’s College London, London, UK F. Argoul LOMA (Laboratoire Ondes et Matière d’Aquitaine), CNRS, UMR 5798, Université de Bordeaux, Talence, France CNRS UMR5672, LP ENS Lyon, Université de Lyon, Lyon, France A. Arneodo LOMA (Laboratoire Ondes et Matière d’Aquitaine), CNRS, UMR 5798, Université de Bordeaux, Talence, France CNRS UMR5672, LP ENS Lyon, Université de Lyon, Lyon, France L. Berguiga CNRS, UMR 5270, INL, INSA Lyon, Bâtiment Blaise Pascal, Villeurbanne, France Kok Ken Chan School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore Yin-Quan Chen Institute of Biophotonics, National Yang-Ming University, Taipei, Taiwan Arthur Chiou Institute of Biophotonics, National Yang-Ming University, Taipei, Taiwan Biophotonics and Molecular Imaging Research Center, National Yang-Ming University, Taipei, Taiwan Yu-Jui Chiu Materials Science and Engineering Program, University of California, San Diego, La Jolla, CA, USA Sung Hwan Cho NanoCellect Biomedical Inc, San Diego, CA, USA Chulhee Choi Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea


KAIST Institute for Optical Science and Technology, Korea Advanced Institute of Science and Technology, Daejeon, South Korea KAIST Institute for the BioCentury, Korea Advanced Institute of Science and Technology, Daejeon, South Korea Jong-ryul Choi School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea Medical Device Development Center, Daegu-Gyeongbuk Medical Innovation Foundation (DGMIF), Daegu, South Korea Wonshik Choi Department of Physics, Korea University, Seoul, South Korea Youngwoon Choi Department of Physics, Korea University, Seoul, South Korea Peter Han Joo Chong Department of Electrical and Electronic Engineering, Auckland University of Technology, Auckland, New Zealand Pei-Hua Chung Department of Physics, King’s College London, London, UK William Clements State Key Laboratory for Mesoscopic Physics, Department of Physics, Peking University, Beijing, People’s Republic of China Simao Coelho Randall Division of Cell and Molecular Biophysics, King’s College London, London, UK Richard Dimbleby Department of Cancer Research, Division of Cancer Studies, New Hunt’s House, King’s College London, London, UK Daniel Day Centre for Micro-Photonics, Faculty of Engineering and Industrial Sciences, Swinburne University of Technology, Melbourne, VIC, Australia U. S. Dinish Bio-Optical Imaging Group, Singapore Bioimaging Consortium (SBIC), A*STAR, Singapore, Singapore J. Elezgaray CBMN, CNRS UMR5248, Université de Bordeaux, Pessac, France Zhi-Chao Fan Institutes of Biomedical Sciences, Fudan University, Shanghai, China Department of Chemistry, Fudan University, Shanghai, China Wing-Ping Fong School of Life Sciences, The Chinese University of Hong Kong, Hong Kong, China Shoko Fujimura Department of Physics, Gakushuin University, Tokyo, Japan Min Gu Centre for Micro-Photonics, Faculty of Engineering and Industrial Sciences, Swinburne University of Technology, Melbourne, VIC, Australia Robert Henderson CMOS Sensors and Systems Group, Integrated Micro and Nano Systems, School of Engineering, University of Edinburgh, Edinburgh, UK Liisa M. Hirvonen Department of Physics, King’s College London, London, UK


Aaron Ho-Pui Ho Department of Electronic Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong Liying Hong School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore Lawrence A. Hough Complex Assemblies of Soft Matter Lab, UMI 3254 CNRS/ UPENN/Rhodia, Bristol, PA, USA Rui Hu School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore Yi-Wen Hu State Key Laboratory for Mesoscopic Physics, Department of Physics, Peking University, Beijing, People’s Republic of China Seonhee Hwang Department of Advanced Circuit Interconnection, Pusan National University, Busan, South Korea Eno Hysi Department of Physics, Ryerson University, Toronto, ON, Canada Wataru Inami Faculty of Engineering, Department of Mechanical Engineering, Shizuoka University, Hamamatsu, Japan Chulmin Joo Department of Mechanical Engineering, Yonsei University, Seodaemun-gu, Seoul, South Korea Takanobu A. Katoh Department of Physics, Gakushuin University, Tokyo, Japan Yoshimasa Kawata Faculty of Engineering, Department of Mechanical Engineering, Shizuoka University, Hamamatsu, Japan Chulhong Kim Departments of Electrical Engineering and Creative IT Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Gyeongsangbuk-do, Republic of Korea Donghyun Kim School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea Jeesu Kim Departments of Electrical Engineering and Creative IT Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Gyeongsangbuk-do, Republic of Korea Kyujung Kim Department of Cogno-Mechatronics Engineering, Pusan National University, Busan, South Korea Moonseok Kim Department of Physics, Korea University, Seoul, South Korea Michael C. Kolios Department of Physics, Ryerson University, Toronto, ON, Canada Siu-Kai Kong School of Life Sciences, The Chinese University of Hong Kong, Shatin, Hong Kong


Nikola Krstajic CMOS Sensors and Systems Group, Integrated Micro and Nano Systems, School of Engineering, University of Edinburgh, Edinburgh, UK EPSRC IRC “Hub” in Optical Molecular Sensing and Imaging, MRC Centre for Inflammation Research, Queen’s Medical Research Institute, University of Edinburgh, Edinburgh, UK Olga Latinovic Institute of Human Virology, University of Maryland School of Medicine, Baltimore, MD, USA Changho Lee Departments of Electrical Engineering and Creative IT Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Gyeongsangbuk-do, Republic of Korea Seunghun Lee Department of Cogno-Mechatronics Engineering, Pusan National University, Busan, South Korea Wonju Lee School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea James A. Levitt Department of Physics, King’s College London, London, UK Bei-Bei Li State Key Laboratory for Mesoscopic Physics, Department of Physics, Peking University, Beijing, People’s Republic of China Rongrong Liu Med-X Research Institute and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China Yuxiang Liu Department of Mechanical Engineering, Worcester Polytechnic Institute, Worcester, MA, USA Pui-Chi Lo Department of Chemistry, The Chinese University of Hong Kong, Hong Kong, China Yu-Hwa Lo Materials Science and Engineering Program, University of California, San Diego, La Jolla, CA, USA Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA, USA Changjun Min Nanophotonics Research Centre and Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, College of Optoelectronic Engineering, Shenzhen University, Shenzhen, China Dennis K. P. Ng Department of Chemistry, The Chinese University of Hong Kong, Hong Kong, China Takayuki Nishizaka Department of Physics, Gakushuin University, Tokyo, Japan Vasilis Ntziachristos Institute for Biological and Medical Imaging, Helmholtz Zentrum München, Neuherberg, Germany Chair for Biological Imaging, Technische Universität München, München, Germany


Youngjin Oh School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea Malini Olivo Bio-Optical Imaging Group, Singapore Bioimaging Consortium (SBIC), A*STAR, Singapore, Singapore Bio-photonics Group, School of Physics, National University of Ireland, Galway, Ireland H. Daniel Ou-Yang Bioengineering Program, Lehigh University, Bethlehem, PA, USA Department of Physics, Lehigh University, Bethlehem, PA, USA Sungjo Park Departments of Electrical Engineering and Creative IT Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Gyeongsangbuk-do, Republic of Korea Suejit Pechprasarn Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong Simon P. Poland Division of Cancer Research and Randall Division of Cell and Molecular Biophysics, Guy’s Campus, King’s College London, London, UK Richard Dimbleby Department of Cancer Research, Division of Cancer Studies, New Hunt’s House, King’s College London, London, UK Jianan Y. Qu Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, Hong Kong SAR, People’s Republic of China Daniel J. Rohrbach Biomedical, Industrial and Human Factors Engineering, Wright State University, Dayton, OH, USA Indrajit Roy Department of Chemistry, University of Delhi, Delhi, India Dmitri A. Rusakov Institute of Neurology, University College London, London, UK Suho Ryu Department of Mechanical Engineering, Yonsei University, Seodaemun-gu, Seoul, South Korea Lei Shao Department of Physics, The Chinese University of Hong Kong, Shatin, Hong Kong SAR, China Alana Mauluidy Soehartono School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore Michael G. Somekh Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong Hyerin Song Department of Cogno-Mechatronics Engineering, Pusan National University, Busan, South Korea


Peiyi Song School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore Eric M. Strohm Department of Physics, Ryerson University, Toronto, ON, Canada Klaus Suhling Department of Physics, King’s College London, London, UK Qiqi Sun Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, Hong Kong SAR, People’s Republic of China Ulas Sunar Biomedical Engineering, University at Buffalo, Buffalo, NY, USA Biomedical, Industrial and Human Factors Engineering, Wright State University, Dayton, OH, USA Yuanzhen Suo Med-X Research Institute and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China Adrian Taruttis Institute for Biological and Medical Imaging, Helmholtz Zentrum München, Neuherberg, Germany Carolyn Tregidgo Department of Physics, King’s College London, London, UK Jianfang Wang Department of Physics, The Chinese University of Hong Kong, Shatin, Hong Kong SAR, China Yucheng Wang School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore Stephen Weber Centre for Micro-Photonics, Faculty of Engineering and Industrial Sciences, Swinburne University of Technology, Melbourne, VIC, Australia Dan Wei Med-X Research Institute and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China Ming-Tzo Wei Bioengineering Program, Lehigh University, Bethlehem, PA, USA Xun-Bin Wei Med-X Research Institute and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China Institutes of Biomedical Sciences, Fudan University, Shanghai, China Department of Chemistry, Fudan University, Shanghai, China Xiao-Fu Weng Med-X Research Institute and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China Amanda J. Wright Institute of Biophysics, Imaging and Optical Science (IBIOS), University of Nottingham, Nottingham, UK Shu-Yuen Wu Department of Electronic Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong Tsung-Feng Wu Materials Science and Engineering Program, University of California, San Diego, La Jolla, CA, USA


Yun-Feng Xiao State Key Laboratory for Mesoscopic Physics, Department of Physics, Peking University, Beijing, People’s Republic of China Guang Yang School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore Hui Kit Stephanie Yap School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore Hing-Yuen Yeung School of Life Sciences, The Chinese University of Hong Kong, Hong Kong, China Derrick Yong Precision Measurements Group, Singapore Institute of Manufacturing Technology, Singapore, Singapore Ken-Tye Yong School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore Jonghee Yoon Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon, South Korea KAIST Institute for Optical Science and Technology, Korea Advanced Institute of Science and Technology, Daejeon, South Korea Miao Yu Department of Mechanical Engineering, University of Maryland, College Park, MD, USA Xia Yu Precision Measurements Group, Singapore Institute of Manufacturing Technology, Singapore, Singapore Xiao-Chong Yu State Key Laboratory for Mesoscopic Physics, Department of Physics, Peking University, Beijing, People’s Republic of China Xiaocong Yuan Nanophotonics Research Centre and Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, College of Optoelectronic Engineering, Shenzhen University, Shenzhen, China Shuwen Zeng School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore Yan Zeng Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, Hong Kong SAR, People’s Republic of China Butian Zhang School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore Yating Zhang Precision Measurements Group, Singapore Institute of Manufacturing Technology, Singapore, Singapore
Kaiyu Zheng Institute of Neurology, University College London, London, UK

Part I Biophotonic Sensing Techniques and Applications

1 In Vivo Flow Cytometry Combined with Confocal Microscopy to Study Cancer Metastasis

Xun-Bin Wei, Zhi-Chao Fan, Dan Wei, Rongrong Liu, Yuanzhen Suo, and Xiao-Fu Weng

Contents
Introduction ..... 4
History of In Vivo Flow Cytometry (IVFC) ..... 4
Types of IVFC ..... 6
Fluorescence-Based IVFC ..... 8
Basic Principle ..... 8
Experimental Setup of a Two-Color Two-Channel IVFC ..... 10
Data Processing and Analysis ..... 12
Photoacoustic Flow Cytometry (PAFC) ..... 15
Basic Principle ..... 15
In Vivo PAFC Setup ..... 15
Biomedical Applications of PAFC ..... 18

X.-B. Wei (*) Med-X Research Institute and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China Institutes of Biomedical Sciences, Fudan University, Shanghai, China Department of Chemistry, Fudan University, Shanghai, China e-mail: [email protected] Z.-C. Fan Institutes of Biomedical Sciences, Fudan University, Shanghai, China Department of Chemistry, Fudan University, Shanghai, China e-mail: [email protected] D. Wei • R. Liu • Y. Suo • X.-F. Weng Med-X Research Institute and School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China e-mail: [email protected]; [email protected]; [email protected]; [email protected] # Springer Science+Business Media Dordrecht 2017 A.H.-P. Ho et al. (eds.), Handbook of Photonics for Biomedical Engineering, DOI 10.1007/978-94-007-5052-4_17


IVFC in Studying Cancer Metastasis ..... 20
Circulating Tumor Cells: Key of Cancer Metastasis ..... 20
Real-Time Detection of Cancer Hematogenous Metastasis by IVFC ..... 21
Intravital Confocal Microscopy: The Complementary Tool for IVFC in Cancer Study ..... 24
References ..... 25

Abstract

The quantification of circulating tumor cells (CTCs) is an emerging tool for diagnosing and monitoring patients with cancer metastasis. A number of methods have been developed to detect CTCs; however, conventional methods are limited by their invasiveness, low sensitivity, and inability to monitor CTCs continuously. A novel technique named in vivo flow cytometry (IVFC) can overcome these limitations. A number of outstanding IVFC studies have been published on cancers including leukemia, liver cancer, and melanoma. Nevertheless, numerous open questions about cancer metastasis remain that could be investigated by IVFC combined with confocal microscopy.

Keywords

In vivo flow cytometry • Cancer metastasis • Circulating tumor cells • Confocal microscopy • Fluorescence detection

Introduction

History of In Vivo Flow Cytometry (IVFC)

Flow cytometry (FCM) is a very powerful technique for quantitative analysis employed in biomedical research. Since the first flow cytometry device was invented in 1953, the flow cytometer has been a standard tool for cell detection, identification, quantification, and analysis. In a typical FCM experiment, cells are extracted from a living subject, labeled with fluorescent tracer(s), and focused into single file to form a high-speed single-cell flow. When an individual cell passes through the focused laser beam, the excited fluorescence, forward-scattered light, and side-scattered light are detected by photomultiplier tubes (PMTs), so that diverse parameters can be quantified and analyzed (Fig. 1). Because conventional FCM is carried out outside the living subject, it is also called ex vivo FCM [1]. However, once cells leave their original environment, their properties may change substantially, making experimental results less convincing. The ex vivo measurement process therefore imposes several limitations on ex vivo FCM:

1. After cells are extracted from a living subject, their physical and chemical properties may alter, including morphology, protein synthesis, and marker expression. As a result, ex vivo FCM measurements can deviate from the true in vivo state, which is arguably the most serious shortcoming of ex vivo FCM.



Fig. 1 Schematic diagram of a standard ex vivo flow cytometer

2. Some cells are rare in the blood and lymphatic circulatory systems. For example, if there is only one circulating tumor cell (CTC) per 1 mL of blood, it is likely to be missed by the ex vivo FCM method [2].
3. Experimental animals have a limited blood volume. A 20 g nude mouse has no more than 3 mL of blood in total, yet a typical CTC detection or blood component analysis experiment requires up to 1 mL of blood. Thus, ex vivo FCM cannot be used for continuous, real-time, long-term measurements.
4. The conventional FCM preparation procedure usually takes several hours or even days. Compared with the relatively short detection time (about 5–10 min), the preparation can be very time-consuming.

To overcome these shortcomings of ex vivo FCM, innovative FCM instruments have been developed. In vivo flow cytometry (IVFC) emerged in 2004 [3]. IVFC operates on a principle similar to that of ex vivo FCM, except that a natural blood vessel in a living subject replaces the sheath flow of the ex vivo flow cytometer. A signal (fluorescence, ultrasound, Raman, etc.) from a cell population of interest is recorded as the cells pass through a slit-shaped laser beam focused across a blood vessel, which allows continuous monitoring of circulating cells in vivo, usually in superficial tissue layers such as the skin. Details of the various kinds of in vivo flow cytometers are introduced in the next sections.

The first in vivo flow cytometer was developed by Lin's group at Massachusetts General Hospital, Harvard Medical School, in 2004 [3]; it was based on the detection of fluorescence-labeled circulating cells. Zharov's group at the University of Arkansas built their in vivo flow cytometer based on photothermal (PT) and photoacoustic (PA) methods [4]. Two-color and multicolor fluorescence-based in vivo flow cytometers were developed subsequently [5]. Other research groups have also contributed to the development of fluorescence-based in vivo flow cytometers,


such as Georgakoudi's group at Tufts University and Wei's group at Shanghai Jiao Tong University [6, 7]. In the meantime, Zharov's group combined their PA- and PT-based in vivo flow cytometer with high-speed optical imaging and Raman spectroscopy [8]. In 2007, Low's group at Purdue University developed multiphoton intravital flow cytometry [9]. Qu's group at Hong Kong University of Science and Technology reported label-free in vivo flow cytometry using two-photon autofluorescence imaging in 2012 [10].

The development of the in vivo flow cytometer has led to a number of novel biomedical applications. Lin's group combined their in vivo flow cytometer with near-infrared real-time confocal microscopy to monitor specialized bone marrow endothelial microdomains for tumor engraftment, and found that the CXCR4 molecule plays a key role in leukemia metastasis within the bone marrow microenvironment [11]. Wei's group applied their in vivo flow cytometer to the study of liver cancer metastasis [7, 12, 13]. This was the first demonstration of a positive correlation between the number of circulating tumor cells (CTCs) and tumor size; moreover, they showed that tumor resection can reduce the number of CTCs and consequently tumor metastasis. This may offer a breakthrough technology for elucidating mechanisms of hematogenous metastasis and for monitoring the efficacy of cancer therapy. Zharov's group mainly contributed to PA- and PT-based IVFC, applied to the study of circulating metastatic melanoma cells and of carbon nanotube kinetics in the blood, lymph, and tissue [14, 15]. In summary, IVFC provides a novel technique for biomedical research on, for example, mechanisms of metastasis, sickle cell crisis, and early disease detection.

Types of IVFC

In conventional ex vivo FCM, cells are usually labeled and excited by a focused laser beam, and the fluorescence and scattered light are picked up by detectors. In IVFC, not only fluorescence and scattered light but also photoacoustic, photothermal, Raman, and autofluorescence signals can be utilized for detection. From 2004 to the present, two main types of IVFC have been developed: one based on fluorescence and the other based on PA (and PT) methods.

Fluorescence-Based IVFC

This type of IVFC usually requires labeling of the targeted cells, which can be achieved as follows [16]:

1. If the cells of interest carry specific markers, they can be labeled directly by injecting fluorescently conjugated antibodies or antibody fragments into the circulatory system in vivo. This method may be used in human clinical research if such tracers have already been approved for use.
2. Given that not all cells express specific markers, another practical way is to isolate and purify specific cell populations from donor animals ex vivo and label them with a fluorescent dye, such as DiD, DiO, or ICG. Then the labeled cells


are transferred into the recipient animal's circulatory system. The donor cells can come from a human body or from experimental animals.
3. Specific cell types in transgenic animals can express fluorescent proteins (FPs) such as green fluorescent protein (GFP). Cultured cells expressing FPs can also be injected into recipient animals, yielding the same fluorescence as in transgenic animals.

Fluorescence-based IVFC can be single-color, two-color, or multicolor. Microscopy-based flow cytometry and multiphoton intravital flow cytometry share similar principles and instrumentation. Here, we take one-color, one-slit confocal IVFC as an example to demonstrate how IVFC works. In a typical experiment, the animal is anesthetized and placed on the stage with its ear adhered to the coverslip by glycerine. Green-light illumination typically yields good-contrast bright-field images on the CCD, with blood vessels standing out because of the strong absorption of blood in this wavelength range. A blood vessel of appropriate diameter (~50 μm) is selected and positioned at the focused laser beam using a three-dimensional adjustment stage. The laser beam (e.g., from a He–Ne laser) is shaped into a slit by a cylindrical lens and a mechanical slit. When a cell passes through the laser slit, its fluorescent markers are excited and emit fluorescence. The fluorescence is collected by the microscope objective and directed to a PMT detection stage, where the fluorescent signals are converted into electronic signals. A data acquisition card and a personal computer (PC) complete the subsequent processing and analysis (Fig. 2). The two-color or multicolor in vivo flow cytometer has a more complicated structure and can characterize multiple fluorescently labeled cell populations. The multiphoton fluorescence method can increase the detection depth compared with single-photon-based IVFC. The use of optical fibers can make IVFC portable, which has potential for probing blood vessels inside a living animal in a minimally invasive way [17].
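To get a feel for the time scales involved, the duration of a fluorescence burst can be estimated from the slit geometry and the flow speed. The sketch below uses illustrative, assumed numbers (the cell diameter and flow velocity are not specified in this chapter; the ~5 μm slit width matches the two-color setup described later):

```python
# Illustrative estimate of a fluorescence burst (peak) duration in IVFC.
# A cell emits while any part of it overlaps the slit, so the effective
# path length is slit width + cell diameter.  All numbers are assumptions.
slit_width_um = 5.0          # slit image width at the sample (~5 um)
cell_diameter_um = 15.0      # typical tumor-cell diameter -- assumed
flow_velocity_um_s = 5000.0  # ~5 mm/s flow in a small ear vessel -- assumed

burst_duration_ms = (slit_width_um + cell_diameter_um) / flow_velocity_um_s * 1e3
print(f"expected peak width: {burst_duration_ms:.1f} ms")
```

Under these assumptions a burst lasts a few milliseconds, which sets the bandwidth the detection electronics must resolve.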

PT- and PA-Based IVFC

These types of IVFC are based on the photothermal and photoacoustic effects. When cells absorb the energy of a laser pulse, the local temperature rises rapidly, which results in thermoelastic expansion, and ultrasonic signals are emitted. In the PT- and PA-based in vivo flow cytometer, the laser beam is shaped into a slit and focused onto an appropriate blood vessel. When a targeted cell passes through the slit, acoustic signals are emitted and then detected by acoustic sensors. This method has the advantage of monitoring deep blood vessels up to 1–3 mm below the surface, compared with a depth limitation of about 500 μm for fluorescence-based IVFC. However, some cells may produce no significant PT or PA signals.

Raman-Based IVFC

The Raman spectroscopic technique is a typical chemical measurement method, which was introduced to IVFC by Zharov's group [18]. In in vivo Raman flow cytometry, scattering signals from Raman-active vibrational states are detected by a high-resolution spectral acquisition instrument. The signals can come from intrinsic contrast agents such as lipids with CH2 groups or from labeling markers such as nanoparticles.


Fig. 2 (a) Anesthetized mouse on the stage; (b) microvessel image taken with a 10× objective; (c) three-dimensional diagram of the in vivo flow cytometer; OL objective lens, DM dichroic mirror, AL achromatic lens, CL cylindrical lens, MS mechanical slit; (d) microvessel image taken with a 40× objective, in which the short blue slit shows the location of the shaped laser beam

Image-Based IVFC

This type of IVFC has a detection principle different from those of the methods above. Whereas conventional ex vivo and in vivo flow cytometers quantify cells mainly from emitted signals, image-based in vivo flow cytometry relies on image information and therefore places higher demands on image processing tools. Another typical feature is that the instrument must acquire images at a high frame rate to ensure that no useful information is missed. Zharov's high-speed transmittance digital microscopy (TDM) and Qu's label-free in vivo flow cytometer are based on this method.

Fluorescence-Based IVFC

Basic Principle

The underlying principle of fluorescence-based IVFC is confocal excitation and detection of fluorescently labeled cells in circulation, following the concept of the conventional flow cytometer [3]. The blood circulation of an experimental animal is a natural


Fig. 3 Vasculature of a mouse ear. The blood vessels appear relatively dark owing to light absorption by hemoglobin. The smaller vessel is an artery and the larger one is a vein. The red slit is the laser beam, which is positioned across the artery

Fig. 4 Cell detection mechanism of IVFC

fluidic system that serves a function similar to that of the sheath flow system in a conventional flow cytometer [1]. Typically, the experimental mouse is anesthetized and placed on the sample stage. One ear of the mouse is gently adhered to the microscope slide with glycerol, as depicted in Fig. 3. A slit-shaped laser beam is positioned across a blood vessel of the mouse ear so that the long dimension of the beam covers the whole width of the vessel. The cells flowing in this vessel are interrogated by the laser beam. Figure 4 depicts the mechanism for detecting fluorescent cells in circulation. When fluorescently labeled cells in the blood vessel flow through this slit, they are normally excited one by one, yielding a burst of fluorescence for each cell. The emitted fluorescence is detected by a photomultiplier tube after appropriate spectral and spatial filtering. The fluorescence signal is then digitized and recorded in the computer. After data processing and analysis, the number of fluorescently labeled


cells detected within a specific time period can be obtained, as well as the fluorescence intensity and time duration for each fluorescence burst.

Experimental Setup of a Two-Color Two-Channel IVFC

Figure 5 shows the schematic of a typical two-color two-channel in vivo flow cytometer [7, 13]. Generally speaking, fluorescence-based IVFC comprises three subsystems, an optical subsystem, an electronic subsystem, and a software subsystem [6], as described below.

The optical subsystem is responsible for guiding the position of the laser beam and for the excitation and collection of fluorescence. It is further divided into three parts: a transillumination part, a laser excitation part, and a fluorescence detection part.

The illumination part provides live imaging of blood vessels and surrounding tissues, which is used to guide the laser beam to the blood vessels of interest. It comprises an LED light source (LED), an objective lens (objective), a beam splitter (BS3), a filter (F1), an achromatic lens (AL2), and a CCD camera (CCD). The wavelength of the LED light ranges between 520 nm and 550 nm, with a central wavelength of 530 nm, which provides good contrast between blood vessels and surrounding tissue owing to the high absorption of light at these wavelengths by hemoglobin. The LED light source is placed above the sample stage. The green light transmitted through the sample (typically the ear of a live mouse) is collected by the objective lens. Light from the front focal plane of the objective becomes parallel after passing through the objective and is then directed to the CCD camera by BS3, a beam splitter with a transmission rate of 75%; therefore, 25% of the transilluminated light is reflected toward the CCD camera for imaging. Before the CCD camera there are two additional optical components: the filter F1, which blocks light whose wavelength does not lie between 520 nm and 550 nm, and the achromatic lens AL2, which refocuses the parallel transilluminated light onto the CCD camera. Thus, the image captured by the CCD camera depicts the structural information of the sample at the focal plane of the objective. The CCD is connected to a computer for navigation among blood vessels in the biological tissue.

Fig. 5 Schematic of a two-color two-channel IVFC

The laser excitation part consists of two diode lasers (488 nm and 635 nm), two plane mirrors (M1 and M2), three beam splitters (BS1, BS2, and BS3), a cylindrical lens (CL), a mechanical slit (MS), an achromatic lens (AL1), a number of pinholes, an objective lens (objective), and a sample stage. A diode laser is relatively small compared with other laser sources, such as solid-state lasers, which makes the whole system compact and the miniaturization of IVFC a reality. The two laser beams emitted by the diode lasers are combined into a single beam by the dichroic beam splitter BS1, a low-pass beam splitter that transmits the 488 nm beam while reflecting 95% of the 635 nm beam. The combined beam is then focused by the cylindrical lens onto its focal plane 150 mm away, where the beam has the shape of a vertical slit. A mechanical slit with a width of 200 μm is positioned there to spatially filter the laser beam. The slit-shaped beam is then collimated by the achromatic lens AL1, placed 150 mm from the mechanical slit; because the focal length of AL1 is exactly 150 mm, the beam is focused onto the back focal plane of the objective lens. The laser beam is then transmitted through the objective lens and refocused into a horizontal slit-shaped beam at the front focal plane of the objective. The approximate length and width of the slit-shaped laser beam at the sample are 30 μm and 5 μm, respectively.

The fluorescence detection part collects the fluorescence and performs the photoelectric conversion. It mainly consists of an objective lens (objective), three beam splitters (BS2, BS3, and BS4), a mirror (M3), two filters (F2 and F3), two achromatic lenses (AL3 and AL4), two mechanical slits (MS), and two photomultiplier tubes (PMTs). The fluorescence emitted from the sample is collected by the objective lens and collimated into a parallel beam. This beam passes through BS3 and reaches BS2, which guides the fluorescence toward BS4, a high-pass beam splitter. BS4 reflects green fluorescence toward F2 and transmits red fluorescence to M3. The passband of filter F2 is 500–520 nm, while that of filter F3 is 650–680 nm; these filters block unwanted light, such as light from the LED and the laser sources. The achromatic lenses AL3 and AL4 refocus the fluorescence beams onto their respective focal planes, where each beam again has the shape of a vertical slit. Two mechanical slits are placed there to eliminate fluorescence not emitted from the focal plane of the objective lens; their function in the IVFC system is similar to that of the pinholes in a confocal microscope. This design of the whole optical system makes confocal excitation and detection a reality, which improves the fluorescence signal and reduces the background noise.
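The demagnification of the mechanical slit described above can be checked with a quick thin-lens calculation. The objective focal length used below is an assumption (a 40× objective on a 160 mm tube-length standard), not a value given in the text:

```python
# Thin-lens sketch of the slit demagnification: the 200-um mechanical slit is
# relayed by AL1 (f = 150 mm, from the text) and the objective onto the sample,
# with transverse magnification f_objective / f_AL1.
# The objective focal length is an assumption (40x objective, 160-mm tube standard).
f_AL1_mm = 150.0
f_obj_mm = 4.0
slit_width_um = 200.0

image_width_um = slit_width_um * f_obj_mm / f_AL1_mm
print(f"slit image width at the sample: {image_width_um:.2f} um")
```

The result (~5.3 μm) is consistent with the ~5 μm slit width quoted in the text, which supports the assumed objective focal length.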


The electronic subsystem converts the optical signal into a digital signal. It consists of two photomultiplier tubes, a current amplifier, and an analog-to-digital converter (A/D converter). The PMTs perform the photoelectric conversion, turning the fluorescence photons into a current that can be received and processed by electronic devices. Because the amplitude of the current generated by the PMTs is too low to serve directly as input to the A/D converter, a current amplifier is placed between them. The digital signal produced by the A/D converter is then transmitted to and recorded on the PC through a USB data interface.

Data Processing and Analysis

The recorded and digitized fluorescence signal is processed offline with homemade software developed in MATLAB. Figure 6a shows a typical data trace acquired at the mouse ear artery using IVFC. The data trace contains mainly two kinds of signal component (Fig. 6b). One is the flat low-intensity background, called the baseline; the other is the abrupt high-intensity pulses, called peaks. The baseline level is associated with background noise caused by autofluorescence. Each peak represents one fluorescently labeled cell that is excited as it flows through the slit-shaped laser beam. A peak has two major parameters: the peak height, i.e., its maximum intensity, and the peak width, i.e., the time span of the corresponding peak.

Fig. 6 (a) Typical data trace acquired by IVFC at the mouse ear artery. (b) Two signal components for IVFC data: baseline and peak. The peak has two parameters: peak height and peak width. AU denotes arbitrary unit

Figure 7 depicts the general procedure for processing and analyzing IVFC data. The raw data often contain noise caused by electronic devices, such as power-line noise, thermal noise, or static-electricity pulse noise. Therefore, the first step is to reduce the noise. A number of approaches can achieve this goal: for instance, a moving-average algorithm can be deployed to smooth the data and reduce the noise level, while for more complicated noise environments more advanced denoising approaches are used, the Butterworth filter and wavelet denoising being two common choices. The second step is to identify the pulse signals and list them as peak candidates. The pulse-identifying algorithm is crucial and largely determines the accuracy and efficiency of IVFC signal analysis. The third step is to eliminate false-positive peaks. For example, static-electricity pulse noise with intensity beyond the gating threshold is often recognized as a peak candidate; additional criteria, such as peak width, are then used to exclude such false positives, since the peak of a fluorescently labeled cell is much wider than a static-electricity noise spike. Finally, the remaining peak candidates are listed as real positive peak signals, and their statistics, such as the number of peaks per minute, are calculated and recorded in the output file together with the peak height and width information.

Fig. 7 General procedure of data processing and analysis for IVFC

One peak-identifying algorithm is the "line gating" method, which needs negative control data as the criterion for setting the gating threshold [7]. Briefly, it works as follows. First, the negative control data are analyzed and all peak candidates are extracted. The peak height and peak width of these candidates are then used to generate a scatter plot. Since the candidates from the negative control data are all false-positive signals caused by autofluorescence or electronic noise, a line is drawn manually so that all of them fall below this line; this line is then treated as the gating threshold.
For subsequent experimental data, once the dot in the peak height and width scatter plot falls above the gating line, this peak candidate is regarded as a real positive peak signal (Fig. 8).
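A minimal sketch of the denoising and line-gating steps described above might look as follows (the smoothing window, gating slope, and intercept are illustrative assumptions; in practice the line is drawn by hand from the negative-control scatter plot):

```python
import numpy as np

def moving_average(trace, window=5):
    """Smooth a raw IVFC trace with a simple moving average (one denoising option)."""
    kernel = np.ones(window) / window
    return np.convolve(trace, kernel, mode="same")

def line_gate(peaks, slope, intercept):
    """Keep (width, height) peak candidates lying above the gating line.

    slope and intercept define the line height = slope * width + intercept,
    chosen so that all peaks found in the negative-control data fall below it.
    """
    return [(w, h) for (w, h) in peaks if h > slope * w + intercept]

# Hypothetical peak candidates as (peak width [ms], peak height [AU]) pairs.
candidates = [(0.5, 0.8), (4.0, 3.5), (6.0, 5.0), (1.0, 1.1)]
accepted = line_gate(candidates, slope=0.3, intercept=1.0)
print(accepted)
```

Here the two wide, tall candidates pass the gate, while the narrow, dim ones fall below the line and are rejected as false positives.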


Fig. 8 Illustration of line gating algorithm. (a) Blue dots are peaks detected in the negative control data, which are in fact false-positive peaks. Therefore, a red line is drawn manually above these dots as the gating threshold. (b) Peaks detected in the subsequent experimental data are separated into two categories by the red gating line: blue dots are false-positive peaks with the red dots being real positive peaks. AU denotes arbitrary unit

Another peak-identifying algorithm is the "adaptive threshold gating" method [7, 19]. Since IVFC deploys a confocal excitation and detection strategy, the signal-to-noise ratio is generally high enough to identify the real positive peak signals even without the help of control data. Furthermore, during detection the baseline intensity level fluctuates with time and with the subtle motion of the anesthetized animal, which reduces the effectiveness of a manually determined global gating threshold. The former "line gating" method is therefore not suitable for analyzing such unsteady IVFC data, and the adaptive threshold algorithm is deployed to solve this problem. In this algorithm, although the whole data trace fluctuates over a long time span, a segment of the trace over a relatively short time span can be regarded as steady. The long data trace is therefore first segmented into smaller parts, and then a local threshold is determined for each segment. Since the baseline resembles Gaussian white noise, identifying the peaks can be regarded as seeking outliers in Gaussian-distributed data. The threshold is calculated according to the formula

threshold = median + multiplier × MAD / 0.6745,

where "median" is the median value of the small data fragment and "MAD" is the median absolute deviation of the fragment (the factor 0.6745 scales the MAD to the standard deviation of a Gaussian baseline). The "multiplier" is a parameter that determines the strictness of the threshold; its value is generally 7. If the intensity of a pulse surpasses the threshold, it is regarded as a peak candidate. Afterward, every peak candidate is interrogated in order to eliminate false positives whose peak width is too narrow. This approach needs no manual input to determine the gating threshold, which makes batch processing practical; this is crucial for analyzing large amounts of long-duration IVFC data.
In addition, its adaptiveness in calculating the local threshold makes it a more effective and reliable method for identifying real peak signals.
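The adaptive-threshold procedure above can be sketched as follows. This is a sketch under stated assumptions: "mad" is read as the median absolute deviation (which is what the 0.6745 consistency constant implies), and the segment length and minimum-width criterion are illustrative choices:

```python
import numpy as np

def adaptive_threshold_peaks(trace, segment_len=1000, multiplier=7.0, min_width=3):
    """Segment-wise robust peak detection, following the formula in the text.

    Per segment: threshold = median + multiplier * MAD / 0.6745, where MAD is
    taken here as the median absolute deviation (the 0.6745 factor scales MAD
    to the standard deviation of a Gaussian baseline).  Candidates narrower
    than min_width samples are discarded as spike noise.
    """
    peaks = []
    for start in range(0, len(trace), segment_len):
        seg = np.asarray(trace[start:start + segment_len], dtype=float)
        med = np.median(seg)
        mad = np.median(np.abs(seg - med))
        thr = med + multiplier * mad / 0.6745
        above = seg > thr
        i = 0
        while i < len(above):
            if above[i]:
                j = i
                while j < len(above) and above[j]:
                    j += 1
                if j - i >= min_width:          # width criterion rejects spikes
                    peaks.append((start + i, start + j))
                i = j
            else:
                i += 1
    return peaks

# Synthetic trace: Gaussian baseline, one 10-sample cell peak, one 1-sample spike.
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.1, 2000)
trace[500:510] += 3.0   # fluorescently labeled cell
trace[1200] += 5.0      # static-electricity spike (too narrow, rejected)
print(adaptive_threshold_peaks(trace))
```

On this synthetic trace the broad cell peak is detected while the single-sample spike is rejected by the width criterion, mirroring the false-positive elimination step described above.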


Photoacoustic Flow Cytometry (PAFC)

Basic Principle

Photoacoustic flow cytometry (PAFC) works at the cellular/molecular level and focuses on the detection and characterization of single cells based on the photoacoustic effect in biological tissues, combining the merits of both light and ultrasound. The optical properties of biological tissues in the visible (400–700 nm) and near-infrared (NIR, 700–1100 nm) regions of the electromagnetic (EM) spectrum are related to the molecular constituents of the tissues and their electronic and/or vibrational structures. Electromagnetic energy from the visible to the NIR is often utilized for photoacoustic (PA) excitation in soft tissues: EM waves in these regions are nonionizing and safe for human use, and they provide high contrast and adequate penetration depths in biological tissues. PAFC uses a short-pulsed laser source to excite the biological tissue with a low fluence of EM radiation [25]. A transient sound or stress wave is produced in this process because of thermoelastic expansion induced by a slight temperature rise, which in turn results from energy deposition inside the tissue through absorption of the incident EM energy. The photoacoustic signals are then detected by ultrasonic transducers and recorded by a DAQ card and a computer. With its ability to quantitatively assess single moving cells in vivo, PAFC holds promise for early diagnosis of many diseases, including cancer, diabetes, and cardiac disease, and for studying the influence of factors such as drugs, smoking, and radiation on individual cells [15, 20, 21]. The underlying principle of PAFC is illustrated in Fig. 9: when individual cells in blood or lymph flow are excited by one or several focused laser beams operating at different wavelengths, they generate photoacoustic (PA) signals that can be recorded by an ultrasonic transducer attached to the sample surface.
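As a rough illustration of the time scales involved, the following sketch evaluates the acoustic transit time across a cell and the minimum laser repetition rate so that no cell is missed, using the relations quoted later in this section; the cell size and flow velocity below are assumed values:

```python
# Illustrative PAFC time-scale estimates; cell size and flow velocity are
# assumed values, while the relations follow those quoted later in this section:
#   acoustic confinement: T_p <= tau_A = D / c_s
#   detection of every cell: f >= V_F / (2 * R_CTC)
c_s = 1.5e3        # speed of sound in water, m/s (from the text)
D = 15e-6          # target (cell) diameter, m -- assumed
V_F = 5e-3         # blood flow velocity, m/s -- assumed
R_CTC = 7.5e-6     # CTC radius, m -- assumed

tau_A = D / c_s              # acoustic transit time across the cell
f_min = V_F / (2 * R_CTC)    # minimum pulse repetition rate

print(f"laser pulse duration should satisfy T_p <= {tau_A * 1e9:.0f} ns")
print(f"minimum pulse repetition rate: {f_min:.0f} Hz")
```

Under these assumptions, nanosecond pulses comfortably satisfy the acoustic confinement condition, and a repetition rate of a few hundred hertz suffices even at faster flows.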

In Vivo PAFC Setup

In PAFC, time-dependent opto-physical PA phenomena are generated in a cell through interaction with pulsed or intensity-modulated laser radiation at different wavelengths (multicolor flow cytometry). Blood or lymph vessels containing circulating targets are illuminated by an LED and imaged by a CCD camera. Cells in the blood or lymph flow are irradiated by a focused laser beam, while the laser-induced PA waves are detected by an ultrasound transducer attached to the living sample (e.g., the skin; Fig. 9) [22]. Figure 10 shows the general framework of PAFC: the red line indicates the pulsed laser, which reaches circulating targets in the blood vessels of a living animal on the sample stage after passing through a series of optical lenses and components. PAFC has great potential for detecting disease-associated biomarkers in blood and lymph flow. Realizing this potential requires fast-detection schemes based on a high


X.-B. Wei et al.


Fig. 9 An ultrasonic transducer on the sample surface detects acoustic waves generated by a laser beam interacting with a circulating cell


Fig. 10 Framework of PAFC includes three parts. (1) Illumination part: LED light, a mirror, focus lens, filter, and CCD. (2) Excitation part: pulsed laser, focus lens, a mechanical slit, an achromatic lens, and an objective. (3) Detection part: ultrasonic transducer, preamplifier, DAQ card, and computer


In Vivo Flow Cytometry Combined with Confocal Microscopy to Study. . .



Fig. 11 PAFC detection of relatively rare cells with a linear-shaped laser beam overlapping the selected blood vessel. The laser beam is drawn as a pink solid shape and the blood vessel as a red cylinder

pulse repetition rate (PRR) laser. A short time interval between sequential laser pulses guarantees that, even at very high flow speeds of a few meters per second, no cells pass through the detection volume between laser pulses [23]. A high pulse rate can also increase SNR by acquiring several PA signals from the same circulating cell, with subsequent time- or frequency-domain averaging of the acquired PA signals. The maximal increase in SNR is proportional to √N, where N is the number of PA signals from the same cell [23]. To detect each target cell, the laser pulse repetition rate f should satisfy f ≥ VF/(2RCTC), where VF is the flow velocity and RCTC is the radius of the CTCs. The overlapping of acoustic waves limits the maximum repetition rate of a pulsed laser that still generates resolvable PA signals: in the presence of acoustic reflections, the recorded acoustic response is a train of oscillations whose duration τTR is determined by the transducer and setup parameters. In PAFC experiments, background PA signals from light absorption of the skin and blood can hide the PA signals from circulating cells or nanoparticles. Most PAFC research uses lasers operating at wavelengths from 650 to 900 nm, where background PA signals are mostly associated with light absorption of water and oxyhemoglobin [17, 24]. An efficient PA effect is observed when the acoustic confinement condition Tp ≤ τA = D/cs is fulfilled, where Tp is the laser pulse width, τA is the travel time of an acoustic wave through a target of diameter D, and cs is the speed of sound in the medium (for water cs ≈ 1.5 × 10³ m/s) [25]. PAFC focuses on the detection of relatively rare cells with a linear-shaped laser beam overlapping the selected blood vessel (Fig. 11). Most PAFC studies were performed on blood vessels 50–300 μm in diameter with cell flow rates in the range of 10⁴–10⁶ cells/s. The typical detection positions in the animal are either the mouse/rat

Fig. 12 The time correspondence between laser pulses and PA signals: one laser pulse excites the biological tissue (circulating target cells) to generate one train of oscillations, whose time width is proportional to that of the laser pulse. A.U. denotes arbitrary units


ear or the mouse/rat abdominal mesenteric vasculature [12]. To locate blood vessels appropriate for detecting target particles, LED illumination (green line in Fig. 11) and optical microscope imaging serve the purpose of blood vessel navigation, helping to find the ideal arteries and veins of the vasculature during experiments, similar to fluorescence-based IVFC (also see Fig. 2). For detecting PA signals, most current PAFC systems use an ultrasonic transducer, a preamplifier, a boxcar, an oscilloscope/DAQ card, and a computer to record and process the PA signals. Upon irradiation by the high pulse repetition rate laser, the circulating targets (cells or particles) in the selected blood vessel generate PA signals, which are detected by a high-frequency, wideband ultrasonic transducer. Absorbed photons generate ultrasound in the 1–50 MHz range in deep biological tissues [25]. Circulating particles of smaller sizes tend to generate PA signals of higher frequency and vice versa; a transducer with a higher detection frequency therefore provides greater resolution. The preamplifier amplifies the PA signals detected by the transducer, and its bandwidth should cover the detection frequency of the transducer. The preamplifier is connected to a DAQ card, which acquires and records the PA signals through software (C/LabVIEW/MATLAB). The DAQ card is connected to a computer, which further analyzes and processes the data by programs. The time correspondence between laser pulses and PA signals is shown in Fig. 12: each laser pulse generates one train of PA oscillations.

Biomedical Applications of PAFC In comparison with conventional flow cytometry technologies, the advantages of PAFC include:


1. Noninvasive single-cell diagnosis in the host environment without the need for labeling
2. Excellent sensitivity and high spatial resolution
3. Fast imaging speed enabling detection of fast flow
4. Spectroscopic measurement capability for identifying different cells
5. Ability to distinguish various morphologic and functional states of normal and abnormal cells

Further developments of PAFC by Zharov's group integrate it with other advanced technologies, including two-beam photothermal in vivo flow cytometry [19–21], fluorescence image cytometry, light scattering and speckle flow cytometry [20], photoacoustic lymphography, absorption image cytometry, and photothermal (PT) therapy [22]. For example, to achieve high detection speed, a Yb-doped fiber laser with a pulse repetition rate of 0.5 MHz, a pulse width of 10 ns, and pulse energy up to 100 μJ was employed, which facilitated the measurement of very high linear flow velocities of up to 2.5 m/s [14, 23]. Furthermore, Zharov's group has also developed a cytometry technique based on the photothermal and photoacoustic detection of Raman-induced thermal and acoustic signals in biological samples with Raman-active vibrational modes. This cytometry, with enhanced chemical specificity and sensitivity, could contribute to basic and clinical studies of lymph and blood biochemistry, cancer, and fat distribution at the single-cell level [24]. The clinically relevant capacity of in vivo PAFC techniques was demonstrated by real-time detection in blood and lymph flows of circulating individual normal cells (e.g., erythrocytes and leukocytes) in different functional states (e.g., normal, apoptotic, or necrotic), tumor cells (melanoma, breast, and squamous tumor cells), bacteria (e.g., E. coli and S. aureus), nanoparticles (e.g., gold nanorods, carbon nanotubes, magnetic and golden carbon nanotubes), and dyes (e.g., Lymphazurin, Evans blue, and indocyanine green) [22].
The detection of individual circulating absorbing objects such as nanoparticles and other contrast agents has demonstrated the capability of PAFC for real-time monitoring of circulating targets labeled with gold nanorods (GNRs), indocyanine green (ICG), and the contrast dye Lymphazurin [21]. The capability of PAFC to continuously detect and quantify the number and flow characteristics of circulating objects in vivo can provide information about their depletion kinetics and clearance rate. The detection of label-free biological circulating objects has demonstrated the most promising capabilities of in vivo PAFC compared with conventional flow cytometry. Firstly, it can be used to monitor low-pigmented human circulating tumor cells (CTCs). Zharov's group used a high pulse repetition rate laser operating at 820 and 1064 nm wavelengths for early diagnosis of melanoma during the parallel progression of the primary tumor and CTCs, detection of cancer recurrence and residual disease, and real-time monitoring of therapy efficiency by counting CTCs before, during, and after therapeutic intervention [23]. They also addressed the sensitivity of label-free detection of melanoma CTCs and introduced in vivo CTC targeting by magnetic nanoparticles conjugated with specific antibodies and magnetic cell


enrichment [14, 26]. Secondly, the integration of label-free photoacoustic (PA) and photothermal (PT) flow cytometry enables dynamic monitoring of hemorheological parameters (RBC aggregation, deformability, shape, intracellular hemoglobin distribution, individual cell velocity, hematocrit, and likely shear rate) in vivo, which is referred to as PA and PT blood rheology [27]. Multiple dyes with distinctive absorption spectra can serve as multicolor PA contrast agents; this approach has been used to monitor the clearance of three dyes [28, 29]. Thirdly, in vivo PAFC has been used for early detection of clots of different compositions as a source of thromboembolism in ischemic stroke and myocardial infarction [30]. A low-absorbing, platelet-rich clot passing the laser-irradiated vessel volume produces a transient decrease in local absorption, resulting in an ultrasharp negative PA "hole" in the blood background. Taking advantage of this phenomenon, PAFC could define risk factors for cardiovascular diseases in real time, aid the prognosis and prevention of stroke, and use the clot count as a marker for evaluating therapeutic efficacy.
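The positive-contrast (absorbing CTC) and negative-contrast (low-absorbing clot) signatures just described lend themselves to simple threshold classification of a recorded PA trace. A hypothetical sketch; the trace values, baseline, and margin are illustrative only, not from any experiment in this chapter:

```python
def classify_pa_events(trace, baseline, margin):
    """Return the sample indices where the PA amplitude rises above
    (CTC-like peak) or dips below (clot-like 'hole') the blood
    background by more than `margin`."""
    peaks, holes = [], []
    for i, amplitude in enumerate(trace):
        if amplitude > baseline + margin:
            peaks.append(i)       # absorbing target: positive PA contrast
        elif amplitude < baseline - margin:
            holes.append(i)       # low-absorbing clot: negative PA contrast
    return peaks, holes

# Simulated trace: blood background ~1.0 a.u., one CTC transit (2.4)
# and one platelet-rich clot transit (0.3).
trace = [1.0, 1.1, 2.4, 1.0, 0.9, 0.3, 1.0, 1.1]
peaks, holes = classify_pa_events(trace, baseline=1.0, margin=0.5)
print(peaks, holes)  # -> [2] [5]
```

In practice the baseline is estimated from the fluctuating blood background rather than fixed, but the core idea is the same: both event polarities carry diagnostic information.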

IVFC in Studying Cancer Metastasis Cancer is one of the major threats to public health worldwide. It is the leading cause of death in developed countries and the second leading cause of death in developing countries [31, 32]. The prognosis of cancer has improved significantly in recent years owing to earlier diagnosis and more effective treatments. However, recurrence and metastasis are still the major obstacles to long-term survival of cancer patients. In the processes of cancer metastasis and recurrence, hematogenous spreading of circulating tumor cells (CTCs) from the primary tumor is a crucial step, which ultimately leads to the formation of overt metastases [33]. In vivo flow cytometry is a novel technique that can monitor circulating cells noninvasively, dynamically, and sensitively [3], so CTC dynamics can be monitored by IVFC. In addition, intravital confocal microscopy (ICM), which visualizes the primary tumor and metastases in vivo, can help explain CTC formation and outcome. Combined with ICM, IVFC can monitor the processes of cancer metastasis comprehensively and provide new insights in both basic and clinical studies of cancer metastasis.

Circulating Tumor Cells: Key to Cancer Metastasis The most popular theory explaining the mechanism of cancer metastasis is the "seed and soil" hypothesis, first presented by Stephen Paget in 1889 [34] (Fig. 14). In his studies, Paget observed that metastasis was not due to chance but that certain tumor cells had a specific affinity for the milieu of certain organs. He thus visualized the spreading tumor cells as "seeds" and the target organ as "soil," and concluded that metastases formed only when the seed and soil were compatible.


In 2003, Isaiah J. Fidler [35] refined this hypothesis. In Fidler's view, the "seed and soil" hypothesis consists of three principles. First, tumors (both primary tumors and metastases) consist of both tumor cells and host cells, including epithelial cells, fibroblasts, endothelial cells, and infiltrating leukocytes. Moreover, tumors are biologically heterogeneous and contain genotypically and phenotypically diverse subpopulations of tumor cells, each of which has the potential to complete some steps in the metastatic process, but not all. Second, the process of metastasis is selective for tumor cells. The successful metastatic cell (the "seed") must be proficient in all events, including invasion, embolization, survival in the circulation, arrest in a distant capillary bed, and extravasation into and multiplication within the organ parenchyma, rather than just a few. Although some steps in this process contain stochastic elements, as a whole metastasis favors the survival and growth of a few subpopulations of cells that preexist within the parent tumor. Thus, metastases can have a clonal origin, while different metastases can originate from the proliferation of different single cells. Third, and perhaps most important for the design of new cancer therapies, metastases can develop only in specific organs. The microenvironments of different organs (the "soil") are biologically unique. Endothelial cells in the vasculature of different organs express different cell surface receptors and growth factors that influence the phenotype of the metastases that develop there. In other words, the outcome of metastasis depends on multiple interactions ("cross-talk") of metastasizing cells with homeostatic mechanisms, which the tumor cells can usurp. As described above, both the seeds and the soil are important in metastasis.
CTCs, the seeds, are defined as tumor cells originating from either primary sites or metastases and circulating freely in the peripheral blood of patients and extremely rare in healthy people [36]. CTCs have long been considered a reflection of tumor aggressiveness. Tumor-induced angiogenesis occurs in step with the action of invasion, which gives rise to the possibility that highly invasive but localized tumors may unleash CTCs into peripheral circulation before any bona fide metastases are established. Highly aggressive CTCs may not only establish metastases in distant organs but also be capable of self-seeding back to their original organs [37]. Testing for CTCs has emerged as a new and promising tool for stratifying and monitoring patients with metastatic disease [38]. A number of currently available CTC detection platforms have been verified in various clinical settings, which strongly suggest that CTC detection has enormous potential to assist malignancy diagnosis, estimate prognosis, and monitor response of the anticancer therapy [39].

Real-Time Detection of Cancer Hematogenous Metastasis by IVFC IVFC can noninvasively, dynamically, and sensitively monitor various types of circulating cells, including cancer cells [6, 7, 9, 11–13], hematopoietic stem cells [40, 41], lymphocytes [42, 43], red blood cells [3, 5, 44], and apoptotic cells [45]. In the study of cancer metastasis, the acquired real-time information of CTC dynamics

Fig. 13 CTC counts measured by IVFC. CTC depletion kinetics (normalized number) and CTC dynamics (real number) of HCCLM3 cancer (a type of hepatocellular carcinoma with high lung metastasis potential) are shown, respectively. In the CTC depletion kinetics measurement by IVFC, 10⁶ cells were injected into the circulation; the curve reflects the kinetics of CTC depletion from the circulation. In the CTC dynamics measurement by IVFC, the tumor was implanted orthotopically into the liver; CTCs formed spontaneously and were detected, so the curve reflects not only the process of CTC metastasis but also the process of CTC formation. # denotes number, N. # denotes normalized number

may provide novel insights into tumor progression, metastasis processes, and responses to treatments. It can improve our knowledge of tumor metastasis and guide therapeutic schedules. The use of IVFC in studying cancer began in 2004 [6], when researchers in Lin's group first presented the depletion kinetics of injected prostate cancer cells in mice and rats and demonstrated that IVFC might become a new method for monitoring CTCs dynamically. The depletion kinetics of CTCs may reflect the processes of CTC homing to target tissues and help assess the factors affecting these processes, as Fig. 13 shows. They found that the depletion kinetics of CTCs differed among tumor cell lines and host animals, indicating that metastasis might be associated with the characteristics of both tumor cells and hosts, consistent with the "seed and soil" hypothesis described above. In 2005, Lin's group and their collaborators studied the interactions between the "seed" and the "soil" in tumor metastasis with the help of IVFC [11]. This work unveiled the molecular mechanism of CTC homing in leukemia. Using IVFC, the effect of AMD3100, a specific small-molecule inhibitor of the CXCR4 (C-X-C chemokine receptor type 4) receptor, on the depletion kinetics of CTCs was presented: AMD3100 was able to significantly restrain CTC depletion and homing to bone marrow. Combined with intravital bone marrow imaging (described in detail in the next section), they identified an important pair of molecules associated with bone marrow metastasis, CXCR4 and SDF-1 (stromal cell-derived factor 1). In this work, IVFC provided important evidence and demonstrated its importance in cancer studies. Similar methods were also used in studies of multiple myeloma [46–48], prostate cancer [7], etc. Our group focused on liver cancer studies using IVFC. We studied the depletion kinetics of intravenously injected liver cancer cells with different metastasis


potential [7, 13]. However, this tumor model with intravenous injection, as commonly used by many groups before, could not mimic tumor metastasis well. Although it reflected some circulating metastatic characteristics and therapeutic responses of cancer cells, the large number of injected CTCs does not occur under pathologic conditions [49, 50]. Thus, we built an orthotopic hepatocellular carcinoma (HCC) model and combined it with IVFC to study cancer metastasis [12]. This was the first study to monitor CTC dynamics under clinically relevant oncology conditions by IVFC. Different from the depletion kinetics of CTCs, the CTC dynamics we presented (Fig. 13) reflect not only the process of CTCs targeting host tissues but also the process of CTC formation from the primary tumor. Interestingly, we observed significant differences in CTC dynamics between the orthotopic tumor mouse model and the subcutaneous (s.c.) model. This was the first study to show the difference in hematogenous metastasis between the orthotopic and s.c. models. Our study confirmed that the local environment is essential for CTC-dependent metastasis, especially in the CTC formation process, and that the orthotopic model is better than the s.c. model for studying cancer metastasis under clinically relevant oncology conditions. We also used our model to investigate whether liver resection promotes or restricts hematogenous metastasis in advanced HCC, which has been disputed. We provided direct evidence for assessing the effectiveness of surgical resection against cancer metastasis and found that both the number of CTCs and the number of early metastases decreased significantly after tumor resection. Tumor progression and the development of distant metastases slowed markedly after resection. Importantly, CTC numbers correlated with tumor growth in the orthotopic tumor model, including the number and size of distant metastases.
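Depletion-kinetics curves such as those in Fig. 13 are often summarized by a clearance time constant or half-life. Assuming simple first-order depletion, N(t) = N0·exp(−t/τ), the time constant can be recovered from any two points on the curve; the sketch below uses hypothetical counts, not data from the experiments described here:

```python
import math

def depletion_tau(t1, n1, t2, n2):
    """Time constant tau (same units as t) of first-order depletion
    N(t) = N0 * exp(-t / tau), recovered from two points on the curve."""
    return (t2 - t1) / math.log(n1 / n2)

# Hypothetical normalized counts: 100 at t = 0 h, 25 at t = 4 h.
tau = depletion_tau(0.0, 100.0, 4.0, 25.0)
half_life = tau * math.log(2.0)
print(f"tau = {tau:.2f} h, half-life = {half_life:.2f} h")
```

With these numbers the count halves twice over four hours, so the recovered half-life is exactly 2 h; in real data one would fit all time points rather than just two, since immune clearance and tissue arrest can make the kinetics multi-exponential.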
In addition, we found that the number of CTCs dropped to undetectable levels when the supplier, the primary tumor, was removed. CTCs in our model could not be maintained in circulation without the supply from the solid tumor; the presented CTC count reflects the homeostasis between CTC formation and depletion (including apoptosis, killing by the immune system, and metastasizing to target organs). Furthermore, we found that after the primary tumor was removed, all the mice with CTC recurrence had observable metastases, while all the mice without CTC recurrence had none. This implies that CTCs might serve as a biomarker of tumor residual disease or recurrence. Our work demonstrated that, when combined with orthotopic tumor models, the novel IVFC technique offers the capability to elucidate the mechanisms that drive hematogenous metastasis and to monitor the efficacy of cancer therapy. These studies made significant contributions to resolving some important biomedical controversies and are potentially helpful for guiding clinical therapies. It was mentioned previously that IVFC might have higher sensitivity than conventional methods [3, 9], but no researchers had made this comparison with experimental results. In our study, we determined that our IVFC has 1.8-fold higher sensitivity than the currently used whole-blood analysis by conventional flow cytometry [12]. He et al. [9] and Galanzha et al. [14, 15] introduced multiphoton, photoacoustic, and photothermal techniques into IVFC. Meanwhile, they used monoclonal antibodies to label and detect CTCs. As reported, they succeeded in detecting rare CTCs


in subcutaneous lung cancer or melanoma models. Moreover, Alt et al. designed a retinal in vivo flow cytometer that can detect circulating cells in retinal vessels [51]. This technique had five times higher sensitivity than conventional IVFC and provided a detection site suitable for clinical use. These results show the potential of using IVFC in clinical diagnosis or therapeutic evaluation.

Intravital Confocal Microscopy: The Complementary Tool for IVFC in Cancer Study IVFC presents a visualized, dynamic picture of cancer hematogenous metastasis, which is the key process of metastasis. However, the processes of CTC formation and homing are also important. Intravital confocal microscopy (ICM) can image the primary tumor as well as metastasis-targeted tissues [52] and provide complementary imaging information for IVFC data. Intravital microscopy (IVM) was first presented during the nineteenth century, when microscopes were being used to image tissues in living animals [53]. In the early stage of IVM development, most studies could examine only the vasculature and the microcirculation, because the optics available at that time limited the visualization of other tissues. During the 1950s, the visualization of cancer metastasis was pioneered in a rabbit ear chamber [54]. Major breakthroughs were made during the 1990s, when considerably improved intravital imaging techniques were combined with genetic rodent tumor models expressing fluorescent proteins (FPs). Since then, IVM has evolved into an important tool for investigating the processes underlying cancer and metastasis [55–58]. Recently, a number of novel IVM techniques have become available, with different properties in relation to imaging depth, resolution, timescale, and applications [59]. Techniques based on fluorescence microscopy provide subcellular resolution and have successfully been used to image single cells in living organisms in their natural environment. ICM, one of the major kinds of IVM, permits detailed visualization of structures deep within thick, fluorescently labeled specimens. The study of Sipkins et al. [11] mentioned above was the first to combine IVFC with ICM to study cancer metastasis. ICM imaging presented the conditions of CTC homing in bone marrow and the microenvironments that led to CTC homing.
These results supported and explained the variations in CTC depletion kinetics; thus, the whole process of CTC homing and its effect factors were presented thoroughly. Afterward, most IVFC cancer studies combined ICM imaging to comprehensively investigate hematogenous and distant metastasis [7, 12]. It is worth mentioning that, in our study, which first investigated tumor metastasis under clinically relevant conditions by IVFC, ICM imaging showed that metastases could be divided by size into early metastases and advanced metastases. Moreover, we found that the number of early metastases correlated with the number of CTCs, and that CTCs could be detected before the formation of metastases. The number of advanced metastases reflected the accumulation of CTC numbers. However, the processes of CTC


Fig. 14 The processes of cancer metastasis and their comprehensive detection by IVFC combined with ICM. Tumor cells invade through the tumor border and blood vessel wall to form circulating tumor cells. CTCs travel in the circulatory system; most undergo apoptosis or are cleared by the immune system. Only a small fraction of CTCs can arrest at the vessel wall in distant organs and might generate secondary tumors, called metastases. IVFC can quantitate CTCs dynamically. ICM can image the tumor invasion that forms CTCs and the CTC homing to metastasis-targeted tissues, which finally forms distant metastases

formation were not investigated in any of the previous IVFC studies. ICM has been demonstrated as a powerful tool for investigating the dynamics of tumor invasion [60, 61]. The addition of ICM imaging at tumor sites may complement IVFC in cancer studies to monitor metastasis more comprehensively and dynamically (Fig. 14). In conclusion, IVFC provides a novel tool to investigate CTC dynamics in vivo. Combined with ICM, the panorama of cancer metastasis might be visualized. It is helpful for investigating tumor generation, development, spreading, metastasis, and recurrence, and it will provide novel insights in cancer study, not only in basic research but also in the clinic.

References 1. Shapiro HM (2003) Practical flow cytometry, 4th edn. Wiley-Liss, New York 2. Tuchin VV (2011) Advanced optical cytometry: methods and disease diagnoses. Wiley-VCH, Weinheim 3. Novak J, Georgakoudi I, Wei X, Prossin A, Lin CP (2004) In vivo flow cytometer for real-time detection and quantification of circulating cells. Opt Lett 29(1):77–79


4. Zharov VP, Galanzha EI, Tuchin VV (2005) Photothermal image flow cytometry in vivo. Opt Lett 30(6):628–630 5. Novak J, Puoris’haag M (2007) Two-color, double-slit in vivo flow cytometer. Opt Lett 32 (20):2993–2995 6. Georgakoudi I, Solban N, Novak J, Rice WL, Wei X, Hasan T, Lin CP (2004) In vivo flow cytometry: a new method for enumerating circulating cancer cells. Cancer Res 64(15):5044–5047 7. Li Y, Guo J, Wang C, Fan Z, Liu G, Wang C, Gu Z, Damm D, Mosig A, Wei X (2011) Circulation times of prostate cancer and hepatocellular carcinoma cells by in vivo flow cytometry. Cytometry A 79(10):848–854 8. Zharov VP, Galanzha EI, Tuchin VV (2005) Integrated photothermal flow cytometry in vivo. J Biomed Opt 10(5):051502–051513 9. He W, Wang H, Hartmann LC, Cheng JX, Low PS (2007) In vivo quantitation of rare circulating tumor cells by multiphoton intravital flow cytometry. Proc Natl Acad Sci U S A 104(28):11760–11765 10. Zeng Y, Xu J, Li D, Li L, Wen Z, Qu JY (2012) Label-free in vivo flow cytometry in zebrafish using two-photon autofluorescence imaging. Opt Lett 37(13):2490–2492 11. Sipkins DA, Wei X, Wu JW, Runnels JM, Cote D, Means TK, Luster AD, Scadden DT, Lin CP (2005) In vivo imaging of specialized bone marrow endothelial microdomains for tumour engraftment. Nature 435(7044):969–973 12. Fan ZC, Yan J, Liu GD, Tan XY, Weng XF, Wu WZ, Zhou J, Wei XB (2012) Real-time monitoring of rare circulating hepatocellular carcinoma cells in an orthotopic model by in vivo flow cytometry assesses resection on metastasis. Cancer Res 72(10):2683–2691 13. Li Y, Fan Z, Guo J, Liu G, Tan X, Wang C, Gu Z, Wei X (2010) Circulation times of hepatocellular carcinoma cells by in vivo flow cytometry. Chin Opt Lett 8(10):953–956 14. Galanzha EI, Shashkov EV, Kelly T, Kim JW, Yang L, Zharov VP (2009) In vivo magnetic enrichment and multiplex photoacoustic detection of circulating tumour cells. Nat Nanotechnol 4(12):855–860 15. 
Galanzha EI, Shashkov EV, Spring PM, Suen JY, Zharov VP (2009) In vivo, noninvasive, label-free detection and eradication of circulating metastatic melanoma cells using two-color photoacoustic flow cytometry with a diode laser. Cancer Res 69(20):7926–7934 16. Pitsillides CM, Runnels JM, Spencer JA, Zhi L, Wu MX, Lin CP (2011) Cell labeling approaches for fluorescence-based in vivo flow cytometry. Cytometry Part A 79(10):758–765 17. Nedosekin DA, Sarimollaoglu M, Shashkov EV, Galanzha EI, Zharov VP (2010) Ultra-fast photoacoustic flow cytometry with a 0.5 MHz pulse repetition rate nanosecond laser. Opt Express 18(8):8605–8620 18. Biris AS, Galanzha EI, Li Z, Mahmood M, Xu Y, Zharov VP (2009) In vivo Raman flow cytometry for real-time detection of carbon nanotube kinetics in lymph, blood, and tissues. J Biomed Opt 14(2):021006 19. Damm D, Wang C, Wei X, Mosig A (2009) Cell counting for in vivo flow cytometer signals using wavelet-based dynamic peak picking. Biomedical Engineering and Informatics, 2009. BMEI’09. 2nd International Conference on., IEEE 20. Galanzha EI, Kokoska MS, Shashkov EV, Kim JW, Tuchin VV, Zharov VP (2009) In vivo fiber‐based multicolor photoacoustic detection and photothermal purging of metastasis in sentinel lymph nodes targeted by nanoparticles. J Biophotonics 2(8–9):528–539 21. Zharov VP, Galanzha EI, Shashkov EV, Kim J-W, Khlebtsov NG, Tuchin VV (2007) Photoacoustic flow cytometry: principle and application for real-time detection of circulating single nanoparticles, pathogens, and contrast dyes in vivo. J Biomed Opt 12 (5):051503–051514 22. Tuchin VV, Tárnok A, Zharov VP (2011) In vivo flow cytometry: a horizon of opportunities. Cytometry Part A 79(10):737–745 23. Nedosekin DA, Sarimollaoglu M, Ye JH, Galanzha EI, Zharov VP (2011) In vivo ultra-fast photoacoustic flow cytometry of circulating human melanoma cells using near-infrared highpulse rate lasers. Cytometry Part A 79(10):825–833


24. Poellinger A, Martin JC, Ponder SL, Freund T, Hamm B, Bick U, Diekmann F (2008) Near-infrared laser computed tomography of the breast. Acad Radiol 15(12):1545 25. Xu M, Wang LV (2006) Photoacoustic imaging in biomedicine. Rev Sci Instrum 77(4):041101–041122 26. Sarimollaoglu M, Nedosekin D, Simanovsky Y, Galanzha E, Zharov V (2011) In vivo photoacoustic time-of-flight velocity measurement of single cells and nanoparticles. Opt Lett 36(20):4086–4088 27. Galanzha EI, Zharov VP (2011) In vivo photoacoustic and photothermal cytometry for monitoring multiple blood rheology parameters. Cytometry Part A 79(10):746–757 28. Zharov VP, Galanzha EI, Shashkov EV, Khlebtsov NG, Tuchin VV (2006) In vivo photoacoustic flow cytometry for monitoring of circulating single cancer cells and contrast agents. Opt Lett 31(24):3623–3625 29. Galanzha EI, Shashkov EV, Tuchin VV, Zharov VP (2008) In vivo multispectral, multiparameter, photoacoustic lymph flow cytometry with natural cell focusing, label-free detection and multicolor nanoparticle probes. Cytometry Part A 73(10):884–894 30. Galanzha EI, Sarimollaoglu M, Nedosekin DA, Keyrouz SG, Mehta JL, Zharov VP (2011) In vivo flow cytometry of circulating clots using negative photothermal and photoacoustic contrasts. Cytometry Part A 79(10):814–824 31. Mathers C, Fat DM, Boerma JT (2008) The global burden of disease: 2004 update. World Health Organization, Geneva 32. Jemal A, Bray F, Center MM, Ferlay J, Ward E, Forman D (2011) Global cancer statistics. CA Cancer J Clin 61(2):69–90 33. Sun YF, Yang XR, Zhou J, Qiu SJ, Fan J, Xu Y (2011) Circulating tumor cells: advances in detection methods, biological issues, and clinical relevance. J Cancer Res Clin Oncol 137(8):1151–1173 34. Paget S (1889) The distribution of secondary growths in cancer of the breast. Lancet 133(3421):571–573 35. Fidler IJ (2003) The pathogenesis of cancer metastasis: the 'seed and soil' hypothesis revisited. Nat Rev Cancer 3(6):453–458 36.
Allard WJ, Matera J, Miller MC, Repollet M, Connelly MC, Rao C, Tibbe AG, Uhr JW, Terstappen LW (2004) Tumor cells circulate in the peripheral blood of all major carcinomas but not in healthy subjects or patients with nonmalignant diseases. Clin Cancer Res 10 (20):6897–6904 37. Kim MY, Oskarsson T, Acharyya S, Nguyen DX, Zhang XH, Norton L, Massague J (2009) Tumor self-seeding by circulating cancer cells. Cell 139(7):1315–1326 38. Andreopoulou E, Cristofanilli M (2010) Circulating tumor cells as prognostic marker in metastatic breast cancer. Expert Rev Anticancer Ther 10(2):171–177 39. Pantel K, Brakenhoff RH, Brandt B (2008) Detection, clinical relevance and specific biological properties of disseminating tumour cells. Nat Rev Cancer 8(5):329–340 40. Boutrus S, Greiner C, Hwu D, Chan M, Kuperwasser C, Lin CP, Georgakoud I (2007) Portable two-color in vivo flow cytometer for real-time detection of fluorescently-labeled circulating cells. J Biomed Opt 12(2):020507 41. Lo Celso C, Fleming HE, Wu JW, Zhao CX, Miake-Lye S, Fujisaki J, Cote D, Rowe DW, Lin CP, Scadden DT (2009) Live-animal tracking of individual haematopoietic stem/progenitor cells in their niche. Nature 457(7225):92–96 42. Fan Z, Spencer JA, Lu Y, Pitsillides CM, Singh G, Kim P, Yun SH, Toxavidis V, Strom TB, Lin CP, Koulmanda M (2010) In vivo tracking of ‘color-coded’ effector, natural and induced regulatory T cells in the allograft response. Nat Med 16(6):718–722 43. Lee H, Alt C, Pitsillides CM, Puoris’haag M, Lin CP (2006) In vivo imaging flow cytometer. Opt Express 14(17):7789–7800 44. Zhong CF, Tkaczyk ER, Thomas T, Ye JY, Myc A, Bielinska AU, Cao Z, Majoros I, Keszler B, Baker JR, Norris TB (2008) Quantitative two-photon flow cytometry – in vitro and in vivo. J Biomed Opt 13(3):034008

28

X.-B. Wei et al.

45. Wei X, Sipkins DA, Pitsillides CM, Novak J, Georgakoudi I, Lin CP (2005) Real-time detection of circulating apoptotic cells by in vivo flow cytometry. Mol Imaging 4(4):415–416 46. Alsayed Y, Ngo H, Runnels J, Leleu X, Singha UK, Pitsillides CM, Spencer JA, Kimlinger T, Ghobrial JM, Jia X, Lu G, Timm M, Kumar A, Cote D, Veilleux I, Hedin KE, Roodman GD, Witzig TE, Kung AL, Hideshima T, Anderson KC, Lin CP, Ghobrial IM (2007) Mechanisms of regulation of CXCR4/SDF-1 (CXCL12)-dependent migration and homing in multiple myeloma. Blood 109(7):2708–2717 47. Azab AK, Runnels JM, Pitsillides C, Moreau AS, Azab F, Leleu X, Jia X, Wright R, Ospina B, Carlson AL, Alt C, Burwick N, Roccaro AM, Ngo HT, Farag M, Melhem MR, Sacco A, Munshi NC, Hideshima T, Rollins BJ, Anderson KC, Kung AL, Lin CP, Ghobrial IM (2009) CXCR4 inhibitor AMD3100 disrupts the interaction of multiple myeloma cells with the bone marrow microenvironment and enhances their sensitivity to therapy. Blood 113 (18):4341–4351 48. Runnels JM, Carlson AL, Pitsillides C, Thompson B, Wu J, Spencer JA, Kohler JM, Azab A, Moreau AS, Rodig SJ, Kung AL, Anderson KC, Ghobrial IM, Lin CP (2011) Optical techniques for tracking multiple myeloma engraftment, growth, and response to therapy. J Biomed Opt 16(1):011006 49. Chang YS, di Tomaso E, McDonald DM, Jones R, Jain RK, Munn LL (2000) Mosaic blood vessels in tumors: frequency of cancer cells in contact with flowing blood. Proc Natl Acad Sci U S A 97(26):14608–14613 50. Mehes G, Witt A, Kubista E, Ambros PF (2001) Circulating breast cancer cells are frequently apoptotic. Am J Pathol 159(1):17–20 51. Alt C, Veilleux I, Lee H, Pitsillides CM, Cote D, Lin CP (2007) Retinal flow cytometer. Opt Lett 32(23):3450–3452 52. Beerling E, Ritsma L, Vrisekoop N, Derksen PW, van Rheenen J (2011) Intravital microscopy: new insights into metastasis of tumors. J Cell Sci 124(Pt 3):299–310 53. Wagner R (1839) Erlauterungstaflen zur physiologie und entwicklungsgeschichte. 
Leopold Voss, Leipzig 54. Wouters FS, Verveer PJ, Bastiaens PI (2001) Imaging biochemistry inside cells. Trends Cell Biol 11(5):203–211 55. Chishima T, Miyagi Y, Wang X, Yamaoka H, Shimada H, Moossa AR, Hoffman RM (1997) Cancer invasion and micrometastasis visualized in live tissue by green fluorescent protein expression. Cancer Res 57(10):2042–2047 56. Farina KL, Wyckoff JB, Rivera J, Lee H, Segall JE, Condeelis JS, Jones JG (1998) Cell motility of tumor cells visualized in living intact primary tumors using green fluorescent protein. Cancer Res 58(12):2528–2532 57. MacDonald IC, Schmidt EE, Morris VL, Chambers AF, Groom AC (1992) Intravital videomicroscopy of the chorioallantoic microcirculation: a model system for studying metastasis. Microvasc Res 44(2):185–199 58. Naumov GN, Wilson SM, MacDonald IC, Schmidt EE, Morris VL, Groom AC, Hoffman RM, Chambers AF (1999) Cellular expression of green fluorescent protein, coupled with highresolution in vivo videomicroscopy, to monitor steps in tumor metastasis. J Cell Sci 112 (Pt 12):1835–1842 59. Ntziachristos V (2010) Going deeper than microscopy: the optical imaging frontier in biology. Nat Methods 7(8):603–614 60. Stoletov K, Kato H, Zardouzian E, Kelber J, Yang J, Shattil S, Klemke R (2010) Visualizing extravasation dynamics of metastatic tumor cells. J Cell Sci 123(Pt 13):2332–2341 61. Le Devedec SE, Lalai R, Pont C, de Bont H, van de Water B (2010) Two-photon intravital multicolor imaging combined with inducible gene expression to distinguish metastatic behavior of breast cancer cells in vivo. Mol Imaging Biol 13(1):67–77

2 SERS for Sensitive Biosensing and Imaging

U. S. Dinish and Malini Olivo

Contents

Introduction
Surface-Enhanced Raman Scattering
  Chemical Enhancement
  Electromagnetic Enhancement
Bioimaging and Sensing with SERS
  Label-Free SERS Detection
  Biosensing and Imaging with the Use of SERS Labels
Clinical Applications of SERS
  Cancer Diagnosis
  Study on Acute Renal Failure and Urinary pH
Perspectives and Conclusions
References

Abstract

Surface-enhanced Raman scattering spectroscopy, or SERS, has gained wide popularity over the past two decades in the field of biomedicine due to its unique analytical capability and ease of use. This chapter provides an account of recent developments in SERS for selected biomedical applications. Label-free and labeled detection with Raman reporter schemes have been employed to detect and identify various biomolecules such as nucleic acids, lipids, peptides, and proteins, as well as for in vivo and cellular sensing. A detailed account of various SERS biosensing strategies is reviewed with emphasis on sensitivity and specificity, elaborating some recent examples of preclinical and clinical applications. Finally, a critical analysis of the technology is provided with regard to the challenges yet to be addressed and its future trend in biomedicine.

Keywords

Surface-enhanced Raman scattering • Biosensing • Bioimaging • Raman reporter • Clinical applications • SERS nanotags • Label-free SERS detection • Detection with SERS labels • Cancer diagnosis • Multiplex sensing and imaging

U.S. Dinish (*)
Bio-Optical Imaging Group, Singapore Bioimaging Consortium (SBIC), A*STAR, Singapore, Singapore
e-mail: [email protected]

M. Olivo (*)
Bio-Optical Imaging Group, Singapore Bioimaging Consortium (SBIC), A*STAR, Singapore, Singapore
Bio-photonics Group, School of Physics, National University of Ireland, Galway, Ireland
e-mail: [email protected]

© Springer Science+Business Media Dordrecht 2017
A.H.-P. Ho et al. (eds.), Handbook of Photonics for Biomedical Engineering, DOI 10.1007/978-94-007-5052-4_24

Introduction

The growing demand for reliable and robust methodologies in medical diagnostics calls for continuous advancement of biosensor technologies. A clinically relevant biosensor should be capable of providing measurement parameters rapidly and sensitively. Sensing of various biomolecules such as metabolites, amino acids, proteins, nucleic acids, and peptides requires a sensitive analytical technique capable of label-free identification. Raman scattering, though a powerful analytical tool, falls short here due to its inherent sensitivity limitations. However, a series of fundamental and technological advances over the past three to four decades have made Raman scattering a viable biosensing modality, specifically after the advent of surface-enhanced Raman scattering (SERS). In the past decade, SERS has enabled the detection of Raman "fingerprint" spectroscopic features of numerous biomolecules using relatively simple laboratory tools and benchtop portable devices.

This chapter is intended to provide detailed coverage of recent developments in SERS as a biosensing/imaging modality. Firstly, in section "Surface-Enhanced Raman Scattering," a brief overview of SERS is given along with the underlying mechanisms of the giant enhancement. This is followed by the application of SERS to preclinical bioimaging and sensing, with a detailed account of SERS techniques for biosensing with and without Raman labeling. Recent examples of clinical applications of SERS are discussed in section "Clinical Applications of SERS." Lastly, in section "Perspectives and Conclusions," the prospects of SERS are discussed in the context of translating the technology from a lab-bench to a bedside biomedical technique.

Surface-Enhanced Raman Scattering

In order to understand the interaction of light with matter, various spectroscopic techniques have evolved over the years. Among these, elastic and inelastic scattering of light plays a major role. Raman scattering, an inelastic scattering of light in which incident photons experience a net change in energy through the vibrational and rotational motion of a molecule, is a key technique. Raman spectra represent the weak Stokes- and anti-Stokes-shifted vibrational "fingerprint" of the molecule. Because the scattering is so inefficient, Raman signals are often difficult to distinguish from background; roughly one out of a million incident photons is converted to a Raman photon. Though weak, Raman spectroscopy has emerged as a powerful analytical tool for various biomedical and chemical sensing applications over the years due to its inherent ability to generate "fingerprint" spectral features that are unique to particular chemical species/groups. In the past, Raman spectroscopy was used for biological applications such as urine analysis and cancer diagnosis, though it lacks the high sensitivity required for most biomedical and biochemical sensing applications [1, 2]. In general, Raman scattering efficiency is about 12–14 orders of magnitude lower than that of fluorescence [3].

More than four decades after the discovery of Raman scattering in 1928, Fleischmann and coworkers observed a phenomenon that could increase Raman signal intensity, requiring lower excitation powers while retaining the sample's signature Raman spectrum [4]. They first observed an enormous enhancement in Raman signals from pyridine molecules adsorbed on an electrochemically roughened silver (Ag) electrode. Subsequently, in 1977, two separate research groups, Jeanmaire and Van Duyne and Albrecht and Creighton, replicated the giant enhancement of the Raman signal on nanoroughened metal surfaces [5, 6]. They attributed the observed enhancement to intensified electromagnetic (EM) fields and various chemical effects on the metallic surfaces. This enhanced Raman scattering on a nanoroughened metal surface was later termed SERS. Initial SERS studies were all conducted on pyridine molecules adsorbed on a roughened Ag electrode. Since then, various metals such as copper, Ag, and gold (Au) have also been used for enhancement. Among these, Ag and Au are the most commonly used because of their special optical properties.

The giant enhancement in SERS, as compared to normal Raman scattering, is attributed to (i) an electromagnetic (EM) mechanism and (ii) a chemical mechanism [7, 8]. Of these, EM enhancement contributes most significantly to the SERS enhancement and is governed by the plasmonic properties of the nanostructures of the substrate. A general overview of these enhancement mechanisms is provided in the following sections.
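The magnitude of these enhancements is usually summarized by an enhancement factor. One commonly used (though not unique) definition in the literature compares the surface-normalized SERS intensity with the volume-normalized normal Raman intensity:

```latex
\mathrm{EF} \;=\; \frac{I_{\mathrm{SERS}} / N_{\mathrm{surf}}}{I_{\mathrm{RS}} / N_{\mathrm{vol}}}
```

where $I_{\mathrm{SERS}}$ and $I_{\mathrm{RS}}$ are the measured SERS and normal Raman intensities, and $N_{\mathrm{surf}}$ and $N_{\mathrm{vol}}$ are the numbers of molecules probed on the enhancing surface and in the bulk reference, respectively.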

Chemical Enhancement

Raman scattering arises only from molecules whose polarizability changes during their vibrational motion. In particular, molecules with a lone electron pair can bind to the metal surface by chemisorption, where charge transfer occurs between the molecule and the metal surface. Chemical enhancement depends mainly on the structural and chemical nature of the molecules under study, and it leads to a few scenarios that determine the enhancement [9, 10]. In the first, an analyte molecule is physisorbed onto the metal surface, and the metal induces a minor perturbation that results in a small change in the molecule's electronic distribution. This leads to a change in polarizability, which in turn changes the Raman efficiency. In the second scenario, the analyte forms a "surface complex" by covalent bond formation (e.g., a thiol group-containing molecule attaching to Ag or Au) or by indirect binding with the help of an electrolyte ion. This results in a change in the intrinsic polarizability, which leads to the SERS enhancement. In the third case, photo-driven charge transfer may occur between the analyte molecule and the noble metal surface. This occurs primarily when the difference between the Fermi energy level of the metal and the highest occupied molecular orbital (HOMO) or lowest unoccupied molecular orbital (LUMO) energy of the analyte matches the excitation laser energy. Under such a scenario, light-induced charge transfer can occur between the HOMO and unoccupied energy states above the Fermi level, or between occupied energy states just below the Fermi level and the LUMO. This again changes the polarizability and hence the Raman efficiency. Generally, such factors account for a weak enhancement of up to only 10².

Electromagnetic Enhancement

The discrepancy between theoretical and measured values of the enhancement led to the concept of EM enhancement, which contributes the larger share to SERS. EM enhancement is a wavelength-dependent effect arising from the excitation of the localized surface plasmon resonance (LSPR). Unlike in a dielectric medium, free charges on the surface (surface plasmons) of a noble metal in the form of nanoparticles (NPs), sharp metal tips, or a nanoroughened surface can be driven to oscillate collectively by light of an appropriate frequency, leading to a redistribution and a kind of nanofocusing of the photon energy density at specific locations across the surface. This contributes an enhancement in electric field intensity of 10²–10⁴ times in the vicinity of the metal surface [11, 12]. Maximum electric fields are reached at some of these focused spots ("hot spots") when the incident light frequency matches the natural oscillating frequency of the surface plasmons. This effect, known as surface plasmon resonance (SPR), contributes to the enhancement for molecules situated near these high-field hot spots [13, 14]. All in all, regardless of the geometry of the nanostructures employed, the enhancement factor (EF) obtainable with EM SERS is typically around 10⁴–10¹⁰ [14].
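As a rough illustration of how the EM enhancement scales, the widely used |E|⁴ approximation multiplies the field-intensity enhancement at the laser and Stokes wavelengths. The sketch below evaluates this for a small metal sphere in the quasi-static limit; the Drude parameters (loosely silver-like) and the medium permittivity are illustrative assumptions, not fitted values:

```python
import numpy as np

EV_NM = 1239.84  # hbar*c in eV*nm, for wavelength <-> photon-energy conversion

def eps_drude(lambda_nm, eps_inf=5.0, omega_p=9.1, gamma=0.021):
    """Drude dielectric function; parameters roughly resemble silver (assumed)."""
    w = EV_NM / lambda_nm  # photon energy in eV
    return eps_inf - omega_p**2 / (w**2 + 1j * gamma * w)

def surface_enhancement(lambda_nm, eps_medium=1.77):
    """|E/E0|^2 at the pole of a small sphere in the quasi-static limit
    (eps_medium = 1.77 corresponds to water)."""
    g = (eps_drude(lambda_nm) - eps_medium) / (eps_drude(lambda_nm) + 2 * eps_medium)
    return abs(1 + 2 * g) ** 2

def sers_ef(lambda_laser_nm, raman_shift_cm1):
    """|E|^4 estimate: field-intensity enhancement at laser times Stokes line."""
    lambda_stokes = 1.0 / (1.0 / lambda_laser_nm - raman_shift_cm1 * 1e-7)
    return surface_enhancement(lambda_laser_nm) * surface_enhancement(lambda_stokes)

# The enhancement peaks near the dipolar LSPR, where Re(eps) = -2*eps_medium
wavelengths = np.arange(350, 701, 1)
efs = [sers_ef(lam, 1000) for lam in wavelengths]  # 1000 cm^-1 Stokes shift
peak_nm = wavelengths[int(np.argmax(efs))]
print(int(peak_nm))
```

The resonant response comes from the small denominator of g at the LSPR condition; real substrates add geometry-dependent hot spots on top of this single-particle picture, which is how EFs well beyond this estimate arise.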

Bioimaging and Sensing with SERS

Rapid label-free detection of small target analytes is of great importance in biosensing. SERS is an excellent candidate for this because of its high sensitivity, the spectral "fingerprinting" that produces distinct spectra even from molecules similar in structure and function, and the relative ease of sample preparation and measurement. Moreover, water has a very small Raman scattering cross section, which leads to minimal background signal from aqueous samples. In most cases, however, the complex biochemical composition of biological samples makes the interpretation of SERS spectra highly challenging, and the identification of a particular biomolecular system of interest is not trivial. In this context, the development of SERS detection using external labels, typically called Raman reporters, is highly important. This method is very similar to conventional labeling with fluorescent dyes or quantum dots. In this section, we highlight recent developments of SERS as a sensitive biosensing/imaging tool both without labels (label-free detection) and with the use of Raman reporters (labeled detection).

Label-Free SERS Detection

SERS can be employed for the label-free detection of many biomolecular systems, including microorganisms, cancer markers, cancer drugs, metabolites, and small molecules like peptides and amino acids. Such label-free detection succeeds only with molecules that inherently possess a high Raman scattering cross section.

Glucose Sensing

There is active research into the development of minimally invasive and biologically compatible methods for quantitative glucose detection. SERS has been explored as a sensitive glucose sensing modality by Van Duyne and coworkers. They used a silver film over nanospheres (AgFON) as the SERS substrate, in combination with a specialized partition layer on top of it for generating a concentration gradient [15], and applied partial least squares analysis to the spectra for quantitative glucose measurement. Among the various partition layers tested, straight alkanethiols were found to be effective, and in this study they used 1-decanethiol (1-DT). Subsequently, in vivo transcutaneous glucose measurement in a rat model was also developed by the same research group [16, 17]. Here, they functionalized the AgFON substrates with a mixed self-assembled monolayer (SAM) of 1-DT and 6-mercapto-1-hexanol (MH) and implanted the sensor chip subcutaneously in living rats, as shown in Fig. 1 [16]. The mixed 1-DT/MH monolayers were designed to have dual hydrophobic–hydrophilic functionality, which performed better as a partitioning layer than the layer used before. The SAM also acted as a "filter" to exclude larger nontarget molecules like proteins and so avoid biofouling. The sensor performed accurately at low glucose concentrations over 17 days after subcutaneous implantation. In another recent work, Dinish et al. demonstrated in vitro glucose sensing at physiologically relevant concentrations using a nanogap SERS substrate with high reproducibility, eliminating point-to-point intensity variation [18].

Bacteria Detection

SERS can provide an excellent platform for the straightforward detection of pathogenic microorganisms. Haisch and coworkers reported the label-free SERS detection of Legionella pneumophila and Salmonella typhimurium cells by using a SERS-based immunoassay [19].

Fig. 1 Schematic of (a) in vivo SERS measurement system in which a rat with a surgically implanted sensor and optical window was integrated into a conventional laboratory Raman spectroscopy system, (b) SAM-modified substrate design for glucose sensing, (c) morphology of the AgFON SERS substrate by atomic force microscopy, and (d) optical characterization of the substrate to determine the position of the localized surface plasmon resonance (LSPR) (Reprinted with permission from Stuart D. A. et al., Anal. Chem. 78, 7211–7215 (2006). Copyright (2006) American Chemical Society. Adapted from Ref. [16])

Here, determination of bacterial contamination in water does not require enrichment or dehydration steps prior to analysis and labeling, which keeps the total assay time to only 65 min. The bacterial species were captured by incubating the bacterial suspension on a glass chip carrying the respective antibodies (anti-Legionella or anti-Salmonella). The immobilized cells were subsequently placed in a polycarbonate tray filled with Ag colloid to enhance the Raman signal. The resultant SERS spectra obtained from the bacterial species were relatively weak and were further enhanced by adding sodium azide during incubation with the Ag colloid, which induces specific agglomeration of the nanoparticles at the bacterial cell wall. Bacterial detection was successful due to their whole-organism fingerprint spectra: L. pneumophila showed strong amide I, II, and III peaks, whereas S. typhimurium exhibited strong peaks only of amide II and III. Prominent SERS peaks from S. typhimurium and L. pneumophila are shown in Fig. 2 [19]. For quantitative analysis, SERS mapping on the chip was carried out at the amide III band, at 1290 cm⁻¹ for S. typhimurium and 1310 cm⁻¹ for L. pneumophila.
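Both quantitative readouts above — the partial least squares calibration for glucose and the band-intensity mapping for bacteria — reduce spectra to concentrations. As an illustrative sketch of the PLS step, the code below fits a minimal NIPALS PLS1 model to synthetic spectra; the band positions, concentrations, background, and noise are all invented, and this is not the published calibration:

```python
import numpy as np

rng = np.random.default_rng(0)

def pls1_fit(X, y, n_comp=2):
    """Minimal NIPALS PLS1 regression (illustrative, not a validated library)."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xr, yr = X - x_mean, y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)          # weight vector
        t = Xr @ w                      # scores
        tt = float(t @ t)
        p = Xr.T @ t / tt               # X loadings
        q = float(yr @ t) / tt          # y loading
        Xr = Xr - np.outer(t, p)        # deflate
        yr = yr - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P = np.array(W).T, np.array(P).T
    B = W @ np.linalg.solve(P.T @ W, np.array(Q))  # regression coefficients
    return B, x_mean, y_mean

def pls1_predict(X, model):
    B, x_mean, y_mean = model
    return (X - x_mean) @ B + y_mean

# Synthetic "spectra": an analyte band pattern scaling with concentration,
# a variable broad background, and channel noise (all values invented).
wn = np.linspace(400, 1800, 300)
conc = rng.uniform(1, 20, 40)
band = np.exp(-((wn - 1120) / 25) ** 2) + 0.6 * np.exp(-((wn - 1460) / 30) ** 2)
background = np.exp(-((wn - 900) / 400) ** 2)
X = (conc[:, None] * band
     + rng.uniform(5, 15, 40)[:, None] * background
     + 0.05 * rng.standard_normal((40, 300)))

model = pls1_fit(X[:30], conc[:30], n_comp=3)
rmse = float(np.sqrt(np.mean((pls1_predict(X[30:], model) - conc[30:]) ** 2)))
print(round(rmse, 3))
```

With real spectra the number of latent variables would be chosen by cross-validation, which is the validation step the AgFON glucose studies emphasize.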

Detection of Lipids

Lipids display a large structural diversity, from amphiphilic structures with glycerol backbones, like phospholipids, to multiple-ring structures like steroids.


Fig. 2 SERS spectra of S. typhimurium (red) and L. pneumophila (black). Their amide III bands at 1290 and 1310 cm⁻¹ can be used for SERS sensing (Reprinted with permission from Knauer M. et al., Anal. Chem. 82, 2766–2772 (2010). Copyright (2010) American Chemical Society. Adapted from Ref. [19])

Detection of lipids is highly relevant in several respects: (i) lipids are structural elements of cell membranes, (ii) they store energy as fats in adipose tissue, and (iii) they act as important signaling molecules. Naomi Halas and coworkers studied the interaction of ibuprofen (a small-molecule drug) with hybrid lipid bilayer (HBL)-coated nanoshells [20]. This study has great significance because the interaction of ibuprofen with lipid bilayers in the gastrointestinal tract has been reported as one of the mechanisms of side effects caused by ibuprofen, such as gastrointestinal bleeding. When HBL-functionalized nanoshells were incubated with ibuprofen, the resultant Raman spectra showed peaks at 803, 1185, 1205, and 1610 cm⁻¹, arising from ibuprofen partitioning into the HBL, and the intensity of these peaks increased with ibuprofen concentration. Their study showed that ibuprofen partitioning into a deuterated HBL disrupted the order of the HBL, which was established by monitoring the carbon–deuterium stretch Raman peak as a function of ibuprofen concentration. Later, Halas and team also used nanoshells to investigate the transfer of phospholipids from vesicles to HBLs [21]. The vesicles were mainly composed of deuterated 1,2-dimyristoyl-sn-glycero-3-phosphocholine (D-DMPC), whereas the HBLs were a monolayer of DMPC spread over a self-assembled monolayer of dodecanethiol. They monitored the intensity of the C-H Raman stretch at 2850 cm⁻¹


normalized to the intensity of the C-S stretch at 710 cm⁻¹ (I_CH/I_CS) to determine whether D-DMPC was transferred from the vesicles to the HBLs on the nanoshells. A significant reduction in I_CH/I_CS when D-DMPC vesicles were mixed with HBL-coated nanoshells indicated the transfer of lipids from vesicles to HBLs. SERS studies of lipids are highly useful for understanding membrane and lipid biology. This is illustrated by lipids such as sphingosine-1-phosphate and platelet-activating factor, which are known to function as cellular signaling molecules but are difficult to detect. With SERS in conjunction with partition layer schemes, such lipid signaling molecules could be detected in complex mixtures based on their Raman spectroscopic profiles [22].
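The readout in the vesicle-transfer experiment is a normalized band-intensity ratio (I_CH/I_CS). A minimal sketch of that kind of calculation on a synthetic spectrum, with invented band shapes, widths, and amplitudes and a simple linear baseline correction:

```python
import numpy as np

def band_area(wavenumber, intensity, center, half_width=40.0):
    """Integrate a band over center +/- half_width after subtracting a linear
    baseline drawn between the window edges (a simple, common correction)."""
    mask = (wavenumber >= center - half_width) & (wavenumber <= center + half_width)
    x, y = wavenumber[mask], intensity[mask]
    baseline = np.interp(x, [x[0], x[-1]], [y[0], y[-1]])
    d = y - baseline
    return float(np.sum((d[1:] + d[:-1]) * np.diff(x)) / 2.0)  # trapezoid rule

# Synthetic spectrum: C-S stretch near 710 cm^-1, C-H stretch near 2850 cm^-1,
# on a sloping background (all shapes and amplitudes are invented).
wn = np.arange(400.0, 3201.0, 2.0)
spectrum = (1.0 * np.exp(-((wn - 710) / 8) ** 2)
            + 2.5 * np.exp(-((wn - 2850) / 12) ** 2)
            + 0.001 * wn)

ratio = band_area(wn, spectrum, 2850) / band_area(wn, spectrum, 710)
print(round(ratio, 2))
```

In the actual experiment, a decrease of this ratio upon mixing D-DMPC vesicles with the HBL-coated nanoshells was the signature of lipid transfer.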

Monitoring of Anticancer Drug Release

Ock et al. studied the release of thiopurines, a class of anticancer drugs, in vitro and in vivo using SERS [23]. In this study, 6-mercaptopurine (6MP) and 6-thioguanine (6TG) adsorbed on the surface of Au NPs were displaced by glutathione monoester (GSH-OEt) acting as an intracellular external stimulus, as shown in Fig. 3a. The release of a portion of the 6MP or 6TG molecules adsorbed on the Au NPs was clearly observed as a decrease in their SERS intensity. As a negative control, a tripeptide with a methyl group instead of a thiol group was used as an inactive GSH derivative. The strongest Raman peak, assigned to the C-N stretching of the purine ring, was used to monitor the decrease in SERS intensity, as shown in Fig. 3b. Release of the thiopurine drugs by intracellular endogenous GSH was monitored by adding GSH-OEt to the cell culture medium; the resultant decrease in the Raman peak intensities depended on the time elapsed after the addition of GSH-OEt (Fig. 3c). To emphasize the efficacy of this mechanism, they also studied in vivo drug release in living mice by subcutaneous injection of 6TG-modified Au NPs. The Raman peak intensity of 6TG decreased when GSH-OEt was injected, whereas the control tripeptide had little influence, confirming the potential of this method.

Detection of Glutathione

Glutathione is a biologically important tripeptide that exists in tissues both in the reduced form (glutathione, GSH) and in the oxidized dimeric form (glutathione disulfide, GSSG). Glutathione plays a key role in the respiration of mammalian and plant tissues. It also protects cells against hydrogen peroxide and serves as a cofactor for various enzymes. Glutathione can be readily detected with SERS by monitoring the C-S stretching Raman shift at 660 cm⁻¹.
Ozaki and coworkers detected glutathione by mixing it with Ag NPs, heating to about 60–100 °C until dry, and then measuring its SERS spectra [24]. Increased aggregation of the Ag NPs was observed in the presence of glutathione. They observed a linear SERS response to glutathione concentration in the 100–800 nM range, with a limit of detection of 50 nM. In another study, Deckert and coworkers measured tip-enhanced Raman spectra (TERS) of oxidized glutathione (GSSG) immobilized on a thin Au NP [25]. In their study, the spectra were dominated by the Raman shifts of the carboxyl bands at 1408 cm⁻¹ and the amide bands at 1627 cm⁻¹.



Fig. 3 (a) Experimental scheme of the release of thiopurine from Au NPs via GSH. (b) GSH concentration-dependent SERS intensities of 6MP (10⁻⁵ M); the peak at 1258 cm⁻¹ was used to compare the relative intensities from 6MP. (c) Time-lapse live cell images in a single A549 cell after treatment with glutathione ethyl ester (GSH-OEt), indicating the in situ release of 6MP from Au NPs (Reprinted with permission from Ock K. et al., Anal. Chem. 84, 2172–2178 (2012). Copyright (2012) American Chemical Society. Adapted from Ref. [23])

Detection of Nicotinic Acid Adenine Dinucleotide Phosphate

Nicotinic acid adenine dinucleotide phosphate (NAADP) is a calcium secondary messenger that plays an important role in intracellular Ca²⁺ release. Gogotsi and coworkers used a glass substrate coated with Au NPs for the SERS detection of NAADP [26]. SERS detection was performed on a 1 μL sample volume of 100 μM NAADP. At this concentration, the adenine Raman band at 733 cm⁻¹


Fig. 4 (a) Concentration-dependent SERS spectra of an aqueous solution of NAADP. (b) PCA of SERS data collected on cells treated with acetylcholine, an aqueous solution of 100 μM NAADP, and untreated control cells. Each point in the principal component space represents an SERS spectrum, with the distance between data points proportional to the degree of similarity between the spectra (Reprinted with permission from Vitol E. A. et al., Anal. Chem. 82, 6770–6774 (2010). Copyright (2010) American Chemical Society. Adapted from Ref. [26])

dominates the spectrum. Subsequently, they also studied agonist-induced changes in the NAADP concentration in SkBr3 breast cancer cells using the SERS sensor. The SERS response of NAADP from cell extracts was measured after treatment with each of three agonists: ATP, acetylcholine, and histamine. In order to quantify the NAADP concentration in treated cells, the SERS spectra were compared with reference NAADP spectra collected from pure aqueous solution. Concentration-dependent SERS spectra of NAADP are shown in Fig. 4a. The SERS intensity at 733 cm⁻¹, due to the adenine peak, dominates the spectrum at higher concentrations. The correlation between the SERS spectra of cells treated with acetylcholine and those of pure NAADP was established using principal component analysis (PCA), as shown in Fig. 4b. This study demonstrates the potential of sensitive intracellular SERS detection of calcium messengers, which could help in understanding the mechanisms of cellular calcium signaling pathways.
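The comparison in Fig. 4b reduces each spectrum to coordinates in a principal-component space. A compact sketch of that kind of analysis on synthetic "treated" versus "control" spectra; the adenine-like band at 733 cm⁻¹, the amplitudes, and the noise level are invented stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)

def pca_scores(X, n_comp=2):
    """Scores of mean-centered spectra on their leading principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_comp].T

# Synthetic data: "treated" spectra carry an extra adenine-like band at
# 733 cm^-1 on top of a common broad feature; "control" spectra do not.
wn = np.linspace(400, 1800, 250)
common = np.exp(-((wn - 1000) / 150) ** 2)
adenine = np.exp(-((wn - 733) / 10) ** 2)
control = common + 0.05 * rng.standard_normal((20, 250))
treated = common + 0.8 * adenine + 0.05 * rng.standard_normal((20, 250))

scores = pca_scores(np.vstack([control, treated]))
# Separation of group centroids in PC space vs the within-group spread
gap = float(np.linalg.norm(scores[:20].mean(axis=0) - scores[20:].mean(axis=0)))
spread = float(scores[:20].std() + scores[20:].std())
print(gap > spread)
```

Each point in such a score plot is one spectrum; clustering of the treated cells' spectra near the pure-NAADP reference is what supports the identification in the published study.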

Detection of Folic Acid

Folic acid has been linked to a number of diseases, but few optical modalities exist to quantify it. Ren et al. synthesized a hybrid nanostructure comprising graphene oxide (GO) and Ag NPs and used it to detect the SERS spectrum of folic acid in aqueous solution and in serum [27]. The hybrid SERS substrate was prepared by the self-assembly of Ag NPs on graphene oxide: modification of the graphene oxide with positively charged poly(diallyldimethylammonium chloride) (PDDA) enabled electrostatic self-assembly of the negatively charged Ag NPs onto the PDDA-coated graphene (Fig. 5a). The assembly of Ag NPs reduced the distance between the particles, which led to an increased


Fig. 5 (a) Illustration of the fabrication of GO/PDDA/Ag NPs and the procedure of SERS detection of folic acid using GO/PDDA/Ag NPs as substrates. (b) SERS spectra of different concentrations of folic acid in water: blank (a), 9 nM (b), 18 nM (c), 36 nM (d), 90 nM (e), and 180 nM (f). (c) SERS intensity from a dilution series of folic acid in water based on the 1595 cm⁻¹ peak (Reprinted with permission from Ren W. et al., ACS Nano 5, 6425–6433 (2011). Copyright (2011) American Chemical Society. Adapted from Ref. [27])

SERS activity. The positive charge on the surface of the hybrid substrates was capable of anchoring folic acid molecules via electrostatic attraction, generating extra enhancement of the SERS signal. They observed a good linear response in the concentration range of 9–180 nM with a detection limit of 9 nM, as shown in Fig. 5b, c. Serum spiked with a known amount of folic acid was tested with the same method, and the sensitivity and linear response range were comparable to those in aqueous solution.
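A linear calibration of this kind reduces to a straight-line fit, with the detection limit commonly estimated as three times the blank standard deviation divided by the slope. The numbers below are hypothetical, chosen only to mirror the reported 9–180 nM range:

```python
import numpy as np

# Hypothetical calibration data: folic acid concentration (nM) vs.
# baseline-corrected SERS intensity of the 1595 cm^-1 band (a.u.).
conc = np.array([9.0, 18.0, 36.0, 90.0, 180.0])
intensity = np.array([120.0, 235.0, 480.0, 1190.0, 2410.0])
blank_sd = 13.0                      # std. dev. of repeated blank readings

slope, intercept = np.polyfit(conc, intensity, 1)   # linear fit I = m*c + b
lod = 3.0 * blank_sd / slope         # common 3-sigma estimate of the LOD

pred = slope * conc + intercept
r2 = 1 - np.sum((intensity - pred) ** 2) / np.sum((intensity - intensity.mean()) ** 2)
print(f"slope = {slope:.2f} a.u./nM, LOD = {lod:.1f} nM, R^2 = {r2:.4f}")
```

With real spectra, the band intensity would first be baseline-corrected (or the band integrated) before the fit; the 3σ/slope convention is one common LOD definition, not necessarily the one used in Ref. [27].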

Biosensing and Imaging with the Use of SERS Labels
In most cases, detection and interpretation of the SERS spectra of biomolecules within the complex composition of a biological specimen are difficult. As a result,

40

U.S. Dinish and M. Olivo

identification of a particular biomolecular system of interest is not trivial, especially for trace detection. In this context, label-free SERS detection often fails, and this led to the development of labeled SERS for medical and bioanalytical applications, in a way similar to the concept of labeling with dyes or quantum dots in fluorescence studies. Biosensing with SERS labels requires three major components: (i) Raman reporter molecules (or simply reporters, RM) with an inherently high Raman scattering cross section that provide characteristic Raman peaks as the readout signal; (ii) a noble metal nanostructure (colloidal or solid planar SERS substrate) with strong electromagnetic fields to enhance the Raman signal from the reporter molecules; and (iii) a ligand anchored onto the nanoconstruct for recognizing the corresponding target molecule. This section covers recent advancements in labeled SERS detection for ultrasensitive biosensing and imaging applications.

Nano-stress Sensor for Protein Detection
As a means for multiplex protein detection, Kho et al. recently demonstrated the nano-stress SERS sensor concept. In this study, a novel readout method was employed, based on the observation that the Raman frequencies of an antibody-conjugated SERS-active reporter molecule are altered when it binds to the targeted antigen. They attributed the observed frequency shifts to structural deformations in the antibody-conjugated SERS reporter molecule as a result of the binding event [28]. Here, a single antibody-conjugated SERS reporter molecule behaves as a nano-mechanical stress sensor. The SERS stress sensor was constructed by anchoring a "stress-sensitive" Raman reporter (6-mercaptopurine, 6MP, and 4-aminothiophenol, 4ATP) on a stable SERS substrate (Fig. 6a–c). Antibodies are anchored onto these reporter molecules, and when target antigen binds to the antibody, it induces a stress on the reporter molecule that is reflected in a shift of its Raman peak. The observed frequency shift of the reporter molecule correlates quantitatively with the concentration of the targeted antigen: the peaks at 1080 and 1580 cm−1 of 4-ATP shift as a function of H1 protein concentration (Fig. 6d–i). The detection limit achieved in this case is 2.2 nM, which is comparable to conventional ELISA. Additionally, as a proof of concept, they also demonstrated the use of two "stress-sensitive" reporters for multiplexed detection of proteins (p-53 and H1 protein) at the sub-diffraction limit.

Detection of DNA Sequences and Mutation
Common SERS detection of DNA-binding events is realized by functionalizing a Raman reporter (a fluorophore such as Cy3 or one of its derivatives) and a single-stranded piece of DNA on a Au or Ag NP. Upon hybridization with a complementary strand of DNA, generally attached to another Au or Ag surface, a SERS or surface-enhanced resonant Raman scattering (SERRS) signal is generated from the reporter molecule.
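The nano-stress readout described above depends on resolving peak shifts of order 1 cm−1, which can be smaller than a spectrometer's channel spacing. One common approach is quadratic interpolation around the band maximum; the band shape and shift values below are synthetic, not data from Ref. [28]:

```python
import numpy as np

def peak_position(shift, intensity):
    """Estimate a band's centre with sub-channel precision by fitting a
    parabola through the maximum point and its two neighbours."""
    i = int(np.argmax(intensity))
    x, y = shift[i - 1:i + 2], intensity[i - 1:i + 2]
    a, b, _ = np.polyfit(x, y, 2)      # y = a*x^2 + b*x + c
    return -b / (2 * a)                # vertex of the parabola

# Hypothetical 4-ATP band near 1078 cm^-1 shifting upon antigen binding:
shift = np.arange(1040.0, 1120.0, 0.5)     # 0.5 cm^-1 channel spacing
def band(center):
    return 1.0 / (1.0 + ((shift - center) / 4.0) ** 2)   # Lorentzian shape

before = peak_position(shift, band(1077.0))
after = peak_position(shift, band(1078.2))
print(f"peak shift = {after - before:.2f} cm^-1")
```

The synthetic shift here is close to the ~1.2 cm−1 reported for H1 binding, but the point of the sketch is only the sub-channel localization trick, not a reconstruction of the experiment.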
This hybridization-based approach was employed by various research groups for DNA detection. Vo-Dinh and coworkers used an immobilized DNA capture strand on a Ag surface


for the detection of the breast cancer gene (BRCA1) [29]. In a similar way, Moskovits and coworkers used a sandwich assay between single-stranded DNA attached to a planar Ag layer and a Ag NP labeled with the complementary DNA strand to detect hybridization [30]. In another study, Chad Mirkin and coworkers used Ag nanorods with etched gaps for the detection of DNA binding [31], while Moskovits and team detected protein using a Au NP functionalized with double-stranded DNA. They used the bound fluorescence quencher molecule QSY21 as Raman reporter, because it has a strong resonance at the 633-nm laser excitation and low fluorescence emission, minimizing any background [32]. Detection and discrimination of mutations in DNA is highly important in diagnostic and forensic applications. In this context, Bartlett and coworkers reported a novel method that employs SERRS to follow the denaturation of double-stranded DNA attached to a structured gold surface [33]. This denaturation is driven


Fig. 6 Fabrication scheme of SERS nano-stress sensor. (a) Microscopic image of the SERS substrate, showing the dark SERS-active region and the smooth reflective Au surface, (b) field-emission scanning electron microscope image of the SERS substrate, and (c) functionalization of SERS substrate with anti-H1/4-ATP. Response of the anti-H1/4-ATP to H1 concentrations. (d) Average SERS spectra at different H1 concentrations, (e–h) shifts in SERS peak position at around 1080 and 1580 cm−1 in response to the H1 binding, and (i) specificity test showing selectivity of the two peaks (Reprinted with permission from Kho K. W. et al., ACS Nano 6, 4892–4902 (2012). Copyright (2012) American Chemical Society. Adapted from Ref. [28])

either electrochemically or thermally on SERS-active sphere segment void (SSV) gold substrates. SSV substrates were prepared by electrodeposition around templates made from closely packed monolayers. The optical properties of these substrates can be tuned through the choice of sphere diameter and film thickness. A Au surface functionalized with disulfide-modified oligonucleotides was used to capture the corresponding target strand. The surface was then incubated with mercaptohexanol to block unexposed surface regions and prevent nonspecific binding. Labeled target sequences hybridized to the capture strand, and with the label molecules located near the Au surface, SERRS resulted. Subsequently, DNA melting was induced either by temperature or by an applied potential. Using this method, they could distinguish between the wild type, a single point mutation, and a triple deletion in the cystic fibrosis transmembrane conductance regulator (CFTR) gene at the 0.02 attomole level, and the method can be used to differentiate the unpurified PCR products of the wild type and the deltaF 508 mutation.
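Discriminating a mutation by melting, as in the SSV experiment above, comes down to locating the midpoint of a sigmoidal signal-loss curve for each duplex. A numpy-only sketch with idealized curves (the Tm values are invented, not those of the CFTR sequences):

```python
import numpy as np

def melting_temperature(T, signal):
    """Estimate Tm as the temperature of steepest signal loss
    (minimum of the numerical derivative of the melting curve)."""
    dS = np.gradient(signal, T)
    return T[int(np.argmin(dS))]

T = np.arange(20.0, 90.0, 0.5)             # temperature axis, deg C
def melt(tm, width=3.0):
    """Idealised SERRS melting curve: signal lost as the duplex denatures."""
    return 1.0 / (1.0 + np.exp((T - tm) / width))

tm_wt = melting_temperature(T, melt(62.0))   # wild type (hypothetical Tm)
tm_mut = melting_temperature(T, melt(55.0))  # mutation destabilises the duplex
print(f"delta Tm = {tm_wt - tm_mut:.1f} deg C")
```

The same logic applies to potential-driven melting, with an applied-potential axis in place of temperature; real curves would also need baseline correction and, typically, a sigmoid fit rather than a raw derivative.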


Chad Mirkin and coworkers used DNA sequences immobilized on glass beads and hybridized them with Au NPs labeled with a complementary single-stranded DNA and a Raman reporter molecule. They used Raman reporters such as TMR, Cy3, and Cy3.5 to generate multiplexing peaks [34]. In this study, they labeled DNA sequences for Ebola virus, hepatitis A, hepatitis B, HIV, Variola virus, Bacillus anthracis, Francisella tularensis, and a hog cholera segment with the Raman reporters. The SERS signal from the bead-NP complexes was then further enhanced with Ag plating. Duncan Graham and coworkers reported a similar study with the use of five different DNA sequences, each labeled with a different Raman reporter and Ag NPs. They constructed the probes by labeling human papillomavirus with R6G, the VT2 gene of E. coli 157 with ROX, and a universal primer with FAM, CY5.5, and BODIPY TR-X. They could achieve a limit of detection of 10−11 to 10−12 M [35]. The linear response of the SERS intensity from these probes at concentrations of interest indicates a highly promising future for DNA detection with multiplexing-capable SERS reporters.

SERS Immunoassay
SERS immunoassay was used successfully for the quantitative measurement of mucin protein (MUC4), which is a promising biomarker for pancreatic cancer (PC). Porter and coworkers demonstrated that MUC4 levels in the serum of PC patients can be studied using a SERS assay [36]. In this study, a Au-coated glass slide was functionalized with dithiobis(succinimidyl propionate) (DSP) and then anchored with the capture antibody. The detection probe was prepared by incubating Au NPs with a mixture of DSP and 4-nitrothiophenol (4-NTP), followed by conjugation with the detection antibody; the detection scheme is shown in Fig. 7a–c. The SERS spectra from 4-NTP were used to detect MUC4 in the CD18/HPAF cell line, which is a positive control. Initially, the sensor was calibrated by detecting the concentration-dependent SERS response of MUC4 in PBS by monitoring the SERS intensity of 4-NTP at 1336 cm−1, as shown in Fig. 7d–f. This data was later used to accurately determine the presence of the biomarker in five sets of samples, including healthy individuals, patients with acute pancreatitis, and patients with PC. In order to overcome inherent limitations of conventional SERS immunoassays, such as long incubation times and the multiple washing steps needed to separate free proteins from those bound to the capture antibody, Hwang et al. proposed SERS optoelectrofluidics based on the electrokinetic motion of particles to detect alpha-fetoprotein (AFP), a prominent biomarker for hepatocellular carcinoma (HCC) [37]. In this work, they used polystyrene (PS) microspheres for the sandwich immunoassay instead of a flat chip as the capture substrate for the antibodies. In such a nanoconstruct, a faster immune reaction can be achieved because all the reactions occur in solution, which avoids diffusion-limited kinetics.
Malachite green isothiocyanate was used as the Raman reporter, anchored onto 40-nm Ag NPs, which were then treated with dihydrolipoic acid for antibody conjugation. The AFP solution, PS microspheres with monoclonal AFP antibody, and the Ag NP suspension


were placed together in the sample chamber of the optoelectrofluidic device, with a bare indium tin oxide (ITO) electrode and a photoconductive electrode. SERS signals were measured from the locally concentrated immune complexes, and the assay could work at an extremely low sample volume of ~500 nL (AFP solution). In this method, a detection limit of 0.1 ng/mL was achieved in a short assay time of 5 min after sample injection.

Fig. 7 SERS-based immunoassay chip design: (a) scheme of the capture substrate used to specifically extract and concentrate antigens from solution, (b) surface-functionalized Au NPs that bind to captured antigens selectively and generate intense SERS signals, and (c) sandwich immunoassay with SERS readout. (d) SERS spectra acquired at various CD18/HPAF cell lysate concentrations, (e) dose–response curve for MUC4 in PBS buffer prepared by serially diluting CD18/HPAF cell lysates in PBS buffer, and (f) response in the low-concentration range on a linear scale. Data points are the average of three separate assays, with the error bars denoting their standard deviations (Reprinted with permission from Wang G. F. et al., Anal. Chem. 83, 2554–2561 (2011). Copyright (2011) American Chemical Society. Adapted from Ref. [36])
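Dose-response data like that in Fig. 7e, f are often summarized by fitting a saturation-binding model. A minimal numpy-only version linearizes a Langmuir-type isotherm, I = Imax·c/(Kd + c), into a double-reciprocal straight line; the concentrations and intensities below are invented for illustration, not the MUC4 data:

```python
import numpy as np

# Hypothetical dose-response: analyte concentration (ug/mL) vs.
# SERS intensity of the reporter band (a.u.), noise-free for clarity.
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
imax_true, kd_true = 5000.0, 4.0
intensity = imax_true * conc / (kd_true + conc)

# Langmuir model I = Imax*c/(Kd + c), linearised as
# 1/I = 1/Imax + (Kd/Imax)*(1/c), then fitted with a straight line.
slope, intercept = np.polyfit(1.0 / conc, 1.0 / intensity, 1)
imax_fit = 1.0 / intercept
kd_fit = slope * imax_fit
print(f"Imax = {imax_fit:.0f} a.u., Kd = {kd_fit:.2f} ug/mL")
```

With noisy data, the double-reciprocal transform amplifies low-concentration error, so a direct nonlinear fit (or a four-parameter logistic, as common in immunoassay practice) is usually preferred; the linearized form is shown only because it needs no optimizer.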


SERS Nanotags
These nanoconstructs are developed by immobilizing a highly active Raman molecule (Raman reporter) onto metal colloids, followed by encapsulation and bioconjugation for specific targeted sensing applications [38, 39]. Such a nanoparticle–Raman reporter assembly is referred to as a SERS nanotag or SERS dot, in analogy with quantum dots, and is shown schematically in Fig. 8. The encapsulation is generally carried out by forming a thin layer of silica [39, 40], polyethylene glycol (PEG) [38, 41], or bovine serum albumin (BSA) [42]. This encapsulation provides physical robustness, stable signals, immunity to the biochemical environment, and a means for bioconjugation. The sensitivity of a SERS nanotag primarily depends on the signal intensity generated by the Raman reporter molecule. Highly active Raman molecules such as crystal violet (CV), malachite green isothiocyanate (MGITC), DTTC, Nile blue, 2-naphthalenethiol, DRITC, and DXRITC and various fluorescent dyes such as Cy3, Cy5, rhodamine, etc. [38, 41, 43–47] have been used as reporter molecules. However, only a handful of such sensitive Raman reporters are reported in the literature. In this context, Young Tae and coworkers developed NIR-active Raman reporters from a cyanine chemical library to achieve highly sensitive detection of cancer biomarkers in vivo [42]. In another interesting work, Malini Olivo and team recently developed osmium metal carbonyl as a Raman reporter for SERS nanotags. This has the specific advantage that the CO vibration is monitored in the 1800–2100 cm−1 region of the Raman spectrum, far away from the "interference region" of biomolecules [48]. SERS nanotags constructed with Au NPs have the advantage of better biocompatibility and stability, which renders them efficient molecular bioimaging/sensing probes.
Such a nanoconstruct that uses SERS to generate detectable Raman signals has recently emerged as a successful alternative to fluorescence labeling, which has inherent limitations such as (i) photobleaching, (ii) spectral overlap that prevents multiplex sensing, and (iii) cytotoxicity. SERS nanotags have been used for various molecular imaging applications in biology, both in vitro and in vivo in small animal models. The most prominent applications are summarized below.

Imaging and Sensing of EGFR and HER2 Cancer Biomarkers
Overexpression of the epidermal growth factor receptor (EGFR) and human epidermal growth factor receptor 2 (HER2) was found to be associated with a number


Fig. 8 Schematic of the construction of SERS nanotags with various targeting ligands



of solid tumors such as head and neck cancer, breast cancer, lung cancer, bladder cancer, and colon cancer. Moreover, the EGFR expression level is associated with aggressive tumors and resistance to treatment with cytotoxic agents. Hence, these proteins are often used as biomarkers for various cancers. In one of the pioneering studies of SERS nanotags for biomarker detection, Shuming Nie and coworkers demonstrated the detection of tumors that overexpress EGFR in a mouse model [38]. Their PEGylated SERS nanotags with NIR Raman reporters were found to be brighter than quantum dots. They conjugated SERS nanotags with single-chain variable fragments (ScFv) that can target EGFR on human cancer cells and in xenograft tumor models, as shown in Fig. 9 [38]. They carried out spectral differentiation without Raman mapping. Recently, Young Tae and coworkers developed SERS nanotags by chemisorption of the Raman reporter molecule B2LA onto Au NPs to target EGFR in the squamous cell carcinoma (SCC-15) cell line [41]. In another work, EGF-peptide-conjugated SERS nanotags were successfully demonstrated for the measurement of circulating tumor cells in the presence of white blood cells in SCC of the head and neck [49]. In recent work, affibody-functionalized fluorescent SERS nanotags were used as effective multimodal contrast


Fig. 9 In vivo cancer targeting by using ScFv antibody-conjugated SERS nanotags that recognize the tumor EGFR biomarker. SERS spectra obtained from the tumor and the liver locations by using targeted (left panel) and nontargeted (right panel) nanoparticles. Photographs showing the focusing of laser beam on tumor site and on the anatomical location of the liver are also provided (Reprinted with permission from Qian X. et al., Nat. Biotechnol. 26, 83–90, (2008). Copyright (2008). Nature Publishing Group. Adapted from Ref. [38])


agents for molecular imaging of the EGFR biomarker [50]. In their study, the signal from EGFR-positive tumors was found to be much higher than that from EGFR-negative tumors, which was later validated by competitive inhibition and ex vivo flow cytometry analysis. Dinish et al. recently demonstrated sensitive EGFR biomarker sensing in a cell lysate immobilized inside the hollow core of a photonic crystal fiber (PCF) and detected using an anti-EGFR antibody-conjugated SERS nanotag. This PCF-SERS platform offers much higher sensitivity than conventional ELISA detection of proteins due to the increased interaction length between the laser and the immobilized protein. Moreover, this platform allows the detection of biomarkers in an extremely low sample volume, of the order of nL [51]. There have been some interesting works on the detection of HER2 biomarkers using SERS nanotags. Lee et al. constructed antibody-conjugated hollow gold nanospheres (HGNs) with CV as the Raman reporter and used them for the imaging of HER2 in cell culture [44]. Their SERS mapping study showed that, in comparison to Ag NPs, these HGNs exhibited significantly better and more homogeneous scattering. These nanoprobes were also used as multimodal agents for both dark-field imaging and SERS detection. SERS nanotags were also constructed with gold nanorods (Au NRs) for the detection of cancer biomarkers. Park et al. demonstrated antibody-conjugated Au NRs with the 4-mercaptopyridine reporter molecule for the imaging of the HER2 biomarker in cancer cells [52]. They conjugated anti-rabbit IgG onto the surface of the gold nanorods and treated HER2-overexpressing cells to demonstrate the sensitive detection. Maiti et al. also demonstrated sensitive detection of the HER2 biomarker both in vitro and in vivo. In the first case, Raman reporters were chemisorbed onto Au NPs, and SERS screening was performed in cancer cell lines using anti-HER2-antibody-conjugated nanotags.
Very high SERS signals were obtained from HER2-positive cells but not from the control cells [41]. In addition, in vivo SERS measurement was successfully carried out in a mouse model to detect subcutaneously injected SERS-nanotag-labeled cancer cells, as shown in Fig. 10. In the in vivo study, SERS nanotags were constructed with specially synthesized sensitive NIR Raman reporters (absorption around 800 nm), and detection of HER2 was carried out using two HER2-recognition motifs: a full anti-HER2 monoclonal antibody (170 kDa) and an scFv anti-HER2 antibody (26 kDa). Initially, in vitro specificity was shown using HER2-positive cancer cells. They also confirmed the target specificity of these nanotags in SKBR-3 cells (HER2 positive) by competition assays between antibody-conjugated nanotags and free HER2-recognition motifs [42]. This study also proved that signal intensities obtained with scFv-conjugated nanotags were at least 1.5 times stronger than those with the full HER2 antibody. To validate the detection by scFv-conjugated SERS nanotags in vivo, tail-vein injection of nanotags into nude mice bearing xenografts generated from SKBR-3 cells was carried out. Five hours after injection, the SERS spectra from the tumor site perfectly resembled the spectra of the pure nanotag, whereas no signal was detected from other anatomical locations such as the upper dorsal region. In the negative control, no significant SERS signal was detected from a mouse with a xenograft prepared with HER2-negative cancer cells (MDA-MB231) and injected with the bioconjugated nanotags.



Fig. 10 Spectral comparison of functionalized SERS nanotag used in in vitro and in vivo studies. Pure tag: SERS spectra obtained from HER2 antibody-conjugated nanotag in PBS suspension; subcutaneous injection: SERS spectra from SKBR-3 cell suspension with recognized antibody-attached nanotag measured through the skin; cell (SKBR-3 alone): from the skin (Reprinted with permission from Maiti K. K. et al., Biosens. Bioelectron. 26, 398–403 (2010). Copyright (2010) Elsevier. Adapted from Ref. [41])

Imaging of Prostate-Specific Antigen
Schlucker et al. used SERS microscopy for the selective localization and detection of prostate-specific antigen (PSA), a prominent biomarker for prostate cancer. They carried out the detection of PSA in tissue specimens using a SERS nanotag constructed by immobilizing the Raman reporter 5,5′-dithiobis(succinimidyl-2-nitrobenzoate) onto Au NPs and bioconjugating it with anti-PSA antibody [53]. This method combines the high specificity of antigen–antibody interactions with the high sensitivity of SERS as a novel methodology for immunohistochemistry. In 2009, Jehn et al. used hydrophilic SERS labels for the construction of SERS nanotags, and controlled conjugation of anti-PSA antibodies to the nanotag enabled immuno-SERS microscopy for imaging of PSA in prostate cancer tissues [54].

Imaging of pH
Measurement of intracellular pH with subcellular resolution is highly challenging and critical for understanding various physiological processes. Kneipp et al. demonstrated spatially resolved probing and imaging of pH in live cells using biocompatible SERS nanotags with 4-mercaptobenzoic acid as the Raman reporter, anchored onto Au nanoaggregates. In this study, the relative intensity of pairs of spectrally narrow Raman lines in the same spectrum was measured, which allowed quantitative measurement of pH without any correction regarding


cellular background absorption/emission signals [55]. This pH-sensitive SERS nanotag was used to measure and image pH values in subcellular structures in the range of 6.8 to 5.4. In order to achieve wide-range pH sensing, they used surface-enhanced hyper-Raman scattering (SEHRS) with two-photon excitation, which exhibits a spectral signature suitable for measurement and differentiation of pH values between 8 and 2.

Detection of Circulating Tumor Cells
Sha et al. used a commercial SERS nanotag (Nanoplex biotags, a trademark of Oxonica Inc.) for the direct detection of circulating tumor cells (CTCs) in whole blood. As a sensitive measurement strategy, they used magnetic beads for capturing CTCs and nanotags for rapid and sensitive detection directly in human whole blood [56]. Magnetic beads were conjugated to an epithelial cell-specific antibody (epithelial cell adhesion molecule, anti-EpCAM), and the SERS tags were conjugated to an anti-HER2 antibody that binds to tumor cells. Since breast cancer cells are of epithelial origin, the magnetic bead-EpCAM antibody complex binds to these tumor cells but not to regular circulating blood cells, while, HER2 being a cell membrane receptor, the anti-HER2-SERS nanotag specifically recognizes the tumor cells. By adding this combination of magnetic bead-EpCAM and SERS tag-HER2 conjugates to a patient's blood sample, circulating breast cancer cells (CTCs) were detected rapidly with good sensitivity in the presence of whole blood. In another study, Shuming Nie and team reported the direct measurement of targeted CTCs in the presence of white blood cells without any subsequent separation procedure. SERS nanotags with an EGFR peptide as the targeting ligand successfully identified CTCs in the peripheral blood of 19 patients with squamous cell carcinoma of the head and neck, in a range of 1–720 CTCs per milliliter of whole blood [49].

Detection of Bronchoalveolar Stem Cells
Woo et al.
developed sensitive dual-mode (SERS and fluorescence) antibody-conjugated spectroscopic dots (F-SERS dots) for the detection of three cellular proteins, CD34, Sca-1, and SP-C [57]. These proteins are simultaneously expressed in bronchoalveolar stem cells (BASCs) in the murine lung. The F-SERS dots for this study were constructed using Ag NP-embedded silica nanospheres with Raman reporters and fluorescent dyes. They used mercaptotoluene, benzenethiol, and naphthalenethiol as Raman reporters, while dye molecules such as fluorescein isothiocyanate and Alexa Fluor 647 served as the fluorescent molecules in the F-SERS dot nanoconstruct. They could estimate the relative expression ratio of each protein in BASCs, since external standards were used to evaluate SERS intensity in tissue. This was the first quantitative comparison of multiple protein expression in tissue using SERS nanotags.

Detection of Inflammation
Early detection of inflammation is important for the diagnosis and treatment of autoimmune, infectious, and metastatic diseases. In this direction,


a SERS nanotag was developed to detect intercellular adhesion molecule 1 (ICAM-1) in vivo in an animal model [58]. Au NPs modified with Raman reporter molecules were encapsulated in a silica shell and then conjugated to anti-ICAM-1 antibody to target ICAM-1 expressed in mice. ICAM-1 expression was induced by local lipopolysaccharide (LPS) injection in mouse ear pinnae. SERS spectra were measured 30 min after the injection of the nanotags. As a comparative study, fluorescent detection of ICAM-1 using an anti-ICAM-1-fluorescein isothiocyanate (FITC) conjugate was also performed under the same conditions. Experimental results indicated that SERS labels produce higher detection sensitivity than the conventional fluorescence approach.

Multiplex Imaging and Sensing
As highlighted previously, the most promising advantage of SERS nanotags over fluorescent NPs (such as quantum dots) lies in their capability for multiplex detection. Multiplex detection of biologically relevant species is highly useful and can be easily achieved with SERS nanotags having various reporter molecules with distinct spectral profiles and conjugated with different target moieties. In one of the earlier studies, the F-SERS dots developed by Woo et al. were successfully used for in vitro multiplex SERS applications. They demonstrated highly selective and multifunctional characteristics for multiplex targeting, tracking, and imaging in cell lines. As described earlier, the expression levels of three proteins (CD34, Sca-1, and SP-C), simultaneously expressed in bronchoalveolar stem cells in the murine lung, were quantitatively compared in tissue using the SERS-active nanoprobes [57]. In a later study, Matschulat et al. demonstrated full exploitation of the multiplexing capability of SERS nanotags by applying cluster methods and PCA for discrimination beyond the visual inspection of individual spectra.
SERS spectra from five different nanotags were shown to be separable by hierarchical clustering and by PCA [59]. In a duplex detection study in live cells, they imaged the positions of different types of SERS probes along with the spectral information from cellular constituents by combining various spectral processing techniques. Recently, Lei Wu et al. demonstrated the in vitro simultaneous detection of the tumor suppressor p53 and the cyclin-dependent kinase inhibitor p21 [60]. They used Au–Ag core–shell nanorods to construct SERS nanotags tagged with two different Raman reporters, 4MBA and DTNB, followed by bioconjugation with different antibodies to specifically target each analyte. These biomarkers were detected in blood serum, and the results showed high sensitivity (1 pg/mL) as well as high reproducibility. Vo-Dinh and coworkers demonstrated the multiplex detection of the erbB-2 and ki-67 breast cancer biomarkers by using SERS nanotags constructed with Cy3 and TAMRA as reporters and by mixing the nanotags in the presence of single or multiple DNA targets [61]. In a recent study, Dinish et al. demonstrated the sensitive multiplex detection of cancer biomarkers using a SERS-active hollow core photonic crystal fiber probe [62]. As a proof of concept, initially, sensing of the epidermal growth factor


receptor (EGFR) biomarker in oral squamous carcinoma cell lysate using three different SERS nanotags was demonstrated. Subsequently, as a clinically relevant example, simultaneous detection of the hepatocellular carcinoma (HCC) biomarkers alpha-fetoprotein (AFP) and alpha-1-antitrypsin (A1AT) secreted in the supernatant from the Hep3b cancer cell line was also successfully demonstrated. In the future, this study may lead to a sensitive biosensing platform for the low-concentration detection of multiple biomarkers, enabling early diagnosis of multiple diseases. In one of the pioneering demonstrations of SERS nanotags for in vivo imaging, Sanjiv Gambhir and coworkers used silica-coated nanotags [39]. Subsequently, as a proof of principle, they demonstrated in vivo multiplex imaging using two different SERS nanotags by SERS mapping in mouse liver. In a follow-up study, the same research group showed the superb multiplexing capability of SERS nanotags by detecting ten different nanotags injected subcutaneously into a living mouse [63]. They also demonstrated the passively targeted detection of five nanotags in the liver, which were injected intravenously. All five SERS nanotags were successfully identified and spectrally separated. They observed a linear correlation of the Raman signal with the concentration of SERS nanotags for both subcutaneous and intravenous injection. Young Tae and coworkers demonstrated the first proof-of-concept targeted multiplex detection of cancer in a living mouse. In this study, SERS nanotags were prepared with three NIR reporter molecules, Cy7LA, Cy7.5LA, and CyNAMLA 381, followed by BSA encapsulation and antibody conjugation [64]. In vivo multiplex detection was carried out in a tumor xenograft overexpressing the EGFR receptor by injecting an equal amount of the three bioconjugated nanotags through the tail vein of a living mouse. As shown in Fig. 11, two nanotags conjugated with anti-EGFR antibodies were detected simultaneously in the tumor, while all three nanotags (two with anti-EGFR and one with

Fig. 11 In vivo multiplex SERS detection (a) in xenograft tumor (peaks from two EGFR-positive nanotags) and (b) from the liver (peaks obtained from two EGFR nanotags and one anti-HER2 nanotag) demonstrating the targeted and nontargeted detection in the tumor and liver, respectively. (c) SERS spectra from dorsal region (Reprinted with permission from Maiti K. K. et al., Nano Today 7, 85–93, (2012). Copyright (2012) Elsevier. Adapted from Ref. [64])


U.S. Dinish and M. Olivo

anti-HER2, which served as a negative control) accumulated in the liver via passive localization. These nanotags showed excellent stability over a period of 1 month on the shelf with negligible SERS intensity fluctuation, a significant achievement, especially when long-term monitoring of a SERS signal is required for both in vitro and in vivo applications. Sangeeta Bhatia and coworkers constructed SERS nanotags from Au nanorods and successfully employed them for in vivo cancer detection and photothermal therapy [43]. They also demonstrated multiplex detection using subcutaneously injected nonbioconjugated nanotags. Recently, the biocompatibility of SERS nanotags in zebrafish embryos was studied, and two multiplexing-capable SERS nanotags were subsequently injected directly into an embryo and their distribution monitored [65]. In an interesting work, Sanjiv Gambhir and team demonstrated the fabrication and characterization of a fiber-optic-based Raman endoscope system capable of detecting and quantifying single or multiplexed SERS nanotags. The usability of the developed Raman endoscope system for detecting and multiplexing an array of SERS nanotags was demonstrated within a phantom model and on excised tissue samples [66].
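Recovering the contribution of each nanotag from a composite spectrum, as in the multiplexing studies above, is at heart a linear unmixing problem: the Raman signal scales linearly with nanotag concentration, so a measured spectrum can be fitted as a weighted sum of reference spectra. The sketch below illustrates this with synthetic, hypothetical reference spectra (not data from the cited studies):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(400)  # spectral channels standing in for Raman shift bins

def tag(center, width=6.0):
    """Hypothetical single-peak reference spectrum for one nanotag."""
    return np.exp(-0.5 * ((x - center) / width) ** 2)

# Three nanotags with distinct, non-overlapping marker peaks.
refs = np.stack([tag(80), tag(200), tag(320)])  # shape (3, channels)

# A measured spectrum modelled as a noisy linear mixture of the references,
# with assumed relative concentrations 1.0 : 0.5 : 0.25.
true_c = np.array([1.0, 0.5, 0.25])
measured = true_c @ refs + 0.01 * rng.standard_normal(x.size)

# Ordinary least squares recovers the per-tag contributions; with
# well-separated peaks the design matrix is well conditioned.
est_c, *_ = np.linalg.lstsq(refs.T, measured, rcond=None)
print(np.round(est_c, 2))
```

Strongly overlapping peaks make the design matrix ill conditioned, which is one practical reason reporters with non-overlapping bands are prized for multiplex detection.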

Clinical Applications of SERS

Cancer Diagnosis
In one of the earlier studies of SERS as a clinical sensing tool, Shangyuan Feng et al. developed it for blood plasma biochemical analysis and nasopharyngeal cancer detection [67]. Ag NPs were used as the SERS-active nanostructures; they can be mixed directly with blood plasma to enhance the Raman scattering signals from various biomolecular constituents such as proteins, lipids, and nucleic acids. They measured SERS signals of blood plasma samples from 43 pathologically confirmed nasopharyngeal carcinoma patients and 33 healthy volunteers. The Raman bands obtained from the plasma samples revealed interesting cancer-specific biomolecular differences, including an increase in the relative amounts of nucleic acid, collagen, phospholipids, and phenylalanine and a decrease in the percentage of amino acids and saccharide contents in the blood plasma of nasopharyngeal cancer patients as compared to that of healthy subjects. They employed PCA and linear discriminant analysis (LDA) for the spectral analysis, which differentiated the nasopharyngeal cancer SERS spectra from normal SERS spectra with high sensitivity (90.7%) and specificity (100%). In another study, Shangyuan Feng et al. obtained SERS spectra of blood plasma samples from 32 gastric cancer patients and 33 healthy volunteers and compared the spectral data of these samples under four different excitation polarizations (nonpolarized, linearly polarized, left-handed circularly polarized, and right-handed circularly polarized), as shown in Fig. 12 [68]. They again used a combination of PCA and LDA for the data analysis. They could

SERS for Sensitive Biosensing and Imaging

[Fig. 12 panels (a)–(d): mean SERS spectra plotted as Raman intensity (a.u.) versus Raman shift (600–1600 cm−1); see the caption below]
Fig. 12 Comparison of the mean SERS spectra under different polarized laser excitation for normal blood plasma (black line, n = 33) against gastric cancer (red line, n = 32): (a) excited by nonpolarized laser, (b) excited by linearly polarized laser, (c) excited by right-handed circularly polarized laser, and (d) excited by left-handed circularly polarized laser; the shaded areas represent the standard deviations of the means. Also shown at the bottom of each panel are the difference spectra (healthy-subject mean spectrum minus that of the cancer group). The most significant SERS peaks are labeled by red arrows in (d) (Reprinted with permission from Feng S. et al., Biosens. Bioelectron. 26, 3167–3174 (2011). Copyright (2011) Elsevier. Adapted from Ref. [68])

achieve a sensitivity and specificity of 71.9% and 72.7% for nonpolarized laser excitation, 75% and 87.9% for linearly polarized laser excitation, 81.3% and 78.8% for right-handed circularly polarized laser excitation, and 100% and 97% for left-handed circularly polarized laser excitation. This improvement with left-handed circularly polarized excitation may be related to its better capability in probing the chirality of biomolecules in gastric cancer blood plasma [68]. The results from this exploratory study demonstrated that plasma SERS spectroscopy with left-handed circularly polarized laser excitation holds great promise for development into a clinically useful tool for noninvasive gastric cancer detection. Omer Aydin et al. measured SERS spectra from liquefied brain tissue samples prepared within 2–3 h of surgery. SERS spectra were measured by mixing 50-nm Ag NPs with the sample and drying the mixture on a CaF2 substrate. They could observe significant variations in the acquired spectra corresponding to different tissue types; in particular, the ratio of the Raman peak intensity at 723 cm−1 to that at 655 cm−1 increased from healthy/peripheral brain tissue to tumor [69]. This study lacked a statistically significant number of patients, however, and more work needs to be done to confirm these findings.
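The PCA-plus-linear-discriminant pipeline used in the plasma studies above can be sketched as follows. The spectra here are synthetic stand-ins (a shared background plus a small disease-specific band), and the mean-difference projection is a minimal stand-in for full LDA, not the exact procedure of Refs. [67, 68]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "plasma spectra": 2 classes x 40 samples x 600 channels.
# The disease class carries a small extra band near 1450 cm^-1.
x = np.linspace(600, 1600, 600)
background = np.exp(-0.5 * ((x - 1000) / 60) ** 2)
marker = np.exp(-0.5 * ((x - 1450) / 15) ** 2)
normal = background + 0.05 * rng.standard_normal((40, 600))
cancer = background + 0.4 * marker + 0.05 * rng.standard_normal((40, 600))
X = np.vstack([normal, cancer])
y = np.array([0] * 40 + [1] * 40)

# PCA via SVD of the mean-centred data; keep 5 component scores.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
scores = (X - mu) @ Vt[:5].T

# Linear discriminant: project onto the difference of class means in PC
# space and threshold at the midpoint.
m0, m1 = scores[y == 0].mean(axis=0), scores[y == 1].mean(axis=0)
proj = scores @ (m1 - m0)
thresh = 0.5 * (proj[y == 0].mean() + proj[y == 1].mean())
pred = (proj > thresh).astype(int)

sensitivity = np.mean(pred[y == 1] == 1)  # true-positive rate
specificity = np.mean(pred[y == 0] == 0)  # true-negative rate
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```

On well-separated synthetic data this toy pipeline classifies essentially perfectly; the published sensitivities and specificities reflect the much smaller spectral differences in real plasma.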

Study on Acute Renal Failure and Urinary pH
Serum creatinine has been associated with acute renal failure (ARF). The SERS technique has been employed as a rapid and reliable method for detecting creatinine in ARF management. Recently, Hui Wang et al. demonstrated that urinary creatinine can be used as a biomarker for ARF diagnosis. They used a novel SERS substrate based on Ag-coated poly(chloro-p-xylylene) nanostructured films that is stable against the high ionic strengths in urine samples. When urine samples were tested, the two peaks appearing at 840 and 900 cm−1 of the urinary SERS spectra could be assigned to creatinine and were suitable for quantifying the urinary concentration of creatinine. Based on their study of 13 clinical urine samples (including two control samples), the SERS detection sensitivity for creatinine was found to be comparable to that of an enzyme-based method [70]. In a recent study on SERS-based pH sensing of clinical urine samples, Malini Olivo and coworkers proposed that urine pH can be used for the diagnosis of renal tubular acidosis (RTA). RTA is a syndrome of acid accumulation in the body caused by the failure of the kidneys to appropriately acidify the urine. Patients with type 1 RTA are unable to properly lower their urinary pH (>pH 5.5), whereas patients with type 4 RTA are able to lower urine pH below 5.5 by excreting adequate amounts of ammonium (NH4+). As a proof of concept to demonstrate the potential application of the SERS method, they carried out pH measurements at clinically relevant pH values by monitoring the Raman peak shift of the reporter molecule (a metal carbonyl-anchored aminothiophenol) anchored on planar SERS substrates [71].
When the first urine sample was introduced into the SERS sensor, the Raman shift of the CO peak of the reporter was found to be 1811 cm−1, while for the second sample it was 1813.2 cm−1; when correlated with the calibration data, these values closely matched the pH readings (5.2 and 5.8) obtained from a standard benchtop instrument.
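The readout step, converting the measured CO-stretch peak position back to pH through a calibration curve, reduces to a linear fit and its inverse. The calibration points below are illustrative assumptions (not data from Ref. [71]), chosen only to be consistent with the peak positions and pH values quoted above:

```python
import numpy as np

# Hypothetical calibration: CO-stretch peak position (cm^-1) at known pH.
cal_ph = np.array([4.5, 5.0, 5.5, 6.0, 6.5])
cal_shift = np.array([1808.5, 1810.3, 1812.1, 1813.9, 1815.7])

# Linear model shift = a * pH + b, fitted by least squares.
a, b = np.polyfit(cal_ph, cal_shift, 1)

def ph_from_shift(shift_cm1):
    """Invert the calibration line to estimate pH from a peak position."""
    return (shift_cm1 - b) / a

# The two urine samples quoted in the text map back onto the line:
print(round(ph_from_shift(1811.0), 1), round(ph_from_shift(1813.2), 1))
# prints: 5.2 5.8
```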

Perspectives and Conclusions
Over the last decade, SERS has evolved into a mature spectroscopic technique for biomedical applications. The uniqueness of SERS lies in its inherent high sensitivity and rich molecular information, which make it superior to other conventional spectroscopic techniques such as fluorescence and UV/Vis absorption spectroscopy. Moreover, the narrow-band vibrational "fingerprint" spectra from molecules help


the SERS technique serve as an efficient multiplex sensing tool in complex environments. In all SERS sensing applications, irrespective of whether label-free detection or detection with SERS labels is used, the central concept of biosensing has been to monitor some correlation between Raman peaks in the SERS spectra and the targeted molecular markers. In the first section, we gave an overview of the background of SERS and its underlying mechanisms for the giant signal enhancement. Subsequently, a detailed analysis of label-free SERS biosensing was provided, in which the target molecules adsorbed on a SERS-active metallic substrate are identified by their unique intrinsic Raman signatures. However, despite the promising results from initial studies, this approach faces some fundamental limitations. Disease-related biomarkers often belong to the same molecular species (e.g., proteins) and cannot easily be distinguished simply on the basis of their SERS spectra. Moreover, label-free SERS detection is usually limited to molecules that can attach closely to nanostructures and possess an inherently high Raman cross section. The majority of the biomolecules involved in medical studies (especially proteins and other macromolecules) do not have these features, and hence label-free detection of target molecules will often significantly limit the application of SERS as a biosensing tool. In this context, SERS biosensing with Raman labels (labeled SERS detection) is significant, and it has advanced considerably. A detailed analysis of all the recent significant achievements of the labeled SERS technique for biosensing was provided. In labeled detection, the spectrum obtained is used only for readout purposes, without the aim of gaining any chemical or structural information on the target molecules.
However, the SERS labels provide strong "fingerprint" spectra that are more sensitive and reliable than label-free detection, especially for biomolecules in a complex biological system. This is especially true in the case of detecting cancer biomarkers. Among labeled detection schemes, SERS-based immunoassays and molecular-specific detection using SERS nanotags are highly promising. The limitation of labeled SERS detection lies in its poor ability to achieve quantitative detection, which is primarily due to the unresolved conflict between reproducibility and sensitivity. The signal reproducibility and repeatability are highly dependent on the nanostructured SERS substrates or colloidal particles. Owing to the difficulty of producing highly uniform (at the nanoscale) plasmonic nanostructures, the SERS signal intensity fluctuates, which in turn prevents quantification of target analytes. To overcome this limitation, research should be directed toward developing highly reproducible plasmonic nanostructures with high enhancement capability. The development of SERS nanotags is exciting and highly promising. Until their development, SERS was often regarded only as an in vitro sensing modality. One added advantage of SERS nanotags is their amenability to operation in the near infrared (NIR), making them suitable for deep-tissue imaging, as NIR laser light can penetrate into tissue layers up to ~5 mm owing to low absorption and low scattering. SERS nanotags have been extensively used for in vivo detection of various cancer


biomarkers with high specificity. This capability could greatly facilitate investigations into metastasis and tumor localization, cell migration, and embryogenesis. NIR-active SERS nanotags can be constructed by tuning the laser wavelength either to the plasmon peak of the nanostructures or to the absorption peak of the reporter molecules [72]. When the laser wavelength matches the absorption of the reporter molecules, surface-enhanced resonance Raman scattering (SERRS) occurs, whose sensitivity is better than that of SERS; it is therefore always favorable to generate NIR SERS nanotags by the second approach. Currently, only a handful of strong NIR-active Raman reporters are available, and the recent work by Young-Tae Chang and coworkers to develop a library of NIR-active Raman reporters is therefore highly commendable and noteworthy [42]. Another advantage of SERS nanotags is the possibility of using prudently engineered Raman reporter labels that provide the many non-overlapping spectral peaks required for multiplex detection. As initially demonstrated by Sanjiv Gambhir and coworkers, simultaneous detection of up to ten differently SERS-labeled bio-analytes is possible for passively localized particles [63]. This is in clear contrast to quantum dots, which possess only limited distinguishable spectral characteristics within the near-infrared window. SERS nanotag-based in vivo multiplex sensing/imaging is a relatively unexplored domain, mainly because of the lack of NIR-active SERS nanotags that possess multiplexing peaks. Detailed study should be directed toward this aspect as well, to realize a library of NIR-active, multiplexing-capable reporters. Such nanoprobes can find potential application in the early diagnosis of disease, where simultaneous multiplex detection of biomarkers can be achieved at low concentration. Further, they may find promising applications not only in the early diagnosis of diseases but also in monitoring the effectiveness of cancer therapy.
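The few-millimetre NIR working depth mentioned above can be rationalized with a simple diffusion-theory estimate. The optical coefficients below are illustrative order-of-magnitude values for soft tissue in the NIR window, not measurements from the cited studies:

```python
import math

# Diffusion-theory effective attenuation: mu_eff = sqrt(3*mu_a*(mu_a + mu_s'))
mu_a = 0.02     # absorption coefficient, mm^-1 (assumed)
mu_s_red = 1.0  # reduced scattering coefficient, mm^-1 (assumed)
mu_eff = math.sqrt(3.0 * mu_a * (mu_a + mu_s_red))

# Fraction of diffuse fluence surviving to a given depth.
for depth_mm in (1, 3, 5):
    print(f"{depth_mm} mm: {math.exp(-mu_eff * depth_mm):.2f}")
```

With these assumed values a useful fraction of the excitation still reaches ~5 mm; at visible wavelengths, where absorption and scattering are much higher, the same estimate collapses to a fraction of a millimetre, which is why NIR operation matters for deep-tissue SERS.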
However, despite the potential superiority of near-infrared SERS nanotags over their fluorescent counterparts in terms of multiplexing capability, issues relating to their toxicity, biodistribution, and clearance efficiency must be properly studied before they can be routinely used in a clinical setting [73]. Nevertheless, SERS detection by topical spraying, as proposed by Sanjiv Gambhir et al., may find clinical usage soon [66]. In many ways, the ongoing development of SERS as a clinical diagnostic tool is very exciting and promising, and hopefully the technique can complement other clinical imaging/sensing tools in the near future.

References
1. Dou X, Yamaguchi Y, Yamamoto H, Doi S, Ozaki Y (1996) Quantitative analysis of metabolites in urine using a highly precise, compact near-infrared Raman spectrometer. Vib Spectrosc 13:83–89
2. Nijssen A, Bakker Schut TC, Heule F, Caspers PJ, Hayes DP, Neumann MH, Puppels GJ (2002) Discriminating basal cell carcinoma from its surrounding tissue by Raman spectroscopy. J Invest Dermatol 119:64–69
3. Ko H, Singamaneni S, Tsukruk VV (2008) Nanostructured surfaces and assemblies as SERS media. Small 4(10):1576–1599


4. Fleischmann M, Hendra PJ, McQuillan AJ (1974) Raman spectra of pyridine adsorbed at a silver electrode. Chem Phys Lett 26:163–166
5. Jeanmaire DL, Van Duyne RP (1977) Surface Raman electrochemistry Part I. Heterocyclic, aromatic and aliphatic amines adsorbed on the anodized silver electrode. J Electroanal Chem Interfacial Electrochem 84:1–20
6. Albrecht MG, Creighton JA (1977) Anomalously intense Raman spectra of pyridine at a silver electrode. J Am Chem Soc 99:5215–5217
7. Kambhampati P, Foster M, Campion A (1999) Two-dimensional localization of adsorbate/substrate charge-transfer excited states of molecules adsorbed on metal surfaces. J Chem Phys 110:551–558
8. Schatz GC, Young MA, Van Duyne RP (2006) Electromagnetic mechanism of SERS. In: Kneipp K, Moskovits M, Kneipp H (eds) Surface enhanced Raman scattering: physics and applications, vol 103, Topics in applied physics. Springer, New York, pp 19–46
9. Otto A (1984) In: Cardona M, Güntherodt G (eds) Light scattering in solids IV, vol 54. Springer, Berlin/Heidelberg, pp 289–418, Chapter 6
10. Le Ru EC, Etchegoin PG (2009) Principles of surface-enhanced Raman spectroscopy. Elsevier, Amsterdam, pp 185–264
11. McCreery RL (2000) Raman spectroscopy for chemical analysis. Wiley, New York
12. Haes AJ, Haynes CL, McFarland AD, Schatz GC, Van Duyne RP, Zou SL (2005) Plasmonic materials for surface-enhanced sensing and spectroscopy. MRS Bull 30:368–375
13. Dieringer JA, McFarland AD, Shah NC, Stuart DA, Whitney AV, Yonzon CR, Young MA, Zhang X, Van Duyne RP (2006) Introductory lecture: surface enhanced Raman spectroscopy: new materials, concepts, characterization tools, and applications. Faraday Discuss 132:9–26
14. Moskovits M (1985) Surface-enhanced spectroscopy. Rev Mod Phys 57:783–826
15. Shafer-Peltier KE, Haynes CL, Glucksberg MR, Van Duyne RP (2003) Toward a glucose biosensor based on surface-enhanced Raman scattering. J Am Chem Soc 125:588–593
16. Stuart DA, Yuen JM, Shah NC, Lyandres O, Yonzon CR, Glucksberg MR, Walsh JT, Van Duyne RP (2006) In vivo glucose measurement by surface-enhanced Raman spectroscopy. Anal Chem 78:7211–7215
17. Ma K, Yuen JM, Shah NC, Walsh JT, Glucksberg MR, Van Duyne RP (2011) In vivo, transcutaneous glucose sensing using surface-enhanced spatially offset Raman spectroscopy: multiple rats, improved hypoglycemic accuracy, low incident power, and continuous monitoring for greater than 17 days. Anal Chem 83:9146–9152
18. Dinish US, Fu CY, Agarwal A, Olivo M (2011) Development of highly reproducible nanogap SERS substrates: comparative performance analysis and its application for glucose sensing. Biosens Bioelectron 26:1987–1992
19. Knauer M, Ivleva NP, Liu XJ, Niessner R, Haisch C (2010) Surface-enhanced Raman scattering-based label-free microarray readout for the detection of microorganisms. Anal Chem 82:2766–2772
20. Levin CS, Kundu J, Janesko BG, Scuseria GE, Raphael RM, Halas NJ (2008) Interactions of ibuprofen with hybrid lipid bilayers probed by complementary surface-enhanced vibrational spectroscopies. J Phys Chem B 112:14168–14175
21. Kundu J, Levin CS, Halas NJ (2009) Real-time monitoring of lipid transfer between vesicles and hybrid bilayers on Au nanoshells using surface enhanced Raman scattering (SERS). Nanoscale 1:114–117
22. Bantz KC et al (2011) Recent progress in SERS biosensing. Phys Chem Chem Phys 13:11551–11567
23. Ock K et al (2012) Real-time monitoring of glutathione-triggered thiopurine anticancer drug release in live cells investigated by surface-enhanced Raman scattering. Anal Chem 84:2172–2178
24. Huang GG, Han XX, Hossain MK, Ozaki Y (2009) Development of a heat-induced surface-enhanced Raman scattering sensing method for rapid detection of glutathione in aqueous solutions. Anal Chem 81:5881–5888


25. Deckert-Gaudig T, Bailo E, Deckert V (2009) Tip-enhanced Raman scattering (TERS) of oxidised glutathione on an ultraflat gold nanoplate. Phys Chem Chem Phys 11:7360–7362
26. Vitol EA, Brailoiu E, Orynbayeva Z, Dun NJ, Friedman G, Gogotsi Y (2010) Surface-enhanced Raman spectroscopy as a tool for detecting Ca2+ mobilizing second messengers in cell extracts. Anal Chem 82:6770–6774
27. Ren W, Fang YX, Wang EK (2011) A binary functional substrate for enrichment and ultrasensitive SERS spectroscopic detection of folic acid using graphene oxide/Ag nanoparticle hybrids. ACS Nano 5:6425–6433
28. Kho KW, Dinish US, Kumar A, Olivo M (2012) Frequency shift in SERS for biosensing. ACS Nano 6:4892–4902
29. Pal A, Isola NR, Alarie JP, Stokes DL, Vo-Dinh T (2006) Synthesis and characterization of SERS gene probe for BRCA-1 (breast cancer). Faraday Discuss 132:293–301
30. Braun G, Lee SJ, Dante M, Nguyen TQ, Moskovits M, Reich N (2007) Surface-enhanced Raman spectroscopy for DNA detection by nanoparticle assembly onto smooth metal films. J Am Chem Soc 129:6378–6379
31. Banholzer MJ, Qin L, Millstone JE, Osberg KD, Mirkin CA (2009) On-wire lithography: synthesis, encoding and biological applications. Nat Protoc 4:838–848
32. Bonham AJ, Braun G, Pavel I, Moskovits M, Reich NO (2007) Detection of sequence-specific protein-DNA interactions via surface enhanced resonance Raman scattering. J Am Chem Soc 129:14572–14573
33. Mahajan S, Richardson JA, Brown T, Bartlett PN (2008) SERS-melting: a new method for discriminating mutations in DNA sequences. J Am Chem Soc 130(46):15589–15601
34. Jin R, Cao YC, Thaxton CS, Mirkin CA (2006) Glass-bead-based parallel detection of DNA using composite Raman labels. Small 2:375–380
35. Faulds K, McKenzie F, Smith WE, Graham D (2007) Quantitative simultaneous multianalyte detection of DNA by dual-wavelength surface-enhanced resonance Raman scattering. Angew Chem Int Ed 46:1829–1831
36. Wang GF et al (2011) Detection of the potential pancreatic cancer marker MUC4 in serum using surface-enhanced Raman scattering. Anal Chem 83:2554–2561
37. Hwang H, Chon H, Choo J, Park JK (2010) Optoelectrofluidic sandwich immunoassays for detection of human tumor marker using surface-enhanced Raman scattering. Anal Chem 82:7603–7610
38. Qian X et al (2008) In vivo tumor targeting and spectroscopic detection with surface-enhanced Raman nanoparticle tags. Nat Biotechnol 26:83–90
39. Keren S, Zavaleta C, Cheng Z, de la Zerda A, Gheysens O, Gambhir SS (2008) Noninvasive molecular imaging of small living subjects using Raman spectroscopy. Proc Natl Acad Sci U S A 105:5844–5849
40. Kustner B et al (2009) SERS labels for red laser excitation: silica-encapsulated SAMs on tunable gold/silver nanoshells. Angew Chem Int Ed 48:1950–1953
41. Maiti KK et al (2010) Development of biocompatible SERS nanotag with increased stability by chemisorption of reporter molecule for in vivo cancer detection. Biosens Bioelectron 26:398–403
42. Samanta A et al (2011) Ultrasensitive near-infrared Raman reporters for SERS-based in vivo cancer detection. Angew Chem Int Ed 50:6089–6092
43. Von Maltzahn G et al (2009) SERS-coded gold nanorods as a multifunctional platform for densely multiplexed near-infrared imaging and photothermal heating. Adv Mater 21:3175–3180
44. Lee S et al (2009) Surface-enhanced Raman scattering imaging of HER2 cancer markers overexpressed in single MCF7 cells using antibody conjugated hollow gold nanospheres. Biosens Bioelectron 24:2260–2263
45. Huang PJ, Chau LK, Yang TS, Tay LL, Lin TT (2009) Nanoaggregate-embedded beads as novel Raman labels for biodetection. Adv Funct Mater 19:242–248


46. Han XX, Zhao B, Ozaki Y (2009) Surface enhanced Raman scattering for protein detection. Anal Bioanal Chem 394:1719–1727
47. Zhang Y, Hong H, Myklejord DV, Cai W (2011) Molecular imaging with SERS-active nanoparticles. Small 7:3261–3269
48. Kong KV, Lam Z, Goh WD, Leong WK, Olivo M (2012) Metal carbonyl-gold nanoparticle conjugates for live-cell SERS imaging. Angew Chem Int Ed 51:9796–9799
49. Wang X et al (2011) Detection of circulating tumor cells in human peripheral blood using surface-enhanced Raman scattering nanoparticles. Cancer Res 71:1526–1532
50. Jokerst JV, Miao Z, Zavaleta C, Cheng Z, Gambhir SS (2011) Affibody-functionalized gold–silica nanoparticles for Raman molecular imaging of the epidermal growth factor receptor. Small 7:625–633
51. Dinish US, Fu CY, Soh KS, Ramaswamy B, Kumar A, Olivo M (2012) Highly sensitive SERS detection of cancer proteins in low sample volume using hollow core photonic crystal fiber. Biosens Bioelectron 33:293–298
52. Park H et al (2009) SERS imaging of HER2-overexpressed MCF7 cells using antibody-conjugated gold nanorods. Phys Chem Chem Phys 11:7444–7449
53. Schlucker S, Kustner B, Punge A, Bonfig R, Marx A, Strobel P (2006) Immuno-Raman microspectroscopy: in situ detection of antigens in tissue specimens by surface-enhanced Raman scattering. J Raman Spectrosc 37:719–721
54. Jehn C, Kustner B, Adam P, Marx A, Strobel P, Schmuck C, Schlucker S (2009) Water soluble SERS labels comprising a SAM with dual spacers for controlled bioconjugation. Phys Chem Chem Phys 11:7499–7504
55. Kneipp J, Kneipp H, Wittig B, Kneipp K (2007) One- and two-photon excited optical pH probing for cells using surface-enhanced Raman and hyper-Raman nanosensors. Nano Lett 7:2819–2823
56. Sha MY, Xu H, Nathan MJ, Cromer R (2008) Surface-enhanced Raman scattering tags for rapid and homogeneous detection of circulating tumor cells in the presence of human whole blood. J Am Chem Soc 130:17214–17215
57. Woo MA et al (2009) Multiplex immunoassay using fluorescent-surface enhanced Raman spectroscopic dots for the detection of bronchioalveolar stem cells in murine lung. Anal Chem 81:1008–1015
58. McQueenie R et al (2012) Detection of inflammation in vivo by surface-enhanced Raman scattering provides higher sensitivity than conventional fluorescence imaging. Anal Chem 84:5968–5975
59. Matschulat A, Drescher D, Kneipp J (2010) Surface-enhanced Raman scattering hybrid nanoprobe multiplexing and imaging in biological systems. ACS Nano 4:3259–3269
60. Wu L et al (2013) Simultaneous evaluation of p53 and p21 expression level for early cancer diagnosis using SERS technique. Analyst 138:3450–3456
61. Wang HN, Vo-Dinh T (2009) Multiplex detection of breast cancer biomarkers using plasmonic molecular sentinel nanoprobes. Nanotechnology 20:065101 (1–6)
62. Dinish US, Balasundaram G, Chang YT, Olivo M (2013) Sensitive multiplex detection of serological liver cancer biomarkers using SERS-active photonic crystal fiber probe. J Biophotonics 1–10. doi:10.1002/jbio.201300084
63. Zavaleta CL et al (2009) Multiplexed imaging of surface enhanced Raman scattering nanotags in living mice using noninvasive Raman spectroscopy. Proc Natl Acad Sci U S A 106:13511–13516
64. Maiti KK, Dinish US, Samanta A, Vendrell M, Soh KS, Park SJ, Olivo M, Chang YT (2012) Multiplex targeted in vivo cancer detection using sensitive near-infrared SERS nanotags. Nano Today 7:85–93
65. Wang Y, Seebald JL, Szeto DL, Irudayaraj J (2010) Biocompatibility and biodistribution of surface-enhanced Raman scattering nanoprobes in zebrafish embryos: in vivo and multiplex imaging. ACS Nano 4:4039–4053


66. Zavaleta CL et al (2013) A Raman-based endoscopic strategy for multiplexed molecular imaging. Proc Natl Acad Sci U S A 110:E2288–E2297
67. Feng S, Lin J, Cheng M, Li YZ, Chen G, Huang Z, Yu Y, Chen R, Zeng H (2009) Gold nanoparticle based surface-enhanced Raman scattering spectroscopy of cancerous and normal nasopharyngeal tissues under near-infrared laser excitation. Appl Spectrosc 63:1089–1094
68. Feng S, Chen R, Lin J, Pan J, Wu Y, Li Y, Chen J, Zeng H (2011) Gastric cancer detection based on blood plasma surface-enhanced Raman spectroscopy excited by polarized laser light. Biosens Bioelectron 26:3167–3174
69. Aydin O, Altaş M, Kahraman M, Bayrak OF, Çulha M (2009) Differentiation of healthy brain tissue and tumors using surface-enhanced Raman scattering. Appl Spectrosc 63:1095–1100
70. Wang H, Malvadkar N, Koytek S, Bylander J, Reeves WB, Demirel MC (2010) Quantitative analysis of creatinine in urine by metalized nanostructured parylene. J Biomed Opt 15:027004
71. Kong KV, Dinish US, Lau WK, Olivo M (2014) Sensitive SERS-pH sensing in biological media using metal carbonyl functionalized planar substrates. Biosens Bioelectron 54:135–140
72. Kang H et al (2013) Near-infrared SERS nanoprobes with plasmonic Au/Ag hollow-shell assemblies for in vivo multiplex detection. Adv Funct Mater 23:3719–3727
73. Xie W, Schlucker S (2013) Medical applications of surface enhanced Raman scattering. Phys Chem Chem Phys 15:5329–5344

3

Photonic Crystal Fiber-Based Biosensors
Xia Yu, Derrick Yong, and Yating Zhang

Contents
Introduction ..................................................... 62
  Introduction of PCF ............................................ 62
  Introduction of PCF-Based Biosensors ........................... 64
Surface-Modified PCF Biosensors .................................. 65
  Surface Modification by Biomolecules ........................... 65
  Surface Modification by Metal .................................. 66
  Conclusion ..................................................... 74
Surface Unmodified PCF Biosensors ................................ 74
  Operating Principle and Theoretical Analysis ................... 75
  Experimental Results and Analysis .............................. 78
  Conclusion ..................................................... 83
References ....................................................... 83

Abstract

Photonic crystal fibers (PCFs) are newly emerging optical fibers that present a diversity of new features beyond what conventional optical fibers can provide. Owing to their unique geometric structure and light-guiding properties, PCFs show outstanding potential for microliter- or even nanoliter-volume biosensing. In this chapter we briefly review applications of PCFs in developing compact and robust biosensors. This research subject has recently attracted much attention owing to the gradually maturing fabrication techniques for fiber microstructures, as well as the development of surface processing techniques enabling activation of fiber microstructures with functional materials. In particular, we consider two

X. Yu (*) • D. Yong • Y. Zhang, Precision Measurements Group, Singapore Institute of Manufacturing Technology, Singapore, Singapore. e-mail: [email protected] © Springer Science+Business Media Dordrecht 2017. A.H.-P. Ho et al. (eds.), Handbook of Photonics for Biomedical Engineering, DOI 10.1007/978-94-007-5052-4_8


sensor types: surface-modified and unmodified PCF biosensors. For the first sensor type, we focus mainly on biomolecule-decorated microstructures and metalized air-hole arrays. Sensors functionalized with bioreceptors typically employ the fluorescence of dye labels on the targeted biomolecules to track specific events, such as DNA hybridization and protein binding. Metallization of the air holes of a PCF with a nanoscale film or particle aggregates introduces a new physical phenomenon, the surface plasmon effect, which further strengthens the optical field interacting with the bio-samples and pushes up the detection sensitivity; this is of great significance for biological analysis at ultralow concentrations and even at the single-molecule level. The second sensor type relies directly on light absorption by aqueous bio-samples in the air channels of the PCF. To elaborate the contribution of the PCF in this sensing mode, two sensor implementations are presented in the following, detailing the influence of fiber structures on the absorption-based sensing performance.

Keywords

Photonic crystal fiber • Total internal reflection • Photonic bandgap • Biosensors • Evanescent field • Surface plasmon • Surface-enhanced Raman scattering • Absorption • Resolution • Sensitivity • Biomolecular

Introduction

Introduction of PCF
Conventional optical fibers allow propagation of light along their length by confining light within the core. Guidance in such fibers is attained via total internal reflection (TIR), which requires the refractive index of the core to be higher than that of the cladding. To obtain a higher refractive index, the core is doped, and to further achieve single-mode propagation, a narrower core is necessary. However, doping raises attenuation, and a tighter core limits the permissible optical power and elicits undesired nonlinear interactions over extensive fiber lengths [1]. The idea of trapping light within a hollow fiber core emerged in 1991. It comprised a hollow core surrounded by a microscopic periodic lattice of air holes in the silica cladding, forming a photonic crystal structure [2]. This photonic crystal structure relies on the regular arrangement of microstructures to fundamentally alter the material's optical properties. Fibers possessing such structures are known as photonic crystal fibers (PCFs). The photonic crystal structure evokes a highly wavelength-dependent cladding index in the PCF, providing a host of customizable properties. Over the past years, PCFs have demonstrated their superiority over conventional optical fibers in many aspects, leading to numerous novel applications, particularly in sensing. In addition, they possess unbound potential to further surpass conventional optical fibers as research in the field advances [3].
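The TIR condition and the doping/core-size trade-off described above follow from textbook step-index relations; the numerical values below are illustrative assumptions, not parameters from the chapter:

```python
import math

# Assumed indices: lightly doped silica core vs. pure-silica cladding.
n_core, n_clad = 1.4510, 1.4440

# Critical angle at the core/cladding interface (from the normal); rays
# striking more steeply than this leak into the cladding.
theta_c = math.degrees(math.asin(n_clad / n_core))

# Numerical aperture: the acceptance cone coupled into guided rays.
na = math.sqrt(n_core**2 - n_clad**2)

# Normalized frequency V; single-mode guidance requires V < 2.405, which
# is why a higher-index core must also be made narrower.
core_radius_um, wavelength_um = 4.1, 1.55
v = 2 * math.pi * core_radius_um * na / wavelength_um
print(f"theta_c = {theta_c:.1f} deg, NA = {na:.3f}, V = {v:.2f}")
```

A PCF replaces the fixed cladding index here with a strongly wavelength-dependent effective cladding index, which is what enables properties such as endlessly single-mode guidance.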

3

Photonic Crystal Fiber-Based Biosensors

Fig. 1 Schematics of four basic types of PCF: index-guiding, hollow-core, all-solid, and hybrid (structures composed of silica, high-index doped rods, and air)

Diverse PCF designs have arisen over the years and are mainly classified into four basic types [4]: (a) index-guiding PCF (solid core surrounded by a periodic array of air holes), (b) hollow-core PCF (hollow core surrounded by a periodic array of air holes), (c) all-solid photonic bandgap (PBG) fiber (solid core surrounded by a periodic array of high-index rods), and (d) hybrid PCF (solid core surrounded by a periodic array of air holes and high-index rods). The simplest form of guidance occurs in index-guiding PCF, where light is guided by modified TIR [5–7]. As illustrated in Fig. 1, this form of PCF incorporates a solid core (an introduced defect in the periodic array of air holes) within a photonic crystal structure that constitutes the cladding. This configuration yields a higher refractive index in the core, allowing light to be confined within it. Although TIR readily guides light, it cannot avoid the scattering losses and intrinsic absorption that come with propagation in a solid core; yet without a solid core, light cannot be guided via TIR at all. In simple hollow fibers, guidance depends on external reflection, which is highly leaky and multimodal [8]. In contrast, hollow-core PCFs guide light within the hollow core without such leakage. Also shown in Fig. 1, a hollow-core PCF has a hollow core situated within a photonic crystal cladding. This configuration traps certain bandwidths of light in the hollow core through the photonic bandgap (PBG) effect of the cladding, instead of TIR, and enables single-mode guidance [9]. Unlike conventional optical fibers, hollow-core PCFs guide light in an air-filled region, permitting minimal attenuation over extended lengths. Guidance arises from coherent Bragg scattering, whereby specific bandwidths of light are prevented from escaping into the cladding and remain confined within the hollow core.
Since only certain bands of light are confined and guided, these fibers are also known as PBG fibers. Furthermore, the photonic crystal cladding encloses more than 99% of the optical power within the hollow core, enabling low-loss propagation along the PCF [10]. As in index-guiding PCFs, loss in hollow-core PCFs remains a significant issue; nonetheless, hollow-core PCFs offer the best prospects for exceptionally low-loss fibers because light propagates within the air-filled hollow core. Losses as low as 1.2 dB km⁻¹ have been reported [10], and theoretical predictions indicate potential losses on the order of 0.1 dB km⁻¹, essentially lower than those of conventional optical fibers [4].
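The quoted attenuation figures translate directly into transmitted power fractions; a minimal sketch (the 1 km length is an illustrative choice):

```python
def transmitted_fraction(loss_db_per_km, length_km):
    """Fraction of optical power surviving a fiber with the given attenuation."""
    return 10 ** (-loss_db_per_km * length_km / 10)

after_1km_reported = transmitted_fraction(1.2, 1.0)   # reported hollow-core loss
after_1km_predicted = transmitted_fraction(0.1, 1.0)  # theoretically predicted loss
```

At 1.2 dB km⁻¹ about 76% of the launched power survives 1 km, whereas at the predicted 0.1 dB km⁻¹ nearly 98% survives.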


X. Yu et al.

Fig. 2 Stack-and-draw process for PCF fabrication: (1) capillaries are drawn from larger tubes; (2) the stack is assembled manually at the macroscopic scale on a hexagonally shaped jig; (3) the stack is inserted into a tube, remaining gaps are filled with pure silica rods, and the assembly is drawn into a millimeter-wide preform; (4) the preform is further drawn into fiber using a fiber-drawing tower under optimally set temperature and drawing rate

Fabrication is essential to realizing modeled PCF designs. Compared with conventional fibers, PCF fabrication is a less sophisticated process: in brief, silica capillaries are stacked, fused, and eventually drawn into fibers [1]. This technique offers great flexibility by enabling varied, sophisticated lattices to be assembled; the process is illustrated in Fig. 2 [4]. Drawn fibers are verified via high-precision microscopy and then polymer-coated to improve their mechanical properties. PCF fabrication from various materials through extrusion [11–13], drilling [14], and built-in casting [15] has also been reported.

Introduction of PCF-Based Biosensors

Fiber-optic technology has permitted the miniaturization of numerous sensors, leading to vast exploration of fiber-optic sensors over the past few decades. The advent of PCF has provided new grounds for enhanced sensing capabilities with fiber optics: the novel manipulation and guidance of light within PCFs has distinctly heightened performance in both precision and accuracy. In particular, the modes in PCF, namely the core, cladding, and hybrid modes, are sensitive to ambient conditions and can provide the required data either individually or collectively [16]. As reported, changes in strain and temperature readily produce a response in all modes, whereas changes in the surrounding medium affect only certain cladding modes. Post-processing techniques, such as infiltration with gases [17] and liquids [18], coating with metal films [19, 20] and metallic nanoparticles [21, 22], as well as tapering [23], have also been investigated to increase the sensitivity of PCF-based sensors. Surface-modified and unmodified PCF-based biosensors are discussed in this chapter.


Surface-Modified PCF Biosensors

An important characteristic of PCF is the exceptionally high surface-area-to-volume ratio provided by its array of air holes. Surface modifications performed on the walls of these air holes thus further elevate the sensitivity of a PCF-based sensor [24].

Surface Modification by Biomolecules

The specificity of deoxyribonucleic acid (DNA) hybridization and of protein-protein interactions forms the basis of optical sensing via surface functionalization with biomolecules. Although the interaction occurs between biomolecules immobilized on the surfaces and biomolecules introduced in the analyte, detection in PCF is optics based; these biomolecules are therefore often tagged with fluorophores. Rindorf et al. reported such a PCF-based sensor in which the sensing component (a surface-functionalized PCF) is incorporated into a biochip [25]. The reported PCF had single-stranded DNA (ssDNA) immobilized on the walls of its air holes, complementary to Cy3 (fluorophore)-labeled target ssDNA. When hybridization occurs between this pair of ssDNAs, the resultant double-stranded DNA carries a Cy3 molecule. Light is coupled into the PCF from an input multimode optical fiber (MMF) and collected via an output MMF, as illustrated in Fig. 3. Cy3 trapped within the PCF by DNA hybridization thus absorbs a certain band of the input light, and this absorbance is reflected in the collected output spectrum. If noncomplementary ssDNA is introduced instead, no hybridization occurs, no Cy3 is trapped, and hence no absorbance is observed.

Fig. 3 Schematics of optical fiber layout in biochip [25]


In addition, the incorporation of PCF into biochips has facilitated infiltration of the PCF with desired samples. Apart from surface modification with DNA, protein immobilization in PCF has also been reported. Analogous to DNA hybridization, proteins interact specifically with other proteins, as in antigen-antibody assays. Specifically, an estrogen receptor (ER, the antigen) from breast cancer cells, immobilized within a length of PCF, was conjugated with an anti-ER antibody [26]. As in [25], fluorescent dye labels are involved in the sensing process; however, instead of labeling the target protein with a fluorophore, a secondary antibody capable of binding the anti-ER is labeled. Input light corresponding to the excitation wavelength of the fluorophore is used, and successful binding of anti-ER to ER is characterized by absorbance and corresponding emission peaks. This detection regime identified 20 pg of ER in a 50 nL sample, enabling highly sensitive detection of breast cancer indicators even with extremely low sample volumes. DNA hybridization detection without fluorescence has also been reported in several articles. Label-free detection was experimentally performed on a PCF with long-period gratings (PCF-LPG) [27]: a layer of biomolecules immobilized on the air-hole walls elicited resonant wavelength shifts and further enabled the layer's thickness to be estimated from the experimental data. Sensing was also theoretically demonstrated in a hollow-core PCF Bragg fiber [28]; transmission properties of the Bragg fiber were altered by hybridization of DNA at the walls of the fiber's air holes, and, similar to Ref. [27], the thickness of the DNA layer was quantifiable based on the reported theoretical model. Beyond such affinity-governed detection regimes, surface modifications have also enabled pH sensing in PCF-based devices.
Specifically, pH sensing has been successfully demonstrated in PCFs surface-modified with pH-sensitive polysaccharide-based films comprising cellulose acetate doped with a pH-sensitive fluorescent dye, eosin [29]. This surface-modified microstructured polymer optical fiber probe also exhibits a surfactant-modifiable pH response range and highlights the feasibility of organic meshes as indicator carriers.
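To put the 20 pg in 50 nL detection limit quoted above in perspective, it can be converted to a molar concentration; the molecular weight used below (~66 kDa, a typical figure for the estrogen receptor) is an assumption for illustration, not a value given in the text:

```python
def molar_concentration(mass_g, volume_l, mol_weight_g_per_mol):
    """Convert a detected mass in a given sample volume to molarity."""
    return mass_g / volume_l / mol_weight_g_per_mol

# 20 pg of estrogen receptor in a 50 nL sample, assuming ~66 kDa
c_molar = molar_concentration(20e-12, 50e-9, 66_000)  # on the order of a few nM
```

Under this assumption the detection limit corresponds to roughly 6 nM, a useful benchmark when comparing against other affinity biosensors.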

Surface Modification by Metal

Surface plasmon resonance (SPR) has empowered optical sensing with remarkably high sensitivities. Typically, SPR sensing is performed in the Kretschmann-Raether configuration, where p-polarized light passes through a prism and is reflected from a thin metal layer (usually gold or silver). At the point of reflection, evanescent waves formed at the metal-dielectric interface are phase-matched with plasmonic waves at the metal-analyte interface, resulting in the formation of surface plasmon waves. Minute changes in the analyte's refractive index are reflected in changes in the amplitude or phase of the reflected light. To miniaturize this technology, it has been combined with optical fibers, as comprehensively discussed in [30]. PCF has eased the issues of phase matching and plasmon excitation by providing a Gaussian-like core mode and metal-coated air-hole walls, respectively. Furthermore, the enabling of


microfluidics through its air holes provides practicality. The reported theoretical study yields a refractive index sensitivity of 10⁻⁴ refractive index units (RIU) [19], opening possibilities for sensing changes in biological analytes and making PCF ideal as an SPR fiber-optic biosensor. Besides coating the air-hole walls with a continuous metal layer, metallic nanoparticle coatings have also been explored. Metallic nanoparticles serve as a substrate for surface-enhanced Raman scattering (SERS), which is highly molecule specific and can enhance the Raman signal by factors of 10⁶ to 10¹⁵. The analyte solution is infiltrated via simple capillary action, allowing the confined optical modes in the PCF's air holes to interact with it and with the immobilized gold nanoparticles to yield measurable SERS signals, providing a possible platform for the detection of biomolecules. Generally, two types of PCF have been studied intensively for SERS applications, namely solid-core PCF (SCPCF) [31–34] and hollow-core PCF (HCPCF) [35–38]. Their distinct guidance properties yield different modes of light-sample interaction: in SCPCF, the Raman signal is generated from sample molecules within the evanescent field, whereas in HCPCF, the excitation light confined in the hollow core interacts directly with the sample there. HCPCF brings ultrahigh efficiency of surface plasmon excitation, but it has one major limitation: a rather narrow transmission window. This effect becomes especially significant after analyte infiltration, as the fiber loses its PBG properties, restricting the wavelength of the excitation light and hindering applications in samples whose Raman wavelengths fall outside the transmission band. In contrast, SCPCF has a much broader wavelength range to accommodate diverse sensing regimes. The major challenge for SCPCF-based SERS sensors, however, is their lower sensitivity, which arises from the smaller surface volume of light-sample interaction.
For both PCF templates, great efforts have been made to further improve SERS capabilities. For instance, to alleviate the restriction of the bandgap effect in HCPCF, a selective-filling approach was proposed to optimize the guidance properties of the fiber while preserving high SERS sensitivity [36]. A few works have also been devoted to enhancing SERS efficiency in SCPCF, from the viewpoint of structural design as well as optimization of the coating process [22, 34]. Reproducible deposition of metallic nanoparticles inside the air holes mainly involves three methods. The first is stabilization with a cationic surfactant, for example, hexadecyltrimethylammonium bromide, which readily adsorbs metal nanoparticles through opposite-charge affinity [21]. The second is silanization of the silica walls of the air holes; the silane chemically binds to the air-hole surface and provides coupling sites for metal nanoparticles [39]. A third technique worth mentioning is high-pressure chemical deposition, reported in [32]: a silver precursor complex was delivered into the fiber holes under high pressure, and thermal reduction of the precursor was rigorously controlled to form an annular deposition of silver nanoparticles. In the rest of this section, a new method of SERS substrate fabrication, multilayered deposition [40, 41], and recent progress in using an offset launch method to amplify the SERS efficiency of an SCPCF-based Raman sensor are presented.
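The enhancement factors of 10⁶–10¹⁵ quoted above are conventionally estimated as a ratio of per-molecule signal intensities between the SERS measurement and a normal Raman reference; a minimal sketch (all input numbers are illustrative placeholders, not measured values from this work):

```python
def sers_enhancement_factor(i_sers, n_sers, i_ref, n_ref):
    """Analytical enhancement factor: per-molecule SERS signal divided by
    the per-molecule normal Raman signal of a reference sample."""
    return (i_sers / n_sers) / (i_ref / n_ref)

# Illustrative: equal counts from a 1e-7 M SERS sample and a 1e-1 M bulk reference
ef = sers_enhancement_factor(i_sers=4500, n_sers=1e-7, i_ref=4500, n_ref=1e-1)
```

In practice the molecule counts are usually replaced by concentrations (as here) or by estimates of the number of molecules in the probed volume.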


Fig. 4 Micrograph of SCPCF cross section. (a) Circles A and B indicate launching positions for core launch and offset launch, respectively. (b) Micrograph of SCPCF axial profile after multilayer deposition of Au NPs. (c) Absorbance spectrum of prepared Au NPs (inset: SEM micrograph of Au NPs) (Reproduced from Ref. [47])

Sensor Fabrication

The SCPCF has a silica core and three rings of air holes arranged hexagonally, as illustrated in Fig. 4a. The diameter of the air holes is 5.4 μm and the hole-to-hole distance is 9.6 μm. The preparation of the gold colloid follows the recipe reported in Kumar et al.'s work [42]. Briefly, 50 ml of HAuCl4 solution (2.54 × 10⁻⁴ M) was brought to a boil under vigorous stirring. Approximately 1 ml of sodium citrate solution (38.8 mM) was then quickly added to the mixture. Upon a color change, heating was maintained for another 10 min, and the solution was subsequently allowed to cool to room temperature under stirring. This gold-reduction procedure produces Au NPs approximately 15 nm in diameter, with an absorption peak at 519 nm in the UV-vis spectrum corresponding to the excitation wavelength of their localized surface plasmons (shown in Fig. 4c). The fiber was cut into segments about 8 cm in length, with both ends carefully cleaved. Before modification, a portion of the jacket (~3 cm) at the fiber tip was stripped off. Multilayer deposition of Au NPs in the air holes of the fiber was carried out in the following steps: (1) The fiber was first cleaned with freshly prepared piranha solution (30% H2O2 and 98% H2SO4 mixed in a volumetric ratio of 1:3). The solution was pumped into the fiber's air holes with a custom-built pressure cell


Fig. 5 Schematic of multilayered Au NPs deposition on the SCPCF inner walls of air holes via the “layer-by-layer” deposition method

and allowed to react for 30 min. (2) The air holes were then flushed thoroughly with deionized (DI) water and dried in N2 gas. (3) Once cleaned, the air holes were infiltrated with a 3% (by volume) solution of APTMS in methanol and allowed to react for approximately 12 h. This step functionalized the air-hole surfaces with amine groups (shown in Fig. 5). (4) The fiber was then flushed with copious amounts of methanol and dried. (5) Next, the Au NP solution was infiltrated and allowed to react with the amine-functionalized surface for another 24 h; the strong affinity between amine groups and Au NPs enabled adherence of the Au NPs to the silica surface. (6) Finally, the first round of deposition ended with continuous flushing with DI water and drying under N2 gas. Subsequent Au NP layers were added by repeating steps (3)–(6) until the desired number of layers was obtained. After eight cycles of Au NP deposition, a wine-red color was observed in the air holes of the fiber under a microscope at 100× magnification (Fig. 4b). To identify this red substance, a portion of fiber was characterized under an SEM/EDX system. The fiber was cracked with tweezers to expose the inner surface of the air holes (Fig. 6a). As shown in Fig. 6b, a distribution of nanoparticle sizes and shapes exists on the silica wall; specifically, the particle size varies between 15 and 60 nm. An EDX investigation (Fig. 6c) verified that the particles attached to the air-hole walls are indeed Au NPs. This nonuniformity in particle size arises from

70

X. Yu et al.

Fig. 6 (a) SEM micrograph of air holes deposited with Au NPs. (b) Magnification of region marked with red circle in (a). (c) Element spectrum of (b) collected with EDX (Reproduced from Ref. [47])

aggregation of the Au NPs as more layers of deposition were added [38]. The Au NPs together with their clusters create the SERS active area around the silica core.
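The citrate-reduction recipe given above implies the following reagent quantities (a simple bookkeeping sketch of the stated volumes and molarities):

```python
def moles(molarity_mol_per_l, volume_ml):
    """Amount of substance from molarity and a volume given in mL."""
    return molarity_mol_per_l * volume_ml / 1000

n_gold = moles(2.54e-4, 50)           # HAuCl4: ~12.7 umol
n_citrate = moles(38.8e-3, 1)         # sodium citrate: ~38.8 umol
citrate_to_gold = n_citrate / n_gold  # roughly 3:1 molar excess of reductant
```

The roughly threefold citrate excess is consistent with Turkevich-style syntheses, where the citrate-to-gold ratio controls the final particle size.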

Sensor Characterization and Application

To test the SERS performance of the probe, Rhodamine B (RhB) solution, a commonly used Raman dye, was chosen as the sample for analysis. RhB was first infiltrated into the air holes under pressure, which required only a few seconds. The fiber was then mounted on a holder and placed under the microscope of a Raman spectrometer (Renishaw), where a 50× objective lens (NA 0.75) was used to couple the excitation laser (a 785 nm diode laser with 3 mW power) into the fiber and simultaneously collect the backscattered Raman emission. Here, Raman emission was generated from RhB molecules adsorbed onto the SERS substrate, the deposited Au NPs. Since the Raman spectrometer allows visualization of the fiber end face, the launch position of the laser source (spot size of ~3 μm) could be accurately determined. Utilizing this feature, different measurement methodologies were adopted. One approach was to focus the laser source directly into the solid fiber core (circle A in Fig. 4a), the most common method mentioned in the literature [31–34]. An alternative involves illumination of a sample-filled air hole in the second layer of the cladding (circle B in Fig. 4a), subsequently referred to as offset launch. All SERS spectra were acquired with three accumulations of 10 s exposures. Figure 7 presents the Raman spectra collected under the abovementioned launching conditions, core launch and offset launch, for RhB solutions of concentration 10⁻⁵ M. All spectra were vertically separated for clarity but

Fig. 7 Raman spectra of 10⁻⁵ M RhB in fabricated SCPCF Raman probe (Reproduced from Ref. [47])

were not scaled relative to one another. Curves A and B correspond to the signals collected from the SERS probe under core and offset launch, with the silica Raman background removed. For comparison, the Raman light in an uncoated SCPCF was also measured (curves C and D). Note that regardless of the launch method, no Raman signals were observed from the uncoated SCPCF, whereas in the presence of Au NPs a set of pronounced Raman peaks appeared. The Raman peaks mainly spread from 1,100 to 1,700 cm⁻¹, consistent with the RhB Raman signatures reported in the literature [43]. Notably, for offset launch, the SERS spectrum displays better-resolved peaks (e.g., those located at 1,076, 1,123, and 1,593 cm⁻¹) with a higher signal-to-noise ratio than the core-launch scheme. In particular, the Raman intensity at 1,504 cm⁻¹ increases from 2,695 counts to 4,500 counts as the launch point switches from the core to the air hole, demonstrating the potential for higher sensitivity with offset launch. Further investigation of the SERS enhancement was conducted at a lower sample concentration of 10⁻⁷ M. The Raman signal corresponding to core launch is "drowned" by the noise, as shown in Fig. 8, whereas the SERS peaks for offset launch remain clearly distinguishable despite the system noise. These results highlight the advantage of offset launch in improving the sensing capability of the SERS probe. The achieved detection limit of 10⁻⁷ M is comparable with most existing SERS sensors [44]. A qualitative analysis of the difference in SERS signal strength between the two launching conditions, based on the light distribution within the liquid-filled SCPCF, was carried out using the finite element method as follows. Since the concentration

Fig. 8 SERS spectra of 10⁻⁷ M RhB in fabricated SCPCF Raman probe (Reproduced from Ref. [47])

of the RhB solution is sufficiently low, the refractive index of the solution is assumed to be that of water (~1.33); the infiltrated analyte thus does not disrupt the fiber's TIR guidance. The calculated power distributions of the fundamental core modes are shown in Fig. 9. The normalized power distribution under core launch (Fig. 9a) shows that the low-loss core-guided mode has most of its energy confined in the silica fiber core, with only a small amount of power located in the liquid-filled regions surrounding the core. This corresponds to the evanescent field that can interact with the Au NPs and excite surface plasmons along the fiber length. Since the precise 3D geometry of the SERS substrate is unknown, the gold clusters in each hole were assumed to occupy a thin layer, 60 nm thick, on the inner wall of the air hole; only the modal field penetrating these layers contributes to surface plasmon excitation. Calculated using the method described in [45], the fraction of power over these active regions is 0.0063%. Under offset launch, the excited mode displays distinctly different propagation behavior, as shown in Fig. 9b. The power is mainly localized in the silica region of the side waveguide, consisting of a partially liquid-filled core and six surrounding channels. Here the effective sensing region covers 17 holes, with a total sample volume of ~30 nL, and a power fraction of 0.049% was estimated for plasmonic excitation, eight times that of the core-launch scheme. The increased power fraction is attributed to the much smaller core-cladding index contrast of the side waveguide, which yields a larger field overlap with the Au NPs. However, the


Fig. 9 Normalized power distribution of fundamental modes as a function of radial position (corresponding to white line in respective insets: 2D intensity plots) excited under (a) core launch and (b) offset launch at excitation wavelength of 785 nm (Reproduced from Ref. [47])

theoretically predicted improvement may be offset by three factors: (i) deviation of the mode intensity profile from a Gaussian shape lowers the coupling efficiency of a standard Gaussian laser beam compared with core launch, so less power enters the fiber probe; (ii) the larger gold-coated area involved in Raman enhancement under offset launch also induces stronger absorption loss of the Raman light; and (iii) light propagating in the side waveguide has a


higher confinement loss, as can be understood from the power leakage toward the adjacent solid fiber core (Fig. 9b) and the outermost silica cladding. To some extent, this launching method prevents the Raman light generated along the fiber from being well guided and collected by the objective lens. The finally detected Raman signal strength is therefore a trade-off between the amplified SERS effect and the various sources of loss. Nevertheless, offset launch still exhibits an obvious advantage over the common core-launch method. The design flexibility of PCF cross-sectional structures and the layer-by-layer Au NP deposition technology provide several degrees of freedom that can be manipulated to achieve optimal performance, namely the refractive index of the fiber material, the size of the air holes, the fiber length, the size of the Au NPs, and the number of deposition cycles. Offset launch has similarly been applied to a hollow-core polymer PCF SERS probe, as detailed in Cox et al.'s work [35]. In their experiment the laser source was focused into arbitrary cladding air holes, but the collected SERS signal was much weaker than that obtained by core launch. This was attributed to the strong light-confinement property of the kagome-lattice cladding, which confines light within the large hollow core but lacks an efficient mechanism for guiding modes in the ultrathin silica struts or the hollow capillaries of the cladding [46].
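For reference, the Raman shifts discussed in this section map onto absolute wavelengths for the 785 nm excitation used here via standard Stokes-shift arithmetic:

```python
def stokes_wavelength_nm(excitation_nm, shift_cm1):
    """Absolute wavelength of a Stokes Raman line for a given shift."""
    ex_cm1 = 1e7 / excitation_nm        # excitation wavenumber in cm^-1
    return 1e7 / (ex_cm1 - shift_cm1)   # scattered wavelength in nm

low = stokes_wavelength_nm(785, 1100)   # lower edge of the RhB fingerprint band
high = stokes_wavelength_nm(785, 1700)  # upper edge of the RhB fingerprint band
```

The 1,100–1,700 cm⁻¹ fingerprint region thus falls near 859–906 nm, which the guiding fiber must transmit along with the 785 nm pump.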

Conclusion

A simple offset launch method has been demonstrated as an efficient way to amplify the SERS effect within SCPCF [47]. This was achieved by shifting the launching point from the solid fiber core to a sample-filled air hole in the cladding. The stronger SERS signal observed was attributed to the significant increase in the overlap between the excited mode and the Au NPs. The offset-launch-induced enhancement was also demonstrated to improve the detection limit to 10⁻⁷ M for sample volumes as low as 30 nL. Lastly, the amplification mechanism contributes a design principle for future structural optimization of fiber SERS probes.

Surface Unmodified PCF Biosensors

As previously discussed, fiber parameters strongly influence the guidance of light within PCFs. A PCF with strong evanescent-field penetration was reported to propagate visible light within its silica walls, with evanescent waves in its air holes available for interaction with the infiltrated analyte [48, 49]. Its functionality was demonstrated through the detection of Cy5-labeled DNA molecules via absorbance spectrum analysis. In such PCFs, the extent of the evanescent field overlapping the infiltrations is superior to that of conventional evanescent-field spectroscopy devices [50]. The results show that the enhanced sensitivity is attributable to a long effective interaction length while requiring only submicroliter sample volumes. Recently, liquid-core waveguide (LCW) cells have been widely used to minimize the loss of light. In an LCW


cell, light propagates through the LCW fluid by TIR, as the tube (cladding) has a lower refractive index than the fluid (core). Such cells are now widely used for long path-length spectrophotometry [51]. However, the construction material for LCW cells, Teflon® AF, is one of the most expensive commercial polymers. Moreover, as Teflon® AF is highly gas permeable, it poses the problem of evaporation of the internal solution. In the literature, various index-guiding PCFs have been proposed as miniaturized waveguide flow cells, particularly for long path-length axial absorbance, offering several advantages over conventional absorption detection methods. As light propagates along the fiber, the optical path equals the length of the fiber, which in theory could be any desired length; sensitivity is therefore enhanced by the extraordinarily long optical path. Moreover, stray-light effects due to the increased path length are eliminated by the waveguide nature of PCFs. Furthermore, compared with Teflon® AF, fibers are a much cheaper alternative, and the small cross-sectional area of the air holes minimizes reagent consumption. Lastly, the robust and flexible nature of PCFs renders them extremely suitable for on-chip integration.

Operating Principle and Theoretical Analysis

Absorption spectroscopy is the most widely used detection method. It refers to a range of techniques employing the interaction of electromagnetic radiation with matter. As defined by the Beer-Lambert law [52], the absorbance A of light by a sample is proportional to the optical path length b, the chromophore concentration c, and its molar absorption coefficient ε. In PCFs, the fraction of light interacting with the aqueous solution, Phole, also determines the absorbance. The term Phole is therefore included in the calculation of absorbance in the form A = log(I0/It) = εbcPhole + α, where α is the attenuation of the PCF without absorption, in dB. In absorption spectroscopy, the intensities of a beam of light measured before and after interaction with a sample are related by I = I0 exp(−klL), where I0 is the intensity of the incident light, I is the intensity of the light that has passed through the layer of the substance, L is the thickness of the substance layer (the path length), and kl is the extinction coefficient, which depends on the type of substance and the wavelength of the incident light. Different molecules absorb radiation of different wavelengths, and an absorption spectrum shows a number of absorption bands corresponding to structural groups within the molecule. Thus, knowing the shape of the absorption spectrum, the optical path length, and the amount of radiation absorbed, one can determine the structure and concentration of the compound. Such spectra are often obtained with a spectrophotometer, in which the sample, often liquid, is contained in an optical container called a flow cell; the cell is placed into the spectrophotometer and allowed to interact with the light beam at varying wavelengths. In chemical sensing, solid-core PCFs based on index guiding show some advantages over their hollow-core counterparts governed by the PBG [53–57]. First, they support a broader spectral band.
Second, the requirement for an accurate control of the air hole size and periodicity of the holes are less stringent in the former, which


Fig. 10 SEM micrograph of two PCF cross-sections. Structure A, solid-core PCF; Structure B, hollow-core PCF. Corresponding modal field distributions (Top: evanescent field; Bottom: core mode) are shown on the right (Reproduced from Refs. [54] and [56])

increases the fabrication tolerance. Figure 10 shows scanning electron micrographs of these two structures. For structure A, the average lattice period Λ is 5.2 μm and the average diameter of the cladding air holes d is 3.2 μm. Since the index of the solid core is higher than that of the air-hole cladding, modified total internal reflection governs the guidance of light. In structure B, instead of filling the center of the preform with a solid rod, a hole with a diameter of 3.1 μm is introduced; here Λ is 6.6 μm and d is 6.2 μm. As the average index of the defect core is still higher than that of the cladding, the guiding mechanism is likewise modified TIR. The structures shown in Fig. 10 were subsequently modeled via the full-vector beam propagation method [58], with transparent boundary conditions used to enable the analysis of leaky modes. At every point of internal reflection at the silica-hole interface, a small portion of the field penetrates and decays

Fig. 11 Calculated percentage of evanescent-field intensity in the holey region relative to the total confined power (a) at different wavelengths (inset: with various indices of the infiltrations) and (b) with the change of air-filling fraction, for different air-hole diameters d and lattice periods Λ (Reproduced from Ref. [56])

exponentially. By inserting a solution into the fiber holes, the average core and cladding refractive indices increase with the solution's refractive index. The fiber thus experiences higher leakage because the difference between the average indices of the core and cladding decreases. The corresponding core mode and the evanescent field, which penetrates into the infiltrated holey regions, are also shown in Fig. 10. Taking structure A as an example, most of the guided light's power was calculated to be confined within the solid region of the core, with only a fraction


extending into the holey region (with a refractive index of 1.33 at 510 nm). Moreover, the evanescent field is further enhanced as the infiltrated solution's refractive index (and hence concentration) increases. As shown in the inset of Fig. 11a, upon increasing the index of the infiltrated material from n = 1 (air) to n = 1.4, the ratio of the evanescent field intensity in the holey region to the total confined power increases from 0.3% to 1.7%. Similar theoretical results were obtained for structure B. As the Beer-Lambert expression above indicates, the absorbance in a PCF is wavelength dependent because the fraction of light interacting with the aqueous solution, Phole, is a strong function of wavelength. Moreover, the cladding mode itself is also highly wavelength dependent [59]: as the wavelength increases, more light leaks into the air holes, and thus the evanescent field also increases. The fiber is therefore expected to perform better in absorbance detection at longer wavelengths. Furthermore, as the microstructure array is mainly characterized by two parameters, the air-hole diameter d and the pitch Λ, more leakage occurs as d increases, which enhances the interaction with the infiltrated liquid. This is verified through calculation: the evanescent field increases with d/Λ, as shown in Fig. 11b.
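To make the modified Beer-Lambert relation above concrete, the short Python sketch below evaluates A = εbcPhole + α for the overlap fractions quoted in the text. The molar absorptivity and loss values are illustrative assumptions, not figures from this chapter.

```python
def pcf_absorbance(eps, b_cm, c_mol, p_hole, alpha=0.0):
    """Absorbance of an analyte-filled PCF: A = eps*b*c*P_hole + alpha,
    where P_hole is the fraction of guided power overlapping the
    liquid-filled holes and alpha is the fiber's intrinsic loss (dB)."""
    return eps * b_cm * c_mol * p_hole + alpha

# Hypothetical inputs: eps ~ 5 L mol^-1 cm^-1, a 30 cm fiber, a 0.5 mol/L
# solution, and the calculated overlaps of ~1.7% (n = 1.4 infiltration)
# vs. ~0.3% (n = 1.0 infiltration).
A_high = pcf_absorbance(eps=5.0, b_cm=30.0, c_mol=0.5, p_hole=0.017)
A_low = pcf_absorbance(eps=5.0, b_cm=30.0, c_mol=0.5, p_hole=0.003)
# The overlap fraction simply rescales the effective path length, so the
# same fiber yields ~5.7x the absorbance when the overlap rises to 1.7%.
```

Because Phole multiplies the geometric path length, raising the evanescent overlap is equivalent to lengthening the fiber, which is why structures with stronger field-liquid overlap yield higher absorbance sensitivity.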

Experimental Results and Analysis

The setup for axial absorbance measurement is shown in Fig. 12. The PCF was supported and aligned using two 3-axis translation stages with V-grooved fiber holders. Light from a broadband halogen light source (Ocean Optics, Dunedin, Florida, USA) with a customized fiber FC connector pigtail was coupled to the end face of the PCF, with its alignment adjusted under a CCD camera. The output port, on the other hand, comprised a fiber core coupled into a USB2000 miniature spectrometer (Ocean Optics, Dunedin, Florida, USA) by means of a 20× objective lens and a collimator. The spectrometer employs the "OO1 Base 32" software for measuring the transmission spectrum with a resolution of 0.2 nm. Here, cobalt(II) chloride (CoCl2·6H2O) (Sigma-Aldrich, Missouri, USA) solution was

Fig. 12 Experimental setup for absorbance measurement. The fiber was supported and aligned by a pair of 3-axis translation stages with V-grooved fiber holders. Light was coupled into the fiber core by free-space butt-coupling with the aid of a CCD camera. The end face of the PCF was coupled to an Ocean Optics USB2000 miniature spectrometer using a focused lens and a collimator (Reproduced from Ref. [57])


used as the absorption material. CoCl2 is a crystalline solid with a pale rose color when hydrated; it absorbs visible wavelengths ranging from 450 to 580 nm, with maximum absorbance at 510 nm. Through progressive dilution with deionized water, aqueous samples with CoCl2 concentrations between 10 and 500 mM were prepared. A 30 cm length of PCF was subsequently infiltrated with CoCl2 solution by the capillary effect. A new length of fiber was used for each infiltration of a different CoCl2 concentration, with complete infiltration verified under the microscope. A background transmission spectrum, with an infiltration of deionized water, was also recorded as a reference. Taking structure B as an example, the absorbance spectra shown in Fig. 13a are the transmission curves of the CoCl2 solutions with the background subtracted. As the CoCl2 concentration increased, the absorbance increased correspondingly. From the absorption spectra, an absorption window of CoCl2 in PCFs was observed between 450 and 580 nm, indicating that a PCF can serve as a sample container for absorption spectrometry. It was also noted that the maximum absorbance of CoCl2 in the PCF occurred at approximately 530 nm, about 20 nm away from the 510 nm absorption maximum in a conventional spectrophotometer [56]; this shift is due to the inherent attenuation property of the PCF. Calibration of CoCl2 was done at 510 nm with concentrations varying in the range of 10–500 mM. A fitting curve was generated to determine the sensitivity for absorption detection, presented in Fig. 13b. A sensitivity of approximately 0.4 Mol⁻¹ and good linearity with R² = 0.9681 were obtained; this also provides a calibration curve for determining an unknown sample concentration. A linear fit to the experimental data from structure B gives an absorbance sensitivity of 1.6 Mol⁻¹ and a coefficient of determination R² of 0.996, which shows the excellent linearity of the absorbance response, as shown in Fig. 14. The sensitivity of structure B is about four times larger than that of structure A, owing to the central air hole that enhances the evanescent field. The deviation between the linear fitting curve and the experimental data is attributed to the alignment resolution and possible scattering at the fiber end faces. Repeating the absorbance measurements could also reduce the uncertainty in the sensitivity. Alternatively, perpendicular measurement was carried out on the same structure B fiber samples by launching the light vertically onto the fiber cladding; the transmitted light was collected by the miniature spectrometer placed just beneath the fiber, with the polyimide coating removed to create a detection window. The measured sensitivity was 0.1175 Mol⁻¹, about 14 times lower than that of the longitudinal-direction measurement, and the R² value was only 0.878. Such a deviation is relatively significant, especially for the low-concentration samples, and may be due to light deflection at the multiple layers of the hole-silica interface. To better illustrate the effect of measurement direction, perpendicular detection using a single capillary tube was also conducted. This capillary tube can be viewed as a single microchannel, with the same cross-sectional area as the central hole of the PCF, filled with CoCl2. The resultant data, plotted in the same figure, exhibited a sensitivity of 0.0248 Mol⁻¹; the longitudinal PCF measurement shown in Fig. 14 thus represents roughly a 64-fold increase over this perpendicular capillary detection. Moreover, the measurement range of absorbance efficiency in


[Fig. 13: (a) absorbance spectra (a.u.) vs. wavelength (450–600 nm) for CoCl2 concentrations of 0.01–0.5 Mol; (b) calibration data with linear fit y = 0.3979x + 0.0329, R² = 0.9681]

Fig. 13 (a) Absorbance spectra of CoCl2 solutions with concentrations from 10 to 500 mM; 30 cm PCF samples were used for each infiltration. Absorption at wavelengths ranging from 400 to 800 nm was observed, and experiments were repeated three times for each concentration. (b) Calibration curve for axial absorbance detection in the range of 10–500 mM (Reproduced from Ref. [54])

[Fig. 14: absorbance at 510 nm (a.u.) vs. concentration (Mol); longitudinal detection fit y = 1.6x + 0.0312, R² = 0.996; perpendicular detection fit y = 0.0248x + 0.02, R² = 0.970]

Fig. 14 Measurement and linear fitting of absorbance data at 510 nm for samples with different concentrations using longitudinal and perpendicular detection techniques, respectively (Reproduced from Ref. [54])

the perpendicular direction is at least one order of magnitude lower, which constrains the sensing resolution. The effect of path length on absorption was also investigated by varying the PCF length from 45 down to 33 cm while filled with 0.5 Mol CoCl2 solution. The results are illustrated in Fig. 15, showing the dependence of absorbance on PCF length. The absorbance at 510 nm increased from 0.06 to 0.27 as the fiber length increased from 33 to 45 cm. This calibration plot also demonstrated an excellent linear change of absorbance, with R² of 0.9868, in agreement with the Beer-Lambert law, which states that absorbance depends linearly on path length. Sensitivity could thus be improved by increasing the PCF length to detect even lower concentrations of absorbing analytes; however, this was not demonstrated in this work owing to the limited fiber samples. PCFs could be widely used either as stand-alone flow cells or integrated into microfluidic chips to detect absorbing species such as ions, alkaloids, and biomolecules. Owing to their robustness and flexibility, PCFs can easily be coiled up to minimize their footprint, rendering them suitable for microchip absorbance detection. Bending (or coiling) and temperature are the typical external factors that may influence the guiding properties and the interaction strength. By bending the fiber with diameters varying from 80 to 20 mm, the normalized absorbance change increased from approximately 0.3% to 8%. Relating the bending diameter to absorbance reveals a sensitivity on the order of 10⁻³ mm⁻¹, as shown in Fig. 16a. As


[Fig. 15: absorbance at 510 nm vs. fiber length (32–46 cm); linear fit y = 0.018x − 0.553, R² = 0.9868]

Fig. 15 Calibration plot for PCF with fiber lengths varying from 34 to 45 cm. The fiber was filled with 500 mM CoCl2 solution (Reproduced from Ref. [56])

[Fig. 16: (a) absorbance change vs. bending diameter (20–80 mm), fit y = −0.0013x + 0.103, R² = 0.9925; (b) absorbance change (×10⁻³) vs. temperature (30–70 °C), fit y = 4.10×10⁻⁴x + 0.005, R² = 0.972]

Fig. 16 Calibration plots for PCF with (a) bending diameters varying from 20 to 80 mm and (b) temperatures ranging from 30 °C to 70 °C. The fiber was filled with 500 mM CoCl2 solution, with a length of 30 cm

the fiber is bent, the cladding modes tend to radiate outward when the phase-matching condition occurs. However, the guided-mode profile in an optical fiber waveguide is relatively stable, which results in the stable bending performance. The effect of temperature was also studied by placing the fiber in a water bath heated between 30 °C and 70 °C on a hot plate. The results exhibit a thermal stability


of 10⁻⁴ °C⁻¹, as shown in Fig. 16b. This is likely attributable to the homogeneity of the PCF material: PCFs are typically made of only a single material (fused silica), so a lower and more uniform thermal expansion coefficient can be expected. Thermal stability has been observed in many PCF-based devices, such as gratings [60], interferometers [61], and lasers [62]. The pure silica material in PCFs is also chemically and biologically inert compared with their polymer counterparts [63], and it prevents the evaporation of water, which makes the PCF a suitable candidate for chemical sensing.
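The calibration curves discussed above are ordinary least-squares fits of absorbance against concentration (or length). A minimal self-contained sketch, using hypothetical calibration points chosen to mimic the roughly 1.6 Mol⁻¹ response of structure B:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b, R^2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical calibration points (concentration in mol/L, absorbance in a.u.)
conc = [0.01, 0.02, 0.05, 0.10, 0.20, 0.50]
absb = [0.05, 0.06, 0.11, 0.19, 0.35, 0.83]
slope, intercept, r2 = linear_fit(conc, absb)
# slope is the sensitivity (per Mol); an unknown concentration is then
# recovered as (A_measured - intercept) / slope.
```

The slope of such a fit is the reported sensitivity, and R² quantifies the linearity of the absorbance response.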

Conclusion

An evanescent field absorption sensor (EFAS) using a short length of PCF has been demonstrated. The results show that the enhanced sensitivity stems from the possibility of achieving a long interaction length while remaining compact and requiring only submicroliter sample volumes. The evanescent field in the liquid-infiltrated fiber was theoretically analyzed in a systematic manner. By infiltrating the microstructured waveguides with CoCl2 solutions, sensitive absorption detection from the transmission spectrum was obtained. Excellent linearity was observed between absorbance and liquid concentration, as well as with PCF length. Comparison of the measurement results from the two structures reveals that the central air hole can effectively enhance the evanescent field and hence significantly increase the absorption sensitivity; an absorption sensitivity of up to 1.6 Mol⁻¹ was achieved. In addition, the sensitivity of the longitudinal detection method was observed to be at least 60 times higher than that of the perpendicular measurement technique. The temperature- and bending-related experiments further revealed good thermal and bending stabilities and highlight the PCF's potential as a candidate for quantitative analysis of analytes even in harsh environments.

References

1. Russell PS (2001) A neat idea. IEE Rev 57:19–23
2. Russell PS (2003) Photonic crystal fibers. Science 299:358–362
3. Knight JC (2003) Photonic crystal fibers. Nature 424:847–851
4. Cerqueira A (2010) Recent progress and novel applications of photonic crystal fibers. Rep Prog Phys 73:024401–024421
5. Knight JC, Birks TA, Russell PS, Atkin DM (1996) All-silica single-mode optical fiber with photonic crystal cladding. Opt Lett 21:1547–1549
6. Birks TA, Knight JC, Russell PS (1997) Endlessly single-mode photonic crystal fiber. Opt Lett 22:961–963
7. Kurokawa K, Nakajima K, Tsujikawa K, Yamamoto T, Tajima K (2009) Ultra-wideband transmission over low loss PCF. J Lightw Technol 27:1653–1662
8. Dai J, Harrington JA (1997) High-peak-power, pulsed CO2 laser delivery by hollow glass waveguides. Appl Optics 36:5072–5077


9. Cregan RF, Mangan BJ, Knight JC, Birks TA, Russell PS, Roberts PJ, Allan DC (1999) Single-mode photonic band gap guidance of light in air. Science 285:1537–1539
10. Roberts PJ, Couny F, Sabert H, Mangan BJ, Williams DP, Farr L, Mason MW, Tomlinson A, Birks TA, Knight JC, Russell PS (2005) Ultimate low loss of hollow-core photonic crystal fibers. Opt Express 13:236–244
11. Allan DC, West JA, Fajardo JC, Gallagher MT, Koch KW, Borrelli NF (2001) Photonic crystal fibers: effective index and band gap guidance. In: Photonic crystals and light localization in the 21st century, vol 563. Springer Netherlands, Dordrecht, pp 305–320
12. Kumar VVRK, George AK, Reeves WH, Knight JC, Russell PS (2002) Extruded soft glass photonic crystal fiber for ultrabroad supercontinuum generation. Opt Express 10:1520–1525
13. Monro TM, Kiang KM, Lee JH, Frampton K, Yusoff Z, Moore R, Tucknott J, Hewak DW, Rutt HN, Richardson DJ (2002) High nonlinearity extruded single-mode holey optical fibers. Paper presented at the optical fiber communication conference, Anaheim, 17 Mar 2002
14. Feng X, Mairaj AK, Hewak DW, Monroe TM (2004) Towards high-index glass based monomode holey fiber with large mode area. Electron Lett 40:167–169
15. Mori A, Shikano K, Enbutsu K, Oikawa KM, Kato KN, Aozasa S (2004) 1.5 μm band zero-dispersion shifted tellurite photonic crystal fibre with a nonlinear coefficient of 657 W⁻¹km⁻¹. Paper presented at the 30th European conference on optical communication, Stockholm, 5–9 Sept 2004
16. Chen C, Laronche A, Bouwmans G, Bigot L, Quiquempois Y, Albert J (2008) Sensitivity of photonic crystal fiber modes to temperature, strain and external refractive index. Opt Express 16:9645–9653
17. Benabid F, Couny F, Knight JC, Birks TA, Russell PS (2005) Compact, stable and efficient all-fibre gas cells using hollow-core photonic crystal fibres. Nature 434:488–491
18. Woliński TR, Ertman S, Lesiak P, Domański AW, Czapla A, Dąbrowski R, Nowinowski-Kruszelicki E, Wójcik J (2006) Photonic liquid crystal fibers – a new challenge for fiber optics and liquid crystals photonics. Opt Electron Rev 14:329–334
19. Hassani A, Skorobogatiy M (2006) Design of the microstructured optical fiber-based surface plasmon resonance sensors with enhanced microfluidics. Opt Express 14:11616–11621
20. Yu X, Zhang Y, Pan SS, Shum P, Yan M, Leviatan Y, Li CM (2010) A selectively coated photonic crystal fiber based surface plasmon resonance sensor. J Opt 12:015055
21. Yan H, Gu C, Yang CX, Liu J, Jin GF, Zhang JT, Hou LT, Yao Y (2006) Hollow core photonic crystal fiber surface-enhanced Raman probe. Appl Phys Lett 89:204101
22. Yan H, Liu J, Yang CX, Jin GF, Gu C, Hou LT (2008) Novel index-guided photonic crystal fiber surface-enhanced Raman scattering probe. Opt Express 16:8300–8305
23. Wadsworth W, Witkowska A, Leon-Saval S, Birks TA (2005) Hole inflation and tapering of stock photonic crystal fibres. Opt Express 13:6541–6549
24. François A, Ebendorff-Heidepriem A, Monro TM (2009) Comparison of surface functionalization processes for optical fibre biosensing applications. Paper presented at the 20th international conference on optical fibre sensors, Edinburgh, 7–8 Oct 2009
25. Rindorf L, Høiby PE, Jensen JB, Pedersen LH, Bang O, Geschke O (2006) Towards biochips using microstructured optical fiber sensors. Anal Bioanal Chem 385:1370–1375
26. Padmanabhan S, Shinoj VK, Murukeshan VM, Padmanabhan P (2010) Highly sensitive optical detection of specific protein in breast cancer cells using microstructured fiber in extremely low sample volume. J Biomed Opt 15:017005
27. Rindorf L, Jensen JB, Dufva M, Pedersen LH, Høiby PE (2006) Photonic crystal fiber long-period gratings for biochemical sensing. Opt Express 14:8224–8231
28. Passaro D, Foroni M, Poli F, Cucinotta A, Selleri S, Lægsgaard J, Bjarklev AO (2008) All-silica hollow-core microstructured Bragg fibers for biosensor application. IEEE Sens J 8:1280–1286
29. Yang XH, Wang LL (2007) Fluorescence pH probe based on microstructured polymer optical fiber. Opt Express 15:16478–16483
30. Sharma AK, Jha R, Gupta BD (2007) Fiber-optic sensors based on surface plasmon resonance: a comprehensive review. IEEE Sens J 7:1118–1129


31. Xie Z, Lu Y, Wei H, Yan J, Wang P, Ming H (2009) Broad spectral photonic crystal fiber surface enhanced Raman scattering probe. Appl Phys B 95:751–755
32. Peacock AC, Amezcua-Correa A, Yang J, Sazio PJA, Howdle SM (2008) Highly surface enhanced Raman scattering using microstructured optical fibers with enhanced plasmonic interactions. Appl Phys Lett 92:141113
33. Oo MKK, Han Y, Martini R, Sukhishvili S, Du H (2009) Forward-propagating surface-enhanced Raman scattering and intensity distribution in photonic crystal fiber with immobilized Ag nanoparticles. Opt Lett 34:968–970
34. Oo MKK, Han Y, Kanka J, Sukhishvili S, Du H (2010) Structure fits the purpose: photonic crystal fibers for evanescent-field surface-enhanced Raman spectroscopy. Opt Lett 35:968–970
35. Cox FM, Argyros A, Large MCJ, Kalluri S (2007) Surface enhanced Raman scattering in a hollow core microstructured optical fiber. Opt Express 15:13675–13681
36. Zhang Y, Shi C, Gu C, Seballos L, Zhang J (2007) Liquid core photonic crystal fiber sensor based on surface enhanced Raman scattering. Appl Phys Lett 90:193504
37. Yang X, Shi C, Newhouse R, Zhang J, Gu C (2011) Hollow-core photonic crystal fibers for surface-enhanced Raman scattering probes. Int J Opt 754610:1–11
38. Han Y, Tan S, Oo MKK, Pristinski D, Sukhishvili S, Du H (2010) Towards full-length accumulative surface-enhanced Raman scattering-active photonic crystal fibers. Adv Mater 22:2647–2651
39. Schroder K, Csaki A, Schwuchow A, Jahn F, Strelau K, Latka I, Henkel T, Malsch D, Schuster K, Weber K, Schneider T, Moller R, Fritzsche W (2012) Functionalization of microstructured optical fibers by internal nanoparticle mono-layers for plasmonic biosensor applications. IEEE Sens J 12:218–224
40. Addison CJ, Brolo AG (2006) Nanoparticle-containing structures as a substrate for surface-enhanced Raman scattering. Langmuir 22:8696–8702
41. Andrade GFS, Fan M, Brolo AG (2010) Multilayer silver nanoparticles-modified optical fiber tip for high performance SERS remote sensing. Biosens Bioelectron 25:2270–2275
42. Kumar S, Aaron J, Sokolov K (2008) Directional conjugation of antibodies to nanoparticles for synthesis of multiplexed optical contrast agents with both delivery and targeting moieties. Nat Protoc 3:314–320
43. Zhang J, Li X, Sun X, Li Y (2005) Surface enhanced Raman scattering effects of silver colloids with different shapes. J Phys Chem B 109:12544–12548
44. Fan M, Andrade GFS, Brolo AG (2011) A review on the fabrication of substrates for surface enhanced Raman spectroscopy and their applications in analytical chemistry. Anal Chim Acta 693:7–25
45. Warren-Smith SC, Afshar S, Monro TM (2008) Theoretical study of liquid-immersed exposed-core microstructured optical fibers for sensing. Opt Express 16:9034–9045
46. Argyros A, Pla J (2007) Hollow-core polymer fibers with a Kagome lattice: potential for transmission in the infrared. Opt Express 15:7713–7719
47. Zhang YT, Yong D, Yu X, Xia L, Liu DM, Zhang Y (2013) Amplification of surface-enhanced Raman scattering in photonics crystal fiber using offset launch method. Plasmonics 8:209–215
48. Jensen JB, Pedersen LH, Høiby PE, Nielsen LB, Hansen TP, Folkenberg JR, Riishede J, Noordegraaf D, Nielsen K, Carlsen A, Bjarklev A (2004) Photonic crystal fiber based evanescent-wave sensor for detection of biomolecules in aqueous solutions. Opt Lett 29:1974–1976
49. Monro TM, Belardi W, Furusawa K, Baggett JC, Broderick NGR, Richardson DJ (2001) Sensing with microstructured optical fibres. Meas Sci Technol 12:854–858
50. Fini JM (2004) Microstructure fibres for optical sensing in gases and liquids. Meas Sci Technol 15:1120–1128
51. Tao SQ, Winstead CB, Xian H, Soni K (2002) A highly sensitive hexachromium monitor using water core optical fiber with UV LED. J Environ Monit 4:815–818
52. Smolka S, Barth M, Benson O (2007) Highly efficient fluorescence sensing with hollow core photonic crystal fibers. Opt Express 15:12783–12791


53. Martelli C, Canning J, Stocks D, Crossley MJ (2006) Water-soluble porphyrin detection in a pure-silica photonic crystal fiber. Opt Lett 31:2100–2102
54. Yu X, Sun Y, Ren GB, Shum P, Ngo NQ, Kwok YC (2008) Evanescent field absorption sensor using a pure-silica defected-core photonic crystal fiber. IEEE Photon Technol Lett 20:336–338
55. Sun Y, Yu X, Nguyen N-T, Shum P, Kwok YC (2008) Long path-length axial absorption detection in photonic crystal fiber. J Anal Chem 80:4220–4224
56. Yu X, Kwok YC, Khairudin NA, Shum P (2009) Absorption detection of Cobalt(II) ions in an index-guiding microstructured optical fiber. Sens Actuators B 137:462–466
57. Yu X, Zhang Y, Kwok YC, Shum P (2010) Highly-sensitive photonic crystal fiber based absorption spectroscopy. Sens Actuators B 145:110–113
58. Feit MD, Fleck JA (1980) Computation of mode properties in optical fiber waveguides by a propagating beam method. Appl Optics 19:1154–1164
59. Knight JC, Birks TA, Russel PJ (1996) All-silica single-mode operational fiber with photonic crystal cladding. Opt Lett 21:1547–1549
60. Zhu Y, Shum P, Bay H, Yan M, Yu X, Hu J, Hao J, Lu C (2005) Strain-insensitive and high-temperature long-period gratings inscribed in photonic crystal fiber. Opt Lett 30:367–369
61. Zhao CL, Yang XF, Lu C, Jin W, Demokan MS (2004) Temperature-insensitive interferometer using a highly birefringent photonic crystal fiber loop mirror. IEEE Photon Technol Lett 16:2535–2537
62. Yu X, Liu D, Dong H, Fu S, Dong X, Tang M, Shum P, Ngo NQ (2006) Temperature stability improvement of a multi-wavelength Sagnac loop fiber laser by using HiBi-MOF as birefringent component. Opt Eng 45:044201–044204
63. Cordeiro CMB, Franco MAR, Chesini G, Barretto ECS, Lwin R, Brito Cruz CH, Large MC (2006) Microstructured-core optical fibre for evanescent sensing applications. Opt Express 14:13056–13063

4 Lab-on-a-Chip Device and System for Point-of-Care Applications

Tsung-Feng Wu, Sung Hwan Cho, Yu-Jui Chiu, and Yu-Hwa Lo

Contents
Introduction ............................................................. 88
Optofluidic Lab-on-a-Chip Devices ........................................ 88
Labeled and Label-Free Detection ......................................... 92
Lab-on-a-Chip Flow Cytometers for Point-of-Care Application .............. 100
Signal Detections Utilizing Optical-Coding Technique ..................... 105
Optical Coding on Scattering Signals ..................................... 105
Multicolor Detection Employing Color-Space-Time Coding Technology (COST) . 111
Signal Improvement on Microfluidic Devices ............................... 112
Hydrodynamic Focusing .................................................... 113
Conclusion ............................................................... 117
References ............................................................... 118

Abstract

Optofluidic lab-on-a-chip (LOC) devices have drawn significant attention because of their special attraction to point-of-care applications. In this chapter, the advancement and key accomplishments of LOC devices that utilize optics as means to detect biomedical signals are discussed. The topics covered include optofluidic waveguide designs, detection techniques, and various optofluidic LOC systems and experimental demonstrations. Limited by fabrication and material issues, most LOC devices produce lower signal quality than their large-size counterparts. Therefore, improving the signal quality and detection sensitivity of LOC devices is of tremendous importance to find their clinical applications. To this end, novel techniques that combine the optical-coding method with 3-D flow confinement designs are discussed. The optical-coding method enables us to use digital signal processing to enhance the signal-to-noise ratio, while the 3-D flow focusing designs help reduce variations in cell distribution inside the channel, thus removing some major sources of signal fluctuations. Besides a review of key efforts and accomplishments in the field, this chapter also points out promising pathways toward low-cost, high-performance optofluidic LOC systems for point-of-care clinics.

Keywords

Lab-on-a-chip • Flow cytometry • Optofluidics • Point-of-care • Optical-coding technique • Flow confinement

T.-F. Wu (*) • Y.-J. Chiu
Materials Science and Engineering Program, University of California, San Diego, La Jolla, CA, USA
e-mail: [email protected]; [email protected]

S.H. Cho
NanoCellect Biomedical Inc, San Diego, CA, USA
e-mail: [email protected]

Y.-H. Lo
Materials Science and Engineering Program, University of California, San Diego, La Jolla, CA, USA
Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA, USA
e-mail: [email protected]

© Springer Science+Business Media Dordrecht 2017
A.H.-P. Ho et al. (eds.), Handbook of Photonics for Biomedical Engineering, DOI 10.1007/978-94-007-5052-4_10

Introduction

Optofluidic Lab-on-a-Chip Devices

For biosensing, optofluidics, a term first used in 2003 [1], means the synergistic operation of fluidics and optics for analyzing biological and chemical samples [2–4]. Optical principles such as Raman scattering, fluorescence, refractive index, and absorption have been applied to detect and analyze a wide range of biological samples in fluids [5–10]. The information obtained is then used to retrieve the size, shape, granularity, and physical, chemical, and biological properties of the samples. Over the last decade, the miniaturization of fluidics has led to the creation of the field of microfluidics, and a subsequent development has been to integrate optical components and features onto microfluidic devices for the detection, excitation, and manipulation of particles [6]. That optofluidics has been identified as a special branch of the general field of microfluidics is best illustrated by the increasing number of publications that specifically refer to themselves as "optofluidics" papers (see Fig. 1). For optofluidics, the guidance of light within the device is one of the most important features shared by many applications involving the detection of beads, cells, and molecules [10]. Optical waveguides confine light by the effect of total internal reflection. A majority of optofluidic devices are made of polydimethylsiloxane (PDMS) because PDMS is easy to process by soft lithography [11–13]. Aided by the capillary filling method, integrated optical waveguides consisting of a higher-refractive-index PDMS layer (n ≈ 1.42) as the "core" and a lower-refractive-index PDMS layer (n ≈ 1.407) as the cladding have been

Lab-on-a-Chip Device and System for Point-of-Care Applications

Fig. 1 Statistics of the number of publications that include the keyword of “optofluidic” as the topic, according to Google Scholar. Summarized in September 2012


demonstrated by Lien et al. [14, 15]. For applications where the samples are illuminated within the microfluidic channel, a liquid-core/liquid-cladding waveguide has been demonstrated to deliver light through the microchannel [16]. In this approach, the choice of liquids determines the waveguide characteristics: Wolfe et al. used a CaCl2 solution (n ≈ 1.445) as the core layer and deionized water (n ≈ 1.335) as the cladding layer [17]. Because of mixing between the two liquid layers, the stability of the waveguide is hard to maintain, although its properties can be adjusted dynamically by controlling the flow rates of the liquids. A liquid-core/air-cladding waveguide has also been demonstrated by Lim et al. [18, 19]; however, the light confinement is limited by the difficulty of controlling the air flow, which arises from its low viscosity, high compressibility, and the surface tension at the air/liquid interface. To avoid relying on flow control to assure good waveguide stability, liquid-core/solid-cladding waveguides have been developed using a low-refractive-index solid material, Teflon AF (DuPont Inc.), for the cladding layer. The refractive index of Teflon AF is 1.31, lower than that of water (n = 1.33), thus satisfying the total internal reflection condition for guiding the light beam inside the liquid core. Moreover, Teflon AF is chemically stable and optically transparent from the UV to the IR, so a Teflon AF cladding coating does not influence cell viability or signal detection. Cho et al. have coated a Teflon AF layer on the walls of PDMS microchannels to form such waveguides [20]. The Teflon AF solution is introduced through the PDMS microchannels by vacuum, and the thickness of the coating can be controlled by the viscosity, the strength of the vacuum, and the channel geometry.
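All of the waveguide variants above rely on the same total-internal-reflection requirement, n_core > n_cladding. The sketch below compares the index pairs quoted in the text; casting the comparison in terms of numerical aperture is an added illustration, not part of the original discussion.

```python
import math

def numerical_aperture(n_core, n_clad):
    """NA = sqrt(n_core^2 - n_clad^2); TIR guiding requires n_core > n_clad."""
    if n_core <= n_clad:
        raise ValueError("no TIR guiding: core index must exceed cladding index")
    return math.sqrt(n_core ** 2 - n_clad ** 2)

# Index pairs from the text
pairs = {
    "PDMS core / PDMS cladding":       (1.420, 1.407),
    "CaCl2 core / water cladding":     (1.445, 1.335),
    "water core / Teflon AF cladding": (1.330, 1.310),
}
na = {name: numerical_aperture(nc, ncl) for name, (nc, ncl) in pairs.items()}
# Even the small water/Teflon AF index step (0.02) guides, with NA ~ 0.23.
```

A larger NA accepts light over a wider cone, so the CaCl2/water pair is the easiest to couple into, at the cost of the flow-stability issues noted above.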
Figure 2a shows the optical confinement in the cross section of a microchannel whose wall is coated with the Teflon AF cladding layer. For comparison, the light output at the cross section of a channel without the Teflon AF coating is shown in Fig. 2b. The results in Fig. 2 indicate that Teflon AF-coated waveguides provide excellent light confinement within the liquid core where the biological samples flow through, thus offering a convenient, low-loss design for optical excitation along the flow channel. One salient feature of the optofluidic waveguide design is that it allows multiple points of detection


T.-F. Wu et al.

Fig. 2 (a) The cross-sectional image of light output from a Teflon AF-coated optofluidic waveguide. The dotted line signifies the wall of the microchannel and the solid line marks the liquid core, with the space in between occupied by the Teflon AF-coated cladding layer. (b) The cross-sectional image of light output for the microchannel without Teflon AF-coated layer (Reprinted with permission [20])

because the guided excitation light follows the travel of the particles. In the conventional setup, the laser beam is focused to only one excitation spot, and multipoint detection not only reduces the light intensity by power splitting but also increases the complexity, size, and cost of the optics tremendously. Next, lab-on-a-chip devices, especially microfluidic flow cytometers, will be introduced as a particularly important group of optofluidic devices. The purposes of lab-on-a-chip (LOC) devices are to reduce the required amount of samples and reagents for analysis, to increase the sensitivity of detection, to lower the cost and size of the system, to simplify and expedite the test procedures, and to minimize chances of sample contamination and infection for improved biosafety. The feature dimensions of LOC devices are typically tens or hundreds of micrometers, and the entire device is portable. Combined with electrical, optical, acoustic, and magnetic components, LOC devices are able to detect, concentrate, and isolate chosen subpopulations of samples (cells, beads, molecules) from a mixed population. This chapter will mainly focus on those LOC devices that utilize optical detection. Many research groups have been applying optical detection to the analysis of biological samples. Among these applications, the flow cytometer is one of the most powerful tools for characterization of biological cells because it supports single-cell analysis at high data throughput and is capable of simultaneous detection of multiple biomarkers important for investigations of immunology, cancer, and various diseases. In a flow cytometer system, external light sources are used to interrogate the flowing sample within the channel. The physical and biological properties of samples are investigated from their forward scattering (FS), side scattering (SS), and fluorescence (FL).
The optical signals produced by each single particle are processed to detect and classify the particle individually. Figure 3 shows a generic design of a traditional benchtop flow cytometer, consisting of a fluidic system with sheath flow confinement; an optical system to illuminate and collect signals; a sorting component, if necessary, to isolate the desired samples from the original sample mixture; and an electronic system for data analysis [21].
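As an entirely hypothetical sketch of how per-particle FS and SS amplitudes can be used to classify events (the thresholds and event values below are made up for illustration, not from the chapter):

```python
# Toy gating of flow-cytometry events: FS relates to particle size,
# SS to granularity. Thresholds and signal values are illustrative only.
events = [
    {"fs": 850, "ss": 120},   # large, low-granularity particle
    {"fs": 220, "ss": 310},   # small, granular particle
    {"fs": 900, "ss": 400},   # large, granular particle
]

def classify(ev, fs_gate=500, ss_gate=250):
    """Return a coarse size/granularity label from FS and SS amplitudes."""
    size = "large" if ev["fs"] >= fs_gate else "small"
    gran = "granular" if ev["ss"] >= ss_gate else "smooth"
    return f"{size}/{gran}"

for ev in events:
    print(ev, "->", classify(ev))
```

Real instruments gate in two dimensions on calibrated scatter plots rather than with fixed thresholds, but the principle of separating populations by FS/SS amplitude is the same.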

4 Lab-on-a-Chip Device and System for Point-of-Care Applications

Fig. 3 Schematic of a FACS system that can sense two scattered signals (forward scattering and side scattering) and two distinct fluorescent signals excited by an external laser source. The system contains (1) a fluidic system, (2) an optical system, (3) a sorting system, and (4) an electronic control system for data collection and processing (Reprinted with permission [21])

In order to address the needs of various clinical tests, miniaturization of the flow cytometer is required. Most LOC flow cytometers use hydrodynamic flow confinement to confine the particles to the center of the channel, to facilitate optical detection, and to reduce the signal variations characterized by the coefficient of variation (CV) [21]. However, unlike benchtop flow cytometers, where the samples are confined to a cylindrical core in the quartz tube, a microfabricated LOC flow cytometer provides only 2-dimensional flow confinement, lacking a mechanism to confine the sample flow in the direction normal to the flow plane. As a result, there exists a large variation of particle velocities in the channel since, in a laminar flow, the particle velocity is related to its position in the channel. In addition, LOC devices are usually made of PDMS replicated from a lithographically defined mold. Thus, surface roughness in the mold is transferred to the PDMS microfluidic channel and contributes to the background scattering signals, affecting the detection of the forward and side scattering of the particle. In the later sections, the

latest developments of improving the performance of LOC flow cytometers by employing the techniques of optical coding, surface treatments, 3-dimensional flow confinement, etc., will be discussed.
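The coefficient of variation (CV) used above to quantify signal variation is simply the standard deviation of the detected pulse heights divided by their mean; a minimal sketch (with hypothetical calibration-bead values) is:

```python
import statistics

def coefficient_of_variation(pulse_heights):
    """CV (%) of detected pulse heights; a lower CV indicates tighter
    flow confinement and more uniform optical interrogation."""
    mean = statistics.mean(pulse_heights)
    return 100.0 * statistics.stdev(pulse_heights) / mean

# Hypothetical fluorescence pulse heights from calibration beads (a.u.)
beads = [980, 1010, 1005, 995, 1020, 990]
cv = coefficient_of_variation(beads)
print(f"CV = {cv:.1f}%")
```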

Labeled and Label-Free Detection

As in any in vitro biosensing technique, optofluidic devices for biomedical detection can be classified into two categories: labeled and label-free detection. The label-free technique produces signals from samples without attaching fluorescent dyes, quantum dots, or beads to the samples [22–24]. On the other hand, labeled detection binds fluorescent dyes, proteins, or beads to samples, so the signals are generated from the labels rather than the samples themselves [16, 25–28]. The label-free technique offers lower cost and faster results and is generally more desirable where feasible. However, labeled detection still dominates the field today because it provides more specific and accurate detection for most applications. The following contents will cover these two approaches in the context of optofluidics.

Label-Free Detection

Based on the properties of the samples one wants to detect, label-free detection mainly includes light scattering, Raman scattering, and surface plasmon resonance. Each detection technique can be utilized individually or in combination to collect differentiable signals.

Light Scattering

The measurement of light scattering is relatively straightforward for optofluidic sensors. When particles suspended in the fluid are interrogated with an external light source, the refractive index difference between the particles and the fluid medium, or the refractive index differences of organelles within the cells, can generate scattering signals. The intensity of the scattered light is determined by the refractive index difference as well as the size, shape, and orientation of the cells [29–31]. Therefore, light scattering signals can be used to identify cells or particles. The scattering signals are usually collected at two different angles. The scattering close to the incident light beam is known as the FS signal, and the scattering at 90° from the incident light is referred to as the SS signal. The forward scattering signal can be used to retrieve the particle size since it is measured at small angles (0.5°–5°) from the incident light source and its intensity is related to the volume of the particle in the flow medium. In some scenarios, the shape of the cells in the fluid medium is not spherical, so the orientation of the cells might cause different forward scattering signals. The intensity of side scattering reveals information about the intracellular structures and is particularly sensitive to the granularity and internal structures (e.g., mitochondria) of the cells. Since the forward scattering and side scattering signals contain different information (i.e., size and granularity) about cells, they have been exploited for unlabeled cell classification in flow cytometers, although with limited resolution. Figure 4 shows an on-chip integrated microfluidic


Fig. 4 The prototype of a microfluidic cytometry chip with on-chip waveguides and lenses for an excitation source (EX), a forward scatter collection line (FS) including a beam stop (BS), a side scatter collection line (SS), a large-angle scatter collection line (LAS), and a line for fluorescence collection (FL) (Reprinted with permission [32])

flow cytometer with integrated waveguides for the collection of forward scattering, side scattering, large-angle scattering, and fluorescence signals [32].

Raman Scattering

Raman spectroscopy is a technique that senses the spectrum to investigate vibrational and rotational modes of molecules. Raman scattering occurs when the incident light interacts with the electron cloud and molecular bonds and passes energy to (or receives energy from) phonons. The energy shifts between the excitation light and the scattered light due to Raman scattering carry information about the rotational or vibrational states of molecules, which can be used for molecule detection or identification. However, the efficiency of Raman scattering is very low, so the intensities of Raman scattering are too weak for most applications. To address this problem, surface-enhanced Raman spectroscopy (SERS) has been developed. The generally accepted theory for SERS is that the enhancement of Raman scattering is due to the concentrated electromagnetic field near metallic nanoparticles and molecular amplification (Raman probes). Combining both effects, the Raman scattering signal can be enhanced by a factor of up to 10^14, although most reported data show a much lower yet still significant enhancement in Raman scattering efficiency [33, 34]. Piorek et al. presented a microfluidic device for sensitive and real-time detection using SERS [8]. This device can quantitatively monitor the extent of colloid aggregation based on the change of the SERS intensity and simultaneously observe the aggregation process of nanoparticles, as shown in Fig. 5.
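The Raman shift plotted on the spectral axis of a SERS measurement such as Fig. 5 is just the wavenumber difference between the excitation and scattered wavelengths. A one-line helper (the 559 nm Stokes wavelength below is illustrative, not from the chapter):

```python
def raman_shift_cm1(lambda_ex_nm, lambda_scat_nm):
    """Raman shift in wavenumbers (cm^-1): 1e7 * (1/lambda_ex - 1/lambda_scat),
    with both wavelengths given in nanometers."""
    return 1e7 * (1.0 / lambda_ex_nm - 1.0 / lambda_scat_nm)

# 514.5 nm excitation (as in Fig. 5) and an illustrative Stokes line at 559 nm
print(f"{raman_shift_cm1(514.5, 559.0):.0f} cm^-1")
```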



Fig. 5 (a) Schematic of microfluidic sensor for gas-phase species analysis using surface-enhanced Raman spectroscopy. (b) SERS spectra of 4-aminobenzenethiol (4-ABT) from the microfluidic sensor, obtained by the stepwise measurement along the microchannel in 10-μm increments (Copyright (2007) National Academy of Sciences, USA [8])

Surface Plasmon Resonance (SPR)

Surface plasmons are modes of electron oscillation at the interface between the molecules and the metal surface, and the resonant frequencies of oscillation are determined by the dielectric properties of the metal and the molecules. When the electromagnetic field from an external light source matches the electron oscillations in polarization and frequency, strong coupling occurs between the surface plasmon and the incident light, revealing a light absorption peak at the resonant frequencies (wavelengths) [35, 36]. Both the strength and the spectrum of the light absorption carry information about the concentration and characteristics of molecules (e.g., proteins) ligated to the functionalized metal surface. Typically, light is introduced to the interface between the metal and the analyte-containing medium via a prism. To exploit the surface plasmon effect, the device generally uses the Kretschmann configuration [37], where a metal thin film is directly deposited on a prism and the evanescent optical wave interacts with the metal film via prism coupling to excite surface plasmons. When the wave vector or incident angle of the light matches the wave vector of the surface plasmon, resonant coupling occurs, giving rise to strong absorption of the incident light. By monitoring the shift of the incident angle for resonant coupling and the strength of the light absorption, one can obtain information about the concentration, reaction kinetics, and affinity of biological molecules as well as cell phenotype [38]. Shan et al. also harnessed the surface plasmon effect in microfluidic devices to measure the surface charge density and particle heights [39, 40]. As shown in Fig. 6, a modified layer of Au is deposited onto the glass. The concentration and size of particles on the surface influence the surface charge density and produce signals for sample identification. Wang et al. have also demonstrated surface plasmon


Fig. 6 Schematic illustration of using surface plasmonic effects to measure the surface charge density and particle heights (Reprinted with permission [40])


resonance microscopy (SPRM) for label-free imaging, allowing detection and size measurement of single viral particles [41]. Figure 7 shows the images of individual H1N1 influenza viral particles in the microfluidic channel for the study of the binding activities of viral particles.
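The Kretschmann matching condition described above can be turned into a rough numerical estimate of the resonance angle. The sketch below assumes the standard surface-plasmon dispersion k_spp = k0·sqrt(ε_m·ε_d/(ε_m + ε_d)); the gold permittivity (Re(ε) ≈ −11.6 at 633 nm), water sample, and BK7 prism index are typical literature numbers used here only as assumptions, not values from the chapter:

```python
import math

def spr_angle_deg(eps_metal_real, n_sample, n_prism):
    """Kretschmann resonance angle (deg): the in-plane photon wavevector
    n_prism*k0*sin(theta) must match k_spp = k0*sqrt(eps_m*eps_d/(eps_m+eps_d))."""
    eps_d = n_sample**2
    n_spp = math.sqrt(eps_metal_real * eps_d / (eps_metal_real + eps_d))
    return math.degrees(math.asin(n_spp / n_prism))

# Illustrative values: gold film at 633 nm, water sample, BK7 prism
print(f"resonance angle ~ {spr_angle_deg(-11.6, 1.33, 1.515):.1f} deg")
```

A small change in the sample index n shifts this angle measurably, which is the basis of the angle-interrogation SPR sensing described in the text.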

Labeled Detection

Besides the label-free methods discussed above, a wide range of labeled detection techniques are omnipresent in the fields of biology and biomedicine. Although these methods require extra sample preparation steps compared to the unlabeled methods, they usually offer higher detection sensitivity, accuracy, and specificity. A few commonly used labeled detection techniques are introduced in this section.

On-Chip Fluorescence Detection

Fluorescence detection is the most popular method to identify and enumerate cells and investigate their physical and biochemical properties. In conventional fluorescence detection settings, however, a bulky peripheral setup is necessary. For example,


Fig. 7 (a) SPRM images of H1N1 influenza A virus and three types of silica nanoparticles dispersed in PBS buffer, respectively. (b) and (c) SPR intensity profiles of individual virus and particles in the X and Y directions (Reprinted with permission [41])

a fluorescence microscope requires bulky lasers and optical components such as beam splitters, dichroic mirrors, optical filters, and photodetectors. This becomes an obstacle when deploying lab-on-a-chip fluorescence detection systems for point-of-care needs, particularly in less-developed countries and areas. In order to reduce the cost, size, and complexity of fluorescence detection systems, researchers have tried to integrate key optical components to create monolithic optofluidic systems. For example, waveguides and lenses have been integrated to collect and guide fluorescence signals to a detector array that is also integrated on the same chip [17, 42, 43]. The integrated waveguides can also bring excitation light to the interrogation zone of the microfluidic channels. In addition, on-chip lenses to collimate the excitation beam and to collect signals from fluorescently labeled cells have been used, thus removing the need for off-chip lenses. Since the lenses, the waveguides, and the channels are pre-aligned lithographically on a chip, the approach of on-chip illumination and detection assures precise alignment even in environments of constant motion and vibration. The microfabrication of on-chip lenses enables the design of customized lens profiles, such as aspherical lenses, at much lower cost than conventional fabrication methods for aspherical lenses such as diamond turning. More recently, researchers have demonstrated optofluidic devices with tunable optical components in which the curvature of a lens is controlled by the flow ratio between the core stream and the cladding stream. Tang et al. demonstrated on-chip tunable lenses as shown in Fig. 8 [44]. The curvature of the tunable liquid lens can be controlled by adjusting the rates of the sample flow (i.e., the core stream) and the sheath flow (i.e., the cladding stream) [44]. Xiong et al.


Fig. 8 (a) Schematics of the optofluidic prism. The shape is tunable by the ratio of flow rates: (b) symmetric prism, (c) right-shifting asymmetric prism, and (d) left-shifting asymmetric prism. (e) Schematic of microfluidic cylindrical liquid lens for the liquid-core/liquid-cladding on-chip device. (f) The image of the liquid lens to focus the laser coming from the right and out to the left (Reprinted with permission [44])


created an on-chip optofluidic prism capable of separating multiple fluorescence emission wavelengths [45]. In combination with on-chip detector arrays, the optofluidic prism can significantly reduce the optical path from fluorescently labeled targets to the detector arrays. The interface between the two liquids is optically smooth, and the potential light loss is also minimized in these configurations. Other researchers pushed the concept even further, developing on-chip integrated fluorescence filters by adding fluorescent dyes into the PDMS layer [46, 47]. By mixing certain dyes or even food colors, they demonstrated various optical band-pass and long-pass filters. Adjusting the concentration and composition of the dye additives and controlling the thickness of the layer can precisely engineer the optical cutoff wavelengths and transmission spectra. For point-of-care applications, microfluidic devices that can measure the absolute number and percentage of CD4+ cells in patient blood have been demonstrated by various research groups [48].

Fluorescently Labeled Detection on Chip: Flow and Image Cytometers

There are several ways to fluorescently label or stain cells: antibodies or proteins tagged with fluorochromes and protein- or DNA-conjugated quantum dots or nanoparticles, to name a few. Conventional flow-based or image-based systems are bulky, expensive, and complicated to operate, thus motivating the development of microfluidic versions of flow cytometers for point-of-care clinical applications.

Microfluidic Flow Cytometer (MFC)

While flow cytometry is an essential tool in biomedical research and sometimes in clinical labs, the detection of multiple fluorophores often requires complicated optical systems equipped with many dichroic mirrors, optical filters, and photomultiplier tubes (PMTs), making the system expensive and bulky. This is one of the biggest obstacles to deploying flow cytometry in less-developed countries or individual clinical labs.
Microfluidic flow cytometers are therefore promising tools for point-of-care applications because they are compact, portable, disposable, and of low cost. Due to these advantages over benchtop flow cytometers, extensive efforts have been made over the past decades to develop microfluidic flow cytometers [5, 6, 42, 43, 49]. Recently, there has been promising work on enhancing the detection sensitivity of microfluidic flow cytometers by creative methods such as space-time coding of fluorescence signals in the time domain [46, 50–53].

Microfluidic Image Cytometer (MIC)

Microfluidic image cytometry (MIC), which combines the advantages of microfluidic- and microscopy-based cytometry, is capable of quantitative, single-cell proteomic analysis of multiple signaling molecules using a small number of cells [54]. Together with advanced bioinformatic analysis, the MIC platform enables in vitro molecular diagnostics for pathology analysis and personalized medicine in the future.
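The space-time coding idea mentioned above for microfluidic flow cytometers can be sketched as follows: a particle crossing a patterned illumination mask emits a coded burst, and cross-correlating the detector trace with the known code concentrates the signal energy, improving detection SNR. The code, trace, and sample indices below are made up for illustration:

```python
# Sketch of space-time coded detection (illustrative values only).
code = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # spatial transmission pattern of the mask

# Detector trace: zeros with one coded burst starting at sample 7
trace = [0] * 7 + code + [0] * 8

def correlate(trace, code):
    """Sliding-window cross-correlation of the trace with the known code."""
    n = len(code)
    return [sum(trace[i + j] * code[j] for j in range(n))
            for i in range(len(trace) - n + 1)]

corr = correlate(trace, code)
print("burst located at sample", corr.index(max(corr)))  # -> 7
```

In a real device the same matched-filter operation pulls weak fluorescence bursts out of background noise, since only a true coded burst aligns with all the ones in the code at once.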

[Figure: MIC technology and systems pathology analysis — (i) a microfluidic cell array chip for immunocytochemistry of a dissociated tumor specimen, (ii) microscope-based cytometry for multiparametric (EGFR, PTEN, pAkt, pS6) single-cell quantification of PI3K/Akt/mTOR signaling, followed by SOM clustering of the single-cell data]

The decay of the measured autocorrelation function depends on the mean-square displacement of the scatterers, ⟨Δr²(τ)⟩, which characterizes the motion of the scatterers. k₀ is the wavenumber of light in the medium, and r₁ (r₂) are the distances between the source (image) and the detector on the surface, as described in section "Diffuse Reflectance Spectroscopy." For the random ballistic flow model, ⟨Δr²(τ)⟩ = v²τ², where v² is the second moment of the cell velocity distribution. For the case of diffusive motion, ⟨Δr²(τ)⟩ = 6D_Bτ, where D_B is the effective diffusion coefficient of the tissue scatterers [57]. It has been observed that the diffusion model fits the autocorrelation curves (Fig. 5c) well (significantly better than the random flow model), and αD_B characterizes the blood flow in deep tissue in a broad range of studies including murine and human tumors and brain function [28, 57–64]. Here α is a factor representing the probability that a scattering event in the tissue is from a moving scatterer (α is generally proportional to the tissue blood volume fraction). Generally, relative blood flow, rBF, is reported to describe blood flow changes during therapy: rBF is the blood flow parameter measured relative to its pretreatment value, i.e., rBF = αD_B / αD_B(baseline).
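A minimal sketch of how αD_B enters a DCS analysis, assuming the textbook infinite-medium field autocorrelation model g1(τ) = exp(−(K(τ) − K(0))·r) with K(τ)² = 3μ_s′μ_a + 6μ_s′²k₀²αD_Bτ; all tissue and flow values below are illustrative, not taken from the chapter:

```python
import math

def g1_infinite(tau, mu_a, mu_sp, k0, alpha_Db, r):
    """Normalized field autocorrelation g1(tau) for an infinite homogeneous
    medium with diffusive scatterer motion, <dr^2(tau)> = 6*alpha_Db*tau,
    using K(tau)^2 = 3*mu_sp*mu_a + 6*mu_sp^2*k0^2*alpha_Db*tau."""
    K_tau = math.sqrt(3 * mu_sp * mu_a + 6 * mu_sp**2 * k0**2 * alpha_Db * tau)
    K_0 = math.sqrt(3 * mu_sp * mu_a)
    return math.exp(-(K_tau - K_0) * r)

# Illustrative tissue-like values (cm^-1 units; 785 nm light, n = 1.37)
k0 = 2 * math.pi * 1.37 / 785e-7          # wavenumber in the medium, cm^-1
mu_a, mu_sp, r = 0.1, 10.0, 2.5           # absorption, reduced scattering, separation
baseline_Db, treated_Db = 1e-8, 0.7e-8    # alpha*D_B values (cm^2/s), hypothetical

rBF = treated_Db / baseline_Db            # relative blood flow, as defined in the text
print(f"rBF = {rBF:.2f}")
print(f"g1(1e-5 s) at baseline = {g1_infinite(1e-5, mu_a, mu_sp, k0, baseline_Db, r):.3f}")
```

In practice αD_B is obtained by fitting measured autocorrelation curves with the appropriate (semi-infinite) solution, and rBF is then reported as the ratio to the pretreatment fit.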

7 Monitoring Cancer Therapy with Diffuse Optical Methods

U. Sunar and D.J. Rohrbach

Diffuse Fluorescence Spectroscopy

Tumor therapy monitoring depends on localizing (seeing) the tumor before treating it [65]. The success of the therapy depends on the stage and size of the tumor. Detecting the tumor at the earliest stage and planning the treatment protocols accordingly will improve therapeutic outcome significantly. For accurate therapy planning under any treatment approach, the extent of the tumor has to be mapped [66]. Lesion structures, such as micronodules, may not be clinically evident and can result in tumor recurrence. Since some tumors, like oral lesions, usually exhibit a multifocal, wide-field pattern of invasion and occur at diverse sites, tumor demarcation during treatment planning can pose considerable problems for clinicians. One approach is to enhance the tumor contrast for visualizing early cancers via tumor-selective contrast agents. Fluorescence imaging has played a significant role in visualizing oral tumors. Autofluorescence imaging, which utilizes intrinsic tissue fluorophores (e.g., collagen, NADH, and FAD), has demonstrated usefulness in clinical settings for improved screening of suspicious lesions [67–70]. Since background tissue fluorescence is much smaller than absorption, exogenously administered fluorescence contrast can greatly increase contrast for tissue demarcation. Photosensitizers (PS) are exogenous fluorophores that have demonstrated clinical utility in the therapy and diagnosis of several early malignancies, such as oral, esophageal, and bladder cancers, as they preferentially accumulate in dysplastic and malignant cells [71–82]. While the administered PS dose affects the overall tumor accumulation [83], there will still be patient-to-patient variation as well as variation within the tumor. In addition, strong tissue absorption and scattering in living tissue distort the raw fluorescence signal (intrinsic and exogenous), confounding the true fluorescence contrast.
Thus, accurate methods to address these issues are needed for a quantitative fluorescence imaging approach [84, 85]. In this section we describe diffuse waves propagating in fluorescent media. Fluorescence in diffuse media is a two-part process: when diffuse waves generated from a point source propagate in the medium from the source position to the fluorophore position, a fluorophore in the medium is excited and acts as a secondary point source of fluorescent diffuse waves, which propagate to the detector (Fig. 6).

Fig. 6 Excitation source propagation, fluorescence generation, and propagation

If we


define the fluorescence transfer function as T(r) = εηN(r) for the simple case of CW diffuse waves, where ε is the fluorophore extinction coefficient, η is the fluorescence quantum yield, and N(r) is the fluorophore distribution, then the generated fluorescence wave in the scattering medium is the summation over all the fluorophores [44, 86]:

Φ_fl(r_s, r_d) = ∫ Φ(r_s, r) T(r) G(r, r_d) dr   (8)

Here we refer to Φ_fl as the fluorescent diffuse photon wave, or simply the fluorescence intensity signal in diffuse media; G is the Green's function, or propagator; the first term in the integral, Φ(r_s, r), represents diffuse photon propagation from the excitation source to a fluorophore; and the term T(r)G(r, r_d) represents the propagation of the generated fluorescence signal from the fluorophore to the detector. For diffuse fluorescence spectroscopy (DFS) data analysis, the fluorescence and optical parameter distributions are typically assumed to be homogeneous. The tissue fluorescence signal (Φ_fl) is usually assumed to be a linear combination of the injected drug fluorescence (e.g., PS fluorescence) and tissue autofluorescence: Φ_fl = C_PS Φ_PS + C_Auto Φ_Auto, where C_PS and C_Auto are the spectral amplitudes of the PS fluorescence and autofluorescence, respectively. It should be noted that the extracted spectral amplitudes do not correspond to absolute concentrations, since the raw fluorescence signal is affected by the optical properties at both the excitation and emission wavelengths [87, 88]. A low fluorescence signal can mean a low fluorophore concentration or high optical signal attenuation due to optical absorption and scattering. The ultimate aim of DFS is to quantify the absolute fluorescence (drug) concentration in vivo, and this is a field of active research. One simple approach is to normalize the fluorescence signal with the diffuse reflectance data (R_data) to reduce the effects of optical attenuation on the raw fluorescence signal [89, 90]. Another approach is to normalize the fluorescence signal by the autofluorescence [91, 92]. In addition, small-diameter optical fibers minimize the effect of tissue absorption, allowing for improved quantification, as shown by Pogue and Burke [93].
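The linear decomposition Φ_fl = C_PS Φ_PS + C_Auto Φ_Auto described above can be solved by ordinary least squares when the basis spectra are known. A minimal sketch using the 2×2 normal equations (the basis shapes and amplitudes are made up for illustration):

```python
# Linear spectral unmixing of a measured fluorescence spectrum into
# photosensitizer (PS) and autofluorescence components. Basis spectra
# and true amplitudes below are hypothetical.
ps_basis   = [0.0, 0.2, 1.0, 0.5, 0.1]   # PS emission sampled at 5 wavelengths
auto_basis = [1.0, 0.8, 0.5, 0.3, 0.2]   # autofluorescence shape

c_ps_true, c_auto_true = 2.0, 0.5
measured = [c_ps_true * p + c_auto_true * a for p, a in zip(ps_basis, auto_basis)]

# Normal equations for the 2-parameter least-squares fit
spp = sum(p * p for p in ps_basis)
saa = sum(a * a for a in auto_basis)
spa = sum(p * a for p, a in zip(ps_basis, auto_basis))
spm = sum(p * m for p, m in zip(ps_basis, measured))
sam = sum(a * m for a, m in zip(auto_basis, measured))

det = spp * saa - spa * spa
c_ps = (spm * saa - spa * sam) / det
c_auto = (spp * sam - spa * spm) / det
print(f"C_PS = {c_ps:.2f}, C_auto = {c_auto:.2f}")  # recovers 2.00 and 0.50
```

As the text notes, the recovered amplitudes are spectral weights, not absolute concentrations, unless the attenuation by tissue optical properties is also corrected.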

Diffuse Optical Imaging

In some cases tissue can be very heterogeneous, and spectroscopic models that assume tissue homogeneity may not be accurate enough to characterize the tissue. For example, a basal cell carcinoma or melanoma introduces heterogeneity on the relatively homogeneous background tissue. The tumor itself can exhibit significant heterogeneity with respect to oxygen, vascular, or drug distribution. This heterogeneity can extend along the thickness and depth of the tumor. Apart from demarcation purposes, imaging is also important for therapy monitoring, since the localized tumor response might be very different from the global response. In this case a homogeneous model cannot accurately recover heterogeneities. Thus, there has been significant work on volumetric, quantitative imaging of these heterogeneities with optical contrast [63, 94–102].


In diffuse optical tomography (DOT), we obtain a three-dimensional reconstruction of the heterogeneities hidden in the tissue volume based on measurements at the tissue surface. This imaging method is similar to X-ray tomography, but it uses much less energetic photons, resulting in high scattering and low resolution. Optical imaging consists of two main parts: forward and inverse models. The forward model describes photon propagation in the heterogeneous medium to predict the measured signal. The region of interest is divided into voxels, and the forward model provides a weight for each voxel. The weight is defined as the probability for a photon to travel from the source to a given voxel and then from the voxel to the detector. The object is to use the set of measurements to solve for the tissue optical properties (the unknowns). In the inverse model, the difference between the measured and predicted (forward-modeled) signals is minimized to obtain the unknown parameters. For simplicity, it is assumed that only the absorption and/or fluorescence properties exhibit spatial variations; the other properties, such as the scattering parameter, are assumed to be constant. For solving the heterogeneous diffusion equation (Eq. 1), we use the Born or Rytov expansion of the photon waves Φ(r), and for solving the unknowns we use the algebraic reconstruction technique (ART). The details of the technique have been explained before [42, 44, 100], but here the basic mathematical approach is described. For the case of absorption heterogeneity, μ_a can be divided into two parts, a homogeneous background (μ_a⁰) and a spatially varying heterogeneous part (δμ_a): μ_a = μ_a⁰ + δμ_a(r). In the Born expansion we divide the total photon density wave Φ(r_s, r) from a source at r_s measured at r into a linear superposition of its incident (homogeneous) and scattered (heterogeneous) parts: Φ(r_s, r) = Φ_o(r_s, r) + Φ_sc(r_s, r).
The Born approximation also assumes the scattered wave is much smaller than the incident field, i.e., Φ_sc ≪ Φ_o. Then one obtains the heterogeneous solution as [44]

Φ_sc(r_s, r_d) = ∫ Φ_o(r_s, r) O(r) G(r − r_d) dr,   (9)

where O(r) = vδμ_a/D is the heterogeneity term for the case of absorption contrast, and G(r − r_d) = exp(ik|r − r_d|)/(4π|r − r_d|) is the Green's function, or free propagator. The solution implies that the photons pass from the source position r_s to some position r, scatter with an amplitude proportional to the heterogeneity (δμ_a), and then propagate from position r to a detector at r_d. In the Rytov approximation, we expand the photon density wave in exponential form: Φ(r_s, r) = exp(Φ_o(r_s, r) + Φ_sc(r_s, r)). The Rytov approximation does not place a restriction on the magnitude of the scattered field but assumes that the scattered field is slowly varying and much smaller than the perturbation term, |∇Φ_sc| ≪ δμ_a; then the solution simplifies to

Φ_sc(r_s, r_d) = (1/Φ_o(r_s, r_d)) ∫ Φ_o(r_s, r) O(r) G(r − r_d) dr,   (10)


where O(r) = vδμ_a/D is the same heterogeneity term for the case of absorption contrast. Rytov is less restrictive than the Born approximation and has been shown to be more suitable for most biological tissue. The fluorescence solution was already given in Eq. 8. For image reconstruction of the unknown parameter, the region of interest is divided into voxels, and the integral equations are discretized, resulting in a set of linear equations (y = Wx). For example, for the Rytov equation,

Φ_sc(r_si, r_di) = Σ_{j=1}^{N} Φ_o(r_si, r_j) O(r_j) G(r_j − r_di),   (11)

where W_ij^R is the Rytov weight, W_ij^R = Φ_o(r_si, r_j) G(r_j, r_di) v h³ / Φ_o(r_si, r_di), indicating the relative importance of each voxel.

Instruments

In this section, examples of instruments used in preclinical and clinical measurements are presented. First, a multimodal spectroscopic instrument that was used in clinical PDT studies is described briefly, and then a diffuse optical tomography instrument that can work in absorption and fluorescence modes for small-animal imaging is discussed. Then, the optical probes utilized for spectroscopic measurements in preclinical and clinical settings are described.

Multimodal Optical Instrument

The multimodal instrument can assess several parameters that may be required for understanding the complete picture of the mechanisms of a therapy. For example, PDT is a relatively complicated therapy involving mechanisms related to oxygen, PS, and light. To understand all three parameters, one needs to quantify the optical, photosensitizer, and oxygen parameters. The optical parameters allow modeling the light distribution in the tissue. To assess PS distribution and oxygen, one needs to quantify PS fluorescence, absorption properties, and tissue oxygen saturation. An example of a multimodal instrument was recently described [19]. The instrument performs sequential measurements of blood flow (by the DCS method), optical parameters, blood oxygenation, and blood volume (by the DRS method), and fluorescence (by the DFS method). Figure 7a, b shows the picture and schematic diagram of the instrument, respectively, while Fig. 7c shows the instrument in the operating room for PDT monitoring. The DCS instrument has a long-coherence-length laser (CrystaLaser), four single-photon-counting detectors (SPCDs), and a custom-built autocorrelator board. The photodetector outputs were fed into the correlator board, and the intensity autocorrelation functions and photon arrival times were recorded by a computer. After the blood flow measurements, a second laptop initiates fluorescence

7

Monitoring Cancer Therapy with Diffuse Optical Methods

195

Fig. 7 (a) Picture of multimodal clinical optical instrument. (b) Diagram of the instrument (c) during the measurements at the operating room for PDT monitoring

and reflectance data acquisition. In absorption mode, broadband diffuse reflectance measurements were taken by illuminating the tissue with a tungsten halogen lamp and collecting the light with one channel of a two-channel spectrometer. In fluorescence mode, a ~410 nm laser diode excites most photosensitizers in their Soret bands, and the emitted fluorescence, after passing through a 500 nm long-pass filter, is collected with the second channel of the spectrometer. The light sources can be optimized according to the fluorophore of interest.
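Conceptually, the absorption-mode analysis unmixes the measured absorption spectrum into oxy- and deoxyhemoglobin contributions, from which StO2 and THC follow. The sketch below uses made-up extinction coefficients purely for illustration; a real analysis fits tabulated extinction spectra (and typically water and scattering as well).

```python
import numpy as np

# Illustrative extinction coefficients (1/(cm * M)) at four wavelengths.
# These are placeholders for demonstration, not tabulated values.
WAVELENGTHS_NM = [690, 750, 800, 850]
EPS_HBO2 = np.array([300.0, 500.0, 800.0, 1100.0])
EPS_HB = np.array([2000.0, 1400.0, 800.0, 700.0])

def unmix_hemoglobin(mu_a):
    """Least-squares fit of measured absorption mu_a (1/cm) at the above
    wavelengths to mu_a = ln(10) * (eps_HbO2 * C_HbO2 + eps_Hb * C_Hb).
    Returns (C_HbO2, C_Hb, StO2, THC), concentrations in moles/liter."""
    E = np.log(10) * np.column_stack([EPS_HBO2, EPS_HB])
    c_hbo2, c_hb = np.linalg.lstsq(E, np.asarray(mu_a, float), rcond=None)[0]
    thc = c_hbo2 + c_hb                 # total hemoglobin concentration
    sto2 = c_hbo2 / thc                 # blood oxygen saturation
    return c_hbo2, c_hb, sto2, thc
```

Blood oxygen saturation is StO2 = C_HbO2 / (C_HbO2 + C_Hb) and total hemoglobin concentration is THC = C_HbO2 + C_Hb, the two DRS-derived quantities used throughout this chapter.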

196

U. Sunar and D.J. Rohrbach

Fig. 8 Fast and dense sampling whole-body fluorescence tomography instrument. The main parts of the instrument include a fast galvo scanner for different source positions and a CCD camera for detection

Diffuse Optical Tomography Instrument A DOT instrument can be utilized to monitor physiologic parameters such as THC, or the concentration of fluorescent compounds such as PSs, in small animals and humans for deeply seated or thick tumors. These instruments usually work in both absorption and fluorescence modes depending on the chosen filters. If the filter is removed, the instrument images the intrinsic contrast of the absorption and scattering parameters. If a fluorescence long-pass filter is used, the instrument measures the fluorescence contrast of fluorophores, whether intrinsic fluorescence (autofluorescence) or exogenously administered fluorophores such as photosensitizers. An example of a small animal imaging instrument is shown in Fig. 8. The instrument allows dense source and detector sampling with a fast galvo scanner and a CCD detector for improved resolution and sensitivity. In this setup, a laser diode (LD) of appropriate wavelength to excite the desired fluorophore is directed to a beam splitter (BS) that splits the laser beam into two: one beam goes to a photodiode (PD) to monitor laser beam fluctuations, and the other is directed to a lens (L1) that focuses it onto a fast galvo scanner. The two-dimensional galvo (XYGal) scans along the x and y dimensions, creating dense source positions, and on the detection side a lens (L2) coupled with an emission filter (F, bandpass) and a CCD camera collects the emitted fluorescence light.
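The image formation step such an instrument feeds — recovering voxel-wise absorption or fluorescence-yield perturbations x from measurements y through a sensitivity matrix W (for example, rows of Rytov weights) — is commonly posed as regularized least squares. The following is a toy sketch of a Tikhonov inversion, not the instrument's actual reconstruction code:

```python
import numpy as np

def tikhonov_reconstruct(W, y, lam=1e-2):
    """Solve min_x ||W x - y||^2 + lam * ||x||^2 for the voxel
    perturbations x (e.g., delta-mu_a or fluorescence yield) given
    the measurement vector y and sensitivity matrix W."""
    WtW = W.T @ W
    return np.linalg.solve(WtW + lam * np.eye(WtW.shape[0]), W.T @ y)
```

Here lam trades spatial resolution against noise amplification; in practice it is chosen by methods such as the L-curve or cross-validation.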

Optical Probes Probe design is very important in diffuse optical techniques. Each probe must be designed for its intended application. As in ultrasound (US), the primary components of the diffuse optical instrument remain the same, but the probe interfacing the tissue needs to be changed according to the particular physiological application. One should arrange source-detector separations according to the tumor depth. If a tumor


Fig. 9 Different methods of optical imaging. (a) Contact probe can be placed directly on the skin. (b) Different source-detector separations define the interrogation volume of the light. (c) Face of a smaller handheld probe for imaging the oral cavity. (d) Picture of the handheld oral probe. (e) Noncontact probes use lenses to project an image of the fiber onto the skin, allowing the probe to not touch the surface. (f) Interstitial fibers are placed directly into the tumor and are good for deeply seated or thick tumors

is deep, large source-detector separations need to be used. On the other hand, if a tumor is superficial, a small probe with small source-detector separations is desirable. The golden rule is that source-detector separations need to be at least twice the depth of the tumor being investigated. There are three types of optical probes: handheld contact, noncontact, and interstitial probes. Handheld Probe. Figure 9a shows the case of a relatively large probe used in clinical measurements of head and neck tumor patients where deep photon penetration is required. Fibers were arranged in such a way that the tissue was imaged with many source-detector separations (Fig. 9b). Generally, the probe consists of a simple black pad with fibers placed on it. The pad can be constructed of a plastic or rubber material, depending on the desired flexibility; the black color eliminates background light leakage. A smaller probe for use in the oral cavity is similar in concept, but the source and detector fibers are held in a stainless steel tube. Figure 9c, d shows the end face and a picture of the probe containing the fibers used for diffuse reflectance (white), diffuse fluorescence (blue), and diffuse correlation (red) spectroscopy measurements. This probe can be used for measuring superficial malignancies by directly placing its tip on the tissue surface. Noncontact Probe. For continuous measurements in animal models to capture dynamic information during therapies such as antivascular therapy and PDT, a noncontact probe (Fig. 9e) is used. In these cases a noncontact probe is superior because it allows continuous measurements during treatment without shading the surface. Additionally, it is more sanitary and does not perturb the surface with pressure. Care should be taken in the design of the probe to minimize the effects of subject movement.
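The separation rule of thumb above is easy to encode when laying out probe fibers. In the sketch below, probe_separations is a hypothetical helper for a linear multi-distance layout, not a published probe design:

```python
def min_separation(tumor_depth_cm):
    """Rule of thumb from the text: the source-detector separation
    should be at least twice the depth of the tumor."""
    return 2.0 * tumor_depth_cm

def probe_separations(tumor_depth_cm, n_channels=4, step_cm=0.5):
    """Hypothetical linear layout: n_channels separations starting at
    the minimum separation, spaced step_cm apart."""
    start = min_separation(tumor_depth_cm)
    return [start + i * step_cm for i in range(n_channels)]
```

For example, a tumor at 1 cm depth calls for separations of at least 2 cm, which is consistent with the 3 cm maximum separation and ~2 cm penetration depth quoted later for the clinical head and neck measurements.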


Interstitial Probe. Although the instrument stays the same, the handheld surface probe is ill suited for deep tumors, which lie beyond the reach of noninvasive surface measurements, so the probe-tissue interface must be changed to allow interstitial light delivery. For an "interstitial" probe, source and detector fibers are placed inside a catheter and inserted directly into the tumor (Fig. 9f). This allows optical methods to be used on deep tumors far below the skin surface and in internal organs.

Preclinical Applications In this section, several examples of animal studies are presented. Preclinical work is important in the development of drugs and experimental therapies, since each must be tested for adverse effects, optimal dose, and optimized regimens that define the therapeutic dose. These tests need to be performed before translating these concepts to patients in the clinic. Animal studies can also supplement clinical work and provide insight into the related biological mechanisms. Several examples from antivascular and photodynamic therapies are shown below.

Monitoring Antivascular Therapy There is ongoing interest in vascular targeting agents that can modulate the sensitivity and response of tumors by modifying oxygen and blood flow [103–106]. Therefore, monitoring these drugs frequently, and possibly continuously, can provide insight into their working mechanisms and has potential value for their clinical evaluation. Combretastatin A4 phosphate (CA4P) is an example of an antivascular drug that disrupts tumor blood vessels [107–109]. Its effects were tested in K1735 malignant mouse melanoma tumor models. Representative near real-time blood flow kinetics are shown in Fig. 10a, indicating a continuous blood flow decrease [60]. As Fig. 10b summarizes, across nine mice the average blood flow decreased significantly (p < 0.001), by 64% within 1 h. Power Doppler ultrasound images confirmed the optical results. The image of the microbubble contrast agent showed that K1735 is a well-perfused tumor model, with nearly the entire tumor enhanced by the microbubbles (yellow areas in Fig. 10c). One hour after injection of CA4P, many of the vessels had shut down, and perfusion was reduced substantially, as indicated by the less enhanced vasculature in Fig. 10d. The blood flow decrease was accompanied by a significant (p < 0.002) decrease in blood oxygen saturation (StO2), from 42% to 14% in the tumor, due to the reduced blood supply (Fig. 11a). Noninvasive StO2 measurements were confirmed with microscopic analysis of the binding of a hypoxia marker, nitroimidazole (EF5). There was no measurable EF5 binding for the control tumor (Fig. 11b), but the treated tumor showed considerable binding (red fluorescence area in Fig. 11c), indicating that CA4P induced substantial hypoxia. It should be noted that the noninvasive optical measurements and the microscopic analysis are only indirectly related, since StO2 quantifies blood oxygen saturation in the microvasculature while hypoxia indicates oxygen levels in the cells.


Fig. 10 (a) Relative blood flow (rBF) measured by DCS shows the acute effects of the CA4P. (b) Mean percent change (SD) in rBF for N = 9 mice. (c) Microbubble contrast-enhanced power Doppler ultrasound, yellow regions indicate contrast enhancement in perfused blood vessels with uniform enhancement before CA4P. (d) Blood perfusion is reduced post treatment

Fig. 11 (a) Mean percent change (SD) of StO2 in mice (N = 5). (b) Hypoxia marker shows no binding for the control mice. (c) Significant binding (shown in red) in the treated tumors indicating induced hypoxia

Monitoring Photodynamic Therapy PDT uses light to activate a photosensitizer (PS) in the presence of oxygen for cell and tissue destruction. At a specific time after administration, the PS typically accumulates more in the diseased site than in the surrounding normal tissue. At the optimal time point of tumor uptake, light at a wavelength determined by the optical absorption properties of the PS is delivered at a predetermined power to activate the PS and create a photodynamic reaction. Due to the specific uptake of the PS and the localized light


illumination, photodynamic therapy (PDT) is a local therapy rather than a systemic therapy like chemotherapy. It can be repeated without accumulated side effects, in contrast to conventional therapies such as chemotherapy or radiation therapy. The efficacy of PDT is greatly affected by the tumor microenvironment [114, 115]. Tissue oxygen level is crucial for effective PDT since the photochemical reactions necessary for cell killing occur only in the presence of oxygen [116]. Tissue oxygenation is highly affected by vascular parameters such as blood flow and blood oxygenation. Moreover, there needs to be enough photosensitizer present in the target tissue. During PDT, the PS is consumed dynamically. Thus, the efficacy of PDT depends on both the PS level before therapy [117] and PS consumption during therapy, also called photobleaching [90, 118, 119]. Since most PSs have high fluorescence quantum efficiencies, their fluorescence properties can be utilized for assessing the PS content. Thus, quantifying vascular parameters and PS fluorescence is crucial for monitoring PDT response [120]. The incidence of nonmelanoma skin cancers (NMSCs) is increasing drastically worldwide, leading to a growing demand for effective treatment modalities [121, 122]. Conventional approaches such as surgery are unattractive due to significant healing, cosmetic, and functional morbidity as well as high financial costs. Topical 5-aminolevulinic acid (ALA)-based PDT, with efficacy similar to surgery and substantially better cosmetic and functional outcomes, has recently become an attractive treatment option, especially for cases with multiple sites and large areas [123]. PDT is also an emerging treatment option for cancer of the head and neck [124–131]. The size of the treatment beam can be adjusted so that the whole wide-field mucosa in the oral cavity can be treated easily, without the fibrotic complications associated with radiation therapy.
For large and thick tumors, interstitial PDT can be applied, similar to brachytherapy [132–138]. In the following subsections, applications for skin and head and neck tumor models are given.

Skin Cancer Monitoring Many photosensitizers lead to vascular destruction, which is one of the mechanisms by which PDT kills cells and destroys tumors. Since oxygen is crucial for PDT, vascular disruption early in treatment must be identified and prevented for optimal PDT outcomes. These vascular effects vary among PDT regimens, for example with different treatment light fluence rates. Early identification of vascular disruption can allow a time window for adjusting the light in real time to improve the effectiveness of therapy. Thus, it is desirable to continuously monitor blood flow and find the optimal time at which these vascular effects improve efficacy. Figure 12a, b shows a setup for continuous monitoring of relative blood flow (rBF) during PDT. The treatment laser wavelength (630 nm or 660 nm), which differs from the blood flow laser wavelength of 785 nm, can be chosen based on the desired photosensitizer. The light shield blocks the treatment light from irradiating the surrounding healthy tissue [139]. The fluence rate is an important determinant of PDT responses, including vascular effects [140–145]. Figure 12c summarizes a preclinical study of colon 26-bearing


Fig. 12 Continuous blood flow measurements during PDT. (a) Schematic of the setup for continuous PDT and blood flow measurements. (b) Picture of the noncontact optical probe that allows continuous measurements while PDT treatment light is on with a desired circular beam delivery on the target tumor area while shielding the surrounding normal tissue. (c) Relative blood flow changes during ALA-PDT with respect to different fluence rates. Error bars represent the standard error over five animals (N = 5)

mice treated with topical ALA at fluence rates of 10, 35, and 75 mW/cm2. It is clear that ALA-PDT induced acute early vascular changes: a quick initial drop was followed by an increase and a final gradual return toward initial levels for all irradiances. These results show that ALA-PDT induced early blood flow changes and that the changes were fluence rate dependent. The differences in blood flow decrease with respect to fluence rate were statistically significant (p < 0.05). The early decrease may be due to constriction of the blood vessels caused by a lack of nitric oxide, since nitric oxide production decreases in deoxygenated conditions and deoxygenation can be caused by oxygen consumption during PDT [146]. Following this initial constriction, there may have been a temporary burst of blood flow


resulting in an increasing trend. Since the lowest fluence rate (10 mW/cm2) maintains higher blood flow throughout PDT treatment, it allows more oxygen into the tissue and thus is a more favorable treatment regimen.

Head and Neck Cancer Monitoring As mentioned in the previous section, most systemic photosensitizers, including HPPH, are vascular disrupting agents that can induce significant changes in blood flow and oxygenation. In this case the fluence rate dependence of the vascular effects can be more pronounced. Continuous measurements are advantageous for finding the potential time window for optimal PDT. Previous work showed that the kinetics of therapy-induced blood flow changes, measured continuously, might be predictive of PDT response [147]. Figure 13 shows the results from two different fluence rate treatments in a squamous cell carcinoma (SCC) model of head and neck cancer treated with HPPH-PDT, using the same setup and probe shown in Fig. 12a, b. The "effective" PDT dose is defined as the dose delivered while tumor blood flow is greater than 50% of its initial value, since tissue oxygenation remains relatively high when there is adequate blood supply [148]. Figure 13a shows representative blood flow changes during relatively high fluence rate PDT (75 mW/cm2), which induced significant acute blood flow shutdown and an effective treatment dose of only about 2 J/cm2 while the administered dose was 100 J/cm2. In contrast, at the lower fluence rate of 14 mW/cm2 (Fig. 13b), blood flow remained relatively high compared to baseline, providing improved tissue oxygenation during PDT, with an effective dose of about 45 J/cm2. These are crucial findings with regard to PDT efficacy, since they imply that the effective local dose can be very different from the administered dose and that the blood flow parameter may provide real-time feedback to clinicians for adjusting the treatment light for improved efficacy.
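The effective-dose bookkeeping can be made concrete: accumulate delivered fluence only over the intervals in which relative blood flow exceeds the 50% threshold. The helper below is a sketch under the stated definition, with assumed sampling, not the study's analysis code:

```python
import numpy as np

def effective_dose(t_s, rbf_rel, fluence_rate_mw_cm2, threshold=0.5):
    """Effective PDT dose (J/cm^2): light counted only while relative
    blood flow stays above `threshold` times baseline. t_s is time in
    seconds; rbf_rel is blood flow normalized to 1.0 at baseline."""
    t = np.asarray(t_s, dtype=float)
    rbf = np.asarray(rbf_rel, dtype=float)
    dt = np.diff(t, prepend=t[0])       # interval preceding each sample
    on = rbf > threshold                # flow above threshold -> dose counts
    return float(np.sum(dt[on]) * fluence_rate_mw_cm2 * 1e-3)
```

For instance, if the flow collapses about 28 s into a 1333 s irradiation at 75 mW/cm2, the administered dose is ~100 J/cm2 but the effective dose is only ~2.1 J/cm2, consistent with the high-fluence-rate behavior described above.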

Fig. 13 Effective PDT dose, defined as the dose delivered while blood flow is higher than 50% of its initial value. (a) The high fluence rate induced rapid blood flow shutdown during the early phase of the treatment; only ~2.1 J/cm2 of light could be delivered while blood flow was higher than 50% of its initial value. (b) The low fluence rate did not induce a rapid blood flow decrease; blood flow remained high for most of the treatment, and an effective dose of ~45 J/cm2 could be delivered while blood flow stayed above 50% of its initial value during PDT


Monitoring Thick Head and Neck Tumors with Fluorescence Tomography Previous preclinical and clinical studies have demonstrated that HPPH-mediated PDT is an effective treatment option for superficial head and neck lesions in the oral cavity [19, 120]. However, many tumors occur deeper in tissue, or they grow to be very thick, which can make surface illumination and measurements impractical. Interstitial PDT, where catheter-based fibers are inserted directly into the tumor, can overcome this limitation. Treating thicker and deeper tumors can be challenging due to insufficient PS or light distributions. Recently there has been increasing interest in interstitial treatment of larger and deeper tumors, such as those in the base of the tongue or large neck nodes [128, 149, 150]. For effective PDT dosimetry, one needs to know the PS dose in the tissue; thus, it is desirable to know the volumetric PS distribution [31]. It has also been shown that changes in PS distribution (photobleaching) during PDT are related to singlet oxygen generation and thus to PDT efficacy [112]. PS photobleaching can be monitored through changes in drug fluorescence yield. The application of fluorescence diffuse optical tomography (FDOT) for quantifying the PDT-induced changes in fluorescence yield in a clinically relevant head and neck tumor model has been reported [151]. Depth-resolved quantitative maps of fluorescence yield were obtained before and after PDT (75 mW/cm2, 10 min, 45 J/cm2). An SCID mouse was inoculated subcutaneously with human head and neck tumor tissue obtained from a patient. Fluorescence and excitation scans were performed pre-PDT at 24 h post injection of HPPH. PDT treatment was then performed, and after it was finished, the fluorescence and excitation scans were repeated for post-PDT quantification.
Figure 14 shows the reconstructed fluorescence yield images at multiple depths (Z = 5, 9, 13 mm) for pre-treatment, post-treatment, and the difference (pre − post) used to determine the PDT-induced changes. The results show that the HPPH preferentially accumulated in the tumor, which allowed accurate localization of the tumor. It is also clear that HPPH-mediated PDT induced significant photobleaching and that the photobleaching was depth dependent. Photobleaching was higher (~20%) close to the detector plane (Z = 13 mm), where the PDT treatment light was administered and thus the light dose was higher. These results verify that the FDOT system can quantify changes in photosensitizer distributions at different depths and that second-generation photosensitizers such as HPPH are feasible for imaging thick tumors.

Clinical Applications In this section, several examples from clinical studies are shown that demonstrate the clinical utility of diffuse optical methods for therapy monitoring. The ultimate aim is to provide noninvasive optical biomarkers for assessing therapy response early, so that a clinician can adjust the treatment dose or change the therapy regimen entirely. Since they are portable and widely available compared to other modalities, optical methods are advantageous for frequent, bedside monitoring of patients. First PDT, then chemoradiation, and finally chemotherapy examples are shown.



Fig. 14 Depth-resolved reconstruction of fluorescence yield images of a mouse tumor and surrounding normal areas. Images at different depths (Z = 5, 9, 13 mm) are shown from left to right, with Z = 5 mm close to the source plane and Z = 13 mm close to the detector plane and the treatment light of the imaging slab. PDT induced substantial changes in yield contrast in a depth-dependent manner, obtained by subtracting "Post-PDT" from "Pre-PDT" at each depth, with the highest change occurring at the slice closest to the treatment light (Z = 13 mm)

Monitoring of Photodynamic Therapy of Head and Neck Cancer in the Oral Cavity The multimodal instrument described in section "Multimodal Optical Instrument" was used to measure drug concentration and vascular parameters such as blood flow and oxygenation in clinical settings of oral cancer, as described previously [19, 152]. The instrument was utilized in a clinical trial of Photochlor (HPPH)-mediated PDT in oral cancer patients [153]. HPPH is a second-generation PS developed at Roswell Park Cancer Institute (RPCI) [83]. It has an absorption peak wavelength of 665 nm in vivo


Fig. 15 Extracted functional parameters from a head and neck patient before and after PDT. (a) Relative blood flow (rBF(%)). (b) Blood volume fraction (BVF (%)). (c) Blood oxygen saturation (StO2 (%)). (d) HPPH concentration (μM)

that allows enhanced tissue penetration and less skin phototoxicity compared to Photofrin, the first FDA-approved PS, which has an absorption peak at 630 nm. Figure 15 shows a set of data obtained from noninvasive measurements of a patient with squamous cell carcinoma of the oral cavity. The measurements were performed 1 day after systemic administration of HPPH, in the operating room, at pre- and post-PDT. Multiple measurements were obtained by positioning a handheld probe (Fig. 9c, d in section "Optical Probes") at various locations. The treatment light was delivered by a single quartz lens fiber, with the 3 cm beam diameter slightly larger than the lesion diameter. Figure 15a shows that mean tumor blood flow (rBF(%)) decreased by ~85% following PDT. These results suggest that HPPH-PDT induced significant vascular changes, demonstrating the vascular disrupting effects of HPPH in tumor tissue. The reduction in blood flow was accompanied by changes in blood volume fraction (BVF (%)) and blood oxygen saturation (StO2 (%)). The mean baseline value of BVF was ~2.9% but decreased to ~1.7% after PDT (Fig. 15b). The mean StO2 decreased from ~76% to ~36% (Fig. 15c). The tumor-to-normal tissue ratio of HPPH uptake was ~2.3 (Fig. 15d), and the HPPH concentration decreased by ~41% due to photobleaching. In general, changes in the surrounding normal tissue were smaller; the changes in normal tissue may be due to physiological fluctuations in the operating room or to tissue sampling errors originating from point measurements. These changes were supported by a molecular measure of the oxidative photoreaction, obtained by quantifying the cross-linking of the signal transducer


Fig. 16 STAT3 cross-linking functional parameters from a head and neck patient before and after PDT

Table 1 Performance of individual parameters at discriminating responders from nonresponders (dysplasia group)

Parameter       Pre-PDT                   Changes
                Sens.   Spec.   AUC       Sens.   Spec.   AUC
Blood flow      72.7    66.7    58        45.5    100     0.70
Blood oxy.      100     66.7    79        54.5    100     0.79
Blood vol.      45.5    100     55        72.7    66.7    0.70
Fluorescence    90.9    66.7    67        100     33.3    0.52

and activator of transcription 3 (STAT3) [154–156]. Biopsy tissue analyzed from this patient showed 19.0% STAT3 conversion (Fig. 16), suggesting an effective photoreaction when compared to previous tumor biopsy analyses that showed maximal STAT3 cross-linking with a median of ~12% [19]. It is desirable to investigate the predictive power of these noninvasive parameters. To evaluate the sensitivity and specificity of each parameter in predicting the clinical response, patients who had dysplasia (N = 12) were grouped into responders (N = 7) or nonresponders (N = 5) based on clinical assessment of the response using biopsy results at 3 months post-PDT. Responders were defined as showing either complete absence of a visible lesion with negative biopsy or a size reduction of the lesion of more than 50%; nonresponders had stable disease with a size reduction of less than 50%, or progressive disease. Sensitivity and specificity are defined as the percentage of responders and nonresponders, respectively, that are correctly predicted. The sensitivity, specificity, and area under the curve (AUC) for the individual parameters (both pre-PDT values and changes) are shown in Table 1. As can be seen, individual parameters have different sensitivity and specificity in predicting the response. Although the predictive power is quite good, individual parameters alone are not always the best predictors of response. When three parameters related to PDT dose are combined, for example initial blood oxygen saturation, change in blood flow, and change in HPPH fluorescence, the discrimination between


Fig. 17 Combined threeparameter classifier in predicting the PDT response of dysplasia group

responders and nonresponders becomes much stronger. Initial blood oxygen saturation is related to available tissue oxygen, a required element for effective PDT. Blood flow changes in the lesion are related to the effective light dose, and changes in HPPH fluorescence are related to the amount of photosensitizer consumed, both of which are also necessary for effective PDT. A logistic regression model based on the three PDT-related parameters was used to combine them into a single predictor. With this model, a receiver operating characteristic (ROC) curve was calculated (Fig. 17), yielding 100% sensitivity, 80% specificity, and an AUC of 0.91 for the dysplasia group, which is considered excellent. These results support the view that diffuse optical spectroscopies permit noninvasive monitoring and prediction of the PDT response in clinical settings of head and neck cancer patients.
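The discrimination metrics used here can be computed without any fitting machinery: the AUC is the Mann-Whitney probability that a randomly chosen responder's combined score exceeds a randomly chosen nonresponder's, and sensitivity/specificity follow from a score threshold. The scores in the example below are hypothetical, not patient data:

```python
import numpy as np

def auc_mann_whitney(scores_pos, scores_neg):
    """AUC via the Mann-Whitney statistic: probability that a randomly
    chosen responder scores higher than a randomly chosen nonresponder
    (ties count 1/2)."""
    pos = np.asarray(scores_pos, float)[:, None]
    neg = np.asarray(scores_neg, float)[None, :]
    return float(np.mean((pos > neg) + 0.5 * (pos == neg)))

def sens_spec(scores_pos, scores_neg, threshold):
    """Sensitivity and specificity when scores above `threshold`
    predict response."""
    sens = float(np.mean(np.asarray(scores_pos, float) > threshold))
    spec = float(np.mean(np.asarray(scores_neg, float) <= threshold))
    return sens, spec
```

As a usage sketch, seven responder scores and five nonresponder scores (matching the 7/5 group sizes of the dysplasia analysis) can be swept over thresholds to trace out the ROC curve.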

Monitoring of Chemoradiation Therapy of Head and Neck Cancer Sunar et al. [59] applied DCS and DRS to monitor early relative blood flow (rBF), tissue oxygen saturation (StO2), and total hemoglobin concentration (THC) responses to chemoradiation therapy in patients with superficial head and neck tumor nodes. The noninvasive measurements consisted of pre-therapy measurements as baseline, followed by weekly measurements until the treatment was completed. The radiation treatment was fractionated on a daily basis for about 7 weeks, and patients were concurrently treated with weekly carboplatin and paclitaxel. The noninvasive DCS/DRS measurements were performed by placing a handheld probe (Fig. 18a) on the tumor and on the forearm muscle (as a control to assess systemic changes). For both DCS and DRS, the largest source-detector separation was 3 cm, with a penetration depth of about 2 cm.


Figure 18a, b shows a representative handheld scan and its corresponding pretreatment rBF contrast along the scan dimension, respectively. When the probe is on the tumor, the tumor-to-normal tissue contrast is about 2.5, and this contrast diminishes as the probe moves away from the tumor. Similar pretreatment contrasts were observed for THC and StO2. To quantify the therapy-induced changes, one can obtain the mean peak contrast from multiple measurements, acquired by replacing the handheld probe at similar positions, and track these values over the course of the treatment. Figure 18c, d, and e shows the averages of rBF, StO2, and THC in seven responders (based on clinical evaluation


Fig. 18 (a) Diagram showing handheld probe with scan direction on a neck nodal mass. (b) One-dimensional profile of tumor contrast along the scan direction. (c) Average of tumor rBF changes during chemoradiation averaged over seven patients who showed a complete response to the treatment. (d) Average StO2 changes of complete responders. (e) Average of THC of complete responders. (f) rBF kinetics of a partial responder. (g) StO2 kinetics of a partial responder. (h) THC kinetics of a partial responder

with no evidence of residual cancer). It is clear from the plots that weekly rBF, StO2, and THC showed different kinetics during therapy, with significant early changes during the first 2 weeks. Average rBF increased (52.7 ± 9.7%) in the first week and decreased (42.4 ± 7.0%) in the second week. Averaged StO2 increased from a baseline value of 62.9 ± 3.4% to 70.4 ± 3.2% at the end of the second week, and averaged THC showed a continuous decrease from a pretreatment value of 80.7 ± 7.0 μM to 73.3 ± 8.3 μM at the end of the second week and to 63.0 ± 8.1 μM at the end of the fourth week of therapy. Figure 18f, g, h shows the changes in these parameters for a patient who was a partial responder to the therapy. The parameters obtained by the noninvasive optical methods showed a substantially different trend compared to the average trend of the complete responders: rBF showed a continuous increase, while StO2 and THC also increased during the course of the treatment. Pretreatment computed tomography imaging initially indicated a large necrotic nodal mass, and the tumor was still palpable at the end of the treatment, which was also confirmed by the postsurgical pathology. This study suggests that frequent optical measurements may be utilized for therapy monitoring and that early changes in the optical measurements may be indicative of chemoradiation therapy response. One can focus on the first 2 weeks of the treatment with frequent (e.g., daily) measurements, which can potentially lead to predicting the response at the earliest stage.

Monitoring of Chemotherapy of Breast Cancer There is extensive work on diffuse optical monitoring of breast chemotherapy [5, 20, 157–163], and interested readers can find the details in recent reviews


Fig. 19 Percent change in oxyhemoglobin concentration during the first week of chemotherapy in responding and nonresponding patients. The number of tumors measured each day is indicated as n. The largest separation between two groups occurred on the first day

such as those by Choe and Yodh [95] and Choe and Durduran [164]. Here two representative examples are given: one shows the predictive power of diffuse optical parameters, and the other presents a clinical tomographic approach. The University of California Irvine group utilized DRS to monitor therapeutic response in stage II/III neoadjuvant chemotherapy patients [20]. They showed that DRS indices within 1 week of therapy could predict therapy response. The best single predictor was THC, with 83% sensitivity and 100% specificity, while the combined parameters of THC and water concentration could discriminate responders from nonresponders with 100% sensitivity and specificity. This study demonstrated the potential use of optical spectroscopy for predicting an individual patient's response. In more recent work, the same group reported that the functional hemodynamic parameter of oxyhemoglobin concentration could discriminate responders from nonresponders on the first day after chemotherapy treatment [161]. They measured several parameters, including concentrations of oxyhemoglobin, deoxyhemoglobin, and water, in 24 tumors. Figure 19 shows the observed mean percent changes (from baseline (B)) of oxyhemoglobin concentration in responders and nonresponders during the first week of therapy. Oxyhemoglobin concentration increased significantly in partial responders and complete responders, whereas it showed a decreasing trend in nonresponders. This study indicates the significant potential impact of optical methods on therapy optimization, in that very early measures of chemotherapy response offer the possibility of altering treatment strategies for nonresponders, for improved therapeutic outcomes as well as fewer toxic effects and improved quality of life.

7 Monitoring Cancer Therapy with Diffuse Optical Methods

[Fig. 20 panels: (a) axial DCE-MRI and DOT THC (µM) images at pre-chemo, after the 4th cycle, and post-chemo; (b) normalized relative blood flow (rBF) at the same three time points]

Fig. 20 (a) A DOT example for chemotherapy monitoring of a complete responder. Only one reconstruction slice is shown for simplicity. The tumor is localized in the THC image contrast, which diminished at the 4th-cycle and post-therapy scans. The dynamic contrast-enhanced MRI image also shows a well-localized tumor mass at pre-therapy and tumor size shrinkage in the mid-course and post-therapy scans. (b) Relative blood flow obtained by the handheld DCS probe indicates a blood flow decrease with chemotherapy, confirming the tomographic scans

U. Sunar and D.J. Rohrbach

The imaging instrument at the University of Pennsylvania (UPenn) is very similar to the CCD camera-based DOT instrument mentioned in section “Optical Probes,” with the main difference being that the setup is housed under a patient bed: the patient lies down while her breast is inside an imaging chamber filled with a matching fluid whose background optical properties are similar to those of breast tissue. The instrument works in transmission geometry, in which a laser scans the source plane while a CCD camera detects the signal on the detection plane. An additional frequency-domain spectroscopy component quantifies the bulk background optical properties used in the image reconstruction algorithm. A female patient with locally advanced breast cancer (poorly differentiated invasive ductal carcinoma) was measured with the UPenn DOT system and the handheld DCS system. DCE-MRI was also performed at pre-therapy, mid-therapy, and post-therapy time points. The chemotherapy consisted of four cycles of a combination of doxorubicin (brand name Adriamycin) and cyclophosphamide (also called AC treatment), followed by four Taxol cycles at 2-week intervals. This patient was denoted a complete responder, since pathological analysis of the surgical tissue specimen at the end of therapy showed no residual tumor. Figure 20a shows a DOT reconstruction of THC at pre-therapy, indicating a localized tumor mass. This contrast diminished after four cycles of the AC treatment and at the completion of the whole treatment. Apart from this local tumor contrast, the THC values decreased globally, which may be due to the systemic side effects of chemotherapy, which usually decrease hematocrit levels in the whole body. DCE-MRI images also showed a highly localized tumor at a very similar location, but this contrast was not as pronounced in the later scans. The last scan, at the end of the treatment, did not show contrast enhancement. Figure 20b shows the blood flow changes measured by the DCS system, indicating that tumor blood flow also decreased over the course of the treatment, supporting the THC observations obtained by DOT.
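The THC images discussed above come from spectral decomposition of the reconstructed absorption: at each wavelength, the absorption coefficient is modeled as a weighted sum of chromophore extinction spectra, and a least-squares fit recovers the oxy- and deoxyhemoglobin concentrations, whose sum is THC. A minimal two-wavelength sketch follows; the extinction coefficients are illustrative placeholders, not the published hemoglobin spectra used by the UPenn system.

```python
import numpy as np

# Illustrative extinction coefficients (1/(mM*cm)) at two NIR wavelengths.
# Real values come from published hemoglobin extinction spectra.
E = np.array([[0.35, 2.05],    # [eps_HbO2, eps_Hb] near 690 nm (placeholders)
              [1.05, 0.78]])   # [eps_HbO2, eps_Hb] near 830 nm (placeholders)

def chromophore_fit(mua):
    """Least-squares fit of absorption coefficients (1/cm), one per wavelength,
    to chromophore concentrations (mM), via the model mua = E @ c."""
    c, *_ = np.linalg.lstsq(E, np.asarray(mua, dtype=float), rcond=None)
    return c  # [HbO2, Hb] in mM

# Synthetic tissue voxel: 20 µM HbO2 and 10 µM Hb
mua = E @ np.array([0.020, 0.010])
hbo2, hb = chromophore_fit(mua)
thc = hbo2 + hb    # total hemoglobin concentration (0.030 mM = 30 µM)
sto2 = hbo2 / thc  # tissue oxygen saturation (≈ 0.67)
```

In a full reconstruction this fit is applied voxel by voxel (often with water and lipid terms and more wavelengths), producing the THC maps shown in Fig. 20a.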

Summary

With the advent of many novel approaches in therapeutics, there is a need for testing and standardization of clinical protocols. Diffuse optical methods can provide dose-related parameters as well as vascular and oxygen metabolism-related parameters for assessing the therapy response early, giving clinicians a feedback tool for adapting and ultimately optimizing the intervention accordingly. Since the techniques are noninvasive, the instruments are portable, and the extracted parameters are directly clinically relevant metabolic parameters, the most significant impact diffuse optical methods are expected to provide is prediction of the therapy response at the earliest time point, which can result in survival benefit to patients and reduction of health-care costs.

Acknowledgments We would like to thank Dr. Arjun G. Yodh for providing supervisorship and mentorship at the University of Pennsylvania that initiated most of the work presented here. We also acknowledge Dr. Britton Chance for his excellent mentoring and guidance. We thank Shoko Nioka and Bruce J. Tromberg for their continuous support. Additional thanks go to current and past researchers of the Yodh lab at Penn, particularly Turgut Durduran, Regine Choe, Guoqiang Yu, Chao Zhou, Soren D. Konecky, Kijoon Lee, Hsing-wen Wang, David R. Busch, Alper Corlu, and Leonid Zubkov. U. Sunar acknowledges the support from the NCI grants P30CA16056 (Startup grant) and CA55791 (Program Project Grant).

References 1. McCarthy K, Pearson K, Fulton R, Hewitt J (2012) Pre-operative chemoradiation for non-metastatic locally advanced rectal cancer. Cochrane Database Syst Rev 12, CD008368 2. Rydzewska L, Tierney J, Vale CL, Symonds PR (2012) Neoadjuvant chemotherapy plus surgery versus surgery for cervical cancer. Cochrane Database Syst Rev 12, CD007406 3. Ueda S, Roblyer D, Cerussi A et al (2012) Baseline tumor oxygen saturation correlates with a pathologic complete response in breast cancer patients undergoing neoadjuvant chemotherapy. Cancer Res 72:4318–4328 4. Garland ML, Vather R, Bunkley N et al (2014) Clinical tumour size and nodal status predict pathologic complete response following neoadjuvant chemoradiotherapy for rectal cancer. Int J Colorectal Dis 29:301–307 5. Jiang S, Pogue BW, Kaufman PA et al (2014) Predicting breast tumor response to neoadjuvant chemotherapy with diffuse optical spectroscopic tomography prior to treatment. Clin Cancer Res 20:6006–6015 6. Vaupel P, Kallinowski F, Okunieff P (1989) Blood flow, oxygen and nutrient supply, and metabolic microenvironment of human tumors: a review. Cancer Res 49:6449–6465 7. Tromberg BJ, Pogue BW, Paulsen KD et al (2008) Assessing the future of diffuse optical imaging technologies for breast cancer management. Med Phys 35:2443–2451 8. Lehtio K, Eskola O, Viljanen T et al (2004) Imaging perfusion and hypoxia with PET to predict radiotherapy response in head-and-neck cancer. Int J Radiat Oncol Biol Phys 59:971–982 9. Jacobson O, Chen X (2013) Interrogating tumor metabolism and tumor microenvironments using molecular positron emission tomography imaging. Theranostic approaches to improve therapeutics. Pharmacol Rev 65:1214–1256 10. DeVries AF, Kremser C, Hein PA et al (2003) Tumor microcirculation and diffusion predict therapy outcome for primary rectal carcinoma. Int J Radiat Oncol Biol Phys 56:958–965 11. 
Hermans R, Lambin P, Van der Goten A et al (1999) Tumoural perfusion as measured by dynamic computed tomography in head and neck carcinoma. Radiother Oncol 53:105–111 12. Preda L, Calloni SF, Moscatelli ME et al (2014) Role of CT perfusion in monitoring and prediction of response to therapy of head and neck squamous cell carcinoma. Biomed Res Int 2014:917150 13. Anderson H, Price P, Blomley M et al (2001) Measuring changes in human tumour vasculature in response to therapy using functional imaging techniques. Br J Cancer 85:1085–1093 14. Pirhonen JP, Grenman SA, Bredbacka AB et al (1995) Effects of external radiotherapy on uterine blood flow in patients with advanced cervical carcinoma assessed by color Doppler ultrasonography. Cancer 76:67–71 15. Chen B, Pogue BW, Goodwin IA et al (2003) Blood flow dynamics after photodynamic therapy with verteporfin in the RIF-1 tumor. Radiat Res 160:452–459 16. Huilgol NG, Khan MM, Puniyani R (1995) Capillary perfusion – a study in two groups of radiated patients for cancer of head and neck. Indian J Cancer 32:59–62 17. Goertz DE, Yu JL, Kerbel RS et al (2002) High-frequency Doppler ultrasound monitors the effects of antivascular therapy on tumor blood flow. Cancer Res 62:6371–6375 18. Stone HB, Brown JM, Phillips TL, Sutherland RM (1993) Oxygen in human tumors: correlations between methods of measurement and response to therapy. Summary of a workshop held November 19–20, 1992, at the National Cancer Institute, Bethesda, Maryland. Radiat Res 136:422–434 19. Sunar U, Rohrbach D, Rigual N et al (2010) Monitoring photobleaching and hemodynamic responses to HPPH-mediated photodynamic therapy of head and neck cancer: a case report. Opt Express 18:14969–14978


20. Cerussi A, Hsiang D, Shah N et al (2007) Predicting response to breast cancer neoadjuvant chemotherapy using diffuse optical spectroscopy. Proc Natl Acad Sci U S A 104:4014–4019 21. Cutler M (1929) Transillumination of the breast. Surg Gynecol Obstet 48:721–727 22. Jobsis FF (1977) Noninvasive, infrared monitoring of cerebral and myocardial oxygen sufficiency and circulatory parameters. Science 198:1264–1267 23. Bank W, Chance B (1997) Diagnosis of defects in oxidative muscle metabolism by non-invasive tissue oximetry. Mol Cell Biochem 174:7–10 24. Jacques SL (1996) Origins of tissue optical properties in the UVA, visible, and NIR regions. In: Advances in optical imaging and photon migration. OSA trends in optics and photonics, vol 2, pp 364–371 25. Mourant JR, Freyer JP, Hielscher AH et al (1998) Mechanisms of light scattering from biological cells relevant to noninvasive optical-tissue diagnostics. Appl Opt 37:3586–3593 26. Mourant JR, Fuselier T, Boyer J et al (1997) Predictions and measurements of scattering and absorption over broad wavelength ranges in tissue phantoms. Appl Opt 36:949–957 27. Laughney AM, Krishnaswamy V, Rizzo EJ et al (2012) Scatter spectroscopic imaging distinguishes between breast pathologies in tissues relevant to surgical margin assessment. Clin Cancer Res 18:6315–6325 28. Cheung C, Culver JP, Takahashi K et al (2001) In vivo cerebrovascular measurement combining diffuse near-infrared absorption and correlation spectroscopies. Phys Med Biol 46:2053–2065 29. Boas DA, Campbell LE, Yodh AG (1995) Scattering and imaging with diffusing temporal field correlations. Phys Rev Lett 75:1855–1858 30. Boas DA, Yodh AG (1997) Spatially varying dynamical properties of turbid media probed with diffusing temporal light correlation. J Opt Soc Am A 14:192–215 31. Bussink J, Kaanders JH, Rijken PF et al (2000) Changes in blood perfusion and hypoxia after irradiation of a human squamous cell carcinoma xenograft tumor line. Radiat Res 153:398–404 32. 
Fenton BM, Lord EM, Paoni SF (2001) Effects of radiation on tumor intravascular oxygenation, vascular configuration, development of hypoxia, and clonogenic survival. Radiat Res 155:360–368 33. Busch TM (2006) Local physiological changes during photodynamic therapy. Lasers Surg Med 38:494–499 34. Busch TM (2010) Hypoxia and perfusion labeling during photodynamic therapy. Methods Mol Biol 635:107–120 35. Gibbs-Strauss SL, O’Hara JA, Hoopes PJ et al (2009) Noninvasive measurement of aminolevulinic acid-induced protoporphyrin IX fluorescence allowing detection of murine glioma in vivo. J Biomed Opt 14:014007 36. Rollakanti KR, Kanick SC, Davis SC et al (2013) Techniques for fluorescence detection of protoporphyrin IX in skin cancers associated with photodynamic therapy. Photonics Lasers Med 2:287–303 37. Warren CB, Lohser S, Wene LC et al (2010) Noninvasive fluorescence monitoring of protoporphyrin IX production and clinical outcomes in actinic keratoses following short-contact application of 5-aminolevulinate. J Biomed Opt 15:051607 38. Cerussi AE, Tanamai VW, Mehta RS et al (2010) Frequent optical imaging during breast cancer neoadjuvant chemotherapy reveals dynamic tumor physiology in an individual patient. Acad Radiol 17:1031–1039 39. Jakubowski DB, Cerussi AE, Bevilacqua F et al (2004) Monitoring neoadjuvant chemotherapy in breast cancer using quantitative diffuse optical spectroscopy: a case study. J Biomed Opt 9:230–238 40. Vishwanath K, Klein D, Chang K et al (2009) Quantitative optical spectroscopy can identify long-term local tumor control in irradiated murine head and neck xenografts. J Biomed Opt 14:054051


41. Yu G, Durduran T, Zhou C et al (2006) Real-time in situ monitoring of human prostate photodynamic therapy with diffuse light. Photochem Photobiol 82:1279–1284 42. Yodh AG, Boas DA (2003) Functional imaging with diffusing light. In: Vo-Dinh T (ed) Biomedical diagnostics. CRC Press, Boca Raton, Florida, pp 311–356 43. Boas DA (1996) Diffuse photon probes of structural and dynamical properties of turbid media: theory and biomedical applications. University of Pennsylvania, Philadelphia 44. O’Leary MA (1996) Imaging with diffuse photon density waves. In: Physics and astronomy. University of Pennsylvania, Philadelphia 45. Haskell RC, Svaasand LO, Tsay TT et al (1994) Boundary conditions for the diffusion equation in radiative transfer. J Opt Soc Am A Opt Image Sci Vis 11:2727–2741 46. Kienle A, Patterson MS (1997) Improved solutions of the steady-state and the time-resolved diffusion equations for reflectance from a semi-infinite turbid medium. J. Opt. Soc.Am.A Opt. Image Sci. Vis 14:246–254 47. Tseng SH, Bargo P, Durkin A, Kollias N (2009) Chromophore concentrations, absorption and scattering properties of human skin in-vivo. Opt Express 17:14599–14617 48. Bard MP, Amelink A, Skurichina M et al (2006) Optical spectroscopy for the classification of malignant lesions of the bronchial tree. Chest 129:995–1001 49. Gamm UA, Kanick SC, Sterenborg HJ et al (2011) Measurement of tissue scattering properties using multi-diameter single fiber reflectance spectroscopy: in silico sensitivity analysis. Biomed Opt Express 2:3150–3166 50. Kanick SC, Gamm UA, Schouten M et al (2011) Measurement of the reduced scattering coefficient of turbid media using single fiber reflectance spectroscopy: fiber diameter and phase function dependence. Biomed Opt Express 2:1687–1702 51. Middelburg TA, Kanick SC, de Haas ER et al (2011) Monitoring blood volume and saturation using superficial fibre optic reflectance spectroscopy during PDT of actinic keratosis. J Biophotonics 4:721–730 52. 
Finlay JC, Foster TH (2004) Hemoglobin oxygen saturations in phantoms and in vivo from measurements of steady-state diffuse reflectance at a single, short source-detector separation. Med Phys 31:1949–1959 53. Hull EL, Nichols MG, Foster TH (1998) Quantitative broadband near-infrared spectroscopy of tissue-simulating phantoms containing erythrocytes. Phys Med Biol 43:3381–3404 54. Brown W (1993) Dynamic light scattering: the method and some applications. Oxford University Press, Oxford, England 55. Berne BJ, Pecora R (1990) Dynamic light scattering: with applications to chemistry, biology, and physics. R.E. Krieger, Malabar 56. Pine DJ, Weitz DA, Chaikin PM, Herbolzheimer E (1988) Diffusing wave spectroscopy. Phys Rev Lett 60:1134–1137 57. Mesquita RC, Durduran T, Yu G et al (2011) Direct measurement of tissue blood flow and metabolism with diffuse optics. Philos Trans A Math Phys Eng Sci 369:4390–4406 58. Yu G, Durduran T, Zhou C et al (2011) Near-infrared diffuse correlation spectroscopy (DCS) for assessment of tissue blood flow. In: Boas DA, Pitris C, Ramanujam N (eds) Handbook of biomedical optics. Taylor & Francis Books, Florence, Kentucky, pp 195–216 59. Sunar U, Quon H, Durduran T et al (2006) Noninvasive diffuse optical measurement of blood flow and blood oxygenation for monitoring radiation therapy in patients with head and neck tumors: a pilot study. J Biomed Opt 11:064021 60. Sunar U, Makonnen S, Zhou C et al (2007) Hemodynamic responses to antivascular therapy and ionizing radiation assessed by diffuse optical spectroscopies. Opt Express 15:15507–15516 61. Durduran T, Choe R, Baker WB, Yodh AG (2010) Diffuse optics for tissue monitoring and tomography. Rep Prog Phys 73:43 62. Carp SA, Dai GP, Boas DA et al (2010) Validation of diffuse correlation spectroscopy measurements of rodent cerebral blood flow with simultaneous arterial spin labeling MRI; towards MRI-optical continuous cerebral metabolic monitoring. Biomed Opt Express 1:553–565


63. Culver JP, Durduran T, Furuya D et al (2003) Diffuse optical tomography of cerebral blood flow, oxygenation, and metabolism in rat during focal ischemia. J Cereb Blood Flow Metab 23:911–924 64. Li J, Dietsche G, Iftime D et al (2005) Noninvasive detection of functional brain activity with near-infrared diffusing-wave spectroscopy. J Biomed Opt 10:44002 65. de Visscher SA, Witjes MJ, van der Vegt B et al (2013) Localization of liposomal mTHPC formulations within normal epithelium, dysplastic tissue, and carcinoma of oral epithelium in the 4NQO-carcinogenesis rat model. Lasers Surg Med 45: 668–678 66. van Leeuwen-van Zaane F, van Driel PB, Gamm UA et al (2014) Microscopic analysis of the localization of two chlorin-based photosensitizers in OSC19 tumors in the mouse oral cavity. Lasers Surg Med 46:224–234 67. Ramanujam N (2000) Fluorescence spectroscopy of neoplastic and non-neoplastic tissues. Neoplasia 2:89–117 68. Kasischke KA, Lambert EM, Panepento B et al (2011) Two-photon NADH imaging exposes boundaries of oxygen diffusion in cortical vascular supply regions. J Cereb Blood Flow Metab 31:68–81 69. Manjunath BK, Kurein J, Rao L et al (2004) Autofluorescence of oral tissue for optical pathology in oral malignancy. J Photochem Photobiol B 73:49–58 70. Brancaleon L, Durkin AJ, Tu JH et al (2001) In vivo fluorescence spectroscopy of nonmelanoma skin cancer. Photochem Photobiol 73:178–183 71. Bogaards A, Sterenborg HJ, Wilson B (2007) In vivo quantification of fluorescent molecular markers in real-time: a review to evaluate the performance of five existing methods. Photodiagnosis Photodynamic Ther 4:170–178 72. Fisher CJ, Niu CJ, Lai B et al (2013) Modulation of PPIX synthesis and accumulation in various normal and glioma cell lines by modification of the cellular signaling and temperature. Lasers Surg Med 45:460–468 73. 
Casas A, Fukuda H, Meiss R, Batlle AM (1999) Topical and intratumoral photodynamic therapy with 5-aminolevulinic acid in a subcutaneous murine mammary adenocarcinoma. Cancer Lett 141:29–38 74. Johansson J, Berg R, Svanberg K, Svanberg S (1997) Laser-induced fluorescence studies of normal and malignant tumour tissue of rat following intravenous injection of delta-amino levulinic acid. Lasers Surg Med 20:272–279 75. Kobuchi H, Moriya K, Ogino T et al (2012) Mitochondrial localization of ABC transporter ABCG2 and its function in 5-aminolevulinic acid-mediated protoporphyrin IX accumulation. PLoS One 7, e50082 76. Korbelik M, Krosl G (1995) Accumulation of benzoporphyrin derivative in malignant and host cell populations of the murine RIF tumor. Cancer Lett 97:249–254 77. Korbelik M, Krosl G (1995) Photofrin accumulation in malignant and host cell populations of a murine fibrosarcoma. Photochem Photobiol 62:162–168 78. Millon SR, Ostrander JH, Yazdanfar S et al (2010) Preferential accumulation of 5-aminolevulinic acid-induced protoporphyrin IX in breast cancer: a comprehensive study on six breast cell lines with varying phenotypes. J Biomed Opt 15:018002 79. Saczko J, Mazurkiewicz M, Chwilkowska A et al (2007) Intracellular distribution of Photofrin in malignant and normal endothelial cell lines. Folia Biol (Praha) 53:7–12 80. Uekusa M, Omura K, Nakajima Y et al (2010) Uptake and kinetics of 5-aminolevulinic acid in oral squamous cell carcinoma. Int J Oral Maxillofac Surg 39:802–805 81. Wang C, Chen X, Wu J et al (2013) Low-dose arsenic trioxide enhances 5-aminolevulinic acid-induced PpIX accumulation and efficacy of photodynamic therapy in human glioma. J Photochem Photobiol B 127:61–67 82. Zaak D, Sroka R, Khoder W et al (2008) Photodynamic diagnosis of prostate cancer using 5-aminolevulinic acid – first clinical experiences. Urology 72:345–348


83. Bellnier DA, Greco WR, Loewen GM et al (2003) Population pharmacokinetics of the photodynamic therapy agent 2-[1-hexyloxyethyl]-2-devinyl pyropheophorbide-a in cancer patients. Cancer Res 63:1806–1813 84. Sunar U, Rohrbach D, Morgan J et al (2013) Quantification of PpIX concentration in basal cell carcinoma and squamous cell carcinoma models using spatial frequency domain imaging. Biomed Opt Express 4:531–537 85. Saager RB, Cuccia DJ, Saggese S et al (2011) Quantitative fluorescence imaging of protoporphyrin IX through determination of tissue optical properties in the spatial frequency domain. J Biomed Opt 16:126013 86. O’Leary MA, Boas DA, Li XD et al (1996) Fluorescence lifetime imaging in turbid media. Opt Lett 21:158–160 87. Gardner CM, Jacques SL, Welch AJ (1996) Fluorescence spectroscopy of tissue: recovery of intrinsic fluorescence from measured fluorescence. Appl Opt 35:1780–1792 88. Welch AJ, Gardner C, Richards-Kortum R et al (1997) Propagation of fluorescent light. Lasers Surg Med 21:166–178 89. Kanick SC, Davis SC, Zhao Y et al (2014) Dual-channel red/blue fluorescence dosimetry with broadband reflectance spectroscopic correction measures protoporphyrin IX production during photodynamic therapy of actinic keratosis. J Biomed Opt 19:75002 90. Finlay JC, Conover DL, Hull EL, Foster TH (2001) Porphyrin bleaching and PDT-induced spectral changes are irradiance dependent in ALA-sensitized normal rat skin in vivo. Photochem Photobiol 73:54–63 91. Montan S, Svanberg K, Svanberg S (1985) Multicolor imaging and contrast enhancement in cancer-tumor localization using laser-induced fluorescence in hematoporphyrin-derivativebearing tissue. Opt Lett 10:56–58 92. Sterenborg HJ, Saarnak AE, Frank R, Motamedi M (1996) Evaluation of spectral correction techniques for fluorescence measurements on pigmented lesions in vivo. J Photochem Photobiol B 35:159–165 93. Pogue BW, Burke G (1998) Fiber-optic bundle design for quantitative fluorescence measurement from tissue. 
Appl Opt 37:7429–7436 94. Busch DR, Guo W, Choe R et al (2010) Computer aided automatic detection of malignant lesions in diffuse optical mammography. Med Phys 37:1840–1849 95. Choe R, Yodh A (2008) Diffuse optical tomography of the breast. In: Suri J, Rangayyan R, Laxminarayan S (eds) Emerging technology in breast imaging and mammography. American Scientific Publishers, Valencia, California, pp 317–342 96. Culver JP, Ntziachristos V, Holboke MJ, Yodh AG (2001) Optimization of optode arrangements for diffuse optical tomography: a singular-value analysis. Opt Lett 26:701–703 97. Gu X, Zhang Q, Bartlett M et al (2004) Differentiation of cysts from solid tumors in the breast with diffuse optical tomography. Acad Radiol 11:53–60 98. Herve L, Koenig A, Da Silva A et al (2007) Noncontact fluorescence diffuse optical tomography of heterogeneous media. Appl Opt 46:4896–4906 99. Ntziachristos V, Hielscher AH, Yodh AG, Chance B (2001) Diffuse optical tomography of highly heterogeneous media. IEEE Trans Med Imaging 20:470–478 100. Pogue BW, Davis SC, Song X et al (2006) Image analysis methods for diffuse optical tomography. J Biomed Opt 11(3):33001 101. Srinivasan S, Pogue BW, Dehghani H et al (2004) Improved quantification of small objects in near-infrared diffuse optical tomography. J Biomed Opt 9:1161–1171 102. Konecky SD, Panasyuk GY, Lee K et al (2008) Imaging complex structures with diffuse light. Opt Express 16:5048–5060 103. Gridelli C, Rossi A, Maione P et al (2009) Vascular disrupting agents: a novel mechanism of action in the battle against non-small cell lung cancer. Oncologist 14:612–620 104. Kim S, Peshkin L, Mitchison TJ (2012) Vascular disrupting agent drug classes differ in effects on the cytoskeleton. PLoS One 7, e40177


105. Tozer GM (2003) Measuring tumour vascular response to antivascular and antiangiogenic drugs. Br J Radiol 76(Spec No 1):S23–S35 106. Rossi A, Maione P, Ferrara ML et al (2009) Angiogenesis inhibitors and vascular disrupting agents in non-small cell lung cancer. Curr Med Chem 16:3919–3930 107. Ding X, Zhang Z, Li S, Wang A (2011) Combretastatin A4 phosphate induces programmed cell death in vascular endothelial cells. Oncol Res 19:303–309 108. Greene LM, O’Boyle NM, Nolan DP et al (2012) The vascular targeting agent CombretastatinA4 directly induces autophagy in adenocarcinoma-derived colon cancer cells. Biochem Pharmacol 84:612–624 109. Li J, Cona MM, Chen F et al (2013) Sequential systemic administrations of combretastatin A4 Phosphate and radioiodinated hypericin exert synergistic targeted theranostic effects with prolonged survival on SCID mice carrying bifocal tumor xenografts. Theranostics 3: 127–137 110. Kessel D, Oleinick NL (2010) Photodynamic therapy and cell death pathways. Methods Mol Biol 635:35–46 111. Dougherty TJ, Gomer CJ, Henderson BW et al (1998) Photodynamic therapy. J Natl Cancer Inst 90:889–905 112. Wilson BC, Patterson MS (2008) The physics, biophysics and technology of photodynamic therapy. Phys Med Biol 53:R61–R109 113. Zhu TC, Finlay JC (2008) The role of photodynamic therapy (PDT) physics. Med Phys 35:3127–3136 114. Maas AL, Carter SL, Wileyto EP et al (2012) Tumor vascular microenvironment determines responsiveness to photodynamic therapy. Cancer Res 72:2079–2088 115. Chen B, Pogue BW, Zhou X et al (2005) Effect of tumor host microenvironment on photodynamic therapy in a rat prostate tumor model. Clin Cancer Res 11:720–727 116. Foster TH, Murant RS, Bryant RG et al (1991) Oxygen consumption and diffusion effects in photodynamic therapy. Radiat Res 126:296–303 117. Zhou X, Pogue BW, Chen B et al (2006) Pretreatment photosensitizer dosimetry reduces variation in tumor response. Int J Radiat Oncol Biol Phys 64:1211–1220 118. 
Wilson BC, Patterson MS, Lilge L (1997) Implicit and explicit dosimetry in photodynamic therapy:a new paradigm. Lasers Med Sci 12:182–199 119. Sheng C, Hoopes PJ, Hasan T, Pogue BW (2007) Photobleaching-based dosimetry predicts deposited dose in ALA-PpIX PDT of rodent esophagus. Photochem Photobiol 83: 738–748 120. Sunar U (2013) Monitoring photodynamic therapy of head and neck malignancies with optical spectroscopies. World J Clin Cases 1:96–105 121. Rogers HW, Weinstock MA, Harris AR et al (2010) Incidence estimate of nonmelanoma skin cancer in the United States, 2006. Arch Dermatol 146:283–287 122. Goldberg LH, Landau JM, Moody MN et al (2012) Evaluation of the chemopreventative effects of ALA PDT in patients with multiple actinic keratoses and a history of skin cancer. J Drugs Dermatol 11:593–597 123. Lehmann P (2007) Methyl aminolaevulinate-photodynamic therapy: a review of clinical trials in the treatment of actinic keratoses and nonmelanoma skin cancer. Br J Dermatol 156:793–801 124. Biel M (2006) Advances in photodynamic therapy for the treatment of head and neck cancers. Lasers Surg Med 38:349–355 125. Biel MA (2010) Photodynamic therapy of head and neck cancers. Methods Mol Biol 635:281–293 126. D’Cruz AK, Robinson MH, Biel MA (2004) mTHPC-mediated photodynamic therapy in patients with advanced, incurable head and neck cancer: a multicenter study of 128 patients. Head Neck 26:232–240 127. Quon H, Finlay J, Cengel K et al (2011) Transoral robotic photodynamic therapy for the oropharynx. Photodiagnosis Photodyn Ther 8:64–67


128. Quon H, Grossman CE, Finlay JC et al (2011) Photodynamic therapy in the management of pre-malignant head and neck mucosal dysplasia and microinvasive carcinoma. Photodiagnosis Photodyn Ther 8:75–85 129. Rigual NR, Thankappan K, Cooper M et al (2009) Photodynamic therapy for head and neck dysplasia and cancer. Arch Otolaryngol Head Neck Surg 135:784–788 130. Jerjes W, Hamdoon Z, Hopper C (2012) Photodynamic therapy in the management of potentially malignant and malignant oral disorders. Head Neck Oncol 4:16 131. Jerjes W, Upile T, Betz CS et al (2007) The application of photodynamic therapy in the head and neck. Dent Update 34:478–480, 483–474, 486 132. Baran TM, Wilson JD, Mitra S et al (2012) Optical property measurements establish the feasibility of photodynamic therapy as a minimally invasive intervention for tumors of the kidney. J Biomed Opt 17:98002-1 133. Johansson A, Axelsson J, Andersson-Engels S, Swartling J (2007) Realtime light dosimetry software tools for interstitial photodynamic therapy of the human prostate. Med Phys 34:4309–4321 134. Oakley E, Wrazen B, Bellnier DA et al (2015) A new finite element approach for near real-time simulation of light propagation in locally advanced head and neck tumors. Lasers Surg Med 47:60–67 135. Rendon A, Beck JC, Lilge L (2008) Treatment planning using tailored and standard cylindrical light diffusers for photodynamic therapy of the prostate. Phys Med Biol 53:1131–1149 136. Thompson MS, Johansson A, Johansson T et al (2005) Clinical system for interstitial photodynamic therapy with combined on-line dosimetry measurements. Appl Opt 44:4023–4031 137. Kruijt B, van der Ploeg-van den Heuvel A, de Bruijn HS et al (2009) Monitoring interstitial m-THPC-PDT in vivo using fluorescence and reflectance spectroscopy. Lasers Surg Med 41:653–664 138. Samkoe KS, Chen A, Rizvi I et al (2010) Imaging tumor variation in response to photodynamic therapy in pancreatic cancer xenograft models. 
Int J Radiat Oncol Biol Phys 76:251–259 139. Becker TL, Paquette AD, Keymel KR et al (2010) Monitoring blood flow responses during topical ALA-PDT. Biomed Opt Express 2:123–130 140. Busch TM, Wang HW, Wileyto EP et al (2010) Increasing damage to tumor blood vessels during motexafin lutetium-PDT through use of low fluence rate. Radiat Res 174:331–340 141. Busch TM, Wileyto EP, Emanuele MJ et al (2002) Photodynamic therapy creates fluence ratedependent gradients in the intratumoral spatial distribution of oxygen. Cancer Res 62:7273–7279 142. Ericson MB, Sandberg C, Stenquist B et al (2004) Photodynamic therapy of actinic keratosis at varying fluence rates: assessment of photobleaching, pain and primary clinical outcome. Br J Dermatol 151:1204–1212 143. Henderson BW, Busch TM, Snyder JW (2006) Fluence rate as a modulator of PDT mechanisms. Lasers Surg Med 38:489–493 144. Sitnik TM, Hampton JA, Henderson BW (1998) Reduction of tumour oxygenation during and after photodynamic therapy in vivo: effects of fluence rate. Br J Cancer 77:1386–1394 145. Sitnik TM, Henderson BW (1998) The effect of fluence rate on tumor and normal tissue responses to photodynamic therapy. Photochem Photobiol 67:462–466 146. Henderson BW, Sitnik-Busch TM, Vaughan LA (1999) Potentiation of photodynamic therapy antitumor activity in mice by nitric oxide synthase inhibition is fluence rate dependent. Photochem Photobiol 70:64–71 147. Yu G, Durduran T, Zhou C et al (2005) Noninvasive monitoring of murine tumor blood flow during and after photodynamic therapy provides early assessment of therapeutic efficacy. Clin Cancer Res 11:3543–3552 148. Rohrbach DJ, Tracy EC, Walker J et al (2015) Blood flow dynamics during local photoreaction in a head and neck tumor model. Frontiers in Physics 3 149. Jerjes W, Upile T, Hamdoon Z et al (2011) Photodynamic therapy: the minimally invasive surgical intervention for advanced and/or recurrent tongue base carcinoma. Lasers Surg Med 43:283–292


150. Story W, Sultan AA, Bottini G et al (2013) Strategies of airway management for head and neck photo-dynamic therapy. Lasers Surg Med 45:370–376 151. Mo W, Rohrbach D, Sunar U (2012) Imaging a photodynamic therapy photosensitizer in vivo with a time-gated fluorescence tomography system. J Biomed Opt 17:071306 152. Rohrbach DJ, Rigual N, Tracy E et al (2012) Interlesion differences in the local photodynamic therapy response of oral cavity lesions assessed by diffuse optical spectroscopies. Biomed Opt Express 3:2142–2153 153. Rigual N, Shafirstein G, Cooper MT et al (2013) Photodynamic therapy with 3-(10 -hexyloxyethyl) pyropheophorbide a for cancer of the oral cavity. Clin Cancer Res 19:6605–6613 154. Henderson BW, Daroqui C, Tracy E et al (2007) Cross-linking of signal transducer and activator of transcription 3 – a molecular marker for the photodynamic reaction in cells and tumors. Clin Cancer Res 13:3156–3163 155. Liu W, Oseroff AR, Baumann H (2004) Photodynamic therapy causes cross-linking of signal transducer and activator of transcription proteins and attenuation of interleukin-6 cytokine responsiveness in epithelial cells. Cancer Res 64:6579–6587 156. Srivatsan A, Wang Y, Joshi P et al (2011) In vitro cellular uptake and dimerization of signal transducer and activator of transcription-3 (STAT3) identify the photosensitizing and imagingpotential of isomeric photosensitizers derived from chlorophyll-a and bacteriochlorophyll-a. J Med Chem 54:6859–6873 157. Zhou C, Choe R, Shah N et al (2007) Diffuse optical monitoring of blood flow and oxygenation in human breast cancer during early stages of neoadjuvant chemotherapy. J Biomed Opt 12:051903 158. Choe R, Corlu A, Lee K et al (2005) Diffuse optical tomography of breast cancer during neoadjuvant chemotherapy: a case study with comparison to MRI. Med Phys 32:1128–1139 159. 
Schegerin M, Tosteson AN, Kaufman PA et al (2009) Prognostic imaging in neoadjuvant chemotherapy of locally-advanced breast cancer should be cost-effective. Breast Cancer Res Treat 114:537–547 160. Shah N, Gibbs J, Wolverton D et al (2005) Combined diffuse optical spectroscopy and contrast-enhanced magnetic resonance imaging for monitoring breast cancer neoadjuvant chemotherapy: a case study. J Biomed Opt 10:051503 161. Roblyer D, Ueda S, Cerussi A et al (2011) Optical imaging of breast cancer oxyhemoglobin flare correlates with neoadjuvant chemotherapy response one day after starting treatment. Proc Natl Acad Sci U S A 108:14626–14631 162. Lee K (2011) Optical mammography: diffuse optical imaging of breast cancer. World J Clin Oncol 2:64–72 163. Leproux A, van der Voort M, van der Mark MB et al (2011) Optical mammography combined with fluorescence imaging: lesion detection using scatterplots. Biomed Opt Express 2:1007–1020 164. Choe R, Durduran T (2012) Diffuse Optical Monitoring of the Neoadjuvant Breast Cancer Therapy. IEEE J Sel Top Quantum Electron 18:1367–1386

8 Optical and Optoacoustic Imaging in the Diffusive Regime

Adrian Taruttis and Vasilis Ntziachristos

Contents

Introduction
Photon Propagation, Interactions, and Contrast in Tissue
  Absorption
  Scattering
  Radiative Transfer Equation
  Diffusion Approximation
  Bioluminescence
  Fluorescence
Optical Imaging with Diffusive Light
  Simple Photographic Approaches
  Optical Tomography in the Diffusive Regime
  Applications of Optical Imaging in the Diffusive Regime
Optoacoustic Tomography
  Signal Generation
  Signal Propagation
  Image Reconstruction
  Instrumentation
  Multispectral Optoacoustic Tomography
  Optoacoustic Image Contrast and Applications
Summary
References

A. Taruttis (*)
Institute for Biological and Medical Imaging, Helmholtz Zentrum München, Neuherberg, Germany
e-mail: [email protected]

V. Ntziachristos
Institute for Biological and Medical Imaging, Helmholtz Zentrum München, Neuherberg, Germany
Chair for Biological Imaging, Technische Universität München, München, Germany
e-mail: [email protected]

© Springer Science+Business Media Dordrecht 2017
A.H.-P. Ho et al. (eds.), Handbook of Photonics for Biomedical Engineering,
DOI 10.1007/978-94-007-5052-4_19


Abstract

Optical imaging is capable of providing valuable molecular contrast, cell tracking, genetic reporters, and a wide range of biomarkers that reveal the biological processes underlying a disease. For centuries, optical imaging has primarily been confined to superficial tissue layers, due to the high scattering of photons in tissue. Optical tomographic methods based on accurate models of diffusive deep-tissue light propagation have allowed fluorescence and endogenous contrast to be visualized and volumetrically quantified at depths of centimeters. Emerging optoacoustic methods allow optical absorption contrast to be pinpointed at high spatial resolution by means of ultrasound waves, breaking through the resolution limitations imposed by diffusive light. This chapter introduces the principles of optical and optoacoustic methods for imaging biomedically relevant contrast in the diffusive regime.

Keywords

Optical imaging • Optoacoustic imaging

Introduction

This chapter concerns itself with optical imaging in deep tissue layers using scattered light. The primary motivation is to extend the reach of optical imaging into living organisms. Light microscopy has been used for biological imaging since the seventeenth century, when pioneers such as Robert Hooke and Antonie van Leeuwenhoek famously used the technique to discover cells and microorganisms. To this day, microscopy is the most common domain of optical imaging, used for biological research and medical pathology. More recently, however, there has been a growing trend to use light to image on a larger scale and in live organisms. The driving force behind this development has been the use of near-infrared light, which is capable of penetrating orders of magnitude deeper than light in most of the visible spectrum. Instead of being confined to imaging thin slices of tissue, or extremely superficial layers of the illuminated tissue surface, optical imaging can now be applied to whole small animals or to anatomical regions such as the breast, extending its reach in both basic research and clinical practice.

Optical imaging brings a number of advantages over other methods for visualizing tissue. Unlike X-ray-based or nuclear medicine methods, it involves only safe, nonionizing radiation. Its implementations are generally simpler, more portable, and less expensive than CT and MRI systems. Perhaps most significantly, it offers highly specific molecular contrast, through both fluorescence and absorption spectroscopy, providing information on the biological processes underlying disease.

Although near-infrared light can provide several centimeters of tissue penetration, making optimal use of signals detected from depth remains challenging. The primary obstacle is photon scattering. Light rapidly loses its direction when propagating through tissue and cannot be focused beyond the first few hundred microns beneath the surface. The highly diffusive nature of multiply scattered light in
tissue makes it difficult to determine exactly where detected signals originated, therefore degrading spatial resolution [1]. Sophisticated measurement and image reconstruction techniques are required to attain accurate results in the diffusive regime. This chapter introduces the necessary physical models for optical imaging with diffusive light. Optical imaging techniques are presented together with the corresponding modeling and image reconstruction frameworks that make them work. Further, optoacoustic imaging is introduced as a technique for high-resolution imaging with diffusive light. The chapter concludes with a discussion of the current and future applications of deep-tissue optical and optoacoustic imaging.

Photon Propagation, Interactions, and Contrast in Tissue

For the purposes of optical imaging in deep tissue layers, a simple model of photon propagation is considered, which includes absorption and elastic scattering. Fluorescence is also discussed, because it represents an important source of optical imaging contrast.

Absorption

Absorption is the process by which the energy of a photon is taken up by matter. As a bulk material or tissue property, absorption is characterized by the optical absorption coefficient μa (cm⁻¹), which describes the rate at which photons are absorbed per unit length of propagation. In a medium where absorption dominates over scattering, the Beer-Lambert law provides the transmission

T = I / I₀ = e^(−μa l)    (1)

where I is the transmitted light intensity (W cm⁻²), I₀ is the incident light intensity, and l is the distance the light travels through the medium (cm). This relationship is commonly used to characterize the absorption properties of solutions in a spectrometer. Such measurements assume a negligible level of scattering, that is, all losses are attributed to absorption.

Absorption depends on the structure of the absorbing material as well as the energy of the photon. The wavelength λ in empty space is typically used in biomedical optics to characterize the photon energy. Typical absorbers of light in tissue include hemoglobin, lipids, water, and melanin. Chemicals can be characterized by their absorption (or extinction) per unit concentration, which is called the molar absorptivity or molar extinction coefficient ε (M⁻¹ cm⁻¹) and which depends on the optical wavelength.

While optical microscopy on thin tissue slices commonly applies visible light to obtain a superior diffraction-limited spatial resolution, imaging of deeper tissue layers relies on longer optical wavelengths. This is justified by a window in the far-red and near-infrared wavelength regions in which the optical absorption of typical tissue absorbers is comparatively low. Hemoglobin absorption is orders of magnitude lower in the near-infrared than at blue and green wavelengths, and water absorption is comparatively low at wavelengths below 900 nm (Fig. 1). Further tissue absorbers, such as melanin and lipids, also have low absorption in this window. It is for these reasons that optical imaging in deep tissue layers typically utilizes wavelengths in the range 650–900 nm.

Fig. 1 The absorption spectra of hemoglobin (left) and water (right). Their low absorption in the region from 700 to 900 nm presents a window for deep-tissue light propagation and imaging (data compiled by Scott Prahl, Oregon Medical Laser Center, http://omlc.ogi.edu/spectra, from various sources)

Absorbers (or chromophores) in tissue should be considered not only in terms of their effect on photon propagation through tissue but also as valuable forms of imaging contrast. Imaging of optical absorption in tissue can be achieved by optical and optoacoustic techniques and can be used to visualize hemoglobin, including its different oxygenation states, as well as areas of high melanin and lipid concentration, which can be exploited for applications in functional imaging, melanoma, and atherosclerosis characterization, respectively.
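As a quick numerical illustration of the Beer-Lambert relation in Eq. (1) — a sketch with an arbitrary example coefficient, not a measured tissue value:

```python
import math

def beer_lambert_transmission(mu_a_per_cm, path_cm):
    """Fractional transmission T = I/I0 = exp(-mu_a * l), Eq. (1),
    valid when absorption dominates and scattering is negligible."""
    return math.exp(-mu_a_per_cm * path_cm)

# Example: mu_a = 2.3 cm^-1 over a 1 cm spectrometer cuvette
print(beer_lambert_transmission(2.3, 1.0))  # ~0.10, i.e. ~10% transmitted
```

Note that in tissue, where scattering is far from negligible, this simple exponential does not apply directly; the diffusion treatment later in this chapter handles that case.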


Scattering

Photons propagating through tissue are not scattered isotropically – there is a strong preference for the forward direction, i.e., the original trajectory of the photon. The anisotropy can be defined by g, a parameter in the Henyey-Greenstein phase function, which provides a probability density function:

p_HG(θ) = (1/4π) · (1 − g²) / [1 + g² − 2g cos θ]^(3/2)    (2)

where θ is the deviation from the forward trajectory; g = 0 corresponds to isotropic scattering and g → 1 to purely forward scattering (i.e., no deviation). A typical value for g in tissue is 0.9. In highly scattering media, an isotropic scattering approximation can be used in which scattering is characterized by the reduced scattering coefficient μs′ = (1 − g)μs, in connection with the diffusion approximation.
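The Henyey-Greenstein distribution in Eq. (2) can be sampled in closed form by inverting its cumulative distribution, a standard step in Monte Carlo photon-transport codes. A minimal sketch; the g value is the tissue-typical 0.9 mentioned above:

```python
import numpy as np

def sample_hg_cos_theta(g, rng, n):
    """Draw n samples of cos(theta) from the Henyey-Greenstein phase
    function (Eq. 2) by inverse-CDF sampling; g = 0 falls back to the
    isotropic case. A useful check: the mean of cos(theta) equals g."""
    u = rng.random(n)
    if abs(g) < 1e-6:
        return 1.0 - 2.0 * u                       # isotropic limit
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - frac * frac) / (2.0 * g)

rng = np.random.default_rng(0)
samples = sample_hg_cos_theta(0.9, rng, 200_000)
print(samples.mean())  # close to g = 0.9, the typical tissue anisotropy
```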

Radiative Transfer Equation

Photon propagation in a medium with absorption and anisotropic scattering can be modeled by the radiative transfer equation (RTE). The equation describes the transfer of energy via absorption and scattering and can be derived from conservation-of-energy considerations:

(1/c) ∂L(r, ŝ, t)/∂t = −∇ · [L(r, ŝ, t) ŝ] − (μa + μs) L(r, ŝ, t) + μs ∫_{4π} L(r, ŝ′, t) P(ŝ′ · ŝ) dΩ′ + Q(r, ŝ, t)    (3)

where L(r, ŝ, t) is the radiance, the energy flow per unit area and solid angle (W m⁻² sr⁻¹) at position r along the direction of unit vector ŝ at time t; P(ŝ′ · ŝ) is a phase function representing the probability of a change in photon propagation direction from ŝ′ to ŝ; dΩ′ is a solid angle element around ŝ′; and Q(r, ŝ, t) represents an illumination source. The term ∇ · [L(r, ŝ, t) ŝ] models the divergence of the photon beam as it propagates, the term (μa + μs) L(r, ŝ, t) represents energy loss by absorption and scattering, and the integral over 4π represents photons scattered into the considered path.

The RTE is difficult to solve because the angular dependencies introduced by anisotropic scattering add to the already large number of independent variables. Numerical simulations by the Monte Carlo method are frequently used when the accuracy provided by the RTE is necessary. The Monte Carlo method uses a stochastic model of absorption and scattering over a large number of simulated photons to provide accurate solutions. Because each photon propagates independently of the others, the method is highly suited to parallelization.
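The stochastic model just described can be sketched in a few lines. This is not a full transport code: it assumes a semi-infinite homogeneous medium, replaces Henyey-Greenstein sampling with isotropic scattering at the reduced coefficient μs′ (the similarity simplification from the scattering section), and simply terminates faint packets without Russian roulette; all parameter values are illustrative:

```python
import numpy as np

def mean_absorption_depth(mu_a, mu_s_prime, n_photons=2000, seed=1):
    """Weighted-packet Monte Carlo sketch in a semi-infinite medium (z > 0).
    Packets take exponentially distributed free paths, deposit a fraction
    mu_a / mu_t of their weight at each interaction, and are redirected
    isotropically. Returns the weight-averaged depth (cm) of absorption."""
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s_prime
    albedo = mu_s_prime / mu_t
    depth_sum = weight_sum = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0                  # launched straight down
        while w > 1e-2:                           # terminate faint packets
            z += uz * (-np.log(rng.random()) / mu_t)
            if z < 0.0:                           # escaped through surface
                break
            depth_sum += (1.0 - albedo) * w * z   # absorbed weight at z
            weight_sum += (1.0 - albedo) * w
            w *= albedo
            uz = 1.0 - 2.0 * rng.random()         # isotropic redirection
    return depth_sum / weight_sum

# Tissue-like values: mu_a = 0.1 cm^-1, mu_s' = 10 cm^-1
print(mean_absorption_depth(0.1, 10.0))  # mean absorption depth in cm
```

Production codes parallelize exactly this independent-packet loop across many workers, which is the parallelization advantage noted above.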

Diffusion Approximation

The Monte Carlo method is computationally expensive, even though it is highly suited to parallelization, and the complexity of the RTE motivates the consideration of simpler models. Because light loses its directionality after many scattering events, a model of isotropic scattering is applicable, particularly in deep tissue layers. Indeed, the diffusion approximation has been shown to follow more exact models very accurately in highly scattering tissue and at sufficient depth. The relevant diffusion equation is stated without derivation as

(1/c) ∂ϕ(r, t)/∂t + μa ϕ(r, t) − ∇ · [D ∇ϕ(r, t)] = S(r, t)    (4)

where ϕ(r, t) is the fluence rate (W m⁻²), D = 1/[3(μa + μs′)] is the diffusion coefficient, and S(r, t) is an isotropic illumination source. Additional modeling needs to be performed for illuminated boundaries between nondiffuse and diffuse media. The diffusion equation allows the problem of photon propagation to be solved in reasonable time and therefore forms the basis for a large number of optical imaging techniques in the diffusive regime.

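For a continuous-wave isotropic point source in an infinite homogeneous medium, the time-independent form of Eq. (4) has the well-known closed-form solution ϕ(r) = P e^(−μeff r) / (4πDr), with μeff = √(μa/D). A sketch with illustrative tissue-like coefficients (boundary corrections, as noted above, are omitted):

```python
import numpy as np

def cw_point_source_fluence(r_cm, mu_a, mu_s_prime, power=1.0):
    """Steady-state diffusion-equation fluence rate at distance r (cm)
    from an isotropic CW point source in an infinite medium:
        phi(r) = P * exp(-mu_eff * r) / (4 * pi * D * r),
    with D = 1 / (3 * (mu_a + mu_s')) and mu_eff = sqrt(mu_a / D)."""
    D = 1.0 / (3.0 * (mu_a + mu_s_prime))
    mu_eff = np.sqrt(mu_a / D)
    r = np.asarray(r_cm, dtype=float)
    return power * np.exp(-mu_eff * r) / (4.0 * np.pi * D * r)

# Tissue-like near-infrared values: mu_a = 0.1 cm^-1, mu_s' = 10 cm^-1
r = np.array([0.5, 1.0, 2.0])
print(cw_point_source_fluence(r, 0.1, 10.0))  # decays steeply with depth
```

For these values μeff = √(3 μa (μa + μs′)) ≈ 1.7 cm⁻¹, i.e., the fluence drops by roughly a factor of e every ~0.6 cm, which is why detected signal levels fall so quickly with imaging depth.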
Bioluminescence

Bioluminescence is the emission of light by a chemical reaction within an organism. A prominent example is the light emitted by fireflies, in which luciferin, a substrate, is oxidized in a reaction catalyzed by an enzyme, luciferase. For in vivo imaging use (e.g., as a genetic reporter), the luciferase gene (luc) is introduced into the cells of interest. Prior to imaging, luciferin is administered to the animal – often injected intraperitoneally in mice. The luciferin, which is a small molecule, quickly distributes throughout the entire body. Bioluminescence is produced where it encounters cells expressing the luciferase gene. This combination results in emitted light with a relatively broad spectrum. Other bioluminescence systems are available for imaging purposes, but the firefly luciferase/luciferin system is the most prominent.


Because of the relatively low light yield of bioluminescence and its nonoptimal emission wavelength for tissue penetration, the light that escapes the subject is typically very weak; detecting it requires a dark box or room, a sensitive camera, and long integration times. Bioluminescence imaging has the advantage of relative simplicity: animals are typically placed in a light-tight box and imaged with a camera. No excitation light or filters are required. Common applications include monitoring of tumor growth in laboratory mice, in vivo tracking of cells labeled in vitro, and in vivo profiling of gene expression [2].

Fluorescence

Fluorescence is considered primarily in its role as a source of optical imaging contrast. It is a widely established tool in microscopy which, thanks to the discovery and development of fluorophores in the far-red and near-infrared wavelength ranges, has also emerged as an important component of deep-tissue optical imaging. In fluorescence, a molecule (fluorophore) emits a lower-energy (i.e., longer wavelength) photon shortly after absorption. The difference in energy is accounted for by rapid thermal relaxation before the fluorescence photon is emitted. Fluorophores thus have a wavelength gap between their peak absorption (excitation) wavelength and their peak emission wavelength. This gap is referred to as the Stokes shift. The Stokes shift provides a means for highly sensitive detection of fluorescence, since the emitted light can be separated from the excitation light with optical filters. Further parameters to consider are the molar extinction coefficient, which gives a measure of how much excitation light the fluorophore absorbs, and the quantum efficiency, which is the ratio of photons emitted to photons absorbed. Three sources of fluorescence are considered for imaging with diffusive light: exogenous fluorophores, fluorescent proteins, and tissue autofluorescence.

Exogenous Fluorophores

Exogenous fluorophores for in vivo imaging are commonly administered by intravenous injection but can also be introduced by other means, such as ingestion. They can be classified as follows:

• Fluorophores that have no specific targets. These can be used to highlight the blood pool or the lymphatic system. Often, simple dyes with no added functionalization are applied for these purposes. It should be noted that indocyanine green (ICG) and methylene blue (MB), which both fall into this category, are clinically approved and can thus be used in clinical fluorescence imaging applications.

• Targeted fluorescent agents. These are used to tag ligands of specific biological targets (e.g., cell surface receptors) in order to highlight them. Potential ligands include small molecules, peptide sequences, proteins, and entire antibodies. The protocol is typically to administer the agent, wait for binding and elimination of unbound agent, and then perform imaging. While such agents are mainly used in the laboratory setting in small-animal imaging, first clinical investigations have been reported [3].


• Activatable fluorescent agents. These produce little emission when first administered but start fluorescing when activated by specific biological processes, usually cleavage by enzymes. This activation of fluorescence commonly relies on some form of dequenching, i.e., by separation of fluorophores which are held very close together on a cleavable substrate. These agents have so far been confined to the preclinical setting [4].

• Fluorescent tags on nanoparticles or cells. Fluorophores can be used to label nanoparticles or cells so that they can later be tracked in vivo.

Fluorescent Proteins

Reporter genes encoding fluorescent proteins have become a standard tool in biological research, based on the ubiquitous green fluorescent protein (GFP) and related proteins of other colors [5]. The concept is relatively simple: genes encoding the fluorescent protein, a fluorophore, are introduced into the target cells by viral or other means. Their restriction to the visible spectrum long confined fluorescent proteins to microscopy. However, the recent emergence of red-shifted and near-infrared fluorescent proteins has made it possible to use them in deeper tissues [6]. Fluorescent proteins are used for cell labeling for subsequent in vivo monitoring and as reporters of gene expression. Protein-protein interactions can be studied using Förster resonance energy transfer (FRET), in which the spectral overlap between the emission of one fluorescent protein and the absorption of another enables energy coupling at nanometer-range distances.

Tissue Autofluorescence

Autofluorescence refers to the endogenous fluorescence of tissue molecules. In much of fluorescence imaging practice, it is considered an unwanted parasitic signal that can mask the signal from the fluorophore of interest. While autofluorescence is weaker in the near infrared than in the visible spectrum, near-infrared light propagates much further through tissue, and autofluorescence can therefore remain a significant parasitic source in deep-tissue imaging. Autofluorescence is also exploited in a number of applications, including the imaging of lipofuscin in the retina for ophthalmic purposes.

Optical Imaging with Diffusive Light

Simple Photographic Approaches

The simplest technique for optical imaging is to use a camera to take photographs. In the case of bioluminescence imaging, the system and subject must be within a light-tight box or dark room. A sensitive, low-noise camera, such as a cooled electron-multiplying charge-coupled device (EMCCD), is often applied to maximize the SNR. Clearly, the absorption and scattering of light in tissue limit bioluminescence imaging to relatively superficial tissue (a few mm). The strong attenuation caused by absorption and scattering causes signals from deeper regions to be overwhelmed by


Fig. 2 Configurations for planar fluorescence imaging. In epi-illumination mode, the tissue is illuminated from the same side as the detector (camera) is placed on. In transillumination mode, illumination and detection are on opposite sides of the tissue

signals from the surface, that is, the signals are surface-weighted. This makes it impossible to compare bioluminescence signals from different depths in such a system. This photographic or planar imaging system can be extended to fluorescence imaging by introducing illumination at the appropriate wavelength to excite the fluorophore of interest and adding an emission filter to the camera to reject excitation light and allow only the fluorescence emission to enter the camera. Suitable illumination sources can be continuous wave (CW) laser diodes or appropriately filtered white-light sources. The most common configuration for planar fluorescence imaging is epi-illumination (reflectance) mode (Fig. 2). In this mode, the excitation light is on the same side of the imaged tissue as the camera. Strong attenuation from absorption and scattering and blurring from scattering apply to both the excitation light as it propagates into the tissue and the fluorescence emission light as it propagates toward the tissue surface on its way to the camera. Again, as with bioluminescence imaging, this configuration is therefore strongly surface-weighted. This limits the attainable penetration depth, even with near-infrared light, to a few mm and severely blurs signals that originate in deeper tissue layers. To reduce surface weighting, imaging can in some cases be performed in transillumination (or transmission) mode (Fig. 2, right). In this mode, the illumination and camera are on opposite sides of the tissue. No matter how deep the fluorescence signal originates, the light, in the form of either excitation or emission, must travel through the whole thickness of the tissue. However, this configuration requires that there is access to the tissue from two sides, which are close enough together that light can propagate through the thickness between them at detectable levels. This requirement restricts its applicability to small animals and extremities such as the fingers


and the breasts. A further requirement implicit in this argument is that the thickness of the tissue is uniform throughout, which can be met by mechanical manipulation and/or an imaging chamber filled with fluid that matches the tissue optical properties.

It is important to note that neither planar bioluminescence nor fluorescence imaging is quantitative, since absorption and scattering of light from deep tissue layers are not taken into account. Spatially, they represent a projection of a volume onto two dimensions and are therefore not volumetric. Furthermore, variations in the optical properties of the tissue being imaged can lead to inaccuracies in deep-tissue imaging results. For example, highly absorbing regions will allow less fluorescence light to escape than less absorbing regions. A highly vascularized tumor may therefore display less fluorescence signal than a less vascularized tumor, even if they contain the same fluorophore concentration. These inaccuracies can be reduced to a certain extent by normalization methods. One such approach is to acquire an image of the excitation light, that is, without the emission filter, and divide the fluorescence image by it:

I_norm = I_fluo / I_intr    (5)

where I_norm is the resulting normalized image, I_fluo is the raw fluorescence image, and I_intr is the excitation-light (intrinsic) image. The rationale behind this normalization is that the intrinsic image will have lower intensity in regions that are highly attenuating, and dividing by these values will therefore correct the fluorescence image values. However, even with appropriate normalization, planar optical imaging approaches in deep tissue remain neither quantitative nor volumetric, but their simplicity and potential for real-time operation have led to their widespread adoption in biomedical research. Such methods are suitable when the presence or absence of a signal is sufficient for the application, i.e., when accurate quantification is unnecessary.
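The normalization in Eq. (5) is applied pixelwise; in the sketch below, the small epsilon guard against division by zero in dark regions is an added practical detail, not part of the original formulation:

```python
import numpy as np

def normalize_planar_fluorescence(i_fluo, i_intr, eps=1e-12):
    """Pixelwise normalized image I_norm = I_fluo / I_intr (Eq. 5).
    Regions where the intrinsic (excitation-light) image is dim are
    boosted, partially correcting for attenuation."""
    i_fluo = np.asarray(i_fluo, dtype=float)
    i_intr = np.asarray(i_intr, dtype=float)
    return i_fluo / np.maximum(i_intr, eps)

# Synthetic example: same fluorophore, one region twice as attenuated
i_fluo = np.array([[1.0, 0.5]])   # raw fluorescence
i_intr = np.array([[2.0, 1.0]])   # excitation light reaching the camera
print(normalize_planar_fluorescence(i_fluo, i_intr))  # both pixels -> 0.5
```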

Optical Tomography in the Diffusive Regime

Planar images can be considered as projections of a three-dimensional tissue volume onto a two-dimensional image. Volumetric imaging of tissue requires multiple projections and an algorithm for image reconstruction. Such tomographic approaches can be compared to X-ray CT or other computed tomography modalities. In the case of X-ray CT, X-rays are assumed to travel in straight lines (no scattering), and the intensity measured after propagation through tissue is considered a line integral over the X-ray absorption properties along that line. In diffusive optical imaging, the situation is more complicated: strong scattering means that photons emerging from the tissue could have traveled a complicated path through much of the tissue volume, being scattered multiple times on the way. Image


Fig. 3 Image reconstruction for optical tomography in the diffusive regime

reconstruction is therefore more complicated than in X-ray CT. The general approach to the problem of image reconstruction is illustrated in Fig. 3. Reconstruction is based on a forward model, which describes how photons propagate through the tissue (e.g., using the diffusion model). The forward model is combined with the measurement data to build a matrix formulation of the imaging problem. The matrix is then inverted to arrive at the image. The output image can be a fluorophore distribution, if that is the objective, or a distribution of optical properties, for example, absorption. The imaging system and forward model must be adapted to the type of information that should be produced.

Several different approaches have been developed. The first major difference between optical tomography approaches is the treatment of the time domain, where three major categories exist: time domain imaging, frequency domain imaging, and time-independent imaging. Time domain imaging relies on time-resolved photon detection and can potentially provide the most information. It relies on ultrafast lasers for excitation and time-resolved detection (e.g., photomultiplier tubes, avalanche photodiodes, or gated intensified CCDs). Frequency domain imaging can be performed with excitation from amplitude-modulated laser diodes at a particular frequency. The simplest approach, time-independent tomography, can be performed with a CW laser and an integrating detector, such as a CCD camera. Besides the imaging hardware, the forward model applied depends on the treatment of the time domain in the measurements. For example, time-independent implementations may apply time-independent versions of the diffusion approximation in the forward model.

Individual projections in optical tomography are formed by different combinations of source and detector positions.
To gain as much spatial information as possible, the illumination is concentrated on a small spot on the tissue surface to form an approximate point source. This can be implemented by focusing or collimating


Fig. 4 Illumination/detection modes in optical tomography

optics in free space or by positioning a fiber or fiber-bundle output on the tissue surface. Free-space source positions can be scanned mechanically to populate a set of multiple positions, while an array of fiber outputs can be read out in parallel. Detector positions are implemented either by multiple collection fibers placed on the tissue surface or by individual pixels on a sensor chip (e.g., CCD). Illumination and detection can be performed in epi-illumination or transillumination mode, where transillumination provides more information, especially regarding depth (Fig. 4). Additionally, projections around the whole volume of the tissue can be acquired either by using different source/detector fibers or by rotating the instrumentation or subject through 360°. While this produces the most highly resolved information, limited views and epi-illumination can be used in cases where access to the tissue from multiple angles is not practical.

Other variables in optical tomography in the diffusive regime are the contrast mechanism and whether or not the implementation applies a multispectral approach. In terms of contrast, an optical tomography system can be implemented for imaging of fluorescence (see section "Fluorescence Molecular Tomography") or imaging of endogenous tissue optical properties (absorption and scattering). Separating absorption and scattering properties generally requires time or frequency domain imaging. Multispectral imaging, i.e., excitation with multiple different optical wavelengths, can be applied to image multiple fluorophores with distinct spectra or to distinguish different tissue absorbers from one another, e.g., hemoglobin in its different oxygenation states.

In general, strong photon scattering in the diffusive regime causes the inverse problem associated with optical tomography to be ill-posed.
This is intuitive because detected photons propagating from sources to detectors may travel through much of the imaged volume, and the overall probability density functions for the light fluence are therefore quite diffuse. The problem requires regularization and is inherently limited in terms of spatial resolution. Typical spatial resolutions obtainable in optical tomography at a depth of 1 cm are on the order of 1 mm, and they degrade further with increasing depth. However, the molecular specificity provided by optical imaging can still provide valuable information. In comparison with planar imaging approaches, tomographic techniques have the advantage that they are

8

Optical and Optoacoustic Imaging in the Diffusive Regime

233

quantitative and volumetric. However, this is gained at the cost of complexity and real-time operation. Depending on the particular implementation, significant scanning time may be required, and image reconstruction can take several minutes.

Fluorescence Molecular Tomography
Fluorescence molecular tomography (FMT) is a form of optical tomography in the diffusive regime that aims to recover the fluorophore distribution in tissue. The modality has primarily been used in the context of preclinical biomedical research, in particular on laboratory mice. FMT is in the class of time-independent imaging, utilizing CW lasers for excitation. Modern systems use CCDs to capture projection data.

We consider the forward model applied in FMT. The diffusion approximation is applied as it provides an accurate model of light propagation at the scale of several millimeters to centimeters, which is suited to imaging mice. The time-independent diffusion equation that describes the propagation of the fluorescence emission light can be written as

$$\mu_a U_m(\vec{r}) - \nabla \cdot \left[ D \nabla U_m(\vec{r}) \right] = n(\vec{r})\, U_x(\vec{r}) \qquad (6)$$

where Um(r) is the emission (fluorescence) photon density (m⁻³), Ux(r) is the excitation light photon density, and n(r) is a function that is proportional to the fluorophore concentration. Assuming known optical properties, the diffusion equation can be solved by a Green's function approach, which leads to

$$U_m(\vec{r}) = \int_V G(\vec{r}, \vec{r}\,')\, n(\vec{r}\,')\, U_x(\vec{r}\,')\, d\vec{r}\,' \qquad (7)$$

where G(r, r′) are Green's functions (in the volume V), which can be computed by numerical methods. The same Green's functions can be used to describe the propagation of excitation light and emission light, provided that the optical properties are approximately equal at both wavelengths. In a simplifying approximation, optical properties can be assigned known realistic values. A normalization equivalent to that described for planar imaging is applied, that is, Um/Ux, which has the effect of eliminating variations in source intensities as well as reducing the sensitivity to optical property variations. For one source-detector pair in the discretized volume Ω, the measurement can be formulated as follows:

$$\frac{U_m}{U_x} = \Delta V \sum_{\vec{r} \in \Omega} \frac{G(\vec{r}_s, \vec{r})\, n(\vec{r})\, G(\vec{r}, \vec{r}_d)}{G(\vec{r}_s, \vec{r}_d)} \qquad (8)$$


Fig. 5 Green’s function in two dimensions describing the paths photons take from a point source to a detector in a diffusive medium. The involvement of many image pixels in the propagation prevents the inverse problem from being well posed and limits the attainable spatial resolution in optical tomography

where G(rs, r) is the Green's function describing the propagation from the source position rs to each point; n(r) G(r, rd) describes the propagation of fluorescence light from each point, weighted by the local fluorophore concentration, to the detector; G(rs, rd) describes the propagation from source to detector for normalization (Fig. 5); and ΔV is the volume enclosed by the voxel. These equations are used to build the FMT forward model, which is inverted, typically using an iterative approach with regularization.

FMT provides the fluorescence biodistribution, which can be applied in conjunction with target-specific fluorescent imaging agents to provide valuable molecular imaging. However, FMT does not provide any anatomical information, since the tissue optical property values are generally fixed inputs. It is therefore natural to combine FMT with anatomical imaging modalities such as MRI and CT in a dual-modality approach. This can be achieved either with imaging cassettes to hold the imaging subject that can be inserted sequentially into each modality or by combining the two modalities in a hybrid system. Besides the added value of coregistered molecular and anatomical images, images from MRI or CT can be used to improve the accuracy of the FMT forward model, for example, by assigning different optical properties to different tissue types, or to tune regularization techniques based on prior knowledge of the fluorescence distribution [7] (Fig. 6).
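To make the normalized Born measurement and its inversion concrete, here is a small numerical sketch. It assumes an infinite homogeneous medium, so the CW diffusion Green's function takes the closed form exp(−μ_eff d)/(4πDd); the optical properties, grid, source/detector geometry, and Tikhonov regularization weight are all illustrative choices, not a description of any real FMT system:

```python
import numpy as np

mu_a, mu_s_prime = 0.02, 1.0           # absorption / reduced scattering (1/mm)
D = 1.0 / (3.0 * (mu_a + mu_s_prime))  # diffusion coefficient (mm)
mu_eff = np.sqrt(mu_a / D)             # effective attenuation (1/mm)

def green(r1, r2):
    """CW diffusion Green's function for an infinite homogeneous medium."""
    d = np.linalg.norm(np.asarray(r1, float) - np.asarray(r2, float))
    return np.exp(-mu_eff * d) / (4.0 * np.pi * D * max(d, 1e-6))

xs = np.linspace(2.0, 8.0, 7)
voxels = [(x, y) for x in xs for y in xs]     # 7x7 voxel grid (mm)
sources = [(x, 0.0) for x in xs]              # sources on one face
detectors = [(x, 10.0) for x in xs]           # detectors on the opposite face
dV = 1.0                                      # voxel volume (mm^3)

# Normalized Born forward matrix: one row per source-detector pair (Eq. 8)
A = np.array([[dV * green(s, v) * green(v, d) / green(s, d) for v in voxels]
              for s in sources for d in detectors])

# Simulate a fluorophore in the central voxel and invert with Tikhonov
# regularization (iterative solvers are used in practice)
n_true = np.zeros(len(voxels))
n_true[24] = 1.0                              # center of the 7x7 grid
b = A @ n_true
lam = 1e-6 * np.trace(A.T @ A) / A.shape[1]
n_rec = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
print("relative residual:", np.linalg.norm(A @ n_rec - b) / np.linalg.norm(b))
```

A practical implementation would instead compute the Green's functions numerically for the actual tissue boundary and invert the much larger system iteratively.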

Applications of Optical Imaging in the Diffusive Regime
Applications of optical imaging in the diffusive regime span a wide range of fields in preclinical and basic research and clinical imaging. Preclinical research on mouse models in particular is aided by established, commercially available planar and tomographic systems. Imaging of gene expression and labeled cells is commonly performed by bioluminescence or fluorescent


Fig. 6 An FMT-CT image of fluorescence originating in tumors in the lungs of a mouse model [7]. A fluorescent agent targeting integrins was applied. The fluorescence signal is imaged by FMT and shown in gold. The bones and lungs are made visible by CT and shown in gray and blue-green, respectively

proteins. Exogenous fluorescent agents which bind to specific targets or are activated by specific enzymes are used to study the biological mechanisms underlying disease and investigate the effect that therapeutics have on these mechanisms. In the clinical domain, there is more focus on imaging of endogenous biomarkers because the use of exogenous agents in humans requires the costly establishment of safety by means of toxicity studies in each case. Frequency domain optical tomography in particular has been studied for the characterization of breast tumors, the diagnosis of arthritis in finger joints, and the detection of brain lesions in neonates, all by multispectral illumination and reconstruction of absorption and scattering coefficients as well as the related hemoglobin concentration and oxygen saturation. More recently, the first target-specific fluorescent agents have been considered for use in humans. The first in-human use of a tumor-targeted fluorescent agent to highlight tumor tissue during surgery, to allow surgeons to better discriminate between healthy and cancerous tissues, has been reported [3]. A future trend will be the utilization of approved targeted drugs, such as monoclonal antibody-based therapies, tagged with fluorophores, for molecular intraoperative fluorescence imaging.

Optoacoustic Tomography
Optoacoustic tomography is an emerging modality aimed at high-resolution imaging of optical absorption in deep tissue. As described in the previous section, optical tomography in the diffusive regime has a spatial resolution limited by photon scattering, typically around 1 mm at 1 cm depth. This is a problem inherent to diffuse optical imaging: photon scattering makes it difficult to determine where detected photons originated, since they may have been scattered several times during their propagation through tissue. Optoacoustic imaging is attractive in this respect because it produces optical contrast at a high spatial resolution independent of photon scattering. It exploits the photoacoustic effect, whereby thermal expansion due to transient optical absorption gives rise to ultrasound waves which propagate


outward and can be detected noninvasively. Because ultrasound scatters orders of magnitude less than light, the sources of optical absorption giving rise to the ultrasound waves can be pinpointed at a resolution limited by ultrasound detection. Optoacoustic tomography (also referred to as photoacoustic tomography) typically applies short (ns) laser pulses to the tissue and uses ultrasound detectors to measure the time-resolved optoacoustic signals.

Signal Generation
The physical basis of optoacoustic signal generation is considered here, based on some simplifying assumptions that allow the energy deposition, and hence the signal generation, to be treated as an instantaneous event. The first assumption is that thermal confinement holds, meaning that the energy deposited does not diffuse away from the deposition site during signal generation. The condition for thermal confinement can be described as

$$\tau_p \ll \tau_{th} = \frac{d_c^2}{\alpha_{th}} \qquad (9)$$

where τp is the laser pulse duration, τth is the thermal relaxation time, dc is the characteristic dimension of the imaged structure, and αth is the thermal diffusivity (m² s⁻¹). For typical values of αth and resolutions down to single microns and beyond, thermal confinement will hold if ns-range pulse durations are used. A second assumption is that of stress confinement, which means that the stress generated by thermal expansion should not propagate outward during the process of energy deposition. The condition can be expressed as

$$\tau_p \ll \tau_s = \frac{d_c}{v_s} \qquad (10)$$

where τs is the stress relaxation time and vs is the speed of sound in the medium. As an example, using the speed of sound in water of approximately 1,500 m s⁻¹ and a pulse duration of 10 ns, the stress will propagate 15 μm during the pulse, setting a limit on the attainable spatial resolution. Under these conditions, the optoacoustic excitation pulse can be considered instantaneous. Considerations of thermal expansion then result in an expression for the initial pressure increase p0 generated by the illumination pulse at each point in space:

$$p_0 = \Gamma \mu_a \phi \qquad (11)$$

where μa is the local absorption coefficient and ϕ is the local fluence (J cm⁻²). Γ is referred to as the Grüneisen coefficient, which describes the thermodynamic properties of the material. The initial pressure distribution is the quantity to be imaged.
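The two confinement conditions and the 15 μm stress-propagation example can be checked numerically; the thermal diffusivity figure below is a typical soft-tissue value assumed only for illustration:

```python
# Numerical check of the confinement conditions (Eqs. 9-10) and the
# 15-um stress-propagation example from the text.
tau_p = 10e-9          # laser pulse duration: 10 ns
v_s = 1500.0           # speed of sound in water (m/s)
alpha_th = 1.4e-7      # thermal diffusivity of soft tissue (m^2/s), assumed
d_c = 100e-6           # characteristic feature size: 100 um

tau_th = d_c ** 2 / alpha_th   # thermal relaxation time (Eq. 9)
tau_s = d_c / v_s              # stress relaxation time (Eq. 10)
print("thermal confinement holds:", tau_p < tau_th)
print("stress confinement holds:", tau_p < tau_s)

# distance the stress wave propagates during the pulse (limits resolution)
print(f"stress travels {v_s * tau_p * 1e6:.0f} um during a 10 ns pulse")
```

For these values both conditions are comfortably satisfied, and the propagation distance reproduces the 15 μm figure quoted above.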


Since ϕ varies smoothly in space because of diffusive photon transport, sharp image features depend on the spatial distribution of the absorption coefficient. Optoacoustic imaging therefore represents optical absorption contrast.

Signal Propagation
The initial pressure distribution in space p0(r) is an initial condition from which pressure waves propagate outward according to a wave equation:

$$\left( \nabla^2 - \frac{1}{v_s^2} \frac{\partial^2}{\partial t^2} \right) p(\vec{r}, t) = -\frac{p_0(\vec{r})}{v_s^2} \frac{d\delta(t)}{dt} \qquad (12)$$

where p(r, t) is the space- and time-dependent pressure. A Green's function approach is applied to arrive at an expression for the pressure

$$p(\vec{r}, t) = \frac{1}{4\pi v_s^2} \frac{\partial}{\partial t} \left[ \frac{1}{v_s t} \int d\vec{r}\,'\, p_0(\vec{r}\,')\, \delta\!\left( t - \frac{|\vec{r} - \vec{r}\,'|}{v_s} \right) \right] \qquad (13)$$
The essential insight provided by this equation is that the pressure measured at a point in space will at each point in time be an integral of the initial pressure distribution on a sphere defined by the time it takes at the speed of sound vs for the pressure wave to propagate to that point. The radius of that sphere is then vst. A time-resolved measurement of pressure is therefore a series of integrals over concentric spheres. Measurements on points around a boundary can be used to reconstruct the initial pressure distribution in the enclosed region. The pressure signal originating from a compact absorption feature is considered. A simulation of such a signal is shown in Fig. 7. The feature, which represents a distribution of deposited energy or the resulting initial pressure, is a paraboloid restricted to support within a circle of 2 mm radius (Fig. 7a). The pressure is measured at a distance of 10 mm from the center of the feature. The time-resolved pressure signal is bipolar (Fig. 7b). By multiplication by the speed of sound in the medium, the time axis can be changed to a distance axis, representing the distance from the point of measurement (Fig. 7b). The signal is confined to the support of the feature and encodes its geometrical dimensions. It is also notable that the frequency of the signal depends strongly on the dimensions of the feature, with small dimensions resulting in higher signal frequencies.
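The bipolar shape and the encoding of feature size can be reproduced with the textbook closed-form signal of a uniformly absorbing sphere, a simpler feature than the paraboloid of Fig. 7; all parameter values here are illustrative:

```python
import numpy as np

# Closed-form optoacoustic signal from a uniformly absorbing sphere of
# radius R observed at distance d: the classic bipolar "N-shape",
# p(t) = p0 * (d - v_s t) / (2 d) for |d - v_s t| <= R, and 0 otherwise.
v_s = 1.5e3        # speed of sound (m/s)
R = 1e-3           # sphere radius: 1 mm
d = 10e-3          # detector distance: 10 mm
p0 = 1.0           # initial pressure inside the sphere (arbitrary units)

t = np.linspace(0.0, 2 * d / v_s, 4000)
arg = d - v_s * t
p = np.where(np.abs(arg) <= R, p0 * arg / (2 * d), 0.0)

# The signal is bipolar and confined to the time window between the
# arrivals from the near and far surfaces: width 2R / v_s.
support = t[p != 0]
print("bipolar:", p.max() > 0 and p.min() < 0)
print("support width (us):", (support.max() - support.min()) * 1e6)
```

The support width of about 1.33 μs corresponds to 2R/v_s, illustrating how the time-resolved signal encodes the geometrical dimensions of the feature.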

Fig. 7 Simulation of an optoacoustic signal. (a) The input image is an initial pressure distribution of paraboloid intensity confined to a circle of 2 mm diameter. (b) The time-resolved optoacoustic (pressure) signal measured at a point in space 10 mm away from the center of the feature is bipolar. (c) By converting the time of flight to distance, it can be observed that the signal encodes the dimensions of the initial pressure feature

Image Reconstruction
Image reconstruction in optoacoustic tomography aims to recover the initial pressure distribution p0(r), which is proportional to the local absorption properties. Several

approaches to image reconstruction have been investigated and are described in the literature. Of these, two categories are considered here. The first comprises delay-and-sum or backprojection approaches, which exploit the fact that the pressure measurements at each time point are integrals on spheres (or circles in two dimensions). These are based on analytical formulas and allow implementation by fast algorithms. However, even if the formulas applied provide an exact solution to the theoretical imaging problem, they are not conveniently adaptable to nonideal imaging systems, which necessarily deviate from several assumptions, such as pressure detection at infinitesimal points in space and ideal detector placement. The second category of reconstruction algorithms is based on linear models of the imaging problem, in which real characteristics of the system can be incorporated. These models can be built by a theoretical consideration of the forward problem in the particular detection geometry applied, by characterization of the system using calibration measurements of point sources, or by a combination of both


approaches [8]. The result is a model matrix that describes the pressure signals obtained for a given image. Image reconstruction then involves an inversion of this forward problem. The forward problem can be described as follows:

$$A x = b \qquad (14)$$

where A is the model matrix (or forward matrix), which describes how the system transforms an initial pressure distribution into time-resolved pressure measurements at points around the imaged subject. x is then made up of spatial samples of the initial pressure distribution in vectorized form, with the number of elements equal to the number of image pixels. b is made up of time samples of the pressure signals at measurement locations around the subject, also in vectorized form, with the number of elements equal to the number of time samples multiplied by the number of points at which the pressure is measured (the number of projections). Inversion to obtain images from measured pressure fields can then be performed either by computing the pseudoinverse of A, yielding a least-squares-error solution, or by applying iterative methods. It is of particular importance to note that, in general, provided that the detection geometry is a reasonable approximation of the ideal, the inverse problem in optoacoustic imaging is far better posed than the inverse problem in optical tomography in the diffusive regime, because ultrasound waves scatter much less than photons, such that they can be assumed to travel in straight lines.
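As a minimal sketch of the model-based category, the toy example below builds a model matrix in which each image pixel contributes a delta arrival at its time of flight to each detector, ignoring spherical spreading and the time derivative of Eq. 13, and then inverts the resulting linear system by least squares. The geometry and discretization are assumptions for illustration only:

```python
import numpy as np

v_s = 1.5  # speed of sound (mm/us)

# 11x11 image grid centered at the origin, detectors on a 20 mm circle
grid = np.linspace(-5, 5, 11)
pixels = np.array([(x, y) for x in grid for y in grid])
angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
dets = 20.0 * np.stack([np.cos(angles), np.sin(angles)], axis=1)

t = np.arange(8.0, 18.0, 0.05)  # time samples (us)

# model matrix: for each detector, each pixel maps to the time bin of its
# time of flight (a crude delta-arrival forward model)
rows = []
for det in dets:
    tof = np.linalg.norm(pixels - det, axis=1) / v_s   # time of flight (us)
    rows.append(np.abs(t[:, None] - tof[None, :]) < 0.025)
A = np.vstack(rows).astype(float)   # (n_det * n_t) x n_pixels

x_true = np.zeros(len(pixels))
x_true[60] = 1.0                    # point absorber at the grid center
b = A @ x_true                      # simulated measurements (Eq. 14)
x_rec, *_ = np.linalg.lstsq(A, b, rcond=None)
print("recovered peak at true pixel:", x_rec.argmax() == 60)
```

With noiseless, consistent data and enough independent projections, the least squares solution localizes the point absorber; real systems add noise, finite bandwidth, and finite detector apertures, which is precisely what model-based approaches are designed to incorporate.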

Instrumentation
A basic optoacoustic imaging system combines pulsed laser excitation with ultrasound detection to capture the time-resolved optoacoustic signals at multiple spatial positions (Fig. 8).

Optical Excitation
Although other time-varying approaches have been considered, the most prominent excitation scheme in optoacoustic tomography is pulsed. In particular, pulse durations in the 10 ns range are common, provided by Q-switched Nd:YAG lasers. Pulse repetition rates are typically tens of Hz for tomographic imaging, thus governing the imaging frame rate. Multispectral imaging requires the availability of multiple excitation wavelengths, which can be provided by an optical parametric oscillator (OPO) or a Ti:sapphire laser.

Ultrasound Detection
The detection of time-resolved pressure signals in the ultrasound frequency range can be achieved by means of piezoelectric transducers like those used in conventional ultrasound imaging. These technologies are well established and robust and probably find use in the majority of optoacoustic imaging systems today. A disadvantage of piezoelectric technology is that it is highly resonant and therefore ideally captures only a small range of ultrasound frequencies. Optoacoustic signals,


Fig. 8 A basic optoacoustic imaging system

however, are broadband in nature and signal frequencies are related to the spatial frequencies of the absorption features. Severely limiting the detection bandwidth will therefore result in imaging artifacts and the inability to visualize features that primarily emit frequencies outside of the detection bandwidth. Generally, spatial point detection is challenging and detectors often have significant diameters, thereby reducing the attainable spatial resolution. Systems can be implemented based on mechanically scanning an ultrasound detector across multiple spatial sampling positions or by multielement detectors that measure at multiple positions in parallel or a combination of both approaches. The ultrasound detection geometry is critical for the image quality and imaging performance. Accurate reconstruction relies on sufficient spatial sampling of the pressure field. Common implementations aim to sample the pressure on a circle (in two-dimensional imaging) or sphere (three-dimensional) around the subject. Measurement points should be spaced according to the required spatial resolution in accordance with sampling theory. Clearly, three-dimensional optoacoustic imaging then requires many detector elements and/or long scanning times for sufficient spatial sampling. It is therefore common to reduce the problem to one of two dimensions, that is, to image a plane or a slice through the subject. This can be achieved by applying focused ultrasound detectors, so that signals are predominantly captured from one plane. Figure 9 shows the imaging axes in the case of circular detection with focusing on a slice (elevational focusing). In that case, there are three different spatial resolutions, namely, in the axial, lateral, and elevational directions. Derivations of the expressions for these resolutions can be found in the literature [9]. The axial resolution is related to the maximum frequency of the time-resolved ultrasound signal which the detectors can capture:


Fig. 9 Axes for spatial resolution in a circular detection geometry with elevational focusing on a plane

$$R_{ax} \approx \frac{0.8\, v_s}{f_c} \qquad (15)$$

where fc is the cutoff frequency of the detection system, usually limited by the detector itself. The lateral resolution is a function of the detection geometry:

$$R_{lat}(r) \approx \frac{D\, r}{r_0} \qquad (16)$$

where r is the distance from the optoacoustic source to the center of the circle, r0 is the radius of the detection circle, and D is the diameter of the detector. This can be interpreted as the resolution tending toward the diameter of the detector as sources approach the detection circle. Reducing the detector diameter or placing the detectors farther from the subject can therefore improve the lateral resolution. The elevational resolution in the geometry under consideration is defined by the focusing of the detector, which can be achieved using an acoustic lens or by shaping the detector surface. The elevational resolution is estimated by

$$R_{el} \approx 1.02\, \frac{F\, v_s}{f\, D_e} \qquad (17)$$

where F is the focal length of the detector, f is the ultrasound frequency, and De is the diameter of the transducer in the elevational direction. Since the elevational resolution depends on the ultrasound frequency, it is difficult to define a single elevational resolution.
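Evaluating the three resolution estimates for an illustrative circular-detection system gives a feel for the magnitudes involved; none of the parameter values below describe a specific instrument:

```python
# Resolution estimates (Eqs. 15-17) for assumed, illustrative parameters.
v_s = 1500.0      # speed of sound (m/s)
f_c = 7e6         # detection cutoff frequency (Hz)
D = 3e-3          # detector diameter (m)
r0 = 40e-3        # radius of the detection circle (m)
r = 10e-3         # distance of source from the center (m)
F = 40e-3         # elevational focal length (m)
f = 5e6           # ultrasound frequency for the elevational estimate (Hz)
De = 15e-3        # elevational detector diameter (m)

R_ax = 0.8 * v_s / f_c              # axial resolution (Eq. 15)
R_lat = D * r / r0                  # lateral resolution (Eq. 16)
R_el = 1.02 * F * v_s / (f * De)    # elevational resolution (Eq. 17)
print(f"axial: {R_ax * 1e6:.0f} um, lateral: {R_lat * 1e6:.0f} um, "
      f"elevational: {R_el * 1e6:.0f} um")
```

Note how the elevational estimate depends on the ultrasound frequency f, which is why, as stated above, a single elevational resolution is difficult to define for broadband signals.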


Multispectral Optoacoustic Tomography
Contrast in optoacoustic images depends on optical absorption (Eq. 11). However, there can be multiple sources of optical absorption contrast, and it is not directly possible to determine to which potential source in particular a signal can be attributed. To overcome this challenge, excitation at multiple wavelengths can be applied, and sources of contrast can be identified by means of their known absorption spectra. Multispectral optoacoustic tomography (MSOT) can thus be considered a form of optical absorption spectroscopy [10]. The basic technique can be described as follows. First, the tissue in question is imaged at multiple wavelengths. The wavelengths applied should be selected in such a way that the different absorbers can be distinguished from one another. The resulting set of single-wavelength optoacoustic images is then fed into a spectral unmixing algorithm, whose purpose is to convert the set of images at single wavelengths into a set of images of specific absorbers. A number of different algorithms have been proposed for spectral unmixing, and they can be broadly categorized according to whether they utilize explicit prior knowledge of the absorption spectra of contrast sources in the tissue or find unknown sources by statistical methods.

Spectral Unmixing with Known Source Spectra
The simplest approach to spectral unmixing is based on prior knowledge of the potential absorbers present in the imaged tissue and their absorption spectra. The images at individual wavelengths are then analyzed on a per-pixel basis, fitting the measured optoacoustic spectra to a linear combination of potential source spectra. The problem can be described as follows, for each image pixel:

$$p_0(\lambda_i) = k \sum_{j=1}^{M} e_j(\lambda_i)\, c_j \qquad (18)$$

where p0(λi) is the optoacoustic image pixel value (initial pressure distribution) at each of the considered optical wavelengths λi ∈ {λ1, . . . , λN} and cj are the concentrations of M different absorbers with known absorption spectra ej(λi); k is a constant factor up to which the concentrations can be solved for. This formulation results in an equation for each of the N wavelengths. The system of linear equations is typically solved using the least squares method. The nonnegative nature of absorption in this scenario motivates the use of a nonnegativity constraint in the least squares algorithm.

There is an assumption implicit in this description, which is that the light fluence on which the initial pressure depends (Eq. 11) does not vary with wavelength. However, this does not hold if the optical properties of the tissue, which govern light transport, vary with wavelength, as they must for spectroscopic approaches to be useful. Either the wavelength dependence of the fluence must be corrected for by other means prior to spectral unmixing, or it must be considered insignificant in light


of the required accuracy of the unmixing results. When the fluence is strongly wavelength dependent, the resulting optoacoustic spectra, each point of which is proportional to the product of the fluence and the absorption coefficient, will not match the fluence-independent source spectra. This leads to inaccurate results.
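Per-pixel unmixing of this kind reduces to a small linear least squares problem. The sketch below uses two made-up absorber spectra as stand-ins for oxy- and deoxyhemoglobin and an unconstrained least squares fit; in practice, a nonnegativity-constrained solver would typically be used, as noted above:

```python
import numpy as np

# Two assumed absorber spectra sampled at five wavelengths (illustrative
# numbers only, not real hemoglobin extinction values).
e1 = np.array([0.6, 0.7, 0.8, 1.0, 1.2])   # stand-in for oxyhemoglobin
e2 = np.array([1.4, 1.2, 1.1, 1.0, 0.8])   # stand-in for deoxyhemoglobin
E = np.column_stack([e1, e2])              # N wavelengths x M absorbers

# Synthetic optoacoustic spectrum of one pixel: 70% absorber 1, 30% absorber 2
c_true = np.array([0.7, 0.3])
p0 = E @ c_true

# Per-pixel least squares fit of concentrations (the equation above, k = 1)
c_fit, *_ = np.linalg.lstsq(E, p0, rcond=None)
so2 = c_fit[0] / c_fit.sum()               # oxygen-saturation analogue
print("fitted concentrations:", c_fit, "saturation:", so2)
```

Once the concentrations are unmixed per pixel, a saturation map follows directly from their ratio, which is how hemoglobin oxygen saturation images are formed in MSOT.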

Blind Spectral Unmixing
Blind spectral unmixing (often referred to as blind source separation in other contexts) does not apply prior knowledge of the source spectra in the tissue, but rather attempts to determine these spectra by analyzing the information contained in the input images. The advantages of such approaches are that they do not require exact knowledge of all the potential absorbers in tissue and that they are more robust against corruption of the optoacoustic spectra by the wavelength dependence of the local fluence. A potential disadvantage is that such approaches are not suited to cases where accurate recovery of specific components, such as hemoglobin in its separate oxygenation states, is required. A number of algorithms have been investigated for blind spectral unmixing in MSOT. The most prominent examples are principal component analysis (PCA) and independent component analysis (ICA) [11]. PCA finds sources by computing the eigenvectors of the covariance matrix over all image pixels, which results in orthogonal sources. ICA, on the other hand, finds maximally statistically independent sources over all image pixels.
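A minimal PCA sketch following the covariance-eigenvector description above, applied to synthetic multispectral data built from two made-up spectra:

```python
import numpy as np

# Synthetic data: two random spatial component maps mixed with two assumed
# spectra at five wavelengths (illustrative values only).
rng = np.random.default_rng(0)
n_pixels = 1000
maps = rng.random((n_pixels, 2))                   # spatial components
spectra = np.array([[0.6, 0.7, 0.8, 1.0, 1.2],
                    [1.4, 1.2, 1.1, 1.0, 0.8]])    # 2 sources x 5 wavelengths
X = maps @ spectra                                 # pixels x wavelengths

# PCA: eigenvectors of the covariance matrix computed over all image pixels
Xc = X - X.mean(axis=0)                            # center per wavelength
cov = Xc.T @ Xc / (n_pixels - 1)                   # 5x5 covariance matrix
evals, evecs = np.linalg.eigh(cov)                 # ascending eigenvalues
components = evecs[:, ::-1]                        # principal axes first

# With only two underlying absorbers, two components capture the variance
explained = evals[::-1] / evals.sum()
print("variance explained by first two components:", explained[:2].sum())
```

Because the synthetic data contain exactly two underlying sources, the first two principal components explain essentially all of the variance; with real MSOT data, the number of significant components hints at the number of distinct absorbers present.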

Optoacoustic Image Contrast and Applications
Optoacoustic images are proportional to optical energy deposition, which in turn depends on the optical absorption coefficient μa and the fluence ϕ. The fluence varies smoothly in space, so sharp image features are dominated by the absorption coefficient. Any material that strongly absorbs light will produce contrast in optoacoustic images. Typical absorbers in tissue include hemoglobin, melanin, and lipids. Exogenous light-absorbing agents ranging from organic dyes to novel nanoparticles can be introduced into the tissue to provide additional contrast.

The most prominent absorber in optoacoustic images is hemoglobin. Blood vessels typically provide an order of magnitude more optical absorption than surrounding tissue because of hemoglobin absorption, even at near-infrared wavelengths where it absorbs light less strongly (see Fig. 1). For this reason, optoacoustic images commonly show blood vessels with high contrast. Hemoglobin displays different absorption spectra depending on whether it is oxygenated or not. MSOT can therefore be applied to distinguish between oxygenation states (Fig. 10, bottom). Hemoglobin oxygen saturation can then be computed on a per-pixel basis. MSOT thus provides functional imaging of the vasculature that can be applied preclinically and clinically to study tumor angiogenesis, tissue perfusion, tissue viability, and other related parameters.


Fig. 10 MSOT image showing multispectrally resolved fluorescent agent signal (green) following 6-h incubation. (Bottom) Multispectrally resolved oxyhemoglobin (red) and deoxyhemoglobin (blue) distribution within the tumor. (Inset) Photograph of a corresponding cryosection through the tumor

Melanin detection is primarily of interest in relation to melanoma. Lymph node screening for metastases by optoacoustic imaging has been investigated, as well as imaging of primary tumors by melanin contrast. Lipids are considered largely in connection with atherosclerotic plaques. Ongoing efforts are aimed at developing intravascular optoacoustic imaging systems, based on illumination by an optical fiber, to detect lipid accumulations. Optoacoustic detection of exogenous contrast agents is currently largely confined to preclinical studies, as the safety of the agents must be established in each case before use in humans. Organic dyes utilized in fluorescence imaging can be detected optoacoustically by means of their optical absorption, provided that they are present in the subject at sufficient concentrations. Such dyes are in widespread use for target-specific fluorescence imaging, and their availability therefore makes them attractive for optoacoustic imaging (Fig. 10, top). Nonfluorescent agents that absorb light are also considered for optoacoustic imaging, especially since they cannot be detected by fluorescence without the addition of fluorescent tags. Nanoparticles investigated in connection with drug delivery or for therapeutic effects, such as gold nanoparticles and carbon nanotubes, are the foremost example of such agents. There is therefore much research interest invested in the application of optoacoustic imaging to study the pharmacokinetic and biodistribution profiles of novel nanoparticles.


Summary
This chapter has introduced the principles of optical and optoacoustic imaging in the diffusive regime. Optical imaging and optoacoustic imaging are tied together by rich, biologically relevant optical contrast. They allow label-free functional imaging of hemoglobin oxygenation states and multiple further endogenous biomarkers, as well as a multitude of exogenous agents for molecular imaging and theranostics.

It has been noted several times in this chapter that much of the challenge in imaging optical contrast results from the strong scattering of photons in biological tissue. Light microscopy on or near the illuminated tissue surface is capable of diffraction-limited optical resolution, but deeper beneath the surface, in the so-called diffusive regime, photons have little remaining directionality, causing blurred images. Nevertheless, the tissue penetration of near-infrared light is exploited in photographic approaches, commonly in connection with fluorescence imaging. It is mainly because of photon scattering that such images are not accurately quantitative, especially with respect to blurred subsurface signals. In response to the need for accurate volumetric optical imaging using diffusive light, tomographic techniques have been developed, which allow fluorescence and endogenous tissue optical properties to be imaged, depending on the implementation. A separate development, optoacoustic imaging, allows a breakthrough in optical visualization by decoupling spatial resolution from photon scattering. Detection of optical absorption by means of ultrasound waves, which scatter far less than light, allows high spatial resolutions to be obtained with diffusive light. Overall, optical methods in the diffusive regime are advancing from biomedical research to investigations in the clinical setting.
While suitable exogenous agents for molecular imaging have yet to gain general clinical approval, the potential of optical and optoacoustic methods to safely and easily assess biomarkers drives widespread research interest.

References
1. Ntziachristos V (2010) Going deeper than microscopy: the optical imaging frontier in biology. Nat Methods 7(8):603–614. doi:10.1038/nmeth.1483
2. Contag CH, Bachmann MH (2002) Advances in in vivo bioluminescence imaging of gene expression. Annu Rev Biomed Eng 4:235–260
3. van Dam GM, Themelis G, Crane LM, Harlaar NJ, Pleijhuis RG, Kelder W, Sarantopoulos A, de Jong JS, Arts HJ, van der Zee AG, Bart J, Low PS, Ntziachristos V (2011) Intraoperative tumor-specific fluorescence imaging in ovarian cancer by folate receptor-α targeting: first in-human results. Nat Med 17(10):1315–1319. doi:10.1038/nm.2472
4. Jaffer FA, Calfon MA, Rosenthal A, Mallas G, Razansky RN, Mauskapf A, Weissleder R, Libby P, Ntziachristos V (2011) Two-dimensional intravascular near-infrared fluorescence molecular imaging of inflammation in atherosclerosis and stent-induced vascular injury. J Am Coll Cardiol 57(25):2516–2526. doi:10.1016/j.jacc.2011.02.036
5. Tsien RY (2009) Constructing and exploiting the fluorescent protein paintbox. Angew Chem Int Ed Engl 48(31):5612–5626. doi:10.1002/anie.200901916
6. Filonov GS, Piatkevich KD, Ting LM, Zhang J, Kim K, Verkhusha VV (2011) Bright and stable near-infrared fluorescent protein for in vivo imaging. Nat Biotechnol 29(8):757–761. doi:10.1038/nbt.1918


7. Ale A, Ermolayev V, Herzog E, Cohrs C, de Angelis MH, Ntziachristos V (2012) FMT-XCT: in vivo animal studies with hybrid fluorescence molecular tomography-X-ray computed tomography. Nat Methods 9(6):615–620. doi:10.1038/nmeth.201
8. Rosenthal A, Razansky D, Ntziachristos V (2010) Fast semi-analytical model-based acoustic inversion for quantitative optoacoustic tomography. IEEE Trans Med Imaging 29(6):1275–1285. doi:10.1109/TMI.2010.2044584
9. Xu M, Wang LV (2003) Analytic explanation of spatial resolution related to bandwidth and detector aperture size in thermoacoustic or photoacoustic reconstruction. Phys Rev E Stat Nonlin Soft Matter Phys 67(5 Pt 2):056605
10. Ntziachristos V, Razansky D (2010) Molecular imaging by means of multispectral optoacoustic tomography (MSOT). Chem Rev 110(5):2783–2794. doi:10.1021/cr9002566
11. Glatz J, Deliolanis NC, Buehler A, Razansky D, Ntziachristos V (2011) Blind source unmixing in multi-spectral optoacoustic tomography. Opt Express 19(4):3175–3184. doi:10.1364/OE.19.003175

9 Multifunctional Photoacoustic Tomography

Changho Lee, Sungjo Park, Jeesu Kim, and Chulhong Kim

Contents
Introduction .......................................................... 248
Principle of PAT ...................................................... 248
PAT Modalities ........................................................ 249
PAT for Morphology .................................................... 253
PAT for Physiological Functions ....................................... 257
Total Hemoglobin Concentration (HbT) and Hemoglobin Oxygen Saturation (SO2) ... 257
Photoacoustic Doppler Flowmetry ....................................... 258
Metabolic Rate of Oxygen Consumption .................................. 259
PAT for Molecular Imaging ............................................. 261
Organic Dyes .......................................................... 261
Metallic Nanostructures ............................................... 262
Organic Nanostructures ................................................ 262
Theranostic Agents .................................................... 264
Conclusion ............................................................ 265
References ............................................................ 265

Abstract

Photoacoustic tomography (PAT) is an emerging biomedical imaging modality that exploits the conversion of laser energy into sound waves in optically irradiated tissue. PAT has several advantages: (1) it is safe because it uses nonionizing radiation; (2) it overcomes the optical diffusion limit in optically scattering media and consequently achieves high-resolution imaging at depths greater than one optical transport mean free path (i.e., ~1 mm) in tissues; (3) it provides uniquely high contrast of optical absorption, unlike other optical imaging modalities, which are typically sensitive to optical scattering, polarization, and fluorescence; (4) it can be easily adapted to existing conventional ultrasound imaging scanners, so PAT systems are relatively cheap and portable; and (5) it can provide information about multiple physiological parameters, such as temperature, blood flow, total hemoglobin concentration, oxygen saturation of hemoglobin, metabolic rate, and the conversion efficiency between radiative and nonradiative energy decays. This chapter will cover (1) basic principles, (2) various imaging systems, (3) morphological PAT, (4) functional PAT, and (5) molecular PAT.

C. Lee • S. Park • J. Kim • C. Kim (*)
Departments of Electrical Engineering and Creative IT Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Gyeongsangbuk-do, Republic of Korea
e-mail: [email protected]

© Springer Science+Business Media Dordrecht 2017
A.H.-P. Ho et al. (eds.), Handbook of Photonics for Biomedical Engineering, DOI 10.1007/978-94-007-5052-4_30

Keywords

Photoacoustic microscopy • Functional imaging • Molecular imaging

Introduction

Principle of PAT

As an emerging biomedical imaging modality, photoacoustic tomography (PAT) utilizes the photoacoustic (PA) effect: when light shines on a sample, an acoustic wave can be generated [1]. Figure 1 illustrates the principle of PAT. When a short (e.g., nanosecond) pulsed laser irradiates target tissues that absorb light, thermoelastic expansion generates PA waves [2, 3]. After the PA waves pass through the medium, they are captured by an ultrasound (US) detector. The measured PA signal offers information about the optical absorption distribution of the tissue. The PA wave equation is

(∇² − (1/vs²) ∂²/∂t²) p(r, t) = −(β/cp) ∂H(r, t)/∂t,  (1)

where vs (~1,480 m/s in water) is the sound velocity, p(r, t) is the acoustic pressure [MPa], β is the coefficient of volumetric thermal expansion, cp is the specific heat at constant pressure, and H(r, t) is the thermal energy deposited by the injected light per unit volume and time. As shown in Eq. 1, a time-varying temperature rise generates acoustic pressure waves because the thermal energy term H appears under a derivative with respect to time. The initial PA pressure rise is given by

p0 = Γ ηth Ae = Γ ηth μa F,  (2)

where Γ = βvs²/cp is the Grüneisen parameter [unitless] and ηth is the fraction of the absorbed optical energy Ae converted into heat. Because Ae = μaF, where μa is the optical absorption coefficient and F is the optical fluence, the PA signal scales linearly with both the optical absorption coefficient and the laser fluence. When an unfocused US transducer is used


Fig. 1 Fundamental principles of PAT

to detect PA waves, mathematical reconstruction algorithms are required to form the cross-sectional PA image [4]. However, when a focused US transducer is used, the images can be built directly from the detected PA data by raster scanning [5]. PAT has several advantages for in vivo imaging. (1) PAT can penetrate up to ~8 cm into biological tissues [6]; this depth far exceeds the typical optical transport mean free path (i.e., ~1 mm). (2) Various molecules can be visualized photoacoustically by selecting an appropriate optical wavelength λo [7, 8]. (3) PAT provides scalable spatial resolution and imaging depth while keeping the depth-to-resolution ratio roughly constant [9, 10]. (4) Because it uses light as the energy source, PAT is nonionizing and therefore safe. (5) PAT images do not contain speckle artifacts, in contrast to optical coherence tomography (OCT) and US images [11]. (6) PAT can be easily combined with optical and US imaging systems [12–14].
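As a rough numerical illustration of Eqs. 1 and 2, the Grüneisen parameter and the initial pressure rise can be estimated from textbook values for water-based tissue; all of the numbers below (thermal expansion coefficient, absorption coefficient, fluence) are illustrative assumptions, not values taken from this chapter:

```python
# Illustrative estimate of the initial photoacoustic pressure rise (Eq. 2),
# p0 = Gamma * eta_th * mu_a * F, using textbook-typical values for
# water-based tissue (assumed, not from this chapter).

beta = 4.0e-4   # volumetric thermal expansion coefficient [1/K]
v_s = 1480.0    # speed of sound in water [m/s]
c_p = 4186.0    # specific heat at constant pressure [J/(kg*K)]

gamma = beta * v_s**2 / c_p   # Grueneisen parameter [unitless], ~0.21

eta_th = 1.0    # fraction of absorbed optical energy converted to heat
mu_a = 2.3e4    # optical absorption coefficient [1/m] (~230 1/cm, whole-blood-like)
F = 200.0       # optical fluence [J/m^2] (= 20 mJ/cm^2)

p0 = gamma * eta_th * mu_a * F   # initial pressure [Pa]
print(f"Grueneisen parameter: {gamma:.2f}")
print(f"Initial PA pressure: {p0 / 1e6:.2f} MPa")
```

With these assumed values the initial pressure comes out near 1 MPa, consistent with the MPa units quoted for p(r, t) above.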

PAT Modalities

Photoacoustic Microscopy (PAM)

Acoustic Resolution Photoacoustic Microscopy (AR-PAM)
Optical focusing cannot be maintained at depth because of strong light scattering in tissue. Thus, both the lateral and axial resolutions of AR-PAM depend on the parameters of the US transducer (e.g., center frequency and frequency bandwidth). AR-PAM is generally realized in reflection mode. A dark-field confocal AR-PAM system (Fig. 2) provides good penetration depth and high SNR. It delivers a laser beam to a spherical conical lens, which shapes the beam into a ring-shaped pattern. This ring-shaped laser beam is reflected by a custom-made optical condenser and focused in water, where the focused beam is coaxially aligned with the US transducer. Samples (i.e., biological tissues) are positioned below a water tank whose bottom opening is covered by a transparent membrane. The generated PA

Fig. 2 Schematic and photograph of the dark-field acoustic resolution photoacoustic microscopy system (Reprinted with permission from Ref. [17])

wave is captured by a single-element US transducer through US gel, which increases acoustic coupling between the membrane and the sample. AR-PAM has an acoustic lateral resolution (RL,AR) of [15]

RL,AR = 0.71 λA / NAA = 0.71 νs / (NAA · fA),  (3)

where NAA is the acoustic numerical aperture, λA is the acoustic center wavelength, and νs and fA are the sound velocity and the center ultrasound frequency, respectively. Increasing the center frequency fA of a US transducer (i.e., decreasing λA) refines its lateral resolution but also increases the acoustic attenuation rate. Following Eq. 3, the lateral and axial resolutions of an AR-PAM system with a 5-MHz US transducer are 590 μm and 150 μm, respectively, and the maximum penetration depth reaches ~30 mm at this frequency. Likewise, the lateral and axial resolutions of AR-PAM with a 50-MHz US transducer are 45 μm and 15 μm, respectively, with a maximum penetration depth of ~3 mm. The acoustic attenuation in tissues is reported as ~0.6 dB/(cm·MHz) [16]. Images can be assembled by combining information from different scans: analysis of the arrival time of the PA signals generated in a sample provides one depth-resolved image, raster scanning in one transverse direction yields 2D image data, and further scanning in the direction perpendicular to the 2D plane generates 3D images.

Optical Resolution Photoacoustic Microscopy (OR-PAM)
Enhancing the US-defined spatial resolution to several micrometers reduces the imaging penetration depth, because US attenuation in tissue (~0.6 dB/(cm·MHz)) grows with frequency. Optical resolution PAM (OR-PAM) is an alternative approach that exploits the fact that if the focal spot of the illuminating light in a sample is smaller than the acoustic focal spot of the US transducer, then the optical focal spot size determines the lateral resolution. Typically, the optical focal beam diameter is approximately ten times smaller than the acoustic focal spot size. Optical focusing can offer microscale lateral resolution, whereas the axial resolution remains constrained by the US bandwidth. OR-PAM has an optical lateral resolution (RL,OR) of

RL,OR = 0.51 λo / NAo,  (4)

where NAo and λo are the optical numerical aperture and the optical wavelength, respectively. As shown in Eq. 4, the optical focal spot size sets RL,OR. In this mode, dominant optical scattering limits the penetration depth in biological tissues, as in other purely optical microscopic techniques. For 3D imaging, OR-PAM requires 2D raster scanning, so its scanning method (e.g., mechanical or optical scanning) determines the image acquisition speed; the pulsed laser repetition rate also influences the imaging speed. OR-PAM can be classified into transmission mode (Fig. 3a) or reflection mode (Fig. 3b), depending on the application [4, 5]. In transmission mode, the sample is located between the focused light and the PA wave detector; in reflection mode, the sample is placed below both the focused light and the PA wave detector. To increase the imaging speed of the system, optical scanning (Fig. 3c) is used (e.g., a 2D microelectromechanical systems (MEMS) mirror scanner or a 2D galvo mirror scanner). Optical scanning systems achieve 100–1,200 times faster imaging speed than mechanical raster scanning [15].
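The scaling of Eqs. 3 and 4 can be compared numerically. A small sketch follows, assuming illustrative numerical apertures (the chapter does not specify the NA values of the systems quoted above):

```python
# Compare AR-PAM (Eq. 3, acoustically defined) and OR-PAM (Eq. 4, optically
# defined) lateral resolutions. The NA values are illustrative assumptions.

V_SOUND = 1480.0  # speed of sound in soft tissue/water [m/s]

def lateral_resolution_ar(f_a_hz: float, na_acoustic: float) -> float:
    """Eq. 3: R = 0.71 * lambda_A / NA_A, with lambda_A = v_s / f_A. Returns meters."""
    wavelength_acoustic = V_SOUND / f_a_hz
    return 0.71 * wavelength_acoustic / na_acoustic

def lateral_resolution_or(wavelength_optical_m: float, na_optical: float) -> float:
    """Eq. 4: R = 0.51 * lambda_o / NA_o. Returns meters."""
    return 0.51 * wavelength_optical_m / na_optical

# AR-PAM with a 50-MHz transducer and an assumed acoustic NA of 0.44
r_ar = lateral_resolution_ar(50e6, 0.44)
# OR-PAM at 532 nm with an assumed optical NA of 0.1
r_or = lateral_resolution_or(532e-9, 0.1)

print(f"AR-PAM lateral resolution: {r_ar * 1e6:.1f} um")  # tens of micrometers
print(f"OR-PAM lateral resolution: {r_or * 1e6:.2f} um")  # a few micrometers
```

With these assumptions the acoustic resolution lands in the tens of micrometers (close to the 45 μm quoted for a 50-MHz AR-PAM), while even a modest optical NA yields a few micrometers, illustrating why OR-PAM trades penetration depth for resolution.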


Fig. 3 Schematic of OR-PAM systems in (a) transmission mode and (b) reflection mode. (c) Schematic of a MEMS scanner-based OR-PAM system (Reprinted with permission from Refs. [18–20])

Photoacoustic Computed Tomography (PACT)
A PACT system with circular scanning using a single US transducer has been implemented for brain PA imaging in vivo [21–25]. This system requires mechanical scanning time; to increase the data acquisition rate, full-ring or curved US transducer arrays have been used (Fig. 4) [26]. The imaging speed was increased up to a 0.9-Hz frame rate for 2D images using a pulsed laser with a 10-Hz repetition rate. Further, real-time dynamic PACT has been investigated by applying several image reconstruction methods [4, 27–29]. PACT systems can also use a curved array transducer (Fig. 5a) [30, 31]. Single-element transducers are arranged on one shell to form a spherical concave array transducer. This structure provides cylindrical focusing of the array to generate cross-sectional PA images. Recent approaches use parallel detection from each element to acquire high-resolution images at video frame rates (Fig. 5b) [32–34]. Real-time imaging enables visualization of dynamic processes such as hemodynamic changes and the distribution of contrast agents [35, 36]. PACT that uses a linear array of US transducers has been used in preclinical and clinical settings [37–41]. The program sequence of a clinical US system is modified to acquire raw PA signal data; the PA image is then reconstructed using receive beamforming. The image acquisition rate (i.e., 10 Hz) is limited by the laser


Fig. 4 (a) Schematic of the full-ring array PACT system. (b) Cortical vasculature of a mouse acquired from the full-ring array PACT system (Reprinted with permission from Ref. [26])

Fig. 5 (a) Photograph of the curved array transducer probe. (b) Schematic diagram of the PACT system with a clinical handheld curved array transducer (Reprinted with permission from Refs. [33, 34])

repetition rate; therefore, PA images were obtained at 1 Hz. To generate the PA signal, the pulsed laser system can be integrated with a clinical US machine, for example, as a handheld probe (Fig. 6a) that combines a US transducer with a bifurcated fiber bundle for laser delivery, or as a PA mammography system (Fig. 6b, c).
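In its simplest form, the receive beamforming mentioned above is delay-and-sum: each pixel value is the sum of channel samples delayed by the one-way acoustic time of flight from that pixel to each array element (one-way, because in PA imaging the absorber itself is the acoustic source). Below is a minimal sketch with a synthetic point source; the array geometry and sampling rate are illustrative assumptions, not the parameters of any system described in this chapter:

```python
import numpy as np

def delay_and_sum(rf_data, elem_x, pixels_x, pixels_z, fs, c=1480.0):
    """Minimal delay-and-sum PA reconstruction for a linear array.

    rf_data : (n_elements, n_samples) channel data, t = 0 at the laser pulse
    elem_x  : (n_elements,) lateral element positions [m]
    pixels_x, pixels_z : 1D pixel coordinate grids [m]
    fs      : sampling rate [Hz]; c : speed of sound [m/s]
    """
    n_elem, n_samp = rf_data.shape
    image = np.zeros((len(pixels_z), len(pixels_x)))
    for iz, z in enumerate(pixels_z):
        for ix, x in enumerate(pixels_x):
            # one-way time of flight from this pixel to each element
            dist = np.sqrt((elem_x - x) ** 2 + z ** 2)
            idx = np.round(dist / c * fs).astype(int)
            valid = idx < n_samp
            image[iz, ix] = rf_data[np.arange(n_elem)[valid], idx[valid]].sum()
    return image

# Synthetic check: a point absorber at (x, z) = (0, 10 mm) under a 64-element array
fs, c = 40e6, 1480.0
elem_x = (np.arange(64) - 31.5) * 0.3e-3          # 0.3-mm pitch, centered at x = 0
rf = np.zeros((64, 2000))
t_arrive = np.sqrt(elem_x**2 + (10e-3) ** 2) / c  # arrival time at each element
rf[np.arange(64), np.round(t_arrive * fs).astype(int)] = 1.0

zs = np.linspace(8e-3, 12e-3, 21)
xs = np.linspace(-2e-3, 2e-3, 21)
img = delay_and_sum(rf, elem_x, xs, zs, fs, c)
iz, ix = np.unravel_index(np.argmax(img), img.shape)
print(f"peak at z = {zs[iz] * 1e3:.1f} mm, x = {xs[ix] * 1e3:.1f} mm")
```

The reconstructed peak lands at the true source position (z = 10 mm, x = 0), because only at that pixel do the delays from all 64 elements line up with the recorded impulses.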

PAT for Morphology

PAT at an appropriate λo can provide anatomical information about intrinsic chromophores in biological tissues. Oxyhemoglobin (HbO2), deoxyhemoglobin (HbR), fat, melanin, and water are regarded as the main optical absorbers in tissues; these


Fig. 6 (a) Photograph of a linear-array-based PA and US imaging system adapted to a commercial clinical US system. (b) Photograph of the PA mammography system. (c) Schematic diagram of the PA mammography system (Reprinted with permission from Refs. [38, 41])

absorbers have distinct absorption spectra (Fig. 7) [42]. Oxy- and deoxyhemoglobin and melanin are the prominent absorbers at visible wavelengths, whereas fat and water are the dominant absorbers in the near infrared (NIR). PAT is well suited to 3D morphological studies that image microvasculature without any exogenous contrast agent. By exploiting several intrinsic contrasts, PAT can provide in vivo multiplane spectroscopic whole-body PA images of the internal organs of small animals, as shown in Fig. 8 [43]. The process involves acquiring a series of sagittal PA images (Fig. 8a–d) and depth-encoded PA images (Fig. 8e–h) at λo = 532, 700, 850, or 1,064 nm. Under 532-nm laser excitation, the subsurface and main blood vessels are clearly visible, but the spleen is invisible. Under 700-nm and 850-nm laser excitation, the subsurface blood vessels are invisible, but the main blood vessels, spleen, and cecum are easily observed. Under 1,064-nm laser excitation, the main blood vessels emit a weaker PA signal than at the other wavelengths tested. These results demonstrate the feasibility of using PAT in tumor research, cancer diagnosis, and theranostic monitoring. Single cells can also be imaged by PAT. Compared to a phase-contrast optical image of a single melanoma cell (B16-F1) (Fig. 9a), the PAM image at 532 nm has lower resolution but shows strong PA signals in the area that contains melanin (Fig. 9b). Because melanin absorbs light strongly at λo = 532 nm, this is an appropriate wavelength to image a melanoma cell [44]. Both images provide similar


Fig. 7 Total absorption coefficient of water, blood at 75% oxygen saturation, fat, and melanin (Reprinted with permission from Ref. [42])

Fig. 8 Noninvasive whole-body left sagittal PA imaging of a mouse in vivo. (a–d) Images processed along the z axis and acquired at optical wavelengths of 532, 700, 850, and 1,064 nm. (e–h) Depth-encoded images of (a–d), respectively. (i) Close-up image of (c). (j) Commercial mouse anatomy illustration (Biosphera). H, head; T, tail; 1, descending aorta; 2, kidney; 3, spleen; 4, intercostal vessels; 5, cranial mesenteric vessels; 6, femoral vessels; 7, cephalic vessels; 8, brachial vessels; 9, liver; 10, cecum; 11, lateral marginal vessels; 12, popliteal vessels; and 13, mammalian vessels (Reprinted with permission from Ref. [43])


Fig. 9 Images of a single fixed B16-F1 melanoma cell by (a) optical bright-field microscopy and (b) photoacoustic microscopy at λo = 532 nm. IVPA images of a human atherosclerotic plaque at (c) 1,210 nm and (d) 1,230 nm. Arrows: needle for marking (Reprinted with permission from Refs. [44, 46])

information about the melanin distribution. To show the melanoma and its surrounding vasculature in a PA image, λo = 764 nm and λo = 584 nm have been used [45]. PAT can also visualize lipids. Lipid-rich plaques in the aorta can cause occlusive thrombi, leading to heart attack. Lipid has a high absorption peak at λo = 1,210 nm; this trait can be exploited to obtain intravascular photoacoustic (IVPA) images of a human atherosclerotic plaque [46]. Because of this absorption peak, a strong PA signal is captured in the IVPA image at λo = 1,210 nm (Fig. 9c), which reveals the deep layer of the plaque and the periadventitial fat; in contrast, reduced PA signals are acquired at λo = 1,230 nm (Fig. 9d), consistent with the absorption spectrum of lipid. Combining spectroscopic PAT and US imaging can provide more comprehensive information about disease than either method alone [47].


PAT for Physiological Functions

PAT can measure various physiological parameters, such as total hemoglobin concentration (HbT), hemoglobin oxygen saturation (SO2), blood velocity, and the metabolic rate of oxygen consumption (MRO2). These functional parameters can provide comprehensive understanding of diseases and thus aid their diagnosis and treatment.

Total Hemoglobin Concentration (HbT) and Hemoglobin Oxygen Saturation (SO2)

HbT and SO2 play significant roles in the biomedical field, for example, in imaging brain activation, monitoring the wound healing process, studying tumor physiopathology, and studying gene expression [48–50]. For hemoglobin, HbT and SO2 are commonly used indexes of blood perfusion and oxygenation, respectively [15]. PAT can acquire images of the two forms of hemoglobin (HbR and HbO2) because they are dominant absorbers in tissues and have different molar extinction spectra (Fig. 10). The concentration of each form of hemoglobin can be calculated from the optical absorption detected at multiple optical wavelengths. The blood absorption coefficient μa(λi) can be written as

μa(λi) = εOX(λi)·COX + εde(λi)·Cde,  (5)

where μa (cm⁻¹) is the absorption coefficient and λi (i = 1, 2) denotes the optical wavelengths used to obtain the PA images. The molar extinction coefficients (cm⁻¹ M⁻¹) of HbO2 and HbR are εOX and εde, respectively, and COX and Cde are the concentrations (mM) of HbO2 and HbR, respectively. SO2 and HbT are defined as

Fig. 10 Molar extinction profiles of oxy- and deoxyhemoglobin (Reprinted with permission from Ref. [52])


Fig. 11 In vivo PA MAP images of (a) HbT and (b) SO2 in a mouse ear acquired by an OR-PAM system. Two optical wavelengths of λo = 532 and 559 nm are used (Reprinted with permission from Ref. [54])

SO2 = COX / (COX + Cde)  (6)

and

CHbT = COX + Cde.  (7)

μa can be replaced with the measured PA amplitude because the localized PA amplitude has a linear relationship with the local optical energy deposition. The PA contrast is sensitive only to HbT at isosbestic optical wavelengths (e.g., 498, 568, and 794 nm; Fig. 10), at which the molar extinction coefficients of HbO2 and HbR are the same [51]. HbT and SO2 are very important indicators in cancer detection. In particular, increased HbT (angiogenesis) and decreased SO2 (hypoxia) are both hallmarks of late-stage cancers, whereas hyperoxia is associated with early-stage cancers [53]. Total hemoglobin and SO2 over a 5 × 10 mm² field of view in the ear of a mouse have been acquired by OR-PAM at λo = 532 and 559 nm (Fig. 11a, b).
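At two wavelengths, Eq. 5 is a 2 × 2 linear system per pixel, and Eqs. 6 and 7 follow from its solution. A minimal sketch follows; the extinction coefficients below are placeholders, not tabulated hemoglobin values:

```python
import numpy as np

def unmix_so2(pa1, pa2, eps_ox, eps_de):
    """Solve Eq. 5 at two wavelengths for C_OX and C_de, then apply Eqs. 6-7.

    pa1, pa2 : PA amplitudes at wavelengths 1 and 2 (proportional to mu_a)
    eps_ox, eps_de : length-2 arrays of molar extinction coefficients
    Returns (SO2, HbT) in the same (relative) concentration units.
    """
    E = np.array([[eps_ox[0], eps_de[0]],
                  [eps_ox[1], eps_de[1]]])
    c_ox, c_de = np.linalg.solve(E, np.array([pa1, pa2]))
    so2 = c_ox / (c_ox + c_de)   # Eq. 6
    hbt = c_ox + c_de            # Eq. 7
    return so2, hbt

# Round trip: synthesize PA amplitudes for a known SO2 of 0.8, then recover it.
eps_ox = np.array([1.2, 0.6])    # placeholder extinction values
eps_de = np.array([0.9, 1.5])
c_ox_true, c_de_true = 0.8, 0.2  # SO2 = 0.8, HbT = 1.0
pa = eps_ox * c_ox_true + eps_de * c_de_true
so2, hbt = unmix_so2(pa[0], pa[1], eps_ox, eps_de)
print(f"Recovered SO2 = {so2:.2f}, HbT = {hbt:.2f}")  # 0.80, 1.00
```

This also makes the isosbestic-point remark concrete: if eps_ox and eps_de were equal at a wavelength, the corresponding row of the matrix would constrain only the sum COX + Cde, i.e., HbT.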

Photoacoustic Doppler Flowmetry

Measurement of blood flow is a major interest in functional PAT. Various PAM-based methods (e.g., Doppler shift, time-domain autocorrelation, and frequency-domain bandwidth broadening) have been explored for this purpose [55–59]. In vivo


Fig. 12 In vivo PA images of three different shaped blood vessels (i.e., (a) loop, (b) straight, and (c) bifurcation). The black crosses indicate the measurement locations. The red arrow indicates flow direction. (d–f) PA profiles over slow time from Av and Bv in (a–c), respectively (Reprinted with permission from Ref. [59])

Table 1 Photoacoustic flow velocity measurements in different shaped vessels (Fig. 12) (Reprinted with permission from Ref. [59])

Corresponding figures    d (μm)    Δt (ms)    v (mm/s)
(a) and (d)              7.8       2.3        3.4
(b) and (e)              3.2       13.9       0.23
(c) and (f)              5.8       55.1       0.11

blood flow velocities in a nude mouse ear were measured by OR-PAM (Fig. 12). Blood vessel networks of different shapes (i.e., loop, straight, and bifurcation) were investigated using an OR-PAM system to analyze the blood flow velocity in the targeted vessels (Fig. 12a–c). The flow velocity v at two close points (Av, Bv) in each blood vessel (Fig. 12d–f) was calculated from the distance d from Av to Bv and the elapsed time Δt (Table 1).
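The velocities in Table 1 follow directly from v = d/Δt, which can be checked numerically:

```python
# Verify the flow velocities in Table 1 from v = d / delta_t.
rows = [  # (label, d in micrometers, delta_t in milliseconds)
    ("(a)/(d)", 7.8, 2.3),
    ("(b)/(e)", 3.2, 13.9),
    ("(c)/(f)", 5.8, 55.1),
]
for label, d_um, dt_ms in rows:
    v_mm_per_s = (d_um * 1e-3) / (dt_ms * 1e-3)  # um/ms equals mm/s
    print(f"{label}: v = {v_mm_per_s:.2f} mm/s")
```

Note that micrometers per millisecond are numerically equal to millimeters per second, so the ratio reproduces the table's velocity column (3.4, 0.23, and 0.11 mm/s) to the quoted precision.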

Metabolic Rate of Oxygen Consumption

MRO2 is a principal quantity in pathophysiological investigations and in the diagnosis and treatment of several diseases. MRO2 represents the oxygen consumption rate of tissues directly, as opposed to other oxygenation indexes (i.e., hemoglobin oxygen saturation (SO2) and partial oxygen pressure (PO2)), which measure it indirectly [60]. If the feeding (in) and draining (out) vessels are well defined in the region of interest (ROI), then


Fig. 13 Representative in vivo PAM images of a mouse ear showing (a) HbT, (b) SO2 in a dotted box, and (c) blood flow. (d) Depth-encoding PAM image of the mouse ear containing tumor. (e) Top: MRO2 variance due to tumor growth of (d). Bottom: averaged SO2 between inside and outside the tumor (Reprinted with permission from Ref. [61])

MRO2 = ε·CHb·(SO2,in·Ain·Vin − SO2,out·Aout·Vout)/W,  (8)

where ε is the oxygen-binding capacity of hemoglobin (typically 1.36 ml O2 per gram of hemoglobin), CHb (g/ml blood) is the total hemoglobin concentration (HbT), SO2 (%) is the hemoglobin oxygen saturation, A (mm²) is the cross-sectional area of the blood vessels, and V (mm/s) and W (g) are the averaged blood flow velocity and the weight of the region of interest, respectively [51]. Reflection-mode OR-PAM has been used to noninvasively measure these parameters and quantify MRO2 in a mouse ear under normothermia. The input and output vessels were selected as one artery-vein pair (AVP) (Fig. 13a, dotted area), which feeds the entire ear. The concentration of total hemoglobin (Fig. 13a) in the mouse ear was measured at λo = 584 nm,


Fig. 14 PA images of sentinel lymph node of a rat. (a) Before injection, (b) 68 min and (c) 251 min after Cu2-xSe injection (Reprinted with permission from Ref. [68])

because this is an isosbestic wavelength of hemoglobin. The anatomical parameters A and W were quantified by volumetric PA imaging at λo = 584 nm. The measured diameters of the artery and vein were ~65 and ~116 μm, respectively. The average specific weight of the ROI was assumed to be 1.0 g/ml. To calculate SO2, two PA images were acquired, one at λo = 584 nm and one at λo = 590 nm (Fig. 13b). The SO2 values were high (>90%) in the artery and low (60–80%) in the vein. V was measured using PA Doppler bandwidth broadening (Fig. 13c). The average blood flow speeds in the artery and vein were 5.5 and 1.8 mm/s, respectively. MRO2 measurements obtained using OR-PAM can be used in early cancer detection (Fig. 13d, e).
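Equation 8 can be evaluated with the vessel diameters and flow speeds quoted above; note that the SO2 values (chosen within the stated ranges), the hemoglobin concentration CHb, and the ROI weight W below are illustrative assumptions, not measured values from this study:

```python
import math

# Illustrative evaluation of Eq. 8 for a mouse-ear artery-vein pair.
# Vessel diameters and flow speeds come from the text; SO2 values (picked
# within the stated ranges), C_Hb, and W are assumptions for illustration.

eps = 1.36    # oxygen-binding capacity [ml O2 / g hemoglobin]
c_hb = 0.15   # total hemoglobin concentration [g / ml blood] (assumed)
w_roi = 0.1   # weight of the region of interest [g] (assumed)

def cross_section_mm2(diameter_um):
    r_mm = diameter_um * 1e-3 / 2
    return math.pi * r_mm**2

a_in = cross_section_mm2(65)     # artery, ~65 um diameter
a_out = cross_section_mm2(116)   # vein, ~116 um diameter
so2_in, v_in = 0.95, 5.5         # artery: SO2 (assumed, >90% per text), speed [mm/s]
so2_out, v_out = 0.70, 1.8       # vein: SO2 (assumed, 60-80% per text), speed [mm/s]

# Eq. 8; the flux term A*V is in mm^3/s, converted to ml/s (1 ml = 1000 mm^3)
flux_ml_per_s = (so2_in * a_in * v_in - so2_out * a_out * v_out) * 1e-3
mro2 = eps * c_hb * flux_ml_per_s / w_roi   # [ml O2 / (s * g)]
print(f"MRO2 ~ {mro2 * 60:.2e} ml O2 / (min * g)")
```

The sign of the flux term is the physically important check: the oxygen carried in by the artery must exceed what the vein carries out, and the difference is what the tissue consumes.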

PAT for Molecular Imaging

Differences in the intrinsic optical contrasts of substances (i.e., hemoglobin, lipid, melanin, and water) can provide contrast in PAT images without injection of any exogenous material. However, the imaging depth achievable with these intrinsic contrasts is shallow because their optical absorption peaks lie in the visible range, where the penetration depth is significantly reduced by both scattering and strong optical absorption. To increase the imaging depth, several kinds of exogenous contrast agents, such as organic dyes, metallic nanostructures, and organic nanostructures, are used in PAT to improve its ability to detect these chemicals in deep tissues [62]. Further, for biological tissues that lack intrinsic optical absorption contrast, exogenous agents can increase the sensitivity, specificity, and contrast of PAT.

Organic Dyes

Organic dyes have been used for PAT in many biomedical applications. Organic dyes such as methylene blue (MB), lymphazurin blue (LB, isosulfan blue), and


indocyanine green (ICG) have already acquired FDA approval for use in humans. MB and LB have been used for biomedical imaging [37–39, 63]. MB is used widely in biology, chemistry, and medicine; in particular, it is used to detect sentinel lymph nodes (SLNs) when determining metastasis of breast cancer. Although LB is the only dye approved by the FDA for SLN identification during breast surgery [64], MB is more suitable than LB for SLN biopsy because MB is widely available and inexpensive; moreover, LB causes several side effects, such as hypersensitivity reactions and skin staining. ICG is nontoxic and water soluble and has its principal optical absorption in the NIR region. ICG penetrates ~5 mm into biological tissue [65]. ICG has received FDA approval for diagnosing abnormal human cardiac, hepatic, and ophthalmic blood flow and function. In addition, ICG can also be used as a dual-modal imaging agent for PAT and fluorescence imaging owing to its moderate fluorescence quantum yield (e.g., ~10% in dimethyl sulfoxide and

17 Surface Plasmon, Surface Wave, and Enhanced Evanescent Wave Microscopy

M.G. Somekh and S. Pechprasarn

sin θc = nw / ng,  (1)

Fig. 1 Schematic of the layered structure representing the total system for both total internal reflection and surface plasmon excitation; θ is incident at an angle greater than the critical angle. Absence of the layer ni corresponds to simple total internal reflectance, whereas the properties of ni can correspond to a series of layers

can be metal, dielectric waveguides, or analyte. Moreover, the critical angle is independent of the polarization state of the input radiation. Of course, one of the main themes in this chapter is the crucial effect of the intermediate layers and input polarization on the response to the incoming light field. There is, however, one other property that depends only on the indices of the upper and lower media, namely, the penetration depth of the wave in the final medium. When a wave is incident, from ng, beyond the critical angle, the light in the low-index medium, nw, is evanescent, whose nontechnical meaning is fading or disappearing, and this is exactly what happens with an evanescent wave in a total internal reflection system, since the wave is non-propagating, with an imaginary Poynting vector, decaying with a characteristic decay length. Let us make this penetration depth concept a little more definite and quantitative. Consider the incident wave in the high-index medium; this leads to a wave whose k-vector, kg, is given by 2πng/λ, where λ is the free-space wavelength; a similar definition can be given for kw, replacing ng with nw. The wavevector in the x-direction shown in Fig. 1a is kx (= kg sinθ). This must be continuous in both media, so that as the incident angle increases, the value of kx becomes larger than the value of kw, which means the z-component of the k-vector in the lower medium becomes imaginary. This is shown by Eq. 2 below:

kw² = kx² + kzw² = kg² sin²θ + kzw².  (2)

We can immediately see that kzw² becomes negative; kzw becomes imaginary, so that rather than a propagating wave we have a decaying (or growing) wave. In essence, an evanescent wave is formed when the wave is squeezed to a dimension smaller than its wavelength in one direction, so that it is forced to become evanescent in an orthogonal direction. We can further see that as the incident angle is increased beyond the critical angle, the magnitude of kzw² increases, so the decay is more rapid. Defining the penetration depth as the distance over which the field decays to 1/e of its initial value, we obtain


zp = 1 / √(kg² sin²θ − kw²).  (3)

It is very common in the literature to see expressions like "the penetration depth of the evanescent wave is 250 nm"; this is meaningless unless we know which definition of penetration depth is used. Small values usually mean the 1/e intensity decay and larger (double) values mean 1/e field decay, but this is only an indication when the authors have not been sufficiently rigorous in their definitions! Figure 2 shows a graph of penetration depth versus incident angle for different systems, defined as the distance for the intensity (field) to decay to 1/e² (1/e) of its original value; that is the definition in Eq. 3. We note that in all cases the penetration depth is infinite at the critical angle, as the wave at this angle can be considered either as a wave propagating along the interface or as an evanescent wave with infinite penetration depth. For the case of a glass/water interface about 3° above the critical angle, the penetration depth is around 180 nm, decreasing to c. 70 nm when the incident angle is 90°. This has very important consequences in sensing and microscopy. This distance means that features close to the surface interact more strongly with the penetrating field. For imaging this means that surface features such as cell attachment are imaged preferentially, giving, in many cases, very high-contrast images. The penetration depth is such that the distal side of typical cells is not visible; on the other hand, the penetration depth is considerably greater than the thickness of the cell membrane, even allowing for the fact that cells are not always in tight contact, possibly a few tens of nm from the surface, so the evanescent wave will


Fig. 2 Plot of penetration depth (defined as the distance for the field to decay to 1/e of its value at the interface) versus incident angle. Red curve: penetration depth at the glass (n = 1.52)/air interface. Blue curve: penetration depth at the glass/water (n = 1.33) interface

17

Surface Plasmon, Surface Wave, and Enhanced Evanescent Wave Microscopy

507

penetrate through into the cytosol. Similarly, the penetration depth is much greater than the extent of most macromolecules, so a single monolayer will only interact with a small portion of the evanescent wave field; for this reason some manufacturers of surface plasmon sensors, such as Biacore, use a sugar molecule, dextran, attached to the surface, so that there are multiple attachment points, via different groups such as –SH, –NH2, and –COOH, for the target molecule within the evanescent field; this greatly enhances the observed signal change. In essence, then, the limited penetration depth of evanescent waves is extremely useful for detection of features located close to the surface; there are, however, cases where even shorter penetration depths would be particularly useful in order to have stronger interaction with small features or to look at conformational changes. This issue will be discussed later in the chapter. Let us now continue to look at the simple case depicted in Fig. 1 and consider two waves with identical incident angle but different polarization state. Figure 3 shows the field strength as a function of distance from the interface. We note that when the incident angle is close to the critical angle (61.18°), the wave decays very slowly; we can also see that the TM wave is stronger by a factor of ng/nw, due to the different boundary conditions that need to be satisfied. For a considerably greater incident angle of 70.85° (the significance of this angle will become apparent shortly), the decay length of the evanescent field is approximately 187 nm, and here we see that both transmitted fields at the interface, although still greater than the incident field, are now considerably smaller than the values close to the critical angle; we also note that the TE polarization gives rise to a slightly larger field than the TM polarization.
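Equation 3 is easy to evaluate numerically. The short sketch below (plain Python; the function name and the choice of nm units are mine, the indices and wavelength are those used in this chapter) reproduces the roughly 187 nm 1/e field decay length quoted above for the glass/water interface at 70.85° and 633 nm.

```python
import math

def penetration_depth(theta_deg, n_inc=1.52, n_trans=1.33, wavelength_nm=633.0):
    """1/e field penetration depth of the evanescent wave, per Eq. 3.

    theta_deg must exceed the critical angle asin(n_trans / n_inc)."""
    k0 = 2.0 * math.pi / wavelength_nm                       # vacuum wavenumber (1/nm)
    kx = n_inc * k0 * math.sin(math.radians(theta_deg))      # in-plane wavevector
    kt = n_trans * k0                                        # wavenumber in final medium
    if kx <= kt:
        raise ValueError("below the critical angle: no evanescent wave")
    return 1.0 / math.sqrt(kx ** 2 - kt ** 2)                # nm

# glass/water at 70.85 degrees: approximately 187 nm, as quoted in the text
zp = penetration_depth(70.85)
```

Note that the depth keeps shrinking as the incident angle approaches grazing incidence, in line with Fig. 2.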


Fig. 3 Decay of field at glass/water interface for different incident angles: solid curves, angle slightly (3.3 × 10⁻³ degrees) greater than the critical angle, 61.185° (hence very slow decay); dashed curves, 70.85°. Red curves TE polarization, green curves TM polarization. Field values are relative to the incident field. Wavelength = 633 nm


If we now insert a thin gold layer, 47 nm thick (layer n1 of Fig. 1), between the glass and the water, dramatic changes occur, as shown in Fig. 4. This figure again shows the effect of TE and TM polarized light incident at 61.19° and at 70.85°. We see that the decay rate of the fields in the water depends on the incident angle only; however, the field profiles and their magnitudes depend strongly on the polarization state. The interface between the gold layer and the water is shown by the black vertical line; for TE polarization (shown in Fig. 4a), for both incident angles, the light field decays continuously to a small value at the gold/water interface (since the forward and back waves are shown separately, it is their sum that is continuous). For the TM polarization, on the other hand, more striking things happen close to the critical angle; the field in the metal decays to a low value; however, the evanescent field in the water is greater than for the TE case. It is at the larger incident angle, however, that the dramatic effects occur; the field increases within the gold layer and is strongly enhanced at the water interface. This behavior is due to the presence of surface plasmons (SPs); from a purely phenomenological


Fig. 4 Decay of field for different incident angles in the presence of a 47 nm gold layer (index = 0.171 + 3.516i). The gold/water interface is depicted by the vertical black line. Curves are shown for an incident angle slightly (3.3 × 10⁻³ degrees) greater than the critical angle, 61.185° (hence very slow decay), and for an incident angle of 70.85°. (a) TE polarization, solid curve "forward" wave, dashed curve "back" wave; note that the total fields at the interface are continuous for TE polarization. (b) TM polarization case (note scale change on y-axis); fields are not continuous at the interface here. Field values are relative to the incident field. Wavelength = 633 nm




point of view, this is a direct consequence of the negative real part of the permittivity, which at 633 nm is approximately −12.33 + 1.2i [2], corresponding to a refractive index of 0.1721 + 3.5156i. From a more physical view, the SP effect can be regarded as arising from collective oscillations of electrons, whose movement is in antiphase with the driving field over a range of incident angles and wavelengths [3]. Whatever viewpoint one takes, SPs possess a rich variety of properties that make them extremely useful in sensing and microscopy; however, their properties also lead to particular challenges in microscopy and localized measurement applications. Rather than look at the transmitted field, we now look at the reflection coefficient of the structure depicted in Fig. 1. Here the response for TE polarization is shown in Fig. 5, and we see that nothing particularly eye-catching happens: the reflectivity gradually increases with incident angle. When an additional layer is added to the gold as shown in Fig. 5, in this case a 10 nm layer with index 1.52 (layer n2), the reflectivity of the TE incident light is barely affected, so it is not shown. For TM polarization, on the other hand, there is a sharp dip in the reflectivity, which (with fine tuning of the layer thickness) can go to zero. This is due to the excitation of the SPs; the SPs are strongly excited close to 71°, the angle for excitation of SPs in this system, θp. As discussed above, the strong electronic oscillations induced in the gold layer are subject to ohmic loss, which results in the conversion of optical energy to heat. When a dielectric layer is deposited on the gold, the basic shape is similar, but the dip position, θp, moves to larger incident angles; in other words the k-vector of the SP becomes larger. This is the basis of SP sensors, namely, the propagation properties of the SP change with the local environment.
Even a layer whose


Fig. 5 Reflection coefficient for TE (red) polarization and TM (green) polarization from a 47 nm gold layer, for an incident beam in glass of index 1.52 and a backing layer of water, index 1.33. The dashed green curve is obtained when a 10 nm layer of index 1.52 is present between the gold and water layers. The same layer has very little effect on the TE polarization and is not shown. Wavelength = 633 nm


thickness is much smaller than the penetration depth of the evanescent field can have a significant effect. Indeed, measuring the position of the dip is the basic detection mechanism used in most commercial SP sensors [4]. The presence of the dip is, however, a symptom of the excitation of SPs but is not the primary manifestation. Let us consider an "ideal" material which has the same properties as gold except that the resistivity disappears; in other words, the refractive index is now given by √(−12.33); we can call this "ideal" gold. There is now no loss mechanism, so the dip observed in real metals cannot be present, and indeed from the absolute value of the reflectance function there is no obvious evidence of the excitation of SPs. The phase tells a different story, however, and Fig. 6 shows that around the angle for excitation of SPs there is a rapid change in phase; this phase change means that energy is moving laterally across the sample. The phase shift observed in the reflection coefficient can be thought of as a spatial analog of a Fano resonance. A typical Fano resonance arises from the interference between overlapping responses: one that changes slowly and one that changes rapidly with, typically, the wavelength or photon energy of the light. The interference between these contributions can lead to an asymmetric signal due to the different phase relationships on either side of resonance. In the reflectivity curves, we may conceptually replace the wavelength of the light with the input wavenumber. In this case there is a slowly varying response due to the direct reflection and a resonant response that arises as the incident wavenumber matches that of the SP. This is discussed in some detail in [5], where the interference between the direct reflection and the SP reflection results in the observed dip in the SP reflection curve for the lossy case.
The approach also explains the phase change observed at resonance and shows how in the lossless case there is a phase shift on reflection but no change in amplitude. These conditions are shown in the polar diagrams of Fig. 7.
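The behavior just described is straightforward to reproduce with a standard three-medium Fresnel recursion. The sketch below is a minimal plain-Python illustration (the helper names are mine; the layer parameters are those quoted in this chapter): for real gold the TM reflectivity shows the SP dip near 71°, while substituting the lossless "ideal" gold index leaves |r| = 1 above the critical angle, with the resonance visible only in the phase.

```python
import cmath
import math

WAVELENGTH = 633e-9                      # m
K0 = 2 * math.pi / WAVELENGTH

def kz(eps, kx):
    # out-of-plane wavevector component; the principal sqrt selects decaying waves
    return cmath.sqrt(eps * K0 ** 2 - kx ** 2)

def reflection_tm(theta_deg, n_film, d_film=47e-9, n_prism=1.52, n_back=1.33):
    """TM reflection coefficient of prism / film / backing (Kretschmann geometry)."""
    e0, e1, e2 = n_prism ** 2, n_film ** 2, n_back ** 2
    kx = n_prism * K0 * math.sin(math.radians(theta_deg))
    kz0, kz1, kz2 = kz(e0, kx), kz(e1, kx), kz(e2, kx)
    r01 = (e1 * kz0 - e0 * kz1) / (e1 * kz0 + e0 * kz1)   # TM Fresnel coefficients
    r12 = (e2 * kz1 - e1 * kz2) / (e2 * kz1 + e1 * kz2)
    p = cmath.exp(2j * kz1 * d_film)                       # round-trip phase in the film
    return (r01 + r12 * p) / (1 + r01 * r12 * p)

GOLD = 0.1721 + 3.5156j        # index of real gold at 633 nm, from the text
IDEAL = cmath.sqrt(-12.33)     # lossless "ideal" gold

angles = [60 + i * 0.01 for i in range(2001)]              # 60..80 degrees
dip_angle = min(angles, key=lambda t: abs(reflection_tm(t, GOLD)))
```

With these parameters `dip_angle` lands close to the 71° quoted above, whereas `abs(reflection_tm(theta, IDEAL))` stays at unity above the critical angle while its phase sweeps rapidly through the resonance, exactly the dip-free behavior of Fig. 6.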


Fig. 6 Phase of reflection coefficient for "ideal" gold, n = (−12.33)^(1/2), for TE (red) and TM (green) input polarizations. Other parameters as Fig. 5


In this plot the red line from the origin represents the direct reflection from the sample surface; this line does not change significantly as the angle of incidence is changed, and indeed it is very similar for both "ideal" and "real" gold. Its value may be approximately determined by interpolating between the reflection coefficient 5° below the SP angle and 5° above it. For simplicity we depict the direct reflection as one single representative value in Fig. 7. The magenta line represents the contribution of the SP at a particular incident angle for "ideal" gold; this contribution changes in amplitude and phase so that the resultant signal, arising from the interference of the two contributions, follows the blue locus for "ideal" gold. The green locus corresponds to "real" gold; in this case the SP contribution (not shown) is much smaller due to the attenuation. The blue locus has a rapidly changing phase but constant magnitude above the critical angle. From the green locus, we see that the reflection coefficient is very close to zero at the plasmon angle. The crosses on each locus correspond to a change in incident angle of 1°; the spacing of these markers


Fig. 7 Polar diagram showing the evolution of the reflection coefficient. The green and blue loci show the complex reflection coefficient for different incident angles. The red line depicts the direct reflection from the sample without excitation of surface plasmons; this is approximately the same whether the sample has loss or is lossless and changes little with angle. The magenta line depicts the phasor corresponding to the excitation of the surface plasmons at an incident angle close to the optimum angle. The observed reflection coefficient is the resultant of these two contributions (black). The blue locus refers to the case where there is no loss in the substrate, "ideal" gold, so above the critical angle the locus of the reflection coefficient traces out a unit circle. The green locus corresponds to the situation for "real" gold, where we see that the maximum strength of surface plasmon excitation is approximately half the value of the lossless case. For the lossy case close to 70° incident angle, the two phasors almost perfectly cancel, resulting in a minimum reflectivity close to zero. Increasing angle of incidence is denoted by clockwise rotation; the crosses on the loci correspond to 1° changes in incident angle


Fig. 8 Transmission coefficients for “real” gold in red and “ideal” gold in green. Gold layer thickness 47 nm, wavelength 633 nm

shows that the phase changes much more rapidly around the plasmon angle. It should be mentioned here that the rapid phase change around θp is the basis for the improved sensitivity claimed for SP sensors based on phase measurement [6, 7]. We will now reconsider the transmission coefficients; it should, of course, be reiterated that above the critical angle the Poynting vector in the final medium is imaginary, so regardless of the absolute value there is no contravention of conservation of energy, since the imaginary Poynting vector corresponds to stored rather than propagating energy. From Fig. 8 we see that the red curve, corresponding to the field at the interface between the gold and the final medium, reaches its peak value when the incident wavevector is matched to the real part of the wavenumber of the SPs. Note that the peak value is the same as the field at the interface shown in Fig. 4. When the losses are removed, the peak value becomes almost twice as large. Clearly, the transmitted field corresponds to the field that actually interacts with the sample; moreover, these results are rather easier to interpret, as only the plasmon field appears at this interface and there is virtually no direct transmission interfering with that field. In Fig. 9 we compare the phase of the transmission coefficients of gold and silver, whose thicknesses have been adjusted so that the minimum reflectivity value is similar in each case (47 and 50 nm, respectively). The transmission coefficients are presented rather than the reflection coefficients as these represent the effects of the surface plasmon without interference from the directly reflected beam, as mentioned above. We see (i) that the phase shift is sharper for silver, the gradient being proportional to the propagation distance of the surface waves, and (ii) that the resonance angle is lower for silver compared to gold.
These are both advantages in sensor applications since (i) it means greater measurement sensitivity is possible and (ii) the incident angle can be smaller. These advantages are, however, offset by the chemical reactivity of silver, which means that gold is more compatible with biological experiments, so gold is the preferred choice in most practical experiments.
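The gold/silver comparison can be reproduced with the same thin-film algebra. Below is a plain-Python sketch (helper names mine; the H-field convention t = 1 + r is used at each interface; the silver index is an approximate literature value for 633 nm, not taken from this chapter, and should be checked against tabulated optical constants). The SP resonance is located from the peak of the transmitted field at the metal/water interface.

```python
import cmath
import math

WAVELENGTH = 633e-9
K0 = 2 * math.pi / WAVELENGTH

GOLD = 0.1721 + 3.5156j    # from the text
SILVER = 0.135 + 3.99j     # approximate literature value (assumption)

def transmitted_h_field(theta_deg, n_film, d_film, n_prism=1.52, n_back=1.33):
    """TM magnetic-field transmission coefficient, prism -> backing through a film."""
    e0, e1, e2 = n_prism ** 2, n_film ** 2, n_back ** 2
    kx = n_prism * K0 * math.sin(math.radians(theta_deg))
    kz0 = cmath.sqrt(e0 * K0 ** 2 - kx ** 2)
    kz1 = cmath.sqrt(e1 * K0 ** 2 - kx ** 2)
    kz2 = cmath.sqrt(e2 * K0 ** 2 - kx ** 2)
    r01 = (e1 * kz0 - e0 * kz1) / (e1 * kz0 + e0 * kz1)
    r12 = (e2 * kz1 - e1 * kz2) / (e2 * kz1 + e1 * kz2)
    p = cmath.exp(1j * kz1 * d_film)
    return (1 + r01) * (1 + r12) * p / (1 + r01 * r12 * p ** 2)

angles = [62 + i * 0.01 for i in range(1801)]     # 62..80 degrees

def resonance_angle(n_film, d_film):
    # the transmitted field peaks when the SP is excited
    return max(angles, key=lambda t: abs(transmitted_h_field(t, n_film, d_film)))

gold_peak = resonance_angle(GOLD, 47e-9)
silver_peak = resonance_angle(SILVER, 50e-9)
```

With these values `silver_peak` comes out below `gold_peak`, consistent with the lower resonance angle for silver noted above.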


Fig. 9 Phase of transmission coefficients for silver (blue) and gold (green); the steeper gradient indicates lower attenuation for the silver. Gold thickness 47 nm, silver thickness 50 nm, wavelength 633 nm

Before we leave this section, we will observe that it is relatively easy to produce surface waves using only dielectric layers. These have both similarities and differences compared to structures producing SPs. In earlier work [8] a surface wave supporting layer was produced for use at 925 nm. This layer consisted of a glass substrate, n0 = 1.51 (very slightly different from the previous value because of the different operating wavelength), n1 = 1.34 (AlF3) thickness 750 nm, n2 = 1.45 (fused SiO2) thickness 350 nm, and nw water. For this structure the reflection coefficient above the critical angle is, of course, unity since there are no losses, but there is a phase shift corresponding to excitation of surface waves. The gradient is much steeper than in the case of the SPs, partly because of the weaker coupling as well as the absence of absorption. It is interesting to note that the waveguiding structure works with both TE and TM waves, at slightly different incident angles, so the surface waves are not confined to a single polarization state. Since all the layers are transparent, fluorescent emission will pass through the layers in epi-configuration without significant attenuation. Two other useful features are that the angle of excitation is less than for the case of SPs, which can be useful particularly in microscopy applications where the aperture of available objectives is limited, and that the field enhancement is much greater than in the case of SPs, which is particularly useful for fluorescent excitation, especially two-photon excitation, where the signal enhancement goes as the fourth power of the field strength [8, 9]. Finally, if the index of the final medium, nw, of Fig. 1 is changed, the change in k-vector of the surface wave is approximately six times smaller than in the case of SPs, so these layers are much less effective than SP layers for measuring refractive index changes.
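The all-dielectric surface-wave layer can be modeled with the usual characteristic (Abeles) matrix method. The sketch below is a minimal plain-Python implementation for a general lossless multilayer (helper names mine; layer values are those quoted above); it confirms that the reflectivity above the critical angle is unity for both polarizations, so only the reflected phase carries the surface-wave signature.

```python
import cmath
import math

def layer_matrix(n, d, kx, k0, pol):
    """Characteristic matrix of one homogeneous layer (Abeles formalism)."""
    kz = cmath.sqrt((n * k0) ** 2 - kx ** 2)
    q = kz / (n ** 2 * k0) if pol == "TM" else kz / k0    # admittance-like factor
    c, s = cmath.cos(kz * d), cmath.sin(kz * d)
    return [[c, -1j * s / q], [-1j * s * q, c]]

def reflection(theta_deg, n_in, stack, n_out, wavelength, pol):
    """Reflection coefficient of n_in / stack / n_out; stack = [(n, d), ...]."""
    k0 = 2 * math.pi / wavelength
    kx = n_in * k0 * math.sin(math.radians(theta_deg))
    m = [[1, 0], [0, 1]]
    for n, d in stack:                                     # multiply layer matrices
        a = layer_matrix(n, d, kx, k0, pol)
        m = [[m[0][0]*a[0][0] + m[0][1]*a[1][0], m[0][0]*a[0][1] + m[0][1]*a[1][1]],
             [m[1][0]*a[0][0] + m[1][1]*a[1][0], m[1][0]*a[0][1] + m[1][1]*a[1][1]]]
    kz_in = cmath.sqrt((n_in * k0) ** 2 - kx ** 2)
    kz_out = cmath.sqrt((n_out * k0) ** 2 - kx ** 2)
    if pol == "TM":
        q_in, q_out = kz_in / (n_in ** 2 * k0), kz_out / (n_out ** 2 * k0)
    else:
        q_in, q_out = kz_in / k0, kz_out / k0
    num = q_in * (m[0][0] + m[0][1] * q_out) - (m[1][0] + m[1][1] * q_out)
    den = q_in * (m[0][0] + m[0][1] * q_out) + (m[1][0] + m[1][1] * q_out)
    return num / den

STACK = [(1.34, 750e-9), (1.45, 350e-9)]   # the dielectric stack of [8], at 925 nm
r_te = reflection(68.0, 1.51, STACK, 1.33, 925e-9, "TE")
r_tm = reflection(68.0, 1.51, STACK, 1.33, 925e-9, "TM")
```

At 68°, well above the glass/water critical angle (about 61.8° here), abs(r) is unity for both polarizations to machine precision; below the critical angle part of the light is transmitted and abs(r) drops below unity.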


Overview of Different SP Microscopy Methods

Early Work to Obtain Localized SP Imaging: Prism-Based Excitation

When the idea of obtaining improved spatial resolution and localization with SPs emerged, it was natural to modify the existing prism-based Kretschmann configuration (see ▶ SPR Biosensors chapter in this handbook). The earliest work in the field was that of Yeatman and Ash [10] in the UK, closely followed by Rothenhausler and Knoll [11] in Germany. There are several different approaches to using the prism configuration for imaging, and for a more detailed summary the interested reader is referred to [5]. One example is a wide-field configuration where the sample is illuminated with a plane beam of light at an incident angle close to θp. In the "bright-field" configuration, the light reflected from the sample is imaged onto a light-sensitive detector where the image is recorded, as shown in Fig. 10. Local variations in the SP propagation and local scattering will change the intensity of the reflected light, allowing an SP image to be formed. Yeatman and Ash [10] also demonstrated a scanning configuration with similar performance. A key observation made by these early authors was the link between lateral resolution and sensitivity, where higher numerical aperture means a larger range of interaction angles with a corresponding reduction in angular resolution; we will discuss how this link may be broken in section "Confocal and Interferometric Approaches to SP Imaging." A dark-field path is also shown in Fig. 10, in which the scattered SPs are detected by an imaging arm beyond the hypotenuse of the prism.

Fig. 10 Schematic of widefield prism-based microscope showing both bright- and dark-field channels (Reprinted with permission from Springer)



The lateral resolution of the prism-based SP microscope has been studied in detail by Berger et al. [12]. When the observed contrast arises from the change in the k-vector of the SPs in different regions of the sample, it was demonstrated experimentally that the propagation distance, or decay length, of the SP determined the lateral resolution that could be obtained in a test structure; we have demonstrated this point explicitly using a Green's function model in reference [13]. Using the fact that the decay length of the SPs depends on the excitation wavelength, Berger et al. [12] were able to show that the lateral resolution could be improved by using a shorter wavelength with a correspondingly smaller propagation length. The best resolution obtained was approximately 2 μm, at 531 nm; unfortunately, when the decay length is this small, the sensitivity to the surface properties is greatly reduced. A similar approach to improving the lateral resolution has been presented by Giebel et al. [14], where a coating of aluminum was used, which has a relatively large imaginary part of the dielectric constant with a consequent reduction in decay length. Once again the improvement in lateral resolution is obtained at the cost of a reduction in sensitivity to surface properties. Despite this, useful images monitoring cell attachment and motility were obtained. The difficulty with Kretschmann-based prism configurations arises from two principal factors: (i) the prism means the system is not readily compatible with a conventional microscope configuration; for instance, imaging through the prism is difficult, with large aberrations, especially when the plasmon is excited close to grazing incidence. (ii) The lateral resolution is poor compared to that achievable with optical microscopy, so that the range of applications will be restricted. In particular, applications in extremely important fields, such as cell biology, will be severely limited.
For these reasons it is necessary to develop techniques capable of overcoming these limitations.
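The link between SP decay length and achievable resolution can be made concrete from the single-interface SP dispersion relation, k_sp = k0 √(εm εd/(εm + εd)), whose imaginary part sets the propagation length. A minimal sketch follows (plain Python; the two-medium approximation ignores the finite film thickness, and the function name is mine):

```python
import cmath
import math

def sp_propagation_length(eps_metal, n_dielectric, wavelength):
    """1/e intensity propagation length of an SP on a metal/dielectric interface,
    from the two-medium dispersion relation (film thickness neglected)."""
    k0 = 2 * math.pi / wavelength
    eps_d = n_dielectric ** 2
    k_sp = k0 * cmath.sqrt(eps_metal * eps_d / (eps_metal + eps_d))
    return 1 / (2 * k_sp.imag)     # metres

# gold permittivity at 633 nm, as quoted earlier in this chapter
L_water = sp_propagation_length(-12.33 + 1.2j, 1.33, 633e-9)
L_air = sp_propagation_length(-12.33 + 1.2j, 1.0, 633e-9)
```

With the gold permittivity used earlier this gives a propagation length of a few micrometres in water (longer in air), consistent with the micron-scale resolution limits discussed above.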

Objective Lens-Based Surface Plasmon Microscopy: Non-confocal

This section will concentrate on SP microscopy through objective lenses and the challenges of obtaining good lateral resolution and quantitative results. Clearly, if we want to perform microscopy, then imaging through an objective lens is a natural approach. SPs can be generated through an objective lens with sufficient numerical aperture to produce incident waves at and beyond θp. The light returning to the back focal plane for linearly polarized incident illumination is shown in Fig. 11a. The radial position in the back focal plane maps to the sine of the incident angle, and from the figure we see that at the angle coinciding with excitation of SPs there is a dark band that corresponds to the dip observed in Fig. 5. Since the light is linearly polarized along the horizontal direction, the light is pure TM polarized in this direction and TE polarized vertically, so along the vertical line no dip is observed. Clearly, in this situation only half of the energy of the incident light is TM polarized; it is possible to use radial polarization where all the incident light is TM


Fig. 11 Calculated back focal plane distributions for (a) linear polarization, polarization along horizontal direction, and (b) radial polarization. Numerical aperture of objective = 1.25, sample is 47 nm gold with air ambient, incident wavelength = 633 nm

polarized. The back focal plane distribution corresponding to this situation is depicted in Fig. 11b, where the dark band is uniform over the whole azimuth. The focal distribution produced by the radially polarized illumination is also axially symmetrical, with a tight focus [15, 16]. There are several ways of producing radial polarization, and there are now commercially available units that can be used to switch between radial and tangential polarization; one popular example is produced by Arcoptix [17], where a liquid crystal and an auxiliary phase plate convert one state of linearly polarized light to radial polarization and its orthogonal linear state to tangential polarization. The earliest example of using an oil immersion objective lens for SP generation was given in a paper by Kano et al. [18]. In addition, they demonstrated a system that may be thought of as analogous to a "dark-field" system, whereby SPs were excited on the sample through an oil immersion objective (NA = 1.3). The presence of local scatterers was detected on the far side of the sample by scattering into waves propagating away from the source; these were collected with a dry objective. A simplified schematic of this system is shown in Fig. 12. This experiment demonstrated the potential of an objective lens as a means of exciting SPs and showed that resolution comparable to the spot size is obtainable. On the other hand, the arrangement described is not practical for many imaging applications on account of the fact that, like a prism-based "dark-field" arrangement, detection takes place on the far side of the sample. Measuring the distribution of reflected light in the back focal plane is the basis of the technique of back focal plane ellipsometry, where the sample reflectivity as a function of incident angle and polarization state may be monitored. The advantage of this technique is, of course, that the focal spot is confined to a small submicron area, allowing the properties to be measured in a highly localized region. This technique has been used with dry objectives by several authors to measure film thickness in semiconductors [19], to compensate for material variations in sample properties
The advantage of this technique is, of course, that the focal spot is confined to a small submicron area allowing the properties to be measured in a highly localized region. This technique has been used with dry objectives by several authors to measure film thickness in semiconductors [19], to compensate for material variations in sample properties



Fig. 12 Schematic diagram of early scanning objective lens microscope relying on scattering of SPs (Reprinted with permission from Springer)

when measuring surface profile [20], and to extract ellipsometric properties over a local region [21]. This concept has also been extended by Kano and Knoll [22], using an oil immersion objective to measure the local thickness of Langmuir-Blodgett films. In essence, they measured a back focal plane distribution such as the one shown in Fig. 11 and followed the position of the dip as the objective was focused on different regions of the sample, thus obtaining a local value for θp. Local measurements corresponding to four Langmuir-Blodgett monolayers were detected reasonably easily with this technique. These authors subsequently extended this approach to make a scanning microscope configuration [23], where the position of the ring is monitored as a function of scan position, thus allowing microscopic imaging. The lateral resolution obtained with this method appears to be approximately 1.5 μm. The difficulty in achieving better lateral resolution may, in part, arise from the fact that the experimental distributions in the back focal plane were rather prone to interference artifacts, as well as from delocalization of the interrogating field due to propagation of the SPs. A similar approach, using radially polarized light, has been used more recently [12] to image phase separation in lipid bilayers. The lipids separate into different domains with different local structures and slightly different thickness; clear differentiation of the domains was observed. This approach has been more recently used to visualize mixed lipid domains by observation of the change in effective index in patterned bilayers [24]. The fact that an objective lens can view a range of azimuthal angles has been exploited by Tanaka and Yamamoto [25]. In this technique a laser was used to illuminate the back focal plane of an objective lens, thus resulting in excitation of surface waves as discussed earlier.
Four detectors were placed in the back focal plane of the objective, oriented so that they would detect scattering both along and normal to the direction of propagation; the resolution and contrast obtained by each detector differed depending on the orientation of the detector relative to the feature. Imaging was achieved by sample scanning. Although reasonable lateral resolution (c. 1.6 μm) was achieved, this method is somewhat cumbersome, and there are easier ways to exploit the fact that an objective lens can image a range of azimuthal angles.
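The statement earlier that radial position in the back focal plane maps to the sine of the incident angle (via the sine condition) is easy to use quantitatively. As a minimal illustration (plain Python; the function name is mine), the fractional pupil radius at which a given incidence angle appears is:

```python
import math

def bfp_radius_fraction(theta_deg, n_immersion=1.52, numerical_aperture=1.45):
    """Fractional back focal plane radius for light incident at theta_deg,
    assuming the sine condition: radius proportional to n*sin(theta)."""
    return n_immersion * math.sin(math.radians(theta_deg)) / numerical_aperture

# SP excitation at about 71 deg (gold/water) sits close to the pupil edge
ring = bfp_radius_fraction(71.0)
```

For a 1.45 NA objective this places the SP dark ring at roughly 99% of the aperture radius, which is why high-NA (or 1.65 NA) objectives are needed to excite SPs comfortably within the pupil in aqueous media.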


Another approach to the development of localized imaging using an objective lens has been developed by Xuan's group [26]. This system uses an objective lens to generate the range of angles necessary for excitation of SPs. The key to the method lies in generating two sources, with radially and azimuthally polarized light. The system then operates as two parallel interferometers, one for each polarization state. The back focal plane distributions for the two polarization states are then independently monitored on different CCD detectors. The position of the dark ring produces a coarse measure of the SP k-vector; this part of the measurement is similar to the approach of Kano and Knoll [23] and Moh et al. [15], while the phase around the plasmon dip provides a fine measure of the local refractive index changes (reflected in the changes in SP k-vector). The analysis of the phase variation around the angle for SP excitation is similar to the methods developed by Ho and coworkers [7, 27–31] to analyze the phase variation obtained using the prism-based Kretschmann configuration. The system claims excellent sensitivity, around 10⁻⁷ RIU, with a large dynamic range of 0.35 RIU [26]. Although a little involved in terms of system complexity, the system is expected to be stable to external microphonics since both arms have a similar range of incident angles. In the following subsection, we will demonstrate that confocal microscopy also offers a large dynamic range. The techniques described above all involve scanning the optics relative to the sample; however, it is possible to produce a wide-field SP microscope with similar performance. A conventional wide-field reflection optical microscope with a suitable objective lens will excite SPs. Consider the system shown in Fig. 13; the mask

Fig. 13 Schematic of wide-field surface plasmon microscope (Reprinted with permission from Journal of Microscopy)


allows SPs to be excited while blocking light not contributing to the SP contrast. The spatial coherence of the laser source was destroyed by a rotating diffuser, so that there was a monochromatic, spatially incoherent light source conjugate with the back focal plane of the objective. Lenses F1 to F3 project the field between image and Fourier planes. Two CCD cameras are used in the detection arm, one to record the back focal plane distribution and the other to record the image. Figure 14 shows a back focal plane distribution from a 45 nm thick gold grating sample taken with the system using a Zeiss Plan Fluar 100× objective

Fig. 14 Experimental (left) and theoretical (right, calculated by rigorous coupled wave diffraction) back focal plane distributions obtained from the grating structure shown schematically in (c). (a) Corresponds to linear polarization parallel to the grating vector, i.e., SPs propagating across the grating; (b) corresponds to linear polarization perpendicular to the grating vector, that is, SPs propagating parallel to the grooves of the grating (Reprinted with permission from Journal of Microscopy)


with an NA of 1.45. We see the characteristic dips arising from the excitation of SPs; note, however, that the distributions obtained with the wide-field system differ from those obtained with point scanning methods such as that of Tanaka and Yamamoto [25], because the back focal plane distribution now records information from an extended area on the sample surface rather than a single focused spot. The use of an extended incoherent source has removed the speckle artifacts visible in the back focal plane distributions; it is indeed very difficult to remove such artifacts when a spatially coherent source is used. Figure 14a, b shows the back focal plane distributions obtained from the grating sample shown diagrammatically in Fig. 14c. Note that several periods (c. 7–10) of the grating are illuminated. Figure 14a shows the grating vector oriented parallel to the direction of maximum p-polarization (i.e., SPs propagating across the grating). We see that there is a characteristic dip in the back focal plane corresponding to excitation of the surface wave. In fact measurement of the position of the dip shows that it corresponds to the position expected for a gold layer with a uniform 15 nm coating of silicon nitride. This is the mean coverage of the layer (6 μm of 20 nm thick silicon nitride and 2 μm bare). We can also observe a second dark crescent corresponding to a diffracted order. Measurements [32] show that the diffraction crescent appears at the expected position to within a few percent. An important point to note from this figure is that the diffraction effects are only observed around the SP resonance, indicating much stronger potential image contrast from SPs compared to the background. Figure 14b shows the back focal plane distribution that occurs when the grating vector is perpendicular to the direction of maximum p-polarization. The pattern is more complicated but again shows strong diffraction around the plasmon angle.
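The 15 nm mean coverage quoted above, and the displacement of the diffracted crescent in the back focal plane, follow from simple arithmetic on the stated grating geometry (a sketch; the 8 μm period is inferred from the 6 μm + 2 μm duty cycle given in the text):

```python
# Mean coating thickness seen over one grating period:
# 6 um of 20 nm thick film plus 2 um bare, in an 8 um period.
mean_thickness_nm = (6.0 * 20.0 + 2.0 * 0.0) / (6.0 + 2.0)

# First-order grating displacement in the back focal plane, expressed in
# n*sin(theta) units: wavelength / period (both in micrometres here).
delta_n_sin_theta = 0.633 / 8.0
```

The 15 nm result matches the uniform-layer equivalent inferred from the dip position, and the roughly 0.08 offset in n sin θ units sets where the diffracted crescent should appear relative to the SP ring.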
Once again the diffraction theory gives a very good prediction of the experimental distributions. The back focal plane distributions are therefore strongly indicative that SP imaging can be achieved with a wide-field microscope configuration. If one simply tries to image, however, there is virtually no contrast. This arises from the fact that there is a great deal of background from other angles of incidence which does not contribute to the image contrast; we will return to this point when we discuss the confocal and wide-field interferometric microscopes in section “Confocal and Interferometric Approaches to SP Imaging.” The obvious solution is to use an annular mask conjugate with the back focal plane, which allows only light incident at or close to the angle for excitation of surface waves to illuminate the sample. This blocks light that does not contribute to the image contrast. In our experiments we have used a physical mask and also a spatial light modulator (SLM); the former has excellent contrast, whereas the latter allows the preferred angles to be varied in a very convenient manner. Note that in these experiments, no mask was inserted in the detection arm, so that the full aperture was available to detect scattered light. The mask allowed light incident between 45° and 52° to illuminate the sample. The images of Fig. 15 show the grating structure when the grating vector lies parallel and normal to the principal direction of SP propagation, that is, the direction of maximum p-polarization. From the height of the transition, we estimate that the resolution is approximately 1.3 μm along the direction of SP propagation and 0.93 μm normal to

17

Surface Plasmon, Surface Wave, and Enhanced Evanescent Wave Microscopy


Fig. 15 SP images of the grating structure shown in Fig. 14c: (a) illumination parallel to the grating vector, (b) illumination perpendicular to the grating vector. Image width approx. 64 μm (Reprinted with permission from Journal of Microscopy)

this direction. The lateral resolution is clearly still somewhat dependent on the propagation direction of the surface plasmons relative to the grating vector; however, the range of azimuthal angles has reduced this effect. This system has been used to image live cells in culture medium. Unlabelled images of focal contacts and adhesions associated with cell surface attachment, spreading, and migration were obtained in real time using this approach [33]. The images in aqueous media were obtained with a 1.65 NA objective so as to excite SPs a reasonable distance from the edge of the aperture. Another approach to wide-field SP imaging is to focus the illumination beam into a spot on the back focal plane; by scanning the spot position, the contrast can be varied so that specific regions appear bright or dark [34]. The method has the advantage that all the illuminating light can be used to generate SPs, which can improve contrast; on the other hand, it is more difficult to eliminate speckle artifacts with single-point illumination. The resolution along the direction of propagation is limited by the decay length of the SPs, and resolution improvement was achieved by limiting the propagation length, either by decreasing the illumination wavelength or


using copper rather than gold to support the SPs. A similar approach was adopted by Tan [35], who used an SLM to vary the incident illumination angle over a complete annulus. The incident angle could then be varied without mechanical movement, as well as producing spatial maps of SP excitation. Another, more recent paper that uses single-point excitation in the back focal plane has been described by He et al. [36]. Again they illuminate the sample with a single point in the back focal plane, but here, rather than ensuring all the incident light is TM polarized, the polarization is a controlled mixture of TM and TE, so that two beams are incident on the sample which can then be interfered. In this way the phase of the SP can be measured. This approach appears to give very stable, high-contrast phase images from the sample. In addition to the SP phase, the authors used the field enhancement associated with SP excitation to obtain enhanced fluorescent excitation in a separate imaging channel. Several other label-free evanescent wave techniques have been reported. One example is the recently published “photonic crystal-enhanced microscopy” [37], where the sample is illuminated with a broadband light source through a photonic crystal grating which also supports the sample. The proximity of different structures within the evanescent field perturbs the spectral response of the photonic crystal, so that the presence and potentially the location of different features can be monitored via the emission spectrum recorded with a spectrometer.

Confocal and Interferometric Approaches to SP Imaging

In this section we discuss methods that limit the effective path of the SPs; this ensures that measurements are performed over a distance determined by the optical system rather than the propagation path of the SPs, thus improving the lateral resolution and limiting the cross talk between adjacent points. These methods use confocal microscopy, heterodyne interferometry (essentially a scanning Linnik interferometer with the reference beam frequency shifted relative to the sample beam), and a wide-field speckle Linnik interferometer. The important point to realize is that despite the very different optical configurations, all three systems have essentially the same transfer function. We will therefore start this section by explaining the equivalence between confocal and interferometric detection.

Equivalence Between Heterodyne and Confocal Detection

The following argument demonstrates the equivalence between heterodyne interferometry and confocal operation. Consider the simplified scanning microscope system shown in Fig. 16, which omits the illumination path for clarity. In the heterodyne interferometer, the field in the back focal plane is represented as Eb(x,y). Now consider first the heterodyne interferometer with a reference field Er(x,y)exp(iΔωt), where the exponential term indicates that the reference beam and the beam in the back focal plane are frequency shifted relative to each other. The output from the interferometer at a position x, y is given by Ii(x,y):


Interference with heterodyne reference beam and heterodyne detection

Eb(x,y) Confocal pinhole Sample

Back focal plane

Fig. 16 Schematic diagram to explain equivalence of confocal and heterodyne microscopy

Ii(x,y) = |Eb(x,y)|² + |Er(x,y)|² + 2|Eb(x,y)||Er(x,y)| cos[Δωt + ϕ(x,y)]   (4)

where the relative phase between the beam in the back focal plane and the reference beam is given by ϕ. Now the overall signal detected in the interferometer is integrated over the aperture. If we assume a uniform reference beam, the magnitude of the signal at Δω is the integral of the complex field Eb over the whole aperture, that is, |∫aperture |Eb| exp(iϕ)|. Now consider the confocal system: here the field Eb(x,y) is focused onto a pinhole, with the second lens performing a Fourier transform. If the pinhole is placed on the optical axis, the value at this point represents the DC value of the transform, or the mean value of the field Eb. The mean value is proportional to the integral of the field, so the output of the ideal confocal system is simply |∫aperture |Eb| exp(iϕ)|². Apart from the squaring, the responses of the heterodyne interferometer and the confocal system are thus the same under the ideal conditions of a uniform reference beam for the former and a point pinhole for the latter. Since the output of the heterodyne interferometer is proportional to the field returning from the sample, it is possible to measure phase as well as amplitude. In practice, it is usually the amplitude of the interference signal that is measured. The heterodyne interferometric and confocal plasmonic microscopes therefore behave in very similar ways, although the squaring effect has some implications that are discussed below. Similar arguments can be used to explain the equivalence between a wide-field speckle interferometer and the scanning confocal and scanning heterodyne systems [38]. We will now consider each system in turn.
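The equivalence argument can be checked numerically. In the sketch below (an illustration only, with a random complex field standing in for Eb), the heterodyne output is the modulus of the aperture integral of the field, while the confocal output is the squared modulus of the DC Fourier component sampled by an on-axis point pinhole; apart from the squaring, the two agree.

```python
import numpy as np

rng = np.random.default_rng(0)

# A complex field sampled over the back focal plane aperture
# (speckle-like random values stand in for Eb(x, y))
Eb = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))

# Heterodyne detection: the signal at the beat frequency Delta-omega is
# the aperture integral of the complex field (uniform reference assumed)
heterodyne = abs(Eb.sum())

# Confocal detection with an on-axis point pinhole: the lens Fourier
# transforms Eb and the pinhole samples the DC term, i.e. the sum of
# the field; the detector then measures intensity (modulus squared)
dc = np.fft.fft2(Eb)[0, 0]
confocal = abs(dc) ** 2

assert np.isclose(confocal, heterodyne ** 2)
```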

Heterodyne Microscopy for SP Microscopy

Shortly after Kano et al. [18] demonstrated that a microscope objective was a convenient way to excite SPs, our group [39, 40] demonstrated an SP microscope based on heterodyne interferometry, in which the spatial resolution was limited by the optical system rather than the propagation length of the SPs, thus giving far


better and more controlled lateral resolution. Practically, the frequency shifting between the reference and sample beams was accomplished with acousto-optic modulators in each arm, driven at slightly different frequencies. The operation of the heterodyne interferometer and the reasons for the resolution improvement are discussed in some detail in [41]. More recently the heterodyne approach has been used to image detailed cellular structure of fixed cells without the need for contrast agents [42, 43]. Moreover, the fact that the heterodyne signal gives a phase as well as an amplitude output was used to produce high-contrast phase images of cells. The importance of SPs as a means of generating contrast was demonstrated by comparing images with radial polarization (generating SPs) and azimuthal polarization (no SPs generated), where far stronger contrast is seen in the radially polarized images. These results are presented in section “Implications of SP and Evanescent Wave Properties,” where they are discussed in the context of other imaging modalities.

Confocal SP Microscopy

More recently, however, we have shown that the confocal microscope arrangement provides similar performance with a simpler and possibly more adaptable configuration than the heterodyne interferometer. Moreover, the advantages in terms of resolution improvement and localization are much easier to understand with reference to the confocal arrangement, which, given the equivalence discussed in section “Equivalence Between Heterodyne and Confocal Detection,” in turn explains the advantages of the interferometric system. Figure 17 shows a simplified schematic of a confocal microscope system in defocus mode. We can see that any TM polarized light on the sample surface will generate SPs which radiate continuously back into the coupling fluid. It is the fact that the position of reradiation is so poorly defined that contributes to the degraded lateral resolution of SP systems. If, however, the system is operated as a confocal microscope, only radiation generated at “a” in Fig. 17 and reradiated at “b” (and vice versa) returns to the pinhole. In other words, in the confocal system, the path of the SPs is defined by the optical system and the defocus rather than by the propagation length of the SPs. In addition, most of the energy is concentrated close to the optical axis by virtue of the focusing of the SPs, which are excited on an annulus on the sample surface. Besides the light involved in the generation of SPs, the other light path that returns to the pinhole is the beam that hits the sample close to normal incidence. Since only light that appears to come from the focus passes through the pinhole, the output signal is predominantly formed from these two major contributions if we consider a sample defocused by a distance z.
We can associate a phase of 4πnoil z/λ with path P1, the normally reflected beam, and a phase of 4πnoil z cos θp/λ with the wave involved in generation of the SP, where λ is the free space wavelength, noil is the refractive index of the immersion medium, and θp is the angle of incidence for the excitation of SPs. These beams interfere at the pinhole, so that as the sample is defocused there will be a periodic oscillation whose period, Δz, is determined from


Fig. 17 Simplified schematic of confocal microscope used for SP imaging (omitting illumination optics). There is a spatial light modulator conjugate with the back focal plane that allows amplitude and phase modulation of the light distribution (Reprinted with permission from Optics Express)

the defocus necessary to change the relative phase between the two contributions by 2π, giving

Δz = λ / [2 noil (1 − cos θp)]   (5)

Figure 18 shows examples of the so-called V(z) curves showing the periodic ripple as a function of defocus. The output, V(z), for linear input polarization is given by

I(z) = |V(z)|² = |∫0^2π ∫0^smax Pin(s) Pout(s) [rp(s) cos²ϕ + rs(s) sin²ϕ] exp(2jnkz cos θ) s ds dϕ|²   (6)

where I(z) is the output signal and V(z) is the integrated field; s is the sine of the incident angle; Pin and Pout represent the input and output pupil functions, respectively; rp(s) and rs(s) are the amplitude reflection coefficients for p- and s-polarizations, respectively; θ is the angle of incidence; and ϕ is the azimuthal


Fig. 18 Simulated V(z) curves (normalized |V(z)| versus sample defocus in microns) for different pinhole diameters. Solid curves: 50 nm bare gold; dashed curves: gold with a 10 nm overlayer of refractive index 1.5. Each pinhole curve is displaced by 0.1 units on the y-axis, and curves corresponding to the overlayer are displaced by a further 0.05 on the y-axis. Pinhole radii are defined in terms of the Airy disk radius (0.61λ/NA) shown in the legend (0.1, 0.25, 0.5, and 1) (Reprinted with permission from Optics Express)

angle. Note that for radial polarization, the term in the square brackets is simply replaced by rp(s). The ideal response is obtained when the pinhole is vanishingly small; as the pinhole size is increased, the ripples become less distinct because the path of the SPs is less well defined. In experiments it is, of course, necessary to use a finite-sized pinhole to allow detection of sufficient light; in practice a pinhole diameter of between 0.25 and 0.5 Airy disk diameters is a good compromise. Control of the pinhole diameter can be performed very conveniently using a CCD camera at the pinhole plane rather than a physical pinhole; this allows one to vary the diameter simply by selecting the pixels that are used to form the signal. We can also see that the presence of the overlayers changes the period of the ripples, so that at a specific defocus one region appears bright relative to another, while at other defoci the situation is reversed. This effect accounts for the changes in image contrast observed in the cell images shown in Fig. 25, where different regions show differing relative contrast with defocus. Similar effects were used for the grating images presented in [39]. There is one important challenge associated with the confocal arrangement that does not arise with the heterodyne system. In the heterodyne interferometer, the interference signal is linearly proportional to the returning field, since the output is proportional to the product of the signal from the sample and the reference beam, so the output is proportional to |V(z)|. In the confocal system, the output is proportional to |V(z)|², which means the relative size of the ripples is rather small; for this reason it


is useful to remove the intermediate spatial frequencies that contribute neither to the normally reflected beam nor to the SPs. Moreover, some apodization of the hard edge of the pupil function is valuable in order to remove ripple related to the aperture of the objective lens [16, 44]. This may be achieved with a spatial light modulator conjugate to the back focal plane. The effect of physical defocus is to impose a relative change of phase between the normally incident radiation and the radiation that generates an SP; if we use a spatial light modulator, we can avoid physical defocus altogether, since the same effect can be produced by imposing the appropriate phase profile in the back focal plane, so that the SLM performs an effective defocus without physical movement; moreover, it can also perform pupil function apodization by setting adjacent pixels in antiphase to create a “dark” superpixel. One can go further than simply mimicking the effect of defocus. If we look at the phase profile associated with defocus as depicted in Fig. 19b, we see that close to normal incidence there is increasing curvature with defocus, which reduces the strength of the reference beam P1 of Fig. 17. This means that the V(z) signal decays rapidly with defocus. If, however, we replace the green profile in Fig. 19b with the dashed red curve close to normal incidence, while retaining the same phase profile elsewhere, we can maintain the same strength of SP excitation (P2 of Fig. 17) while keeping a strong reference beam P1 at all defocus values, thus greatly improving the SNR of the recovered V(z) curves, whose strength is proportional to the product of the fields associated with P1 and P2. The same configuration affords even more flexibility. Since the SLM can impose a phase delay on the reference beam relative to the SP beam, it is possible to phase step one relative to the other to recover the amplitude and phase of the SP signal [45].
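The formation of the V(z) ripple described by Eqs. (5) and (6) can be sketched numerically. The toy model below (an illustration, not the parameters of the original experiments) evaluates the radial-polarization form of Eq. (6), replacing the rp(s) of a real gold film with an assumed unit-magnitude coefficient carrying a rapid 2π phase swing at an assumed SP angle, with a cosine taper standing in for the pupil apodization just discussed.

```python
import numpy as np

lam = 0.633                          # free-space wavelength, microns (assumed)
n_oil = 1.52                         # immersion index (assumed)
k = 2 * np.pi / lam                  # free-space wavenumber
s_p = np.sin(np.radians(44.0))       # sine of assumed SP excitation angle

s = np.linspace(0.0, np.sin(np.radians(60.0)), 4000)
ds = s[1] - s[0]
cos_t = np.sqrt(1.0 - s ** 2)
rp = np.exp(2j * np.arctan((s - s_p) / 0.01))   # ~2*pi phase jump at s_p

# Cosine taper on the hard aperture edge (apodization to suppress
# ripple associated with the edge of the objective aperture)
pupil = np.ones_like(s)
edge = s > 0.9 * s[-1]
pupil[edge] = 0.5 * (1 + np.cos(np.pi * (s[edge] - 0.9 * s[-1]) / (0.1 * s[-1])))

def V(z):
    # Radial-polarization form of Eq. (6): V(z) = int P(s) rp(s) exp(2j n k z cos(theta)) s ds
    return np.sum(pupil * rp * np.exp(2j * n_oil * k * z * cos_t) * s) * ds

zs = np.arange(-5.0, 0.0, 0.01)                 # negative defocus, microns
vz = np.abs(np.array([V(z) for z in zs]))

# Ripple period predicted by Eq. (5)
dz_theory = lam / (2 * n_oil * (1 - np.cos(np.radians(44.0))))
```

For these assumed values dz_theory is about 0.74 μm, and the oscillation of |V(z)| over negative defocus approximates this period, which is how the ripple reports the SP angle.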
The use of a phase stepping algorithm then allows one to extract the phase of the SP signal, which can be plotted against defocus to recover the SP wavevector, kp = (2π noil/λ) sin θp. With k = 2π noil/λ, the slope is given by

dϕ/dz = 2k(1 − cos θp)   (7)

Figure 20 shows the unwrapped phase plotted against defocus for a bare gold layer and a gold layer coated with different layers of indium tin oxide (ITO); the values obtained from the SP measurements are in excellent agreement with those from ellipsometry. The principal advantage of the SP measurement is, of course, that it is made through the gold rather than from the ITO layer. The elegant point of this measurement is that the phase of the SP is recovered rather than the phase of V(z), which is the resultant of the reference beam and the SP beam. The phase reconstruction approach can be used to monitor analyte binding in real time. By measuring the change in gradient as binding and dissociation take place, the confocal system can be used to follow a process in a similar manner to a conventional SPR sensor, the difference being that the interrogated region is several orders of magnitude smaller than in a prism-based SP system, thus potentially allowing the use of far denser chips with lower consumption of reagents (Fig. 21).
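The gradient-based recovery can be sketched numerically. Below, synthetic unwrapped phase data (assumed values, standing in for a measurement) are fitted with a straight line whose slope, via Eq. (7) with k = 2πnoil/λ, returns the SP angle and hence kp.

```python
import numpy as np

lam = 0.633                       # wavelength, microns (assumed)
n_oil = 1.52                      # immersion index (assumed)
k = 2 * np.pi * n_oil / lam       # wavenumber in the immersion medium
theta_true = np.radians(44.0)     # assumed SP angle used to synthesize data

# Synthetic unwrapped phase: phi(z) = 2k(1 - cos(theta_p))z plus noise
z = np.linspace(-4.0, -0.5, 200)  # negative defocus region used for the fit
rng = np.random.default_rng(1)
phase = 2 * k * (1 - np.cos(theta_true)) * z + rng.normal(0, 0.02, z.size)

# Straight-line fit; the slope inverts Eq. (7) for theta_p
slope = np.polyfit(z, phase, 1)[0]
theta_rec = np.arccos(1 - slope / (2 * k))
k_p = k * np.sin(theta_rec)       # recovered SP wavevector
print(np.degrees(theta_rec))
```

In practice the fit would be applied to the measured phase over the negative-defocus region, exactly as done for the ITO-coated regions of Fig. 20.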


Fig. 19 (a) Comparison of V(z) curves obtained by physical defocus (red) and by effective defocus imposed as a phase profile on the SLM. (b) Wrapped phase profile on the SLM corresponding to a defocus of 4 μm (sample above focal plane). The dashed red curve in (b) shows an alternative phase profile around normal incidence that retains the strength of the reference beam with defocus, thus increasing the signal strength (Reprinted with permission from Optics Express)
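A sketch of a phase profile of the kind shown in Fig. 19b, with the optional flat centre, might look as follows; the pupil sampling, defocus value, and 60 % flat fraction are illustrative assumptions.

```python
import numpy as np

def slm_defocus_phase(s, z, n_oil, wavelength, flat_fraction=0.0):
    """Phase (radians) written to an SLM conjugate to the back focal
    plane to mimic a physical defocus z: phi(s) = 2*k*n*z*cos(theta),
    where s = sin(theta).  With flat_fraction > 0 the central region of
    the pupil is held at a constant phase (the dashed red profile of
    Fig. 19b), keeping the reference beam strong at every defocus."""
    k = 2 * np.pi / wavelength
    phase = 2 * k * n_oil * z * np.sqrt(1 - s ** 2)
    s_flat = flat_fraction * s.max()
    # continuous join: central pixels take the phase at the flat boundary
    phase = np.where(s < s_flat,
                     2 * k * n_oil * z * np.sqrt(1 - s_flat ** 2),
                     phase)
    return np.mod(phase, 2 * np.pi)   # wrapped, as displayed on the SLM

s = np.linspace(0, 0.95, 512)         # sine of incidence angle across the pupil
phi = slm_defocus_phase(s, z=-4.0, n_oil=1.52, wavelength=0.633,
                        flat_fraction=0.6)
```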

We can see that the addition of BSA increases the effective refractive index due to the binding process, and the addition of PBS releases the upper layers of BSA, thus reducing the effective index. The blue curve is obtained with a standard curved reference beam corresponding to the green curve of Fig. 19b, exactly analogous to a physical defocus; the green curve, with vastly improved signal-to-noise ratio, uses a flat phase profile for the reference beam, which maintains a strong reference beam while using an identical phase profile to excite the SPs. If shot noise-limited performance can be obtained, the confocal arrangement should allow close to single molecule detection [46]. The system has an extremely high dynamic range in terms of refractive index units, since indices from 1 to close to 1.37 can be monitored. Using a 1.65 NA objective, even higher refractive index values can be measured.


Fig. 20 Phase of the SP signal versus defocus obtained on five different regions of the sample as depicted by the schematic on the left. R0: bare gold; R1 = 3.31 nm of ITO coating; R2 = 6.32 nm; R3 = 8.02 nm; R4 = 10.02 nm, as measured by ellipsometry. The gradient of the phase variation in the region of negative defocus allows us to recover the SP wavevector and the thickness of the ITO coating (Reprinted with permission from Light: Science and Applications)

Fig. 21 Nonspecific binding of BSA to a gold substrate. The blue curve is for the conventional measurement with a curved reference beam, as shown by the green curve of Fig. 19b; the green curve shows the same measurement with a flat top occupying 60 % of the back focal plane, as shown schematically by the dashed red line of Fig. 19b


There is insufficient space to give a comprehensive review of the power of the SLM; however, we should point out that it also allows one to make the system remarkably immune to environmental noise. Consider microphonic vibrations of the sample: a phase error is introduced because the phases of P1 and P2 change differently, since they are incident at different angles. Ideally, we would like the reference beam to hit the sample at the same incident angle as the SP; this can easily be arranged; however, in the defocused state this light will not return to the pinhole, so no significant interference takes place. The SLM can be arranged to form a wedge in the region where the reference beam returns, so that light that would normally miss the pinhole is deflected back into it. We have shown that this approach reduces the microphonic noise variance by over three orders of magnitude [47].

Wide-Field Confocal SP Imaging

We have explained why the confocal system and the heterodyne interferometer are essentially equivalent optically. In addition, the output from the speckle-illuminated wide-field Linnik interferometer has the same confocal transfer function as the scanning heterodyne interferometer, as demonstrated both theoretically and experimentally [38]. We may therefore expect similar V(z) properties from a wide-field speckle interferometer as from a scanning confocal or heterodyne microscope. A schematic of the wide-field system is shown in the main part of Fig. 22. The diffuser randomizes the spatial coherence of the 633 nm He–Ne laser so that a speckle pattern is imaged onto the back focal plane (BFP) of the sample and reference objectives. When the sample is in focus, the speckle patterns imaged to the detector plane are similar, resulting in a large interference signal. As the sample is defocused, there is a reduction in the correlation between the reference and signal beams, and hence in the interference term, which brings about the confocal response. As the diffuser rotates, a new set of speckle patterns emerges with the same underlying statistical properties, and provided that many different speckle patterns are presented to the sample within the frame acquisition time, the speckle noise is averaged out. A standard four-step phase stepping algorithm was used to extract the interference signal.
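The four-step phase stepping extraction mentioned above is a standard reconstruction; a minimal sketch, checked against synthetic frames, is:

```python
import numpy as np

def four_step(i0, i1, i2, i3):
    """Recover the amplitude and phase of the interference term from
    four frames captured at reference phase shifts 0, pi/2, pi, 3*pi/2."""
    amp = 0.5 * np.hypot(i0 - i2, i3 - i1)
    phase = np.arctan2(i3 - i1, i0 - i2)
    return amp, phase

# Synthetic check: frames of the form I_n = background + m*cos(phi + n*pi/2)
bg, m, phi = 3.0, 1.4, 0.4
frames = [bg + m * np.cos(phi + n * np.pi / 2) for n in range(4)]
amp, ph = four_step(*frames)
```

The same expressions apply per pixel to full camera frames, since the arithmetic is element-wise.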
Compared to the heterodyne interferometer, there are some practical issues associated with wide-field SP imaging, similar to, although even more severe than, those encountered with the confocal system discussed in subsection “Confocal SP Microscopy.” In a scanning-based system, the detectors generally have a large dynamic range; this is obviously true of the photodetectors used in heterodyne interferometry, but even in a confocal system which uses a CCD camera as a virtual pinhole, the fact that very many pixels are used for each point means that the effective dynamic range is orders of magnitude larger than if a single pixel were used. In the wide-field system, on the other hand, each pixel corresponds to an image point, so it is not always easy to detect the interference signal on the large background; for this reason an aperture to block the midrange angles, which do not contribute to the interference signal, was again used. Figure 23 shows two V(z) curves obtained for input polarization direction


Fig. 22 Schematic speckle-based wide-field confocal interferometer setup (Reprinted with permission from Optics Letters)

Fig. 23 V(z) curves (normalized |V(z)| versus defocus in micrometers) obtained on the wide-field speckle interferometer. Solid curve: bare gold; dotted curve: coated region (Reprinted with permission from Optics Letters)


perpendicular to the grating vector, on bare gold and on a coated region. We can see that in focus there is little image contrast and that changing the defocus can invert the contrast; however, the pupil function resulting from the aperture gives a somewhat less regular pattern than the scanning V(z) curves presented in [39, 48]. The core of our interest in this family of techniques is that they utilize a well-defined path for the SPs and thus confer better spatial resolution as well as excellent quantification of the SP properties. This has been validated experimentally, and more recently a detailed analysis using rigorous diffraction theory [41] has been carried out, indicating that interferometric (and confocal) methods give resolution determined by the optical system rather than by the propagation length of the SPs.

Implications of SP and Evanescent Wave Properties

In this section we will summarize the properties of evanescent waves, surface waves, and SPs and discuss their implications for the development of instrumentation and microscopy systems. We will also show images that illustrate particular properties of surface waves. The main properties of interest are as follows:

1. All wave modes produce an evanescent wave in the final medium that is a function of ng, nw, and the incident angle only.
2. There is usually a field enhancement in the final medium, which is generally stronger for SPs and guided surface waves than for waves produced by simple total internal reflection (TIR).
3. For SPs and surface waves, there is considerable lateral energy transport across the sample.
4. When a layer of different refractive index is deposited on the guided wave structure, there is a change in the k-vector of the propagating surface waves that is generally considerably larger for SPs than for dielectric surface waves.

The benefits and disadvantages of these properties depend on the application. In particular, consider the situation depicted in Fig. 24, which shows two principal mechanisms for surface wave imaging. Figure 24a shows the scattering of the surface wave by a small local object. Figure 24b shows a different measurement regime where we wish to measure the local change in the SP propagation vector over a small but extended region. Our aim in this situation is to obtain a measure as close as possible to the one that would be obtained with a uniform layer of the same thickness; in other words, we require the surrounding regions to have as little influence as possible on the measurement. The confocal arrangement discussed in section “Confocal SP Microscopy” excels in this measurement. Several papers have advocated SPs as a means of imaging cellular attachment to a substrate, for instance [15, 49].
In this measurement the attachment point of the cell acts as a small scatterer as it penetrates the evanescent field.
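Property 1 can be made quantitative: the penetration depth of the evanescent field follows from the two indices and the incident angle alone. The sketch below uses the standard expression for the 1/e intensity depth; the wavelength, indices, and angles are assumed illustrative values.

```python
import math

def penetration_depth(wavelength_nm, n1, n2, theta_deg):
    """1/e intensity penetration depth of the evanescent field in the
    final medium: d = lambda / (4*pi*sqrt(n1^2*sin^2(theta) - n2^2))."""
    st = n1 * math.sin(math.radians(theta_deg))
    return wavelength_nm / (4 * math.pi * math.sqrt(st ** 2 - n2 ** 2))

# Assumed values: 633 nm, coupling side n1 = 1.52, aqueous medium n2 = 1.33.
# The critical angle is ~61 deg; just above it (TIRM regime) the field
# reaches deeper into the medium than at the ~71 deg aqueous SP angle.
d_tir = penetration_depth(633, 1.52, 1.33, 63.0)
d_sp = penetration_depth(633, 1.52, 1.33, 71.0)
print(d_tir, d_sp)
```

With these assumed numbers, illumination just above the critical angle penetrates roughly twice as deep as at the SP excitation angle, matching the later remark that the TIRM penetration depth exceeds that of the SP case.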


Fig. 24 Schematic diagram showing the principal mechanisms of interaction of surface plasmons with localized structures within the evanescent field: (a) scattering by a small local feature into propagating waves (this can include a small region of a cell membrane penetrating into the evanescent field); (b) change in the local SP wavevector effected by deposition of an analyte over a small region (Reprinted with permission from Journal of Microscopy)

Figure 25 shows high-resolution scanned images of fixed cells in air taken using a heterodyne interferometric microscope [16, 39, 40, 42, 48, 50, 51], whose details are discussed in section “Overview of Different SP Microscopy Methods.” In this case the field enhancement associated with the SPs is valuable, as it leads to greater signal from both mechanisms described in Fig. 24. The effect of the lateral movement of the SPs is not too serious because the interferometric arrangement limits the effective distance interrogated by the surface waves. Figure 25a shows different V(z) curves corresponding to different regions of the sample, which indicate how the contrast between these regions changes with defocus; this is a very useful feature, as it allows one to tune the image contrast without significant degradation of resolution. This is an attractive technique, and phase images are also presented in [42]. The negative values on the grayscale arise from post processing involving subtraction of the mean value. The point sample scanning, however, makes image acquisition slow. Figure 26 shows wide-field SP images of live 3T3 cells; the field enhancement (property 2 above) leads to excellent signal contrast, but the propagation of the SPs has degraded the lateral resolution. Moreover, imaging live cells is more demanding than imaging fixed cells, since in air the SPs are excited at incident angles around 45° compared to over 70° in an aqueous environment. Of course, the wide-field speckle-based system discussed in section “Wide-Field Confocal SP Imaging” gives the merits of wide-field imaging with the resolution advantages of the confocal/interferometric approach. The method has not, to our knowledge, been used for wide-field imaging of either live or fixed cells, possibly because there are simpler methods to obtain comparable results using evanescent waves. If we examine the system for wide-field SP imaging shown in Fig. 13, a simple modification allows it to be used for total internal reflection microscopy (TIRM), which differs from the well-known total internal reflection fluorescence (TIRF) in that this method relies on the light scattered out of the evanescent field back through the


Fig. 25 Scanning surface plasmon images obtained from scanning heterodyne interferometry of fixed IMR90 fibroblasts for different defocus positions in radial polarization. Top figure (a) shows the V(z) curve at three different sample positions indicated by markers on figure (b). The different defocus positions alter image contrast without significantly affecting image resolution, (b) in focus (c) defocus 0.4 μm, (d) 0.8 μm, and (e) 1.2 μm (sample above focal plane) (Reprinted with permission from Optics Express)


Fig. 26 Images of live 6-day-old 3T3 cells. Left: wide-field surface plasmon image; right: corresponding transmission bright-field image. SP image width 120 by 90 μm; Nikon 60×/1.49 objective, 633 nm. Imaging live cells requires exciting SPs in aqueous media at high incident angle. Although strongly indicative of cell attachment, the resolution is affected by SP propagation. In the left image, the SPs propagate in the vertical direction (Courtesy Jing Zhang, Nottingham University)

optical system. To implement this, one simply uses a coverslip without a metal coating, and the incident angles are slightly different: in the TIRM case, we require the illumination to be predominantly incident just above the critical angle, rather than at the SP angle used in wide-field SP microscopy. Since the incident angle is lower than in the SP case, the penetration depth is larger; furthermore, the field enhancement, although significant, is smaller than for SP excitation. These factors are both disadvantages; on the other hand, the fact that there is little lateral movement of energy means that special measures do not need to be taken to eliminate the effect of wave propagation along the substrate. Figure 27 shows a comparison of a TIRM image with a bright-field image of live differentiating mouse neuronal cells. The lateral resolution is diffraction limited, and the path of the structure emerging from the cell body can be tracked. For instance, the processes “a” and “b” look essentially the same in the bright-field image, but the contrast is greatly reduced in the region marked “b” compared to “a”; this lower contrast indicates that the process emerging from the cell body is moving out of the evanescent field. This method is now being used to actively study stem cell differentiation processes. TIRM can also complement other techniques such as phase contrast microscopy and TIRF. For instance, in a combined TIRM/TIRF system, 3T3 fibroblast cells were genetically modified using standard molecular biology protocols to express the fluorescent fusion protein EGFP-clathrin LCa (enhanced green fluorescent protein clathrin light chain). Colloidal particle uptake was observed in TIRM, and passage of the particles through the cell membrane could be monitored as they left the evanescent field.
The relationship between the TIRM signal and the fluorescence signal measured in the TIRF channel gave insight into the involvement of the clathrin-mediated endocytosis process, which was found to depend on the particle size [52]. For particle sizes around 1 μm diameter, the TIRM and TIRF images were not correlated, whereas for


M.G. Somekh and S. Pechprasarn

Fig. 27 Comparison of total internal reflection microscopy (right) and bright-field transmission (left) images of differentiating C17.2 mouse neuronal stem cells. Image size 120 × 100 μm. Objective lens Nikon 60×, NA 1.49, wavelength 660 nm. Note that dark regions indicate strong cell attachment (a), and low-contrast regions (b) show where the process detaches from the substrate (the changes are not visible in the bright-field image). The limited wave propagation in TIR leads to sharper images compared to SP microscopy (Courtesy Jing Zhang, University of Nottingham)

the 500 nm particles, the signals from the TIRM and TIRF channels were correlated in both space and time, indicating clathrin-mediated endocytosis for this particle size.

One of the most important areas where SP methods are superior to simple total internal reflection is when we wish to determine changes in refractive index or the presence of a thin layer whose index differs from the background. In this situation, properties 3 and 4 above are beneficial. In other words, the wavevector of the propagating SP needs to change with the deposited analyte, and a steep amplitude or phase transition around the angle for efficient excitation of SPs means there is a larger signal change. As mentioned above, the steepness of the transition is a measure of the propagation length of the SP, so a longer propagation length is normally associated with greater sensitivity; this makes intuitive sense, since it implies a greater interaction length with the sample. If, however, we want to measure the change in refractive index over a small localized region, as depicted in Fig. 25b, the long propagation length is a liability, since most of the wave does not interact with the region of interest; moreover, the long propagation length implies poor spatial resolution [41, 53], and there will be unwanted interaction with adjacent regions. For the confocal system, where the measured path of the SPs is controlled by the optical system, a long propagation length is not necessary; in fact, it is beneficial to use thinner gold layers, say 35 nm [16], where the propagation length is shorter but the coupling to SPs is much stronger. This gives a better signal-to-noise ratio.

In summary, for the measurement of local scattering as depicted in Fig. 24a, alternative methods using simple evanescent waves are usually more convenient and do not suffer from the fact that surface waves, by their very nature, propagate laterally.
For local measurement of refractive index, SPs offer major advantages and long-term promise.
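The trade-offs above can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only: the gold permittivity at 633 nm and the indices are typical textbook values, not figures taken from this chapter. It estimates the SP propagation length and penetration depth from the standard plane-interface dispersion relation and compares the penetration with that of simple TIR just above the critical angle.

```python
import cmath
import math

def sp_mode(wavelength_um, eps_metal, n_dielectric):
    """Plane-interface SP: complex in-plane wavevector, 1/e intensity
    propagation length, and 1/e field penetration depth into the
    dielectric (all lengths in micrometers)."""
    k0 = 2 * math.pi / wavelength_um
    eps_d = n_dielectric ** 2
    k_sp = k0 * cmath.sqrt(eps_metal * eps_d / (eps_metal + eps_d))
    prop_length = 1.0 / (2.0 * k_sp.imag)         # intensity ~ exp(-2 Im(k_sp) x)
    kz = cmath.sqrt(eps_d * k0 ** 2 - k_sp ** 2)  # transverse component in dielectric
    penetration = 1.0 / abs(kz.imag)              # evanescent 1/e field depth
    return k_sp, prop_length, penetration

def tir_penetration(wavelength_um, n1, n2, theta_deg):
    """1/e field depth of a TIR evanescent wave beyond the critical angle."""
    k0 = 2 * math.pi / wavelength_um
    kx = k0 * n1 * math.sin(math.radians(theta_deg))
    return 1.0 / math.sqrt(kx ** 2 - (k0 * n2) ** 2)

# Illustrative numbers: gold at 633 nm (eps ~ -11.8 + 1.2j), water ambient
# (n = 1.33), glass substrate (n = 1.52).
k_sp, L_sp, d_sp = sp_mode(0.633, -11.8 + 1.2j, 1.33)
theta_c = math.degrees(math.asin(1.33 / 1.52))
d_tir = tir_penetration(0.633, 1.52, 1.33, theta_c + 1.0)
```

With these numbers the SP is bound (effective index around 1.44, i.e., an excitation angle well above the critical angle), propagates for a few microns, and penetrates roughly 180 nm into the water, whereas TIR illumination 1° above the critical angle penetrates several times further. The micron-scale propagation length is exactly the lateral smearing identified above as the resolution and localization limit of wide-field SP imaging.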

17

Surface Plasmon, Surface Wave, and Enhanced Evanescent Wave Microscopy


Exotic Methods

The methods discussed here have concentrated on obtaining resolution approaching the diffraction limit with surface waves; we will briefly discuss two methods which potentially break the normal diffraction limit by (i) utilizing evanescent waves in image formation and (ii) using surface waves with extraordinarily small wavelength so that the diffraction limit becomes extremely small. Both methods are in their infancy and a long way from being practical imaging techniques in biomedical research; however, their potential possibly indicates a route to spatially resolved imaging and sensing of biomolecules and is therefore worth mentioning.

The first method involves the use of the "Pendry" lens [54]. The essential idea is that if both the permittivity and permeability can be made negative, then a material has a negative refractive index. The result is that evanescent waves can be amplified, so that "perfect" image reconstruction is possible. Consider a sample with very fine features; the light scattered from it will consist of propagating waves and evanescent waves, the latter decaying as they propagate through "free" space. A slab of negative-index material will regenerate the evanescent waves so that, in principle at least, the original spectrum is recovered in the image plane. In many cases it is not necessary to achieve a true negative refractive index: Pendry showed that in the near field the electric and magnetic fields are decoupled, so that only the permittivity needs to be considered; hence plasmonic materials such as silver, whose permittivity is negative, can serve as an (imperfect) "Pendry" lens. Fang et al. [55] performed an elegant if somewhat contrived experiment in which a finely featured structure was imaged through a 35 nm layer of PMMA and a 35 nm slab of silver. The image was projected onto photoresist, which was read with an atomic force microscope.
Alternatively, the image can be read directly using a near-field optical microscope. These results showed that the image resolution was considerably enhanced compared to free-space propagation, indicating that the plasmon structure had enhanced the resolution. The problem with the simple Pendry lens as originally conceived is that it does not perform magnification, so the readout problem is merely transferred from the sample to the image. Two principal approaches have been discussed in the recent literature to address this problem, both of which rely heavily on plasmonic properties to obtain the necessary permittivities. The first, the so-called hyperlens, is composed of cylindrical multilayers of dielectric and metal that produce a hyperbolic dispersion profile; this acts as a spatial frequency transformer, essentially compressing (magnifying) the spatial frequencies emitted from the object as the waves propagate radially through the material [56, 57]. The next stage of development is the so-called metalens, which rather than simply scaling spatial frequencies has the property of phase compensation analogous to a conventional lens. Such a metalens can focus plane waves and obeys a modified lens formula. These metalenses behave differently for waves entering in different directions, again opening the way for intriguing possibilities. Pendry's paper has certainly galvanized the plasmonic community to develop ways to exploit imaging with evanescent waves.


Another approach involves genuine far-field imaging with SPs [58, 59]. The method uses a small drop of glycerine placed on a gold-coated prism. The sample to be examined is positioned on the gold under the droplet. At first sight this might appear rather like a Kretschmann configuration; it is, however, very different. A key concept behind the experiment is that the k-vector of the SP becomes extremely large when the dielectric constant of the metal is equal and opposite to that of the dielectric (assuming no losses). Glycerine is used because it best meets this condition. The illuminating light is converted by the sample into a very short wavelength SP mode. The droplet then performs its second function, that is, acting as a mirror for the surface waves, thus forming a greatly magnified (if distorted) image of the sample. Since surface roughness in the sample scatters light back into free space, the image can then be viewed with a conventional objective, because the magnified image can be faithfully represented with propagating light. Lateral resolution consistent with the predicted SP wavelength of 70 nm has been observed.

Some Future Developments

One of the challenges in plasmonic imaging is to move away from highly expensive oil immersion objectives. One approach to solving this issue would be the development of customized plasmonic lenses. It should be remembered that high-NA objectives have been optimized to minimize aberration over the whole range of incident angles, while for plasmonic imaging, particularly in the defocused condition, we are primarily concerned only with angles close to θp, so far less expensive custom objectives could be produced with much more relaxed design conditions. Another approach is to remove the need for high-NA objectives altogether by using grating structures to increase the incident k-vector so that SPs can be excited at low angles of incidence from air.

Grating surfaces offer other exciting possibilities for low-NA excitation of plasmon modes. We mentioned that on plane surfaces the penetration depth of SPs depends only on the incident angle and the refractive index of the first and final medium, which means that the field penetration depth is confined to approximately 200 nm. To reduce the penetration depth further, we need to seek other wave modes. If we excite wave modes with large k-vectors, the penetration depth will be small compared to the usual SP (see Eq. 3). The problem, of course, is that even for SPs we are close to the limit where a propagating plane wave in glass can excite a wave on a plane substrate. If, however, a grating is incorporated into the sample excitation, it is possible to excite such a mode, although the wave propagation will be affected by the presence of the grating. The simplest approach is to use a plane structure periodically broken with narrow grooves, as depicted in Fig. 28. Figure 29 shows the zero-order reflectivity response of the structure when light is incident through a glass substrate (index 1.52).
Around 70° incident angle, the normal SP response is visible, albeit somewhat widened by the thinness of the gold and the variations induced by the grating. As the ambient index is increased, the position of the dip moves to larger incident angles, as is expected for a


Fig. 28 Schematic diagram of the grating structure: a plane surface periodically broken with narrow grooves

Fig. 29 Response to change in refractive index from the grating structure of Fig. 28. Blue curve: ambient refractive index 1.33; green curve: ambient refractive index 1.35. Note that the move for the a-plasmon (around 30°) is opposite to that of the conventional plasmon (around 70°)

conventional SP response. Around 30° there is a wide dip which corresponds to the excitation of another wave mode; the large width indicates that the wave has a short propagation length. In this case the change in the ambient refractive index moves the position of the dip toward smaller incident angles. This is a consequence of excitation by the grating, where we can express the wavevector, km, of the wave mode in terms of the incident wavevector, kinc (kg sin θ of Eq. 3), and the grating wavevector, kgrat; thus

km = kinc + nkgrat (8)

where n is the diffracted order. If the wave is excited by a negative order, an increase in |km| results from a decrease in kinc, provided kgrat > kinc. Calculations of the field distribution around the structure show that the field decay is considerably faster than for a conventional SP; for instance, the field around the middle of the gold region decays to 1/e in approximately 62 nm, and around the hot spots more quickly. Since several diffracted orders are involved in forming the resultant field distribution, the decay is not a single exponential, so the initial decay is more rapid than for a single wave mode; the penetration depth is a great deal smaller than for conventional SPs, which should lead to exciting opportunities to measure conformational changes in molecules. This mode appears to be related to higher-order plasmon modes such as


the a-plasmon [60], which predominates on thin films. Indeed, as the gap spacing is reduced, the k-vector approaches that of the a-plasmon. These grating structures provide penetration depths comparable to particle plasmons in a format that can be interrogated by variable-angle, single-wavelength measurements. Although considerable work is needed to optimize these structures, they illustrate the rich range of properties that can be accessed with structures supporting evanescent waves.
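A minimal numerical sketch of Eq. 8 and the decay argument follows. The grating period, incidence angle, wavelength, and indices used here are illustrative assumptions chosen only to show that a grating-boosted in-plane k-vector compresses the evanescent tail relative to a conventional SP (which comes out near the ~200 nm figure quoted above); they are not values taken from the figures.

```python
import math

def grating_mode_wavevector(wavelength_um, n_glass, theta_deg, period_um, order):
    """Eq. 8: k_m = k_inc + n * k_grat (in-plane wavevectors, 1/um)."""
    k0 = 2 * math.pi / wavelength_um
    k_inc = k0 * n_glass * math.sin(math.radians(theta_deg))
    k_grat = 2 * math.pi / period_um
    return k_inc + order * k_grat, k0

def penetration_depth(k_parallel, k0, n_ambient):
    """1/e field decay into the ambient for a mode with |k_parallel| > n*k0."""
    kappa = math.sqrt(k_parallel ** 2 - (n_ambient * k0) ** 2)
    return 1.0 / kappa

# Illustrative: 633 nm through glass (n = 1.52) at 30 deg incidence, a 250 nm
# period grating, excitation via the n = -1 order, water ambient (n = 1.33).
k_m, k0 = grating_mode_wavevector(0.633, 1.52, 30.0, 0.25, -1)
d_grating = penetration_depth(abs(k_m), k0, 1.33)

# Lossless estimate for a conventional SP on plane gold (eps ~ -11.8) in water,
# for comparison with the ~200 nm plane-surface figure.
n_eff_sp = math.sqrt(-11.8 * 1.33 ** 2 / (-11.8 + 1.33 ** 2))
d_sp = penetration_depth(n_eff_sp * k0, k0, 1.33)
```

With these assumed numbers the grating-coupled mode decays in under 100 nm, against roughly 180 nm for the plane-surface SP. In the real structure several diffracted orders superpose, so the initial decay is faster still, as noted in the text.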

Conclusion

Surface waves are an invaluable tool for sensor applications, giving simple, sensitive, robust, and label-free measurements. The large propagation length of SPs, while traditionally associated with high sensitivity, is also a limitation when trying to probe smaller areas, which will become increasingly important with the need to make dense sensor chips capable of measuring very small numbers of molecules. This chapter has looked at methods to harness the properties of surface waves and evanescent waves for localized measurements and has examined how different optical systems can optimize performance. The physics of SPs and evanescent waves provides exciting possibilities for new generations of exquisitely sensitive instruments; a major challenge is bridging the physics to the application.

References

1. Axelrod D (2007) Total internal reflection fluorescence microscopy. In: Optical imaging and microscopy. Springer, Berlin/Heidelberg, pp 195–236
2. Johnson PB, Christy RW (1972) Optical constants of the noble metals. Phys Rev B 6(12):4370–4379
3. Barnes WL, Dereux A, Ebbesen TW (2003) Surface plasmon subwavelength optics. Nature 424(6950):824–830
4. Homola J, Yee SS, Gauglitz G (1999) Surface plasmon resonance sensors: review. Sens Actuators B 54(1–2):3–15
5. Somekh M (2007) Surface plasmon and surface wave microscopy, Ch. 14. In: Optical imaging and microscopy. Springer series in optical sciences, vol 87. Springer, Berlin/Heidelberg/New York, pp 347–399
6. Kabashin AV, Patskovsky S, Grigorenko AN (2009) Phase and amplitude sensitivities in surface plasmon resonance bio and chemical sensing. Opt Express 17(23):21191–21204
7. Huang YH, Ho HP, Wu SY, Kong SK (2012) Detecting phase shifts in surface plasmon resonance: a review. Adv Opt Technol 2012:471952
8. Goh JYL, Somekh MG, See CW, Pitter MC, Vere KA, O'Shea P (2005) Two-photon fluorescence surface wave microscopy. J Microsc Oxford 220:168–175
9. Oheim M, Michael DJ, Geisbauer M, Madsen D, Chow RH (2006) Principles of two-photon excitation fluorescence microscopy and other nonlinear imaging approaches. Adv Drug Deliv Rev 58(7):788–808
10. Yeatman E, Ash EA (1987) Surface-plasmon microscopy. Electron Lett 23(20):1091–1092
11. Rothenhausler B, Knoll W (1988) Surface-plasmon microscopy. Nature 332(6165):615–617


12. Berger CEH, Kooyman RPH, Greve J (1994) Resolution in surface-plasmon microscopy. Rev Sci Instrum 65(9):2829–2836
13. Zhang J, Pitter MC, Liu S, See C, Somekh MG (2006) Surface-plasmon microscopy with a two-piece solid immersion lens: bright and dark fields. Appl Optics 45(31):7977–7986
14. Giebel KF, Bechinger C, Herminghaus S, Riedel M, Leiderer P, Weiland U et al (1999) Imaging of cell/substrate contacts of living cells with surface plasmon resonance microscopy. Biophys J 76(1):509–516
15. Moh KJ, Yuan XC, Bu J, Zhu SW, Gao BZ (2008) Surface plasmon resonance imaging of cell-substrate contacts with radially polarized beams. Opt Express 16(25):20734–20741
16. Berguiga L, Zhang S, Argoul F, Elezgaray J (2007) High-resolution surface-plasmon imaging in air and in water: V(z) curve and operating conditions. Opt Lett 32(5):509–511
17. ArcOptix. Radial polarizer plate. http://www.arcoptix.com/radial_polarization_converter.htm [cited 1 Feb 2014]
18. Kano H, Mizuguchi S, Kawata S (1998) Excitation of surface-plasmon polaritons by a focused laser beam. J Opt Soc Am B 15(4):1381–1386
19. Fanton JT, Opsal J, Willenborg DL, Kelso SM, Rosencwaig A (1993) Multiparameter measurements of thin-films using beam-profile reflectometry. J Appl Phys 73(11):7035–7040
20. See CW, Somekh MG, Holmes RD (1996) Scanning optical microellipsometer for pure surface profiling. Appl Optics 35(34):6663–6668
21. Shatalin SV, Juškaitis R, Tan JB, Wilson T (1995) Reflection conoscopy and microellipsometry of isotropic thin film structures. J Microsc 179(3):241–252
22. Kano H, Knoll W (1998) Locally excited surface-plasmon-polaritons for thickness measurement of LBK films. Opt Commun 153(4–6):235–239
23. Kano H, Knoll W (2000) A scanning microscope employing localized surface-plasmon-polaritons as a sensing probe. Elsevier, Amsterdam. 11-5 p
24. Watanabe K, Miyazaki R, Terakado G, Okazaki T, Morigaki K, Kano H (2012) Localized surface plasmon microscopy of submicron domain structures of mixed lipid bilayers. Biomed Opt Express 3(9):2012–2020
25. Tanaka T, Yamamoto S (2003) Laser-scanning surface plasmon polariton resonance microscopy with multiple photodetectors. Appl Optics 42(19):4002–4007
26. Zhang CL, Wang R, Wang YJ, Zhu SW, Min CJ, Yuan XC (2014) Phase-stepping technique for highly sensitive microscopic surface plasmon resonance biosensor. Appl Optics 53(5):836–840
27. Ho HP, Lam WW (2003) Application of differential phase measurement technique to surface plasmon resonance sensors. Sens Actuators B 96(3):554–559
28. Ho HP, Law WC, Wu SY, Lin C, Kong SK (2005) Real-time optical biosensor based on differential phase measurement of surface plasmon resonance. Biosens Bioelectron 20(10):2177–2180
29. Ho HP, Law WC, Wu SY, Liu XH, Wong SP, Lin C et al (2006) Phase-sensitive surface plasmon resonance biosensor using the photoelastic modulation technique. Sens Actuators B 114(1):80–84
30. Ho HP, Yuan W, Wong CL, Wu SY, Suen YK, Kong SK et al (2007) Sensitivity enhancement based on application of multi-pass interferometry in phase-sensitive surface plasmon resonance biosensor. Opt Commun 275(2):491–496
31. Wong CL, Ho HP, Suen YK, Kong SK, Chen QL, Yuan W et al (2008) Real-time protein biosensor arrays based on surface plasmon resonance differential phase imaging. Biosens Bioelectron 24(4):606–612
32. Stabler G, Somekh MG, See CW (2004) High-resolution wide-field surface plasmon microscopy. J Microsc Oxford 214:328–333
33. Jamil MMA, Denyer MCT, Youseffi M, Britland ST, Liu S, See CW et al (2008) Imaging of the cell surface interface using objective coupled widefield surface plasmon microscopy. J Struct Biol 164(1):75–80
34. Huang B, Yu F, Zare RN (2007) Surface plasmon resonance imaging using a high numerical aperture microscope objective. Anal Chem 79(7):2979–2983


35. Tan HM, Pechprasarn S, Zhang J, Pitter MC, Somekh MG (2016) High resolution quantitative angle-scanning widefield surface plasmon microscopy. Sci Rep 6:20195
36. He RY, Lin CY, Su YD, Chiu KC, Chang NS, Wu HL et al (2010) Imaging live cell membranes via surface plasmon-enhanced fluorescence and phase microscopy. Opt Express 18(4):3649–3659
37. Chen W, Long KD, Lu M, Chaudhery V, Yu H, Choi JS et al (2013) Photonic crystal enhanced microscopy for imaging of live cell adhesion. Analyst 138(20):5886–5894
38. Somekh MG, See CW, Goh J (2000) Wide field amplitude and phase confocal microscope with speckle illumination. Opt Commun 174(1–4):75–80
39. Somekh MG, Liu SG, Velinov TS, See CW (2000) High-resolution scanning surface-plasmon microscopy. Appl Optics 39(34):6279–6287
40. Somekh MG, Liu SG, Velinov TS, See CW (2000) Optical V(z) for high-resolution 2π surface plasmon microscopy. Opt Lett 25(11):823–825
41. Pechprasarn S, Somekh MG (2012) Surface plasmon microscopy: resolution, sensitivity and crosstalk. J Microsc 246(3):287–297
42. Argoul F, Monier K, Roland T, Elezgaray J, Berguiga L (2010) High resolution surface plasmon microscopy for cell imaging. In: Popp J, Tuchin VV, Matthews DL (eds) Biophotonics: photonic solutions for better health care II. SPIE, Brussels
43. Berguiga L, Roland T, Monier K, Elezgaray J, Argoul F (2011) Amplitude and phase images of cellular structures with a scanning surface plasmon microscope. Opt Express 19(7):6571–6586
44. Zhang B, Pechprasarn S, Zhang J, Somekh MG (2012) Confocal surface plasmon microscopy with pupil function engineering. Opt Express 20(7):7388–7397
45. Zhang B, Pechprasarn S, Somekh MG (2013) Quantitative plasmonic measurements using embedded phase stepping confocal interferometry. Opt Express 21(9):11523–11535
46. Pechprasarn S, Somekh MG (2014) Detection limits of confocal surface plasmon microscopy. Biomed Opt Express 5(6):1744–1756
47. Pechprasarn S, Zhang B, Albutt D, Zhang J, Somekh M (2014) Ultrastable embedded surface plasmon confocal interferometry. Light Sci Appl 3:e187
48. Berguiga L, Roland T, Fahys A, Elezgaray J, Argoul F (2010) High resolution surface plasmon imaging of nanoparticles. In: Andrews DL, Nunzi JM, Ostendorf A (eds) Nanophotonics III. SPIE Conference Volume 7712, Brussels
49. Abdul Jamil MM, Youseffi M, Britland ST, Liu S, See CW, Somekh MG et al (2006) Widefield surface plasmon resonance microscope: a novel biosensor study of cell attachment to micropatterned substrates. In: Ibrahim F, Abu Osman NA, Usman J, Kadri NA (eds) 3rd Kuala Lumpur international conference on biomedical engineering 2006, Kuala Lumpur, pp 334–337
50. Elezgaray J, Roland T, Berguiga L, Argoul F (2010) Modeling of the scanning surface plasmon microscope. J Opt Soc Am A Opt Image Sci Vis 27(3):450–457
51. Roland T, Berguiga L, Elezgaray J, Argoul F (2010) Scanning surface plasmon imaging of nanoparticles. Phys Rev B 81(23):235419
52. Byrne GD, Vllasaliu D, Falcone FH, Somekh MG, Stolnik S (2015) Live imaging of cellular internalization of single colloidal particle by combined label-free and fluorescence total internal reflection microscopy. Mol Pharm 12:3862–3870
53. Yeatman EM (1996) Resolution and sensitivity in surface plasmon microscopy and sensing. Biosens Bioelectron 11(6–7):635–649
54. Pendry JB (2000) Negative refraction makes a perfect lens. Phys Rev Lett 85(18):3966–3969
55. Fang N, Lee H, Sun C, Zhang X (2005) Sub-diffraction-limited optical imaging with a silver superlens. Science 308(5721):534–537
56. Lu D, Liu Z (2012) Hyperlenses and metalenses for far-field super-resolution imaging. Nat Commun 3:1205. doi:10.1038/ncomms2176
57. Ma C, Van Keuren E (2013) Toward conventional-optical-lens-like superlenses. Nano Bull 2(1):130105


58. Smolyaninov II, Elliott J, Zayats AV, Davis CC (2005) Far-field optical microscopy with a nanometer-scale resolution based on the in-plane image magnification by surface plasmon polaritons. Phys Rev Lett 94(5):057401
59. Smolyaninov II, Davis CC, Elliott J, Zayats AV (2005) Resolution enhancement of a surface immersion microscope near the plasmon resonance. Opt Lett 30(4):382–384
60. Burke JJ, Stegeman GI, Tamir T (1986) Surface-polariton-like waves guided by thin, lossy metal films. Phys Rev B 33(8):5186–5201

Surface Plasmon-Enhanced Super-Localization Microscopy

18

Youngjin Oh, Jong-ryul Choi, Wonju Lee, and Donghyun Kim

Contents
Introduction ........................................................... 546
Theoretical Background ................................................. 551
Conventional Microscopy Techniques Based on SP ......................... 553
  SPM and SPR Imaging .................................................. 553
  TIRFM ................................................................ 558
  Plasmon-Enhanced Microscopy .......................................... 561
Plasmon-Enhanced Super-Localization Microscopy ......................... 562
  Numerical Calculation of Near-Field Localization ..................... 564
  Field Enhancement Effect ............................................. 565
  Super-Resolution Microscopy .......................................... 565
Summary ................................................................ 578
References ............................................................. 579

Abstract

Super-resolution microscopy has drawn tremendous interest because it allows precise tracking of molecular interactions and observation of dynamics on a nanometer scale. Intracellular and extracellular processes can be measured at the molecular level; thus, super-resolution techniques help in the understanding of biomolecular events in cellular and sub-cellular conditions and have been applied to many areas such as cellular and molecular analysis and ex vivo and in vivo observation.

Y. Oh (*) • W. Lee • D. Kim (*)
School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
e-mail: [email protected]
J.-r. Choi
School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
Medical Device Development Center, Daegu-Gyeongbuk Medical Innovation Foundation (DGMIF), Daegu, South Korea
© Springer Science+Business Media Dordrecht 2017
A.H.-P. Ho et al. (eds.), Handbook of Photonics for Biomedical Engineering, DOI 10.1007/978-94-007-5052-4_4



In this chapter, we review near-field imaging that relies on evanescent waves, such as TIRFM, with an emphasis on super-resolution microscopy techniques that have emerged recently based on excitation and localization of SPs. In particular, three approaches are detailed: firstly, SUPRA imaging, which employs the electromagnetic localization of near-fields by random nanopatterns; secondly, NLS, which capitalizes on nanoscale fluorescence sampling at periodic nanoapertures; and finally, PSALM, which depends on temporal switching of amplified local fields for enhancement of imaging resolution. The resolution typically achieved by these techniques is laterally below 100 nm and closely related to the size of a near-field hot spot and the nanostructures used to localize SPs. We expect the achievable imaging resolution to decrease significantly in the near future.

Keywords
Surface plasmon • Localization • Fluorescence • Microscopy • Super-resolution

Abbreviations
AFM  Atomic force microscopy
FDTD  Finite-difference time domain
FWHM  Full width at half maximum
NA  Numerical aperture
NLS  Nanoscale localization sampling
PALM  Photo-activated localization microscopy
PSALM  Plasmonics-based spatially activated light microscopy
PSF  Point-spread function
PSIM  Plasmonic structured illumination microscopy
RCWA  Rigorous coupled-wave analysis
SEM  Scanning electron microscopy
SIM  Structured illumination microscopy
SP  Surface plasmon
SPM  Surface plasmon microscopy
SPP  Surface plasmon polariton
SPR  Surface plasmon resonance
SPRi  Surface plasmon resonance imaging
STED  Stimulated emission depletion
STORM  Stochastic optical reconstruction microscopy
SUPRA  SP-enhanced random activation
TIR  Total internal reflection
TIRF  Total internal reflection fluorescence
TIRFM  Total internal reflection fluorescence microscopy

Introduction

Human visual acuity is limited to about 100 μm owing to the finite NA of the human eye. A microscope is an instrument which allows visualization of objects that are too small to see with the naked eye, such as cells, viruses, metal grains, and even molecules.


Fig. 1 Microscopy techniques and resolution

The first optical microscope was developed around the 1590s by two Dutch eyeglass makers, Hans Lippershey and Zacharias Janssen. In 1625, the term "microscope" was coined for Galilei's compound optical microscope [1]. Following the analyses of biological samples by Robert Hooke and Antonie van Leeuwenhoek, commercial manufacturing of optical microscopes started in the nineteenth century. Since then, diverse microscopy and imaging methods have been developed.

Microscopy contributes to the progress of many scientific fields. The fields influenced most by the development of microscopy may be the biomedical sciences, because biological objects, e.g., viruses, that in the past either went unnoticed or were regarded as impossible to observe have come to the attention of researchers by way of various microscopy techniques [2, 3]. For example, electron microscopy, such as SEM and transmission electron microscopy (TEM) [4], allows much better resolution than optical microscopy, as shown in Fig. 1. Through electron microscopy, magnification by more than a million times has been realized, with a resolution on the order of a few nanometers [5, 6]. However, electron microscopy techniques have limited applicability and are difficult to use because of the environmental constraints imposed by the measurement process, e.g., the need for vacuum and high voltage. These limitations can be critical in biological and biomedical engineering, where cells and tissues are typically maintained in a humid incubator in liquid buffer or gel forms if they are to be measured alive. In contrast, optical microscopy allows live observation of biological objects, although at reduced imaging resolution. To satisfy various needs and requirements, numerous optical microscopy techniques have been developed, with various functions serving different types of samples.
For optical microscopy, fluorescence is widely used because it allows functional imaging on a potentially molecular scale by labeling with appropriate fluorescent molecules; e.g., specific cellular components may be observed through molecule-specific labeling. Also, combining fluorescence with suitable light microscopy provides contrast sufficient to observe structures inside a live sample in real time. Fluorescence is luminescence from a fluorescent substance that absorbs light and emits, or fluoresces, at a longer wavelength (lower energy), i.e., λex < λem, as shown in the Jablonski diagram of Fig. 2 [7]. Note that photon energy E and wavelength λ are related by E = hc/λ, where h is the Planck constant and c represents the speed of light in vacuum. Excitation and emission


Fig. 2 Jablonski diagram: after a fluorescent molecule absorbs a high-energy photon (λex), it is excited. The system relaxes non-radiatively and eventually returns to the ground state, emitting a photon at a longer wavelength (λem)

wavelengths depend on the energy level structure of the fluorescent molecules, or fluorophores. When a fluorophore absorbs incident light energy, it enters an excited state and loses energy to the environment by non-radiative processes until releasing the remaining energy radiatively as fluorescence. Since the emitted photon energy is lower than the incident photon energy, fluorescent light is always redshifted compared to the excitation light, producing the Stokes shift. Absorption and emission may take place between multiple sublevels within the ground and excited states. As a result, absorption and emission do not occur at discrete wavelengths but instead span a continuous spectral range. A larger Stokes shift makes it easier to separate emitted fluorescence from excitation light in fluorescence microscopy. Fluorescence is excited largely using organic dye molecules, while inorganic materials such as quantum dots are increasingly popular. The extensive availability of fluorescent materials makes fluorescent imaging techniques highly useful not just in biological imaging but also for samples like drugs and vitamins, widening the scope of fluorescence microscopy even further.

As stated earlier, optical microscopy has limited imaging resolution. The lateral resolution of an optical imaging system can be defined as the ability to resolve two adjacent self-luminous points located in the lateral plane. When the two points are too close, their images form a continuous intensity distribution and cannot be distinguished. As a numerical measure of the resolution, the Rayleigh criterion defines the resolution achievable by an imaging system as the distance between two points at which the peak of the image arising from one point coincides with the first minimum of the image arising from the adjacent point object [8]. In this case, in the absence of aberration effects, the lateral image resolution d is determined as

d = 0.61λ/(n sin u) = 0.61λ/NA    (1)

and the imaging system is referred to as diffraction limited. Here, n and NA represent the refractive index and the numerical aperture of an objective lens, and u is the half-angle of the light cone at the focal point subtended by the lens. Equation 1 suggests that the lateral resolution of an optical system improves as NA increases or as the wavelength λ decreases. For example, for λ = 488 nm (blue) and an oil-immersion lens with NA = 1.6, a microscope can optically resolve points separated by about 200 nm. When the intensity dip between two adjacent self-luminous points just vanishes, the Sparrow resolution limit is defined such that

d = 0.5λ/NA    (2)

On the other hand, the Abbe resolution is associated with diffraction caused both by the objective and by the object itself, according to which the images of two adjacent points with spacing d can be resolved if the nearest diffraction orders of the points are captured by the objective [9]. The resolution therefore depends on both the imaging and illumination apertures and is given by

d = λ/(NAobjective + NAcondenser)    (3)

Unaided human eyes have an angular resolving power of approximately 1.5 arc minutes. For unmagnified objects at a distance of 250 mm from the eye (the eye's minimum focal distance), 1.5 arc minutes converts to a resolution of deye = 100 μm. A microscope objective alone does not usually provide sufficient magnification. When combined with an ocular, or eyepiece, the resolving power of a microscope is then given by

d = deye/M = deye/(Mobj Meyepiece)    (4)

Mobj and Meyepiece are the magnifications provided by the objective and the eyepiece, respectively, and contribute to the overall magnification M in series. In the Sparrow limit of the lateral resolution, the minimum microscope magnification is given by

Mmin = 2 deye NA/λ    (5)
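Equations 1, 2, 3, and 5 are simple enough to check numerically. The sketch below plugs in the chapter's example values (λ = 488 nm, oil-immersion NA = 1.6); the helper names are ours, not from the text.

```python
import math

def rayleigh(lam_nm, na):
    """Eq. 1: lateral resolution by the Rayleigh criterion [nm]."""
    return 0.61 * lam_nm / na

def sparrow(lam_nm, na):
    """Eq. 2: lateral resolution by the Sparrow criterion [nm]."""
    return 0.5 * lam_nm / na

def abbe(lam_nm, na_objective, na_condenser):
    """Eq. 3: Abbe resolution with imaging and illumination apertures [nm]."""
    return lam_nm / (na_objective + na_condenser)

def min_magnification(lam_nm, na, d_eye_nm=100e3):
    """Eq. 5: minimum magnification to reach the Sparrow limit,
    with d_eye = 100 um for the unaided eye at 250 mm."""
    return 2.0 * d_eye_nm * na / lam_nm

lam, na = 488.0, 1.6  # blue excitation, oil-immersion objective (text example)
print(f"Rayleigh d = {rayleigh(lam, na):.0f} nm")   # ~186 nm, i.e. roughly 200 nm
print(f"Sparrow  d = {sparrow(lam, na):.0f} nm")
print(f"Abbe     d = {abbe(lam, na, na):.0f} nm")   # matched condenser aperture
print(f"M_min      = {min_magnification(lam, na):.0f}")
```

Note that Mmin evaluates to about 656 here, consistent with the 2deye/λ ≈ 250–500 per unit NA quoted below.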

Mmin is the magnification required of an imaging system to reach the diffraction limit and is calculated from Eq. 5 as roughly 250–500 NA (depending on wavelength). At lower magnification, an image becomes brighter owing to a larger field of view, though at a worse resolution. Contrary to the common misconception that a combination of objective and eyepiece with higher magnification always provides better resolution, resolution does not improve in proportion to the magnification once it exceeds roughly 1,000 NA, due to sampling limits. A higher magnification then results in image degradation rather than improved image clarity. The useful magnification of a microscope lies in the range of 500 NA to 1,000 NA; any magnification above 1,000 NA is called empty magnification [10], i.e., the highest useful magnification is approximately 1,500 for an oil-immersion microscope objective with NA = 1.5.

To break through the resolution limit of optical microscopy, numerous imaging techniques have been attempted. These techniques are broadly termed super-resolution microscopy. One of the more conventional super-resolution techniques is TIRFM, based on evanescent waves. Fluorophores acting as contrast agents are excited only within the penetration depth of an evanescent wave, which ranges from 100 to 200 nm. This provides a high axial resolution below the diffraction limit in an extremely simple way, while the technique remains diffraction-limited laterally [11, 12]. Thus, many super-resolution techniques are in fact built upon TIRFM. More details of TIRFM are described in the next section. Emerging super-resolution imaging techniques include STED microscopy, which reduces the PSF of incident light to 20 nm or smaller for improved resolution [13–16]. The resolution through this technique improves by 5–10 times over the diffraction limit. STED microscopy is based on the depletion of high-energy states at the excited spot when treated with a pulsed optical signal. STED microscopy was used to visualize the glycoprotein distribution on the surface of individual viruses [17] and to image live brain tissue at depths of 30 μm with 60-nm resolution for in vivo analysis [18]. Also characterized were transverse tubules (TTs) with nanometric resolution for investigation of TT remodeling during heart failure [19].
In contrast to STED microscopy, PALM and STORM rely upon repeated stochastic photoactivation of fluorescent molecules [20–22]. Measured images are stacked up through subsequent image post-processing. STORM was used to observe HIV viral infection [23], where the molecular distribution of structural proteins was quantified before and after infection of cells. Live motion of viruses was also analyzed using STORM for molecular tracking and measurement of viral activity, and STORM revealed the structure of the pericentriolar material, an amorphous subcellular protein structure [24]. A drastically different approach to super-resolution microscopy was introduced as SIM, a wide-field technique in which patterned illumination, generated by a diffraction grating or a spatial light modulator, is superimposed on the sample while acquiring images [25–28]. The sinusoidally patterned illumination is shifted and rotated during acquisition of each image set. Through post-processing with a reconstruction algorithm designed around the imposed illumination pattern, high-spatial-frequency information can be recovered from the raw image set. Therefore, reconstructed images have enhanced lateral and axial resolution compared to images obtained by conventional wide-field microscopy. Since its first introduction [29], saturated-excitation-based SIM (SSIM) has been developed and employed in various in vitro and in vivo biomedical imaging applications [30–32]. Recently, the feasibility of integrating SIM into lab-on-a-chip applications was also reported [33].

18 Surface Plasmon-Enhanced Super-Localization Microscopy

These super-resolution techniques are not perfect. For example, STED microscopy typically requires a light source under pulsed operation, although continuous-wave light sources have been employed [34], and it is slow due to the need for scanning. PALM and STORM also take a long time for image acquisition because of their stochastic fluorescence excitation and thus are not well suited to observing fast dynamics. In this chapter, we present various approaches of plasmon-enhanced microscopy that can potentially resolve the issues raised by the super-resolution techniques described above. For this goal, section “Theoretical Background” describes the background needed to understand SP and SP-enhanced microscopy. Section “Conventional Microscopy Techniques Based on SP” describes SP-related microscopy techniques such as SPM, SPRi, TIRFM, plasmon-enhanced microscopy, and SP-enhanced imaging. Section “Plasmon-Enhanced Super-Localization Microscopy” details SP-enhanced super-localization techniques for microscopy below the diffraction limit, followed by recently emerging super-resolution techniques. Finally, section “Summary” concludes the chapter.

Theoretical Background

SP refers to coherent electron density oscillations formed at the interface between dielectric and metallic materials under the TIR condition. If the real part of the metal permittivity is negative and its magnitude is larger than that of the dielectric permittivity, a phase shift of π is produced at the interface in conjunction with the excitation of SP, leading to longitudinal electron concentration waves, as illustrated in Fig. 3. SP is excited by p-polarized incident light, i.e., light whose electric field oscillates in the plane of incidence [35]. Coupled with incident light and guided along the metal-dielectric interface, SP creates an electromagnetic wave, the SPP. The SPP wavelength is shorter than that of the incident light, and thus SPPs can provide significant field localization and spatial confinement [36]. The dispersion relation of SP can be calculated from Maxwell's equations with appropriate boundary conditions. When SP travels in the x-direction parallel to the surface, the wave vectors of an electromagnetic wave at the interface satisfy

kxd = kxm    (x-direction)    (6)

εm kzd = εd kzm    (z-direction)    (7)

Fig. 3 Illustration of SPP formation and axial geometry. ns and nd represent the refractive indices of the substrate and the dielectric ambience

where kd and km represent the wave vectors on the dielectric and metal sides, εd and εm are the dielectric and metal permittivities, respectively, and ω is the angular frequency of light. Substituting Eqs. 6 and 7 into the wave equation leads to

(kxd)² + (kzd)² = εd (ω/c)²    (8)

(kxm)² + (kzm)² = εm (ω/c)²    (9)

The dispersion relation between the SP momentum (ksp) and the incident light energy is obtained by rearranging Eqs. 8 and 9 as

ksp = (ω/c) √(εm εd/(εm + εd))    (10)
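Equation 10 can be evaluated directly for a concrete interface. The sketch below assumes a gold or silver film in air probed at 633 nm with commonly tabulated permittivities (ε ≈ −11.7 + 1.2i for gold, ε ≈ −18.3 + 0.5i for silver) and a BK7 prism (n = 1.515); these material values are illustrative assumptions, not from the text. The real part of ksp sets the prism coupling angle, while the imaginary part sets the SP propagation length discussed below.

```python
import cmath
import math

def k_sp(lam_nm, eps_m, eps_d):
    """Eq. 10: complex SPP wave vector [1/m] at a metal-dielectric interface."""
    k0 = 2.0 * math.pi / (lam_nm * 1e-9)  # free-space wave number
    return k0 * cmath.sqrt(eps_m * eps_d / (eps_m + eps_d))

lam = 633.0                # HeNe wavelength [nm] (assumption)
eps_gold = -11.7 + 1.2j    # assumed tabulated gold permittivity at 633 nm
eps_silver = -18.3 + 0.5j  # assumed tabulated silver permittivity at 633 nm
eps_air = 1.0
n_prism = 1.515            # BK7 glass

k0 = 2.0 * math.pi / (lam * 1e-9)
n_eff = k_sp(lam, eps_gold, eps_air).real / k0   # effective SPP index
# Momentum matching n_prism*sin(theta) = n_eff gives the SPR coupling angle:
theta_spr = math.degrees(math.asin(n_eff / n_prism))

# The imaginary part sets the SP propagation length L = 1/k''_sp; silver
# supports longer-range (and hence more smeared) SPPs than gold:
L_gold = 1.0 / k_sp(lam, eps_gold, eps_air).imag
L_silver = 1.0 / k_sp(lam, eps_silver, eps_air).imag

print(f"n_eff = {n_eff:.3f}, SPR coupling angle = {theta_spr:.1f} deg")
print(f"L(gold) = {L_gold*1e6:.0f} um, L(silver) = {L_silver*1e6:.0f} um")
```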

Figure 4 shows the dispersion relation of SP as well as those of light waves in air and glass. SP is excited when the incident light is momentum-matched to the SP; in other words, excitation of SP occurs at the frequency at which the dispersion relations of SP and incident light coincide. At the frequency of the horizontal asymptote (ωsp), the real part ε′m equals −εd, to which a very high SP momentum (ksp) corresponds. The dispersion relation of SP does not cross that of light in air, while light in a high refractive index medium crosses the dispersion relation of SP, giving rise to SPR [37].

Fig. 4 Dispersion relation of SPs and incident light

Fig. 5 Evanescent field amplitude in the dielectric medium. E0 refers to the amplitude at the surface (z = 0). Penetration depth p corresponds to the axial distance where the field amplitude is equal to E0/e

The dependence of SPR on the medium permittivity lays the basis for SP sensing [38–41]. SP propagates along the interface and decays laterally with a propagation length L given by

L = 1/k″sp    (11)

where k″sp represents the imaginary part of the SP wave vector at the metal-dielectric interface. Note that SPR occurs under TIR, and the plasmonic dipoles formed at the interface produce a shallow evanescent wave, a field whose amplitude decays exponentially with distance from the surface, as shown in Fig. 5. That is, the field amplitude of an evanescent wave produced under TIR takes the following form:

E(z) = E0 e^(−z/p)    (12)

where E0 is the field amplitude at the surface. The penetration depth p is expressed as

p = (λ0/4π)(ns² sin²θ − nd²)^(−1/2)    (13)

λ0 is the wavelength of the incident light in vacuum, and ns and nd are the refractive indices of the dielectric substrate and the ambient medium. The penetration depth is independent of light polarization and decreases as the angle of incidence θ increases. Molecular layers on the metal thin film affect the penetration depth, although the change is usually negligible [42]. Specific expressions for the electromagnetic fields (Ex, Hy, and Ez) can be readily derived and are provided elsewhere [43, 44].
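Equation 13 is easy to evaluate. The sketch below assumes a glass/water interface (ns = 1.515, nd = 1.33), 70° incidence, and a 633-nm source; these are illustrative values, not taken from the text.

```python
import math

def penetration_depth(lam0_nm, n_s, n_d, theta_deg):
    """Eq. 13: 1/e amplitude penetration depth [nm] of the evanescent field."""
    s = (n_s * math.sin(math.radians(theta_deg)))**2 - n_d**2
    if s <= 0:
        raise ValueError("incidence below the critical angle: no TIR")
    return lam0_nm / (4.0 * math.pi) / math.sqrt(s)

# Illustrative values: glass/water interface, 70 deg incidence, HeNe laser
p = penetration_depth(633.0, 1.515, 1.33, 70.0)
print(f"p = {p:.0f} nm")  # on the order of 100 nm, as quoted for TIRF
```

Lowering the angle toward the critical angle (about 61.4° for these indices) rapidly increases p, consistent with the trend stated above.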

Conventional Microscopy Techniques Based on SP

SPM and SPR Imaging

SPM is a label-free imaging technique, first attempted in the 1980s, that couples excitation of SPP with microscopy [45]. SPM uses incident light at a fixed angle (typically just below an SPR angle) and wavelength to measure the changes in


Fig. 6 (a) Typical reflectance curve in angle-scanning SPR measurement. In SPRi, the angle remains fixed while binding is measured in terms of the change in reflectivity (ΔR). (b) Schematic of prism-coupled SPRi (Reprinted with permission of Cell Press from [46])

reflectivity (ΔR) that occur when an SPR curve shifts upon molecular interactions on the surface, as shown in Fig. 6 [46]. SPM was often used to visualize and quantify cell/substrate contacts [47]. Although SPM makes it convenient to measure structural changes within the penetration depth from the surface without any labels, it has seen relatively little use in microscopy applications, because the


Fig. 7 (a) Interference contrast image and (b) corresponding SPM image of a goldfish glial cell on an Al film substrate. (c) The same cell with different angles of illumination. (d–f) Reflectivity curves for locations 1–4 (marked in (a)): (d) bare substrate (□); (e) thick, organelle-rich part of the lamellipodium (◇); (f) thin part of the lamellipodium (~ and ○). The dashed line in (d) is a fit to the undisturbed surface plasmon curve of the bare substrate and is replotted in (e) and (f) as a reference. The solid lines in (e) and (f) are the calculated plasmon curves for regions of the central (e) and peripheral (f) lamellipodium. Scale bar in (b) = 100 μm (Reprinted with permission of Cell Press from [46])

lateral resolution is dominated by the propagation length of SP, which is a few micrometers long [48–51]. For this reason, the image resolution tends to be larger than the diffraction limit, leading to images smeared along the SP propagation direction, as presented in Fig. 7, which compares interference contrast and SPM


Fig. 8 (a) Schematic of the optical setup. A p-polarized laser beam is injected onto a 47-nm-thick gold-coated glass coverslip through an oil-immersion objective, and the reflected light is measured by a CCD camera. (b) The entire cell bottom membrane and part of the cell top membrane in the cell edge regions are located within the typical detection depth of the SPM (Reprinted with permission of Macmillan Magazines, Ltd. from [53])

images of a goldfish glial cell [46]. Changes in the reflectivity image are clearly visible in Fig. 7c as a result of molecular interactions on the substrate. Intensive efforts have been made to shorten the SP propagation length and thus improve the lateral resolution of SPM. For example, an objective-lens-type SPM was implemented based on angle-resolved imaging (Fig. 8) [52, 53]. A prism-based SPM physically limits the magnification and overall NA of the imaging system, thus providing poor spatial resolution. However, an objective-based SPM can ensure a high lateral imaging resolution on the order of 300 nm using a high-NA objective lens. Also, the sample position and optical paths do not change in an objective-type SPM system when scanning the incident angle, which allows pixel-by-pixel tracking of acquired images. The resolution of SPM was also found to be enhanced by optimizing object orientation [54] or by taking advantage of high-NA immersion objectives [52, 55], scanning SPM configurations [56, 57], wide-field interferometry [58], and locally excited SPP modes [59]. Alternatively, surface enhancement by nanostructures has been attempted to modulate the SP propagation length and thereby improve the lateral imaging resolution of SPM [60]. A large part of the effort on nanostructure-based enhancement of SP was previously focused on improving the detection sensitivity of SPR biosensing through nanostructure-mediated excitation and localization of SP [61–64]. SP localization leads to shorter SP propagation, and thus enhanced imaging resolution may be obtained for SPM. To quantify the degree of resolution improvement, SPM was performed experimentally on nanogratings. The material and nanograting thickness (dg) were adjusted to maximize the effect of the nanograting on SP propagation length. Figure 9 shows clearly that the resolution is particularly poor on silver, because the propagation length of SP on silver is much longer (L = 21.3 μm)


Fig. 9 SEM images of reference squares on thin films of (a) gold and (b) silver. SPR images of the (c) gold and (d) silver films. SPR images of the reference pattern on Λ = 400-nm nanograting samples with dg = 10 nm ((e) gold, (f) silver) and dg = 20 nm ((g) gold, (h) silver). The lines represent the paths for the intensity profiles presented in (i) and (j) (Reprinted with permission of the Optical Society of America from [60])


than on gold (L = 6.3 μm). The image quality along the vertical axis is worse than along the horizontal axis, because the wave vector component of the incident light along the vertical axis drives the propagation of SP in that direction. The grating vector is parallel to the direction of SP propagation, i.e., the vertical axis, so as to modulate the SP propagation length. With resolution measured by the transition distance corresponding to a 10–90% intensity change across the pattern boundary, the intensity profiles shown in Fig. 9 confirm the enhancement of resolution [60]. The transition distance was measured to be 5.2 μm for a gold nanograting (dg = 10 nm), in contrast to 10 μm on a gold thin film. The enhancement is more effective on a thicker nanograting (dg = 20 nm), which produced a transition distance of 4.6 μm, an enhancement by 2.2 times. The enhancement was more significant with silver, for which the propagation length of SP is much longer: with dg = 20 nm, the transition distance on a silver nanograting was measured to be 5.5 μm versus 18.4 μm on a silver thin film, i.e., use of the nanograting improved the resolution by 3.3 times. Note that there is no propagation of SP along the horizontal axis; an image therefore remains diffraction limited horizontally. In general, SPM has seen relatively limited use in imaging applications. However, the technique has been used extensively for high-throughput SPR sensing, usually known as SPRi [65–71]. SPRi is a technique that, like SPM, measures reflectance changes of the evanescent field established by SP, here for simultaneous analysis of arrays of molecules. SPRi has been applied to detecting various biomolecular interactions involving, for example, DNA, RNA, and antibodies on arrays, since it was first used to study molecular monolayers [50].
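The 10–90% transition-distance metric used above can be computed from any measured line profile. The sketch below applies it to a synthetic, exponentially smeared edge; the profile and its decay length are fabricated for illustration and are not the measured data from [60].

```python
import math

def transition_distance(xs, ys, lo=0.1, hi=0.9):
    """Distance over which a monotonic edge profile rises from 10% to 90%
    of its full intensity swing (linear interpolation between samples)."""
    y0, y1 = min(ys), max(ys)

    def crossing(level):
        t = y0 + level * (y1 - y0)
        for i in range(1, len(ys)):
            if (ys[i - 1] - t) * (ys[i] - t) <= 0:
                f = (t - ys[i - 1]) / (ys[i] - ys[i - 1])
                return xs[i - 1] + f * (xs[i] - xs[i - 1])
        raise ValueError("level not crossed by the profile")

    return abs(crossing(hi) - crossing(lo))

# Synthetic edge smeared by SP propagation with an assumed decay length L [um]
L = 6.3
xs = [0.05 * i for i in range(600)]            # 0 .. 30 um
ys = [1.0 - math.exp(-x / L) for x in xs]      # exponential edge response
print(f"10-90% transition distance = {transition_distance(xs, ys):.1f} um")
```

For a pure exponential edge the transition distance scales with the decay length (roughly L·ln 9), which is why shortening the SP propagation length sharpens the boundary.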

TIRFM

Evanescent fields exist at the surface under TIR, and TIRFM relies on the excitation of fluorophores in the evanescent field near the surface [72–77]. TIRFM provides extremely fine depth resolution on the order of 100 nm with a very simple optical setup and also offers depth-resolved contrast by avoiding excitation of fluorescent agents located far from the surface, as shown in Fig. 10.

Fig. 10 Only fluorophores in the evanescent field are excited, as indicated in green. For TIR, the refractive index of the sample (nd) must be lower than that of the coverslip (ns)
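The depth selectivity of TIRFM follows directly from Eq. 12: the excitation intensity falls off as exp(−2z/p). A minimal sketch, assuming a typical 100-nm penetration depth:

```python
import math

def relative_excitation(z_nm, p_nm):
    """|E(z)|^2 / |E0|^2 from Eq. 12: the excitation intensity decays
    twice as fast as the field amplitude."""
    return math.exp(-2.0 * z_nm / p_nm)

p = 100.0  # assumed typical TIRF penetration depth [nm]
for z in (0, 50, 100, 200, 500):
    print(f"z = {z:3d} nm: relative excitation = {relative_excitation(z, p):.4f}")
```

Fluorophores a few hundred nanometers above the surface receive essentially no excitation, which is the origin of the depth-resolved contrast described above.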


There are two common types of TIRFM, as presented in Fig. 11: (1) illumination through a prism or (2) illumination through an objective lens [78]. Prism-based TIRFM is easy to set up because it requires only a simple optical configuration, and it allows well-controlled light incidence; with a right-angle prism, the angular spread of the incident light can be minimized. If a prism-based setup is adopted for plasmon-enhanced microscopy, fluorescence is acquired directly without suffering additional damping through a metal film. On the other hand, objective-lens-based TIRFM uses an optical setup in which illumination and acquisition share an objective lens and are located on the same side, with target samples placed on the surface of a transparent substrate. When an objective lens is used for TIRFM, its NA tends to be relatively high, e.g., an apochromatic lens from Olympus (APO 100x, NA 1.65) or Nikon (APO 60x, NA 1.49), because TIR should be maintained when illuminating biological samples such as live cells. Since the refractive index of live cells is typically between 1.33 and 1.38, the NA should be higher than 1.38. Obviously, surface structures on a substrate can modify the properties of TIRFM. For instance, layers of different dielectric materials have been deposited to produce an enhanced field at the surface for increased fluorescence. In this line of research, a two-layer structure of Al2O3 and SiO2 thin films was designed based on reflectance calculations using Fresnel coefficients and deposited on an SF10 glass substrate to provide maximal field enhancement for 442-nm excitation at a reasonable angle of incidence [79]. The maximum field enhancement, in terms of the ratio of the electromagnetic field intensity at the surface to that of the incident wave, was 56.2 and 19.5 for TE and TM polarization, respectively. In this case, the ratio of the field intensity with the dielectric films to that of a conventional structure was 8.5 and 3.0.
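The NA requirement quoted above follows from the TIR condition: the objective must deliver light beyond the critical angle, so its NA = ns sin θ must exceed the sample index. A minimal sketch, assuming a standard n = 1.515 coverslip (the cell indices are from the text):

```python
import math

def critical_angle_deg(n_s, n_d):
    """Critical angle for TIR at a substrate (n_s) / sample (n_d) interface."""
    return math.degrees(math.asin(n_d / n_s))

# TIR needs n_s*sin(theta) > n_d, i.e. the objective NA must exceed the
# sample refractive index; glass index below is an assumed typical value
n_glass = 1.515
for n_cell in (1.33, 1.38):
    theta_c = critical_angle_deg(n_glass, n_cell)
    print(f"n_d = {n_cell}: critical angle = {theta_c:.1f} deg, NA_min > {n_cell}")
```

This is why NA 1.49 or 1.65 objectives are used for live cells, while an NA 1.4 objective leaves almost no angular margin above the critical angle for n_d = 1.38.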
The experimental verification of this work was performed by imaging A431 human epithelial carcinoma cells using quantum dots for fluorescence excitation.

Fig. 11 TIRFM configurations (a) prism-based TIRFM and (b) objective lens-based TIRFM (L.S light source, P polarizer, M mirror, O objective lens, F optical filter, D.M dichroic mirror)

Y. Oh et al.

Fig. 12 (a) Bright-field and (b) TIRF images of A431 cells on a thin-film sample for TE polarization, compared with those on a reference sample without thin films, respectively, in (c) and (d) (Reprinted with permission of the Optical Society of America from [79])

Figure 12 shows the TIRFM image of A431 cells on the bilayer substrate in comparison with a reference image on a bare glass substrate without dielectric thin films and clearly confirms the enhanced fluorescence [79]. In contrast, for bright-field images of the cells, the dielectric films did not make a noticeable difference. Although this work focused on the enhanced evanescent-wave amplitude, it is expected that the same approach can be used to optimize axial resolution over conventional TIRFM without degrading lateral imaging resolution. The advantages of TIRFM include an excellent signal-to-noise ratio and relatively low photobleaching, in addition to axial super-resolution. In contrast to confocal microscopy, which presents enhanced fluorescence contrast relative to epi-fluorescence, TIRFM can provide detailed information on molecular dynamics near the plasma membrane, as shown in Fig. 13 [12]. TIRFM is widely used for studying cell adhesion and the dynamics of membrane-bound molecules [80–83]. For example, TIRFM was used to visualize microtubules distributed throughout the cytoplasm near the cell surface for studying, e.g., cortical microtubule attachment and stabilization [84]. In addition, multicolor TIRFM allows visualization of single kinesin molecules that move on individual microtubule tracks to


Fig. 13 (a) Epithelial cell expressing vesicular stomatitis virus glycoprotein tagged with yellow fluorescent protein and targeted to the plasma membrane. (b) Schematic of the structures imaged in (a). A tubular transport container approaches the plasma membrane, fuses, and then disconnects. Scale bar: 2 μm (Reprinted with permission of Macmillan Magazines, Ltd. from [12])

determine the degree of preferential movement of kinesin on posttranslationally modified tubulins [85]. The use of TIRFM in cells can be valuable not only for research on cortical events but also for investigating the overall microtubule organization and dynamics in the vicinity of the cortex, as shown in Fig. 14.

Plasmon-Enhanced Microscopy

Excitation of SP is accompanied by an evanescent wave, and fluorescence microscopy using SP-associated evanescent waves is called plasmon-enhanced microscopy or metal-enhanced microscopy. The nature of the evanescent waves and of metal-enhanced fluorescence (MEF) is quite similar to TIRF in that fluorophores in the excited state interact with localized electromagnetic fields induced in the near-field. Oftentimes, the presence of a metal thin film enhances the evanescent-wave amplitude and thus the excited fluorescence [86, 87]. An important distinction from conventional TIRFM is that the evanescent wave is polarization dependent. Because metal surfaces can increase the radiative decay of fluorophores and the extent of resonance energy transfer caused by interactions of fluorophores with free electrons, fluorescence quenching may occur when fluorescent molecules are at short distances from the metal film, owing to non-radiative energy transfer between the dyes and the metal as well as changes in the radiative decay rates [88]. The quantum yield (Q0) and lifetime (τ0) of a fluorophore in free space are given by Q0 = Γ/(Γ + knr) and τ0 = 1/(Γ + knr), where Γ denotes the radiative decay rate and knr the non-radiative decay rate. The presence of a metal surface increases the radiative rate through an additional metal-induced radiative decay (Γm). In this case, the quantum yield (Qm) and lifetime (τm) of the fluorophore near the metal surface can be estimated as Qm = (Γ + Γm)/(Γ + Γm + knr) and τm = 1/(Γ + Γm + knr). In other words, a higher Γm increases Qm while the lifetime decreases. The radiative decay rate and lifetime can also be adjusted by the refractive index of the surrounding material. The intensity difference of heavily labeled human serum albumin on glass and Ag


Fig. 14 Use of TIRFM to enhance signal-to-noise ratio. Images of a CHO cell transiently expressing microtubule plus end marker EB1-YFP were obtained in the same focal plane but with different types of illumination. (a, b) Images were obtained in a TIRF mode with a high angle of incidence and low penetration depth. (c, d) Images were obtained in TIRF mode with a lower angle of incidence and higher penetration depth, compared to (a, b). (e, f) Images were obtained in the epi-fluorescence mode (Reprinted with permission of Elsevier from [84])

nanoislands is presented in Fig. 15 [89]. MEF is dramatic, as can be seen from the nearly invisible intensity on quartz (left-hand side) and the bright image on Ag nanoislands (right-hand side). MEF has been applied in nanophotonics [90] and optical spectroscopy [91] for single-molecule sensing [92–96]. Experimentally, novel metal nanostructures and nanoparticles were used for single-molecule detection via MEF [97]. It was also shown that fluorescent dyes conjugated to metal particles can exhibit enhanced fluorescence intensity [98, 99]. Plasmon-enhanced microscopy can also take advantage of waveguide structures on the metal surface for enhanced resolution [100].
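The quantum-yield and lifetime expressions above are simple enough to tabulate. The rates below are illustrative assumptions, not measured values, chosen only to show the trend that Γm raises Qm while shortening τm:

```python
def q_and_tau(gamma, k_nr, gamma_m=0.0):
    """Quantum yield and lifetime of a fluorophore; gamma_m is the extra
    radiative decay rate induced by a nearby metal (all rates in 1/ns)."""
    total = gamma + gamma_m + k_nr
    return (gamma + gamma_m) / total, 1.0 / total

gamma, k_nr = 0.1, 0.9                 # assumed rates: Q0 = 0.1, tau0 = 1 ns
q0, tau0 = q_and_tau(gamma, k_nr)
qm, taum = q_and_tau(gamma, k_nr, gamma_m=0.9)  # metal adds radiative decay

print(f"free space: Q = {q0:.2f}, tau = {tau0:.2f} ns")
print(f"near metal: Q = {qm:.2f}, tau = {taum:.2f} ns")
```

With these assumed rates the quantum yield rises from 0.1 to about 0.53 while the lifetime drops in the same proportion, which is the signature of metal-enhanced fluorescence described in the text.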

Plasmon-Enhanced Super-Localization Microscopy

An extreme variety of nanostructures has been investigated with regard to the excitation and localization of SP to produce localized near-field electromagnetic waves that are dramatically amplified, also called hot spots. The localization of hot spots created at the surface of nanostructures has been explored for super-resolution microscopy. Formation of hot spots in the near-field is associated with strong localization of plasmonic dipoles at edges, ridges, or vertices due to the lightning rod effect [101,


Fig. 15 Photograph of fluorescein-labeled HSA (molar ratio of fluorescein/HSA near 7) on quartz (left) and on an SIF (right) as observed with 430-nm excitation and a 480-nm long-pass filter (Reprinted with permission of Elsevier from [89])

102]. Some of the simple nanostructures that may be used for field localization are shown in Fig. 16. Such nanostructures can be fabricated by various techniques, for example, electron-beam lithography, focused ion beam milling, and nanoimprinting. Compared to the conventional, diffraction-limited imaging techniques described earlier, the use of localized plasmonic near-fields can produce resolution enhancement for imaging biomolecules, because a hot spot provides a field much smaller than the diffraction limit for sampling target fluorescence. A well-designed hot spot should meet the following requirements to be useful for super-localization


Fig. 16 Scanning electron microscope (SEM) image of various nanostructures: (a) linear nanograting, (b) nano-rings, and (c) nanoapertures of squares and (d) triangles. The patterns were fabricated by electron-beam lithography

microscopy: (1) the optimum hot spot has the smallest FWHM to provide a sub-diffraction-limited PSF; (2) a circularly symmetric shape is desired, because asymmetric near-field hot spots would distort the reconstructed images and degrade the imaging resolution, as the FWHM of the long axis of a hot spot dominates the achievable imaging resolution; and (3) TIR should be maintained under all circumstances. In addition, a sufficiently fine grid of hot spots should provide a spatial frequency exceeding the frequency stipulated by the diffraction limit by more than twice, per the Nyquist theorem. In practice, other issues can also arise; for instance, secondary peaks may exist, which contribute to background noise.
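A rough reading of the Nyquist condition above: the hot-spot grid pitch should not exceed half the diffraction-limited resolution. A minimal sketch, where the NA and wavelength are assumed example values:

```python
def max_hotspot_pitch(lam_nm, na):
    """Upper bound on hot-spot grid pitch from the Nyquist condition:
    sample at more than twice the diffraction-limited spatial frequency,
    i.e. pitch <= d/2 with d the Rayleigh resolution of Eq. 1."""
    return 0.61 * lam_nm / na / 2.0

# Assumed example values: 488-nm excitation, NA 1.49 TIRF objective
print(f"max hot-spot pitch = {max_hotspot_pitch(488.0, 1.49):.0f} nm")
```

For these values the pitch works out to roughly 100 nm, which sets the scale for the nanostructure arrays discussed in the following subsections.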

Numerical Calculation of Near-Field Localization

Various nanostructures such as bow-ties and C- or T-shapes have been considered for forming a hot spot for diverse applications including surface-enhanced Raman spectroscopy (SERS). Here, we describe a recent report on near-field hot spots that may be formed by relatively simple nanostructures of circular, rhombic, and square shape [103]. For the numerical investigation, geometrical parameters such as array period, pattern size, depth, and thickness of an underlying layer were


varied. For the calculation, RCWA in 3D was used with periodic boundary conditions. All nanostructures were assumed to lie on a BK7 glass substrate with a 2-nm-thick chromium adhesion layer and a gold film. For SP excitation, TM-polarized incident light was used at λ = 488 nm, specific to GFP fluorescence excitation, with the angle of incidence fixed at θi = 60°. In the range of parameters considered, the optimized nanostructure was a periodic array of squares with 50-nm sides. These square nanoholes were shown to provide a spot size of 5,830 (= 53 × 110) nm², which is significantly smaller than a diffraction-limited spot. While this study was limited to simple nanopatterns, gap-based nanostructures typically produce even smaller near-field localization [104–108].

Field Enhancement Effect

Before we address super-resolution microscopy based on near-field localization, it is appropriate to note that the spatial localization is invariably accompanied by significant field enhancement. The field enhancement can be produced in many ways using different nanopatterns for a variety of applications. Recently, the use of grating-type sub-wavelength nanostructures was explored for stronger fluorescence emission by near-field localization [109]. Numerical computation by the FDTD method found that enhanced localization is induced by a larger dg and that a thicker grating makes the evanescent field less uniform and more localized, as shown in Fig. 17. Also, a silver nanograting presented stronger field localization than gold. On the other hand, a silver nanograting with dg = 10 nm and Λ = 300 nm, shown in Fig. 18, was found to show relatively uniform field enhancement and yet reasonably strong plasmonic fields at the surface. It is interesting that a 10–20-nm-thick structure is sufficient to support SP, in contrast to the 40–50-nm-thick metal films used in traditional SPR structures. The field enhancement was verified quantitatively by exciting and imaging fluorescent microbeads distributed randomly on grating samples and on a bare prism control. For cell imaging experiments, A431 human epithelial carcinoma cells were cultured, and quantum dot images of A431 cells were acquired as shown in Fig. 19. The fluorescence emission from cells on the test sample was visibly enhanced in intensity compared to the images on the control, in a manner that is, in general, quantitatively consistent. For the cell images in Fig. 19, slight degradation appeared in conjunction with the mismatch between the optimal quantum dot excitation spectrum and the design wavelength of the nanograting. The implication of these results is that field enhancement accompanies the near-field localization.
If desired, the enhanced fields may be utilized for specific applications, although photobleaching may be aggravated with stronger fluorescence excitation.
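The axial confinement that accompanies these enhanced fields can be estimated from the standard evanescent-wave decay formula, d = λ/(4π√(n₁² sin²θ − n₂²)). The sketch below uses illustrative refractive indices and incidence angle that are assumptions rather than values from the text; it shows why the excitation is confined to within roughly 100 nm of the surface.

```python
import math

def penetration_depth(lam_nm, n1, n2, theta_deg):
    """1/e intensity decay depth of the evanescent field in total internal
    reflection: d = lam / (4*pi*sqrt(n1^2 sin^2(theta) - n2^2))."""
    s = (n1 * math.sin(math.radians(theta_deg))) ** 2 - n2 ** 2
    if s <= 0:
        raise ValueError("below the critical angle: no total internal reflection")
    return lam_nm / (4 * math.pi * math.sqrt(s))

# Illustrative numbers (assumed, not from the text): BK7 prism, aqueous medium
print(penetration_depth(488.0, 1.515, 1.33, 70.0))  # roughly 75-80 nm
```

The depth shrinks as the incidence angle moves further past the critical angle, which is one handle on axial resolution in these schemes.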

Super-Resolution Microscopy Localized near-field hot spots can be utilized for various applications including super-resolution microscopy [110–116]. Here, we focus on super-resolution imaging

566

Y. Oh et al.

Fig. 17 Surface near-field intensity calculated by FDTD methods: (a) conventional TIRF on a silver film, (b) gold at dg = 10 nm and Λ = 300 nm, (c) dg = 20 nm and Λ = 300 nm, (d) silver at dg = 10 nm and Λ = 100 nm, (e) dg = 10 nm and Λ = 300 nm, and (f) silver at dg = 20 nm and Λ = 300 nm (Reprinted with permission of IOP Publishing from [109])

Fig. 18 SEM image of a silver grating (dg = 10 nm and Λ = 300 nm) (Reprinted with permission of IOP Publishing from [109])

18

Surface Plasmon-Enhanced Super-Localization Microscopy

567

Fig. 19 TIRFM images of A431 cells: (a) and (b) on a bare substrate (control sample) and (c) and (d) on a sub-wavelength grating sample, with ×25 digital magnification (Reprinted with permission of IOP Publishing from [109])

based on near-field localization and explore related issues such as materials, nanopatterns, optics, and image deconvolution processes. The rationale behind the use of near-field localization for super-resolution microscopy is that, if a localized hot spot is optimized to be of a size just large enough to excite a single molecule, fluorophore excitation in a diffraction-limited spot would indicate the existence of a single target molecule, and the measured fluorescence image can be post-processed to produce super-resolved images. However, it should be remembered that the simple use of a hot spot does not improve image resolution, because the PSF is determined not by the near-field characteristics but by the far-field properties. Many studies have tried to use localized nanoplasmonic near-fields modulated by nanostructures and to temporally sample the fluorescence excited by those near-fields. A recent study reports the implementation of highly vertical nanostructures for the generation of a quickly decaying field in a cell and selective visualization of intracellular fluorophores [110]. Though the electromagnetic confinement here is not based on plasmonic field localization, this work is still worth noting because it shares the nature of localization-based, spatially selective detection. For the confinement of electromagnetic waves, the authors produced a transparent dielectric nanostructure of a diameter smaller than the light wavelength to restrict the propagation of light while generating an evanescent wave along the vertical surface within about 1-μm depth into the sample interior. A highly confined observation volume was


Fig. 20 Fluorescence imaging using vertical nanostructure illumination in live cells. (a) Transmitted light imaging reveals the locations of nanostructures. (b) Fluorescence imaging by epi-illumination. (c) Nanostructure illumination excites only those fluorescence molecules that are very close to nanostructures inside the cell. Scale bar: 10 μm (Reprinted with permission of the National Academy of Sciences, USA from [110])

created for single-molecule detection and sensitive optical measurement of dyes or proteins of interest in the in vivo cell environment. The vertical nanostructures were illuminated by transmission, as shown in Fig. 20a. Epi-fluorescence by excitation at 488 nm, shown in Fig. 20b, revealed diffuse fluorescence over the entire cell. The lack of light confinement in the epi-illumination mode caused the fluorescence intensity at the nanostructure to be lower than that of the immediate surroundings. In contrast, selective excitation of the vertical nanostructures by trans-illumination provided a highly localized fluorescence signal around the pillars, as shown in Fig. 20c. Also, nanostructures were coated with antibodies for targeted illumination of intracellular proteins. Figure 21 shows COS7 cell lines transfected with a plasmid encoding the GFP-fused transmembrane protein synaptobrevin and nanostructures modified with antibodies against GFP. Figure 21c, f show bright fluorescence signals associated with colocalization of anti-GFP antibodies with the nanostructures inside the cell. The experiment indicates that vertical confinement nanostructures can not only function as an array of localized light sources inside a cell but also probe local cellular events of interest. On the other hand, vertical nanostructures are intrusive and may affect intracellular molecular dynamics and cell viability.

SUPRA Imaging Nanolithography techniques used to define nanostructures for the localization of near-fields are often too costly and involve low-throughput processes. Ideally, a single process would complete the fabrication of nanopatterns as a whole. For this reason, temperature-annealed nanoislands were used as a template


Fig. 21 Antibody-labeled nanopillars simultaneously recruit and illuminate proteins of interest in live cells. (a) White-light imaging reveals the locations of the nanostructures. (b) Fluorescence imaging by epi-illumination shows the shape of a COS7 cell expressing GFP-synaptobrevin. (c) Nanopillar illumination shows extremely clean signal, colocalizing perfectly with the nanopillars inside the cell area. (d–f) Zoom-in images show that nanopillar locations usually have brighter fluorescence compared with surrounding areas. Scale bar: 10 μm (Reprinted with permission of the National Academy of Sciences, USA from [110])

for super-localization microscopy, which is called SUPRA imaging [117]. The nanoislands form a spatially random distribution of hot spots on the metal surface that is determined by the geometric parameters of the islands in a complicated way. SUPRA imaging thus depends on the excitation of SPs in random nanoisland patterns for enhanced lateral imaging resolution. If fluorophores distributed on a substrate are imaged by conventional microscopy, multiple molecules excited in a field of view cannot be distinguished because of the diffraction limit. In SUPRA imaging, fluorophores are excited by the hot spots created at nanoislands. If the distribution of hot spots can be controlled such that approximately a single hot spot exists in a field of view, and if the hot spot can be made small enough to excite a single molecule, single-molecule imaging becomes a possibility. In this sense, how well one may control the distribution of nanoislands and the near-fields is critical to the performance of SUPRA imaging. For the preparation of nanoisland samples, a thin silver film was annealed thermally at 175 °C for 5 min, whereby the film was transformed into nanoislands. The size distribution of the nanoislands can be adjusted to some degree by adjusting the film thickness (df). Figure 22a, b present SEM and AFM images of nanoislands as a result of temperature annealing, when the initial


Fig. 22 (a) SEM and (b) AFM images of temperature-annealed nanoislands. (c) SEM image of an A549 cell attached on the nanoislands (Reprinted with permission of Wiley & Sons, Inc. from [117])

silver thin-film thickness was df = 10 nm. An image of an A549 cell on the synthesized nanoislands is also shown in Fig. 22c. When df = 10 and 20 nm, the average island size was obtained as 112 and 118 nm, respectively, and the size was found to follow a normal distribution, as shown in Fig. 23a. On the other hand, the distribution of the separation between islands was measured to fit a log-normal probability density function p(s) given by:

$$p(s) = \frac{1}{\sqrt{2\pi}\, s\, w_s}\, e^{-\left[\ln(s/s_c)/\left(\sqrt{2}\, w_s\right)\right]^{2}} \qquad (14)$$

Here, s, sc, and ws denote the separation between islands, its average, and its deviation, respectively. The near-field distribution created by the nanoislands was calculated by RCWA in Fig. 23c and clearly shows that each field of view, represented by the square in the figure, contains one or no hot spot, although the size of a hot spot ranges from 50 to 100 nm in terms of FWHM, much larger than a molecular scale. SUPRA imaging was experimentally confirmed by imaging receptor-mediated endocytosis of adenovirus through specific targeting with coxsackie virus and adenovirus receptors (CARs) at the A549 cell membrane. Imaging pathways of


Fig. 23 Geometrical distribution of the fabricated nanoislands: (a) nanoisland size, (b) separation, and (c) numerically calculated near-field distribution by RCWA (Reprinted with permission of Wiley & Sons, Inc. from [117])

adenovirus particles across the cell membrane can be an ideal application for SUPRA imaging, because an adenovirus is approximately 90 nm in diameter, similar to the size of a hot spot formed by nanoislands. Figure 24a, e present bright-field images of the cell line taken 30 min after the injection of adenoviruses on a thin-film control and on nanoislands, respectively. Figure 24b, c show conventional plasmon-enhanced microscopy images measured 15 and 30 min after injection on a thin-film control. In contrast, Fig. 24f, g are the same images by SUPRA imaging on nanoislands. The images taken 15 min after injection in Fig. 24b, f indicate that cell morphology tends to be maintained, whereas the cells become swollen after 30 min, as shown in Fig. 24c, g, as a result of internalization of the adenoviruses. Comparison of conventional plasmon-enhanced microscopy images taken on the thin-film control in Fig. 24c, d and SUPRA images on the nanoisland sample in Fig. 24g, h at the same particle concentration confirms the improved resolution. The intensity profiles presented in Fig. 24i contrast the resolution obtained in the images of adenovirus particles. SUPRA imaging provides well-defined intensity characteristics with an FWHM measured to be 143 nm. This is approximately in line


Fig. 24 Images obtained after endocytosis of adenovirus into A549 cells. (a) Bright-field image of the cell and (b), (c) plasmon-enhanced microscopy images of the thin-film control 15 and 30 min after the injection of adenoviruses. The same images in (e) bright-field and (f), (g) SUPRA imaging on the nanoisland sample. (d), (h) Magnified images of the squares shown in (c) and (g). (i) Fluorescence intensity profiles along the lines shown in (d) and (h) (Reprinted with permission of Wiley & Sons, Inc. from [117])

with the scale of convolution between hot spot and adenovirus. In other words, the result confirms the detection of a single virus particle.
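The measured 143-nm FWHM can be rationalized with a simple quadrature estimate: for roughly Gaussian profiles, the width of a convolution is approximately the root-sum-square of the individual widths. The numbers below come from the text (hot-spot FWHM of 50–100 nm, adenovirus diameter of ~90 nm); the Gaussian approximation itself is an assumption.

```python
import math

d_virus = 90.0  # nm, adenovirus diameter quoted in the text

# Root-sum-square image width for the quoted range of hot-spot sizes
for fwhm_hot_spot in (50.0, 100.0):
    fwhm_image = math.hypot(fwhm_hot_spot, d_virus)
    print(f"hot spot {fwhm_hot_spot:.0f} nm -> image FWHM ~ {fwhm_image:.0f} nm")
# The upper end (~135 nm) is in line with the measured 143 nm.
```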

NLS Despite the advantages of SUPRA imaging, its random nature makes image deconvolution very difficult because the exact locations and shapes of the near-field hot spots cannot be fully determined unless near-field detection is performed in advance. For this reason, periodic nanostructures with known kernel shapes have been used for localization-based super-resolution microscopy. As an example, suppose that closely located fluorescent molecules are excited by a localized near-field hot spot while they move at a constant speed. Conventional imaging techniques cannot distinguish individual molecular fluorescence because of the diffraction limit.


Fig. 25 (a) Experimental schematics for the NLS. (b) SEM image of the nanoaperture type antenna arrays (ϕ = 300 nm and Λ = 1 μm). Inset shows magnified and tilted SEM image of nanoaperture arrays (Reprinted with permission of Wiley & Sons from [113])

However, fluorescence sampling with time progression based on the hot spot can provide information at an improved resolution as long as the kernel shape is known a priori and sufficiently small. In other words, a localized field temporally samples the movement of fluorescent molecules for enhanced lateral resolution in the NLS [113]. The schematic for the NLS is shown in Fig. 25a. The effective resolution of the NLS is determined by the kernel size, which is to be much smaller than the diffraction-limited PSF. In this regard, optimal design of nanostructures is important in the NLS. Experimental feasibility of the NLS was demonstrated by imaging the sliding of microtubules in vitro that were fluorescently labeled with rhodamine. Microtubules participate in many cellular processes, e.g., cell division and vesicular transport, which involve molecular movements in a cell. In this work, circular nanoapertures shown in Fig. 25b were implemented with a diameter (ϕ) of ϕ = 300–500 nm and an array period (Λ) in the range of Λ = 1–3 μm. The highest field localization was obtained with nanoapertures of ϕ = 300 nm and Λ = 1 μm, as shown in Fig. 26. For image reconstruction, the near-field distribution was first calculated by RCWA. The theoretically calculated result was consistent with experimental results. It is clear that hot spots exist in the near-fields, as shown in Fig. 26, for the nanoaperture structures with ϕ = 300 nm and Λ = 1 μm at an angle of incidence of 70°. The shape of a hot spot created by the nanostructure was half-elliptical due to oblique light incidence. The FWHM of the hot spot was 39 nm (width) and 135 nm (height). The acquired fluorescence intensity can be processed for reconstruction into a super-resolved image through serial convolution with the near-field kernel, i.e., an image can be reconstructed by


Fig. 26 Numerically calculated near-field distribution, light incidence angle = 70°, a nanoaperture of ϕ = 300 nm and Λ = 1 μm. (a) The near-field hot spot is sized to be 39 nm in width (short axis) and 135 nm in height (long axis). The x and y axes are in nm. Dotted lines represent the profile along which the field intensities are provided. (b) Near-field intensity distribution of nanoaperture arrays (Reprinted with permission of Wiley & Sons from [113])

$$I(\mathbf{r};t) = \sum_{m,n} K(\mathbf{r}) \otimes i_{m,n}(t)\,\delta\!\left(\mathbf{r} - \mathbf{r}_{m,n}\right) \qquad (15)$$

where I(r;t) is the image, K(r) is the near-field kernel formed by a single nanostructure, and im,n(t) is the intensity measured at each of the 2D nanostructure elements in an array indexed by m and n. K(r) acts as a point spread function as a result of the finite size of the near-field kernel. Therefore, the product of im,n(t) with δ(r − rm,n) represents the light intensity measured at each nanostructure element in the array. Equation 15 presumes that the locations of the nanostructures, and of the hot spots formed thereby, can be exactly determined and that the distribution of hot spots is spatially fixed. Equation 15 can be simplified as

$$I(\mathbf{r}) = \sum_{k} K(\mathbf{r}) \otimes i_{a,b}(k\Delta t)\,\delta\!\left(\mathbf{r} - k\Delta t\,\mathbf{v} - \mathbf{r}_{a,b}\right) \qquad (16)$$

Here, ia,b(kΔt) is the temporally measured intensity of microtubular fluorescence at the k-th time step, and Δt and v are the length of a time step and the sliding speed of the microtubule, respectively. Each time a microtubule is sampled, it is displaced by vΔt. During image acquisition, the CCD shutter speed was 0.0517 s. The sampling of fluorescence was performed by taking the peak fluorescence intensity measured at each acquisition. The reconstructed images of microtubules by the NLS are presented in Fig. 27 in comparison with a reference control image of microtubules captured on a gold film


Fig. 27 Conventional and reconstructed images by TIRFM and the NLS. (a) Microtubules on a 10-nm-thick gold film measured over the microscopic field of view by TIRFM. (b) Reconstructed microtubule image by the NLS using a nanoaperture array of d = 300 nm and Λ = 1 μm. The lateral resolution is 76 nm in the direction of movement (135 nm orthogonally). Insets show the magnified images. (c) Fluorescence intensity profiles across the line in the circle of image (b) (Reprinted with permission of Wiley & Sons from [113])

(thickness = 10 nm) using conventional TIRFM. As expected, the reference image of Fig. 27a looks coarse due to the diffraction limit, which makes it difficult to resolve details of a microtubule. The diffraction-limited lateral resolution is estimated to be 250 nm. In contrast, dramatic enhancement in image clarity and resolution is observed in the NLS image of Fig. 27b. The intensity profile shown in Fig. 27c allows the resolution to be estimated to be on the order of 70–80 nm in the direction of the movement, which was determined by the finite kernel size and the CCD sampling rate. Note that the nanoapertures create a hot spot whose size is half the distance a microtubule travels in a single acquisition time step. On the other hand, the CCD acquires an image by integration, so the switching operation causes the kernel to be broadened by a factor of two. This process effectively increases the kernel size and potentially degrades the image resolution. As a result, all the space within a microtubule appears completely filled. The resolution depends on the direction of movement because of the ellipticity of the kernel shape. In the direction orthogonal to the movement, the measured resolution was approximately 135 nm. Overall, it is suggested that the resolution can be enhanced further by reducing the size of the hot spot kernel and increasing the sampling rate, i.e., the shutter speed.
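The reconstruction of Eq. 16 can be sketched in a 1D toy example: each temporal fluorescence sample is placed at the position the microtubule has reached and convolved with the near-field kernel. The Gaussian stand-in for the kernel, the sliding speed, and the grid below are illustrative assumptions; the kernel FWHM and shutter time follow the values quoted in the text.

```python
import math

def nls_reconstruct(samples, dt, v, kernel_fwhm, grid, r0=0.0):
    """1D sketch of Eq. 16: sum over time steps k of the near-field kernel K
    (approximated here as a Gaussian) centered at r0 + k*dt*v and weighted
    by the sampled intensity i(k*dt)."""
    sigma = kernel_fwhm / (2 * math.sqrt(2 * math.log(2)))
    image = [0.0] * len(grid)
    for k, intensity in enumerate(samples):
        center = r0 + k * dt * v
        for j, r in enumerate(grid):
            image[j] += intensity * math.exp(-((r - center) ** 2)
                                            / (2 * sigma ** 2))
    return image

# Illustrative run: kernel FWHM 39 nm and 51.7-ms steps (from the text),
# with an assumed sliding speed of 500 nm/s
grid = [i * 5.0 for i in range(60)]            # 0..295 nm, 5-nm pitch
img = nls_reconstruct([1.0] * 8, 0.0517, 500.0, 39.0, grid)
print(f"peak of reconstructed profile: {max(img):.2f}")
```

With these numbers the microtubule advances ~26 nm per step, i.e., less than one kernel width, so successive kernels overlap and the reconstructed filament is filled in continuously, as described above.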

PSALM While the NLS offers extreme versatility for imaging moving molecules at an enhanced resolution, it is limited when applied to molecules that do not move or that move at an unknown speed. For general super-resolution microscopy based on near-field localization, the nanostructure arrays that create hot spots need to be fine, with an array period much smaller than the light wavelength. Unfortunately,


Fig. 28 (a) Schematic illustration of PSALM. Switching of light incidence between L1 and L2 produces spatial switching of hot spots between HS1 and HS2. (b) Experimental configuration (CO collimator, PO polarizer, M mirrors, RM rotating mirror, OB objective, and F filter). (c) SEM image of the fabricated nanograting (Reprinted with permission of the Optical Society of America from [118])

nanostructures cannot be too close to each other, as the localized near-fields merge if they are. This limitation can be circumvented by taking advantage of extremely fine meshes of nanopatterns and temporally switching on only a part of them to excite hot spots reasonably far apart to avoid the coupling. PSALM addresses the temporal switching of hot spots for enhanced super-resolution microscopy [118]. The optical setup for PSALM is shown in Fig. 28. The main idea is spatially switched field localization by temporal variation of the incident light paths. Figure 28a presents light incidence at a wave vector kin(θ), which creates a well-defined near-field hot spot at the edge of a nanostructure. A symmetric hot spot can be produced at the opposite side of the nanostructure with light incidence at the opposite wave vector kin(−θ). This causes an excited hot spot to be continuously displaced by a preset distance (the nanograting ridge size, in this case). For adequate temporal sampling, the switching can be performed at a time interval much faster than the characteristic time associated with a specific molecular interaction. Super-resolution microscopy is possible as long as the distance between switched hot spots is below the diffraction limit, assuming that the hot spots are


isolated with sufficient separation from each other. Conceptually, PSALM can be regarded as sampling target fluorescence with multiple incident light wave vectors to increase imaging resolution. While the imaging PSF does not improve, the lateral resolution of PSALM is dominated by the dimension of the nanostructures used for near-field excitation and by the precision of the control of light incidence. PSALM also bears resemblance to super-resolution techniques based on stochastic photoactivation such as STORM. However, PSALM can be much faster, since the spatial switching may be performed rapidly. On the negative side, PSALM obtains super-resolved information only where hot spots can be excited and switched. For the proof of concept, 2D grating-type nanostructures were fabricated to excite and localize evanescent fields. Therefore, evanescent fields were localized only two-dimensionally. For full 3D localization, different types of nanostructures can be utilized. Figure 28a presents a schematic of 2D PSALM, where light incidence is switched between angles of θin = +60° and −60°. At θin = +60°, a localized hot spot of approximately 30 nm in terms of FWHM is formed on the left-hand side of the grating ridge. With light incidence at θin = −60°, the hot spot is symmetrically switched. For experimental verification of PSALM, a prism-based TIRFM was set up with adjustable light incidence, as shown in Fig. 28b. The switching speed was determined by the CCD frame speed to be approximately 160 ms/frame. Nonlinear Gaussian least-squares fits were used for the image deconvolution process. Figure 28c shows an SEM image of the nanograting used for the experiment. The nanograting was fabricated by electron-beam lithography and the samples were made of gold. The thickness of the nanograting sample was 40 nm with a 300-nm period.
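The nonlinear Gaussian least-squares fitting mentioned above recovers sub-pixel positions from diffraction-limited spots. A simpler weighted-centroid estimator, shown below as a stand-in for the full fit, illustrates the principle on a spot sampled at the 230-nm pixel size quoted later in the text; the PSF width and bead position are illustrative assumptions.

```python
import math

def gaussian(x, x0, fwhm):
    """Gaussian profile used here as a stand-in for the imaging PSF."""
    s = fwhm / (2 * math.sqrt(2 * math.log(2)))
    return math.exp(-((x - x0) ** 2) / (2 * s ** 2))

def localize_centroid(pixel_centers, counts):
    """Weighted-centroid localization: a simplified substitute for the
    nonlinear Gaussian least-squares fit used in the PSALM post-processing."""
    total = sum(counts)
    return sum(x * c for x, c in zip(pixel_centers, counts)) / total

# Simulate one bead imaged through a ~250-nm PSF on 230-nm pixels
pixels = [i * 230.0 for i in range(9)]   # pixel centers, nm
true_center = 1030.0                     # nm (arbitrary test position)
counts = [gaussian(x, true_center, 250.0) for x in pixels]

print(localize_centroid(pixels, counts))  # recovers roughly 1030 nm
```

Even with pixels larger than the emitter, the estimator localizes the spot center to a small fraction of a pixel, which is why post-processing can place each switched hot spot's fluorophore well below the diffraction limit.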
According to the scheme, the ridge length is an approximate measure of the imaging resolution and was determined by SEM to be 100 nm, with an overall grating fill factor of 33%. The imaging performance of PSALM was assessed by imaging fluorescent nanobeads (ϕ = 40 nm) with 488-nm excitation and 560-nm emission. Figure 29a presents a raw image of the fluorescent nanobeads measured by plasmon-enhanced TIRFM. The image looks blurred and the beads cannot be resolved because they are diffraction limited. Each pixel image size is approximately 230 nm, slightly larger than the diffraction limit. However, if the fluorescent beads are imaged by PSALM on the nanograting, they can be resolved at the same magnification, as shown in Fig. 29b. Note that the enhancement works only in one direction, because localization was based on a linear grating and the image thus remains diffraction limited in the direction parallel to the grating. Therefore, the images of beads in Fig. 29b are elliptical rather than circular. The image can be made isotropic if 3D nanostructures that compensate the ellipticity due to oblique light incidence are used to excite and localize SPs. The peak-to-peak distance between fluorescence signals was measured to be about 90–100 nm, which is in excellent agreement with the length of a grating ridge. Figure 29c–e represent the intensity variation of a bead with the switching. As shown in Fig. 29c, the angular switching does not affect conventional images; thus, the measured intensity remains almost constant regardless of the angular switching. In contrast, Fig. 29d shows that light


Fig. 29 Images of fluorescent beads taken by (a) TIRFM and (b) PSALM. Also shown in the insets are magnified images of beads (marked with arrows) measured by TIRFM and PSALM. The bar in the insets is 300 nm long. For (b), grating wires are directed vertically. Intensity variations during angular switching: (c) TIRFM, and PSALM with a bead (d) on the left side of a grating ridge and (e) on both sides. Blue squares and red circles represent light incidence with kx = k0 sin(60°) and kx = k0 sin(−60°), respectively, where k0 is the light wave vector in free space (Reprinted with permission of the Optical Society of America from [118])

incidence at θin = 60° excites a bead that is located on the right-hand side of a grating ridge, which is turned off at θin = −60°. Figure 29e shows the measured intensity when beads are expected to exist on both sides of a nanograting ridge. In this case, a smaller intensity difference was measured during the angular switching. The difference itself was associated with the distance from the grating and also with the relative dipole orientations.
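The on/off behavior of Fig. 29c–e can be mimicked with a toy model: each incidence angle excites a Gaussian hot spot at one edge of a ridge, and a bead's fluorescence is simply the hot-spot intensity at its position. The ridge geometry, hot-spot width, and bead positions below are illustrative assumptions, not measured values.

```python
import math

def hot_spot(x, center, fwhm=30.0):
    """Gaussian stand-in for the ~30-nm localized hot spot at a ridge edge."""
    s = fwhm / (2 * math.sqrt(2 * math.log(2)))
    return math.exp(-((x - center) ** 2) / (2 * s ** 2))

def psalm_pair(bead_positions, edge_left=-50.0, edge_right=50.0):
    """Fluorescence under the two switched incidence angles: +theta excites
    the hot spot at one ridge edge, -theta at the opposite edge."""
    i_plus = sum(hot_spot(x, edge_left) for x in bead_positions)
    i_minus = sum(hot_spot(x, edge_right) for x in bead_positions)
    return i_plus, i_minus

print(psalm_pair([-50.0]))        # single bead: strong on/off contrast
print(psalm_pair([-50.0, 50.0]))  # beads on both sides: similar intensities
```

A single bead at one edge blinks with the switching, while beads on both sides give nearly equal intensities, reproducing the smaller modulation reported for Fig. 29e.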

Summary Thanks to super-resolution microscopy, the understanding of molecular events in the cellular environment has greatly improved. Super-resolution imaging can benefit many scientific fields, including biology and biomedical engineering. A new generation of super-resolution microscopy techniques is revolutionizing biomedical research,


allowing nanometer-scale observation of molecules within cells. In this chapter, particular emphasis has been placed on SP-enhanced super-localization of biomolecules to produce super-resolution images. SP-enhanced super-localization techniques achieve three-dimensional sub-diffraction-limited localization of light fields in two ways: axially, dependence on evanescent waves provides extremely fine depth resolution; laterally, surface nanostructures modulate near-fields to form localized hot spots that are smaller than 100 nm. Various techniques were described, such as SUPRA imaging, the NLS, and PSALM. The size of a hot spot is typically (50–100 nm)² in the lateral plane and 100 nm axially, which we expect to decrease significantly in the near future. While we have primarily focused on super-resolution microscopy, the range of potential applications is limited only by imagination and can be further extended to optical sensing, molecular manipulation, and nanofabrication, to name a few.

References 1. Gould SJ (2000) The lying stones of Marrakech: penultimate reflections in natural history. Jonathan Cape, London 2. Rutherford E, Martin C, Murphy PA, Arkwright JA, Barnard JE, Smith KM, Gye WE, Ledingham JCG, Salaman RN, Twort FW, Andrewes CH, Douglas SR, Hindle E, Brierley WB, Boycott AE (1929) Discussion on “Ultra-microscopic viruses infecting animals and plants”. Proc R Soc Lond Ser B 104(733):537–560 3. Koch R (1876) Untersuchungen über Bakterien: V. Die Ätiologie der Milzbrand-Krankheit, begründet auf die Entwicklungsgeschichte des Bacillus anthracis. Cohns Beitr Biol Pflanz 2 (2):277–310 4. Ardenne M (1938) Das Elektronen-Rastermikroskop. Z Phys 109(9–10):553–572 5. Nebesářová J, Vancová M (2007) How to observe small biological objects in low voltage electron microscope. Microsc Microanal 13(S03):248–249 6. Drummy LF, Yang J, Martin DC (2004) Low-voltage electron microscopy of polymer and organic molecular thin films. Ultramicroscopy 99(4):247–256 7. Valeur B, Berberan-Santos MN (2012) Molecular fluorescence: principles and applications, 2nd edn. Wiley-VCH, Weinheim 8. Born M, Wolf E (1999) Principles of optics, 7th edn. Cambridge University Press, Cambridge 9. Abbe E (1870) Beitrage zur Theorie des Mikroskops und der mikroskopischen Wahrnehmung. Arch Mikrosk Anat 9:413–420 10. Heintzmann R, Ficz G (2006) Breaking the resolution limit in light microscopy. Brief Funct Genomic Proteomic 5(4):289–301 11. Axelrod D (1981) Cell-substrate contacts illuminated by total internal reflection fluorescence. J Cell Biol 89(1):141–145 12. Steyer JA, Almers W (2001) A real-time view of life within 100 nm of the plasma membrane. Nat Rev Mol Cell Biol 2(4):268–275 13. Hell SW, Wichmann J (1994) Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy. Opt Lett 19(11):780–782 14. Hell SW (2007) Far-field optical nanoscopy. Science 316(5828):1153–1158 15. 
Sahl SJ, Leutenegger M, Hilbert M, Hell SW, Eggeling C (2010) Fast molecular tracking maps nanoscale dynamics of plasma membrane lipids. Proc Natl Acad Sci U S A 107 (15):6829–6834 16. Nägerl UV, Bonhoeffer T (2010) Imaging living synapses at the nanoscale by STED microscopy. J Neurosci 30(28):9341–9346


17. Chojnacki J, Staudt T, Glass B, Bingen P, Engelhardt J, Anders M, Schneider J, Muller B, Hell SW, Krausslich HG (2012) Maturation-dependent HIV-1 surface protein redistribution revealed by fluorescence nanoscopy. Science 338(6106):524–528 18. Takasaki KT, Ding JB, Sabatini BL (2013) Live-cell superresolution imaging by pulsed STED two-photon excitation microscopy. Biophys J 104(4):770–777 19. Wagner E, Lauterbach MA, Kohl T, Westphal V, Williams GS, Steinbrecher JH, Streich JH, Korff B, Tuan HT, Hagen B, Luther S, Hasenfuss G, Parlitz U, Jafri MS, Hell SW, Lederer WJ, Lehnart SE (2012) Stimulated emission depletion live-cell super-resolution imaging shows proliferative remodeling of T-tubule membrane structures after myocardial infarction. Circ Res 111(4):402–414 20. Betzig E, Patterson GH, Sougrat R, Lindwasser OW, Olenych S, Bonifacino JS, Davidson MW, Lippincott-Schwartz J, Hess HF (2006) Imaging intracellular fluorescent proteins at nanometer resolution. Science 313(5793):1642–1645 21. Rust MJ, Bates M, Zhuang X (2006) Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nat Methods 3(10):793–795 22. Gould TJ, Verkhusha VV, Hess ST (2009) Imaging biological structures with fluorescence photoactivation localization microscopy. Nat Protoc 4(3):291–308 23. Pereira CF, Rossy J, Owen DM, Mak J, Gaus K (2012) HIV taken by STORM: superresolution fluorescence microscopy of a viral infection. Virol J 9:84 24. Mennella V, Keszthelyi B, McDonald KL, Chhun B, Kan F, Rogers GC, Huang B, Agard DA (2012) Subdiffraction-resolution fluorescence microscopy reveals a domain of the centrosome critical for pericentriolar material organization. Nat Cell Biol 14(11):1159–1168 25. Bailey B, Farkas DL, Taylor DL, Lanni F (1993) Enhancement of axial resolution in fluorescence microscopy by standing-wave excitation. Nature 366(6450):44–48 26. 
Neil MAA, Juskaitis R, Wilson T (1997) Method of obtaining optical sectioning by using structured light in a conventional microscope. Opt Lett 22(24):1905–1907 27. Gustafsson MG (2000) Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy. J Microsc 198(Pt 2):82–87 28. Choi JR, Kim D (2012) Enhanced image reconstruction of three-dimensional fluorescent assays by subtractive structured-light illumination microscopy. J Opt Soc Am A 29 (10):2165–2173 29. Gustafsson MGL (2005) Nonlinear structured-illumination microscopy: wide-field fluorescence imaging with theoretically unlimited resolution. Proc Natl Acad Sci U S A 102 (37):13081–13086 30. Shao L, Kner P, Rego EH, Gustafsson MG (2011) Super-resolution 3D microscopy of live whole cells using structured illumination. Nat Methods 8(12):1044–1046 31. York AG, Parekh SH, Dalle Nogare D, Fischer RS, Temprine K, Mione M, Chitnis AB, Combs CA, Shroff H (2012) Resolution doubling in live, multicellular organisms via multifocal structured illumination microscopy. Nat Methods 9(7):749–754 32. Fiolka R, Shao L, Rego EH, Davidson MW, Gustafsson MG (2012) Time-lapse two-color 3D imaging of live cells with doubled resolution using structured illumination. Proc Natl Acad Sci U S A 109(14):5311–5315 33. Arpali SA, Arpali C, Coskun AF, Chiang HH, Ozcan A (2012) High-throughput screening of large volumes of whole blood using structured illumination and fluorescent on-chip imaging. Lab Chip 12(23):4968–4971 34. Rankin BR, Hell SW (2009) STED microscopy with a MHz pulsed stimulated-Ramanscattering source. Opt Express 17(18):15679–15684 35. Takahara J, Kobayashi T (2004) Low-dimensional optical waves and nano-optical circuits. Opt Photon News 15(10):54–59 36. Zayats AV, Smolyaninov II, Maradudin AA (2005) Nano-optics of surface plasmon polaritons. Phys Rep 408(3–4):131–314 37. Raether H (1988) Surface plasmon on smooth and rough surface and on Gratings. Springer, New York


38. Kim Y, Chung K, Lee W, Kim DH, Kim D (2012) Nanogap-based dielectric-specific colocalization for highly sensitive surface plasmon resonance detection of biotin-streptavidin interactions. Appl Phys Lett 101(23):233701 39. Oh Y, Lee W, Kim D (2011) Colocalization of gold nanoparticle-conjugated DNA hybridization for enhanced surface plasmon detection using nanograting antennas. Opt Lett 36(8):1353–1355 40. Kim K, Kim DJ, Moon S, Kim D, Byun KM (2009) Localized surface plasmon resonance detection of layered biointeractions on metallic subwavelength nanogratings. Nanotechnology 20(31):315501 41. Ma K, Kim DJ, Kim K, Moon S, Kim D (2010) Target-localized nanograting-based surface plasmon resonance detection toward label-free molecular biosensing. IEEE J Sel Top Quantum Electron 16(4):1004–1014 42. Yoon SJ, Kim D (2007) Thin-film-based field penetration engineering for surface plasmon resonance biosensing. J Opt Soc Am A 24(9):2543–2549 43. Lakowicz JR (1991) Topics in fluorescence spectroscopy. Plenum Press, New York 44. De Fornel F (2001) Evanescent waves: from Newtonian optics to atomic optics. Springer, Berlin 45. Rothenhausler B, Knoll W (1988) Surface–plasmon microscopy. Nature 332(6165):615–617 46. Giebel K, Bechinger C, Herminghaus S, Riedel M, Leiderer P, Weiland U, Bastmeyer M (1999) Imaging of cell/substrate contacts of living cells with surface plasmon resonance microscopy. Biophys J 76(1):509–516 47. De Bruijn HE, Kooyman RP, Greve J (1993) Surface plasmon resonance microscopy: improvement of the resolution by rotation of the object. Appl Opt 32(13):2426–2430 48. Boozer C, Kim G, Cong S, Guan H, Londergan T (2006) Looking towards label-free biomolecular interaction analysis in a high-throughput format: a review of new surface plasmon resonance technologies. Curr Opin Biotechnol 17(4):400–405 49. Campbell CT, Kim G (2007) SPR microscopy and its applications to high-throughput analyses of biomolecular binding events and their kinetics. 
Biomaterials 28(15):2380–2392 50. Hickel W, Kamp D, Knoll W (1989) Surface-plasmon microscopy. Nature 339(6221):186 51. Barnes WL, Dereux A, Ebbesen TW (2003) Surface plasmon subwavelength optics. Nature 424(6950):824–830 52. Huang B, Yu F, Zare RN (2007) Surface plasmon resonance imaging using a high numerical aperture microscope objective. Anal Chem 79(7):2979–2983 53. Wang W, Yang Y, Wang S, Nagaraj VJ, Liu Q, Wu J, Tao N (2012) Label-free measuring and mapping of binding kinetics of membrane proteins in single living cells. Nat Chem 4 (10):846–853 54. Berger CEH, Kooyman RPH, Greve J (1994) Resolution in surface plasmon microscopy. Rev Sci Instrum 65(9):2829–2836 55. Somekh MG, Liu S, Velinov TS, See CW (2000) High-resolution scanning surface-plasmon microscopy. Appl Opt 39(34):6279–6287 56. Tanaka T, Yamamoto S (2003) Laser-scanning surface plasmon polariton resonance microscopy with multiple photodetectors. Appl Opt 42(19):4002–4007 57. Berguiga L, Zhang S, Argoul F, Elezgaray J (2007) High-resolution surface-plasmon imaging in air and in water: V(z) curve and operating conditions. Opt Lett 32(5):509–511 58. Somekh MG, Stabler G, Liu S, Zhang J, See CW (2009) Wide-field high-resolution surfaceplasmon interference microscopy. Opt Lett 34(20):3110–3112 59. Bouhelier A, Ignatovich F, Bruyant A, Huang C, Colas des Francs G, Weeber JC, Dereux A, Wiederrecht GP, Novotny L (2007) Surface plasmon interference excited by tightly focused laser beams. Opt Lett 32(17):2535–2537 60. Kim DJ, Kim D (2010) Subwavelength grating-based nanoplasmonic modulation for surface plasmon resonance imaging with enhanced resolution. J Opt Soc Am B 27(6):1252–1259 61. Byun KM, Kim S, Kim D (2005) Design study of highly sensitive nanowire-enhanced surface plasmon resonance biosensors using rigorous coupled wave analysis. Opt Express 13 (10):3737–3742

582

Y. Oh et al.


Adaptive Optics for Aberration Correction in Optical Microscopy

19

Amanda J. Wright and Simon P. Poland

Contents
Introduction ................................................. 586
AO in Astronomy .............................................. 590
General AO Systems ........................................... 592
Wavefront Modulators ......................................... 592
Wavefront Sensing ............................................ 595
   Direct Wavefront Sensing .................................. 595
   Indirect Wavefront Sensing ................................ 596
   Predetermined Aberrations ................................. 600
Applications and Implementation .............................. 601
   Confocal and Multiphoton Microscopy: Fluorescence Imaging . 601
   Intrinsic Nonlinear Techniques (SHG, THG, and CARS) ....... 602
   Single-Molecule Imaging ................................... 603
   Stimulated Emission Depletion Microscopy (STED) ........... 605
   Selective Plane Illumination Microscopy (SPIM) ............ 605
   Optical Trapping .......................................... 608
Summary ...................................................... 609
References ................................................... 610

A.J. Wright (*)
Institute of Biophysics, Imaging and Optical Science (IBIOS), University of Nottingham, Nottingham, UK
e-mail: [email protected]

S.P. Poland
Division of Cancer Research and Randall Division of Cell and Molecular Biophysics, Guy’s Campus, King’s College London, London, UK
Richard Dimbleby Department of Cancer Research, Division of Cancer Studies, New Hunt’s House, King’s College London, London, UK
e-mail: [email protected]

© Springer Science+Business Media Dordrecht 2017
A.H.-P. Ho et al. (eds.), Handbook of Photonics for Biomedical Engineering, DOI 10.1007/978-94-007-5052-4_37


Abstract

All forms of optical microscopy have the potential to suffer from aberrations due to misalignments in the optical system, local refractive index changes in the sample, or, in many cases, both. Aberrations produce a distorted wavefront at the focus of the imaging system, leading to a non-optimum focal spot and hence a decrease in image resolution and a deterioration in image quality. The problem is particularly prevalent when imaging biological tissue using an optical sectioning microscope, where the improved axial resolution over standard wide-field techniques encourages the user to image deeper into the sample than ever before. The structure present in the tissue introduces complex axial and lateral variations in refractive index, inhomogeneities that grow as the thickness of tissue the light passes through increases. Adaptive optics, a technique that originated in optical astronomy, offers a powerful solution to the problem. The principle behind adaptive optics is to shape the wavefront of the incoming light so as to overcome the distortions imposed by the sample and imaging system. Crucial to the successful implementation of adaptive optics in microscopy is the method used to determine the wavefront correction required. Here we introduce the concepts behind adaptive optics, discuss several approaches that have been taken to implement adaptive optics in microscopy, and finally provide examples of its success when applied to a variety of imaging modalities such as multiphoton microscopy, stimulated emission depletion microscopy, and selective plane illumination microscopy.

Introduction

Microscopy has advanced greatly in the last 30 years, with white light transmission microscopy in many cases being surpassed by techniques such as confocal microscopy and multiphoton microscopy. Such imaging modalities provide axial resolution close to the diffraction limit, typically several hundred nanometers, allowing the user to build up three-dimensional image stacks of their samples. More recently the optics and instrumentation have taken an even greater leap forward with the advent of super-resolution microscopes such as STED, PALM, and STORM that utilize different properties of fluorescence to supersede the diffraction limit and provide image resolution on the scale of tens of nanometers. Common to all forms of optical microscopy is the issue of aberrations that can affect and hinder image quality and resolution. Aberrations can originate from the microscope optics and the sample being imaged and in many cases are a result of both. The quality of an image is directly related to the size and shape of the focal spot; aberrations distort the wavefront of the light forming the image, leading to a distorted and deformed focal spot and a poor-quality image. Diffraction fundamentally limits resolution in the majority of imaging systems (bar the super-resolution approaches) and in the lateral directions is given by the Abbe diffraction limit,

d = λ / (2 n sin θ)    (1)

Fig. 1 Schematic diagrams illustrating three common aberrations found in optical systems: (b) spherical aberration, (c) coma, and (d) astigmatism. (a) depicts a plane wave, free from aberration, focused to a diffraction-limited spot for comparison

where d is the diameter of the focal spot formed when light of wavelength λ travels through a medium of refractive index n and converges with an angle θ. When focusing with a microscope objective lens, n sin θ is equivalent to the numerical aperture, NA, of the lens and can be as high as 1.49 for oil immersion objectives, leading to a diffraction-limited spot diameter just less than 200 nm when using green light (λ taken to be 550 nm). Along the optical axis, the minimum spot size due to diffraction, ω, is given by

ω = nλ / (n sin θ)² = nλ / NA²    (2)
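As a quick numerical check of Eqs. (1) and (2), the short Python sketch below (the function names are ours, for illustration only) evaluates the lateral and axial diffraction limits for the oil-immersion example quoted in the text (λ = 550 nm, NA = 1.49, n = 1.515):

```python
def lateral_limit(wavelength_nm: float, na: float) -> float:
    """Abbe lateral diffraction limit, Eq. (1): d = lambda / (2 NA)."""
    return wavelength_nm / (2.0 * na)

def axial_limit(wavelength_nm: float, na: float, n: float) -> float:
    """Axial minimum spot size, Eq. (2): omega = n * lambda / NA**2."""
    return n * wavelength_nm / na ** 2

# Green light focused by an oil-immersion objective (n = 1.515, NA = 1.49)
d = lateral_limit(550.0, 1.49)        # ~185 nm, "just less than 200 nm"
w = axial_limit(550.0, 1.49, 1.515)   # ~375 nm along the optical axis
print(f"lateral: {d:.0f} nm, axial: {w:.0f} nm")
```

Note that the axial spot is roughly twice the lateral one even at the highest available numerical apertures, which is why optical sectioning microscopes remain more sensitive to aberrations along the optical axis.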

Common aberrations found in imaging systems are astigmatism, spherical aberration, and coma. Astigmatism is when light in two different planes, for example, the sagittal and tangential planes, focuses at two different positions along the optical axis; see Fig. 1d. Spherical aberration is due to rays in the periphery of the beam (marginal rays) focusing at a different location from rays closer to the beam axis, leading to an elongated focal spot; Fig. 1b shows an example of spherical aberration resulting from refraction at a single optical interface. Spherical aberration is common when using thick or multielement optics and corresponds to the dependence of focal length on aperture for nonparaxial rays (i.e., rays far from the optical axis). Coma occurs when rays hit an optic off axis, producing a “comet”-shaped asymmetric focal spot, Fig. 1c. All of these distort the wavefront; they can arise from misalignments in the optical system and from refractive index mismatches both in the system and in the sample, and they cause problems in the majority of optical systems. As a convenient mathematical tool, a wavefront, ϕ(ρ, θ), can be expressed as a series of basis functions or aberration modes:

ϕ(ρ, θ) = Σ_{k=0}^{∞} c_k Z_k(ρ, θ)    (3)

where c_k are the coefficients and Z_k(ρ, θ) a complete set of orthogonal functions. Zernike modes are a convenient set of polynomial orthogonal functions that are frequently used to represent aberrations in optical systems. Zernike modes are defined over a unit circle, and the individual functions can be assigned to specific aberrations, for example, astigmatism or spherical aberration. The first three Zernike modes (k = 0, 1, and 2) are generally assigned to piston, tip, and tilt and referred to as the lower order modes; they represent translation in three dimensions but do not affect the resolution or contrast of an image. Astigmatism, coma, and spherical aberration are referred to as higher order modes. In radial coordinates, with radius ρ and angle θ, the general formulation for the Zernike polynomials is given by

Z_n^m(ρ, θ) = R_n^{|m|}(ρ) cos(mθ)  for m ≥ 0
Z_n^m(ρ, θ) = R_n^{|m|}(ρ) sin(mθ)  for m < 0    (4)

where n and m represent the radial order and azimuthal frequency, respectively. R_n^{|m|} is dependent only on ρ and given by

R_n^{|m|}(ρ) = Σ_{i=0}^{(n−|m|)/2} [(−1)^i (n − i)!] / [i! ((n + |m|)/2 − i)! ((n − |m|)/2 − i)!] ρ^{n−2i}    (5)

where the summation index i runs from 0 to (n − |m|)/2. The Zernike polynomials are shown pictorially in Fig. 2 and can be assigned to aberrations seen commonly in optical systems as shown in Table 1.

Optical aberrations are highly sample dependent, as illustrated by Schwertner et al. [1, 2] in 2007 when they used a high numerical aperture interferometer to characterize the wavefront aberrations arising from a range of biological tissue at a range of imaging depths. In microscopy, when imaging deep into a tissue sample, it can be useful to think of aberrations in terms of system- and sample-induced aberrations. System aberrations result from fixed optics in the microscope system, typically due to any slight misalignments or optics being used outside of their design criteria. Sample aberrations result from local and global changes in refractive index in the sample itself. In general, system aberrations remain fixed with imaging depth, whereas sample aberrations will change and normally increase as the user images deeper.

For some microscope objectives, a correction collar can be adjusted to improve the image quality and remove/reduce the effects of aberration. The majority of standard objectives are designed to work with a coverslip thickness of 0.17 mm and refractive index 1.515 – for high numerical aperture (>0.8) air objectives, the thickness of the coverslip is crucial, and imperfections of the order of micrometers


Fig. 2 The Zernike polynomials to the fourth order illustrated as phase changes over a unit circle

can impact on the objective’s performance. The correction collar allows the user to adjust the position of critical lenses inside the objective barrel and overcome any discrepancies in thickness and refractive index in the coverslip. Practically the user has to have a reasonable level of skill as the focus position tends to shift during correction. Tissue optical clearing, pioneered by Tuchin and coworkers, has also proved to be successful when it comes to improving image quality in tissue [3]. Here the tissue is immersed in an optical clearing agent, which normally has a high refractive index similar to the scatterers present in the sample. As the optical clearing agent penetrates the extracellular spaces, the effects of scattering are reduced leading to a more transparent sample. Depending on the tissue, it can take several weeks to produce a transparent sample ready for imaging.

Table 1 The Zernike polynomials mapped to traditional aberrations found in optical systems

Z_0^0     Piston
Z_1^1     y-axis tilt
Z_1^−1    x-axis tilt
Z_2^2     Astigmatism 45°
Z_2^0     Defocus
Z_2^−2    Astigmatism 90°
Z_3^3     Trefoil 30°
Z_3^1     y-axis coma
Z_3^−1    x-axis coma
Z_3^−3    Trefoil 0°
Z_4^4     —
Z_4^2     Second-order astigmatism 0°
Z_4^0     Spherical aberration
Z_4^−2    Second-order astigmatism 45°
Z_4^−4    —
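Equation (5) can be transcribed directly into code. The following sketch (a minimal implementation under the standard Zernike definition; the function name is ours) evaluates the radial polynomial and checks it against the defocus entry of Table 1, whose radial part is R_2^0(ρ) = 2ρ² − 1:

```python
from math import factorial

def zernike_radial(n: int, m: int, rho: float) -> float:
    """Radial Zernike polynomial R_n^|m|(rho) of Eq. (5)."""
    m = abs(m)
    if (n - m) % 2 != 0:           # R vanishes when n - |m| is odd
        return 0.0
    total = 0.0
    for i in range((n - m) // 2 + 1):   # i runs from 0 to (n - |m|)/2
        total += ((-1) ** i * factorial(n - i)
                  / (factorial(i)
                     * factorial((n + m) // 2 - i)
                     * factorial((n - m) // 2 - i))) * rho ** (n - 2 * i)
    return total

# Defocus (n = 2, m = 0): R_2^0(rho) = 2*rho**2 - 1
print(zernike_radial(2, 0, 0.5))   # -> -0.5
```

A useful sanity check on any such implementation is that R_n^{|m|}(1) = 1 for every valid (n, m) pair, e.g. spherical aberration R_4^0(1) = 6 − 6 + 1 = 1.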

More recently Combs et al. have demonstrated that by using a parabolic mirror to maximize collection efficiency in a multiphoton microscope, they are also able to considerably increase imaging depth. The parabolic mirror and additional collection optics increase the chance of collecting scattered photons, and they report a signal-to-noise enhancement of 8.9 [4, 5].

In this chapter we will focus on a technique called adaptive optics (AO) that was first used in optical astronomy to overcome the aberrations arising from the Earth’s atmosphere [6]. Since the turn of the century, AO has been applied to several different microscopy modalities in order to correct for aberrations (see section “Applications and Implementation”) [7–11]. AO aims to shape the wavefront of the incoming laser beam in such a way that it counteracts any distortion imposed by the aberrations in the optical path or the sample, hence improving the quality of the focal spot and ultimately the resolution of the system. A schematic can be seen in Fig. 3.

At the crux of all AO systems is the method used to determine the wavefront correction required in order to effectively compensate for the aberrations present. At present, there are a number of types of AO systems being employed in microscopy, which can be grouped into direct wavefront sensing and indirect wavefront sensing techniques, each with their own advantages and disadvantages. This chapter aims to give a flavor of the different approaches that have been implemented, the practicalities of including AO in an imaging system, and the level of imaging improvement that has been achieved in a variety of microscopy modalities.
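The correction principle of Fig. 3 — applying the equal-but-opposite distortion to the incoming beam — can be illustrated numerically. In the sketch below, a hypothetical sample-induced aberration is represented as a phase map over the pupil (the mode mix and coefficients are arbitrary, chosen only for illustration); adding its negative restores a flat wavefront:

```python
import numpy as np

# Pupil coordinates over a unit circle (128 x 128 grid)
y, x = np.mgrid[-1:1:128j, -1:1:128j]
rho, theta = np.hypot(x, y), np.arctan2(y, x)
pupil = rho <= 1.0

# Hypothetical sample-induced aberration (phase in radians): a mix of
# spherical aberration and coma radial terms with arbitrary coefficients.
aberration = (0.8 * (6 * rho**4 - 6 * rho**2 + 1)
              + 0.3 * (3 * rho**3 - 2 * rho) * np.cos(theta))

# The adaptive element applies the equal-but-opposite phase
correction = -aberration
residual = aberration + correction

print("rms wavefront error before: %.3f rad" % aberration[pupil].std())
print("rms wavefront error after:  %.3f rad" % residual[pupil].std())
```

In a real instrument the correction phase is of course not known in advance; estimating it, directly or indirectly, is exactly the wavefront-sensing problem discussed in the following sections.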

AO in Astronomy

One of the greatest problems faced in ground-based optical astronomy is the distortions arising from the Earth’s atmosphere, which impact on image resolution and quality [12]. These distortions originate from turbulence within the atmosphere,


which, in turn, is caused by small temperature variations, leading to microvariations in both the density and refractive index of the atmosphere. These small changes in refractive index (~10⁻⁶) [6] over large distances (atmosphere depth ~100 km) can build up to cause large refractive index variations. This results in a significant reduction in image resolution and signal intensity when imaging any celestial object from ground-based optical telescopes. Before the development of adaptive optic systems, telescopes were placed on top of mountains (e.g., Hawaii, La Palma) to reduce the degree of atmospheric turbulence or into space (i.e., the Hubble Space Telescope) to remove atmospheric disturbance altogether. Therefore any solution which could overcome problems with astronomical seeing due to the atmosphere as well as improving the detected signal (without increasing telescope mirror sizes) is of enormous benefit.

The use of AO was first suggested in 1953 by H. W. Babcock [13] whereby, with the aid of a wavefront sensor and adaptive element, atmospheric distortions could be detected and compensated. In his original system, Babcock suggested using a revolving knife-edge above an orthicon (a now obsolete television pickup tube) to act as a wavefront sensor, where the electrical signal output controlled the electron beam intensity of an electron gun. By aiming the spatially varying electron beam at a mirror coated with a thin layer of oil, it was proposed that the resultant phase introduced by the change in thickness of the film would compensate for aberrations. Although the principle was sound, the technology required to compensate for the turbulence of the atmosphere required several thousand changes per second, which was unachievable at the time. It was not until recent developments in electronics and computing, as well as wavefront sensing and deformable mirrors, that the use of dynamic aberration correction devices has become a practical solution.

Fig. 3 A schematic diagram illustrating the principle of adaptive optics. (a) A planar wavefront is focused by the microscope system producing a spherical wavefront and an optimal focal spot; (b) when focusing through a dielectric interface, for example, a microscope slide, the focal spot is broadened and elongated; (c) when imaging through biological material with a planar wavefront, the resulting focal spot is further distorted; (d) the equal but opposite wavefront distortion is placed on the incoming beam resulting in a spherical wavefront at the focus and restoring the quality of the focal spot to optimal
In astronomy the degree of wavefront correction required is measured using a probe or beacon high in the atmosphere. The beacon is produced using a focused high-power pulsed laser beam to excite sodium atoms located high in the upper atmosphere. By detecting the emission using a wavefront sensor, information on the atmospheric distortions from a particular region of sky can be measured and compensated for. This approach is often referred to as the “laser guide star” method.

General AO Systems

AO systems are typically composed of three main components [14] (see Fig. 4):

1. Wavefront sensor – measures the state of the system to be optimized. This can be performed either directly using a form of wavefront sensor or indirectly by measuring a particular fitness parameter which is then sent to a control system.
2. Control system – interprets the signal sent from the sensor into a signal that can control the wavefront modulator and thus compensate for aberrations.
3. Wavefront modulator – the adaptive element that modulates the wavefront to correct for aberrations.

Any subsequent change to the wavefront modulator is detected by the sensor via the closed-loop feedback (Fig. 4) and incorporated in the computer control.
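The closed loop of an indirect AO system (Fig. 4b) can be sketched as a simple iterative search: the control system perturbs the modulator, the fitness sensor reports whether the image improved, and only improvements are kept. The sketch below is a toy hill-climbing loop under assumed names — real systems use more sophisticated search algorithms, and `measure_fitness` stands in for a hardware measurement such as total image intensity:

```python
import random

def optimize_indirect(measure_fitness, n_actuators=12, step=0.1,
                      n_iter=500, seed=0):
    """Hill-climbing control loop for an indirect (sensorless) AO system.

    measure_fitness plays the role of the fitness sensor in Fig. 4b;
    the returned coefficient list stands in for the drive signals the
    control system would send to the wavefront modulator.
    """
    rng = random.Random(seed)
    coeffs = [0.0] * n_actuators           # start from a flat modulator
    best = measure_fitness(coeffs)
    for _ in range(n_iter):
        k = rng.randrange(n_actuators)     # perturb one actuator at a time
        trial = list(coeffs)
        trial[k] += rng.choice((-step, step))
        f = measure_fitness(trial)         # read the fitness sensor
        if f > best:                       # keep the change only if it helps
            coeffs, best = trial, f
    return coeffs, best

# Toy fitness sensor: the signal is maximal (zero) when each coefficient
# cancels a hypothetical fixed aberration of 0.5 on every actuator.
coeffs, best = optimize_indirect(lambda c: -sum((ci - 0.5) ** 2 for ci in c))
print(best)  # improves toward 0 from the initial value of -3.0
```

Because every trial costs one fitness measurement (i.e., one image or intensity reading), the speed of the wavefront modulator directly limits how quickly such a loop converges — one reason the kHz refresh rates of deformable mirrors discussed below suit algorithm-based AO.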

Wavefront Modulators

Wavefront modulators are used to spatially impose a phase change on the incoming wavefront. This can be achieved using various types of mirror and liquid crystal display technologies. These devices work on the basis of introducing a path length variation or refractive index variation to alter the phase profile of the light incident on the device. Deformable membrane mirrors (DMMs) or spatial light modulators (SLMs) are typically used as wavefront modulators (see Fig. 5 for a schematic of the two technologies).

DMMs are essentially a thin reflective membrane above an array of electrostatic or magnetic actuators, where local path length changes can be imposed across the wavefront by altering the voltages applied to individual actuators [15]. The device is composed of a continuous membrane, and therefore phase/path length discontinuities are not possible. The maximum wavefront change possible, or “stroke,” is device and manufacturer dependent, ranging from less than 10 μm to 50 μm total stroke, with smaller changes possible between neighboring actuators. The cost of the devices reflects the stroke available and ranges from ~€1.5 k, for the smaller stroke devices, to ~€22 k for the larger stroke devices; in many cases DMMs are the cost-effective option over SLMs. The majority of DMMs are bound at the edges, placing a restriction on the corrections achievable, and they are often “pull”-only devices. When using a “pull”-only device, it is common to start with a defocus offset



Fig. 4 Examples of two different AO feedback loops. (a) A direct system using a wavefront sensor and (b) an indirect AO with a form of fitness sensor

594

A.J. Wright and S.P. Poland

Fig. 5 (a) An electric field applied to a deformable membrane mirror exerts a local force on the membrane causing it to deform and change the optical path length of the incoming light beam. (b) An electric field applied across a spatial light modulator changes the orientation of the liquid crystal molecules and hence the effective local refractive index altering the phase of the incoming beam

on the DMM, corrected by a lens earlier in the system, allowing the user to effectively apply negative and positive aberration correction. The two important benefits of DMMs are their light efficiency and speed. In terms of light loss, they are equivalent to placing an extra mirror in the system and can be antireflection coated according to the wavelength being used to further reduce losses. The speed of DMMs is again manufacturer dependent, but commonly they operate at refresh rates of several hundred Hz or a kHz, and therefore they are ideally suited to algorithm-based AO approaches that rely on the rapid sampling of possible wavefronts. SLMs are holographic, pixelated, liquid crystal devices (typically 512 × 512 pixels) that impose a phase change on the incoming wavefront. Local changes in electric field are applied to the liquid crystal, resulting in changes in refractive index and therefore phase. Using phase wrapping approaches, they can achieve a maximum phase change, or "stroke," of several π across the device and are, in this respect, more powerful than DMMs. It is also possible to produce phase discontinuities, and there are no issues associated with bound edges in the same way there can be with DMMs, making an SLM a more flexible option. However, they can be lossy, with a zero-order diffraction efficiency of 65% not uncommon, although with recent advances in the technology, manufacturers are now quoting closer to 95%. Their update rate depends on the liquid crystal response time and also the method for
applying and determining the required hologram. The SLM is often controlled via the computer graphics card as though it were a separate monitor, and therefore it typically operates at ~50 Hz. SLM and graphics card technology is changing rapidly, and update rates of several hundred Hz are now being quoted for nematic liquid crystal devices. Some optical microscopy experiments, for example, optical tweezers, routinely use SLMs, and therefore in these cases, to avoid the introduction of further optics, they are the natural correction device. An approach that uses both has also been reported, taking advantage of the large stroke of the SLM and the high speed of the DMM [16]. Whether you use an SLM or a DMM, you are likely to end up including additional optics in your light path to reimage the active region onto the back aperture of the microscope objective, and often the laser scanning device too. Care must be taken to expand and then reduce the size of your laser beam so that the full active region of the modulator and the back aperture of the objective are used. Most SLMs and DMMs rely on a beam size of ~12–15 mm, whereas objective back apertures and scanners are usually smaller. With each additional optic introduced, there is a greater risk of introducing further aberration to the optical system.
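The beam-size matching just described reduces to choosing relay focal lengths whose ratio gives the required magnification. A small sketch, with illustrative beam diameters and focal lengths (none taken from a specific setup):

```python
# Beam-size matching with a two-lens relay: the magnification f2/f1 maps
# the input beam diameter onto the modulator, and a second relay maps the
# modulator onto the objective back aperture. All values are illustrative.

def second_focal_length(d_in_mm, d_out_mm, f1_mm):
    """Return f2 for a telescope that resizes a beam from d_in to d_out."""
    return f1_mm * (d_out_mm / d_in_mm)

f_expand = second_focal_length(2.0, 12.0, 50.0)   # 2 mm laser -> 12 mm SLM
f_reduce = second_focal_length(12.0, 6.0, 200.0)  # 12 mm SLM -> 6 mm back aperture
```

Each relay is also an opportunity to reimage the modulator's active region onto a conjugate pupil plane, which is why the extra lenses are tolerated despite the aberration risk noted above.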

Wavefront Sensing

In order to determine the correction required to compensate for aberrations in the system, the wavefront must first be measured. This can be achieved either by measuring the surface profile of the wavefront directly or by inferring it indirectly using a form of fitness sensor (see Fig. 4).

Direct Wavefront Sensing

The three main types of direct wavefront sensing approach are interferometric (i.e., shearing interferometry), geometrical (i.e., Shack-Hartmann [17]), and phase retrieval [18]. Phase retrieval is an algorithm-based approach for finding the phase solution to a measured amplitude function. Probably the most common form of wavefront sensor used in microscopy is the Shack-Hartmann wavefront sensor, which involves a lenslet array in front of a CCD camera. For a plane, collimated incoming wave, an evenly spaced array of equal-sized spots should be formed on the camera, and any deviation from this can be attributed to aberration. For commercial systems the lenslet array and camera arrangement are pre-calibrated, and the software determines any deviation from a plane wavefront in terms of the contributions of the various Zernike polynomials. In a paper published in 2010, an SLM was used in an optical trapping configuration as both the wavefront correction device and a "virtual" wavefront sensing device [19]. The SLM was initially used to create a lenslet array which was focused
by the objective lens onto the sample plane; from here the Zernike polynomials were determined, and then the appropriate correction was applied to the SLM. For a successful direct wavefront sensing approach (see Fig. 4a), a known reference point is required in the sample, similar to the laser guide star used in optical astronomy. An obvious laser guide star is not readily available in microscopy. To resolve this, Azucena et al. injected 1 μm fluorescent beads into a Drosophila embryo sample at the final stages of preparation and imaged them on a Shack-Hartmann wavefront sensor, similar to a laser guide star approach [20, 21]. The wavefront sensor was used in a closed-loop configuration along with a DMM, and they were able to restore the image of a bead at a depth of 100 μm into the embryo sample. Direct wavefront sensing techniques require enough light to detect the shape of the wavefront, which can be difficult to achieve when imaging deep into a biological sample due to the deterioration in the signal as well as the complexity of the aberrations present. In confocal systems, particularly when operating in fluorescence, the collected light may not be of sufficient intensity to make an accurate determination of the wavefront shape. Feierabend et al. removed the need for a laser guide star by looking at the backscattered light from their highly scattering samples [22]. They were able to distinguish in-focus and out-of-focus light using coherence-gated and phase-shifting interferometry, similar to optical coherence tomography. This works on the principle that unscattered light from the focus has a different arrival time to out-of-focus light that has undergone multiple scattering events. The phase-shifting interferometer not only extracts the in-focus light but also provides information about its phase, and from here the wavefront can be determined.
At this point the wavefront contains equivalent information to that which would be retrieved from a laser guide star, albeit without the use of fluorescence, and a Shack-Hartmann-type approach can be used to express the aberrations in terms of Zernike modes.
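The Shack-Hartmann principle described above can be reduced to a few lines: each lenslet's spot displacement from its reference position, divided by the lenslet focal length, gives the local wavefront slope over that lenslet. A sketch with an assumed lenslet focal length and a pure tip/tilt test case:

```python
# Shack-Hartmann principle: each lenslet focuses its patch of the wavefront
# to a spot; the spot displacement from its reference position, divided by
# the lenslet focal length, gives the local wavefront slope (small angles).

LENSLET_F = 5e-3  # lenslet focal length in metres (illustrative value)

def local_slopes(ref_spots, measured_spots, f=LENSLET_F):
    """Spots are (x, y) positions in metres; returns (dW/dx, dW/dy) per lenslet."""
    return [((mx - rx) / f, (my - ry) / f)
            for (rx, ry), (mx, my) in zip(ref_spots, measured_spots)]

# A pure tip/tilt aberration shifts every spot by the same amount:
ref = [(0.0, 0.0), (1e-4, 0.0), (0.0, 1e-4)]
meas = [(x + 2e-6, y) for x, y in ref]  # 2 um shift of every spot along x
slopes = local_slopes(ref, meas)
# every lenslet reports the same x-slope, as expected for tilt
```

Fitting Zernike derivative patterns to such a slope map is what the commercial software step described above performs.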

Indirect Wavefront Sensing

Indirect wavefront sensing usually involves an iterative search routine which alters the shape of the incoming wavefront and maximizes or minimizes a particular property of the image selected to be directly related to image quality or resolution (i.e., a fitness sensor), for example, intensity. The different approaches can be split into modal or zonal sensing, depending on the basis set used for the search routine. For example, a modal approach might use the Zernike polynomials as the basis set, taking advantage of the orthogonality of the modes, which allows each mode to be corrected independently of the others. The modal approach relies on being able to accurately reproduce the required modes without any cross talk. In a zonal sensing approach, the basis set could, for example, be each actuator on a DMM, and here the optimization algorithm becomes increasingly important since it determines
how quickly and effectively the search space is explored. In the zonal approach, the speed at which the wavefront can be changed is important, and hence it is usually best suited to systems using a DMM. Both techniques will be discussed in more detail in the subsequent sections. Due to the issues with direct wavefront sensing, the majority of research groups have embarked on indirect wavefront sensing approaches.

Modal Sensing

The use of a modal approach for indirect wavefront sensing, sometimes referred to as sensor-less wavefront sensing, was pioneered by the group in Oxford led by Wilson and Booth [10, 23]. As explained previously in section "Introduction," a wavefront can be described in terms of a set of modes (e.g., Zernike) which are orthogonal to each other; see Eq. 1. For each mode, equal amounts of negative and positive correction are applied to the wavefront, which when focused lead to two focal spots of differing intensities that can be measured using a confocal pinhole. The difference in intensity (V_k^+ − V_k^−) between these two spots is proportional to the amount, c_k, of that particular mode present on the original aberrated wavefront,

c_k = (V_k^+ − V_k^−) / V_0    (6)

Since the modes are orthogonal, their individual contributions to the final corrected shape can be measured independently of each other. Each mode is applied sequentially and the level of that mode present is measured, allowing the final combined correction to be determined. The great benefit of this approach is the speed at which you can arrive at the required correction, since by considering only orthogonal modes the search space is greatly simplified. The user decides the number of modes they wish to correct for, and for each mode two trial wavefronts are required, plus a measure of the intensity of the focal spot when no correction has been applied, V_0. Booth et al. used this routine to correct for 7 Zernike modes (including astigmatism, coma, trefoil, and first-order spherical) in a fluorescence confocal image of mouse intestine and found an optimum result was achieved after running through the cycle twice, leading to a total of 28 scans [10]. When using an SLM for indirect modal sensing, the whole process can be made even more time efficient by splitting the wavefront into n paths which are then modified by aberration plates. Reducing the time taken to reach a correction is of particular importance for applications where sample photodamage is an issue. Important in any form of indirect modal wavefront sensing is the accurate representation of the modes. When using a DMM with a small number of actuators and applying analytical modes like Zernike or Lukosz modes, this is a nontrivial task. Wang et al. address this issue by deriving an alternative modal basis set directly from the actuator influence functions, hence avoiding any approximation errors, with their simulations showing a significant improvement [24].
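The sequential modal routine around Eq. 6 can be illustrated with a toy quadratic intensity model (an assumption standing in for the real microscope response): biasing each mode by ±b and forming (V_k^+ − V_k^−)/V_0 yields a signal proportional to that mode's coefficient, which a known gain converts back to the coefficient itself. The constants below are illustrative.

```python
# Toy sensor-less modal correction in the spirit of Eq. 6: for each
# orthogonal mode, apply +/- bias, measure the focal intensity, and
# estimate the aberration coefficient from the intensity difference.

ALPHA, BIAS = 0.1, 0.5  # fitness curvature and trial bias (illustrative)

def intensity(aberration, applied):
    """Quadratic model: intensity drops with residual wavefront error."""
    err = sum((a + p) ** 2 for a, p in zip(aberration, applied))
    return 1.0 - ALPHA * err

def estimate_modes(aberration):
    n = len(aberration)
    v0 = intensity(aberration, [0.0] * n)  # no correction applied
    estimate = []
    for k in range(n):
        plus = [BIAS if j == k else 0.0 for j in range(n)]
        minus = [-b for b in plus]
        ck = (intensity(aberration, plus) - intensity(aberration, minus)) / v0
        # In this model ck = -4*ALPHA*BIAS*a_k / v0; rescale to recover a_k.
        estimate.append(-ck * v0 / (4 * ALPHA * BIAS))
    return estimate

est = estimate_modes([0.3, -0.2, 0.1])
# est recovers the hidden mode coefficients, one pair of trials per mode
```

Note the bookkeeping matches the text: two trial wavefronts per mode plus one uncorrected measurement V_0, so n modes cost 2n + 1 measurements per cycle.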

Zonal Sensing

Zonal sensing approaches rely on no prior knowledge of the sample or the modal wavefront components; instead, for example, they work with the set of actuators present on the DMM.

Optimization Algorithms

One common form of zonal sensing uses a DMM with an optimization algorithm. Here the search space is typically more complicated, since it has not been reduced via the use of orthogonal modes; it is therefore important to be able to sample a wide range of wavefronts very quickly. A property of the image, directly related to image quality and/or resolution, is selected as the merit factor, and an optimization algorithm is used to either maximize or minimize this property and hence remove aberrations and improve image quality. In order for such an approach to be successful, the choice of merit factor and optimization algorithm has to be carefully considered, particularly in terms of speed and the level of correction required. The downside of this approach is often the time taken to complete an optimization and the amount of sample photon exposure that occurs during this time. In 2005 Wright et al. looked at a range of optimization algorithms, from stochastic random search algorithms to evolutionary genetic algorithms, in order to determine which worked best in a confocal microscope arrangement [25]. The algorithms were compared in terms of the number of iterations (or time taken) to complete an optimization and the level of improvement in axial resolution achieved. A hill-climbing algorithm that tried each actuator in turn was the quickest but achieved limited improvement, since it is a local search technique with a tendency to get stuck on a local maximum; see Fig. 6. The genetic algorithm, based on evolution, started

Fig. 6 An example search space showing solution versus merit factor for a particular problem highlighting the difference between a local and global optimization, B representing the global maximum (i.e., the solution giving the highest merit factor) and A and C representing local solutions. The final outcome for an optimization algorithm will depend on whether a local or global algorithm is implemented as well as the initial starting point (1, 2, 3, or 4). For example, if left to run long enough, a global algorithm such as the genetic algorithm with the potential to search the entire search space will return point B as a solution, whereas a local algorithm like a hill-climbing algorithm would return A if starting at point 1 or B if starting at point 3

with a random population of DMM shapes and used the individual actuators to represent the genes. The genetic algorithm provided a global search and therefore was best in terms of improvement factor but was poor on the number of iterations required. The authors concluded that a random search optimization, where an actuator is selected at random, changed by a random amount, and the change is accepted or rejected depending on its influence on the merit factor, provided a reasonable level of improvement in the fastest time. The intensity of a point in the image, or the average intensity over a small region of interest, is typically used as the merit factor for optimization routines. Although intensity does not directly link to resolution, the effect of "asking" the algorithm to concentrate all the photons and energy into a small region in the image normally results in an increase in resolution and image quality. Different merit factors have been explored, for example, optimizing on contrast or directly on the width of the point spread function and hence resolution, but the major limitation with these approaches is the time taken to determine the merit factor [26]. The intensity of a point in the sample can be read out almost instantaneously. A recent paper [27] has addressed the issues of photo-bleaching and optimization time when using the optimization algorithm approach. One approach presented was to create look-up tables of DMM shapes which could be determined using a reference sample and then applied as required to the sample of interest. Mullenbroich et al. demonstrated that the look-up tables could be determined by optimizing on a second-harmonic signal that does not suffer from photo-bleaching, the shape then being applied to multiphoton imaging to further reduce photodamage. All these approaches have their merits and, depending on the sample and the information required, can help to considerably reduce sample exposure time.
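The random-search strategy the authors favored can be sketched directly: pick an actuator at random, perturb it by a random amount, and keep the change only if the merit factor improves. The quadratic merit model and the actuator "target" below are illustrative stand-ins for a real intensity measurement, not part of the cited work.

```python
import random

# Zonal random-search optimization on DMM actuator voltages: accept a
# random single-actuator perturbation only if the merit factor improves.

random.seed(1)
TARGET = [0.4, -0.2, 0.7, 0.0, -0.5]  # actuator settings that cancel the aberration

def merit(actuators):
    """Focal intensity surrogate: highest when actuators match TARGET."""
    return -sum((a - t) ** 2 for a, t in zip(actuators, TARGET))

def random_search(steps=2000, amplitude=0.1):
    shape = [0.0] * len(TARGET)
    best = merit(shape)
    for _ in range(steps):
        k = random.randrange(len(shape))      # pick an actuator at random
        trial = list(shape)
        trial[k] += random.uniform(-amplitude, amplitude)
        m = merit(trial)
        if m > best:                          # accept only improvements
            shape, best = trial, m
    return shape, best

shape, best = random_search()
```

Every step costs one merit-factor measurement, which is why a fast modulator and an instantaneously readable merit factor (point intensity) matter so much in practice.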
Increased sample exposure time, and therefore increased risk of photodamage, is the downside of all AO systems and something the user needs to be aware of when implementing such approaches.

Pupil Segmentation

In 2010 a group based at Janelia Farm introduced a new zonal indirect wavefront sensing approach where they split the rear pupil of the microscope objective into individual beamlets using an SLM [28, 29]. In an ideal situation, without aberration and sample inhomogeneities, a diffraction-limited focus arises from all light rays entering the rear pupil and intersecting at a common point with a common phase inside the sample. Local changes in refractive index in the sample act to redirect these rays as well as shift their phases relative to each other. The SLM is divided into N subregions (typically N < 100), leading to N beamlets at the rear focal plane. Each beamlet can then be considered individually, with a continuous phase ramp applied to the separate subregions of the SLM in order to alter the angle of the beam and correct for position. The authors proposed and demonstrated two separate methods for correcting for phase: the direct measurement and the phase reconstruction method. The direct measurement approach turns "on" one of the beamlets and uses this as a reference from which to correct the phase of the other beamlets, taking a series of images for each beamlet with a variety of phase offsets in order to maximize the
signal at the focal point. For the phase reconstruction method, information gained about the beam deflection required for each beamlet provides a map of phase gradients across the pupil, and from here the phase itself is extracted using an iterative algorithm. The main hurdle to overcome with the approach is the power limitation arising from splitting the beam into multiple beamlets. This is particularly true when working with a nonlinear signal, i.e., a multiphoton signal where the signal intensity is proportional to the incident power squared. Reducing the number of beamlets, increasing laser beam power, and increasing pixel integration time can all help here, but ultimately there will be a limit placed on the number of beamlets possible, and this limit will decrease with image depth. Sample photon exposure can be reduced using the phase retrieval method and by limiting the number of beamlets. Compared to using an optimization algorithm method, as described above, there is a reduced risk of photodamage, since fewer steps are needed and images are taken rather than the beam being parked on a specific point for a prolonged period of time. Like the previous indirect methods described above, intensity from the focal point is used as a measure of the correction required, and this was read initially from a fluorescent bead present in the sample and then later from a "bright" micron-sized feature in the sample.
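The direct measurement method can be illustrated with an idealized two-beam interference model (an assumption, not the published analysis): stepping a trial phase offset on one beamlet against the reference and keeping the offset that maximizes the focal signal cancels that beamlet's unknown phase error.

```python
import math

# Per-beamlet phase correction by phase stepping: the reference beamlet has
# phase 0; a test beamlet carries an unknown phase error. The focal signal
# from their interference peaks when offset + phase_error = 0 (mod 2*pi).

def focal_signal(phase_error, offset):
    """Intensity of reference + one beamlet interfering at the focus."""
    beamlet = complex(math.cos(phase_error + offset),
                      math.sin(phase_error + offset))
    return abs(1 + beamlet) ** 2  # = 2 + 2*cos(phase_error + offset)

def best_offset(phase_error, n_steps=64):
    offsets = [2 * math.pi * i / n_steps for i in range(n_steps)]
    return max(offsets, key=lambda o: focal_signal(phase_error, o))

# A beamlet with a 1.2 rad phase error is corrected by ~(2*pi - 1.2) rad:
offset_found = best_offset(1.2)
```

In the real system each trial offset costs an image, so the number of phase steps multiplied by the number of beamlets sets the photon budget of the method.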

Predetermined Aberrations

It is generally accepted that the bulk aberration created by biological tissue is spherical aberration, which is generated by the average refractive index mismatch. Spherical aberration can be defined as a Zernike polynomial that is perfectly symmetrical with regard to rotation around the optical axis. This type of aberration is therefore only accurately generated by layers of refractive indices without any lateral inhomogeneities. This model, termed a "stratified medium," has been well documented in the published literature as the origin of spherical aberrations [30]. Several groups have used this approach as a route for predetermining the aberration correction required as the user images deeper into the sample [31, 32]. Although an enhancement in signal and image quality is observed, the result often falls short of the predicted outcome, mainly because the model does not perfectly reproduce the optical properties of biological tissue. Biological tissue shows a complex axial and lateral variation of refractive indices and only converges toward the model of a stratified medium when the local inhomogeneities of refractive indices are averaged over large lateral areas. Since a stratified medium is only an approximation of biological tissue, and because Zernike polynomials are orthogonal, any aberration that is not in its entirety made up of spherical aberration must contain components of other Zernike modes.
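The rotationally symmetric mode referred to here is the primary spherical Zernike term, Z_4^0(ρ) = √5 (6ρ⁴ − 6ρ² + 1), which depends only on the pupil radius ρ and never on the azimuthal angle, matching the stratified-medium picture. Its unit normalization over the pupil, the orthonormality the closing argument relies on, can be checked numerically (textbook material, not tied to a specific cited implementation):

```python
import math

# Primary spherical aberration: Z_4^0(rho) = sqrt(5)*(6*rho**4 - 6*rho**2 + 1).

def z_spherical(rho):
    return math.sqrt(5) * (6 * rho ** 4 - 6 * rho ** 2 + 1)

def norm_over_unit_disk(f, n=4000):
    """Midpoint-rule integral of f(rho)^2 over the unit disk, divided by pi."""
    total = 0.0
    for i in range(n):
        rho = (i + 0.5) / n
        total += f(rho) ** 2 * rho / n    # radial integral of f^2 * rho
    return 2.0 * total  # the 2*pi azimuthal factor cancels the 1/pi weight

norm = norm_over_unit_disk(z_spherical)
# norm evaluates to 1 for an orthonormal Zernike mode
```

Because the integral of the product of two different orthonormal modes is zero, any residual wavefront left after removing the spherical component must be carried entirely by the other Zernike modes, which is exactly the limitation of depth-only precompensation.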

Applications and Implementation

Due to the development of inexpensive wavefront modulating systems, AO has been incorporated successfully into a wide variety of microscopy modalities, using different types of implementation and achieving a range of improvements. A number of these applications will be discussed in more detail, concentrating first on fluorescence imaging systems, namely, confocal and multiphoton microscopy; then forms of nonlinear microscopy that use an intrinsic property of the sample as the contrast mechanism, i.e., second-harmonic imaging microscopy; and finally super-resolution techniques such as stimulated emission depletion microscopy (STED).

Confocal and Multiphoton Microscopy: Fluorescence Imaging

In confocal microscopy, the use of a pinhole in front of the detector rejects out-of-focus light, increasing the ratio of the desired signal to unwanted background and producing images with greatly improved contrast and, in theory, diffraction-limited resolution. A 2D image can be formed either by scanning the beam across the sample or by scanning the sample itself, and a 3D image stack is formed by then stepping the sample or the objective in the axial direction. Confocal microscopy is more often than not used in fluorescence mode, with the pinhole rejecting the out-of-focus fluorescent light from the sample. It has become the workhorse of many life science laboratories, providing valuable insight into the inner workings of cell design and function. Multiphoton microscopy functions through the nonlinear excitation of fluorophores, resulting in fluorescence within a small volume of the sample. Typically, two low-energy long-wavelength photons are absorbed simultaneously to excite a single higher-energy shorter-wavelength fluorescent photon. The probability of this occurring is very low, so in order for the two excitation photons to be absorbed simultaneously a high photon density is needed. Hence fluorescence is only generated at the focus of the laser beam, removing the need for a confocal pinhole and making the technique inherently optically sectioning. When collecting fluorescence in the visible, the use of longer near-infrared excitation wavelengths allows for deeper imaging, since the effects associated with scattering are greatly reduced. Fluorescence, confocal, and multiphoton microscopies are common in many life science laboratories and are used routinely in in vitro and in vivo experiments on cells and tissue.
All of the early work on AO in microscopy concentrated on these imaging systems, and indeed many of the more recent publications are based on improving these techniques and showing further enhancements in image quality and resolution. One of the first experimental feasibility studies of AO in microscopy used a multiphoton microscope, measured the aberrations using a wavefront sensor, and employed a spatial light modulator as the correction device [33]. The first practical implementation of adaptive optics in multiphoton microscopy implemented a hill-climbing
algorithm and optimized the shape of a deformable membrane mirror using intensity as the merit factor [11]. Using a 1.3 NA, 40× oil immersion objective, imaging in water (to induce refractive index mismatch), the maximum scanning depth was increased from 3.4 to 46 μm. In 2006 Rueckel et al. [34] incorporated a coherence-gated approach for wavefront sensing with an AO mirror into a closed-loop mechanism. Only light that had been scattered close to the focus was measured to determine the wavefront. They showed that this approach could correct for aberrations up to a depth of 200 μm in zebrafish larvae. Applying the pupil segmentation approach detailed in section "Pupil Segmentation" to multiphoton microscopy in 2011, Ji et al. recovered diffraction-limited resolution at depths of 450 μm in brain tissue and improved image quality over fields of view of hundreds of microns [29].

Intrinsic Nonlinear Techniques (SHG, THG, and CARS)

Second-harmonic generation (SHG), third-harmonic generation (THG), and coherent anti-Stokes Raman scattering (CARS) microscopy are all types of nonlinear imaging modalities that are used frequently in biological imaging. SHG and THG are special cases of sum frequency generation where photons interact with a particular nonlinear material, for example, collagen, to form signal photons with two or three times the energy, respectively. CARS is a four-wave mixing process where a Stokes beam ω_s and two pump beams ω_p interact at the sample to produce an anti-Stokes beam, having frequency ω_as = 2ω_p − ω_s. By altering the beat frequency of the Stokes and pump beams to coincide with the frequency of a specific molecular vibration, the oscillation is driven coherently, and a particular chemical vibration can be targeted. Like the multiphoton microscopy described above, all these techniques require a high peak power and high photon density, which is only available at the focal volume, giving optical sectioning capabilities and improved axial and lateral resolution. The reliance on a high photon density and a "good quality" focus makes such techniques highly susceptible to aberrations and therefore makes the implementation of AO beneficial. With coherent nonlinear microscopies such as SHG, THG, and CARS, signal levels also depend on the phase distribution near the focal volume [35]. Since these techniques all arise from the intrinsic properties of the material, there is no need to fluorescently label the sample and therefore no risk of photo-bleaching; in addition, fluorophores can introduce unwanted sample perturbations, and labeling can be hard to achieve in some samples. Much of the work on AO in SHG and THG microscopy has looked at improving the quality of images showing embryo development and in particular highlighting the dynamic nature of this process.
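Returning to the CARS relation above: in wavelength terms ω_as = 2ω_p − ω_s becomes 1/λ_as = 2/λ_p − 1/λ_s. The pump/Stokes pair below is an illustrative combination (its beat frequency falls near the CH₂ stretch), not one taken from the cited work:

```python
# Anti-Stokes wavelength from the CARS four-wave mixing relation
# omega_as = 2*omega_p - omega_s, rewritten for vacuum wavelengths.

def anti_stokes_nm(pump_nm, stokes_nm):
    return 1.0 / (2.0 / pump_nm - 1.0 / stokes_nm)

# An 817 nm pump with a 1064 nm Stokes beam gives a signal near 663 nm:
l_as = anti_stokes_nm(817.0, 1064.0)
```

Because the anti-Stokes signal is blue-shifted relative to both input beams, it can be separated from them (and from any fluorescence background) with a simple short-pass filter.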
The THG signal typically shows the cellular and subcellular structures of the embryo, whereas the SHG signal highlights the zona pellucida (the membrane surrounding the oocyte) [36]. In 2009 Oliver et al. proposed a modal correction scheme for determining the required DMM shape using the 11 most influential eigenmodes of the DMM, excluding tip, tilt, and defocus, thus allowing the correction phase to be accurately produced by the DMM. They used the sharpness of a small region of interest as their merit factor and showed a 2.5× increase
in sample illumination [37]. Also in 2009, Jesacher et al. imaged a mammalian embryo using SHG and THG, incorporating AO into their system. They used a modal correction scheme, akin to that detailed in section "Modal Sensing," but using the Zernike modes and intensity as their figure of merit, and reported signal increases of 21% and 9% for SHG and THG, respectively. Interestingly, they were able to confirm that the correction achieved when looking at an SHG signal was similar to that achieved when looking at a THG signal, suggesting that both modalities are affected by the same aberrations [36]. AO has been incorporated into a CARS microscope using a search algorithm to optimize the intensity of a point in a sample, with a look-up table to provide a good starting wavefront in order to improve the rate of optimization. The look-up table had been previously determined using a similar tissue sliced to a variety of thicknesses and also included a correction shape for the system-induced aberrations, determined by focusing on a coverslip. In this paper the significance of look-up tables is demonstrated as a possible practical means of incorporating adaptive optics in an in vivo biological imaging situation. Figure 7 shows sample CARS images with and without AO and with only the system-induced aberrations corrected at a depth of 260 μm in chicken muscle.

Single-Molecule Imaging

Single-molecule super-resolution techniques such as STORM [39] (stochastic optical reconstruction microscopy) and PALM [40] (photoactivated localization microscopy) obtain super-resolution through the localization of individual fluorescent emitters, using a centroiding algorithm (i.e., 2D Gaussian fitting) to accurately determine each emitter's position. Both techniques require each fluorophore to switch on and off individually in order to ensure that the molecules fluorescing at any particular time are sparse enough for their positions to be determined with sufficient accuracy. Since the precision with which an isolated emitter can be localized is limited by the collected intensity (Eq. 7), any improvements made to reduce systematic aberrations will improve the resolution and the speed at which a data set can be collected. The localization precision is given by

Localization ≈ FWHM_PSF / √N    (7)

where the localization precision is dependent on the full width at half maximum (FWHM) of the point spread function (PSF) and the number of photons N used in the fit. By applying a known amount of astigmatism to the beam, one can infer the position along the optical axis of a single point emitter based on the shape of the PSF. Initially this was performed using a cylindrical lens [7]. Since then, further improvements in axial resolution have been achieved through PSF engineering, using a spatial light modulator displaying a diffractive pattern to generate a rotating double-helix intensity pattern along the axial direction [41].
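Eq. 7 can be evaluated directly, and checked with a small Monte Carlo experiment in which the centroid of N photon positions drawn from a Gaussian PSF is localized repeatedly; the FWHM and photon numbers are illustrative values, not from a specific instrument.

```python
import math
import random

# Localization precision per Eq. 7: the emitter position is found to
# roughly FWHM/sqrt(N), far below the diffraction limit.

def precision_nm(fwhm_nm, n_photons):
    return fwhm_nm / math.sqrt(n_photons)

# e.g. a 250 nm PSF localized with 2500 photons gives ~5 nm precision
p = precision_nm(250.0, 2500)

# Monte Carlo check: centroid scatter of N photons from a Gaussian PSF
# follows sigma/sqrt(N) (sigma = FWHM/2.355), consistent with Eq. 7 up to
# the FWHM-to-sigma conversion factor.
random.seed(0)
sigma = 250.0 / 2.355
trials = [sum(random.gauss(0.0, sigma) for _ in range(2500)) / 2500
          for _ in range(200)]
mc = (sum(t * t for t in trials) / len(trials)) ** 0.5  # empirical scatter
```

The 1/√N scaling is why aberrations are so costly here: a blurred PSF both widens the FWHM in the numerator and, by spreading the photons, reduces the effective N available within the fitting window.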

Fig. 7 Adipose globular deposit imaged with CARS microscopy at 260 μm depth in white chicken muscle. Top: comparison of the image (a) without correction, (b) with the system-induced aberrations corrected, and (c) with the system- and sample-induced aberrations corrected. The arrow marks the point on the sample where the beam was parked and the intensity optimized during the correction process. Bottom: line intensity plots of the three images for comparison [38]

Stimulated Emission Depletion Microscopy (STED)

Stimulated emission depletion microscopy (STED) [42] combines a depletion beam and an excitation beam which overlap spatially and temporally. The excitation beam normally has a Gaussian intensity profile, but the depletion beam is doughnut shaped with a region of zero intensity at its center. The depletion beam de-excites all but a small region within the center of the excitation beam which is smaller than the diffraction limit. By then collecting only light from within this region, one effectively surpasses the diffraction limit imposed by Abbe (Eq. 1), and super-resolution is achieved. Here super-resolution typically only occurs in the lateral plane, and although there are methods to attain 3D super-resolution, these are limited to thin specimens due either to the complexity of the system or to aberrations. In an attempt to overcome this issue, Klar et al. used an SLM to create a phase mask that forms a beam with a ring-shaped focus and additional high-intensity lobes both above and below the central zero-intensity region, thus providing super-resolution in the axial as well as the lateral directions [43]. The approach by Klar et al. proved successful on thin samples, but the novel beam was highly susceptible to aberrations and therefore not suitable for imaging thick samples [44]. AO was first implemented in STED by Gould et al. in 2012, in a 3D STED system similar to that described above with an intensity maximum above and below the central doughnut beam. Their approach was based on the modal approach described in section "Modal Sensing," using Zernike modes (minus tip, tilt, and defocus) to express the wavefront, but with two SLMs: one in the excitation beam and the other already present in the depletion beam but now used to correct for aberration as well as to form the 3D zero-intensity region.
Image brightness proved insensitive as a figure of merit: when the two beams no longer overlap, a conventional confocal image with a higher average intensity is formed instead of a STED image. A new metric was therefore defined that incorporated both the image intensity and the image sharpness. First, the system was operated in standard confocal mode and the aberration correction in the excitation path was determined using intensity as the merit factor; the system was then switched to STED mode and the new combined metric applied. Figure 8 shows the success of this technique when applied to imaging fluorescent beads through zebrafish retina ~14 μm thick. In a second, more recent publication, the same team demonstrated that by applying a similar method but correcting only for tip, tilt, and defocus, they could improve the spatial alignment and overlap of the excitation and depletion beams [45].
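A minimal sketch of this kind of modal, metric-driven correction is given below. The combined metric and the parabolic fit over mode amplitudes are illustrative assumptions (Gould et al.'s exact metric differs), and `acquire` and `apply_mode` stand in for hypothetical camera and SLM interfaces.

```python
import numpy as np

def combined_metric(image):
    """Illustrative merit factor mixing brightness and sharpness;
    the published combined metric is defined differently."""
    gy, gx = np.gradient(image.astype(float))
    return image.mean() * np.mean(gx ** 2 + gy ** 2)

def correct_mode(acquire, apply_mode, amplitudes):
    """Scan the amplitude of one Zernike mode, fit a parabola to the
    measured metric, and leave the SLM set to the best amplitude."""
    metrics = []
    for a in amplitudes:
        apply_mode(a)                      # write this mode amplitude to the SLM
        metrics.append(combined_metric(acquire()))
    c2, c1, _ = np.polyfit(amplitudes, metrics, 2)
    # vertex of the fitted parabola, falling back to the best sampled point
    best = -c1 / (2 * c2) if c2 < 0 else amplitudes[int(np.argmax(metrics))]
    apply_mode(best)
    return best
```

In a real system this scan would be repeated mode by mode (and possibly iterated), as in the modal sensing scheme described earlier in the chapter.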

Selective Plane Illumination Microscopy (SPIM)

Selective plane illumination microscopy (SPIM), also known as light-sheet microscopy, is an imaging modality in which the sample is illuminated from the side with a thin plane of light and the fluorescence is collected perpendicular to this plane using a conventional microscope objective lens and a wide-field camera [47].

Fig. 8 AO STED images of fluorescent beads through zebrafish retina sections. (a–f) Results for beads imaged through ~14 μm of retina. Lateral and axial sections of a single fluorescent bead imaged in (a, d) confocal, (b, e) STED, and (c, f) AO STED show improvement in signal and resolution when adaptive aberration correction is applied to the depletion beam path. (g–l) Similar image sequences for beads imaged through ~25 μm of retina. Axial bead profile widths in the AO STED images were ~208 and ~249 nm for (f) and (l), respectively. Color bar in (e) also applies to (b), (c), and (f). Color bar in (k) also applies to (h), (i), and (l). (m–o) Volume renderings for data shown in (a–f) for (m) confocal, (n) STED, and (o) AO STED data. (n) and (o) plotted on same color scale for comparison of signal [46]

The thin illuminating plane is usually formed using a cylindrical lens, but more recently scanning laser techniques utilizing Bessel beams have been used to reduce the effects of scattering and substantially improve axial resolution [48]. Bourgenot et al. utilized the indirect modal sensing approach employing Zernike modes (section “Modal Sensing”) to correct for aberrations when imaging a zebrafish within a glass borosilicate pipette [49]. They altered the improvement metric depending on the level of aberrations present: for highly aberrated regions of the sample, image contrast was used as the measure, whereas when aberrations were relatively weak, they sought to maximize the high spatial frequencies of the image. A significant level of improvement was achieved, as can be seen in Fig. 9a–c,

Fig. 9 AO in SPIM with a zebrafish in a glass borosilicate pipette. (a, b, c) Images taken with a flat mirror shape, a mirror shape optimized for the system aberrations, and a mirror shape optimized directly on the fish, respectively. The white square corresponds to the region of interest on which the optimization is performed and is 24 μm wide. (d) The metric, normalized to the uncorrected values during the z-stack, as a function of imaging depth, when the mirror is flat (blue) and optimized (black). The green vertical lines correspond to where the mirror has been optimized. The purple vertical line shows when the region of interest has been moved. (e) The Zernike mode amplitudes at different depths. Mode 4 and mode 6 are focus and astigmatism, respectively [49]

although it is clear from Fig. 9b, e that in this particular case they are mostly correcting for aberrations introduced by the optical system rather than by the sample itself.
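The two metrics alternated between in this scheme can each be computed in a few lines. The contrast definition and the spatial-frequency cutoff below are illustrative assumptions rather than the exact choices made by Bourgenot et al.

```python
import numpy as np

def contrast_metric(image):
    """Simple normalized contrast, usable even when aberrations are severe."""
    return image.std() / (image.mean() + 1e-12)

def high_frequency_metric(image, cutoff=0.25):
    """Fraction of spectral energy above a normalized spatial-frequency
    cutoff, useful once the image is already close to well corrected."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    fy = np.fft.fftshift(np.fft.fftfreq(image.shape[0]))
    fx = np.fft.fftshift(np.fft.fftfreq(image.shape[1]))
    fr = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    return spectrum[fr > cutoff].sum() / spectrum.sum()
```

A sharp, well-corrected image scores high on the second metric, while the first remains informative even when fine structure is washed out.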

Optical Trapping

Optical trapping is a microscopy technique whereby micron-sized objects and cells can be manipulated in three dimensions at the focus of a laser beam. Optical trapping systems are often built around commercial microscopes and require a laser beam focused by a high numerical aperture objective lens, typically an oil immersion lens with a numerical aperture of 1.3. The optical trapping force is a direct result of the tightly focused light and is proportional to the intensity gradient of the beam; any deterioration of the focal spot therefore directly reduces the available trapping force. This often places limits on the depth at which it is possible to trap a particle and also on the minimum size of particle that can be trapped [50]. In the case of optically trapped cells, users are often forced to increase the power of their laser beam to increase the optical trapping force, leading to high levels of sample photon exposure.
For small displacements x from equilibrium, an optical trap can be likened to a mass on a spring, with a restoring force F acting to bring the particle back to its equilibrium position (F = −kx). The constant of proportionality k, referred to either as the spring constant or the trap stiffness, gives a measure of how strongly or weakly an object is trapped. As an object is trapped deeper into a sample, k noticeably decreases due to the presence of aberrations and a broader focal spot.
Many optical trapping systems are holographic systems incorporating an SLM into the microscope optics. The ability to have complete control over the phase of the input beam has made it possible to trap multiple objects in three dimensions with independent control over the objects' positions, to trap large arrays of objects, and also to spin and rotate objects by using beams carrying angular momentum.
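The spring picture also suggests how k is measured in practice: by the equipartition theorem (not covered in this chapter), the thermal position fluctuations of a trapped bead satisfy k⟨x²⟩ = k_BT. A minimal sketch, with illustrative bead positions and temperature:

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def trap_stiffness(positions_m, temperature_k=298.0):
    """Estimate the trap stiffness k (N/m) along one axis from recorded
    bead positions (in meters), via the equipartition relation k<x^2> = kB*T."""
    x = np.asarray(positions_m, dtype=float)
    return KB * temperature_k / np.var(x)

# illustrative data: 20 nm rms thermal fluctuations at room temperature
rng = np.random.default_rng(0)
positions = rng.normal(0.0, 20e-9, 100_000)
k = trap_stiffness(positions)  # on the order of 1e-5 N/m (0.01 pN/nm)
```

Repeating such a measurement at increasing depth would reveal the decrease of k caused by aberrations that the text describes.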
In the early models, the SLM itself was often the cause of considerable aberration, and an initial correction would therefore be applied to the system to allow for this [51]. SLM technology has improved considerably over the last decade, and the devices already present in optical trapping systems are now being used to correct for system- and sample-induced aberrations as well as to control the position, orientation, rotation, etc., of the trapped object. For example, in 2010 Čižmár et al. used an SLM-based approach that worked on the principle that an optical field propagating through any system can be described as a composition of arbitrary orthogonal modes; regardless of the system or mode set, an optimal focus is achieved when all the modes meet at a selected point in space with the same phase [52]. They divided their SLM into a rectangular lattice, with each rectangle projecting a different mode onto the sample. A reference beam was created in the first diffracted order of the SLM and was interfered with each mode in turn. The phase of each mode was corrected by maximizing the interference signal. This approach is in many ways similar to that described above in section “Pupil Segmentation.”
Several groups have also taken a DMM approach to correcting for the aberrations present in an optical trapping system. Ota et al. predetermined the amount of

spherical aberration required at a given depth and applied this amount using a DMM [53]. Several groups have used optimization algorithm approaches to determine the wavefront correction required by the DMM. Theofanidou et al. maximized a multiphoton fluorescence signal from an optically trapped fluorescent bead [54], and Müllenbroich et al. minimized the displacement from the equilibrium position when a trapped bead experienced an external force [55].
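The segment-by-segment phase optimization of Čižmár et al. amounts to a phase-stepping measurement. In the sketch below, the detected signal is assumed to follow A + B·cos(φ − φ₀) as the segment phase φ is stepped, and `measure_intensity` stands in for a hypothetical detector readout; neither is taken from the original paper.

```python
import numpy as np

def optimal_segment_phase(measure_intensity, n_steps=8):
    """Step one SLM segment's phase through 2*pi against the reference
    beam and return the phase offset phi0 that maximizes the assumed
    interference signal A + B*cos(phi - phi0)."""
    phis = np.linspace(0.0, 2 * np.pi, n_steps, endpoint=False)
    signal = np.array([measure_intensity(p) for p in phis])
    # lock-in style: the first Fourier coefficient of the stepped signal
    # equals (n_steps*B/2)*exp(-1j*phi0) for n_steps >= 3
    coeff = np.sum(signal * np.exp(-1j * phis))
    return (-np.angle(coeff)) % (2 * np.pi)
```

Setting each segment to its measured φ₀ brings all modes to the target point in phase, which is precisely the optimal-focus condition stated above.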

Summary

AO aims to shape the wavefront of the incoming beam with equal but opposite distortion to that introduced by the sample and imaging optics, in order to restore image quality and resolution. It is a concept that originated in optical astronomy and is now commonplace in most ground-based optical telescopes. It has also been used extensively in ophthalmology, enabling scientists to produce high-quality images of the cones and rods on the retina at the back of the eye [56, 57] and serving as a method for improving vision by making bespoke contact lenses and glasses [58]. In this chapter we have discussed the application of AO to optical microscopy. Like all optical systems used away from their specific design criteria, microscopes suffer from aberrations which deteriorate the image quality, reduce resolution, and significantly limit the amount of useful information that can be gained from a particular image.
The application of AO to microscopy has come in many forms, from DMMs to SLMs and from direct wavefront measurements to stochastic algorithm-based approaches. As yet there is no clear leader or accepted routine for correcting for aberrations in an optical microscope. This is due in part to the complex nature of the problem and to the truly inhomogeneous nature of tissue samples, which makes sample-to-sample generalization extremely difficult. In the last few years, several commercial AO systems have become available as add-ons to existing microscope systems. For example, Thorlabs provides an AO kit comprising a DMM, a Shack-Hartmann wavefront sensor, the appropriate optomechanics, and stand-alone closed-loop control software for determining the correction required. Imagine Optic sells a similar kit containing a DMM, a Shack-Hartmann wavefront sensor, and an appropriate software package. In addition, Imagine Optic sells a bespoke software package called “GENAO” for use where direct wavefront sensor measurements are not possible.
GENAO uses an evolutionary algorithm with the Zernike modes as the basis set and iteratively improves a chosen property of the image.
Hopefully this chapter has highlighted the wide variety of microscope modalities that stand to benefit from some form of AO, from confocal microscopy (the workhorse of many life science laboratories) to super-resolution approaches such as STED microscopy. What all the techniques discussed have in common is their reliance on a tightly formed, good-quality focal spot, without which high resolution and decent image quality are not possible. Many applications of AO in microscopy have concentrated on imaging biological tissues both in vivo and in vitro; without AO, the user is often forced to increase the laser beam power as they image

deeper into the sample in order to achieve a measurable level of signal. With AO in place, the signal levels deep into the sample can be comparable to those possible at the surface; the user therefore no longer needs to increase the power of the laser beam, greatly reducing the risk of sample photodamage.
It is common to split the corrected aberrations into two categories: system aberrations, resulting from the optical system, and sample aberrations, arising from the sample itself. System aberrations are often thought of as remaining fixed with depth, whereas sample aberrations vary the deeper a user images into the sample. From a practical point of view, this can be very useful; by first determining the system correction required, the user has a starting point to work from and enough signal available to go on and correct for the sample aberrations. In many cases a significant image improvement is achieved purely by correcting for system-induced aberrations resulting from very subtle misalignments in the optics or from optics being used away from their specific design criteria.
Indirect wavefront sensing approaches are often favored over direct wavefront sensing techniques due to the lack of a suitable laser guide star, or point source, within a sample at the required depth. Indirect wavefront sensing approaches rely on selecting a property of an image which can be measured and used as an indicator of the level of improvement achieved. It is extremely important that an appropriate indicator, or fitness metric, is chosen; otherwise the overall outcome of the AO system will be limited regardless of the method used to determine the final correction to be applied. A metric is needed that relates directly to image quality and is sensitive to the level of aberrations present in a particular system; in many cases, a metric that can be read out easily and at speed is also desirable.
For this reason intensity is often used, whether it is the intensity of a point in the image or the average intensity of a region of interest. Intensity can be recorded almost instantaneously and more often than not is a clear indicator of the level of image quality. Section “Applications and Implementation” includes examples where intensity was found to be inappropriate due to its lack of sensitivity, and where metrics that incorporate image sharpness or contrast were employed instead.
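An evolutionary, metric-driven search of the kind GENAO performs can be sketched as a simple (1+1) evolution strategy over a vector of Zernike coefficients. The mutation scale and generation count below are arbitrary illustrative choices, and `metric_of` stands in for whatever image-quality metric is chosen.

```python
import numpy as np

def evolve_correction(metric_of, n_modes=8, n_generations=300, sigma=0.1, seed=0):
    """(1+1) evolution strategy: mutate the current Zernike coefficient
    vector and keep the mutant only if the image metric improves.
    metric_of(coeffs) -> scalar metric, higher meaning a better image."""
    rng = np.random.default_rng(seed)
    best = np.zeros(n_modes)
    best_score = metric_of(best)
    for _ in range(n_generations):
        trial = best + rng.normal(0.0, sigma, n_modes)
        score = metric_of(trial)
        if score > best_score:
            best, best_score = trial, score
    return best, best_score
```

Because only improvements are accepted, the metric never decreases, but at the cost of many image acquisitions; this is the photobleaching concern raised for algorithm-based AO in reference [27].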

References

1. Schwertner M, Booth M, Wilson T (2007) Specimen-induced distortions in light microscopy. J Microsc 228(1):97–102
2. Fahrbach FO, Rohrbach A (2010) A line scanned light-sheet microscope with phase shaped self-reconstructing beams. Opt Express 18(23):24229–24244
3. Zhu D et al (2013) Recent progress in tissue optical clearing. Laser Photon Rev 7(5):732–757
4. Combs CA et al (2007) Optimization of multiphoton excitation microscopy by total emission detection using a parabolic light reflector. J Microsc 228(3):330–337
5. Combs CA et al (2010) Optimizing multi-photon fluorescence microscopy light collection from living tissue by non-contact total emission detection (TEDII). Biophys J 98(3):180a
6. Tyson R (2010) Principles of adaptive optics. CRC Press, London, UK
7. Huang B et al (2008) Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy. Science 319(5864):810–813

8. Albert O et al (2000) Smart microscope: an adaptive optics learning system for aberration correction in multiphoton confocal microscopy. Opt Lett 25(1):52–54
9. Sherman L et al (2002) Adaptive correction of depth-induced aberrations in multiphoton scanning microscopy using a deformable mirror. J Microsc 206(1):65–71
10. Booth MJ et al (2002) Adaptive aberration correction in a confocal microscope. Proc Natl Acad Sci 99(9):5788–5792
11. Marsh P, Burns D, Girkin J (2003) Practical implementation of adaptive optics in multiphoton microscopy. Opt Express 11(10):1123–1130
12. Coulman C (1985) Fundamental and applied aspects of astronomical ‘seeing’. Ann Rev Astron Astrophys 23:19–57
13. Babcock HW (1953) The possibility of compensating astronomical seeing. Publ Astron Soc Pac 65(386):229–236
14. Greenaway A, Burnett J (2004) Industrial and medical applications of adaptive optics. Technology tracking. IOP Publishing Ltd, Bristol, UK. ISBN 0-7503-0850-8
15. Dalimier E, Dainty C (2005) Comparative analysis of deformable mirrors for ocular adaptive optics. Opt Express 13(11):4275–4285
16. Wright AJ et al (2005) Dynamic closed-loop system for focus tracking using a spatial light modulator and a deformable membrane mirror. Opt Express 14(1):222–228
17. Shack RV, Platt B (1971) Production and use of a lenticular Hartmann screen. J Opt Soc Am 61(5):656
18. Fienup JR (1982) Phase retrieval algorithms: a comparison. Appl Optics 21(15):2758–2769
19. Booth MJ, Neil M, Wilson T (1998) Aberration correction for confocal imaging in refractive-index-mismatched media. J Microsc 192(2):90–98
20. Jiang M et al (2010) Adaptive optics photoacoustic microscopy. Opt Express 18(21):21770–21776
21. Tao X et al (2011) Adaptive optics confocal microscopy using direct wavefront sensing. Opt Lett 36(7):1062–1064
22. Feierabend M, Ruckel M, Denk W (2004) Coherence-gated wave-front sensing in strongly scattering samples. Opt Lett 29(19):2255–2257
23. Neil M, Booth M, Wilson T (2000) Closed-loop aberration correction by use of a modal Zernike wave-front sensor. Opt Lett 25(15):1083–1085
24. Wang B, Booth MJ (2009) Optimum deformable mirror modes for sensorless adaptive optics. Opt Commun 282(23):4467–4474
25. Wright AJ et al (2005) Exploration of the optimisation algorithms used in the implementation of adaptive optics in confocal and multiphoton microscopy. Microsc Res Tech 67:36–44
26. Poland SP, Wright AJ, Girkin JM (2008) Evaluation of fitness parameters used in an iterative approach to aberration correction in optical sectioning microscopy. Appl Optics 47(6):731–736
27. Müllenbroich MC et al (2014) Strategies to overcome photobleaching in algorithm based adaptive optics for nonlinear in-vivo imaging. J Biomed Opt 19(1):016021
28. Ji N, Milkie DE, Betzig E (2010) Adaptive optics via pupil segmentation for high-resolution imaging in biological tissues. Nat Methods 7(2):141–147
29. Ji N, Sato TR, Betzig E (2012) Characterization and adaptive optical correction of aberrations during in vivo imaging in the mouse cortex. Proc Natl Acad Sci U S A 109(1):22–27
30. Török P et al (1995) Electromagnetic diffraction of light focused through a planar interface between materials of mismatched refractive indices: an integral representation. J Opt Soc Am A 12(2):325–332
31. Kner P et al (2010) High-resolution wide-field microscopy with adaptive optics for spherical correction and motionless focusing. J Microsc 237:136–147
32. Booth MJ, Neil MAA, Wilson T (1998) Adaptive aberration imaging in refractive-index-mismatched media. J Microsc 192:90–98
33. Neil M et al (2000) Adaptive aberration correction in a two-photon microscope. J Microsc 200(2):105–108

34. Rueckel M, Mack-Bucher JA, Denk W (2006) Adaptive wavefront correction in two-photon microscopy using coherence-gated wavefront sensing. Proc Natl Acad Sci 103(46):17137–17142
35. Olivier N, Beaurepaire E (2008) Third-harmonic generation microscopy with focus-engineered beams: a numerical study. Opt Express 16(19):14703–14715
36. Jesacher A et al (2009) Adaptive harmonic generation microscopy of mammalian embryos. Opt Lett 34(20):3154–3156
37. Olivier N, Debarre D, Beaurepaire E (2009) Dynamic aberration correction for multiphoton microscopy. Opt Lett 34(20):3145–3147
38. Wright AJ et al (2007) Adaptive optics for enhanced signal in CARS microscopy. Opt Express 15(26):18209–18219
39. Rust MJ, Bates M, Zhuang X (2006) Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nat Methods 3(10):793–796
40. Betzig E et al (2006) Imaging intracellular fluorescent proteins at nanometer resolution. Science 313(5793):1642–1645
41. Pavani SRP et al (2009) Three-dimensional, single-molecule fluorescence imaging beyond the diffraction limit by using a double-helix point spread function. Proc Natl Acad Sci 106(9):2995–2999
42. Hell SW, Wichmann J (1994) Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy. Opt Lett 19(11):780–782
43. Klar TA et al (2000) Fluorescence microscopy with diffraction resolution barrier broken by stimulated emission. Proc Natl Acad Sci U S A 97(15):8206–8210
44. Deng S et al (2009) Investigation of the influence of the aberration induced by a plane interface on STED microscopy. Opt Express 17(3):1714–1725
45. Gould TJ et al (2013) Auto-aligning stimulated emission depletion microscope using adaptive optics. Opt Lett 38(11):1860–1862
46. Gould TJ et al (2012) Adaptive optics enables 3D STED microscopy in aberrating specimens. Opt Express 20:20998–21009
47. Huisken J et al (2004) Optical sectioning deep inside live embryos by selective plane illumination microscopy. Science 305:1007–1009
48. Fahrbach FO, Simon P, Rohrbach A (2010) Microscopy with self-reconstructing beams. Nat Photon 4:780–785
49. Bourgenot C et al (2012) 3D adaptive optics in a light sheet microscope. Opt Express 20:13252–13261
50. Hajizadeh F, Nader S, Reihani S (2010) Optimized optical trapping of gold nanoparticles. Opt Express 18(2):551–559
51. Jesacher A et al (2007) Wavefront correction of spatial light modulators using an optical vortex image. Opt Express 15(9):5801–5808
52. Čižmár T, Mazilu M, Dholakia K (2010) In situ wavefront correction and its application to micromanipulation. Nat Photon 4:388
53. Ota T (2003) Enhancement of laser trapping force by spherical aberration correction using a deformable mirror. Jpn J Appl Phys 42:L701
54. Theofanidou E et al (2004) Spherical aberration correction for optical tweezers. Opt Commun 236:145
55. Müllenbroich MC, McAlinden N, Wright AJ (2013) Adaptive optics in an optical trapping system for enhanced lateral trap stiffness at depth. J Opt 15:075305
56. Baraas RC et al (2007) Adaptive optics retinal imaging reveals S-cone dystrophy in tritan color-vision deficiency. JOSA A 24(5):1438–1447
57. Li KY, Roorda A (2007) Automated identification of cone photoreceptors in adaptive optics retinal images. JOSA A 24(5):1358–1363
58. Liang J, Williams DR, Miller DT (1997) Supernormal vision and high-resolution retinal imaging through adaptive optics. JOSA A 14(11):2884–2892

Resonant Waveguide Imaging of Living Systems: From Evanescent to Propagative Light

20

F. Argoul, L. Berguiga, J. Elezgaray, and A. Arneodo

Contents
Introduction . . . . . . 614
Waveguide-Based Sensors . . . . . . 615
Enhancement of the Guiding Wave Mechanism by Surface Plasmon Resonance . . . . . . 616
Waveguides and Total Internal Reflection (TIR) . . . . . . 617
Total Internal Reflection . . . . . . 618
Waveguide Resonance . . . . . . 621
From TIR Microscopy to Dielectric Waveguide Microscopy . . . . . . 623
Interference Reflection Microscopy . . . . . . 623
Dielectric Waveguide Microscopy . . . . . . 624
Improving Waveguide Microscopy with Structured Illumination . . . . . . 626
Surface Plasmon Resonance: From Evanescent to Guided Waves . . . . . . 629
Principles of Surface Plasmon Resonance . . . . . . 629
Sensitivity of SPR to Dielectric Layers: From Evanescent to Guided Waves . . . . . . 633
SPR-Based Microscopy for Biological Applications . . . . . . 637
Principles of Scanning Surface Plasmon Microscopy . . . . . . 637
V(Z) Response to a Discrete Phase Jump . . . . . . 640
V(Z) Responses from Glass//Water and Glass//Gold//Water Interfaces . . . . . . 641

F. Argoul (*) • A. Arneodo
LOMA (Laboratoire Ondes et Matière d’Aquitaine), CNRS, UMR 5798, Université de Bordeaux, Talence, France
CNRS UMR5672, LP ENS Lyon, Université de Lyon, Lyon, France
e-mail: [email protected]; [email protected]
L. Berguiga
CNRS, UMR 5270, INL, INSA Lyon, Bâtiment Blaise Pascal, Villeurbanne, France
e-mail: lotfi[email protected]
J. Elezgaray
CBMN, CNRS UMR5248, Université de Bordeaux, Pessac, France
e-mail: [email protected]
© Springer Science+Business Media B.V. 2017
A.H.-P. Ho et al. (eds.), Handbook of Photonics for Biomedical Engineering, DOI 10.1007/978-94-007-5052-4_40

SPRWG Microscopy on Thick Dielectric Layers . . . . . . 642
SPR and SPRWG Microscopy on Living Cells . . . . . . 644
Conclusion . . . . . . 646
References . . . . . . 647

Abstract

For more than 50 years, resonant waveguides (RWGs) have offered highly sensitive label-free sensing platforms to monitor surface processes such as protein adsorption, affinity binding, and monolayer-to-multilayer build-up, as well as bacteria and, more generally, adherent or confined living mammalian cells and tissues. The sensitivity of symmetrical planar dielectric RWGs was improved by coating at least one of their surfaces with metal to exploit surface plasmon resonance (SPRWG). However, RWG sensitivity was often obtained at the expense of spatial resolution and could not compete with other high-resolution fluorescence microscopies. For years, RWGs were therefore only rarely combined with high-resolution microscopy. Only recently have improvements in intensity and phase light-modulation techniques and the availability of low-cost high numerical aperture lenses drastically changed the devices and methodologies based on RWGs. We illustrate in this chapter how these different technical and methodological evolutions have offered new, versatile, and powerful imaging tools to the biological community.

Keywords

Resonant waveguides • Surface plasmon resonance • Goos-Hänchen effect • Evanescent field microscopy • Guided-wave microscopy • High-resolution imaging of living cells

Introduction

Our ability to examine living cells in their native context is crucial to understanding their dynamics, structural functions, and transformations in health and disease. Cell-based optical assays have gained popularity in drug discovery and diagnosis because they allow extraction of functional and local information that would otherwise be lost with biochemical assays. Cell-based assays mostly focus on specific cellular events traced out by identified tagged (fluorescent) biomolecular targets. However, despite their high specificity, these “target”-based assays require more manipulations (e.g., over-expression of targets with and without a readout tag) than biochemical assays, and they often change the very native cell event they are meant to report. Cell-based assays that could provide noninvasive and real-time recording of native cellular activity with high sensitivity have therefore been the subject of intensive research for more than 50 years. A partial response to this quest was provided by optical biosensors based on total internal reflection (TIR) and evanescent waves. These optical biosensors share a common high surface-specific sensitivity. They have been developed in a variety of configurations, including spectroscopy (IR, Raman),

fluorescence (TIRF), guided-wave sensors based on surface plasmon resonance (SPR), and phase-contrast interferometry [1–3]. Biosensors based on SPR and resonant waveguide gratings were initially constructed to sense ligand affinities and the kinetics of receptors immobilized on their surface. More recently, the same methods were adapted to probe the activity of living cells, such as cell adhesion and motility, proliferation, and death [4–6]. The principle and advantage of waveguiding is to increase the sensitivity by multiple reflections and refractions of the light through, or at the interface of, the sample. However, increasing this sensitivity systematically degrades the spatial resolution and hence limits the resolution of any microscopic device that would benefit from the waveguide amplification. In this chapter, our aim is to show that a compromise can be found that optimizes both sensitivity and spatial resolution. It can be reached by combining the principles of high-resolution microscopy methods [7, 8] with waveguide techniques.
Interference reflection microscopy (IRM) [9, 10], also known as interference contrast or surface contrast microscopy, has been used since the 1970s to study a wide range of cellular behavior including cell adhesion, motility, exocytosis, and endocytosis. This technique relies on reflections of an incident beam of light as it passes through materials of different refractive indices. These reflected beams interfere, producing either constructive or destructive interference depending on the thickness and index of the layer of aqueous medium between the cellular object and the glass surface. More sophisticated and sensitive phase microscopy methods were developed recently to visualize, nonintrusively, optical index changes in living cells.
In particular, Fourier phase microscopy (FPM) [11], digital holographic microscopy (DHM) [12], and quantitative phase microscopy (QPM) [13–21] were implemented to provide quantitative phase images of biological samples with remarkable sensitivity, reproducibility, and stability over extended periods of time. In contrast to waveguide methods, quantitative phase imaging methods are derived from diffractive-optics principles and can reach a good spatial resolution, but they remain less sensitive to local optical index variations.
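The two-beam picture of IRM can be made concrete with a small model. The wavelength, gap index, and amplitude reflectances below are illustrative assumptions; the extra π accounts for the reflection at the medium/cell interface, which is why regions of close contact appear dark in IRM.

```python
import numpy as np

def irm_intensity(gap_m, wavelength_m=546e-9, n_gap=1.33, r1=0.05, r2=0.04):
    """Two-beam IRM model: reflections from the glass/medium and the
    medium/cell interfaces interfere with an optical path difference of
    2*n_gap*gap; r1 and r2 are illustrative amplitude reflectances."""
    phase = 4 * np.pi * n_gap * gap_m / wavelength_m
    return r1 ** 2 + r2 ** 2 + 2 * r1 * r2 * np.cos(phase + np.pi)
```

At zero gap the two reflections interfere destructively (dark close contacts), and the intensity first returns to a maximum when the gap reaches a quarter wavelength in the aqueous medium.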

Waveguide-Based Sensors

Planar optical waveguides utilize thin optically transparent films with a greater refractive index than the media in contact with their surfaces. Thanks to total internal reflection (TIR), for an appropriate incidence angle the light injected into a thin waveguide remains confined inside the film and may propagate over distances that depend on both the optical absorption inside the film and the reflection efficiency at its upper and lower interfaces (Fig. 1). The light is reflected on both sides of the film, and it is this repeated evanescent-wave coupling at the boundaries that guides the light. Planar optical waveguides offer highly sensitive label-free platforms to monitor surface processes in aqueous solutions, from protein adsorption, affinity binding, and monolayer-to-multilayer build-up to bacteria or even living mammalian cells [22]. RWG biosensors exploit evanescent waves generated by the resonant coupling of light into a waveguide via a diffraction grating [23, 24]. RWG biosensors typically

Fig. 1 Sketch of a planar waveguide. The waveguide is a planar film of thickness W and optical index nwg, embedded between a lower (n1) and an upper (n2) medium, both with smaller optical indices. When the thin film is homogeneous, waveguiding forces linear zigzag optical rays between the two surface boundaries

consist of a substrate (glass coverslip for instance, with or without an added layer), a waveguide film wherein a grating structure is embedded, a medium, and an adlayer (the sample and its medium to be characterized). Because the waveguide has a higher refractive index than its surrounding media, the guided light propagates within the waveguide due to the confinement by total internal reflection at the substrate-film and film-medium interfaces. However, RWG biosensors have a relatively poor lateral resolution due to the propagation distance of the guided light.
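The confinement condition can be made concrete by computing the critical angles at the two interfaces; the indices below (a high-index film on glass, in water) are illustrative values, not taken from the chapter.

```python
import numpy as np

def critical_angle_deg(n_core, n_clad):
    """Critical angle (measured from the surface normal) for total
    internal reflection at a core/cladding interface."""
    if n_clad >= n_core:
        raise ValueError("TIR requires n_core > n_clad")
    return np.degrees(np.arcsin(n_clad / n_core))

# illustrative indices: high-index waveguide film, glass substrate, aqueous medium
n_wg, n_substrate, n_medium = 2.1, 1.52, 1.33
theta_substrate = critical_angle_deg(n_wg, n_substrate)  # about 46.4 degrees
theta_medium = critical_angle_deg(n_wg, n_medium)        # about 39.3 degrees
```

A ray is guided only if its internal angle exceeds the larger of the two critical angles (here, the one at the substrate side), which is the condition sketched in Fig. 1.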

Enhancement of the Guiding Wave Mechanism by Surface Plasmon Resonance

Surface plasmon resonance (SPR) has been extensively used since the 1970s to probe minute refractive index variations at the interface between a gold film and a dielectric medium [25–27]. Many geometries, including metal films, stripes, nanoparticles, nanorods, holes, and slits, may support SPR and offer a range of remarkable properties, such as field enhancement and localization, high surface and bulk sensitivity, and subwavelength localization. This explains why SPR has found applications in fields as varied as spectroscopy, nanophotonics, biosensing, plasmonic circuitry, nanolasers, and subwavelength imaging. SPR is very popular for the detection of molecular adsorption of small molecules, thin polymer films, DNA or proteins, self-assembled layers, and so on [28, 29]. The principle of SPR detection takes advantage of the high sensitivity of the surface plasmon polariton (SPP) to refractive index gradients generated by the adsorption of molecules on gold. The plasmon wave undergoes a modification of both its amplitude and phase when it meets an obstacle of different index. The SPP is a transverse magnetic (TM), or p-polarized, surface wave that propagates along a metal-dielectric interface, typically at visible or infrared wavelengths [27]. It is a surface mode, since the electric and magnetic field amplitudes decay exponentially along the z direction, normal to the interface, into both the metal and the dielectric material. Because of dissipative losses in the metal, SPPs are also damped in their propagation

20

Resonant Waveguide Imaging of Living Systems: From Evanescent. . .

617

along the x-direction (normal to the magnetic field), with the complex propagation constant $k_x = k'_x + i k''_x$. The lateral propagation length $L_x = 1/(2k''_x)$ is responsible for the lateral resolution limitation of surface plasmon microscopy and determines to what extent the optical properties of a laterally heterogeneous, thin dielectric film deposited on top of an SPP-carrying metal surface can be analyzed by reflectivity-versus-angle scans. More elaborate SPR microscopy techniques with enhanced sensitivity do exist, based on Mach-Zehnder interferometry [30, 31] or dark-field microscopy [32]. However, prism-coupled SPR devices (Kretschmann configuration) cannot reach subwavelength resolution because of the SPP guided wave propagation inside the thin gold film [33]. Devices that replace the prism by oil-immersion [34–36] or solid-immersion [37] lenses for coupling light into SPPs circumvent this difficulty by shaping and confining the SPP laterally on the gold surface. The combination of high numerical aperture lenses and interferometric devices has pushed the resolution further down to subwavelength distances [38–44]. Two main methods have been proposed: wide-field SPR microscopy (WSPRM) and scanning SPR microscopy (SSPRM). We have focused more specifically on the second method during the past 10 years [39, 41–44]. RWG sensors were improved by replacing the substrate adlayer with a metal film in so-called surface plasmon resonance waveguide (SPRWG) microscopy [45, 46]. SPRWG couples surface plasmon resonance with waveguide excitation modes and allows the index, thickness, and anisotropy of thin dielectric films to be determined [47, 48]. SPRWG is in fact applicable to films thicker than the evanescent field length and is therefore very well suited for cellular imaging [49–52]. It benefits from the strong field enhancement by the SPP and from the guided mode configuration.
In the simplest and most common case, the sensor consists of a noble metal film evaporated on a glass support. More sophisticated multilayered structures combining metal and dielectric layers, conferring better sensitivity, have also been proposed to better match experimental requirements such as hydrophilicity or hydrophobicity, and smaller or larger k vectors [53]. Again, as in high resolution SPR microscopy, subwavelength resolution SPRWG microscopy requires confining the plasmon laterally with a high numerical aperture lens [43, 44].

Waveguides and Total Internal Reflection (TIR)

The standard ray model (Fig. 1) considers that the reflected beams at the film interfaces are not shifted upon reflection, and it does not rigorously account for the penetration (evanescent field) of the light into the upper and lower media. The transformation of a propagative light into an evanescent light in a TIR configuration is the kernel of light-guiding principles. An evanescent field settles at the interface of two media with optical indices n1 and n2 such that n2 < n1; its amplitude decays exponentially in medium 2, since the z component (normal to the interface) of its wave vector k has a nonzero imaginary part. This field is also called the near field.

618

F. Argoul et al.

Characteristic features of the near field are: (i) its residual wave vector component $k_x > \|k\| = 2\pi n_2/\lambda_0 = n_2|k_0|$, and (ii) its energy density E is greater than would be expected from the time-averaged flow of radiation through a volume element determined by the Poynting vector S: E > (8π/c)|S|. Purely propagating waves (without evanescent components) are infinite plane waves, an idealization that does not exist in nature. As soon as a light beam has a finite cross section, evanescent field components (non-propagating along at least one direction) exist; their amplitude is inversely proportional to the diameter of the light beam. They therefore play a non-negligible role in highly focused beams and in light confined in nanostructures and waveguides.
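The evanescent decay described above can be checked numerically. The following minimal sketch (glass // water values assumed for illustration) shows that the z component of the wave vector in the lower-index medium becomes purely imaginary beyond the TIR angle, giving an exponentially decaying field with a penetration depth of a few hundred nanometers:

```python
import numpy as np

# Sketch (assumed glass // water values): the z wave-vector component in
# medium 2 becomes imaginary beyond the TIR angle, giving an exponentially
# decaying (evanescent) field of penetration depth delta_ev.
n1, n2 = 1.5151, 1.335
lam0 = 632.8e-9
theta1 = 1.2                                  # incidence angle, above TIR

theta_tir = np.arcsin(n2 / n1)                # critical angle
k2z = 2 * np.pi / lam0 * np.lib.scimath.sqrt(n2**2 - n1**2 * np.sin(theta1)**2)
delta_ev = 1.0 / abs(k2z.imag)                # 1/e extinction length in medium 2

assert theta1 > theta_tir and np.isclose(k2z.real, 0)  # purely evanescent
print(f"TIR angle: {theta_tir:.3f} rad, penetration depth: {delta_ev * 1e9:.0f} nm")
```

`np.lib.scimath.sqrt` is used so that the square root of a negative argument is returned as an imaginary number rather than NaN.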

Total Internal Reflection

As far as waveguide efficiency computation is concerned, it is important to realize that at each TIR at the waveguide boundary, the electric field bleeds into the surrounding media through the evanescent wave. When the incident beam is collimated, a lateral shift of the reflected beam may occur, known as the Goos-Hänchen (GH) effect [55]. It is as if the light beam went beyond the position of the interface in the z direction (Fig. 2b). This phenomenon may become highly significant for waveguides since, at each reflection, it accumulates along the propagation direction of the waveguide. We revisit here the Fresnel model equations for reflection at a dielectric interface (Fig. 2a). For simplicity, we consider the system invariant along the y direction and therefore do not consider this third direction. In each medium the wave vector reads $k_i = [k_{ix} = k_i \sin\theta_i,\; k_{iz} = k_i \cos\theta_i]$, $\|k_i\| = k_i = 2\pi n_i/\lambda_0$, where $n_i$ is the optical index of medium i and $\lambda_0$ the wavelength of light in vacuum. At the interface

Fig. 2 Reflectivity of a beam at the interface between two media with different optical indices such that n1 > n2. (a) Traditional representation of the incident and reflected beams as rays, without taking into account the Goos-Hänchen effect (the angles are denoted ϕ, unlike our notation θ). (b) Representation of the incident and reflected beams when considering the Goos-Hänchen effect. The reflected beam lateral shift D is illustrated as positive in this example (Reprinted with permission from Zeitschrift für Naturforschung [54])


between the two media, the conservation of the in-plane momentum (k · e_xy) yields n1 sin θ1 = n2 sin θ2. From the continuity equations for the tangential components of the electric and magnetic fields at the interface 1//2, a 2 × 2 transmission matrix can be written as:

$$T_{M(1//2)} = \frac{1}{t_{12}}\begin{pmatrix} 1 & r_{12} \\ r_{12} & 1 \end{pmatrix}, \qquad (1)$$

where $r_{12}$ and $t_{12}$ are the complex reflection and transmission coefficients at the (1//2) interface. $E$ (resp. $E'$) represents the electric field propagating along the z > 0 (resp. z < 0) direction. These electric fields are related via a matrix product at each interface:

$$\begin{pmatrix} E_1 \\ E'_1 \end{pmatrix} = T_{M(1//2)} \begin{pmatrix} E_2 \\ E'_2 \end{pmatrix}, \qquad (2)$$

where the optical index $n_1$ (resp. $n_2$) is used for medium 1 (resp. 2). The coefficients $r_{12}$ and $t_{12}$ are obtained from the z-components of the complex wave vectors $k_1$ and $k_2$, respectively. For the two polarizations TM (p) and TE (s), we have:

$$r_{TM} = \frac{n_2^2 k_{1z} - n_1^2 k_{2z}}{n_2^2 k_{1z} + n_1^2 k_{2z}}, \qquad t_{TM} = \frac{2 n_1 n_2 k_{1z}}{n_2^2 k_{1z} + n_1^2 k_{2z}}, \qquad (3)$$

and

$$r_{TE} = \frac{k_{1z} - k_{2z}}{k_{1z} + k_{2z}}, \qquad t_{TE} = \frac{2 k_{1z}}{k_{1z} + k_{2z}}. \qquad (4)$$

The propagation of light from the interface (1//2) (z = 0) over a distance $d_2$ through medium 2 is described by a propagation matrix $P_2$:

$$P_2[d_2] = \begin{pmatrix} e^{-i k_{2z} d_2} & 0 \\ 0 & e^{i k_{2z} d_2} \end{pmatrix}. \qquad (5)$$

For the single interface shown in Fig. 2a, we have $n_1 \sin\theta_1 = n_2 \sin\theta_2$, $k_{1z} = 2\pi n_1 \cos(\theta_1)/\lambda_0$, and $k_{2z} = 2\pi\left(n_2^2 - n_1^2\sin^2\theta_1\right)^{1/2}/\lambda_0$. Total internal reflection occurs when $n_1 \sin(\theta_{1TIR}) = n_2$. For $\theta_1 > \theta_{1TIR}$, $k_{2z}$ becomes imaginary and its modulus defines the inverse of the extinction length $\delta_{ev} = \lambda_0/\left[2\pi\left(n_1^2\sin^2\theta_1 - n_2^2\right)^{1/2}\right]$ of the evanescent field in medium 2. With the notation $e = n_2^2/n_1^2$, we get the following simplified expressions for $r_{TM}$ and $r_{TE}$:

$$r_{TM} = \frac{e\cos\theta_1 - (e - \sin^2\theta_1)^{1/2}}{e\cos\theta_1 + (e - \sin^2\theta_1)^{1/2}}, \qquad (6)$$


and

$$r_{TE} = \frac{\cos\theta_1 - (e - \sin^2\theta_1)^{1/2}}{\cos\theta_1 + (e - \sin^2\theta_1)^{1/2}}. \qquad (7)$$

The condition for TIR becomes $\sin(\theta_{1TIR}) = e^{1/2}$. In the TIR regime ($\theta_1 > \theta_{1TIR}$), the phase change upon reflection can be computed from the phase of the complex-valued $r_{TM}$ and $r_{TE}$:

$$\phi_{12TM} = \mathcal{I}\{\ln(r_{TM})\}, \qquad (8)$$

and

$$\phi_{12TE} = \mathcal{I}\{\ln(r_{TE})\}, \qquad (9)$$

where I (resp. R) represents the imaginary (resp. real) part of a complex quantity. The existence of a lateral shift of the reflected beam was observed in the first half of the twentieth century by Goos and Hänchen [55] and formalized by a simple relation linking the reflected beam lateral shift $D_{pol}$ to the derivative of its phase with respect to the angle of incidence [56–59]:

$$D_{pol} = -\frac{\lambda_0}{2\pi n_1}\frac{\partial \phi_{12pol}}{\partial \theta_1}, \qquad (10)$$

where the subscript pol is either TM or TE. Another formulation follows straightforwardly from Eqs. 8 and 9 [54, 59]:

$$D_{pol} = -\frac{\lambda_0}{2\pi n_1}\,\mathcal{I}\!\left\{\frac{\partial \ln(r_{pol})}{\partial \theta_1}\right\} = -\frac{\lambda_0}{2\pi n_1}\,\mathcal{I}\!\left\{\frac{1}{r_{pol}}\frac{\partial r_{pol}}{\partial \theta_1}\right\}. \qquad (11)$$

From Eqs. 6 and 7, we get:

$$D_{TM} = -\frac{\lambda_0}{\pi n_1}\,\mathcal{I}\!\left\{\frac{e\sin\theta_1}{(e - \sin^2\theta_1)^{1/2}\,(\sin^2\theta_1 - e\cos^2\theta_1)}\right\}, \qquad (12)$$

and

$$D_{TE} = -\frac{\lambda_0}{\pi n_1}\,\mathcal{I}\!\left\{\frac{\sin\theta_1}{(e - \sin^2\theta_1)^{1/2}}\right\}. \qquad (13)$$
These two shifts are shown for a glass//water interface in Fig. 3d. At the TIR angle θ1TIR, for both polarizations, the GH shift diverges, as expected. It is also important to notice that beyond the TIR angle, these shifts do not relax to zero but to a finite value ≈ 0.44 λ0, which is not negligible for highly focused beams. Actually, as the

Fig. 3 Computation of the GH shift from a dielectric interface with real positive optical indices. (a) Real (dotted-dashed line) and imaginary (solid line) part of the reflectivities for TM (red) (resp. TE (blue)) polarized light versus θ1. (b) Modulus of the reflectivity. (c) Phase of the reflectivity. (d) GH shifts computed from Eqs. 12 and 13. n1 = 1.5151, n2 = 1.335, λ = 632.8 nm

angle of incidence θ1 approaches the critical angle θ1TIR from above, the beam shift does not diverge to infinity, as predicted by Eqs. 12 and 13, but tends to a finite value bounded by the width of the beam [60]. Let us note that when the evanescent field is launched via a high numerical aperture objective lens, the reflected wave is phase shifted with respect to the incident wave, making the reflection point appear beyond the sample interface (inside the medium) [61, 62].

Generalization to Multilayer Systems

It is straightforward to generalize the matrix equations (2) and (5) to a multilayer system:

$$\begin{pmatrix} E_1 \\ E'_1 \end{pmatrix} = T_{(1//2)}\, P_2[d_2]\, T_{(2//3)} \ldots P_i[d_i]\, T_{(i//(i+1))} \ldots \begin{pmatrix} E_n \\ E'_n \end{pmatrix}. \qquad (14)$$

From this equation, we can compute the reflectivity $r = E'_1/E_1$ and the transmission $t = E_n/E_1$ ($E'_n = 0$ since there is no injection of light into medium n in the backward direction). This matrix formalism will be used in the sequel to compute the reflected field of SPR-based waveguides. It can also be used to approximate a waveguide with a non-constant index profile by a stepped index function.
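The matrix product of Eq. 14 can be sketched in a few lines. The example below (TE polarization only; all layer values are assumed for illustration) builds the transfer matrix from Eqs. 1, 4, and 5 and reads the reflectivity off as $r = M_{10}/M_{00}$, since $E'_n = 0$:

```python
import numpy as np

# Sketch of the transfer-matrix product of Eq. 14 (TE polarization, Eqs. 1,
# 4 and 5); layer values below are assumed for illustration.
def reflectivity_te(n_list, d_list, theta1, lam0):
    """n_list: indices from incidence medium to exit medium;
    d_list: thicknesses of the inner layers."""
    k0 = 2 * np.pi / lam0
    kx = k0 * n_list[0] * np.sin(theta1)             # conserved in-plane momentum
    kz = [np.lib.scimath.sqrt((k0 * n)**2 - kx**2) for n in n_list]
    M = np.eye(2, dtype=complex)
    for i in range(len(n_list) - 1):
        r = (kz[i] - kz[i + 1]) / (kz[i] + kz[i + 1])       # Eq. 4 (TE)
        t = 2 * kz[i] / (kz[i] + kz[i + 1])
        M = M @ (np.array([[1, r], [r, 1]], dtype=complex) / t)  # Eq. 1
        if i + 1 < len(n_list) - 1:                          # propagation, Eq. 5
            ph = kz[i + 1] * d_list[i]
            M = M @ np.array([[np.exp(-1j * ph), 0], [0, np.exp(1j * ph)]])
    return M[1, 0] / M[0, 0]                                 # r = E'_1 / E_1

lam0 = 632.8e-9
# Bare glass // water interface beyond the TIR angle: |r| = 1
r_tir = reflectivity_te([1.5151, 1.335], [], 1.2, lam0)
assert np.isclose(abs(r_tir), 1.0)
# Sanity check: a quarter-wave antireflection layer cancels the reflectivity
n_f = np.sqrt(1.5)
r_ar = reflectivity_te([1.0, n_f, 1.5], [lam0 / (4 * n_f)], 0.0, lam0)
assert abs(r_ar) < 1e-8
```

The same loop, with the TM coefficients of Eq. 3, gives the p-polarized reflectivity used later for SPR-based waveguides.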

Waveguide Resonance

Between the two media (n1, n2), let us insert a planar waveguide of optical index nwg (Fig. 1). For a beam angle θwg inside the waveguide larger than both its planar interface


TIR angles, the light bounces successively off its lower (wg//1) and upper (wg//2) interfaces. Each internal reflection at these interfaces adds a phase shift ($\phi_{wg//1}$ between the waveguide and medium 1 and $\phi_{wg//2}$ between the waveguide and medium 2) to the electric field. The back-and-forth propagation across the waveguide of thickness W adds an extra phase shift of $2k_{wg}W\cos\theta_{wg}$ (from the z component of the wave vector inside the waveguide) (Fig. 1). The transverse resonance condition for constructive interference over a round-trip passage is:

$$2 k_{wg} W \cos\theta_{wg} + \phi_{wg//1}(\theta_{wg}) + \phi_{wg//2}(\theta_{wg}) = m \cdot 2\pi. \qquad (15)$$

The TIR phase shifts at the two interfaces vary with the polarization of the electric field; we therefore have different sets of resonance angles θwg (discrete values) for each polarization. A guided mode can exist only when a transverse resonance condition is satisfied, i.e., when the repeatedly reflected wave at each interface interferes constructively with itself and no light leaks out of the waveguide. Using the relations (6) and (7) for $r_{TM}$ and $r_{TE}$, respectively, and assuming a symmetric waveguide (n2 = n1), we get the following phase sum $F(W, \theta_{wg})$ for the light propagation and reflection inside the waveguide. For TM polarization:

$$2\pi F(W,\theta_{wg}) = 4\pi n_{wg} W \cos\theta_{wg}/\lambda + 2\,\mathcal{I}\!\left\{\ln\frac{e\cos\theta_{wg} - (e - \sin^2\theta_{wg})^{1/2}}{e\cos\theta_{wg} + (e - \sin^2\theta_{wg})^{1/2}}\right\} = 2\pi m, \qquad (16)$$

and for TE polarization:

$$2\pi F(W,\theta_{wg}) = 4\pi n_{wg} W \cos\theta_{wg}/\lambda + 2\,\mathcal{I}\!\left\{\ln\frac{\cos\theta_{wg} - (e - \sin^2\theta_{wg})^{1/2}}{\cos\theta_{wg} + (e - \sin^2\theta_{wg})^{1/2}}\right\} = 2\pi m, \qquad (17)$$

where $e = n_1^2/n_{wg}^2$. For numerical computation of the resonant guided modes, we choose a waveguide of optical index nwg = 1.5151 (glass), immersed in water (n1 = n2 = 1.335). We solve Eqs. 16 and 17 numerically by computing $F(W, \theta_{wg})$ (versus the variable θ1, where n1 sin θ1 = nwg sin θwg) and searching for its intersections with horizontal lines corresponding to integer values. Figure 4 shows that the resonant modes of a planar waveguide shrouded inside a medium of lower optical index can be excited without sophisticated prism coupling, by simple injection of a light beam through the liquid medium. In this illustration, we consider a rather thick waveguide, W = 4λ0 ≈ 2.5 μm, to show that the guided modes (marked by circles) can occur for much smaller incidence angles than θ1TIR = 1.078 rad. In the reflectivity curves, the waveguide modes correspond to zero reflectivity since in that case the light is trapped inside the waveguide. What matters for the waveguide is that the light be reflected by its interfaces with the immersion medium, even if this reflection is not 100% efficient. In this latter case, there may still be a fraction of the

Fig. 4 Resonant guided modes in a dielectric waveguide. (a) F (W, θwg) versus θ1. (b) |r|(θ1). In (b) the reflectivity modulus for the waveguide (solid line) is compared to the TIR reflectivity of a BK7//water interface (dotted line). The colors correspond to the two polarizations TM (red) and TE (blue). nwg = 1.5151, n1 = n2 = 1.335, λ = 632.8 nm, W = 4λ0. θ1 is the incidence angle in medium 1 such that nwg sin(θwg) = n1 sin(θ1), θ1TIR = 1.078. The guided modes are marked by circles

light that propagates inside the waveguide, although it is more rapidly damped at each reflection.
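The mode search described above can be sketched numerically. The example below scans the TE resonance condition of Eq. 17 for the slab of Fig. 4 (nwg = 1.5151 in water, W = 4λ0) and locates the guided modes as the integer crossings of F(W, θwg):

```python
import numpy as np

# Sketch of the numerical TE-mode search of Eq. 17 for a symmetric slab,
# with the values used in Fig. 4: nwg = 1.5151 slab in water, W = 4*lambda0.
n_wg, n1 = 1.5151, 1.335
lam0 = 632.8e-9
W = 4 * lam0
e = (n1 / n_wg)**2

# Scan the internal angle theta_wg over the range where both interfaces are in TIR
theta_wg = np.linspace(np.arcsin(np.sqrt(e)) + 1e-6, np.pi / 2 - 1e-6, 100001)
s = np.lib.scimath.sqrt(e - np.sin(theta_wg)**2)        # imaginary beyond TIR
phi_te = 2 * np.log((np.cos(theta_wg) - s) / (np.cos(theta_wg) + s)).imag
F = (4 * np.pi * n_wg * W * np.cos(theta_wg) / lam0 + phi_te) / (2 * np.pi)

# A guided mode exists wherever F(W, theta_wg) crosses an integer m
crossings = np.flatnonzero(np.floor(F[:-1]) != np.floor(F[1:]))
modes = theta_wg[crossings]
assert len(modes) >= 3              # the ~2.5-um slab is multimode
print(f"{len(modes)} TE modes at theta_wg (rad): {np.round(modes, 4)}")
```

The TM modes follow from the same scan with the phase of Eq. 16; converting the mode angles to θ1 via n1 sin θ1 = nwg sin θwg reproduces the abscissa of Fig. 4.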

From TIR Microscopy to Dielectric Waveguide Microscopy

Interference Reflection Microscopy

Ambrose [64] was the first to propose total internal reflection microscopy (TIRM) for imaging the contact of cells with a solid substrate. The evanescent field that illuminated the sample was generated by a prism, and an objective lens was placed on the opposite side of the prism to image the scattered light. As shown in Fig. 5a, this optical configuration implies that the objective lens is a water-immersion lens. An alternative configuration for TIRM consists of using the same objective lens for both the evanescent field production and the sample imaging (Fig. 5b). This configuration is better suited to high numerical aperture objective lenses. An index-matching oil couples the objective lens to the glass coverslip used as a substrate for cell adhesion [65, 66]. Another strategy consists of using reflection interference contrast microscopy (RICM) [10, 67], which markedly improves the sensitivity of TIRM [10, 68]. RICM is broadly inspired by phase contrast microscopy because it recombines a reference (undiffracted) beam with the beam diffracted by the sample. Figure 6 illustrates the imaging of human glioma cells with phase contrast microscopy (Fig. 6a) and RICM (Fig. 6b). This comparative analysis confirms that RICM is able to sense the parts of the cells which are very close (a few hundred nanometers) to the substrate.


Fig. 5 Optical configurations for total internal reflection microscopy (TIRM). (a) The evanescent field excitation by the prism and the image collection through the objective lens are separated in space. (b) The evanescent field excitation and the sample image capture are both performed through a high aperture objective lens (Reprinted with permission from Nature Reviews. Molecular Cell Biology [63])

Fig. 6 Comparison of phase contrast and reflection interference contrast (evanescent waves) microscopies. Imaging of human glioma cells with (a) phase contrast microscopy and (b) RICM (Reprinted with permission from the Journal of Cell Biology [9])

Dielectric Waveguide Microscopy

Planar resonant waveguide (RWG) sensors have been widely used for biological applications for their ability to probe with high sensitivity the refractive index of lipid or protein layers, living cells, or tissues [2, 3, 69]. Different sensing films such as


phospholipid bilayers, functionalized alkyl silanes, polyethylene glycol chains, polyamino acids, and adsorbed proteins have been added to waveguide-based biosensor platforms, enlarging their domains of application. RWG biosensors consist of four layers: a glass substrate with a diffractive grating, a waveguide thin film (with higher index), a cell layer, and the cell culture medium. Living cells can be directly cultured on the waveguide surface to form a cell layer. When such cells are illuminated with a broadband light source, the RWG biosensor resonates at a specific wavelength (or angle) that depends on the density and distribution of biomass (e.g., proteins, molecular complexes) inside the cell. Upon biochemical or osmotic perturbation, cell morphological changes modify the local optical index and therefore the RWG resonance condition. The shift of the reflected wavelength after stimulation is characteristic of the intracellular mass redistribution that contributes to the cell optical index. The main advantage of RWG sensors, as compared to transmission microscopy techniques, is the short spatial range probed by the evanescent wave propagating at the RWG sensor surface (∼150 nm). Therefore, only partial density changes in the bottom portion of the cells, in close contact with the biosensor surface, are captured. However, in most reported applications, the RWG sensors did not allow high resolution imaging of the cell mass redistribution. Optical waveguide microscopy was introduced by Hickel and Knoll [70], concomitantly with surface plasmon microscopy [47]. These two methods use the same concept, namely the coupling of incoming light to the sample, which serves as an inhomogeneous resonant waveguide cladding. The field reflected back from the sample (which may also be diffracted by the sample) is Fourier back-converted by a lens to produce a wide-field image of the sample.
The sensitivity of this method comes from the resonant coupling conditions offered by the cladding of the waveguide. The lateral resolution of these microscopes depends on the radiation losses of the waveguide: the larger the radiation losses, the higher the lateral resolution. More recently, waveguide evanescent field scattering (WEFS) microscopy was used to detect the field scattered by single bacteria attached or located very close to the waveguide [71–73]. The coupling of the light to the propagating waveguide was achieved by a grating. The light scattered by the bacteria on the waveguide surface was captured by microscope objective lenses with different magnifications. WEFS microscopy is very efficient at detecting single micron-size particles which come close to or in contact with the waveguide surface and at distinguishing those in close contact from those still out of contact. Like SPR microscopy, WEFS is label-free and is suited to long-term or time-lapse studies, since there is no photobleaching. Waveguide evanescent field fluorescence (WEFF) microscopy was also developed [72, 74–77] for imaging ultrathin films and cell-substrate interactions. This method requires staining the cell membrane or adhesion foci with fluorescent dyes. Since the waveguide excitation is separated from the imaging path, the lateral resolution of the system is limited by the microscope numerical aperture, and the evanescent field propagating inside the waveguide has to be coupled back to the microscope lens for imaging.


Improving Waveguide Microscopy with Structured Illumination

A major limitation of imaging methods in living systems is their inability to resolve objects separated by a few hundred nanometers or less (diffraction limit). Although electron microscopy offers very high resolution, it is restricted to fixed specimens and is therefore not suited for surveying the spatiotemporal dynamics of living cells. Scanning-probe methods have been developed over the past 50 years to circumvent this limitation [78, 79]. However, their resolution depends on the sharpness of the mechanical, chemical, electrical, or optical probe, and they are slower than wide-field microscopy methods. Moreover, because they intrude into the culture medium, they require frequent probe changes, introducing a supplementary source of experimental irreproducibility. One of the most cost-effective and promising methods developed so far for improving microscopy resolution is based on standing-wave illumination [80–83]. The use of standing-wave imaging in a TIR geometry has improved the resolution by a factor of 2.6 [83], as compared to the diffraction-limited resolution δr = λ/(2NA) = λ/(2n sin α), where α is the half-angle of the objective lens aperture, NA its numerical aperture, and n its optical index. This resolution is comparable to or even better than those offered by other scanning-probe microscopies in many biological systems. It can be achieved much faster, avoiding the need to fix the living sample and therefore lowering the imaging fuzziness arising from the ceaseless activity of living systems at submicron scales. If we consider a microscope lens as a linear and time-invariant system, we can use linear response theory to describe the principles of image formation. Once we know the impulse response function of the microscope, i.e., its point spread function (PSF), the output of the system can be calculated by convolving the input signal (or image) with the impulse response function.
Thus, the image I(x) corresponding to a given object (taken as a source for the microscope) is obtained by the convolution of the intensity emitted by the object, $I_e(x')$, with the PSF(x = (x, y)). In the Fourier domain, this convolution reads as a simple product:

$$I(x) = I_e(x) \otimes \mathrm{PSF}(x), \qquad \hat{I}(k) = \hat{I}_e(k)\cdot\mathrm{OTF}(k), \qquad (18)$$

where OTF is the optical transfer function, defined as the Fourier transform of the PSF, and k is the corresponding spatial frequency vector in the Fourier domain. If the object is thin, it can be described by a complex amplitude transmittance function $V_{obj}(x') = |T(x')|^{1/2}\exp(ik\phi(x'))$, which gives the change in magnitude and phase of the light passing through it [84]. The emitted intensity $I_e(x')$ can be approximated by the product $I_e(x') = |V_{obj}|^2(x')\,I_0(x')$. This intensity is further modified by the imaging system PSF. In the case of a fluorescent image, $S_{obj}(x') = |V_{obj}|^2(x')$ is replaced by the fluorescence intensity yield of the dye-labeled sample. In the frequency domain, the emitted intensity writes:

$$\hat{I}_e(k) = \hat{I}_0(k) \otimes \hat{S}_{obj}(k). \qquad (19)$$


Substituting Eq. 19 into Eq. 18 leads to the following frequency spectrum of the final digitized image:

$$\hat{I}(k) = \left\{\hat{I}_0(k) \otimes \hat{S}_{obj}(k)\right\}\cdot\mathrm{OTF}(k). \qquad (20)$$

When the illumination $I_0(x) = I_c$ is uniform, its Fourier transform is a delta function δ(k) and we have:

$$\hat{I}(k) = I_c\,\hat{S}_{obj}(k)\cdot\mathrm{OTF}(k). \qquad (21)$$

This means that all the spatial frequencies of the sample are filtered in the Fourier domain by the OTF of the objective lens. The objective lens can be viewed as a low-pass filter, in the sense that it cuts off the highest frequencies corresponding to the finer details of the object. Let us now consider a nonuniform illumination $I_0(x)$, for example a periodic modulation in one dimension (produced by interference fringes):

$$I_0(x) = I_c\left[1 + m\cos(2\pi\kappa_0 x + \varphi)\right], \qquad (22)$$

where $\kappa_0$ and $\varphi$ denote the spatial frequency and the initial phase of the periodic illumination pattern, respectively, and $I_c$ and m are constants corresponding to the mean intensity and the fringe intensity modulation depth. The Fourier transform of this periodic function reads:

$$\hat{I}_0(k_x) = I_c\left[\delta(k_x) + \frac{m}{2}e^{i\varphi}\delta(k_x - \kappa_0) + \frac{m}{2}e^{-i\varphi}\delta(k_x + \kappa_0)\right]. \qquad (23)$$

Substituting Eq. 23 into Eq. 20 yields:

$$\hat{I}(k_x) = I_c\left\{\hat{S}(k_x) + \frac{m}{2}e^{i\varphi}\hat{S}(k_x - \kappa_0) + \frac{m}{2}e^{-i\varphi}\hat{S}(k_x + \kappa_0)\right\}\cdot\mathrm{OTF}(k_x). \qquad (24)$$

The first term in Eq. 24 represents the normal frequency spectrum observed by a conventional microscope, where the cutoff frequency $f_c$ for $k_x$ is given by the numerical aperture of the objective lens, $f_c = 1/\delta r = 2NA/\lambda$. The second and third terms in Eq. 24 provide additional information on the object, since their central frequencies are shifted by $+\kappa_0$ and $-\kappa_0$, respectively. Thus $k_x$ now satisfies the relations $|k_x - \kappa_0| \le 2\pi f_c$ and $|k_x + \kappa_0| \le 2\pi f_c$, respectively. By choosing the spatial frequency $\kappa_0$ of the sinusoidal fringe illumination pattern larger than the cutoff frequency of the OTF, we extend the frequency spectrum domain and improve the image resolution. To obtain an isotropic resolution enhancement, an additional fringe intensity modulation along the y direction is necessary. This structured illumination microscopy (SIM) has been implemented for resolution enhancement in wide-field and spot-scanning fluorescence microscopy [85–91], differential interference contrast [92], and spatial light interference


Fig. 7 Dual-color super-resolution TIRF microscopy based on structured illumination microscopy (SIM) of intracellular structures. (a) TIRF image. (b) TIRF-SIM image. (c) Zoom on the regions of interest shown in (a) and (b). (d) Intensity profiles, along the white, oblique section lines in (a) and (b). Scale bar: 1 μm. The astrocytes were overnight transfected with ER-EGFP (endoplasmic reticulum) and labeled with Mito-Tracker-Deep-Red, a marker of mitochondria. Images were taken sequentially upon 488- and 561-nm excitation, respectively, with no emission filter. Exposure time was 200 ms per frame (total time: 3.8 s for the acquisition of the two colors). Structured illumination period was 180 nm (206 nm) for the green (red) channel, respectively (Reprinted with permission from Optics Express [98])

microscopy [11], as well as in standing-wave, harmonic excitation light microscopy (HELM) and on nanostructured glass slides coupled to total internal reflection microscopy [81–83, 93–97]. As an illustration of the important image improvement that can be achieved with structured illumination coupled to total internal reflection fluorescence microscopy, we present in Fig. 7 dual-color imaging of mitochondrial dynamics in cultured cortical astrocytes. This figure reveals for the first time the existence of interaction sites between near-membrane mitochondria and the endoplasmic reticulum. These wide-field fluorescence images reach an isotropic 100-nm resolution based on a subdiffraction fringe pattern generated by the interference of two counter-propagating evanescent waves. In the context of structured illumination TIRF microscopy, the interference of evanescent waves is particularly interesting. The standing waves that can be launched through a high aperture objective lens (TIR) indeed provide extended resolution because they have an effective wavelength λ0/(2n1 sin θ1), where n1 is the optical index of the


highest index medium and θ1 the incidence angle. The physical resolution in the evanescent region inside the low optical index medium (n2) is set by the wavelength of light in the higher index medium, which scales it down by a factor n2/n1 compared to propagative waves in medium 2. High-NA objective-launched standing-wave total internal reflection fluorescence microscopy was shown experimentally to reach a lateral resolution of approximately 100 nm with a 60× objective lens of NA = 1.45, beyond the classical diffraction limit [99].
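The resolution extension of Eqs. 22–24 reduces to a simple frequency budget, sketched below with assumed values (488-nm excitation, NA = 1.45, glass index 1.515, incidence angle 1.25 rad): the fringe frequency κ0 of the evanescent standing wave adds to the conventional cutoff:

```python
import numpy as np

# Sketch of the resolution extension of Eqs. 22-24 in TIRF-SIM; the
# wavelength, NA and incidence angle below are assumed for illustration.
lam0 = 488e-9              # excitation wavelength
NA = 1.45                  # objective numerical aperture
n1 = 1.515                 # glass index (standing wave launched by TIR)
theta1 = 1.25              # incidence angle beyond TIR

f_c = 2 * NA / lam0                        # conventional cutoff, f_c = 1/delta_r
kappa0 = 2 * n1 * np.sin(theta1) / lam0    # fringe frequency, period lam0/(2 n1 sin theta1)
f_ext = f_c + kappa0                       # extended cutoff with the shifted terms

assert kappa0 <= 2 * n1 / lam0             # bound for counter-propagating waves
print(f"resolution: {1e9 / f_c:.0f} nm -> {1e9 / f_ext:.0f} nm")
```

With these assumed numbers the effective resolution roughly halves, consistent with the ~100-nm figures quoted above.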

Surface Plasmon Resonance: From Evanescent to Guided Waves

Principles of Surface Plasmon Resonance

Surface plasmons are surface charge density oscillations coupled to evanescent electromagnetic fields at the interface of a metal ($\varepsilon_g = \varepsilon'_g + i\varepsilon''_g$) and a dielectric ($\varepsilon_2$). The decay length of the field into the metal is of the order of 10 nm at visible wavelengths, and about half the wavelength in the dielectric. For charges to accumulate at this interface, the electric field needs to be TM (p) polarized, so that a driving force on the charges occurs normal to the interface. SPPs are characterized by a propagation wave number $k_x$, parallel to the interface, that is larger than the modulus of k in free space. Therefore, SPPs cannot be excited by a free-propagating field. However, they can be excited by an evanescent field, created either by TIR or by light scattering from nanoscale structures. The in-plane wave vector of the SPP reads:

$$k_{xSP}(\lambda_0) = k'_{xSP}(\lambda_0) + i k''_{xSP}(\lambda_0) = \frac{2\pi}{\lambda_0}\sqrt{\frac{\varepsilon_2\,\varepsilon_g(\lambda_0)}{\varepsilon_2 + \varepsilon_g(\lambda_0)}}, \qquad (25)$$

where $\varepsilon_g(\lambda_0)$ is the dielectric function of gold. The electric field propagating along the x direction, coupled to the surface plasmon, can be written as:

$$E(x) = E_0\,e^{i k_{xSP} x} = E_0\,e^{i k'_{xSP} x}\,e^{-k''_{xSP} x}. \qquad (26)$$

This electric field is the product of two terms: the first one is propagating, whereas the second one decays exponentially along the x direction. Generally, for TM (p) polarized incident light, the resonant electromagnetic field reaches its maximum at the interface with a typical enhancement factor of ~10² compared to the incident field amplitude. Conversely, there is no enhanced field for TE (s) polarized incident light. The most famous device for SPR excitation was proposed by Kretschmann [26, 100] (Fig. 8a): a metal (usually silver or gold) film is placed at the interface of two dielectric media (1 and 2) such that medium 2 has a lower refractive index than medium 1. Beyond the TIR angle θ1TIR, for a specific incident angle θ1SPR, the intensity of the reflected light decreases sharply. The angle for SPR resonance


Fig. 8 Kretschmann configuration for SPR excitation. (a) The SPR is coupled by a triangular prism. (b) Modulus of the reflectivity |rTM|(θi) versus θi for a TM polarized beam. (c) Modulus of the reflectivity |rTE|(θi) versus θi for a TE polarized beam. A model layer of 50 nm is chosen for this computation. Four wavelength values are represented: 500 nm (blue), 633 nm (red), 750 nm (brown) and 1500 nm (black). The optical index dispersion data for water and gold were taken from Refs. [101, 102], the ones for glass (BK7) were retrieved from Schott Optical Glass data sheets

θ1SPR is related to the dielectric function $\varepsilon_2$ of the medium 2 in contact with gold when $\varepsilon_1$ and $\varepsilon_g$ are fixed. The fact that $k_{SPR}$ is parallel to the gold-dielectric interface confers on the surface plasmon resonance wave the property of a guided wave. Importantly, SPR cannot be excited by TE (s) polarized light (Fig. 8c) and requires instead a TM (p) polarization (Fig. 8b). The SPR resonance angle θ1SPR and the resonance width both decrease with the wavelength λ0 of the exciting beam (Fig. 8b). Increasing the wavelength therefore improves both the angular sensitivity and the discrimination power of SPR for minute refractive index variations. However, this gain in sensitivity is obtained at the expense of spatial resolution, since a larger wavelength implies not only an increase of the evanescent field depth (spatial resolution along the z direction) but also an increase of the lateral propagation length $L_x$ of the plasmon guided wave (spatial resolution along the x direction). If one neglects, in a first approximation, the correction to $k_{SP}$ due to the prism coupling (Δk), the lateral propagation length behaves as:

$$L_x(\lambda_0) = \frac{1}{2 k''_{xSP}(\lambda_0)} = \frac{\lambda_0}{2\pi}\left\{\frac{\varepsilon'_g(\lambda_0) + \varepsilon_2}{\varepsilon'_g(\lambda_0)\,\varepsilon_2}\right\}^{3/2}\frac{\varepsilon'^{\,2}_g(\lambda_0)}{\varepsilon''_g(\lambda_0)}. \qquad (27)$$

Typical values for the lateral decay length are Lx ≈ 22 μm for λ0 = 515 nm and Lx ≈ 500 μm for λ0 = 1060 nm. The surface plasmon z decay length inside the dielectric is much shorter:

20

Resonant Waveguide Imaging of Living Systems: From Evanescent. . .


Fig. 9 SPR reflectivity curves computed in a Kretschman configuration for a naked gold film (50 nm) separating a glass prism (BK7) and a dielectric medium (water) for TM (p) (solid lines) and TE (s) (dashed-dotted lines) polarizations. (a) Real R (r) (thick line) and imaginary I (r) (thin line) parts of r versus θ1. (b) Modulus |r|(θ1) of the reflectivity. (c) Phase ϕ (θ1) of the reflected field. (d) 3D representation of the reflectivity r in the 3D space (I (r), R (r), θ1)

LzSP(λ0) = (λ0/2π) |(ε′g(λ0) + ε2)/ε2²|^(1/2).  (28)
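For concreteness, Eqs. 27 and 28 can be evaluated directly. The sketch below (plain Python) plugs in the gold permittivity used elsewhere in this chapter (εg = −11.8134 + 1.2144i at λ0 = 632.8 nm) and water (n2 = 1.335); the 22 μm and 500 μm values quoted above correspond to other wavelengths and materials, so the numbers here are only an order-of-magnitude illustration:

```python
import math

lam = 632.8                            # wavelength in nm
eps_g_re, eps_g_im = -11.8134, 1.2144  # gold permittivity at 632.8 nm (real, imag)
eps2 = 1.335 ** 2                      # water

# Eq. 27: lateral propagation length of the surface plasmon
Lx = (lam / (2 * math.pi)) \
     * ((eps_g_re + eps2) / (eps_g_re * eps2)) ** 1.5 \
     * eps_g_re ** 2 / eps_g_im

# Eq. 28: decay length of the evanescent tail in the dielectric
Lz = (lam / (2 * math.pi)) * math.sqrt(abs((eps_g_re + eps2) / eps2 ** 2))

print("Lx = %.0f nm, Lz = %.0f nm" % (Lx, Lz))
```

With these values the plasmon propagates laterally over a few micrometers while its evanescent tail in water extends over a fraction of the wavelength, which is the resolution trade-off discussed above.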

LzSP increases nonlinearly with λ0, starting from about half the wavelength at λ0 = 633 nm (300 nm) and reaching about twice the wavelength at λ0 = 1550 nm (3 μm). Figure 9 illustrates the complex reflectivity curves computed for a thin gold film (50 nm) in contact with water for λ = 632.8 nm. |rTM| and ϕTM (solid lines) differ drastically from |rTE| and ϕTE (dashed-dotted lines), confirming that the SPP-related steep variations of the reflectivity modulus and phase at θ1SPR = 1.2517 rad occur in TM polarization only. The real and imaginary parts of rTM have a remarkable behavior at plasmon resonance: R(rTM) first increases beyond the TIR angle θ1TIR, then decreases at θ1SPR, approaches zero and increases again; I(rTM) reaches a plateau beyond the TIR angle θ1TIR and decreases to negative values at θ1SPR. Figure 9d illustrates in three dimensions (I(r), R(r), θ1) the rotation of rTM in the complex plane at θ1SPR. The width of the plasmon resonance (Fig. 9b) increases with the surface plasmon radiative losses. When λ0 increases, the SPP resonance gets narrower (Fig. 8b) and the radiative losses decrease. The angular width of the surface plasmon resonance is determined by the imaginary part of the surface plasmon wave vector [103]:



Fig. 10 Computation of the GH shift from a 50 nm thick gold film inserted in between glass n1 and water n2. (a) Modulus of the reflectivity versus θ1. (b) Phase of the reflectivity. (c) GH shift computed numerically from Eq. 10. (d) Zoom on (c). TM polarization (red), TE polarization (blue). n1 = 1.5151, n2 = 1.335, εg = −11.8134 + 1.2144i, λ0 = 632.8 nm, dg = 50 nm

Δθ1SPR(λ0) = k″xSP(λ0) / [(2πn1/λ0)² − k′xSP(λ0)²]^(1/2).  (29)
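The TM resonance dip of Figs. 8 and 9 (and its absence in TE polarization) follows from the standard three-layer Fresnel reflectivity of the glass//gold//water stack. Here is a minimal, self-contained numerical sketch; the permittivity and index values are those used in the figures:

```python
import cmath, math

def r_kretschmann(theta1, lam, n1, eps_g, d_g, n2, pol="TM"):
    """Complex reflectivity of a prism(n1) // metal(eps_g, d_g) // dielectric(n2) stack."""
    k0 = 2 * math.pi / lam
    kx = n1 * k0 * math.sin(theta1)                      # conserved in-plane wavevector
    eps = (n1 ** 2, eps_g, n2 ** 2)
    kz = [cmath.sqrt(e * k0 ** 2 - kx ** 2) for e in eps]
    q = [kz[i] / eps[i] if pol == "TM" else kz[i] for i in range(3)]
    r01 = (q[0] - q[1]) / (q[0] + q[1])
    r12 = (q[1] - q[2]) / (q[1] + q[2])
    ph = cmath.exp(2j * kz[1] * d_g)                     # round trip across the metal film
    return (r01 + r12 * ph) / (1 + r01 * r12 * ph)

lam, n1, n2 = 632.8, 1.5151, 1.335                       # nm; BK7 prism and water
eps_g, d_g = -11.8134 + 1.2144j, 50.0                    # gold at 632.8 nm, 50 nm film
thetas = [1.10 + 2e-4 * i for i in range(1100)]          # rad, beyond TIR (~1.079 rad)
R_tm = [abs(r_kretschmann(t, lam, n1, eps_g, d_g, n2, "TM")) ** 2 for t in thetas]
R_te = [abs(r_kretschmann(t, lam, n1, eps_g, d_g, n2, "TE")) ** 2 for t in thetas]
theta_spr = thetas[R_tm.index(min(R_tm))]
print("TM dip at theta1 = %.3f rad" % theta_spr)
```

The TM curve drops sharply a little beyond the TIR angle, while the TE curve stays close to total reflection, as in Fig. 9.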

We conclude from Fig. 9 that plasmon resonance is recognizable not only by a sharp decrease of the reflectivity modulus in TM polarization but also by a sharp drop of the phase of the reflected field. This phase drop at a very precise value of the incident angle should make the reflected beam very sensitive to the GH effect, since at this angle the variation of the reflected phase with the wave number k is quite steep [59, 104, 105]. Figures 10 and 11 show the lateral shifts predicted by Eq. 10 and Refs. [56–59] for a thin gold layer (εg = −11.8134 + 1.2144i at 632.8 nm) inserted in between glass (BK7) and water, for two different gold thicknesses (50 and 30 nm, respectively). These figures provide deep insight into how the plasmon radiative losses enhance or attenuate the amplitude of the GH shift. For a 50 nm thick gold film, as commonly used for prism-coupled SPR excitation, we note that a first GH shift occurs around the TIR angle for TM polarization (Fig. 10c, d) and appears more localized than for a glass–water interface. A second GH shift of greater amplitude and of opposite sign occurs at plasmon resonance. Surprisingly, before plunging to very negative shift values at θ1SPR, the GH shift increases before and after resonance, in a sort of lateral rebound. Negative GH shifts are related to metal absorption (intrinsic damping) and depend on gold thickness, whereas positive GH


Fig. 11 Computation of the GH shift from a 30 nm thick gold film inserted in between glass n1 and water n2. (a) Modulus of the reflectivity versus θ1. (b) Phase of the reflectivity. (c) GH shift computed numerically from Eq. 10. (d) Zoom on (c). TM polarization (red), TE polarization (blue). n1 = 1.5151, n2 = 1.335, εg = −11.8134 + 1.2144i, λ0 = 632.8 nm, dg = 30 nm

shifts are due to radiative damping [106–108]. When the thickness of the gold film increases, so that the intrinsic damping becomes larger than the radiative damping, a negative GH shift is observed; in the reverse case, a positive GH shift occurs. The shape of D/λ0 versus θ1 indicates that if we use a high numerical aperture objective lens to launch the surface plasmons in a wide-field configuration, the different reflected beams should cross each other around plasmon resonance, leading to some aberrations in the reflected image of the focused spot [61]. With a 30 nm thick gold film, the GH shift in TM polarization changes sign and becomes very small at plasmon resonance (Fig. 11c, d). The angular width of the plasmon resonance increases, because the plasmon wave inside the metal also leaks back into the glass (radiative loss). If this fading of the SPR sharpness could be considered a disadvantage for SPR sensitivity, this simple computation shows that, in contrast, it could become an important criterion for improving the resolution of SPR microscopy.
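The sign structure described above can be checked numerically. The sketch below estimates the lateral shift from the Artmann-type relation D = −∂ϕr/∂kx, taken here as a stand-in for the chapter's Eq. 10 (the exact prefactor is our assumption), using a finite-difference derivative of the TM reflection phase:

```python
import cmath, math

lam, n1, n2 = 632.8e-9, 1.5151, 1.335            # SI units (m)
eps = (n1 ** 2, -11.8134 + 1.2144j, n2 ** 2)     # glass // gold // water
k0 = 2 * math.pi / lam

def tm_phase(theta, d_g):
    """Phase of the TM reflectivity of the glass // gold(d_g) // water stack."""
    kx = n1 * k0 * math.sin(theta)
    kz = [cmath.sqrt(e * k0 ** 2 - kx ** 2) for e in eps]
    q = [kz[i] / eps[i] for i in range(3)]
    r01 = (q[0] - q[1]) / (q[0] + q[1])
    r12 = (q[1] - q[2]) / (q[1] + q[2])
    ph = cmath.exp(2j * kz[1] * d_g)
    return cmath.phase((r01 + r12 * ph) / (1 + r01 * r12 * ph))

def gh_shift(theta, d_g, dth=1e-5):
    """D = -dphi/dkx with kx = n1 k0 sin(theta); phase steps re-wrapped to (-pi, pi]."""
    dphi = tm_phase(theta + dth, d_g) - tm_phase(theta - dth, d_g)
    dphi = (dphi + math.pi) % (2 * math.pi) - math.pi
    return -dphi / (2 * dth * n1 * k0 * math.cos(theta))

thetas = [1.15 + 1e-4 * i for i in range(1700)]  # window around the plasmon resonance
D50 = [gh_shift(t, 50e-9) for t in thetas]
print("GH shift range for 50 nm gold: %.1f to %.1f um"
      % (min(D50) * 1e6, max(D50) * 1e6))
```

For the 50 nm film the shift plunges to negative values at resonance, with positive lobes on either side; rerunning with d_g = 30e-9 illustrates the sign change discussed above.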

Sensitivity of SPR to Dielectric Layers: From Evanescent to Guided Waves

The sensitivity of SPPs to very thin layers adsorbed on gold has made this technique very attractive for biosensing applications [109]. The changes of the SPP with thicker dielectric layers are also interesting to investigate. As an illustration, we consider two gold films of thickness 50 and 25 nm, respectively, inserted as before in between glass and water



Fig. 12 SPR curves and GH shifts from a gold film (dg = 50 nm) with an adsorbed dielectric layer of width Wd, in contact with a water medium. (a, d, g, j) Modulus of the reflectivity versus θ1. (b, e, h, k) Phase of the reflectivity (shifted by its value at θ1 = 0). (c, f, i, l) GH shift computed numerically from Eq. 10. TM polarization: red lines, TE polarization: blue lines. n1 = 1.5151, n2 = 1.335, εg = −11.8134 + 1.2144i, λ0 = 632.8 nm. Four values of the thickness of the adsorbed layer (nd = 1.4) are reported: Wd = 10, 50, 500, and 1800 nm, from top to bottom

and we add on the gold an extra isotropic dielectric layer (d) with optical index nd = 1.4. We use the transfer matrix equation (Eq. 14) to compute rTM and rTE. The shift of the plasmon resonance to larger θ1 values with the thickening of the dielectric layer is more important for a gold film of thickness 50 nm (Fig. 12) than of thickness 25 nm (Fig. 13). This modification of the SPR resonance is also more visible on the GH shift curves D(θ1) for a 50 nm thick gold film (Fig. 12). For the 25 nm gold film (Fig. 13), the SPR resonance, which is already much attenuated, shifts only mildly and does not fade in depth, as compared with the 50 nm thick gold film (Fig. 12). Consequently, tiny thickness variations of an adsorbed dielectric layer are much better detected with a 50 nm gold film. However, for Wd values larger than 200 nm, the SPR reflectivity curve reaches a limit angle θ1LIM ≈ 1.49 rad (resp. 1.4 rad) for a


Fig. 13 SPR curves and GH shifts from a gold film (dg = 25 nm) with an adsorbed dielectric layer of width Wd, in contact with a water medium. (a, d, g, j) Modulus of the reflectivity versus θ1. (b, e, h, k) Phase of the reflectivity (shifted by its value at θ1 = 0). (c, f, i, l) GH shift computed numerically from Eq. 10. TM polarization: red lines, TE polarization: blue lines. n1 = 1.5151, n2 = 1.335, εg = −11.8134 + 1.2144i, λ0 = 632.8 nm. Four values of the thickness of the adsorbed layer (nd = 1.4) are reported: Wd = 10, 50, 500, and 1800 nm, from top to bottom

gold film thickness 50 nm (resp. 25 nm). Actually, in experimental situations, objective lenses made of BK7 glass (n1 = 1.5151) have numerical apertures limited to 1.45, corresponding to an angle θmax = arcsin(NA/n1) ≈ 1.27 rad, which makes the SPR excitation mode unusable for microscopic imaging in this regime. Higher-index objective lenses are therefore mandatory. However, the evolution of the reflectivity curve with the thickness of the dielectric layer is interesting, since it reveals the emergence of other resonance peaks, which are not due to SPR, for much thicker dielectric layers. With dielectric layer thicknesses reaching the wavelength λ0 and beyond, the system behaves as an asymmetric metal//dielectric//water waveguide where resonance modes can be excited [23, 45, 53] with the same TIR configuration as for SPR. It can be modeled by a four-layer system: a glass substrate n1, a thin gold film (ng, dg), a dielectric layer (nwg, Wwg) (the waveguide), and a final water medium


with a smaller optical index (n2 < nwg and n2 < n1). Using the same matrix formulation as in Eq. 14, we compute the different reflectivity phase changes at each interface of the waveguide and the resonance conditions:

2π Fa(Wwg, θwg) = 2kwg Wwg cos θwg + ϕwg//g(θwg) + ϕwg//2(θwg) = (2m + ζ) π,  (30)

where ζ = 0 (resp. 1) for TM (resp. TE) polarization and m is an integer. The complex reflection coefficient r at each interface of the waveguide film (wg) with its bounding media (J) (g for the gold film and 2 for the bulk water medium) is

rwg//J = (kz,wg/nwg^2ρ − kz,J/nJ^2ρ) / (kz,wg/nwg^2ρ + kz,J/nJ^2ρ),  (31)

where ρ = 0 for TE (s-polarization) and ρ = 1 for TM (p-polarization), and kz,i = (2π/λ)(ni² − n1² sin²θ1)^(1/2). For nwg > n2, TIR occurs at the (wg//2) interface and the reflectivity of gold rules the reflection at the (g//wg) interface; the electromagnetic field in medium 2 is evanescent and rwg//2 = |rwg//2| e^(iϕwg//2). Using the remarkable equality tan ϕ = i(e^(−iϕ) − e^(iϕ))/(e^(−iϕ) + e^(iϕ)), we get the following expression for the phase ϕwg//J:

ϕwg//J = 2 arctan[i (1 − rwg//J)/(1 + rwg//J)].  (32)

The SPRWG resonance m-modes are recognizable on the |r(θ1)| curves as very narrow dips (Figs. 12g, j and 13g, j). Contrary to SPP waves, SPRWG waves can be excited by both TM and TE polarized illumination. These resonant guided waves can be excited at the gold–dielectric interface by prism, grating, or objective lens coupling. However, because they propagate inside the waveguide over much longer distances than SPR waves, they may integrate refractive index variations farther away and be scattered similarly to plane waves in bulk materials. As reported in Refs. [110, 111], they can lead to scattering, refraction, and diffraction processes which can be analyzed in Fourier space (k space). The comparison of the SPRWG modes for two gold thicknesses (Figs. 12 and 13) is very instructive. For a rather inefficient SPR gold sensor film (25 nm) (Fig. 13), we note that the SPRWG resonance modes are as strong as for a very efficient SPR gold sensor film (50 nm) (Fig. 12). More interestingly, because the SPR resonance is smeared out, thinner gold films can separate SPR from SPRWG more efficiently. The emergence of SPRWG peaks occurs at a finite set of Wwg values, separated by intervals of δWwg ≈ λ0/(2nwg). These SPRWG peaks accumulate at the angular value θ1//wgTIR = arcsin(nwg/n1) ≈ 1.178 rad, corresponding to TIR at the glass/dielectric layer interface. For each SPRWG resonance mode, the phase drops abruptly by about 2π and the corresponding GH shift is quite


large and sharply confined at the corresponding resonance angle. Note that the strong light confinement inside the SPRWG waveguides may produce GH shifts as large as tens of micrometers. These large GH shift effects look very promising for optical biosensors with improved sensitivity.
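The mode condition of Eq. 30 can also be located directly on the reflectivity of the four-layer stack. Below is a sketch using a recursive Fresnel composition (numerically equivalent to the transfer-matrix Eq. 14): it scans a glass//gold(50 nm)//dielectric(1.8 μm, nwg = 1.4)//water stack in TE polarization and reports the narrow guided-mode dips between the water and waveguide TIR angles:

```python
import cmath, math

def r_stack(theta1, lam, eps, d, pol):
    """Reflectivity of a planar stack (recursive Fresnel, equivalent to Eq. 14).
    eps: permittivities [prism, inner layers..., substrate]; d: inner thicknesses."""
    k0 = 2 * math.pi / lam
    kx = cmath.sqrt(eps[0]) * k0 * math.sin(theta1)
    kz = [cmath.sqrt(e * k0 ** 2 - kx ** 2) for e in eps]
    q = [kz[i] / eps[i] if pol == "TM" else kz[i] for i in range(len(eps))]
    r = (q[-2] - q[-1]) / (q[-2] + q[-1])            # deepest interface first
    for i in range(len(eps) - 3, -1, -1):            # fold the layers back to the prism
        ph = cmath.exp(2j * kz[i + 1] * d[i])
        rij = (q[i] - q[i + 1]) / (q[i] + q[i + 1])
        r = (rij + r * ph) / (1 + rij * r * ph)
    return r

lam, n1, nwg, n2 = 632.8, 1.5151, 1.4, 1.335          # nm
eps = [n1 ** 2, -11.8134 + 1.2144j, nwg ** 2, n2 ** 2]
d = [50.0, 1800.0]                                    # gold, dielectric (nm)
# guided modes live between asin(n2/n1) ~ 1.079 rad and asin(nwg/n1) ~ 1.178 rad
th = [1.082 + 2e-5 * i for i in range(4800)]
R = [abs(r_stack(t, lam, eps, d, "TE")) ** 2 for t in th]
dips = [round(th[i], 4) for i in range(1, len(R) - 1)
        if R[i] < R[i - 1] and R[i] < R[i + 1] and R[i] < 0.9]
print("TE guided-mode dips (rad):", dips)
```

Switching pol to "TM", or changing the 1800 nm thickness by about λ0/(2nwg) ≈ 226 nm, shifts and renumbers the modes, as described above.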

SPR-Based Microscopy for Biological Applications

Principles of Scanning Surface Plasmon Microscopy

The lateral resolution of SPR microscopy [112] was markedly improved by the replacement of prisms with high numerical aperture objective lenses [34, 113, 114]. Photon scanning tunneling techniques were also used to capture evanescent fields and to characterize SPP propagation on the scale of a few tens of micrometers [115–117], and to demonstrate that prism-coupled SPR imaging cannot achieve subwavelength resolution, due to the lateral propagation of the SPP waves [118, 119]. Subwavelength-resolution SPR microscopy images were obtained experimentally and theoretically with a far-field microscope [40, 113, 114, 120]. Subsequent high-resolution SPR microscopy studies with high numerical aperture objective lenses focused exclusively on amplitude images and did not discuss the possibility of far-field high-resolution SPR phase microscopy [36, 39, 121–125]. SPR phase microscopy was performed for the first time with a scanning SPR microscope (SSPRM) [41, 44] that combines (i) a heterodyne interferometer, (ii) a high numerical aperture objective lens, and (iii) a three-dimensional piezo-scanning device (Fig. 14a). At a given position (X, Y), the SSPRM reconstructs a V(Z) integral of the backward-reflected electric field r(X, Y, θ, φ) over the radial θ and azimuthal φ angles (Fig. 14b). This method is directly inspired by scanning acoustic microscopy [38, 126–128]. For a linearly polarized light beam (TM or TE) before the objective lens, a mixture of both TM and TE polarizations is obtained after crossing the objective lens:

V(X, Y, Z) = ∫_{0}^{2π} ∫_{θmin}^{θmax} P²(θ) [rTM(θ) cos²φ + rTE(θ) sin²φ] e^(4iπn1 Z cos θ/λ0) sin θ dθ dφ,  (33)

where Z is the focus position of the light beam by the objective lens with respect to the gold//dielectric interface, P(θ) is the pupil function of the objective lens, n1 is the index of the coupling medium (lens and matching index oil), and (2π/λ0) is the wave number. Ideally, the pupil function P should be as constant as possible; in experimental situations, it rather looks like a slowly varying Gaussian. The three properties (i–iii) mentioned above confer on the SSPRM a unique ability to measure locally the phase of the reflected field Φ = Φ(V(Z)) and its spatial derivative dΦ/dZ, which none of the other systems afforded previously.


Fig. 14 SSPRM setup and experimental V (Z ) curves from a glass//45 nm gold//water interface. (a) Microscope setup with the fully fibered interferometer. OI: optical isolator, WP: half-wavelength waveplates, L: lens, PMOF: polarization-maintaining optical fiber, OFC: optical fiber coupler, AOM: acousto-optics modulator, D: detector, CL: collimating lens. (b) Experimental |V (Z )| curves for TM (solid line) and TE (dashed line) polarizations recorded on a 45 nm gold film in contact with water. (c) |P2(θ1)r(θ1)| versus θ1 computed by inverse FFT transform of V (Z ). (d) Phase ϕ (θ1) of r(θ1) (Reprinted with permission from Applied Optics [44])

Actually dΦ/dZ, similarly to the SPR phase ϕ and its angular derivative dϕ/dθ1, crosses singular points at precise values of Z [129]. These Z positions correspond to highly contrasted phase images and hence increase the sensitivity of SPR imaging systems. Given that we use a pure TM polarization, the coupling of the incoming


light through a high numerical aperture objective lens simultaneously provides a structured illumination and focuses SPPs [130]. With the change of variable [131] vz = (2n1/λ0) cos θ, dvz = −(2n1/λ0) sin θ dθ, the V(Z) curve takes the form of a Fourier transform of the reflected field multiplied by the square of the objective lens pupil function P:

VTM,TE(Z) = (λ0π/2n1) ∫_{vzmin}^{vzmax} P²(vz) rTM,TE(vz) e^(2πivzZ) dvz,  (34)

where vzmin = (2n1/λ0) cos θmax and vzmax = (2n1/λ0) cos θmin. If the incoming beam is not diaphragmed for lower θ values, we have θmin = 0 and vzmax = 2n1/λ0. Given an objective lens of numerical aperture NA, θmax = arcsin(NA/n1) and vzmin = (2n1/λ0)(1 − NA²/n1²)^(1/2).

With a high NA objective lens, pure TM (resp. TE) polarization requires special optical or opto-electronic devices to convert the light polarization from linear to radial for VTM(Z) (resp. azimuthal for VTE(Z)) [122, 132–134]. From experimental V(Z) curves, although they are inevitably limited in the range of Z values, we have shown that the modulus and the phase of r(θ1)P²(θ1) can be recovered by inverse Fourier transform of the complex V(Z) signals [43, 44] (Fig. 14c, d). As compared to prism-coupled SPR reflectivity devices, the advantage of the SSPRM is primarily a very strong confinement of light and a drastic gain in resolution [44]. Cancelling the pupil function P outside the angular interval [θmin = 0, θmax] and considering it as a constant inside this interval, we can rewrite the integral in Eq. 34 as:

VTM,TE(Z) = (λ0π/2n1) ∫_{−∞}^{+∞} rTM,TE(vz) e^(2πivzZ) dvz.  (35)

V(Z) corresponds to the Fourier transform of the complex reflectivity r(v). Putting r(θ) = cst = 1, V(Z) can be computed analytically:

V(Z) = (λ0/(2n1Z)) sin[2πn1(1 − cos θmax)Z/λ0] e^(2πin1(1 + cos θmax)Z/λ0).  (36)

The complex function V(Z) written in Eq. 36 has two modulation periods:

ΔZ1 = λ0/(n1(1 − cos θmax)) and ΔZ2 = λ0/(n1(1 + cos θmax)),  (37)

respectively. With NA = 1.45, n1 = 1.5151, θmax ≈ 1.28 rad, λ0 = 632.8 nm, we get ΔZ1 ≈ 588 nm and ΔZ2 ≈ 324 nm. Thus |V(Z)| behaves as the modulus of a sinc function (|sin(t)/t|) of period ΔZ1/2 = 294 nm, centered around zero.
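The periods in Eq. 37 can be cross-checked by direct quadrature of Eq. 34 with P = r = 1 (a pure pupil response); the zeros of |V(Z)| should then be spaced by ΔZ1/2:

```python
import cmath, math

lam, n1, NA = 632.8, 1.5151, 1.45                 # nm
theta_max = math.asin(NA / n1)

def V(Z, steps=800):
    """Eq. 34 with P = 1 and r = 1, as a midpoint Riemann sum over the aperture."""
    s, dth = 0j, theta_max / steps
    for i in range(steps):
        th = (i + 0.5) * dth
        vz = 2 * n1 * math.cos(th) / lam
        s += cmath.exp(2j * math.pi * vz * Z) * math.sin(th) * dth
    return s

dz1 = lam / (n1 * (1 - math.cos(theta_max)))       # Eq. 37, ~588 nm
mod = [abs(V(z)) for z in range(0, 1500, 2)]       # sample |V| every 2 nm
zeros = [2 * i for i in range(1, len(mod) - 1)
         if mod[i] < mod[i - 1] and mod[i] < mod[i + 1]]
print("|V| minima at Z =", zeros[:3], "nm; dz1/2 = %.0f nm" % (dz1 / 2))
```

The first minima land near multiples of 294 nm, confirming the sinc behavior discussed above.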


V(Z) Response to a Discrete Phase Jump

Both SPR and SPRWG reflectivity phases ϕ(θ1) display very sharp jumps at resonance (Figs. 12 and 13), with a strong enhancement of the GH shifts. The V(Z) function is also very strongly impacted by these phase jumps: when uncoupling the variations of ϕ(θ1) from those of |r(θ1)|, one can show that the V(Z) shape is predominantly determined by the phase drop [129]. We revisit this demonstration here, modeling ϕ(θ1) by a discrete phase jump at resonance and keeping |r| = cst = 1. We simplify the phase profile by a step at θ1r (the phase derivative is infinite at SPR and/or SPRWG resonance):

ϕ = 0 for θ1 < θ1r (v > vr), on the interval I2,  (38)

and

ϕ = Δϕ for θ1 > θ1r (v < vr), on the interval I1,  (39)

where vr = (2n1/λ0) cos θ1r. Then,

V(Z) = VI1(Z) + VI2(Z) = (λ0π/2n1) ∫_{vmin}^{vr} e^(2πivZ) e^(iΔϕ) dv + (λ0π/2n1) ∫_{vr}^{vmax} e^(2πivZ) dv,  (40)

V(Z) = (λ0/(2n1Z)) { e^(iΔϕ) sin[2πn1Z(cos θ1r − cos θmax)/λ0] e^(2iπn1Z(cos θ1r + cos θmax)/λ0) + sin[2πn1Z(cos θmin − cos θ1r)/λ0] e^(2iπn1Z(cos θmin + cos θ1r)/λ0) }.  (41)

If Δϕ is strictly an integer multiple of 2π, the phase drop does not have any impact on the V(Z) curve and we recover the sinc function of Eq. 36, characteristic of the objective lens pupil function. When Δϕ ≠ 2πm, VI2(Z) is a modulated sinc function with two characteristic periods:

ΔZI2,1 = λ0/(n1(cos θmin − cos θ1r))  (42)

and

ΔZI2,2 = λ0/(n1(1 + cos θ1r)).  (43)

Putting θmin = 0 in Eq. 42 gives the formula proposed by Somekh et al. in 2000 [38]. For θ1r = θ1SPR = 1.265 rad (corresponding to the plasmon resonance for a


50 nm gold film in contact with water (Fig. 9)), we get ΔZI2,1/2 ≈ 299 nm and ΔZI2,2 ≈ 321 nm. The first term VI1(Z) in Eq. 41 is also a modulated sinc function, characterized by two periods:

ΔZI1,1 = λ0/(n1(cos θ1r − cos θmax)) and ΔZI1,2 = λ0/(n1(cos θ1r + cos θmax)).  (44)

If θmax = π/2, ΔZI1,1 = ΔZI1,2 = λ0/(n1 cos θ1r) ≈ 1390 nm. The period of |V(Z)| estimated from Eq. 42 (ΔZI2,1/2 = 299 nm) is very close to the period of the modulus of the sinc function produced by an objective lens of NA = 1.45 alone (294 nm). It may therefore be quite hard to distinguish a resonant mode from the windowing of the objective lens, since θmax ≈ θSPR. It will therefore be necessary to select an objective lens with a greater numerical aperture and optical index for measurements in water media. However, we will keep θ1 ∈ [0, π/2] for our computations, as an absolute limit. In that case ΔZ1/2 ≈ 209 nm and ΔZ2 ≈ 417 nm. Importantly, the modulation period ΔZr given by a resonance phase jump decreases with θ1r: the larger θ1r, the faster the modulations of |V(Z)| coming from the SPRWG or SPR excitation. We note that these sinc functions are all centered around Z = 0, but their summation in complex form may lead to shifted and asymmetric V(Z) curves. In simple cases, we have shown that a wavelet-based space-frequency (Z, v) decomposition of V(Z) functions can efficiently separate the different sources of periodic modulations of these V(Z) curves and select those which correspond to SPR reflectivity resonances [42].
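As a quick self-test of the closed form Eq. 41, one can compare it with a direct quadrature of Eq. 40 (the step position θ1r and the jump Δϕ below are arbitrary illustrative choices):

```python
import cmath, math

lam, n1 = 632.8, 1.5151                         # nm
th_min, th_r, th_max = 0.0, 1.265, math.pi / 2
dphi = 1.7                                      # arbitrary, not a multiple of 2*pi
cos_min, cos_r, cos_max = (math.cos(t) for t in (th_min, th_r, th_max))
v_min, v_r, v_max = (2 * n1 * c / lam for c in (cos_max, cos_r, cos_min))

def V_num(Z, steps=20000):
    """Eq. 40: weight exp(i*dphi) on [v_min, v_r] (theta > theta_r), 1 on [v_r, v_max]."""
    s, dv = 0j, (v_max - v_min) / steps
    for i in range(steps):
        vz = v_min + (i + 0.5) * dv
        step = cmath.exp(1j * dphi) if vz < v_r else 1.0
        s += step * cmath.exp(2j * math.pi * vz * Z) * dv
    return (lam * math.pi / (2 * n1)) * s

def V_closed(Z):
    """Eq. 41: sum of two modulated sinc terms."""
    t1 = cmath.exp(1j * dphi) \
         * math.sin(2 * math.pi * n1 * Z * (cos_r - cos_max) / lam) \
         * cmath.exp(2j * math.pi * n1 * Z * (cos_r + cos_max) / lam)
    t2 = math.sin(2 * math.pi * n1 * Z * (cos_min - cos_r) / lam) \
         * cmath.exp(2j * math.pi * n1 * Z * (cos_min + cos_r) / lam)
    return lam / (2 * n1 * Z) * (t1 + t2)

err = max(abs(V_num(Z) - V_closed(Z)) for Z in (150.0, 400.0, 777.0))
print("max |quadrature - closed form| = %.2e" % err)
```

The two evaluations agree to within the quadrature error, which confirms the bookkeeping of the two modulated sinc terms.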

V(Z) Responses from Glass//Water and Glass//Gold//Water Interfaces

In real situations, both the modulus and the phase of the reflectivity r contribute to V(Z), which can be modeled as the superimposition of a set of sinc-type functions with prefactors that depend on the reflectivity amplitude. Figure 15 compares the V(Z) responses for different model interfaces: (i) a glass//water interface (black), (ii) a glass//gold film (25 nm)//water interface (blue), (iii) a glass//gold film (50 nm)//water interface (red), and (iv) a hypothetical interface with a constant r (green). The constant reflectivity gives a fully symmetric (green) curve (Fig. 15c), as predicted by Eq. 36, and the unwrapped phase Φ(Z) (Fig. 15d) follows a striking sawtooth profile with π jumps that correspond precisely to the Z positions where |V(Z)| = 0, namely, R(V(Z)) = 0 and I(V(Z)) = 0. These π jumps are quite regular for the constant reflectivity situation (green curve). The moduli of V(Z) and the unwrapped phases Φ(Z) for glass//water and glass//gold//water (25 and 50 nm thick gold films) interfaces are much less regular. The |V(Z)| curves are no longer symmetric, and their maxima shift noticeably. These |V(Z)| shifts are positive (towards positive Z values) for the glass//water interface (200 nm) and negative for a glass//gold (50 nm)//water interface (−80 nm). Surprisingly, the thinner gold film gives two maxima, at 100 and 190 nm. These shifts of the V(Z) curves are the


Fig. 15 Reflectivity r(θ1) and V(Z) functions for a TM polarized beam at a glass//water interface, without (black lines) or with an intermediate gold film of width 50 nm (resp. 25 nm): red (resp. blue) lines. (a) |r|(θ1) versus θ1. (b) ϕ(θ1)/π versus θ1. (c) |V(Z)| versus Z. (d) Φ(Z)/π versus Z. The green curves correspond to a real and constant reflectivity r

consequence of the GH shifts, and their signs also depend on the changes of sign of ∂ϕ/∂θ1 at the TIR angle or at the SPR angle (or both) [43, 120, 129].

SPRWG Microscopy on Thick Dielectric Layers

Thick dielectric layers (≳ λ0) deposited on thin gold films can behave as asymmetric waveguides and favor the emergence of resonance modes, as predicted by Eq. 30. This phenomenon is illustrated in Figs. 16 and 17 on the reflectivity and V(Z) curves, computed for a 1 μm thick dielectric film (nwg = 1.4) sandwiched in between a thin gold film (50 and 25 nm, respectively) and water. For these gold thicknesses, the SPR resonance (in TM polarization) is pushed to the limit of the incident angle θ1 domain and is no longer useful for biosensing tasks. Nevertheless, a very sharp dip appears at incident angles close to θ1TIR, which brings extra biosensing possibilities in another angular domain. Importantly, both TM and TE polarizations give SPRWG resonance modes, and it appears that the TE polarization gives sharper and greater GWR phase jumps for thinner gold films. The discrimination of the TM and TE polarizations from the |V(Z)| curves is more marked with a 25 nm (Fig. 17c) than with a 50 nm thick gold film (Fig. 16c), in part because for the thin gold film the reflectivity is significantly smaller for incident angles θ1 below θ1TIR. Another very interesting feature, which occurs for TM polarization and thinner gold films with a dielectric layer of 1 μm, is the quasi-linear variation of the phase of V(Z). This phenomenon occurs because the |V(Z)| curve never gets very close to


Fig. 16 Reflectivity r(θ1) and V (Z ) functions for a 1 μm thick dielectric layer nwg = 1.4 on a 50 nm gold film with a TM (red) (resp. TE (blue)) polarized light. n1 = 1.5151, n2 = 1.335, λ0 = 632.8 nm


Fig. 17 Reflectivity r(θ1) and V (Z ) functions for a 1 μm thick dielectric layer nwg = 1.4 on a 25 nm gold film with a TM (red) (resp. TE (blue)) polarized light. n1 = 1.5151, n2 = 1.335, λ0 = 632.8 nm


zero in that case; the complexity of the |r|(θ1) (Fig. 17a) and ϕ(θ1) (Fig. 17b) curves produces a superimposition of at least three spatial frequency modulations (|r|(θ1) has three minima, and ∂ϕ(θ1)/∂θ1 has three extrema). From our previous discussion of GH shifts (sections “Total Internal Reflection” and “Sensitivity of SPR to Dielectric Layers: From Evanescent to Guided Waves”), it is important to realize that each local extremum of the angular phase derivative (giving a GH shift) is likely to produce a different frequency in the |V(Z)| curve.

SPR and SPRWG Microscopy on Living Cells

Direct and nonintrusive observation of adherent cells on solid substrates with objective-coupled surface plasmon resonance microscopy has become very competitive with other unstained microscopy methods. The first trials were performed on fixed cells in air [41, 42, 135, 136] and then in liquid medium [137–139]. However, very few experiments were carried out on living cells [140–143], and exclusively on short-term recordings. Better resolution was achieved by combining high-resolution SSPRM (Fig. 14a) with a radially polarized light (pure TM) [124, 128, 137, 143], the cell–substrate gap distance being estimated with bare gold surfaces as a control. A further improvement of this radial-SSPRM microscopy was obtained by introducing a fibered interferometer [43, 44, 144], which markedly improved the stability of the device and hence its temporal sensitivity over periods of several days. Time-lapse recordings are particularly interesting for the characterization of the dynamics of living cells: given that a full resolution (500 × 500 pixels) image capture takes about 1 min, the range of temporal scales affordable by this microscopy is more than three decades, without detectable drift of the optical setup. The V(X, Y) images shown in Fig. 18b–d were recorded with an unfibered version of this radially polarized microscope. Three V(Z) curves were recorded: two on the cell body (red and green) and one outside the cell (blue). These V(Z) curves are shifted artificially, putting their maxima at Z = 0. They all present a single maximum, similar to the previous computations shown in Fig. 15c for a 50 nm thick gold film. We compare in Fig. 18 the amplitude (Fig. 18b, d) and phase (Fig. 18c, e) images of the same zone, centered at an IMR90 cell nucleus. An optimum contrast of the images is obtained when choosing the Z focus at a local maximum of ∂|V(Z)|/∂Z [44]. The two Z foci selected in Fig. 18b, d are marked by two black vertical lines (b and d) in Fig. 18a. The first focus position Z = 0.8 μm concentrates on cellular structures close to the gold surface, such as those surrounding the nucleus (Golgi and endoplasmic reticulum), which have been immobilized upon cell dehydration by ethanol. The second Z focus position is chosen after the two main side lobes of the |V(Z)| curves, where the quasi-complete damping of V(Z) means that the microscope becomes sensitive to cell body structures beyond the evanescent field. In other words, at this defocus, the cell itself plays the role of a dielectric waveguide placed in air. For instance, in Fig. 18d, we note the emergence of bright disks, inside the nucleus domain, with a size of about 1–2 μm. Also very interesting is the information that we can retrieve from the phase images (Fig. 18c, e). These phase images were coded with a continuous grey scale, and surprisingly they are not smooth grey images but present


Fig. 18 V (X, Y) amplitude and phase images reconstructed from an IMR90 fibroblast for different Z values, in TM (radial) polarization. (a) |V (Z )| curves selected at the position marked by colored symbols in panel (b). (b and d) Modulus images |V (X, Y)| scanned at position Z = 0.8 and 1.2 μm. (c and e) Phase image Φ(X, Y) scanned at the same positions as (b, d). dgold = 45 nm. The SSPRM images were recorded in air (fixed cells) with a 60X objective lens with NA = 1.45 (Olympus) (Reprinted with permission of Optics Express [41])

different domains, each with a specific grey level characteristic of phase plateaus. On the boundaries of these domains, very sharp phase jumps appear, which delimit the nuclear membrane and the extracellular membranes (bottom of the images). Comparing the two focus positions in Fig. 18c, e, we note that the dynamic range of the phase has increased


Fig. 19 Comparison of (a) a SSPRM image and (b) a topographic AFM image of an erythrocyte. Scale bar: 10 μm

by a factor 4/3 from Z = 0.8 to 1.2 μm. This type of observation was made recently on living myoblasts [44], where finely sampled V(Z) curves were recorded on 2D grids to allow the estimation of the local refractive index of adherent cells. Some intracellular structures were observed in liquid and, because they behave as intrinsic intracellular waveguides, they lead to highly contrasted phase images. Red blood cells are very interesting objects for quantitative phase microscopy, since their internal body, and therefore their optical index, can be assumed to be homogeneous [17]. They also give very nice height images with SPRWG microscopy (Fig. 19a). Actually, since their height variation remains limited to a few microns (in the example shown in Fig. 19a, the red blood cell was partly dried before imaging in air), the amplitude |V(Z)| or the phase at a given Z focus position may suffice to recover the internal optical index of the cell. In Fig. 19, we compare a |V(Z)| image (Fig. 19a) with a topographic image recorded in contact mode with an atomic force microscope (Fig. 19b). We have also recorded full |V(Z)| curves, from which we get an estimate of 1.45 for the index of a partly dried erythrocyte, slightly larger than previously estimated with digital holography microscopy (1.4 ± 0.2) [145–147].

Conclusion

We have surveyed in this chapter the development of nonintrusive microscopy methods based on evanescent waves, providing a formal support to the experimental observations that will help improve these tools further. SPR microscopy appears as a powerful trick to enhance the electromagnetic field further. Actually, it is more than that, because recent experimental studies have also shown that SPR can be monitored by an electric polarization of the gold film interface [148] and that it can be modulated in time during a biosensing experiment. Materials other than metals, such as negative index materials (NIM, or left-handed materials) [149], can be used to


significantly enhance the field depth for SPR imaging and to increase the sensitivity to changes in refractive index in the bulk solution. Another interesting outcome is the possible excitation of SPPs by a TE polarized light at the interface between a dielectric and a metamaterial [150], which is impossible with positive refractive index materials according to Maxwell's equations. Importantly, recent efforts have been invested to push SPR microscopy beyond the optical or visible spectral range, to the infrared [49–52, 151] and terahertz domains [152, 153]. When Otto [25] and Kretschmann and Raether [26] designed the optical system for exciting surface plasmon resonance, they could hardly have imagined how far this small system would lead. High-resolution surface plasmon microscopy has not yet fully demonstrated its whole application potential. In particular, its application to the diagnosis of living systems should be further developed, in a similar way as quantitative phase microscopy is now becoming a reference for nonintrusive imaging of living cells [17, 21, 154–157]. As compared to holographic or interferometric phase imaging methods, SPR and SPRWG have a decoupled sensitivity. They also bring the possibility to uncouple the thickness and the optical index variations of submicron scale structures [158] when a guided wave mode is implied [43]. The improvement of wave guiding methods should also open new challenging perspectives for SPR-based sensor systems, since they could be used for intravital diagnosis without the need to stain the region of interest.

Acknowledgements We are indebted to Centre National de la Recherche Scientifique, Ecole Normale Supérieure de Lyon, Lyon Science Transfert (projet L659), Région Rhône Alpes (CIBLE Program 2011), INSERM (AAP Physique Cancer 2012), and the French Agency for Research (ANR-AA-PPPP-005, EMMA 2011) for their financial support.

References

1. Fan X, White IM, Shopova SI, Zhu H, Suter JD, Sun Y (2008) Sensitive optical biosensors for unlabeled targets: a review. Anal Chim Acta 620(1–2):8
2. Zourob M, Lakhtakia A (2010) Optical guided-wave chemical and biosensors I. Springer, Berlin/Heidelberg
3. Zourob M, Lakhtakia A (2010) Optical guided-wave chemical and biosensors II. Springer, Berlin/Heidelberg
4. Horvath R, Pedersen HC, Skivesen N, Svanberg C, Larsen NB (2005) Fabrication of reverse symmetry polymer waveguide sensor chips on nanoporous substrates using dip-floating. J Micromech Microeng 15(6):1260
5. Fang Y, Ferrie AM, Fontaine NH, Mauro J, Balakrishnan J (2006) Resonant waveguide grating biosensor for living cell sensing. Biophys J 91(5):1925
6. Velasco-Garcia MN (2009) Optical biosensors for probing at the cellular level: a review of recent progress and future prospects. Semin Cell Dev Biol 20(1):27
7. Zernike F (1955) How I discovered phase contrast. Science 121(3141):345
8. Stephens DJ, Allan VJ (2003) Light microscopy techniques for live cell imaging. Science (New York, NY) 300(5616):82
9. Bereiter-Hahn J, Fox CH, Thorell B (1979) Quantitative reflection contrast microscopy of living cells. J Cell Biol 82:767

648

F. Argoul et al.

10. Verschueren H (1985) Interference reflection microscopy in cell biology: methodology and applications. J Cell Sci 75:279
11. Popescu G, Deflores LP, Vaughan JC, Badizadegan K, Iwai H, Dasari RR, Feld MS (2004) Fourier phase microscopy for investigation of biological structures and dynamics. Opt Lett 29(21):2503
12. Rappaz B, Marquet P, Cuche E, Emery Y, Depeursinge C, Magistretti P (2005) Measurement of the integral refractive index and dynamic cell morphometry of living cells with digital holographic microscopy. Opt Express 13(23):9361
13. Tychinskii VP (2001) Coherent phase microscopy of intracellular processes. Physics-Uspekhi 44(6):617
14. Popescu G, Ikeda T, Dasari RR, Feld MS (2006) Diffraction phase microscopy for quantifying cell structure and dynamics. Opt Lett 31(6):775
15. Tychinskii VP (2007) Dynamic phase microscopy: is a "dialogue" with the cell possible? Physics-Uspekhi 50(5):513
16. Bon P, Maucort G, Wattellier B (2009) Quadriwave lateral shearing interferometry for quantitative phase microscopy of living cells. Opt Express 17(15):468
17. Popescu G (2011) Quantitative phase imaging of cells and tissues. McGraw Hill, New York
18. Bon P, Savatier J, Merlin M, Wattellier B, Monneret S (2012) Optical detection and measurement of living cell morphometric features with single-shot quantitative phase microscopy. J Biomed Opt 17(7):076004
19. Martinez-Torres C, Berguiga L, Streppa L, Boyer-Provera E, Schaeffer L, Elezgaray J, Arneodo A, Argoul F (2014) Diffraction phase microscopy: retrieving phase contours on living cells with a wavelet-based space-scale analysis. J Biomed Opt 19(3):036007
20. Martinez-Torres C, Laperrousaz B, Berguiga L, Boyer-Provera E, Elezgaray J, Nicolini FE, Maguer-Satta V, Arneodo A, Argoul F (2015) Deciphering the internal complexity of living cells with quantitative phase microscopy: a multiscale approach. J Biomed Opt 20(9):096005
21. Martinez-Torres C, Laperrousaz B, Berguiga L, Boyer-Provera E, Elezgaray J, Nicolini FE, Maguer-Satta V, Arneodo A, Argoul F (2016) In: Popescu G, Park Y (eds) Quantitative phase imaging II. SPIE proceedings, vol 9718. SPIE, Bellingham, WA, p 97182C
22. Li SY, Ramsden JJ, Prenosil JE, Heinzle E (1994) Measurement of adhesion and spreading kinetics of baby hamster kidney and hybridoma cells using an integrated optical method. Biotechnol Prog 10(5):520
23. Tiefenthaler K, Lukosz W (1989) Sensitivity of grating couplers as integrated-optical chemical sensors. J Opt Soc Am B 6(2):209
24. Kunz RE, Cottier K (2006) Optimizing integrated optical chips for label-free (bio-)chemical sensing. Anal Bioanal Chem 384(1):180
25. Otto A (1968) Excitation of nonradiative surface plasmon waves in silver by the method of frustrated total reflection. Z Phys 216:398
26. Kretschmann E, Raether H (1968) Radiative decay of non-radiative surface plasmons excited by light. Z Naturforsch A 23:2135
27. Raether H (1988) Surface plasmons on smooth and rough surfaces and on gratings. Springer, Berlin/Heidelberg
28. Nelson BP, Grimsrud TE, Liles MR, Goodman RM, Corn RM (2001) Surface plasmon resonance imaging measurements of DNA and RNA hybridization adsorption onto DNA microarrays. Anal Chem 73(1):1
29. Homola J (2003) Present and future of surface plasmon resonance biosensors. Anal Bioanal Chem 377(3):528
30. Nikitin PI, Grigorenko AN, Beloglazov AA, Valeiko MV, Savchuk AI, Savchuk OA, Steiner G, Kuhne C, Huebner A, Salzer R (2000) Surface plasmon resonance interferometry for micro-array biosensing. Sensors Actuators A Phys 85(1–3):189
31. Notcovich AG, Zhuk V, Lipson SG (2000) Surface plasmon resonance phase imaging. Appl Phys Lett 76(13):1665


32. Grigorenko AN, Beloglazov AA, Nikitin PI (2000) Dark-field surface plasmon resonance microscopy. Opt Commun 174(1):151
33. Burke JJ, Stegeman GI, Tamir T (1986) Surface-polariton-like waves guided by thin, lossy metal films. Phys Rev B 33(8):5186
34. Kano H, Mizuguchi S, Kawata S (1998) Excitation of surface-plasmon polaritons by a focused laser beam. J Opt Soc Am A 15(4):1381
35. Kano H, Knoll W (2000) A scanning microscope employing localized surface-plasmon polaritons as a sensing probe. Opt Commun 182:11
36. Stabler G, Somekh MG, See CW (2004) High-resolution wide-field surface plasmon microscopy. J Microsc 214(3):328
37. Zhang J, See CW, Somekh MG, Pitter MC, Liu SG (2004) Widefield surface plasmon microscopy with solid immersion excitation. Appl Phys Lett 85(22):5451
38. Somekh MG, Liu SG, Velinov TS, See CW (2000) Optical V(z) for high-resolution 2π surface plasmon microscopy. Opt Lett 25(11):823
39. Berguiga L, Zhang S, Argoul F, Elezgaray J (2007) High-resolution surface-plasmon imaging in air and in water: V(z) curve and operating conditions. Opt Lett 32(5):509
40. Somekh MG, Stabler G, Liu S, Zhang J, See CW (2009) Wide field high resolution surface plasmon interference microscopy. Opt Lett 34(20):3110
41. Berguiga L, Roland T, Monier K, Elezgaray J, Argoul F (2011) Amplitude and phase images of cellular structures with a scanning surface plasmon microscope. Opt Express 19(7):6571
42. Boyer-Provera E, Rossi A, Oriol L, Dumontet C, Plesa A, Berguiga L, Elezgaray J, Arneodo A, Argoul F (2013) Wavelet-based decomposition of high resolution surface plasmon microscopy V(z) curves at visible and near infrared wavelengths. Opt Express 21(6):7456
43. Berguiga L, Boyer-Provera E, Martinez-Torres C, Elezgaray J, Arneodo A, Argoul F (2013) Guided wave microscopy: mastering the inverse problem. Opt Lett 38(21):4269
44. Berguiga L, Streppa L, Boyer-Provera E, Martinez-Torres C, Schaeffer L, Elezgaray J, Arneodo A, Argoul F (2016) Time-lapse scanning surface plasmon microscopy of living adherent cells with a radially polarized beam. Appl Optics 55(6):1216
45. Tien PK (1977) Integrated optics and new wave phenomena in optical waveguides. Rev Mod Phys 49(2):361
46. Salamon Z, Macleod HA, Tollin G (1997) Surface plasmon resonance spectroscopy as a tool for investigating the biochemical and biophysical properties of membrane protein systems. II: Applications to biological systems. Biochim Biophys Acta 1331(2):131
47. Hickel W, Knoll W (1990) Surface plasmon optical characterization of lipid monolayers at 5 μm lateral resolution. J Appl Phys 67(8):3572
48. Aust EF, Knoll W (1993) Electrooptical waveguide microscopy. J Appl Phys 73(6):2705
49. Bivolarska M, Velinov T, Stoitsova S (2006) Guided-wave and ellipsometric imaging of supported cells. J Microsc 224:242
50. Golosovsky M, Lirtsman V, Yashunsky V, Davidov D, Aroeti B (2009) Mid-infrared surface-plasmon resonance: a novel biophysical tool for studying living cells. J Appl Phys 105(10):102036
51. Yashunsky V, Marciano T, Lirtsman V, Golosovsky M, Davidov D, Aroeti B (2012) Real-time sensing of cell morphology by infrared waveguide spectroscopy. PLoS One 7(10):e48454
52. Yashunsky V, Kharilker L, Zlotkin-Rivkin E, Rund D, Melamed-Book N, Zahavi EE, Perlson E, Mercone S, Golosovsky M, Davidov D, Aroeti B (2013) Real-time sensing of enteropathogenic E. coli-induced effects on epithelial host cell height, cell-substrate interactions, and endocytic processes by infrared surface plasmon spectroscopy. PLoS One 8(10):e78431
53. Knoll W (1998) Interfaces and thin films as seen by bound electromagnetic waves. Annu Rev Phys Chem 49:569
54. Wolter H (1950) Untersuchungen zur Strahlversetzung bei Totalreflexion des Lichtes mit der Methode der Minimumstrahlkennzeichnung. Z Naturforsch A 5(3):143


55. Goos F, Hänchen H (1947) Ein neuer und fundamentaler Versuch zur Totalreflexion. Ann Phys 6(1):333
56. Artmann K (1948) Berechnung der Seitenversetzung des totalreflektierten Strahles. Ann Phys 6(2):87
57. McGuirk M, Carniglia CK (1977) An angular spectrum representation approach to the Goos-Hänchen shift. J Opt Soc Am 67(1):103
58. Puri A, Birman JL (1986) Goos-Hänchen beam shift at total internal reflection with application to spatially dispersive media. J Opt Soc Am A 3(4):543
59. Götte JB, Aiello A, Woerdman JP (2008) Loss-induced transition of the Goos-Hänchen effect for metals and dielectrics. Opt Express 16(6):3961
60. Horowitz BR, Tamir T (1973) Unified theory of total reflection phenomena at a dielectric interface. Appl Phys 1(1):31
61. Novotny L, Grober RD, Karrai K (2001) Reflected image of a strongly focused spot. Opt Lett 26(11):789
62. Novotny L, Hecht B (2006) Principles of nano-optics. Cambridge University Press, Cambridge
63. Steyer JA, Almers W (2001) A real-time view of life within 100 nm of the plasma membrane. Nat Rev Mol Cell Biol 2(4):268
64. Ambrose EJ (1956) A surface contact microscope for the study of cell movements. Nature 178:1194
65. Byrne GD, Pitter MC, Zhang J, Falcone FH, Stolnik S, Somekh MG (2008) Total internal reflection microscopy for live imaging of cellular uptake of sub-micron non-fluorescent particles. J Microsc 231(Pt 1):168
66. Choi R (2015) Design and characterisation of a label free evanescent waveguide microscope. MPhil thesis, University of Nottingham
67. Rädler J, Sackmann E (1993) Imaging optical thicknesses and separation distances of phospholipid vesicles at solid surfaces. J Phys II 3:727
68. Limozin L, Sengupta K (2009) Quantitative reflection interference contrast microscopy (RICM) in soft matter and cell adhesion. ChemPhysChem 10:2752
69. Herold KE, Rasooly A (2012) Biosensors and molecular technologies for cancer diagnostics. CRC Press, Boca Raton
70. Hickel W, Knoll W (1990) Optical waveguide microscopy. Appl Phys Lett 57(13):1286
71. Thoma F, Langbein U, Mittler-Neher S (1997) Waveguide scattering microscopy. Opt Commun 134:16
72. Hassanzadeh A, Nitsche M, Mittler S, Armstrong S, Dixon J, Langbein U (2008) Waveguide evanescent field fluorescence microscopy: thin film fluorescence intensities and its application in cell biology. Appl Phys Lett 92(23):233503
73. Nahar Q (2014) Oriented collagen and applications of waveguide evanescent field scattering (WEFS) microscopy. PhD thesis, University of Western Ontario
74. Grandin HM, Städler B, Textor M, Vörös J (2006) Waveguide excitation fluorescence microscopy: a new tool for sensing and imaging the biointerface. Biosens Bioelectron 21(8):1476
75. Agnarsson B, Ingthorsson S, Gudjonsson T, Leosson K (2009) Evanescent-wave fluorescence microscopy using symmetric planar waveguides. Opt Express 17(7):5075
76. Horvath R, Pedersen HC, Skivesen N, Selmeczi D, Larsen NB (2005) Monitoring of living cell attachment and spreading using reverse symmetry waveguide sensing. Appl Phys Lett 86(7):071101
77. Agnarsson B, Lundgren A, Gunnarsson A, Rabe M, Kunze A, Mapar M, Simonsson L, Bally M, Zhdanov VP, Höök F (2015) Evanescent light-scattering microscopy for label-free interfacial imaging: from single sub-100 nm vesicles to live cells. ACS Nano 9(12):11849
78. Binnig G, Quate CF (1986) Atomic force microscope. Phys Rev Lett 56(9):930
79. Betzig E, Trautman JK, Harris TD, Weiner JS (1991) Breaking the diffraction barrier: optical microscopy on the nanometer scale. Science 251:1468


80. Bailey B, Farkas DL, Lansing Taylor D, Lanni F (1993) Enhancement of axial resolution in fluorescence microscopy by standing-wave excitation. Nature 366:44
81. Cragg GE, So PT (2000) Lateral resolution enhancement with standing evanescent waves. Opt Lett 25(1):46
82. Frohn JT, Knapp HF, Stemmer A (2000) True optical resolution beyond the Rayleigh limit achieved by standing wave illumination. Proc Natl Acad Sci U S A 97(13):7232
83. Beck M, Aschwanden M, Stemmer A (2008) Sub-100-nanometre resolution in total internal reflection fluorescence microscopy. J Microsc 232(1):99
84. Streibl N (1984) Phase imaging by the transport equation of intensity. Opt Commun 49(1):6
85. Gustafsson MGL (2000) Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy. J Microsc 198(2):82
86. Gustafsson MGL (2005) Nonlinear structured-illumination microscopy: wide-field fluorescence imaging with theoretically unlimited resolution. Proc Natl Acad Sci U S A 102(37):13081
87. Gustafsson MGL, Shao L, Carlton PM, Wang CJR, Golubovskaya IN, Cande WZ, Agard DA, Sedat JW (2008) Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination. Biophys J 94(12):4957
88. Somekh MG, Hsu K, Pitter MC (2008) Resolution in structured illumination microscopy: a probabilistic approach. J Opt Soc Am A 25(6):1319
89. Saxena M, Eluru G, Gorthi SS (2015) Structured illumination microscopy. Adv Opt Photon 7:241
90. Ströhl F, Kaminski CF (2016) New frontiers in structured illumination microscopy. Optica 3(6):667
91. Müller M, Mönkemöller V, Hennig S, Hübner W, Huser T (2016) Open-source image reconstruction of super-resolution structured illumination microscopy data in ImageJ. Nat Commun 7:10980
92. Chen J, Xu Y, Lv X, Lai X, Zeng S (2013) Super-resolution differential interference contrast microscopy by structured illumination. Opt Express 21(1):112
93. So PT, Kwon HS, Dong CY (2001) Resolution enhancement in standing-wave total internal reflection microscopy: a point-spread-function engineering approach. J Opt Soc Am A 18(11):2833
94. Gliko O, Reddy GD, Anvari B, Brownell WE, Saggau P (2006) Standing wave total internal reflection fluorescence microscopy to measure the size of nanostructures in living cells. J Biomed Opt 11(6):064013
95. Chung E, Kim D, Cui Y, Kim YH, So PTC (2007) Two-dimensional standing wave total internal reflection fluorescence microscopy: superresolution imaging of single molecular and biological specimens. Biophys J 93(5):1747
96. Sentenac A, Belkebir K, Giovannini H, Chaumet PC (2009) High resolution total-internal-reflection fluorescence microscopy using periodically nanostructured glass slides. J Opt Soc Am A 26(12):2550
97. Shen H, Huang E, Das T, Xu H, Ellisman M, Liu Z (2014) TIRF microscopy with ultra-short penetration depth. Opt Express 22(9):10728
98. Brunstein M, Wicker K, Hérault K, Heintzmann R, Oheim M (2013) Full-field dual-color 100-nm super-resolution imaging reveals organization and dynamics of mitochondrial and ER networks. Opt Express 21(22):26162
99. Chung E, Kim D, So PTC (2006) Extended resolution wide-field optical imaging: objective-launched standing-wave total internal reflection fluorescence microscopy. Opt Lett 31(7):945
100. Kretschmann E (1978) The ATR method with focused light - application to guided waves on a grating. Opt Commun 26(1):41
101. Hale GM, Querry MR (1973) Optical constants of water in the 200-nm to 200-μm wavelength region. Appl Optics 12(3):555
102. Olmon RL, Slovick B, Johnson TW, Shelton D, Oh SH, Boreman GD, Raschke MB (2012) Optical dielectric function of gold. Phys Rev B 86(23):235147


103. Lirtsman V, Ziblat R, Golosovsky M, Davidov D, Pogreb R, Sacks-Granek V, Rishpon J (2005) Surface-plasmon resonance with infrared excitation: studies of phospholipid membrane growth. J Appl Phys 98(9):093506
104. Yin X, Hesselink L, Liu Z, Fang N, Zhang X (2004) Large positive and negative lateral optical beam displacements due to surface plasmon resonance. Appl Phys Lett 85(3):372
105. Oh GY, Kim DG, Kim HS, Choi YW (2009) Analysis of surface plasmon resonance with Goos-Hänchen shift using FDTD method. Proc SPIE 7218:72180J
106. Wang LG, Chen H, Zhu SY (2005) Large negative Goos-Hänchen shift from a weakly absorbing dielectric slab. Opt Lett 30(21):2936
107. Liu X, Cao Z, Zhu P, Shen Q, Liu X (2006) Large positive and negative lateral optical beam shift in prism-waveguide coupling system. Phys Rev E 73(5):056617
108. Chen B, Basaran C (2011) Statistical phase-shifting step estimation algorithm based on the continuous wavelet transform for high-resolution interferometry metrology. Appl Optics 50(4):586
109. Homola J (2006) Springer series on chemical sensors and biosensors. Springer, Berlin/Heidelberg
110. Rothenhäusler B, Knoll W (1987) Total internal diffraction of plasmon surface polaritons. Appl Phys Lett 51(11):783
111. Rothenhäusler B, Knoll W (1987) Plasmon surface polariton fields versus TIR evanescent waves for scattering experiments at surfaces. Opt Commun 63(5):301
112. Fu E, Foley J, Yager P (2003) Wavelength-tunable surface plasmon resonance microscope. Rev Sci Instrum 74(6):3182
113. Somekh MG, Liu S, Velinov TS, See CW (2000) High-resolution scanning surface-plasmon microscopy. Appl Optics 39(34):6279
114. Somekh MG, See CW, Goh J (2000) Wide field amplitude and phase confocal microscope with speckle illumination. Opt Commun 174:75
115. Hecht B, Bielefeldt H, Novotny L, Inouye Y, Pohl D (1996) Local excitation, scattering and interference of surface plasmons. Phys Rev Lett 77(9):1889
116. Velinov T, Somekh MG, Liu S (1999) Direct far-field observation of surface-plasmon propagation by photoinduced scattering. Appl Phys Lett 75(25):3908
117. Bouhelier A, Ignatovich F, Bruyant A, Huang C, Colas des Francs G, Weeber JC, Dereux A, Wiederrecht GP, Novotny L (2007) Surface plasmon interference excited by tightly focused laser beams. Opt Lett 32(17):2535
118. Dawson P, de Fornel F, Goudonnet JP (1994) Imaging of surface plasmon propagation and edge interaction using a photon scanning tunneling microscope. Phys Rev Lett 72(18):2927
119. Dawson P, Puygranier BAF (2001) Surface plasmon polariton propagation length: a direct comparison using photon scanning tunneling microscopy and attenuated total reflection. Phys Rev 63:1
120. Somekh MG (2002) Surface plasmon fluorescence microscopy: an analysis. J Microsc 206(2):120
121. Huang B, Wang W, Bates M, Zhuang X (2008) Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy. Science 319:810
122. Watanabe K, Horiguchi N, Kano H (2007) Optimized measurement probe of the localized surface plasmon microscope by using radially polarized illumination. Appl Optics 46(22):4985
123. Watanabe K, Terakado G, Kano H (2009) Localized surface plasmon microscope with an illumination system employing a radially polarized zeroth-order Bessel beam. Opt Lett 34(8):1180
124. Vander R, Lipson SG (2009) High-resolution surface-plasmon resonance real-time imaging. Opt Lett 34(1):37
125. Roland T, Berguiga L, Elezgaray J, Argoul F (2010) Scanning surface plasmon imaging of nanoparticles. Phys Rev B 81(23):235419


126. Atalar A (1978) An angular-spectrum approach to contrast in reflection acoustic microscopy. J Appl Phys 49:5130
127. Atalar A (1979) A physical model for the acoustic signatures. J Appl Phys 50:8237
128. Pechprasarn S, Somekh MG (2012) Surface plasmon microscopy: resolution, sensitivity and crosstalk. J Microsc 246(3):287
129. Argoul F, Roland T, Fahys A, Berguiga L, Elezgaray J (2012) Uncovering phase maps from surface plasmon resonance images: towards a subwavelength resolution. C R Phys 13(8):800
130. Hu ZJ, Tan PS, Zhu SW, Yuan XC (2010) Structured light for focusing surface plasmon polaritons. Opt Express 18(10):10864
131. Ilett C, Somekh MG, Briggs GAD (1984) Acoustic microscopy of elastic discontinuities. Proc R Soc Lond A 393:171
132. Quabis S, Dorn R, Eberler M, Glöckl O, Leuchs G (2000) Focusing light to a tighter spot. Opt Commun 179:1
133. Dorn R, Quabis S, Leuchs G (2003) Sharper focus for a radially polarized light beam. Phys Rev Lett 91(23):233901
134. Shoham A, Vander R, Lipson SG (2006) Production of radially and azimuthally polarized polychromatic beams. Opt Lett 31(23):3405
135. Sefat F, Denyer MCT, Youseffi M (2011) Imaging via widefield surface plasmon resonance microscope for studying bone cell interactions with micropatterned ECM proteins. J Microsc 241(3):282
136. Watanabe K, Matsuura K, Kawata F, Nagata K, Ning J, Kano H (2012) Scanning and non-scanning surface plasmon microscopy to observe cell adhesion sites. Biomed Opt Express 3(2):354
137. Moh KJ, Yuan XC, Bu J, Zhu SW, Gao BZ (2008) Surface plasmon resonance imaging of cell-substrate contacts with radially polarized beams. Opt Express 16(25):20734
138. Soon CF, Khaghani SA, Youseffi M, Nayan N, Saim H, Britland S, Blagden N, Denyer MCT (2013) Interfacial study of cell adhesion to liquid crystals using widefield surface plasmon resonance microscopy. Colloids Surf B 110:156
139. Peterson AW, Halter M, Tona A, Plant AL (2014) High resolution surface plasmon resonance imaging for single cells. BMC Cell Biol 15:35
140. Mahadi Abdul Jamil M, Denyer MCT, Youseffi M, Britland ST, Liu S, See CW, Somekh MG, Zhang J (2008) Imaging of the cell surface interface using objective coupled widefield surface plasmon microscopy. J Struct Biol 164:75
141. Wang Z, Ding H, Popescu G (2011) Scattering-phase theorem. Opt Lett 36(7):1215
142. Wang S, Xue L, Lai J, Li Z (2012) Three-dimensional refractive index reconstruction of red blood cells with one-dimensional moving based on local plane wave approximation. J Opt 14(6):065301
143. Toma K, Kano H, Offenhäusser A (2014) Label-free measurement of cell-electrode cleft gap distance with high spatial resolution surface plasmon microscopy. ACS Nano 8(12):12612
144. Streppa L, Berguiga L, Boyer-Provera E, Ratti F, Goillot E, Martinez-Torres C, Schaeffer L, Elezgaray J, Arneodo A, Argoul F (2016) In: Vo-Dinh T, Lakowicz JR, Ho HPAH, Ray K (eds) Plasmonics in biology and medicine XIII. SPIE proceedings, vol 9724. SPIE, Bellingham, WA, p 97240G
145. Popescu G, Park YK, Choi W, Dasari RR, Feld MS, Badizadegan K (2008) Imaging red blood cell dynamics by quantitative phase microscopy. Blood Cells Mol Dis 41(1):10
146. Park Y, Diez-Silva M, Popescu G, Lykotrafitis G, Choi W, Feld MS, Suresh S (2008) Refractive index maps and membrane dynamics of human red blood cells parasitized by Plasmodium falciparum. Proc Natl Acad Sci U S A 105(37):13730
147. Rappaz B, Barbul A, Hoffmann A, Boss D, Korenstein R, Depeursinge C, Magistretti PJ, Marquet P (2009) Spatial analysis of erythrocyte membrane fluctuations by digital holographic microscopy. Blood Cells Mol Dis 42(3):228
148. Lioubimov V, Kolomenskii A, Mershin A, Nanopoulos DV, Schuessler HA (2004) Effect of varying electric potential on surface-plasmon resonance sensing. Appl Optics 43(17):3426


149. Pendry J (2000) Negative refraction makes a perfect lens. Phys Rev Lett 85(18):3966
150. Ruppin R (2000) Surface polaritons of a left-handed medium. Phys Lett A 277(1):61
151. Shalaev VM, Cai W, Chettiar UK, Yuan HK, Sarychev AK, Drachev VP, Kildishev AV (2005) Negative index of refraction in optical metamaterials. Opt Lett 30(24):3356
152. Withayachumnankul W, Abbott D (2009) Metamaterials in the terahertz regime. IEEE Photonics J 1(2):99
153. Yao H, Zhong S (2014) High-mode spoof SPP of periodic metal grooves for ultra-sensitive terahertz sensing. Opt Express 22(21):25149
154. Tychinsky V (2009) The metabolic component of cellular refractivity and its importance for optical cytometry. J Biophotonics 2(8–9):494
155. Yu L, Mohanty S, Zhang J, Genc S, Kim MK, Berns MW, Chen Z (2009) Digital holographic microscopy for quantitative cell dynamic evaluation during laser microsurgery. Opt Express 17(14):12031
156. Bon P, Wattellier B, Monneret S (2012) Modeling quantitative phase image formation under tilted illuminations. Opt Lett 37(10):1718
157. Park Y, Best CA, Badizadegan K, Dasari RR, Feld MS, Kuriabova T, Henle ML, Levine AJ, Popescu G (2010) Measurement of red blood cell mechanics during morphological changes. Proc Natl Acad Sci U S A 107(15):6731
158. Elezgaray J, Berguiga L, Argoul F (2014) Plasmon-based tomographic microscopy. J Opt Soc Am A 31(1):155

Part IV Light Manipulation and Therapeutic Applications

Photodynamic Therapy

21

Wing-Ping Fong, Hing-Yuen Yeung, Pui-Chi Lo, and Dennis K. P. Ng

Contents
Introduction ..... 658
History of Photodynamic Therapy ..... 659
Photochemistry of Photodynamic Therapy ..... 660
  Photodynamic Reaction ..... 660
  Light Source ..... 661
  Photosensitizer ..... 662
Modified Photosensitizers for Targeted Therapy ..... 665
  Photosensitizers Linked to Targeting Molecules ..... 665
  Photosensitizers Activatable at Tumor Site ..... 667
  Photosensitizer Encapsulated into Nanoparticles ..... 667
Biological Effects of Photodynamic Therapy ..... 669
  Direct Cytotoxic Effect on Tumor Cells ..... 669
  Destruction of Tumor-Associated Blood Vessel ..... 672
  Induction of Antitumor Immunity ..... 673
Combination Therapy ..... 676
  PDT Combined with Cytotoxic Agents ..... 676
  PDT Combined with Anti-angiogenic Agents ..... 676
  PDT Combined with Immunoactive Agents ..... 677
Summary ..... 678
References ..... 678

W.-P. Fong (*) • H.-Y. Yeung School of Life Sciences, The Chinese University of Hong Kong, Hong Kong, China e-mail: [email protected] P.-C. Lo • D.K.P. Ng Department of Chemistry, The Chinese University of Hong Kong, Hong Kong, China e-mail: [email protected] # Springer Science+Business Media Dordrecht 2017 A.H.-P. Ho et al. (eds.), Handbook of Photonics for Biomedical Engineering, DOI 10.1007/978-94-007-5052-4_35

657

658

W.-P. Fong et al.

Abstract

Photodynamic therapy (PDT) is a clinically established treatment modality for a range of cancers. It utilizes the combined action of a photosensitizer, light, and molecular oxygen to generate reactive oxygen species (ROS), particularly singlet oxygen, to eradicate malignant cells and tissues. The therapeutic outcome depends largely on the performance of the photosensitizer. For cancer treatment, only a few PDT drugs, including porfimer sodium, temoporfin, and aminolevulinic acid, have been clinically approved, and they still suffer from a number of drawbacks. As a result, the development of new generations of photosensitizers that are more efficient and tumor selective, have a wider scope of action, and produce fewer side effects is under intensive investigation. In addition, various approaches have been actively explored to enhance the tumor-targeting properties of photosensitizers. It is commonly believed that PDT exerts its antitumor effects through three different biological mechanisms. Firstly, the ROS generated through the photosensitization process can trigger an apoptotic or necrotic response, leading to direct tumor cell death. Secondly, the photodynamic action can target the blood vessels so as to block the nutrient and oxygen supplies to the rapidly proliferating tumor cells. Finally, PDT can also enhance antitumor immunity, which is important not only in killing the tumor cells but also in preventing recurrence. The treatment efficacy of PDT can be further improved in combination therapy, where it is used together with drugs that are cytotoxic, anti-angiogenic, or immunogenic. This chapter aims to give an overview of the principle and development of this innovative approach to cancer treatment.

Keywords

Photodynamic therapy • Reactive oxygen species • Photosensitizer • Tumor targeting • Nanoparticle • Cell death • Anti-angiogenesis • Antitumor immunity • Antitumor vaccine • Combination therapy

Introduction

Light has long been used for medicinal purposes. Photodynamic therapy (PDT) has emerged as a promising treatment modality for a variety of premalignant and malignant diseases. It involves the combined use of three individually nontoxic components, viz., photosensitizer, light, and molecular oxygen, to produce a toxic effect. In the presence of light, the photosensitizer is activated and converts endogenous molecular oxygen into cytotoxic reactive oxygen species (ROS). The ROS generated react rapidly with biological substrates, leading to apoptotic or necrotic cell death. An ideal photosensitizer is nontoxic without illumination. Thus, with specific delivery of light, and preferably also of the photosensitizer, the toxic effect can be confined to a localized region. Such specificity makes PDT a promising approach for treating various diseases, including cancer [1–5].
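The light delivered in PDT is commonly quantified as a fluence (energy per unit area), the product of irradiance and exposure time. The short sketch below makes this arithmetic concrete; all numerical values (wavelength, irradiance, exposure time) are purely illustrative assumptions, not a clinical protocol:

```python
import math

# Illustrative numbers only (not a clinical protocol)
wavelength_m = 630e-9        # red light, a region often used to activate porphyrins
irradiance_W_cm2 = 0.1       # assumed irradiance: 100 mW/cm^2
exposure_s = 1000.0          # assumed illumination time: ~17 min

h = 6.626e-34                # Planck constant (J s)
c = 2.998e8                  # speed of light (m/s)
photon_energy_J = h * c / wavelength_m          # energy carried by one photon

fluence_J_cm2 = irradiance_W_cm2 * exposure_s   # delivered light dose (J/cm^2)
photons_per_cm2 = fluence_J_cm2 / photon_energy_J
```

With these assumed values the delivered dose is 100 J/cm², corresponding to roughly 3 × 10²⁰ photons per cm², each of which can in principle excite a photosensitizer molecule and drive the ROS-generating photochemistry described above.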


Cancer is the greatest threat to human health in modern society, despite many significant scientific and technological breakthroughs. Classical cancer therapies such as surgical removal, radiotherapy, and chemotherapy are still widely used, but their invasiveness and low specificity are serious drawbacks. Only in the late 1990s did targeted therapies become clinically available. Compared with the classical therapies, PDT is relatively noninvasive and offers fewer side effects, higher tolerance of repeated doses, and higher specificity, which can be achieved through precise delivery of light. To achieve a desirable therapeutic outcome, the efficacy of the photosensitizer, for example, its efficiency in generating ROS and its selectivity for tumor cells, is most important. Although PDT appears to be a promising approach, only a few photosensitizers, for example, porfimer sodium, temoporfin, and aminolevulinic acid, have been clinically approved for different oncological conditions. Unfortunately, these drugs still have some deficiencies, such as weak absorption of tissue-penetrating red light, sustained skin photosensitivity, and low initial selectivity, among others. Thus, throughout the years, optimization of their photophysical and biological characteristics, as well as the development of novel photosensitizers with improved properties, particularly those that are tumor targeting, has been the major focus of PDT research.

History of Photodynamic Therapy
Light has been used for thousands of years to treat diseases. In ancient China, Egypt, and India, it was used for different skin problems. The importance of phototherapy was fully recognized in 1903 when the Nobel Prize in Physiology or Medicine was awarded to Finsen in "recognition of his contribution to the treatment of diseases, especially lupus vulgaris, with concentrated light radiation, whereby he has opened a new avenue for medical science." Although light was also used together with special chemicals to treat skin conditions in the past, formal recognition of photodynamic activity came only about a hundred years ago [2, 6]. Raab was the first to exploit the interaction between light and the fluorescent compound acridine to exert a cytotoxic effect on Paramecium. Later, von Tappeiner successfully used topical eosin and white light to treat skin tumors and coined the term "photodynamic action." Since then, there has been extensive research on photosensitizers. Most of the studies focused on the use of hematoporphyrin and its derivatives, first for tumor detection and subsequently for tumor treatment. In the 1970s, Diamond showed that hematoporphyrin can be used as a photosensitizing agent to kill rat glioma cells both in vitro and in vivo. Later, Dougherty reported the first successful, large-scale clinical application of PDT, using hematoporphyrin and red light to treat skin cancer in human patients. More and more promising results were obtained. Finally, in 1993, Photofrin (porfimer sodium), a derivative of hematoporphyrin, was approved in Canada as the first drug for PDT in the treatment of bladder cancer. Two years later, it was also

660

W.-P. Fong et al.

approved in the United States of America for use in esophageal cancer. Over the next two decades, a number of other photosensitizers received approval from regulatory authorities in various countries for various malignant conditions. The list includes Levulan (5-aminolevulinic acid, ALA), Metvix (methyl aminolevulinate), Foscan (meta-tetra(hydroxyphenyl)chlorin, m-THPC), and Verteporfin (benzoporphyrin derivative monoacid ring A). A number of other photosensitizers are currently under scientific investigation or in clinical trials. It is envisioned that more drugs will become available in the near future. In addition to its wide application in multiple types of cancer, including skin, esophageal, lung, colon, head and neck, digestive system, prostate, and bladder cancers, PDT can also be used in skin conditions like acne and psoriasis, in age-related macular degeneration, and as antibacterial therapy in infectious diseases.

Photochemistry of Photodynamic Therapy
Photodynamic Reaction
The three components of PDT are all nontoxic individually. However, illumination of the photosensitizer will lead to the production of toxic ROS (Fig. 1). Upon absorption of light of an appropriate wavelength, the photosensitizer is excited from the stable ground state to a transient, excited singlet state. The singlet state photosensitizer can return to the ground state by emitting fluorescence; this property makes the photosensitizer a good diagnostic tool for superficial cancers. Alternatively, the singlet state photosensitizer can be converted by intersystem crossing to the relatively more stable triplet state. Two photodynamic reactions can occur before the triplet state photosensitizer returns to the ground state. Type I reaction involves the removal of proton(s) from, or transfer of electron(s) to, nearby molecules, for example, protein, fatty acid, or water. This process generates different free radicals which react with molecular oxygen to produce different ROS, including superoxide, hydroperoxyl and hydroxyl radicals, and hydrogen peroxide. In contrast, type II reaction involves the direct transfer of energy from the excited triplet photosensitizer to the ground state triplet oxygen. As a result, reactive singlet oxygen is generated. While the two photodynamic reactions can occur simultaneously, for most photosensitizers type II reaction predominates, making singlet oxygen the major type of ROS generated during PDT [1, 5]. The ROS generated are extremely unstable. For example, singlet oxygen has a short lifespan of about 0.04 μs. Thus, it has only a very short effective reaction range (about 0.02 μm) from its site of formation. The subcellular localization of the photosensitizer will therefore determine the organelles primarily damaged by the treatment and, subsequently, the biochemical pathway and process involved [4].

Fig. 1 A modified Jablonski diagram showing the photosensitization process
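The quoted reaction range is consistent with a simple three-dimensional diffusion estimate, r ≈ √(6Dτ). A minimal sketch of that arithmetic, assuming a literature-typical diffusion coefficient for oxygen in a cell-like medium (D ≈ 2 × 10⁻⁹ m²/s, a value not stated in the text):

```python
import math

# Estimate the effective reaction range of singlet oxygen from its lifetime.
# D is an assumed, literature-typical diffusion coefficient for oxygen in a
# cell-like medium; tau is the ~0.04 us lifetime quoted in the text.
D = 2e-9        # m^2/s, assumed diffusion coefficient
tau = 0.04e-6   # s, singlet oxygen lifetime from the text

# 3D root-mean-square diffusion distance: r = sqrt(6 * D * tau)
r = math.sqrt(6 * D * tau)
print(f"rms diffusion range: {r * 1e6:.3f} um")  # ~0.02 um, as in the text
```

With these assumed inputs the estimate lands near the 0.02 μm reaction range cited above, which is why ROS damage is confined to the organelle hosting the photosensitizer.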

Light Source
A suitable light source is critical in making the photosensitizer toxic in the right place and at the right time. Several types of PDT light source are available, including broadband lamps, light-emitting diode lamps, and lasers. Among them, lasers are most frequently used. Laser light is monochromatic, coherent, and collimated. These properties allow a narrow beam of light with high intensity, which can be transmitted into target tissue with great precision. The fact that a laser can be focused onto a tiny spot contributes to the specificity of PDT: only the photosensitizer present in the illuminated tumor site is activated, whereas the photosensitizer in nearby non-illuminated normal tissue remains inactive and causes no adverse side effects. Owing to their accessibility to light, dermatological malignancies are most conveniently treated by PDT. For internal cancers, light delivery to the target area is more challenging. Nevertheless, with the development of optimal fiber-optic delivery devices, for example, a fiber-optic cable inside an endoscope, it is now possible for laser light to be directed to cavities or areas inside the body, and hence PDT is also useful in treating esophageal, lung, stomach, and bladder cancers [5, 7]. The wavelength of the applied light should match the absorption peak of the photosensitizer so as to achieve adequate activation. For clinical usage in PDT, laser light with a wavelength between 650 and 850 nm is most appropriate. At longer wavelengths (>850 nm), the excited photosensitizer does not have sufficient energy to excite oxygen and produce ROS. In contrast, at shorter wavelengths (<650 nm), light cannot penetrate deeply into tissue.

When the water was moved at speeds above 10 μm/s in both transverse directions, the bacterium remained trapped. During the experiment, the trapped bacterium remained alive, as it was observed to continuously wiggle inside the trap. The lowest optical power needed to ensure a stable 3D trap was ~0.8 mW.
When the power was decreased further, the 3D trap became unstable, but we found that the bacteria could still be trapped in two dimensions on the cover glass. At a power of 0.9 mW, stable 3D trapping of bacteria

Fig. 17 Fluorescence images of a 520-nm fluorescent polystyrene bead trapped in 3D (labeled "T") by the fiber-based SP lens (Media 1) [26]. A reference bead (labeled "R") is attached to the cover glass. The next movement is labeled at the bottom of each image. With the bead trapped, the cover glass was moved along +y (a, b), then −x and −y (b, c). The fiber as well as the trap was then lifted up along +z (c, d), followed by cover glass movements along +y (d, e) and −y (e, f). The bead remained trapped during these movements and could be seen when the focal plane was lifted up to the trap level (f, g). (h) The schematic of the experimental setup. The optical power at the fiber end face was 1.5 mW

710

Y. Liu and M. Yu

Fig. 18 (Color online) Images of 3D trapping of a bacterium with the fiber SP lens (Media 2) [26]. The white arrows indicate the bacterium, and the black arrows point to a reference silica bead. (a, b) A free bacterium was trapped. (c–e) The bacterium was lifted up in the vertical direction while the focal plane was on the cover glass. (e, f) The focal plane was brought to the plane where the bacterium was located, while the reference bead became out of focus. (f–h) The water was moved along +y and +x while the bacterium remained trapped. The optical power at the fiber end face was 0.9 mW

for several hours was achieved. Long-term, stable trapping of biological specimens without physical contact and photodamage is especially useful in investigation techniques requiring a long acquisition time, such as Raman spectroscopy [46]. It is noted that trapping live bacteria is more challenging than trapping polystyrene or silica beads. This is due to the fact that bacteria have a low refractive index (1.38 over the visible wavelength band, compared to 1.58 ~ 1.60 for polystyrene beads and 1.45 for silica beads), small dimensions, and fast motility. Moreover, live bacteria have the ability to propel themselves. Therefore, trapping a live bacterium requires much higher optical power than trapping a dead one [47]. To enable a stable 3D trap for bacteria, the lowest optical power used with conventional optical tweezers is a couple of mW (3 ~ 6 mW in Ref. [47] and 6 mW in Ref. [48]). The fact that a much smaller optical power was used in this work suggests that the SP lens-based fiber tweezers have better trapping efficiency than the reported conventional optical tweezers. It should also be noted that we did not observe obvious convection or thermophoresis as reported in [41] when the polystyrene beads or bacteria were trapped more than 20 μm above the cover glass. This implies that the heating effect due to the SP lens absorption does not significantly affect the trap. This can be explained by the fact that the trap was located away from the fiber end face where the heat was generated. However, this may no longer hold when the fiber lens is brought much closer to the cover glass.

The two trapped particles are separated by a distance R much larger than the particle radius a (R ≫ a), such that the particle in the stationary trap does not affect the motion of the particle in the oscillatory trap except through the coupling via the medium in between, as is prescribed by G*12(ω).
This weak-coupling approximation is justified by the assumption that the amplitude of the oscillation of the particle in the stationary trap is so small that its feedback effect on the other particle is a higher-order effect which can be neglected. Thus, the last term on the right-hand side of Eq. 7a can be ignored, and the local complex shear modulus becomes

G_{11}^{*}(\omega) = \frac{k_{OT}}{6\pi a}\left[\frac{A}{x_{1}(\omega)} - 1\right] \qquad (8)

The above expression for the local mechanical properties around the probe particle is identical to the corresponding expressions for the storage and loss moduli given in the previous section. The nonlocal complex modulus is given by

G_{12}^{*}(\omega) = \frac{k_{OT}}{4\pi R}\left[\frac{A}{x_{2}(\omega)} + \frac{x_{1}^{2}(\omega)}{A\,x_{2}(\omega)} - \frac{2 x_{1}(\omega)}{x_{2}(\omega)}\right] \qquad (9)

where x_{1}(\omega) = D_{1}(\omega)e^{-i\delta_{1}(\omega)} and x_{2}(\omega) = D_{2}(\omega)e^{-i\delta_{2}(\omega)} are the displacements of the particles in the oscillatory optical tweezers and the stationary optical tweezers, respectively. The mechanical properties of the medium between the two particles can thus be determined by measuring the motions of the particles, x_{1}(\omega) and x_{2}(\omega). Explicitly, the nonlocal storage and loss moduli are given by

G_{12}'(\omega) = \frac{k_{OT}}{4\pi R}\left[\frac{A\cos\delta_{2}(\omega)}{D_{2}(\omega)} + \frac{D_{1}^{2}(\omega)\cos\left(\delta_{2}(\omega) - 2\delta_{1}(\omega)\right)}{A\,D_{2}(\omega)} - \frac{2 D_{1}(\omega)\cos\left(\delta_{2}(\omega) - \delta_{1}(\omega)\right)}{D_{2}(\omega)}\right] \qquad (10a)

G_{12}''(\omega) = \frac{k_{OT}}{4\pi R}\left[\frac{A\sin\delta_{2}(\omega)}{D_{2}(\omega)} + \frac{D_{1}^{2}(\omega)\sin\left(\delta_{2}(\omega) - 2\delta_{1}(\omega)\right)}{A\,D_{2}(\omega)} - \frac{2 D_{1}(\omega)\sin\left(\delta_{2}(\omega) - \delta_{1}(\omega)\right)}{D_{2}(\omega)}\right] \qquad (10b)
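A minimal numerical sketch of Eqs. 8–10, using illustrative amplitudes and phases (not data from this chapter), which also checks that the complex form of Eq. 9 reproduces the component expressions of Eqs. 10a and 10b:

```python
import numpy as np

# Sketch of Eqs. 8-10 with illustrative values (not measured data).
k_OT = 0.01        # dyne/cm, optical trap force constant (order used in text)
a = 0.75e-4        # cm, probe particle radius (1.5 um diameter bead)
R = 10e-4          # cm, separation between the two probe particles
A = 1.0e-4         # cm, drive amplitude of the oscillatory trap

D1, delta1 = 0.6e-4, 0.4   # amplitude (cm) and phase lag (rad), particle 1
D2, delta2 = 0.1e-4, 0.9   # amplitude (cm) and phase lag (rad), particle 2

# Complex displacements x_j = D_j * exp(-i * delta_j)
x1 = D1 * np.exp(-1j * delta1)
x2 = D2 * np.exp(-1j * delta2)

# Eq. 8: local complex shear modulus around particle 1
G11 = (k_OT / (6 * np.pi * a)) * (A / x1 - 1)

# Eq. 9: nonlocal complex shear modulus between the two particles
G12 = (k_OT / (4 * np.pi * R)) * (A / x2 + x1**2 / (A * x2) - 2 * x1 / x2)

# Eqs. 10a/10b: explicit component forms, used here as a cross-check
G12_p = (k_OT / (4 * np.pi * R)) * (
    A * np.cos(delta2) / D2
    + D1**2 * np.cos(delta2 - 2 * delta1) / (A * D2)
    - 2 * D1 * np.cos(delta2 - delta1) / D2)
G12_pp = (k_OT / (4 * np.pi * R)) * (
    A * np.sin(delta2) / D2
    + D1**2 * np.sin(delta2 - 2 * delta1) / (A * D2)
    - 2 * D1 * np.sin(delta2 - delta1) / D2)

assert np.isclose(G12.real, G12_p) and np.isclose(G12.imag, G12_pp)
print(G11, G12)
```

The consistency check passes because the real and imaginary parts of each term in Eq. 9 are exactly the cosine and sine terms of Eqs. 10a and 10b.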

Microrheology of Polymer Solutions
In this section, we consider the mechanical properties of polymer solutions measured by optical tweezers. This approach allows us to measure the storage and the loss moduli in the range of approximately 10⁻¹ to 10⁴ dyne/cm², for an optical trapping

740

M.-T. Wei et al.

force constant kOT around 0.01 dyne/cm. In section “Single-Particle Measurements in a Viscoelastic Medium,” we consider the single-particle active microrheology of homogeneous polymer solution. The results are consistent with the corresponding bulk mechanical properties measured by a conventional rheometer and the micromechanical properties obtained from passive microrheological approach [84]. However, the experimental results of the single-particle microrheology of inhomogeneous soft materials differ from the corresponding macroscopic mechanical properties. In section “Two-Particle Measurements in a Viscoelastic Medium,” we consider the application of two-particle active microrheology to study the effects of microscopic heterogeneous mechanical properties of a polymer solution between two probe particles.

Single-Particle Measurements in a Viscoelastic Medium
In this section, we present, as a specific example, the application of oscillatory optical tweezers to study the micromechanical properties of an aqueous solution of polyethylene oxide (PEO; Mw = 100 kg/mol). The results are in good agreement with the bulk mechanical properties measured by Dasgupta et al. [85]. For a 20 wt% solution of PEO in water, the average mesh size of the polymer network is much smaller than the size of the probe particle (1.5 μm diameter polystyrene spheres). Thus, the intrinsic inhomogeneity of the network should not affect the micromechanical properties. Although PEO has been shown to adsorb onto the surface of polystyrene particles, adsorption is not an issue because the thickness of the adsorbed polymer layer has been determined to be approximately 24 nm [86]. The data of microrheological studies reported by Dasgupta et al. [85] indicate that the frequency dependence of the viscoelasticity of the polymer solution agrees with their macroscopic measurements for all surface treatments and particle sizes. Figure 5 shows a comparison of the mechanical properties of a 20 wt% solution of PEO in water as measured by the active oscillatory-optical-tweezers approach versus the corresponding results obtained by the passive particle-tracking approach (without optical tweezers). In the active approach, the complex shear modulus can be measured directly as prescribed by Eq. 6; in the passive approach, the complex shear modulus can be determined by tracking the Brownian motion and using the fluctuation-dissipation theorem (FDT) [87, 88] or the generalized Stokes-Einstein relation (GSER) [89]. According to the fluctuation-dissipation theorem, the imaginary part of the complex response function α″(ω) can be written as

\alpha''(\omega) = \frac{\omega}{2 k_{B} T}\, C(\omega) \qquad (11)

where C(ω) is the power spectral density of the particle displacement fluctuations, kB is the Boltzmann constant, and T is the absolute temperature. The imaginary part of the complex response function α″(ω) deduced from Eq. 11 agrees

24

Optical-Tweezers-Based Microrheology of Soft Materials and Living Cells

741

Fig. 5 The experimental results of the storage modulus G′ (solid symbols) and the loss modulus G″ (open symbols) as a function of frequency obtained by the active oscillatory-optical-tweezers approach (□) and the passive particle-tracking approach (solid lines) with a 1.5 μm diameter polystyrene particle suspended in a 20 wt% 100 kg/mol PEO solution. The dotted lines represent G′(ω) and G″(ω) based on the Maxwell model. The inset shows the imaginary part of the compliance function α″(ω) measured by the passive particle-tracking approach (without optical tweezers; solid line) and by the active oscillatory approach (squares)

with the corresponding results obtained by the active approach in this equilibrium system. According to the Kramers-Kronig relation (KKR), the real part of the compliance function α′(ω) can be expressed as

\alpha'(\omega) = \frac{2}{\pi}\, P \int_{0}^{\infty} \frac{\xi\,\alpha''(\xi)}{\xi^{2} - \omega^{2}}\, d\xi = \frac{2}{\pi} \int_{0}^{\infty} \cos(\omega t)\, dt \int_{0}^{\infty} \alpha''(\xi)\,\sin(\xi t)\, d\xi \qquad (12)

The reciprocal of α*(ω) gives the complex shear modulus G*(ω) = G′(ω) + iG″(ω), where G′(ω) and G″(ω) are given by

G'(\omega) = \frac{1}{6\pi a}\, \frac{\alpha'(\omega)}{\alpha'(\omega)^{2} + \alpha''(\omega)^{2}} \qquad (13a)

G''(\omega) = \frac{1}{6\pi a}\, \frac{\alpha''(\omega)}{\alpha'(\omega)^{2} + \alpha''(\omega)^{2}} \qquad (13b)
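The passive route can be sketched end to end with an assumed model system, a bead in a harmonic trap in a purely viscous fluid, for which the position power spectrum and the response function are both known analytically (all parameter values below are illustrative, not the PEO data):

```python
import numpy as np

# Passive-route sketch: Eq. 11 applied to the analytic (two-sided) position
# power spectrum of a bead in a harmonic trap, then Eqs. 13a/13b to convert
# the response function into moduli.  All parameters are assumed values.
kB_T = 4.1e-14        # erg, k_B*T at room temperature
k = 1.0e-2            # dyne/cm, trap stiffness (assumed)
gamma = 1.4e-5        # dyne*s/cm, Stokes drag 6*pi*eta*a (assumed)
a = 0.75e-4           # cm, bead radius

w = np.logspace(0, 4, 50)                           # rad/s
C = 2 * kB_T * gamma / (k**2 + gamma**2 * w**2)     # position PSD, C(w)

# Eq. 11: imaginary part of the response function from the PSD
alpha_im = w * C / (2 * kB_T)
assert np.allclose(alpha_im, gamma * w / (k**2 + gamma**2 * w**2))

# Known real part for this model (what the KKR of Eq. 12 would return)
alpha_re = k / (k**2 + gamma**2 * w**2)

# Eqs. 13a/13b: storage and loss moduli from alpha' and alpha''
denom = 6 * np.pi * a * (alpha_re**2 + alpha_im**2)
G_p, G_pp = alpha_re / denom, alpha_im / denom

# Consistency check: |G*| = 1 / (6*pi*a*|alpha*|)
assert np.allclose((G_p**2 + G_pp**2) * (alpha_re**2 + alpha_im**2)
                   * (6 * np.pi * a)**2, 1.0)
```

In an experiment, C(ω) would come from the tracked trajectory rather than a closed-form spectrum, and α′ would come from the numerical KKR integral of Eq. 12.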

The loss modulus G″(ω) obtained by the active and passive approaches agrees well. However, the storage modulus G′(ω) measured by the active approach does not agree with that measured by the passive approach in the high- and low-frequency regimes, because the lower and upper bounds of the frequency range of the integral in Eq. 12 are replaced by finite values (6 rad/s and 6,000 rad/s, respectively). In Fig. 5, the results indicate that the microrheological properties of semi-dilute PEO solutions agree with the bulk mechanical properties [85] within the frequency range of 6–100 rad/s accessible by both techniques [77]. For angular frequencies in


the range of 100–6,000 rad/s, a comparison can be made between the micromechanical properties determined by the oscillatory optical tweezers and by dynamic light scattering [85]. The results in Fig. 5 indicate that the polymer solution has liquid-like behavior (G″ > G′) in the lower-frequency regime and solid-like behavior (G′ > G″) in the higher-frequency regime. The experimental results of the mechanical properties as a function of frequency can be fitted to the Maxwell model, represented by a purely viscous damper and a purely elastic spring connected in series:

G'(\omega) + i G''(\omega) = G_{\infty}\,\frac{\tau^{2}\omega^{2} + i\tau\omega}{1 + \tau^{2}\omega^{2}} \qquad (14)

with two adjustable parameters τ and G∞, where τ is the relaxation time of the system and G∞ is the plateau modulus. According to the theory of rubber elasticity [90], G∞ = νkBT, where ν is the number of elastically active chains in the network per unit volume. The rheology is well described by the Maxwell model [91, 92], with fitting parameters G∞ = 10,578 dyne/cm² and τ = 1 ms (the dotted lines in Fig. 5).

Two-Particle Measurements in a Viscoelastic Medium
Single-particle microrheology can be extended to two-particle microrheology to investigate the mechanical inhomogeneities of soft materials at length scales comparable to the distance between the probes. Two-particle microrheology contributes particularly to the understanding of the mechanical properties of biological materials [35, 93]. In this section, we present the micromechanical properties of a 20 wt% polyethylene oxide solution (Mw = 100 kg/mol) surrounding a single probe particle and between two particles. As noted earlier, the PEO solution is homogeneous on length scales comparable to the size (1.5 μm diameter) of the probe particle, meaning that the local mechanical properties of the medium surrounding the probe particle are comparable to the bulk mechanical properties. Figure 6 shows the nonlocal storage modulus and loss modulus at several length scales (i.e., distances between the two particles). The nonlocal mechanical properties probed by the two-particle microrheological approach agree reasonably well with the results obtained by the single-particle microrheological approach, as well as with the bulk viscoelastic properties over the accessible frequency range of the bulk rheometer. In conclusion, since the two-particle microrheology technique allows the distance between the two particles to be varied systematically, the nonlocal micromechanical properties of the medium between two probe particles can be compared with the local micromechanical properties surrounding a probe particle to study the homogeneity of the medium; for homogeneous polymer solutions, the local and nonlocal micromechanical properties agree well with the bulk mechanical properties, as expected. For systems in mechanical equilibrium, good agreement was achieved in the measurement of the imaginary part of the response function α″(ω) by active (single- and two-particle) and passive approaches.
We will discuss the nonequilibrium system in the next section.


Fig. 6 A comparison between (a) the local storage modulus G′11(ω) and the nonlocal storage modulus G′12(ω) and (b) the local loss modulus G″11(ω) and the nonlocal loss modulus G″12(ω), at several particle distances for a 20 wt% solution of PEO. In the legends in the lower right, "R" is the distance between the two probe particles and "a" is the particle radius

Microrheology of Living Cells
Microrheology of living cells can be used to gain insight into the inhomogeneous structure and dynamics of the cytoskeleton [56, 94–96]. As a living polymer network, the cytoskeleton is constantly polymerizing and depolymerizing, with activity that depends on the biological functions of the cell [97]. It is known that the cytoskeleton maintains the cell shape and regulates cellular mechanics, but there is also evidence indicating that the cytoskeletal network may contribute to other important cellular functions. It has been shown that the cytoskeleton is directly connected to the nucleus and that external shear stress stimuli [98] can lead to cytoskeletal reorganization as well as modulation of gene expression in cells [99]. It is also well known that the differentiation of stem cells depends on the attachment of the cell to a substrate [100–105]. Knowledge of the role of mechanical forces in cellular signaling pathways, and quantification of how the transmission of mechanical forces through the cytoskeleton affects the micromechanical properties, can provide a better understanding of the complex system of cellular signaling [99].


Comparative Study of Extracellular and Intracellular Microrheology
The ability to measure mechanical properties at the subcellular level is important for the study of mechanotransduction. Rotational optical tweezers [106], which often require a spherical birefringent probe particle, enable highly localized measurements because the probe particle does not change position in the surrounding medium. In contrast, oscillatory optical tweezers, which do not require probe particles to be birefringent, enable the trapping of an endogenous intracellular organelle as a probe to measure the intracellular microrheological properties. In this section, we present a comparison of the measurements of cellular mechanical properties using a probe particle located exterior to the plasma membrane and an intracellular probe endogenous to the cell, as shown in Fig. 7, where a micron-sized endogenous intracellular organelle is shown on the right and an extracellular 1.5 μm silica particle, attached to the cytoskeleton through transmembrane integrin receptors, is shown on the left on top of the cell membrane [29, 30]. Figure 8a, b shows microrheology data obtained by extracellular and intracellular probes, respectively. Both the storage modulus (G′) and the magnitude of the complex shear

Fig. 7 A sketch of an oscillatory optical-tweezers-based cytorheometer with an intracellular granular structure (lamellar body, right circle) or an extracellular antibody-coated particle (left circle)


Fig. 8 The storage modulus G′ (solid symbols) and the loss modulus G″ (open symbols) of cells, as a function of frequency (ω), probed with (a) an anti-integrin-conjugated silica particle attached to the plasma membrane and (b) an intracellular organelle. In both (a) and (b), the dashed line represents a power-law fit to G′ (note: G″(ω) does not follow the power law)


modulus (|G*|) followed a weak power-law dependence on frequency. This behavior has been attributed to a distribution of relaxation times in the soft material. Fabry et al. interpreted the mechanical properties of cells in terms of soft glassy materials, in which disorder and metastability may be essential features underlying the cell's mechanical functions [107]. The exponents of the power-law dependence of the data from the intra- and extracellular measurements are similar; however, the differences in the magnitudes of the moduli from the two measurements are statistically significant. It is possible that the larger moduli measured with the external particles are partly due to the extensional stiffness or other mechanical properties of the plasma membrane. Although using an intracellular organelle as a probe provides a direct measurement of the intracellular local mechanical properties, the optical force constant kOT of the optical tweezers must be determined by assuming that the indices of refraction of the probe and the surrounding material are known, which leads to uncertainty in the measured mechanical properties.
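Extracting the power-law exponent from data like that of Fig. 8 amounts to a straight-line fit in log-log coordinates. A minimal sketch with synthetic data (the exponent 0.2 and prefactor are illustrative assumptions, not the chapter's fitted values):

```python
import numpy as np

# Sketch: fitting the weak power law G'(f) ~ f**beta reported for cells.
# The data points are synthetic, generated with beta = 0.2 plus noise,
# purely to illustrate the fitting step.
rng = np.random.default_rng(0)
freq = np.logspace(0, 3, 20)                     # Hz
beta_true = 0.2
G_p = 500.0 * freq**beta_true * rng.lognormal(0.0, 0.05, freq.size)

# A power law is a straight line in log-log coordinates
beta_fit, log_prefactor = np.polyfit(np.log(freq), np.log(G_p), 1)
print(f"fitted exponent beta = {beta_fit:.2f}")
```

For real measurements, only G′ would be fitted this way; as the Fig. 8 caption notes, G″(ω) does not follow the power law.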

Comparative Study of Active and Passive Cellular Microrheologies
Cells generate and react to forces through a nonequilibrium network of cytoskeletal filaments and motor proteins [108, 109]. The mechanical properties of active cytoskeletal network systems can change due to intracellular tension created by the active motors [35, 110–117]. The motor activity captures the essence of how nonequilibrium systems [118] may arise in living cells [33, 119]. Nonequilibrium mechanical behavior has been observed in an in vitro model system consisting of an actin network with embedded myosin motors [34] and has also been observed in living cells [32, 33]. In this section, we discuss the nonequilibrium mechanical system using a combination of active and passive microrheological approaches to characterize intracellular forces and mechanical properties in biological systems. Cellular mechanical properties influenced by active motor proteins are investigated by comparing the data obtained by the active and passive microrheological approaches against the predictions of the fluctuation-dissipation theorem. For active microrheology (AMR), the mechanical properties can be directly determined from the experimentally measured particle displacement magnitude and phase shift (Eq. 6) [29]. By using the fluctuation-dissipation theorem (Eq. 11), the theoretical value of the thermal fluctuations (Cthermal) can be determined from 2kBTα″/ω. For passive microrheology (PMR), the fluctuations of a probe particle in a living cell were tracked by a CCD camera. In an equilibrium system, where only thermal forces act on the probe, the power spectral density of the displacement fluctuations is directly related to the estimated value of the thermal fluctuations (2kBTα″/ω) measured by AMR.
Violation of the fluctuation-dissipation theorem is quantified as the ratio of the experimental power spectrum (C) measured by passive microrheology to the thermal fluctuations (2kBTα″/ω) determined by active microrheology. The measurements using an endosome [120] and an engulfed micron-sized polystyrene particle as a probe (Wei et al.) are shown in Fig. 9a, b, respectively. In previous studies, this ratio


Fig. 9 Experimental results of the fluctuation-dissipation theorem violation, the ratio of the measured fluctuation spectrum C to the thermal fluctuations (2kBTα″/ω), as a function of frequency, probed with (a) an endosome and (b) an engulfed microparticle

was also defined as the ratio of an effective temperature of the nonequilibrium system to the bath temperature [32, 33]. At frequencies lower than 10 Hz, the ratios obtained via an engulfed microparticle are much smaller than the corresponding values obtained via an endosome [95], presumably due to the different nonthermal fluctuations induced by different molecular motors (i.e., actin motors vs. microtubule motors). At frequencies higher than 10 Hz, the AMR and PMR results agree, indicating the frequency limit of the nonequilibrium dynamics caused by molecular motors. These results are qualitatively consistent with previous studies with either a probe particle attached to a cellular cortex [32, 93] or a probe particle bound inside a cell [33]. A "nonthermal force" spectrum ⟨fα²⟩ [35], caused by active driving forces (e.g., motor activities), can be obtained from the extra fluctuations, i.e., the difference between the total fluctuation spectrum C measured by PMR and the thermal fluctuations (2kBTα″/ω) estimated by AMR:

\langle f_{\alpha}^{2} \rangle = C - C_{\mathrm{thermal}} = C - \frac{2 k_{B} T\, \alpha''}{\omega} \qquad (15)

The cellular nonthermal forces, using either a probe particle attached to a cellular cortex [32, 93] or a probe particle bound inside a cell [33], can be studied by both local and nonlocal microrheology [33]. Compared with previous reports [32, 33, 93], as shown in Table 1, the measurements using an engulfed particle (1 μm-diameter polystyrene particle) as a probe show that the intracellular force is smaller than the tension on the cellular cortex. This result indicates that intracellular motors might be weaker or less active than motors on the cellular cortex. Whereas extracellular probes attached to the cytoskeleton provide measurements of global cell mechanical properties, intracellular probes provide direct measurements of intracellular mechanical properties. The latter may be more useful in investigating the microrheology of intracellular heterogeneity and temporal fluctuations. Comparisons of passive and active intracellular microrheologies allow thermal and nonthermal fluctuations to be distinguished in a nonequilibrium system.


Table 1 A comparison of the measurements of cellular nonthermal forces

Studying the intracellular nonthermal forces and mechanical properties would help advance our understanding of how cells sense and respond to their mechanical environment, informing new designs of biomaterials and the study of diseases linked to cellular mechanotransduction [99, 121–125].

Summary and Conclusions
This chapter describes several experiments that use the techniques of oscillatory optical tweezers for the determination of the viscoelasticity of mechanical systems with complex shear moduli ranging from 10⁻¹ to 10⁴ dyne/cm² over a wide frequency range (10⁻¹ < ω < 10³ rad/s). The measurements of the micromechanical properties of semi-dilute polymer solutions illustrate that the techniques lead to results consistent with bulk properties when the length scale of the inhomogeneity intrinsic to the polymer network is much smaller than the size of the probe particle. The results are also in good agreement with measurements by the passive approach for equilibrium mechanical systems. The two-particle technique allows us to study the microscopic inhomogeneous mechanical properties at length scales of the distance between the probe particles. Microrheology inside living cells, using an engulfed microparticle or an endosome as a probe, demonstrates the possibility of investigating intracellular heterogeneity and temporal fluctuations of viscoelasticity, as well as nonthermal forces in nonequilibrium mechanical systems in living cells, from which important biomedical implications can be expected.

References
1. Ashkin A (1970) Acceleration and trapping of particles by radiation pressure. Phys Rev Lett 24(4):156–159
2. Ashkin A, Dziedzic J, Bjorkholm J, Chu S (1986) Observation of a single-beam gradient force optical trap for dielectric particles. Opt Lett 11(5):288–290
3. Ashkin A (1997) Optical trapping and manipulation of neutral particles using lasers. Proc Natl Acad Sci U S A 94:4853–4860
4. Svoboda K, Schmidt CF, Schnapp BJ, Block SM (1993) Direct observation of kinesin stepping by optical trapping interferometry. Nature 365:721–727
5. Lien C-H, Wei M-T, Tseng T-Y, Lee C-D, Wang C, Wang T-F, Ou-Yang HD, Chiou A (2009) Probing the dynamic differential stiffness of dsDNA interacting with RecA in the enthalpic regime. Opt Express 17(22):20376–20385
6. Ashkin A, Dziedzic JM (1987) Optical trapping and manipulation of single cells using infrared laser beams. Nature 330:769–771
7. Svoboda K, Schmidt CF, Branton D, Block SM (1992) Conformation and elasticity of the isolated red blood cell membrane skeleton. Biophys J 63:784–793
8. Liu S-L, Karmenyan A, Wei M-T, Huang C-C, Lin C-H, Chiou A (2007) Optical forced oscillation for the study of lectin-glycoprotein interaction at the cellular membrane of a Chinese hamster ovary cell. Opt Express 15(5):2713–2723
9. Wei M-T, Hua K-F, Hsu J, Karmenyan A, Tseng K-Y, Wong C-H, Hsu H-Y, Chiou A (2007) The interaction of lipopolysaccharide with membrane receptors on macrophages pretreated with extract of Reishi polysaccharides measured by optical tweezers. Opt Express 15(17):11020–11032
10. Stout AL (2001) Detection and characterization of individual intermolecular bonds using optical tweezers. Biophys J 80:2976–2986
11. Meiners J-C, Quake SR (1999) Direct measurement of hydrodynamic cross correlations between two particles in an external potential. Phys Rev Lett 82(10):2211–2214
12. Hough LA, Ou-Yang HD (2002) Correlated motions of two hydrodynamically coupled particles confined in separate quadratic potential wells. Phys Rev E 65:021906
13. Henderson S, Mitchell S, Bartlett P (2002) Propagation of hydrodynamic interactions in colloidal suspensions. Phys Rev Lett 88:088302
14. Ou-Yang HD, Wei M-T (2010) Complex fluids: probing mechanical properties of biological systems with optical tweezers. Annu Rev Phys Chem 61:421–440
15. Yao A, Tassieri M, Padgett M, Cooper J (2009) Microrheology with optical tweezers. Lab Chip 9:2568–2575
16. Preece D, Warren R, Evans RML, Gibson GM, Padgett MJ, Cooper JM, Tassieri M (2011) Optical tweezers: wideband microrheology. J Opt 13:044022
17. Pertsinidis A, Ling XS (2001) Equilibrium configurations and energetics of point defects in two-dimensional colloidal crystals. Phys Rev Lett 87(9):098303
18. Crocker JC, Grier DG (1994) Microscopic measurement of the pair interaction potential of charge-stabilized colloid. Phys Rev Lett 73(2):352–355
19. En A-R, Díaz-Leyva P, Arauz-Lara JL (2005) Microrheology from rotational diffusion of colloidal particles. Phys Rev Lett 94:106001
20. Wilson LG, Harrison AW, Poon WCK, Puertas AM (2011) Microrheology and the fluctuation theorem in dense colloids. EPL 93:58007
21. Murazawa N, Juodkazis S, Tanamura Y, Misawa H (2006) Rheology measurement at liquid-crystal water interface using laser tweezers. Jpn J Appl Phys 45(2A):977–982
22. Koenig GM Jr, Ong R, Cortes AD, Antonio Moreno-Razo J, Pablo JJ, Abbott NL (2009) Single nanoparticle tracking reveals influence of chemical functionality of nanoparticles on local ordering of liquid crystals and nanoparticle diffusion coefficients. Nano Lett 9(7):2794–2801
23. Mizuno D, Kimura Y, Hayakawa R (2004) Electrophoretic microrheology of a dilute lamellar phase: relaxation mechanisms in frequency-dependent mobility of nanometer-sized particles between soft membranes. Phys Rev E 70:011509
24. Hough LA, Islam MF, Janmey PA, Yodh AG (2004) Viscoelasticity of single wall carbon nanotube suspensions. Phys Rev Lett 93(16):168102
25. Helfer E, Harlepp S, Bourdieu L, Robert J, MacKintosh FC, Chatenay D (2001) Viscoelastic properties of actin-coated membranes. Phys Rev E 63:021904
26. Helfer E, Harlepp S, Bourdieu L, Robert J, MacKintosh FC, Chatenay D (2000) Microrheology of biopolymer-membrane complexes. Phys Rev Lett 85:457–460

24

Optical-Tweezers-Based Microrheology of Soft Materials and Living Cells

749

27. Helfer E, Harlepp S, Bourdieu L, Robert J, MacKintosh FC, Chatenay D (2001) Buckling of actin-coated membranes under application of a local force. Phys Rev Lett 87(8):088103 (088104 pages) 28. Yanai M, Butler JP, Suzuki T, Kanda A, Kurachi M, Tashiro H, Sasaki H (1999) Intracellular elasticity and viscosity in the body, leading, and trailing regions of locomoting neutrophils. Am J Physiol Cell Physiol 277:C432–C440 29. Wei M-T, Zaorski A, Yalcin HC, Wang J, Hallow M, Ghadiali SN, Chiou A, Ou-Yang HD (2008) A comparative study of living cell micromechanical properties by oscillatory optical tweezer. Opt Express 16(12):8594–8603 30. Yalcin HC, Hallow KM, Wang J, Wei M-T, Ou-Yang HD, Ghadiali SN (2009) Influence of cytoskeletal structure and mechanics on epithelial cell injury during cyclic airway reopening. Am J Physiol Lung Cell Mol Physiol 297:L881–L891 31. Balland M, Desprat N, Icard D, Féréol S, Asnacios A, Browaeys J, Hénon S, Gallet F (2006) Power laws in microrheology experiments on living cells: comparative analysis and modeling. Phys Rev E 74:021911–021917 32. Gallet F, Arcizet D, Bohec P, Richert A (2009) Power spectrum of out-of-equilibrium forces in living cells: amplitude and frequency dependence. Soft Matter 5:2947–2953 33. Wilhelm C (2008) Out-of-equilibrium microrheology inside living cells. Phys Rev Lett 101:028101 (028104 pages) 34. Mizuno D, Tardin C, Schmidt CF, MacKintosh FC (2007) Nonequilibrium mechanics of active cytoskeletal networks. Science 315:370–373 35. Mizuno D, Head DA, MacKintosh FC, Schmidt CF (2008) Active and passive microrheology in equilibrium and nonequilibrium systems. Macromolecules 41(19):7194–7202 36. Mofrad MRK (2009) Rheology of the cytoskeleton. Annu Rev Fluid Mech 41:433–453 37. Pelletier V, Gal N, Fournier P, Kilfoil ML (2009) Microrheology of microtubule solutions and actin-microtubule composite networks. Phys Rev Lett 102:188303 (188304 pages) 38. 
Zhu X, Kundukad B, van der Maarel JRC (2008) Viscoelasticity of entangled l-phage DNA solutions. J Chem Phys 129:185103 (185106 pages) 39. Mason TG, Ganesan K, Zanten JH, Wirtz D, Kuo SC (1997) Particle tracking microrheology of complex fluids. Phys Rev Lett 79:3282–3285 40. Hough LA, Ou-Yang HD (2006) Viscoelasticity of aqueous telechelic poly(ethylene oxide) solutions: relaxation and structure. Phys Rev E 73:031802 (031808 pages) 41. Chiang C-C, Wei M-T, Chen Y-Q, Yen P-W, Huang Y-C, Chen J-Y, Lavastre O, Guillaume H, Guillaume D, Chiou A (2011) Optical tweezers based active microrheology of sodium polystyrene sulfonate (NaPSS). Opt Express 19(9):8847–8854 42. Lee H, Shin Y, Kim ST, Reinherz EL, Lang MJ (2012) Stochastic optical active rheology. Appl Phys Lett 101:031902 43. Latinovic O, Hough LA, Ou-Yang HD (2010) Structural and micromechanical characterization of type I collagen gels. J Biomech 43:500–506 44. Shayegan M, Forde NR (2013) Microrheological characterization of collagen systems: from molecular solutions to fibrillar gels. PLoS One 8(8):e70590 45. Hénon S, Lenormand G, Richert A, Gallet F (1999) A new determination of the shear modulus of the human erythrocyte membrane using optical tweezers. Biophys J 76:1145–1151 46. Rancourt-Grenier S, Wei M-T, Bai J-J, Chiou A, Bareil PP, Duval P-L, Sheng Y (2010) Dynamic deformation of red blood cell in dual-trap optical tweezers. Opt Express 18 (10):10462–10472 47. Lim CT, Dao M, Suresh S, Sow CH, Chew KT (2004) Large deformation of living cells using laser traps. Acta Mater 52:1837–1845 48. Daoa M, Limb CT, Suresha S (2003) Mechanics of the human red blood cell deformed by optical tweezers. J Mech Phys Solid 51:2259–2280 49. Lyubin EV, Khokhlova MD, Skryabina MN, Fedyanin AA (2012) Cellular viscoelasticity probed by active rheology in optical tweezers. J Biomed Opt 17(10):101510

750

M.-T. Wei et al.

50. Meiners J-C, Quake SR (2000) Femtonewton force spectroscopy of single extended DNA molecules. Phys Rev Lett 84(21):5014–5017 51. Hough LA, Ou-Yang HD (1999) A new probe for mechanical testing of nanostructures in soft materials. J Nanopart Res 1:495–499 52. Wei M-T (2014) Microrheology of soft matter and living cells in equilibrium and non-equilibrium systems. Ph.D., Bioengineering, Lehigh University, Bethlehem 53. Crocker JC, Valentine MT, Weeks ER, Gisler T, Kaplan PD, Yodh AG, Weitz DA (2000) Two-point microrheology of inhomogeneous soft materials. Phys Rev Lett 85 (4):888–891 54. Levine AJ, Lubensky TC (2000) One- and two-particle microrheology. Phys Rev Lett 85:1774–1777 55. Hoffman BD, Crocker JC (2009) Cell mechanics: dissecting the physical responses of cells to force. Annu Rev Biomed Eng 11:259–288 56. Hoffman BD, Massiera G, Citters KMV, Crocker JC (2006) The consensus mechanics of cultured mammalian cells. Proc Natl Acad Sci U S A 103(27):10259–10264 57. Valentine MT, Dewalt LE, Ou-Yang HD (1996) Forces on a colloidal particle in a polymer solution: a study using optical tweezers. J Phys Condens Matter 8:9477–9482 58. Ou-Yang HD (1999) Design and applications of oscillating optical tweezers for direct measurements of colloidal forces. In: Farinato RS, Dubin PL (Eds.), Colloid–polymer interactions: from fundamentals to practice. Wiley, New York 59. Wright WH, Sonek GJ, Berms MW (1993) Radiation trapping forces on microspheres with optical tweezers. Appl Phys Lett 63(9):715–717 60. Ghislain LP, Switz NA, Webb WW (1994) Measurement of small forces using an optical trap. Rev Sci Instrum 65(9):2762–2768 61. Ashkin A (1992) Forces of a single-beam gradient laser trap on a dielectric sphere in the ray optics regime. Biophys J 61:569–582 62. Mazolli A, Neto PAM, Nussenzveig HM (2003) Theory of trapping forces in optical tweezers. Proc R Soc Lond A 459:3021–3041 63. 
Richardson AC, Reihani SNS, Oddershede LB (2008) Non-harmonic potential of a single beam optical trap. Opt Express 16(20):15709–15717 64. Merenda F, Boer G, Rohner J, Delacrétaz GD, Salathé R-P (2006) Escape trajectories of single-beam optically trapped micro-particles in a transverse fluid flow. Opt Express 14 (4):1685–1699 65. Greenleaf WJ, Woodside MT, Abbondanzieri EA, Block SM (2005) Passive all-optical force clamp for high-resolution laser trapping. Phys Rev Lett 95:208102 (208104 pages) 66. Neves AAR, Fontes A, Pozzo LY, Thomaz AA, Chillce E, Rodriguez E, Barbosa LC, Cesar CL (2006) Electromagnetic forces for an arbitrary optical trapping of a spherical dielectric. Opt Express 14(26):13101–13106 67. Jahnel M, Behrndt M, Jannasch A, Schäffer E, Grill SW (2011) Measuring the complete force field of an optical trap. Opt Lett 36(7):1260–1262 68. Ling L, Zhou F, Huang L, Guo H, Li Z, Li Z-Y (2011) Perturbation between two traps in dualtrap optical tweezers. J Appl Phys 109:083116 69. Huang C-C, Wang C-F, Mehta DS, Chiou A (2001) Optical tweezers as sub-pico-newton force transducers. Opt Commun 195:41–48 70. Rohrbach A, Kress H, Stelzer EHK (2003) Three-dimensional tracking of small spheres in focused laser beams influence of the detection angular aperture. Opt Lett 28(6):411–413 71. Rohrbach A, Tischer C, Neumayer D, Florin E-L, Stelzer EHK (2004) Trapping and tracking a local probe with a photonic force microscope. Rev Sci Instrum 75(6):2197–2210 72. Rohrbach A (2005) Stiffness of optical traps: quantitative agreement between experiment and electromagnetic theory. Phys Rev Lett 95(16):168102 73. Wei M-T, Yang K-T, Karmenyan A, Chiou A (2006) Three-dimensional optical force field on a Chinese hamster ovary cell in a fiber-optical dual-beam trap. Opt Express 14(7):3056–3064

24

Optical-Tweezers-Based Microrheology of Soft Materials and Living Cells

751

74. Wei M-T, Chiou A (2005) Three-dimensional tracking of Brownian motion of a particle trapped in optical tweezers with a pair of orthogonal tracking beams and the determination of the associated optical force constants. Opt Express 13(15):5798–5806 75. Ghislain LP, Webb WW (1993) Scanning-force microscope based on an optical trap. Opt Lett 18(19):1678–1680 76. Wei M-T, Ng J, Chan CT, Chiou A, Ou-Yang HD (2012) Transverse force profiles of individual dielectric particles in an optical trap. In: SPIE optics photonics, San Diego 77. Latinovic O (2010) Micromechanics and structure of soft and biological materials: an optical tweezers study. Verlag Dr. Muller Publishing, Saarbrucken 78. Wright WH, Sonek GJ, Berns MW (1999) Parametric study of the forces on microspheres held by optical tweezers. Appl Optics 33(9):1735–1748 79. Barton JP, Alexander DR, Schaub SA (1989) Theoretical determination of net radiation force and torque for a spherical particle illuminated by a focused laser beam. J Appl Phys 66:4594–4602 80. Zemánek P, Jonáš A, Šrámek L, Liška M (1998) Optical trapping of Rayleigh particles using a Gaussian standing wave. Opt Commun 151:273–285 81. Ganic D, Gan X, Gu M (2004) Exact radiation trapping force calculation based on vectorial diffraction theory. Opt Express 12(12):2670–2675 82. Viana NB, Mazolli A, Neto PAM, Nussenzveig HM (2006) Absolute calibration of optical tweezers. Appl Phys Lett 88:131110 83. Ferry JD (1970) Viscoelastic properties of polymers. Wiley, New York 84. Brau RR, Ferrer JM, Lee H, Castro CE, Tam BK, Tarsa PB, Matsudaira P, Boyce MC, Kamm R, Lang MJ (2007) Passive and active microrheology with optical tweezers. J Opt A: Pure Appl Opt 9:S103–S112 85. Dasgupta BR, Tee S-Y, Crocker JC, Frisken BJ, Weitz DA (2002) Microrheology of polyethylene oxide using diffusing wave spectroscopy and single scattering. Phys Rev E 65:051505 (051510 Pages) 86. 
Huang Y, Santore MM (2002) Dynamics in adsorbed layers of associative polymers in the limit of strong backbone-surface attractions. Langmuir 18(6):2158–2165 87. Gittes F, MacKintosh FC (1998) Dynamic shear modulus of a semiflexible polymer network. Phys Rev E 58(2):R1241–R1244 88. Schnurr B, Gittes F, MacKintosh FC, Schmidt CF (1997) Determining microscopic viscoelasticity in flexible and semiflexible polymer networks from thermal fluctuations. Macromolecules 30(25):7781–7792 89. Mason TG, Weitz DA (1995) Optical measurements of frequency-dependent linear viscoelastic moduli of complex fluids. Phys Rev Lett 74(7):1250–1253 90. Green MS, Tobolsky AV (1946) A new approach to the theory of relaxing polymeric media. J Chem Phys 14(80):1724109 91. Annable T, Buscall R, Ettelaie R, Whittlestone D (1993) The rheology of solutions of associating polymers: comparison of experimental behavior with transient network theory. J Rheol 37:695–727 92. Pham QT, Russel WB, Thibeault JC, Lau W (1999) Polymeric and colloidal modes of relaxation in latex dispersions containing associative triblock copolymers. J Rheol 43:1599–1616 93. Mizuno D, Bacabac R, Tardin C, Head D, Schmidt CF (2009) High-resolution probing of cellular force transmission. Phys Rev Lett 102:168102 (168104 pages) 94. Hale CM, Sun SX, Wirtz D (2009) Resolving the role of actoymyosin contractility in cell microrheology. PLoS One 4(9):e7054 (7011 pages) 95. Robert D, Nguyen T-H, Fo G, Wilhelm C (2010) In vivo determination of fluctuating forces during endosome trafficking using a combination of active and passive microrheology. PLoS One 5(4):e10046 96. Kollmannsberger P, Fabry B (2011) Linear and nonlinear rheology of living cells. Annu Rev Mater Res 41:75–97

752

M.-T. Wei et al.

97. Aratyn-Schaus Y, Oakes PW, Gardel ML (2011) Dynamic and structural signatures of lamellar actomyosin force generation. Mol Biol Cell 22:1330–1339 98. Wang N, Butler JP, Ingber DE (1993) Mechanotransduction across the cell surface and through the cytoskeleton. Science 260:1124–1127 99. Wang Y, Botvinick EL, Zhao Y, Berns MW, Usami S, Tsien RY, Chien S (2005) Visualizing the mechanical activation of Src. Nature 434:1040–1045 100. Engler AJ, Sen S, Sweeney HL, Discher DE (2006) Matrix elasticity directs stem cell lineage specification. Cell 126(4):677–689. doi:10.1016/j.cell.2006.06.044 101. Wang N, Tolić-Nørrelykke IM, Chen J, Mijailovich SM, Butler JP, Fredberg JJ, Stamenović D (2002) Cell prestress. I. Stiffness and prestress are closely associated in adherent contractile cells. Am J Physiol Cell Physiol 282:C606–C616 102. Byfield FJ, Wen Q, Levental I, Nordstrom K, Arratia PE, Miller RT, Janmey PA (2009) Absence of filamin a prevents cells from responding to stiffness gradients on gels coated with collagen but not fibronectin. Biophys J 96:5095–5102 103. Trichet L, Digabel JL, Hawkins RJ, Vedula SRK, Gupta M, Ribrault C, Hersen P, Voituriez R, Ladoux B (2012) Evidence of a large-scale mechanosensing mechanism for cellular adaptation to substrate stiffness. Proc Natl Acad Sci U S A 109(18):6933–6938 104. Han SJ, Bielawski KS, Ting LH, Rodriguez ML, Sniadecki NJ (2012) Decoupling substrate stiffness, spread area, and micropost density: a close spatial relationship between traction forces and focal adhesions. Biophys J 103(4):640–648 105. Tee S-Y, Fu J, Chen CS, Janmey PA (2011) Cell shape and substrate rigidity both regulate cell stiffness. Biophys J 100(5):L25–L27 106. Bishop AI, Nieminen TA, Heckenberg NR, Rubinsztein-Dunlop H (2004) Optical microrheology using rotating laser-trapped particles. Phys Rev Lett 92(19):198104 (198104 pages) 107. Fabry B, Maksym GN, Butler JP, Glogauer M, Navajas D, Fredberg JJ (2001) Scaling the microrheology of living cells. 
Phys Rev Lett 87(14):148102 (148104 pages) 108. Koenderink GH, Dogic Z, Nakamura F, Bendix PM, MacKintosh FC, Hartwig JH, Stossel TP, Weitz DA (2009) An active biopolymer network controlled by molecular motors. Proc Natl Acad Sci U S A 106(36):15192–15197 109. Silva MS, Depken M, Stuhrmann B, Korsten M, MacKintosh FC, Koenderink GH (2011) Active multistage coarsening of actin networks driven by myosin motors. Proc Natl Acad Sci U S A 108(23):9408–9413 110. John K, Caillerie D, Peyla P, Raoult A, Misbah C (2013) Nonlinear elasticity of cross-linked networks. Phys Rev E 87:042721 111. Reymann A-C, Boujemaa-Paterski R, Martiel J-L, Guérin C, Cao W, Chin HF, Cruz EMDL, Théry M, Blanchoin L (2012) Actin network architecture can determine myosin motor activity. Science 336(6086):1310–1314 112. Stuhrmann B, Silva MS, Depken M, MacKintosh FC, Koenderink GH (2012) Nonequilibrium fluctuations of a remodeling in vitro cytoskeleton. Phys Rev E 86:020901(R) (020905 pages) 113. Lau AWC, Hoffman BD, Davies A, Crocker JC, Lubensky TC (2003) Microrheology, stress fluctuations, and active behavior of living cells. Phys Rev Lett 91:198101 (198104 pages) 114. MacKintosh FC, Levine AJ (2008) Nonequilibrium mechanics and dynamics of motoractivated gels. Phys Rev Lett 100:018104 115. Brangwynne CP, Koenderink GH, MacKintosh FC, Weitz DA (2008) Nonequilibrium microtubule fluctuations in a model cytoskeleton. Phys Rev Lett 100:118104 116. Kollmannsberger P, Mierke CT, Fabry B (2011) Nonlinear viscoelasticity of adherent cells is controlled by cytoskeletal tension. Soft Matter 7:3127–3132 117. Fernández P, Pullarkat PA, Ott A (2006) A master relation defines the nonlinear viscoelasticity of single fibroblasts. Biophys J 90:3796–3805 118. Yao NY, Broedersz CP, Depken M, Becker DJ, Pollak MR, MacKintosh FC, Weitz DA (2013) Stress-enhanced gelation: a dynamic nonlinearity of elasticity. Phys Rev Lett 110:018103

24

Optical-Tweezers-Based Microrheology of Soft Materials and Living Cells

753

119. Bruno L, Salierno M, Wetzler DE, Despósito MA, Levi V (2011) Mechanical properties of organelles driven by microtubule-dependent molecular motors in living cells. PLoS One 6(4): e18332 120. Wei M-T, Ou-Yang HD (2010) Thermal and non-thermal intracellular mechanical fluctuations of living cells. In: SPIE optics photonics, San Diego, p 77621L 121. Chien S (2007) Mechanotransduction and endothelial cell homeostasis: the wisdom of the cell. Am J Physiol Heart Circ Physiol 292:H1209–H1224 122. Chen CS (2008) Mechanotransduction – a field pulling together? J Cell Sci 121 (20):3285–3291 123. Wang N, Tytell JD, Ingber DE (2009) Mechanotransduction at a distance: mechanically coupling the extracellular matrix with the nucleus. Nat Rev 10:75–82 124. Parker KK, Ingber DE (2007) Extracellular matrix, mechanotransduction and structural hierarchies in heart tissue engineering. Philos Trans R Soc B 2114:1–13 125. Alamo JC, Norwich GN, Li Y-sJ, Lasheras JC, Chien S (2008) Anisotropic rheology and directional mechanotransduction in vascular endothelial cells. Proc Natl Acad Sci U S A 105 (40):15411–15416

3-D Single Particle Tracking Using Dual Images Divided by Prism: Method and Application to Optical Trapping

25

Takanobu A. Katoh, Shoko Fujimura, and Takayuki Nishizaka

Contents
Introduction .............................................................................. 756
Design Rationale for 3-D Detection ........................................ 756
Construction ............................................................................. 757
Ready-to-Use Implement ......................................................... 758
Calibration of the Displacement Along z-Direction ............... 759
Application ............................................................................... 761
Optical Trapping Equipped with 3-D Tracking ...................... 762
Conclusion Remarks ................................................................ 765
References ................................................................................ 766

Abstract

We describe here a three-dimensional optical tracking method realized with a simple optical component, a quadrangular wedge prism. Two additional lenses placed between a conventional optical microscope and a camera make it possible to track single particles in 3-D. Because of the simplicity of its rationale and construction, any laboratory equipped for 2-D tracking, under fluorescence, phase-contrast, bright-field, or dark-field illumination, can adopt our method with the same analysis procedure and thus the same precision. Applications to a molecular motor (the kinesin-microtubule system) and to optical trapping are also demonstrated, verifying the advantage of our approach for assessing the movement of tiny objects, ranging in size from ten nanometers to a few microns, in aqueous solution.

T.A. Katoh • S. Fujimura • T. Nishizaka (*)
Department of Physics, Gakushuin University, Tokyo, Japan
e-mail: [email protected]; [email protected]; [email protected]
© Springer Science+Business Media Dordrecht 2017
A.H.-P. Ho et al. (eds.), Handbook of Photonics for Biomedical Engineering, DOI 10.1007/978-94-007-5052-4_2


Keywords

3-D single particle tracking • Optical system construction • Custom-ordered optical device • Design • Implementation • Kinesin-microtubule system • Mini stage equipped with piezo actuator • Proteins inside a single cell • Schematic illustration • Calibration of displacement along z-direction • Optical trapping with 3-D tracking • Optical tweezers • Parallax

Introduction

In the past four decades, the technique called optical trapping (also referred to as "optical tweezers"), originally devised by Arthur Ashkin [1–4], has been developed into a set of useful tools for manipulating micron-sized particles under optical microscopes (see excellent reviews [5–8] for more details). By tracking single trapped particles with a precision of tens of nanometers, the optical trap also works as a force transducer: meticulous and reliable calibration allows the external force imposed on a trapped particle to be measured through its displacement from the trap center. However, the direction of force measurement is limited to 2-D, simply because the displacement of a specimen can be detected only in the sample plane, defined as the plane perpendicular to the optical axis of the objective lens. Although particles are trapped in all directions, only the 2-D force is measured under conventional optical microscopes. This limitation leads to inaccurate estimates of tiny forces, such as those produced by a single bacterium or even a single motor molecule. In this chapter, a simple but powerful approach is introduced to overcome this limitation. A single optical component, a wedge prism, can extend conventional 2-D detection into the 3-D world [9]. By combining the prismatic setup with optical trapping, 3-D force measurement becomes possible in any microscope equipped with a trapping laser. This approach will be a powerful tool in research fields such as cell biology, biophysics, biomedical engineering, and single-molecule physiology.
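The force-transducer principle above can be sketched in a few lines: within the linear (Hookean) region of the trap, the force simply equals the trap stiffness times the displacement from the trap center. The stiffness value below is an illustrative order of magnitude for optical tweezers, not a number from this chapter.

```python
# Sketch of 2-D force readout from a calibrated optical trap, assuming
# a linear (Hookean) restoring force F = -k * displacement near the trap
# center. The stiffness k is illustrative (~0.05 pN/nm is a common order
# of magnitude for optical tweezers), not a value from this chapter.

def trap_force_pN(dx_nm, dy_nm, k_pN_per_nm=0.05):
    """Return the (Fx, Fy) force in piconewtons on a trapped bead
    displaced by (dx_nm, dy_nm) nanometers from the trap center."""
    return (-k_pN_per_nm * dx_nm, -k_pN_per_nm * dy_nm)

fx, fy = trap_force_pN(100.0, -40.0)  # bead pulled 100 nm in +x, 40 nm in -y
print(fx, fy)                          # approximately -5.0 pN and +2.0 pN
```

Note that only the in-plane components (Fx, Fy) appear here, which is exactly the 2-D limitation the chapter sets out to remove.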

Design Rationale for 3-D Detection

The mechanism by which human beings perceive the depth of objects with their eyes is called "parallax." Using the different geometry of multiple detectors (two eyes) arranged in the horizontal direction, any movement of an object is converted into 3-D cognition at once. The trick is very simple: intuitively speaking, the directions of motion projected onto the two detectors are opposite when an object moves away from you; the object appears to move slightly rightward when watched with only your right eye, whereas it appears to move leftward with only your left eye. If a researcher sets up the same arrangement for a specimen and a camera, any movement parallel to the optical axis can be converted into a relative horizontal displacement, just as with two eyes.


Fig. 1 (a) Schematic of the 3-D tracking optical system developed by our group. The beam from a single emitter located at the right plane is divided into two components of light (the blue and green paths) by the single wedge prism located between two lenses. The beams are projected onto two separate regions of a single camera plate (left). The beam runs from −z to +z. (b) Optical design at the camera port. In the case that the microscope has an infinity-corrected optical system, the equivalent back focal plane (eBFP) is located at a distance f1 from L1. The wedge prism should be set at the eBFP in order to divide the beam flux precisely in half

Our group decided to use a single optical component, a wedge prism, for this task [9]. The prism is located at the back focal plane of the objective lens (BFP) and divides the beam from the point source into two components of light (Fig. 1a). The two separated beams are projected onto a single camera plate and work as "two eyes," because their projected positions are displaced perpendicularly. When the light source is displaced parallel to the optical axis, the two images of the source move in opposite directions, as in the case of two horizontally aligned eyes. In this way, z-movement of a specimen is converted into a relative x-directed displacement between the two half-separated images. X- and y-movements are simultaneously detected as the average positions of the two images. Taken together, the absolute position of any particle that appears as a point light source is determined in all three dimensions, with a precision similar to that of 2-D detection without the prism.
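The bookkeeping just described can be sketched as follows. The function name and the numerical calibration factor are illustrative; the factor plays the role of the reciprocal of a slope like the one obtained in the calibration section of this chapter.

```python
# Sketch of 3-D position recovery from the two prism-split images,
# following the scheme in the text: x and y are the averages of the two
# spot centroids, and z is proportional to their relative x-displacement.
# `cal` is an illustrative calibration factor (the reciprocal of a
# slope like the 0.66 exemplified later in the chapter).

def position_3d(top_spot, bottom_spot, cal=1.0 / 0.66):
    """top_spot / bottom_spot: (x, y) centroids of the two half-images,
    in micrometers, after removing the fixed offset between the two
    camera regions. Returns (x, y, z) in micrometers."""
    x = 0.5 * (top_spot[0] + bottom_spot[0])
    y = 0.5 * (top_spot[1] + bottom_spot[1])
    dx = top_spot[0] - bottom_spot[0]  # relative x-displacement
    z = cal * dx                       # linear calibration (Fig. 4b)
    return (x, y, z)

print(position_3d((1.10, 2.0), (0.90, 2.0)))  # dx = 0.20 um -> z ~ 0.30 um
```

In a real analysis the two centroids would come from 2-D Gaussian fits to each half-image, as discussed in the calibration section.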

Construction

Because a single objective lens comprises multiple lenses inside a metal tube and the BFP lies inside that tube, precise placement of an optical component at the BFP is impossible in most cases. Instead, the BFP is imaged outside the camera port of the microscope as an equivalent BFP (eBFP) by a convex lens (L1) with focal length f1. By locating this lens at a distance f1 from the equivalent sample plane (eSP), the eBFP appears at a distance f1 from L1. Subsequently, a second lens (L2) with focal length f2 focuses the image on the camera plane (Fig. 1b). The prism is set between the two lenses. When the prism is carefully positioned so that its edge lies exactly at the center of the beam flux from the specimen, two spots of equal intensity appear on one camera plate.
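One practical consequence of the L1/L2 relay is that it adds a magnification of f2/f1 on top of the microscope's own magnification (a standard two-lens relay result, not a value from this chapter), which fixes the pixel scale at the sample. All numbers below are illustrative.

```python
# Pixel scale at the sample for a microscope plus L1/L2 relay.
# Standard two-lens relay result: the relay contributes a magnification
# of f2/f1. Example numbers (100x objective, f1 = 200 mm, f2 = 100 mm,
# 6.5 um camera pixels) are illustrative, not from the chapter.

def sample_nm_per_pixel(obj_mag, f1_mm, f2_mm, pixel_um):
    total_mag = obj_mag * (f2_mm / f1_mm)   # microscope x relay
    return pixel_um * 1000.0 / total_mag    # nm at the sample per pixel

print(sample_nm_per_pixel(100, 200.0, 100.0, 6.5))  # 130.0 nm/pixel
```

Knowing this scale is what allows centroid displacements in pixels to be quoted in nanometers at the specimen.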


To achieve the above optical setup, a custom-made prism with a shallow angle that matches both f2 and the size of the camera plate (dx × dy) is needed. The distance between the two spots should be half of dy so that specimens can be tracked under the microscope with the observation area maximized. Additionally, a linear translator equipped with a Mitutoyo micrometer head is needed for precise adjustment of the prism along the x-direction. The position can easily be adjusted by watching single emitters, such as fluorescent beads attached to the glass substrate, so as to produce two equal signals. For z-adjustment of the prism, fine micrometers are not recommended, because at the above magnification the position of the eBFP can be determined only on the millimeter scale. One easy trick is to project a signature component of the condenser-lens unit located above the objective, such as the ring slit of a phase-contrast microscope, onto a thin piece of paper. By quickly moving the paper along the z-direction by hand, the precise position of the eBFP can be recognized directly by eye as the position where the ring pattern becomes sharpest. The precision of this procedure is limited to the submillimeter scale, which is presumably enough to reproduce the same prism arrangement (Fig. 1b).
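The required "shallow angle" can be estimated on the back of an envelope. Under assumptions not stated in the chapter (a thin prism with deviation δ = (n − 1)α, a net relative deflection δ between the two half-beams, and n = 1.52 for BK7 glass), setting the spot separation f2·δ equal to dy/2 gives the wedge angle α. The numbers below are illustrative only.

```python
import math

# Back-of-envelope estimate of the wedge angle alpha needed so that the
# two spots are separated by half the camera height (dy / 2).
# Assumptions (not from the chapter): thin-prism deviation
# delta = (n - 1) * alpha, net relative deflection delta between the two
# half-beams, and n = 1.52 (BK7) as an illustrative refractive index.

def wedge_angle_deg(dy_mm, f2_mm, n=1.52):
    delta = (dy_mm / 2.0) / f2_mm   # required deflection angle (rad)
    alpha = delta / (n - 1.0)       # thin-prism relation
    return math.degrees(alpha)

# e.g. an 11 mm-tall sensor with f2 = 200 mm needs a very shallow wedge
# of roughly 3 degrees:
print(wedge_angle_deg(11.0, 200.0))
```

The point of the sketch is the scaling: longer f2 or higher-index glass both reduce the wedge angle that must be custom-ordered.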

Ready-to-Use Implement

Our 3-D method is simple, and thus its construction is straightforward. An implement that packages all the optics in one device is useful and reliable for reproducible calibrations and outputs. Our group developed two types of custom-ordered implement for Nikon microscopes: one for the TE2000E and the other for the Ti-E (Fig. 2). The two lenses were

Fig. 2 Micrograph of the custom-ordered optical device attached to the camera port of an inverted microscope (Ti-E, Nikon). The adjustment block for the prism is housed in the middle cuboid. The front cuboid (right in the micrograph) contains a switchable beam splitter, which enables an additional channel of a different wavelength to be detected with another camera


fixed in the device, and the prism could be adjusted by three translators while the assembly remained light-tight. There were two additional unique features inside. (1) The prism can be removed from the optical path by a simple click mechanism, which allows the system to be converted back into a conventional microscope for users who do not need 3-D tracking. (2) An optical filter can be placed between L1 and the prism, which allows the signal to be split into two channels of different wavelengths by installing a suitable beam splitter. Users can thus obtain two channels simultaneously with two different cameras: one for 3-D tracking and the other for conventional illumination (such as phase-contrast or fluorescence microscopy). With this implementation, our group succeeded in dismantling a subunit from a single protein immobilized on the glass surface while watching a single fluorescent probe labeled on the protein (Naito et al., manuscript in preparation).

Calibration of the Displacement Along z-Direction

The relative x-displacement between the two images separated by the prism does not directly correspond to the real displacement along the z-direction, so a calibration between the two values (Δx and z) is needed. To estimate the absolute movement of the sample in a chamber filled with solution, the calibration factor, i.e., the function that converts Δx into z, must be determined. For this purpose, a custom-made mini stage (Fig. 3) was constructed, by which the position of the sample can be displaced with nanometer accuracy along the direction of the optical axis. To avoid any shift of the BFP position between calibration and measurement,

Fig. 3 Micrograph of the mini stage equipped with the piezo actuator (Physik Instrumente; the cube outlined by the yellow dotted line) for axial displacement of the sample. The sample is placed at the middle hole, keeping the same height as in the real measurement situation, in which a conventional annular plate for the commercial stage is used


Fig. 4 (a) Image of a single fluorescent bead with a diameter of 0.5 μm attached to the glass surface, observed under the 3-D tracking system. Because the image of the single emitter is divided into two by the prism located at the equivalent back focal plane, two spots are projected onto two separate regions of the single camera plate. The two images move in opposite x-directions when the distance between the glass and the objective changes; i.e., the top emitter moves from left to right when the sample moves from −0.65 to +0.65 μm, while the bottom emitter moves from right to left. (b) Calibration for the 3-D tracking. The abscissa is controlled by the piezo actuator, and the ordinate is determined from the relative position between the separated emitters. The black and red lines show the data when the sample moves upward and downward, respectively. The cyan curve shows the linear fit of the points of the black line within the range of ±0.50 μm

the position of the objective lens should, ideally, be located at exactly the same position. The mini stage is designed to keep the height of the sample plane equivalent with the help of L-shaped plates. The relationship between the z-position and the relative displacement between the two images is almost linear, as exemplified in Fig. 4. The slope is 0.66 in this case; the value depends on the tracking algorithm and on the type of objective. One approach is 2-D Gaussian fitting to each separated emitter, albeit the emitters show a fan-like profile as the image defocuses. An alternative is estimation of the centroid of each emitter, but the background-subtraction value may degrade the spatial resolution, especially when the signal becomes low as the image defocuses. Ideally, a pattern-matching algorithm with reconstruction of the single emitters, of the kind generally used for defocused imaging, will be applied in the future to track the divided emitters in the defocused situation. The slope in Fig. 4b is taken as the calibration factor for the real measurement, with which the relative displacement is converted simply by applying the linear relationship. Typically, the real displacement can be determined within a range of about 2 μm, but the range depends on the specimen used and, most importantly, on the numerical aperture of the objective lens. Note that each objective has its own inherent calibration factor, even when the optics behind the camera port are identical.
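Extracting the calibration factor amounts to a linear least-squares fit of the relative displacement Δx against the piezo-commanded z-position. A minimal sketch with synthetic, noise-free data (the slope 0.66 mimics the example in the text; a real calibration would use measured centroid differences):

```python
# Least-squares fit of relative spot displacement (dx) versus the
# piezo-commanded z-position, to obtain a calibration slope as in
# Fig. 4b. Synthetic, noise-free data are used here for illustration.

def fit_slope(z_um, dx_um):
    """Ordinary least-squares slope of dx versus z (pure stdlib)."""
    n = len(z_um)
    zm = sum(z_um) / n
    xm = sum(dx_um) / n
    num = sum((z - zm) * (x - xm) for z, x in zip(z_um, dx_um))
    den = sum((z - zm) ** 2 for z in z_um)
    return num / den

z = [i * 0.1 - 0.5 for i in range(11)]   # -0.5 ... +0.5 um piezo steps
dx = [0.66 * zi for zi in z]             # ideal linear response
print(fit_slope(z, dx))                  # recovers the slope, ~0.66
```

With real data, restricting the fit to the central, most linear part of the curve (as done for the ±0.50 μm window in Fig. 4b) avoids biasing the slope with the nonlinear tails.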

25

3-D Single Particle Tracking Using Dual Images Divided by Prism: Method. . .

761

Application
The prismatic optical tracking described here has now been established as one of the general approaches to tracking particles in 3-D [10]. It has been successfully applied to two biomolecule samples so far [9, 11]. The first is the kinesin–microtubule system. The microtubule is a filament structure with a length of tens of microns in cells or in an in vitro motility assay and is composed of subunit proteins called α- and β-tubulin that are alternately arranged in the longitudinal direction. The diameter of the microtubule is only 26 nm, and thus in a conventional assay the microtubule is fluorescently labeled to be visualized under an optical microscope. The molecular motor kinesin literally walks along the microtubule, using its two catalytic domains, called heads, in a hand-over-hand manner [12]. To address the genuine property of the catalytic core of kinesin, it should be truncated into a single-headed form. With this idea, our research group designed a new experimental setup by which the movement of the surface of the microtubule is precisely tracked (Fig. 5a). A quantum dot (QD) was specifically attached to the surface of the microtubule, and the trace of the single QD was directly reconstructed as a 3-D plot. Interestingly enough, the microtubule exhibits a corkscrewing motion during sliding when it runs on lawns of the recombinants [13]. The handedness of the rotation was directly quantified with the above method [9], as represented in the 3-D plot (Fig. 5b). The oscillation curve in the x-z plot also allows the pitch value to be quantified with sub-nanometer-scale resolution (Fig. 5c). Note that the radius of

Fig. 5 (a) Schematic of the modified in vitro motility assay, which enables tracking of the movement of the microtubule surface through the quantum dot. In the presence of ATP, the kinesin recombinant fused to gelsolin slides the microtubule. The recombinant is immobilized on the glass substrate through an antibody. (b) 3-D trajectory of the quantum dot. (c) y-z and x-z plots of a

762

T.A. Katoh et al.

corkscrewing was only 20 nm, as the diameter of the microtubule is 24 nm. The data also serve as a good validation of our 3-D tracking method, which enables us to detect nanometer-scale displacements of biomolecules. The second application was the tracking of specific proteins inside a single cell [11]. Yeast was employed, as it is now established as one of the main biological model organisms. In this contribution, a QD-conjugated protein, the prion Sup35, was prepared. The dynamics of prions was directly visualized qualitatively inside living yeast cells, and a unique appearance of the diffusional motion, perhaps originating from the pattern of prion aggregation, was typically observed. Through these applications, we noticed two pitfalls in developing a totally new 3-D method that had not been anticipated at the design stage. First, precise determination of the handedness required several complex steps. When we estimated the calibration factor, the position of an emitter was moved by the mini stage equipped with the piezoelectric actuator. Note that the direction of movement of the emitter is effectively inverted when the objective, rather than the sample, is moved: when the objective is displaced upward, the emitter appears to move downward, which is realized by the shrinkage of the actuator during calibration. These configurations directly couple to the definition of the plus and minus signs of the calibration factor. Additionally, the image acquired under an inverted microscope is always a mirror image. This simple fact is easily forgotten because the handedness does not matter in most 2-D observations, which do not include any information along the optical axis. When an observer does not need to know the orientation from which they are watching the glass slide, they do not need to define the front and rear sides of the glass. The problem of mirroring emerges only in the case of 3-D reconstructions to determine the handedness of movements or structures of biomolecules.
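The pitch and handedness of such a corkscrewing trace can be read off the reconstructed trajectory. The sketch below is a generic illustration (not the authors' analysis code), assuming the sliding axis lies along y and the rotation occurs in the x-z plane; as stressed above, the mapping of the fitted slope's sign to left/right handedness depends on the optical configuration (mirroring) and must be calibrated for the actual setup.

```python
import numpy as np

def pitch_and_handedness(x, y, z):
    """Estimate the helix pitch (advance along y per full turn) and the
    apparent handedness from a 3-D trace. The sign convention here is
    illustrative; mirroring in an inverted microscope can flip it."""
    phase = np.unwrap(np.arctan2(z, x))   # rotation phase in the x-z plane
    slope = np.polyfit(phase, y, 1)[0]    # axial advance per radian
    return 2 * np.pi * abs(slope), ("right" if slope > 0 else "left")

# Synthetic trace: radius 20 nm, pitch 300 nm (illustrative values only)
t = np.linspace(0, 4 * np.pi, 400)        # two full turns
x, z = 20 * np.cos(t), 20 * np.sin(t)
y = 300 * t / (2 * np.pi)
pitch, hand = pitch_and_handedness(x, y, z)   # pitch ~ 300 nm
```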

Optical Trapping Equipped with 3-D Tracking
As demonstrated above, 3-D localization with nanometer accuracy can be achieved using one additional optical device, a wedge prism, together with a precise calibration. One may expect that this feature can easily be combined with other techniques, especially those limited by 2-D observation. Optical trapping has been used to capture and manipulate small particles, typically with sizes ranging from 100 nm to 10 μm, and the trapping force acts not only in the xy-plane but also along the optical z-axis, even with a single laser beam, when the beam is focused at the sample plane. When researchers use optical trapping as a force transducer to measure biological forces, such as the gliding force of a single bacterium [14] or the binding force of a molecular motor [15], the direction of the force measurement has been limited to 2-D although the particles were captured in 3-D, simply because there was no conventional way to track particles in 3-D. As this dimensional limitation is solved with the method presented here, optical assemblies including optical trapping can be expanded into 3-D systems with a single prism. One representative diagram for combined optical trapping and 3-D tracking is shown in Fig. 6. Our system typically includes two channels for the observation: the


Fig. 6 Schematic of the optical system including the technique to track particles in three dimensions (3-D unit) and the optical trap. The infrared laser path is shown by the yellow line. The image of the fluorescent bead held in the optical trap is captured by the EMCCD camera, while another channel, such as a DIC image of a bacterium or an organelle in cells, is captured simultaneously by the sCMOS camera. Any additional illumination can easily be added to this system as long as it is conventional and commercially available for an inverted microscope

fluorescent image (green and blue fluxes leading to the EMCCD camera) and the additional illumination from the condenser lens (the red line), such as a phase-contrast or DIC image. The two channels are split toward two different cameras by a beam splitter located outside the microscope. Two dichroic mirrors are located between the objective and the focusing lens: one introduces the IR laser that works as the optical trap, and the other introduces the excitation light that illuminates the fluorescent probe. Their optical axes should be completely aligned with the optical axis of the objective to realize ideal optical trapping, observation, and tracking. In the setup shown in Fig. 6, the fluorescent image is used as the channel to track the particle in 3-D, because fluorescent beads look symmetrical even when the image is defocused. Note that this choice is not a requirement: if a researcher needs to use the fluorescent channel to observe other features of the specimen, it can be directed to the sCMOS camera by switching the beam splitter located outside the camera port. Such flexibility is the advantage of our 3-D system, in which only a few optical devices are added to a conventional microscope. Typical trajectories of a single trapped bead are shown in Fig. 7. The optical trap is known to work as a Hookean elastic spring along the x- and y-directions, i.e.,

Fig. 7 Validation of the trapping force of the optical trap along three dimensions. (a) Typical x-, y-, and z-time courses of the trapped bead under high (left) and low (right) laser powers. (b) The bead trapped at the low laser power in (a) was further analyzed to make histograms. Because the probability of localization should follow the Boltzmann distribution, the histogram is converted into an energy diagram assuming that the potential of the optical trap works as a spring



the force applied to the trapped object is proportional to its displacement from the trap center. In other words, the movement of the trapped particle is restricted under the potential Ui = (1/2)κi(xi − xi0)², where κi and xi0 are the spring constant of the trap and the trap center, respectively, along the ith axis. When a low laser power is applied to the trap, the bead fluctuates, and the probability of its position follows the Boltzmann distribution under this potential. In our typical setup, the spring constant along the z-axis is nearly a quarter of those along the x- and y-axes. This feature possibly originates from multiple unmodifiable factors, such as the characteristics of the objective and the magnification at the tip of the optical fiber along the z-axis. Although the shape of the trapping potential is asymmetrical, the force imposed on the trapped particle is precisely determined with the 3-D localization method, under the assumption that the force can be decomposed into three components along the x-, y-, and z-axes. Histograms of the position distribution tell us the range of displacements over which the three spring constants can be applied for a given objective lens (Fig. 7b).
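The conversion from position fluctuations to a spring constant can be sketched with the equipartition theorem, κi = kBT/⟨(xi − ⟨xi⟩)²⟩, which follows from the Boltzmann statistics described above. This is a generic estimator, not the authors' analysis code; the temperature and stiffness values are illustrative.

```python
import numpy as np

KB_T = 4.11e-21  # thermal energy k_B*T at ~298 K, in joules (assumed)

def trap_stiffness(positions_m):
    """Spring constant (N/m) along one axis from the equipartition
    theorem: kappa = k_B*T / var(x), with positions in meters."""
    return KB_T / np.var(positions_m)

# Synthetic check: sample positions from the Boltzmann distribution of a
# harmonic trap with known stiffness and recover that stiffness.
rng = np.random.default_rng(0)
kappa_true = 1e-5                        # N/m (illustrative value)
sigma = np.sqrt(KB_T / kappa_true)       # expected RMS excursion (~20 nm)
positions = rng.normal(0.0, sigma, 100_000)
kappa_est = trap_stiffness(positions)    # close to kappa_true
```

Applying the same estimator separately to x, y, and z would reproduce the roughly fourfold softer axial stiffness noted in the text.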

Concluding Remarks
The three-dimensional tracking method described here has a structural advantage in terms of the simplicity of its assembly of optical components. For research groups who have already developed methods to track particles in 2-D, exactly the same approaches for capturing sequential images and analyzing data can be used for our 3-D method. The only additional equipment needed is a prism located between the microscope and the camera and two lenses that produce both an equivalent back focal plane and a sample plane outside the camera plate. Most importantly, the rationale is so simple that any type of illumination, such as bright-field, dark-field, phase-contrast, and fluorescence, can be extended to 3-D localization. Tracking can, in principle, be applied to a variety of samples over a broad range from bacterial locomotion to single-molecule tracking, as exemplified by the molecular motor [9] and the prion protein in yeast cells [11]. Because our method can be used with all types of light microscopy, it is likely to become an important new tool for areas of study ranging from cell biology to single-molecule biophysics. Possible applications include high-speed imaging with nanometer and submillisecond resolution, simultaneous tracking of multiple particles, and, finally, single-fluorophore tracking. Conventional experimental setups can easily be turned into 3-D systems without remodeling. This versatility indicates that new applications in various optical microscopes are feasible, including three-dimensional super-resolution microscopes.

Acknowledgments This study was supported in part by the Funding Program for Next Generation World-Leading Researchers Grant LR033 (to T. N.) from the Japan Society for the Promotion of Science and by Grants-in-Aid for Scientific Research on Innovative Areas “Harmonized Supramolecular Motility Machinery and Its Diversity” (Grant 24117002 to T. N.), “Fluctuation & Structure” (Grant 26103527 to T. N.), and “Cilia & Centrosomes” (Grant 87003306 to T. N.) from the Ministry of Education, Culture, Sports, Science, and Technology of Japan.


References
1. Ashkin A (1970) Acceleration and trapping of particles by radiation pressure. Phys Rev Lett 24:156–159
2. Ashkin A (1984) Stable radiation-pressure particle traps using alternating light beams. Opt Lett 9:454
3. Ashkin A, Dziedzic JM (1985) Observation of radiation-pressure trapping of particles by alternating light beams. Phys Rev Lett 54:1245–1248
4. Ashkin A, Dziedzic JM, Bjorkholm JE, Chu S (1986) Observation of a single-beam gradient force optical trap for dielectric particles. Opt Lett 11:288–290
5. Molloy JE, Padgett MJ (2002) Lights, action: optical tweezers. Contemp Phys 55:241–258
6. Neuman KC, Abbondanzieri EA, Block SM (2005) Measurement of the effective focal shift in an optical trap. Opt Lett 30:1318–1320
7. Moffitt JR, Chemla YR, Smith SB, Bustamante C (2008) Recent advances in optical tweezers. Annu Rev Biochem 77:205–228
8. Marago OM, Jones PH, Gucciardi PG, Volpe G, Ferrari AC (2013) Optical trapping and manipulation of nanostructures. Nat Nanotechnol 8:807–819
9. Yajima J, Mizutani K, Nishizaka T (2008) A torque component present in mitotic kinesin Eg5 revealed by three-dimensional tracking. Nat Struct Mol Biol 15:1119–1121
10. Deschout H et al (2014) Precisely and accurately localizing single emitters in fluorescence microscopy. Nat Methods 11:253–266
11. Tsuji T et al (2011) Single-particle tracking of quantum dot-conjugated prion proteins inside yeast cells. Biochem Biophys Res Commun 405:638–643
12. Yildiz A, Tomishige M, Vale RD, Selvin PR (2004) Kinesin walks hand-over-hand. Science 303:676–678
13. Yajima J, Cross RA (2005) A torque component in the kinesin-1 power stroke. Nat Chem Biol 1:338–341
14. Miyata M, Ryu WS, Berg HC (2002) Force and velocity of Mycoplasma mobile gliding. J Bacteriol 184:1827–1831
15. Nishizaka T, Miyata H, Yoshikawa H, Ishiwata S, Kinosita K Jr (1995) Unbinding force of a single motor molecule of muscle measured using optical tweezers. Nature 377:251–254

Optical Manipulation and Sensing in a Microfluidic Device

26

Daniel Day, Stephen Weber, and Min Gu

Contents
Introduction ..................................................................... 768
Optical Manipulation ............................................................. 771
  Experimental System ............................................................ 772
  SPR-Based Optical Manipulation ................................................. 775
  SPR-Based Optical Manipulation in a Static Fluid Environment ................... 777
  SPR-Based Optical Manipulation in a Dynamic Fluid Environment .................. 781
  SPR-Based Optical Manipulation on a Patterned Metallic Surface ................. 783
Optical Sensing (MDR) ............................................................ 787
  Design of Morphology-Dependent Resonance Sensors ............................... 789
  Experimental System ............................................................ 791
  MDR in a Microfluidic Device ................................................... 793
Summary .......................................................................... 800
References ....................................................................... 803

Abstract

This chapter describes the realization of a lab-on-a-chip optical sensor in which surface plasmon resonance (SPR) trapped microspheres act as localized sensing elements for morphology-dependent resonance (MDR) sensing. The microfluidic device is fabricated by a combination of direct laser writing and hot embossing. This allows simple integration of SPR techniques by the evaporative coating of a metal layer on the surface of the microfluidic device. Trapping of 4, 10, and 15 μm polystyrene microspheres is demonstrated using SPR in static and dynamic fluidic environments. Patterning of the metal surface is demonstrated to increase the trapping potential of the SPR technique as well as to provide a method of further localizing the position of the optical trap within the device. Comparison between the trapping of microspheres for on- and off-resonance incident angles of the trapping beam shows a strong difference in the strength of the optical trap, allowing for on/off switching of the trapping force within the device. The integrated SPR trapping technique provides a method for arbitrary trapping of a range of microspheres within a microfluidic environment. The MDR optical sensing technique was selected as a noninvasive, multivariable sensing technique that can be performed on a range of optically trapped microcavities. Coupling to the MDR of a spherical microcavity is achieved via evanescent wave coupling under total internal reflection within a static fluidic environment. Fluid refractive index detection is realized with a sensitivity of 9.66 × 10⁻² refractive index units (RIU) by characterization of the shift of the MDR positions. A quality (Q) factor of 1.1 × 10⁴ is observed for a 90 μm glass microsphere with a stability of Δλ = 0.04. The coupling of light to the MDR mode is realized for a 90 μm glass microsphere trapped in a dynamic microfluidic device via SPR-based optical trapping. The position of the trapped microsphere is defined by the location of the patterned region of the metal surface as well as by the position of the focal spot of the SPR incident light source. A Q-factor of 4 × 10³ is observed under these coupling conditions. Detection of a change in the refractive index of the local fluidic environment is observed via the change in the MDR of a microcavity held under SPR trapping conditions; a resolution of 7.75 × 10⁻² RIU is observed under a flow rate of 20 μm/s. This research explores the integration of optical-based manipulation and localized sensing techniques into a microfluidic environment. From the work demonstrated, it is anticipated that this research will develop toward an optical-based sensing system where localized sensing can be performed at an arbitrary location within a fluidic environment.

D. Day (*) • S. Weber • M. Gu
Centre for Micro-Photonics, Faculty of Engineering and Industrial Sciences, Swinburne University of Technology, Melbourne, VIC, Australia
e-mail: [email protected]; [email protected]; [email protected]

© Springer Science+Business Media Dordrecht 2017
A.H.-P. Ho et al. (eds.), Handbook of Photonics for Biomedical Engineering, DOI 10.1007/978-94-007-5052-4_12

Keywords

Microfluidics • Optofluidics • Photonics • Plasmonics • Biosensor • Surface plasmon resonance • Optical trapping • Microfabrication

Introduction
Optical detection techniques have several advantages over other analysis methods such as electrochemical [1] and electronic methods [2]. First, they require no direct contact with the target sample or molecule; second, they permit an array configuration which, when used in conjunction with specifically targeted immobilizing chemistry, allows for multiplexed investigation of biological samples. One of the more exciting roles that optofluidic systems will play is as a key component of point-of-care (POC) “blood-to-diagnosis” systems for genetic analysis [3]. Challenges that have arisen for the development of such systems are primarily based on the

Fig. 1 Diagram of biosensors using optical detection methods: (a) Raman spectroscopy, (b) photonic crystal, (c) resonant cavity, and (d) interferometry

requirements of the end users and include speed, cost, sensitivity, ease of use, non-contamination, and portability. System cost (including cost per experiment and total equipment costs), material, and design will depend on the balance of functionality between disposable components and the instrument. Point-of-care systems must isolate, capture, and amplify target DNA for detection while utilizing on-chip pumping, valving, mixing, and reaction capabilities. The test must be performed in a robust, repeatable, and representative fashion while minimizing the possibility of contamination. One promising way to overcome the challenges in POC systems is to use optical detection techniques in conjunction with biological or chemical receptors to detect concentration, interactions, and the presence of analytes (molecules) in a sample. Systems which integrate these techniques are often called optical biosensors and typically include such methods as evanescent wave [4], interferometry (resolution = 5 × 10⁻⁸ RIU) [5], resonant cavity (resolution = 7.6 × 10⁻⁷ RIU) [6], photonic crystal (resolution = 7 × 10⁻⁵ RIU) [7], Raman spectrometry (resolution = 10⁻¹¹ M) [8], and surface plasmon resonance (resolution = 5 × 10⁻⁵ RIU) [9]; see Fig. 1. There is often overlap between the techniques, with surface plasmon resonance being utilized to improve the resolution of Raman scattering techniques [8] or to enhance the transmission of light into the evanescent wave [10]. Optical biosensors are frequently separated into label and label-free categories. Label-based systems


perform treatments such as fluorescent dye staining or radiometric element binding to assist in the measurement of samples. However, such a treatment can potentially result in the death of the specimen, thus preventing repeated measurements of a single population. There is also a limitation on the size of the molecules such treatments are applicable to; extremely small entities such as viruses and antibodies are often present in concentrations and sizes that are extremely difficult to observe and thus require a different detection approach. As such there has been a strong push for measurement techniques that do not utilize labeling. Label-free biosensors [11, 12] are systems that generally involve the measurement of a physical property (i.e., size, mass, dielectric permittivity, etc.) of the sample under investigation. The sensor component of the system converts the physical property into a quantifiable signal that can be collected via an appropriate system/instrument (such as the voltage/current shift that occurs in a thin-film crystal monitor in response to a deposited mass in an evaporative coating machine). Optical biosensors work on the principle that all biological molecules have a dielectric permittivity greater than that of air or water; thus, their modification of an electric field that interacts with the molecule is distinguishable from the background material of the sample. Therefore the design goal for an optical biosensor is to provide a system where the sensing surface possesses a measurable characteristic that is modified in response to changes in the dielectric permittivity at its surface. In many types of optical biosensors, a solid material medium confines an electromagnetic (EM) wave in such a way that the wave has the opportunity to interact with a test sample. The EM wave is generally in the form of a standing or traveling wave. In order to interact with the analyte at the sensor region, the EM wave must propagate away from the surface of the sensor into the testing region. Electromagnetic waves that are bound to an optical component but extend into an external medium are called evanescent fields. The evanescent field decays exponentially away from the sensor surface, with a decay length of approximately λ/2π, where λ is the wavelength of the light; see Fig. 2. For a common wavelength range for optical biosensors of 600–1,064 nm, this means that the evanescent field only extends around 95–169 nm into the test media. Thus, most optical biosensors can only detect within direct proximity of the sensing region. There are many investigations [13, 14] currently underway into techniques

Fig. 2 Diagram of a waveguide structure confining a traveling evanescent wave


that either extend the evanescent field further into the sensing medium or focus the evanescent field to higher intensities to enhance the sensitivity of the sensor. Several characteristics of optical biosensors determine their overall performance. Two fundamental properties of a device are its sensitivity and its resolution. Sensitivity in this case is defined as the magnitude of the change in the sensor signal in response to a given shift in the surface-adsorbed mass density, and the resolution refers to the smallest observable change in the effective mass density that can be measured by the device/system. The resolution of an optical biosensor is often characterized by the “quality factor” or “Q-factor,” which is defined as Q = λ₀/Δλ for a wavelength-based sensing system. Here, λ₀ is the center wavelength of a resonance peak, and Δλ is the spectral width determined at half of the peak’s maximum value. Highly sensitive biosensor systems typically present several problems for integration with microfluidic devices: first, the number of active sites is reduced because of the limited space available within a microfluidic system, which has a negative effect on the detection limit of the sensor; second, it is more difficult to transport the analyte to the active sites in sufficient volumes for detection. Several methods have been proposed to address these limitations, including flowing the analyte solution through or around the sensor; however, this limits the use with biological targets that need to bind with the sensor.
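Both figures of merit above reduce to one-line formulas. The helper names below are illustrative, and the 1,550 nm resonance in the example is an assumed value, not one taken from this chapter:

```python
import math

def evanescent_decay_length(wavelength_nm):
    """Approximate decay length of an evanescent field, ~lambda/(2*pi)."""
    return wavelength_nm / (2 * math.pi)

def q_factor(center_wavelength_nm, fwhm_nm):
    """Quality factor Q = lambda_0 / delta_lambda of a resonance peak."""
    return center_wavelength_nm / fwhm_nm

# The 600-1,064 nm range quoted in the text gives ~95-169 nm decay lengths.
short = evanescent_decay_length(600)    # ~95 nm
long_ = evanescent_decay_length(1064)   # ~169 nm

# An assumed 1,550 nm resonance with a 0.14 nm linewidth gives Q ~ 1.1e4,
# of the same order as the value later reported for a glass microsphere.
q = q_factor(1550.0, 0.14)
```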

Optical Manipulation
The three primary factors to consider when developing optical techniques for the manipulation and trapping of micro-objects are the gradient of the field intensity, scattering forces, and gravity. It is the control of, and interplay between, these forces that determines the strength, stability, and functionality of the manipulation technique. Computer-generated holograms [15], spatial light modulators (SLMs) [16], acousto-optical devices [17], and diffractive optical elements [18] have all been demonstrated to achieve multiple simultaneous trapping of micro-objects. Holographic techniques manipulate the objects by changing the gradient intensity profile within the trapping region while cycling through a series of holograms. The technique, while allowing the manipulation of trapped objects in three dimensions (3D), possesses limitations on the range of translational movement that can be achieved. The SLM also has issues with translation of individual traps owing to the degradation of the trap arising from aberration and scattering by local objects. Moreover, the majority of these techniques are limited to the field of view of a microscope, which is determined by the numerical aperture (NA) of the objective used in the system, hence the appeal of evanescent wave (EW) manipulation techniques. When a micro-sized object interacts with an EW, the wave can be converted into a propagating wave, resulting in the guiding of the object along the surface in the direction of the longitudinal wave vector of the electric field. One of the major challenges of extending near-field optical trapping to large-area manipulation is that optical interactions involving evanescent waves are considerably weaker than in standard optical trapping techniques. This limits both the extent


in area over which particle arrays may be created and the strength of the particle traps. One method to overcome this limitation is to use surface plasmon polaritons (SPPs), which are surface waves produced by the collective oscillation of free electrons at a metal–dielectric interface. Recent work has shown that enhanced optical forces and optically induced thermophoretic and convective forces produced by SPP excitation can be used for large-scale ordering and trapping of colloidal aggregations [19–21]. However, when conducting experiments in a fluidic system, there is the potential for the disruption of the thermally driven forces and the breaking of the optical trap via cooling or flow-based forces. It has previously been demonstrated [22, 23] that light propagating along the z-axis and incident on a sub-wavelength aperture in a metallic surface can also be used to generate SPPs, which result from the diffraction of the incident light at the edge of the hole and decay into SPPs emanating from the hole in the plane of the film. This provides a structural confinement of the SPP wave, and when the diameter of the aperture is less than the wavelength of the incident light, there is plasmonic coupling; both effects lead to an enhancement of the electric field at the metal–dielectric interface. This generates a gradient force at the interface strong enough to trap microspheres without thermal effects. There have been several investigations using nano-hole arrays to trap particles [24, 25]; however, the fabrication systems required to pattern nano-hole arrays are expensive, the traps set a limit on the size of particles that can be trapped, and there have been few investigations into how such traps are affected when the particles move under flow conditions.
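For prism-coupled (Kretschmann) excitation of SPPs, as used in the experimental system below, the resonance condition matches the in-plane wavevector of the incident beam, n_prism·k₀·sinθ, to the SPP propagation constant k_spp = k₀·√(εmεd/(εm + εd)). The sketch below estimates the coupling angle; the permittivity of gold at 1,064 nm and the other values are assumed, illustrative numbers rather than parameters from this chapter:

```python
import cmath, math

def spp_coupling_angle_deg(eps_metal, eps_dielectric, n_prism):
    """Internal incidence angle (deg) at which a prism-coupled beam
    phase-matches the SPP of a metal/dielectric interface (only the real
    part of k_spp is used for the angle estimate)."""
    k_spp = cmath.sqrt(eps_metal * eps_dielectric
                       / (eps_metal + eps_dielectric))  # in units of k0
    return math.degrees(math.asin(k_spp.real / n_prism))

# Assumed values: gold near 1,064 nm (eps ~ -48 + 3.6j), water (n = 1.33),
# BK7 prism (n = 1.51). The 70 deg quoted experimentally also reflects the
# chamber geometry, so only rough agreement is expected from this estimate.
theta = spp_coupling_angle_deg(-48 + 3.6j, 1.33**2, 1.51)
```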

Experimental System
Experiments were performed in a Kretschmann prism-coupling geometry; see Fig. 3. The beam from a 1,064 nm Nd:YAG laser is expanded into a parallel beam by the lenses L1 (microscope objective, NA = 0.25) and L2 (plano-convex, focal length = 200 mm). The beam width is controlled by a variable aperture (VA), and the beam polarization is set using a wave plate (WP). A third lens, L3 (plano-convex, focal length = 400 mm), focuses the beam onto the back surface of a 35 mm equilateral glass prism (BK7, n = 1.51). The sample chamber was placed on top of the prism via index-matching liquid (n = 1.516). P-polarized light from the incident laser beam was coupled into the chamber surface at an angle of 70°, set by the geometric alignment of the prism–L3 combination and fine-tuned by the micrometer-controlled rotation stage. The incident power is controlled using a neutral density (ND) filter wheel. The interactions of the microspheres with the illuminated region are observed via a microscope objective (NA = 0.3), a CCD camera, and illumination from a white light source. In order to perform detailed analysis, video files of the interactions were captured and processed using particle tracking algorithms. Polydimethylsiloxane (PDMS), a two-part silicone compound, is formed by mixing base and curing agent at a 10:1 ratio and is a common material used in the fabrication of microscale objects and devices. Its liquid state is easy to use and


Fig. 3 Schematic diagram of experimental setup for SPR manipulation experiments

manipulate, and after curing into a solid but flexible material it responds well to pre- and post-lithographic techniques. A multistage hot-embossing technique (see Fig. 4) is used to form the PDMS into microfluidic devices. First, PDMS is spin coated at 1,500 RPM onto a silicon wafer to give a 50 μm thick coating. The PDMS is then cured on a hot plate at 85 °C for 30 min. After curing, the microfluidic design is cut into the PDMS using a CO2 laser. The inverse of this PDMS/silicon master is then imprinted onto a polymethyl methacrylate (PMMA) sheet (25 × 75 mm) by bringing the plastic sheet to a temperature above the glass transition temperature, where the plastic “softens”; when placed under sufficient pressure, the soft plastic molds into the cavities of any master it is in direct contact with. The plastic is then rapidly cooled while in contact with the mold so as to harden it, thus generating a PMMA substrate with the inverse pattern of the PDMS/silicon master imprinted onto its surface. This process is achieved by aligning both the PMMA sheet and the PDMS/silicon master and placing them into a vice clamp which has been heated to a temperature of 180 °C via contact with a hot plate. A torque of 10 N·m, applied by a torque wrench, is maintained for 10 min; the vice clamp is custom designed to allow the introduction of cold water throughout the whole clamp, causing a rapid cooling of the system. The embossed PMMA slide then forms the base for a mold; PDMS solution is poured onto the mold and then cured for 1 h at 85 °C. Figure 5 shows a typical schematic of a microfluidic device fabricated using this technique. The channels of the device are imprinted into the PDMS during the curing process. The four ports (i–iv) are bored into the PDMS before being sealed with

Fig. 4 Schematic of hot-embossing process for fabrication of PDMS microfluidic device components: cured PDMS on a silicon wafer is cut via CO2 laser to form the PDMS/silicon master; the inverse design is imprinted on a PMMA sheet under pressure; liquid PDMS is poured onto the PMMA master; the imprinted PDMS forms the top section of the microfluidic device

Fig. 5 A sample design of a microfluidic device, with four ports (i–iv) and the flow direction indicated

Fig. 6 Schematic diagram outlining the procedure for coating of the microfluidic device substrate: (1) a mask pattern is cut out of double-sided tape using the CO2 laser; (2) the adhesive section of the tape is placed on a glass slide; (3) using the evaporative coater, a 5 nm Cr and 40 nm Au layer is deposited onto the slide; (4) the mask is removed
metallic tubes; these act as input and output ports for the introduction and extraction of solution from the device. The PDMS chamber is sealed onto a glass substrate by activating both surfaces with an O2 plasma and then bonding them together. The coating procedure for the glass substrate is shown schematically in Fig. 6. A mask is patterned out of double-sided tape using the CO2 laser, and the adhesive section is bonded to a glass slide; a 40 nm thick Au layer is then thermally evaporated onto the slide. A 5 nm Cr layer is evaporated onto the surface prior to the Au layer to increase the adhesion of the Au to the glass. The mask is then removed, leaving the gold-coated glass substrate. The design of the microfluidic device with three input ports allows the use of the microfluidic technique called hydrodynamic focusing. Hydrodynamic focusing manipulates the width of the inner fluid by adjusting the ratio of flow rates between the inner and outer fluids, with a higher ratio resulting in a thinner inner stream. This allows the concentration of microspheres to be increased by reducing the width of the microsphere solution through the center of the device.
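As a rough numerical illustration of this flow-ratio control, a first-order sketch is given below. It assumes plug flow and identical fluid properties in all three inlets; the channel width and flow rates are made-up values, not taken from the experiment, and real parabolic velocity profiles will change the exact widths.

```python
# Hypothetical first-order sketch of hydrodynamic focusing: under a
# plug-flow assumption, mass conservation scales the inner stream width
# by the inner fraction of the total volumetric flow rate.

def focused_width(channel_width_m, q_inner, q_sheath_each):
    """Approximate width of the inner (microsphere) stream for a
    three-inlet device with two symmetric sheath flows."""
    q_total = q_inner + 2 * q_sheath_each
    return channel_width_m * q_inner / q_total

# Made-up example: a 400 um wide channel with each sheath flow at half
# the inner flow rate focuses the inner stream to 200 um.
w = focused_width(400e-6, q_inner=2.0, q_sheath_each=1.0)  # -> 200e-6 m
```

Raising the sheath-to-inner ratio shrinks the inner stream further, which is the behavior described in the text.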

SPR-Based Optical Manipulation

Figure 7 shows a calculation of the magnitude of the reflected signal versus the angle of incidence for light incident under total internal reflection at a metallic-dielectric interface. The wavelength of the incident light is 1,064 nm, with transverse magnetic (TM, p) polarization, as required for surface plasmon coupling. A four-layer interface was simulated for the calculation, with the first layer being the glass slide substrate with a refractive index of n1 = 1.51. The second layer is chromium with a refractive index of n2 =

Fig. 7 Plot showing reflectance versus angle for an Au-coated glass slide, with highlights showing the locations of the on- and off-resonance positions

3.54 + 3.579i (at λ = 1,064 nm [26]) and a thickness of 5 nm. The third material in the calculation is a 40 nm thick layer of gold (Au) with a refractive index of n3 = 0.285 + 7.35i (at λ = 1,064 nm [26]). The final layer simulates the solution within the microfluidic device, primarily water, with an index of n4 = 1.33. The figure highlights two points of interest on the curve. The first is the point of minimum reflectance, at an angle of 66°; here the magnitude of the incident light coupled into the surface plasmon wave is at its maximum. This angle is called the surface plasmon resonance angle and is henceforth referred to as the "on"-resonance position. The second point of interest in Fig. 7 is an arbitrarily selected position on the reflectance curve; defined as the "off"-resonance position, it is at an angle of 73° and corresponds to the point of maximum reflection within the angular range of the experimental setup, where coupling to the SPR is minimized. These two positions allow the trapping and manipulation effects of the incident wave to be compared for on- and off-resonance illumination. The adsorption of hydrophobic particles such as polystyrene microspheres [27] onto the PDMS introduces errors into the analysis of the efficiencies of the optical trapping techniques under investigation. Surface modification of the PDMS is therefore required to reduce aggregation and adhesion of microspheres to the chamber walls and substrate. Poly(ethylene oxide) (PEO) is effective at inhibiting bonding to hydrophobic surfaces [28]. By bonding to the surface layers of the microfluidic chamber, PEO generates an exclusion layer, the area covered by the PEO material and its immediate vicinity. Via molecular repulsion, the PEO coating prevents bonding to the surface in the areas it occupies. The physical


adsorption of the PEO material onto the surface is one of the most common coating methods; however, it can be easily disrupted by the presence of proteins or other materials that can displace the PEO from the surface. The Pluronic range of PEO-based copolymers has been synthesized to contain hydrophobic blocks that ensure a stronger and more stable coating of the surface [29]. The microfluidic channel is coated with Pluronic F127 surfactant and left for a minimum of 12 h to ensure a solid, uniform coating. The chamber is then flushed with phosphate-buffered saline (PBS), a buffer solution with a refractive index of n = 1.33 that is commonly used in biological research. The solution helps maintain a constant pH and is often used because of its similarity to the cell environment in the human body. The microspheres under investigation are suspended in PBS in order to bring the proposed microfluidic sensor more in line with current biological research practices. Figures 8 and 9 show calculations of the surface plasmon resonance position under a range of experimental conditions. These conditions match those to be investigated and cover both prisms used, the SF11 (n = 1.785) and the BK7 (n = 1.51). The calculations were performed over a range of metal thicknesses and demonstrate that a change in the thickness of the metal layer affects the magnitude of the resonance dip; however, there is less than a 2° shift in the angular position of the minimum reflection point. The thicknesses and angles for the minimum reflection conditions are summarized in Table 1; across all parameters the optimum thickness is around 60–70 nm. However, in this work, the thickness of the metal layer was set to 40 nm.
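The layer-stack reflectance curves discussed above can be reproduced with the standard characteristic-matrix (transfer-matrix) method for stratified media. The sketch below, for p-polarized light through the glass/Cr/Au/water stack of Fig. 7, is an illustrative implementation rather than the authors' code; the angular scan range and step size are arbitrary choices.

```python
# Sketch of the four-layer reflectance calculation (glass / 5 nm Cr /
# 40 nm Au / water at 1,064 nm) using the standard characteristic-matrix
# method for p-polarized (TM) light. Illustrative only.
import cmath
import math

def tm_reflectance(n_layers, d_layers, wavelength, theta_deg):
    """Reflectance |r|^2 for p-polarized light incident from n_layers[0]
    at theta_deg; d_layers holds the thicknesses of the inner layers only
    (the outer media are semi-infinite)."""
    k0 = 2 * math.pi / wavelength
    kx = k0 * n_layers[0] * math.sin(math.radians(theta_deg))
    # z-component of the wavevector and TM admittance in each layer
    kz = [cmath.sqrt((k0 * n) ** 2 - kx ** 2) for n in n_layers]
    q = [kz_j / (k0 * n ** 2) for kz_j, n in zip(kz, n_layers)]
    # Accumulate the characteristic matrix of the inner layers
    m11, m12, m21, m22 = 1.0, 0.0, 0.0, 1.0
    for j, d in enumerate(d_layers, start=1):
        c, s = cmath.cos(kz[j] * d), cmath.sin(kz[j] * d)
        a11, a12, a21, a22 = c, -1j * s / q[j], -1j * q[j] * s, c
        m11, m12, m21, m22 = (m11 * a11 + m12 * a21, m11 * a12 + m12 * a22,
                              m21 * a11 + m22 * a21, m21 * a12 + m22 * a22)
    qf, qi = q[-1], q[0]
    r = ((m11 + m12 * qf) * qi - (m21 + m22 * qf)) / \
        ((m11 + m12 * qf) * qi + (m21 + m22 * qf))
    return abs(r) ** 2

# Stack from the text: BK7 glass, Cr, Au, water; indices at 1,064 nm [26]
n = [1.51, 3.54 + 3.579j, 0.285 + 7.35j, 1.33]
d = [5e-9, 40e-9]  # Cr and Au thicknesses
# Scan above the glass/water critical angle to locate the reflectance
# minimum, i.e., the SPR "on"-resonance angle (about 66 deg in the text)
angles = [63 + 0.05 * i for i in range(341)]  # 63-80 deg
spr_angle = min(angles, key=lambda a: tm_reflectance(n, d, 1064e-9, a))
```

Sweeping the Au thickness in `d` reproduces the thickness dependence summarized in Table 1 and Figs. 8 and 9.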

SPR-Based Optical Manipulation in a Static Fluid Environment

In order to investigate the effect of SP waves in a microfluidic device under dynamic conditions, an understanding of the effects of SPs in a static environment first needs to be developed. 5 μm polystyrene microspheres were placed in the detection region of a microfluidic device and allowed to settle to the surface of the device over a period of 10 min; residual internal flow velocities are observed to decay to zero over this time. The region of interest is then illuminated under SPR coupling conditions with a focal spot diameter of 350 μm at a power of 84.5 ± 2 mW. Figure 10 shows the effect on the microspheres inside and surrounding the focal volume of the incident light. At time t = 0 min, the microspheres are diffused in a random arrangement. The incident light source is turned on, and the microspheres are observed to move toward the center of the focal spot. The self-organization of the microspheres into a hexagonal lattice is observable after a period of 30 min; see Fig. 10e. As discussed by Garcés-Chávez [30], this is a result of the intensity of the incident beam under SPR coupling conditions inducing localized heating in the metal surface, which generates a convection current in the fluid above the focal spot. If the velocity of the convection flow is balanced via

Fig. 8 SPR plots for nprism = 1.785, λ = 1,064 nm, with refractive index of the medium being (a) n = 1.00 (air) and (b) n = 1.33 (water). Effect of thickness of gold layer on minimum reflectance shown versus (c) value of reflectance and (d) angular position

Fig. 9 SPR plots for nprism = 1.51, λ = 1,064 nm, with refractive index of the medium being (a) n = 1.00 (air) and (b) n = 1.33 (water). Effect of thickness of gold layer on minimum reflectance shown versus (c) value of reflectance and (d) angular position

Table 1 Optimum Au coating thickness for SPR under the experimental parameters, λ = 1,064 nm

nprism   λ (nm)   nmedium   θr (deg)   d (nm)
1.785    1,064    1.00      35.1       60
1.785    1,064    1.33      51.1       70
1.51     1,064    1.00      42.8       70
1.51     1,064    1.33      67         70

Fig. 10 Images of convection trapping of 5 μm polystyrene microspheres in a static microfluidic environment at times (a) t = 0 min (laser is turned on), (b) t = 1 min, (c) t = 10 min, (d) t = 20 min, (e) t = 30 min (laser is turned off), and (f) t = 40 min. Scale bar is 100 μm

fine-tuning of the incident intensity, then the microspheres will be drawn to the center of the focal spot without the flow having enough momentum to remove them from the region; see Fig. 11. After the incident light is removed, the convection flow dissipates, Brownian motion takes over, and particle diffusion occurs, as seen in Fig. 10f. Figure 11 shows a schematic diagram of the potential effects of convection flow-based manipulation. The initial random state and the induction of the convection forces are shown in Fig. 11a and b, respectively. When the velocity of the convection flow is slow enough (as controlled by the intensity of the incident light), the microspheres are drawn to the center of the focal region yet do not have enough energy to be drawn away from the trap against gravity; see Fig. 11c. However, when the flow velocity is sufficiently fast, the convection force is strong enough to overcome gravitational forces, and the microspheres are drawn upward and away from the trap, as shown in Fig. 11d. This form of optical manipulation is an excellent means of large-area, multiple-particle manipulation under static flow conditions, with the position and dimensions of the trap dictated by the location and size of the focal spot of the light source.

Fig. 11 Schematic showing manipulation of microspheres via SPR-induced convection flow: (a) microspheres randomly distributed on the surface of the chamber; (b) a convection flow induced in the chamber via heating from illumination under SPR coupling; (c) microspheres drawn to the center of the convection flow, as defined by the position of the incident focal spot; and (d) if the convection flow overcomes the gravitational forces, microspheres drawn upward and away from the focal region

SPR-Based Optical Manipulation in a Dynamic Fluid Environment

The introduction of a flow velocity into a microchannel changes the mechanisms of optical manipulation. Under static conditions, thermally induced forces tend to dominate over optical forces; the introduction of a flow, however, removes energy from the system by transporting the heated fluid away from the focal region, preventing the formation of thermal forces. Polystyrene microspheres of 10 and 15 μm diameter were suspended in PBS and pumped through a microfluidic device. Hydrodynamic focusing resulted in a microsphere solution width of 200 μm. Incident light is focused to the surface with a focal spot diameter of 100 μm. The k-vector of the incident surface wave is aligned opposite to the flow direction inside the microfluidic channel. The power of the incident light was varied between 20 and 70 mW in 10 mW intervals, and the manipulation of the polystyrene microspheres flowing through the interaction region at an average flow rate of 22 μm/s was recorded via image capture software. In order to compare trapping efficiency across these parameters, the particle trapping efficiency (PTE) is defined as

PTE = NTr / NT,   (1)

where NTr is the number of microspheres observed to be trapped within the patterned region over a set time interval and NT is the total number of microspheres that pass through the patterned region over the same interval. The PTE represents the efficiency of the optical trap over a particular interval, allowing investigation of changes to the trapping force over time. Each time interval was set at 10 s over a period of 1 min, where t = 0 defines the point in time when the incident light source was turned on. The PTE of each parameter was determined with the interaction region taken as the whole 400 × 400 μm region viewable via the CCD camera. For the 10 μm polystyrene microspheres, the PTE versus power is shown in Fig. 12. Under a flow rate of 22 μm/s, trapping of microspheres is only observed on resonance for powers above 50 mW, after which higher incident power results in a higher PTE. This is intuitive from the theory of evanescent wave trapping: the more energy the incident photons can convey to the particle, the more momentum transfer can occur, and more trapping events occur because more photons possess sufficient energy to trap a microsphere. In order to investigate the effect of higher incident light coupling to the evanescent wave, trapping for both the on- and off-resonance conditions of the experimental setup was performed for all trapping experiments. Comparing Fig. 12a and b, while the off-resonance condition demonstrates trapping at a lower power, 50 mW, the PTE is more unstable than that of the on-resonance condition. For 15 μm microsphere trapping, the difference between on- and off-resonance trapping is much more pronounced than for 10 μm microspheres. Off-resonance trapping is observed only at an incident power of 70 mW, with a maximum PTE of 25% at time t = 20 s; see Fig. 13b. After 1 min the PTE is shown to drop to zero,

Fig. 12 Results of SPR manipulation of 10 μm polystyrene microspheres under (a) on-resonance and (b) off-resonance coupling conditions

Fig. 13 Results of surface wave manipulation of 15 μm polystyrene microspheres under (a) surface plasmon resonance and (b) off-resonance coupling conditions

showing that even at its maximum trapping potential, the off-resonance trap is still significantly weaker than its on-resonance counterpart, with a decrease in PTE indicating microspheres escaping the trapped region. A significant increase in PTE is observed for 15 μm microsphere trapping compared to 10 μm. This is a result of two factors: first, the larger surface area of the microsphere increases the number of potential photon interactions; second, the larger microsphere has a greater mass and thus flows at a lower height in the microchannel. As discussed earlier, the magnitude of the electric field decays exponentially into the solution, so a lower position in the microchannel exposes the microsphere to a larger trapping force.
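The per-interval PTE bookkeeping defined in Eq. 1 can be sketched as follows. The event list here is a made-up illustration, not experimental data; the 10 s interval and 1 min duration match those stated in the text.

```python
# Hypothetical sketch of the PTE bookkeeping of Eq. 1: counts of trapped
# (NTr) and total transiting (NT) microspheres are tallied per 10 s
# interval over a 1 min recording, then converted to a percentage.

def particle_trapping_efficiency(events, interval=10.0, duration=60.0):
    """events: list of (time_s, trapped: bool) for microspheres passing
    through the patterned region. Returns the PTE (%) of each interval."""
    n_bins = int(duration / interval)
    trapped = [0] * n_bins
    total = [0] * n_bins
    for t, is_trapped in events:
        b = min(int(t / interval), n_bins - 1)  # clamp t = duration edge case
        total[b] += 1
        if is_trapped:
            trapped[b] += 1
    # Intervals with no transiting spheres are reported as 0% rather than
    # dividing by zero.
    return [100.0 * n_tr / n_t if n_t else 0.0
            for n_tr, n_t in zip(trapped, total)]

# Made-up example: 3 of 5 spheres trapped in the first 10 s interval,
# 1 of 2 in the second.
events = [(1.2, True), (3.5, True), (5.0, False), (7.7, True), (9.9, False),
          (12.0, False), (15.5, True)]
pte = particle_trapping_efficiency(events)  # -> [60.0, 50.0, 0.0, 0.0, 0.0, 0.0]
```

In practice the event list would come from the particle tracking output of the captured video.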

SPR-Based Optical Manipulation on a Patterned Metallic Surface

The use of sub-wavelength apertures and structured surface components to confine and enhance the electric field of a plasmon has been well investigated in the literature. This motivates using structured surface components to create regions of high intensity gradient, forming localized optical trapping sites independent of surface plasmon wave trapping, whose dimensions are otherwise confined only by the size of the focal spot of the excitation light source. Incident light is focused onto the glass/Au interface of a 200 × 200 μm hole array via the Kretschmann configuration. The change in the SPR angle due to the diffraction effect of the hole array structure was not accounted for in the theoretical calculation of the SPR angle. However, the SPR angle was experimentally determined for each microfluidic device via measurement of the intensity of the reflected beam prior to each experiment. The k-vector of the incident surface wave is aligned opposite to the flow direction inside the microfluidic channel. The power of the incident light was varied between 20 and 60 mW in 10 mW intervals.


Fig. 14 Plot showing particle trapping efficiency over time for a 4 μm microsphere, on and off resonance

The manipulation of polystyrene microspheres flowing through the interaction region at an average flow rate of 22 μm/s was recorded via image capture software for microspheres of 4, 10, and 15 μm diameter. When calculating the PTE of the patterned region, the number of trapped microspheres, NTr, only takes into account the microspheres trapped within the patterned region. From Fig. 14, a rapid increase in PTE with incident power is observed for the on-resonance coupling condition; the trend line of the 40 mW on-resonance parameter indicates that saturation of the array occurs shortly after t = 40 s. This saturation may be a result of a "filling" of the trapping region by microspheres, preventing further microspheres from being trapped. The distance of the microsphere from the surface of the microfluidic device is believed to play an important part in the observed trapping results. The results of the 4 μm manipulation via a patterned surface show an enhancement of the PTE for all incident powers when compared to the unpatterned surface condition, where there was no observable trapping of 4 μm polystyrene microspheres. However, no discernible trend is observed between the intensity of the trapping beam and the PTE of the system. This lack of a trend may arise for several reasons. First, the flow rate of the fluid in the device may vary over time; this would both vary the position (in the z-axis) of the microspheres in the solution and change the velocity of the microspheres, changing the required trapping force. Another potential influence is the induced heating of the surface of the microfluidic device: it has already been shown that the local heating induced by the incident light is strong enough to induce a convection force in a static solution; see Fig. 10.


Fig. 15 Image of 4 μm microspheres trapping in a patterned surface under SPR illumination (P = 60 mW) at t = 40 s. Red circles highlight trapped microspheres

Fig. 16 Image of 10 μm microspheres trapping in a patterned surface under SPR illumination (P = 60 mW) at t = 40 s. Red circles highlight trapped microspheres and blue circles highlight the untrapped microspheres. Scale bar is 40 μm

There is no observed trapping of the 4 μm microspheres under off-resonance coupling conditions. This is a result of the significantly reduced momentum carried by the surface wave in the off-resonance case compared to on resonance. This lower momentum means that the momentum shift imparted on the microspheres via the surface wave is of lower magnitude, which corresponds to a lower PTE. Figure 15 shows the trapping of 4 μm microspheres within the 200 × 200 μm patterned region under SPR illumination; both trapped and non-trapped microspheres are highlighted. Images of the trapping of the 10 μm microspheres are presented in Fig. 16, and the corresponding PTE is plotted in Fig. 17, which shows that an increase in the power of the incident light produces a decrease in the PTE of the optical trap. Compared with the unpatterned trapping results

Fig. 17 Plot showing particle trapping efficiency over time for a 10 μm microsphere, on and off resonance

Fig. 18 Plot showing particle trapping efficiency over time for a 15 μm microsphere, on and off resonance

(see Fig. 12), a modest enhancement is observed in the PTE at 60 mW and a dramatic increase in the PTE at all other powers, with 20 mW showing the greatest enhancement at the 60 s mark. No trapping was observed at either of the off-resonance powers. This demonstrates the enhancement of the PTE under the on-resonance coupling condition, leading to a potential on/off switch via angular adjustment, and shows that secondary light sources will have no effect on the motion of the microspheres if coupled into the microcavity under off-resonance conditions (Fig. 17). From Fig. 18, a rapid increase in PTE with incident power is again observed for the on-resonance coupling condition; the trend line of the


Fig. 19 Image of 15 μm microspheres trapping in a patterned surface under SPR illumination (P = 60 mW) at t = 40 s. Scale bar is 40 μm

40 mW on-resonance parameter indicates that saturation of the array occurs shortly after t = 40 s, the same as observed in the 4 and 10 μm cases. An example of the 15 μm trapping is presented in Fig. 19. The presented data sets were selected from multiple experiments as the only collected set covering all experimental parameters; a statistical representation could not be built up due to issues with inconsistent flow rates and microsphere concentrations.

Optical Sensing (MDR)

Optical resonators have been gaining increasing attention not only as a basis for standard laser devices but also as a system for high-accuracy measurements and for nonlinear optics in many modern optical devices. The exploration of whispering gallery modes (WGM) in optical microcavities has gained significant momentum over the last few years, as they provide a platform for modal stability, high quality factor (Q), and small modal volumes. Whispering gallery modes can be described as light rays that are confined to and propagate along the surface of the structure, where the confinement originates from the total internal reflection of the light at the surface. The circular optical mode in such resonators can be understood as the interference of a light beam propagating inside a dielectric particle, confined by total internal reflection; see Fig. 20. As a beam of light propagating inside the particle returns to its starting position in phase, the constructive interference leads to a series of peaks in the scattered field for an appropriate particle size. When the reflecting beam has high index contrast and the radius of curvature exceeds several wavelengths, the radiative losses (by absorption or transmission) become very small, and the Q becomes limited only by material attenuation and scattering caused by

Fig. 20 Schematic diagram of light rays of different wavelengths constructively interfering inside a microcavity when incident under total internal reflection

Fig. 21 Example of an MDR resonance spectrum (intensity versus wavelength, 777–780.5 nm), with the black spectrum showing the cavity modes in an initial state and the red spectrum demonstrating a potential MDR spectrum response to a molecular binding event

geometrical imperfections (e.g., surface roughness). As the optical resonances are a function of the cavity's morphology and dielectric properties, they are often referred to as morphology-dependent resonances (MDRs) [31]. A spherical microcavity possesses natural internal modes of oscillation at characteristic frequencies corresponding to specific ratios of size to wavelength; an example is shown in Fig. 21. The wavelengths at which these MDRs occur can be calculated from Mie's theoretical studies on the scattering of plane electromagnetic waves by a sphere; they also arise from Debye's derived equations for the resonant eigenfrequencies of free dielectric and metallic spheres. These calculations, while independent of the scattering process (elastic or inelastic), depend on the boundary conditions of the microsphere, including the refractive index


mismatch between the microsphere and the surrounding medium as well as the shape, size, and surface roughness of the cavity. For a given spherical microcavity, resonances occur at specific values of qm,l. Here q is the size parameter given in Eq. 2:

qm,l = 2aπ / λ(Em,l),   (2)

where a is the radius of the spherical cavity, λ(Em,l) is the emission wavelength, and m and l are integers. The mode number m indicates the order of the spherical Bessel and Hankel functions (ζ) describing the radial field distribution, and the order l indicates the number of maxima in the radial dependence of the internal field distribution (Em,l, which is a function of ζm,l). For each m and l, both discrete transverse electric (TE) and transverse magnetic (TM) resonances exist.
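As a numerical illustration of Eq. 2, the sketch below evaluates the size parameter for the 90 μm microspheres used later in this chapter, together with a ray-optics estimate of the spacing between adjacent modes. The constructive-interference condition m·λ = π·D·n used for the spacing is a back-of-envelope approximation of the circumferential round trip, not the full Mie calculation.

```python
# Size parameter q = 2*pi*a/lambda from Eq. 2, plus a ray-optics estimate
# of the spacing between adjacent MDR modes: treating a resonance as a ray
# whose optical path around the circumference is an integer number of
# wavelengths (m*lambda = pi*D*n) gives a mode spacing of roughly
# lambda^2 / (pi*D*n). Approximation only, not the full Mie result.
import math

def size_parameter(radius_m, wavelength_m):
    """q = 2*pi*a / lambda (Eq. 2, without the mode-dependent factor)."""
    return 2 * math.pi * radius_m / wavelength_m

def approx_mode_spacing(diameter_m, n_sphere, wavelength_m):
    """Ray-optics estimate of the wavelength spacing between adjacent
    whispering gallery modes of a sphere."""
    return wavelength_m ** 2 / (math.pi * diameter_m * n_sphere)

# 90 um polystyrene microsphere (n = 1.59) probed near 778 nm
q = size_parameter(45e-6, 778e-9)                # ~363
dlam = approx_mode_spacing(90e-6, 1.59, 778e-9)  # ~1.3 nm
```

A spacing of about 1.3 nm is consistent with several modes falling inside the few-nanometer scan range of the tunable laser described below.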

Design of Morphology-Dependent Resonance Sensors

Excitation of high-Q MDRs requires light coupled into the microcavity at angles greater than the critical angle. This is extremely difficult with direct coupling methods; however, efficient coupling into MDRs via evanescent waves generated by total internal reflection (TIR) has been observed. The most common system for TIR excitation of an evanescent wave is the coupling of light at the back surface of a high refractive index prism. While this is the most common method, other systems such as tapered optical fibers and high numerical aperture (NA) objectives have been used to demonstrate coupling of light into the MDRs of microcavities. Confinement of light into the surface wave at a prism interface is a well-understood optical phenomenon. The prism allows light to be incident on the interface at angles beyond the critical angle, as defined by Snell's law. The surface wave is classified as an evanescent wave (EW) due to its decay away from the interface into the surrounding medium; a microcavity positioned within the decay length of the EW acts as a scattering source. If the scattered waves complete a round trip and constructively interfere on the cavity surface, they are classified as an MDR mode. More efficient methods for coupling light into the MDRs of microcavities have been observed with fiber coupling [32] and high-NA objective coupling [33]. Optical fibers propagate light via continuous internal reflection between the core and the surrounding cladding of the fiber. By stripping the cladding of the fiber, an evanescent wave is exposed; this evanescent wave can be utilized to couple light into the MDR. A schematic representation of the evanescent wave and optical fiber geometries is shown in Fig. 22. The MDR can be observed either in the transmission spectrum of the microcavity or in the transmission loss from the optical fiber.
Microcavities are not limited to spherical or symmetrical objects; MDRs have been observed with cavity designs (see Table 2) ranging from polystyrene [33, 34] and glass microspheres [32], square microcavities [35], asymmetric microparticles [36], and ring resonators on glass or silicon substrates to silicon oxide toroid structures [37]. Provided that the microcavity presents at least one path length along which light can complete constructive "round trips," the cavity can support an MDR mode.

Fig. 22 Schematic representation of MDR coupling to a microcavity via (a) prism geometry and (b) tapered optical fiber geometry

Table 2 Different MDR microcavities with corresponding Q-factor values

Cavity               Q-factor
Prism                10
Fabry-Perot          10^4
Photonic crystal     10^4
Square microcavity   10^5
Toroid               10^8
Microsphere          10^10

The use of optical microcavities with MDRs is a growing field with many potential applications, such as spectroscopy [38, 39], remote sensing [12], microcavity lasing [40, 41], second harmonic generation [42, 43], and Raman scattering [44]. Integration of MDR microcavities into microfluidic-based devices shows enormous potential for the development of high-resolution sensing systems. These systems exploit the sensitivity of the modes to changes in the surrounding dielectric permittivity or to changes at the surface of the microcavity via binding of molecules, with resolution such that single-molecule detection is possible [45, 46]. The primary advantage of an MDR sensor is that the trapped photons are able to circulate on their orbit several thousand times before exiting the MDR, provided that the losses from absorption in the cavity material are low and that losses arising from scattering at the cavity boundary under TIR are minimal. This long optical path length corresponds to a lengthy confined photon lifetime and results in the very high sensitivity associated with MDR cavity systems, with the potential to detect single nanoparticles or single molecules.

26

Optical Manipulation and Sensing in a Microfluidic Device

791

Fig. 23 Shift in the MDR signal of a resonant microcavity in response to a target molecule binding to the biochemical receptor on the surface of the cavity

The binding of a biochemical receptor (antibody, DNA, etc.) onto the surface of a resonance cavity provides a platform for the cavity to act as a biosensor. The primary requirements of a biosensor are a high signal-to-noise ratio, a low limit of detection, capacity for integration, and high sensitivity. When a micro- or nanometer-scale object (even one of biological origin) is brought into contact with the surface-bound target receptor, there is a resulting change in the effective radius and/or refractive index at the surface of the cavity; through interaction with the evanescent part of the MDR field, a resultant shift in the MDR spectrum of the resonator occurs; see Fig. 23.
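To first order, the fractional resonance shift tracks the fractional changes in effective radius and refractive index, Δλ/λ ≈ ΔR/R + Δn/n. A minimal sketch of this estimate follows; the layer thickness and sphere radius in the example are assumptions chosen for illustration:

```python
def mdr_shift(wavelength, radius, d_radius=0.0, n_eff=1.0, d_n=0.0):
    """First-order perturbation estimate of the MDR resonance shift.

    d_lambda / lambda ~ d_radius / radius + d_n / n_eff
    """
    return wavelength * (d_radius / radius + d_n / n_eff)

# Hypothetical example: a 1 nm adsorbed biolayer on a 45 um radius sphere,
# probed at 780 nm (all lengths in nm).
shift_nm = mdr_shift(780.0, 45e3, d_radius=1.0)
```

A nanometer-thick layer produces a shift of only a few hundredths of a nanometer here, which is why a narrow linewidth (high Q) is needed to resolve molecular binding.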

Experimental System

The investigation of coupling into the MDR mode of various sized microcavities was demonstrated in the experimental system shown in Fig. 24. The illumination light beam from a tunable diode laser system (765–781 nm) is expanded to a parallel beam by the lenses L1 (microscope objective, NA = 0.25) and L2 (plano-convex, focal length = 175 mm). The beamwidth is controlled by a variable aperture (VA), and the beam polarization is set using a wave plate (WP). A third lens L3 (plano-convex, focal length = 400 mm) focuses the beam onto the back surface of a 35 mm equilateral glass prism (SF11, n = 1.785) under TIR conditions (θ = 62°). The focal spot is further confined by the introduction of an objective lens (NA = 0.26) in the beam path between the prism and L3; the objective is mounted on an X-Y-Z stage (M1) to allow fine control of the position of the focal spot on the surface of the prism. The prism is placed in a custom-made mount and attached to a micrometer-controlled rotation stage; an adjustment plate was made so that the prism,

792

D. Day et al.

Fig. 24 Schematic diagram of MDR setup for investigation of coupling parameters via evanescent wave coupling

when rotated, would induce minimum translation of the focal spot at the prism surface. The scattered light is collected by an objective lens (Obj, NA = 0.7) and expanded by a plano-concave lens (L4, focal length = 88 mm), and the image is observed via a CCD camera on a monitor. A flip-mounted mirror allows the beam path to be changed from the CCD camera to a photodetector that is connected to an oscilloscope (CRO). This allows the incident beam to be focused onto a microcavity within the working distance of the detection objective, after which the beam path is switched to observe the changes in the scattering intensity with respect to the changes in the wavelength of the incident beam. The incident light source is tuned over a wavelength range of 776.5 nm ≤ λ ≤ 781.3 nm, with a wavelength scanning speed of 0.2 nm/s. Two primary microcavities were used in this work: the first is soda lime microspheres (n = 1.52 at λ = 589 nm) with a diameter of 90 ± 2.8 μm; the second is 90 μm polystyrene microspheres (n = 1.59 at λ = 589 nm). The sample of 90 μm glass microspheres is prepared by mixing the powdered microspheres into 7 ml of methanol in a plastic measuring cylinder; the solution is then mixed with a mechanical stirrer for 5 min before being transferred to a glass vial for storage. The plastic


cylinder is required to prevent damage to the microspheres via impact with the surface of the container. The polystyrene microsphere solution is prepared by placing 2–3 drops of the stock microsphere solution in a centrifuge tube along with 5 ml of methanol and mixing with a mechanical stirrer; the solution is then centrifuged at 10,000 RPM for 2 min. The liquid is then extracted from the tube, leaving the microsphere residue. This process is repeated, and the remaining microspheres are diluted with 10 ml of methanol before being placed in a glass vial. The 90 μm microspheres were used to investigate MDR coupling under these experimental geometries because spheres of this diameter are easier to locate and couple to than the smaller spheres used in the trapping experiments.
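The TIR geometry above can be checked numerically: with the SF11 prism (n = 1.785) against water, the critical angle and the 1/e evanescent penetration depth at the stated 62° incidence follow from standard formulas. A sketch, assuming n ≈ 1.33 for the water medium:

```python
import math

def critical_angle_deg(n_prism, n_medium):
    """Critical angle for total internal reflection at the prism surface."""
    return math.degrees(math.asin(n_medium / n_prism))

def penetration_depth(wavelength, n_prism, n_medium, theta_deg):
    """1/e decay depth of the evanescent field beyond the interface."""
    s = n_prism * math.sin(math.radians(theta_deg))
    return wavelength / (4 * math.pi * math.sqrt(s ** 2 - n_medium ** 2))

theta_c = critical_angle_deg(1.785, 1.33)           # ~48 deg, so 62 deg is well into TIR
depth = penetration_depth(780e-9, 1.785, 1.33, 62)  # tens of nanometers at 780 nm
```

The resulting sub-100 nm penetration depth is what restricts coupling to microspheres sitting very close to the prism surface.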

MDR in a Microfluidic Device

Morphology-dependent resonance is highly sensitive to various changes in the local environment of the microcavity; this is generally the result of a change in one of two parameters. The first is a change in the surface quality of the microcavity, through the binding of an object to the surface of the cavity or through damage to the surface by mechanical collision or by chemical or optical etching; these changes shift the effective cavity size of the microsphere and/or remove potential modes. The second is a change in the local refractive index; this can occur through the introduction of a different solution or through a local heating/cooling effect changing the local density of the solution in which the microcavity resides. The change of index adjusts the coupling conditions of the incident light source by adjusting the critical angle for light coupling into the microcavity. In order to develop MDR as a sensing system in a microfluidic environment, the cavity response to a large number of variables must be investigated so that secondary responses to those variables can be eliminated. Toward this aim, the MDR response to several variables was investigated with the microcavity placed in a static fluidic well. Initial experiments were performed with distilled water, with Fig. 25 showing a typical MDR spectrum for a 90 μm glass microsphere. The Q-factor for this spectrum is estimated to be 11,124 for the S-polarization and 11,137 for the P-polarization, with a visibility of V = 0.8 for both polarization states. This value is over seven times the Q-factor of 1,497 measured for the glass microsphere in air with P-polarization. This is counterintuitive, as the air system should have a higher Q-factor due to the greater refractive index contrast between the prism and the surrounding medium.
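The quoted figures follow directly from the measured spectra: Q = λ₀/Δλ_FWHM and V = (I_max − I_min)/(I_max + I_min). A sketch of both relations; the linewidth and intensity values used below are hypothetical (the 0.07 nm linewidth is simply back-calculated from the reported Q near 778.7 nm):

```python
def q_factor(center_wavelength, fwhm):
    """Quality factor from the resonance linewidth: Q = lambda_0 / FWHM."""
    return center_wavelength / fwhm

def visibility(i_max, i_min):
    """Fringe visibility V = (Imax - Imin) / (Imax + Imin)."""
    return (i_max - i_min) / (i_max + i_min)

q = q_factor(778.7, 0.07)  # ~1.1e4 for a 0.07 nm linewidth near 778.7 nm
v = visibility(4.5, 0.5)   # 0.8 for these illustrative intensities
```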
There are several potential factors that could account for this discrepancy, including the difference in the TIR angle between the air and water systems and the difference in the position of the detection source relative to the microsphere. The elimination of secondary shifts to the MDR spectrum will be of prime concern when attempting to utilize an MDR-based sensing system. For example, the illumination source could potentially induce localized heating in the vicinity of the microcavity, which would produce an effective change in the local refractive index, observable as a shift in the MDR spectrum. Figure 26 shows


Fig. 25 MDR spectrum for 90 μm glass microsphere in water


Fig. 26 MDR spectrum for 90 μm glass microsphere in water under continuous illumination. Insert showing higher magnification of peak 4

the plot of the MDR spectrum over a time interval of 15 min with the microcavity under constant illumination; the diode laser was set to its maximum current of 80 mA, and the shift in the wavelength of the MDR peak features was recorded and plotted in Fig. 27, showing that over the investigated time frame the MDR cavity mode maintained strong stability, with a fluctuation of 0.03 nm observed. It has been reported that the absorption of the liquid medium by a PMMA substrate affects the position of the wavelength spectrum of a Fabry-Perot



Fig. 27 Value of the wavelength for the peaks of the MDR spectrum for a 90 μm glass microsphere in water under continuous illumination

cavity [47]. As the Fabry-Perot is effectively a microcavity, this presents a potential problem in the use of polymer or other material-based microcavities for long-term sensing applications. In order to investigate this effect, the MDR spectrum was recorded over a period of 1 h, at 5 min intervals, with the illumination incident on the microcavity only while measurements of the spectrum were taken. The MDR spectrum for a glass microsphere is shown in Fig. 28 and for a polymer microsphere in Fig. 29. The peaks were identified and the changes in their wavelength positions were plotted for each time measurement (see Fig. 30). We observe minor fluctuations in the wavelength position over the period of 1 h; however, the long-term stability of the MDR within the microcavity is maintained. Similar observations are made for the polystyrene microsphere (see Fig. 31), indicating that for both materials the aforementioned swelling of the microcavity via absorption of the surrounding medium is not observable and there is no significant effect on the long-term stability of the MDR sensing system. The MDR optical sensing technique has the potential to measure localized temperature in several ways: first, a change in temperature results in a change in the fluid density that produces a change in the local refractive index of the solution; second, energy absorbed by the microsphere from localized heating results in a swelling of the microsphere. These changes are observable as a shift in the MDR spectrum, similar to that observed in Figs. 28 and 29. Initial experiments were performed by measuring the MDR spectrum of a 90 μm glass microsphere surrounded by distilled water (nW = 1.33 at λ = 589 nm) and slowly replacing the solution with ethanol (nE = 1.36 at λ = 589 nm). The MDR spectrum was measured after each addition of ethanol and was recorded when no further shifts in the MDR spectrum were observed, indicating that the



Fig. 28 MDR spectra for a 90 μm glass microsphere in a water medium, insert showing magnified image of peak 1


Fig. 29 MDR spectra for a 90 μm polystyrene microsphere in a water medium, insert showing magnified image of peak 1



Fig. 30 Stability of the MDR spectrum for a 90 μm glass microsphere where (a) shows the identification of the peak positions at t = 0 and (b) plots the shift in wavelength of the peak position from time t = 0 to t = 60 min

surrounding medium had completely transitioned from water to ethanol. The change in the MDR spectrum in response to the change in the refractive index of the surrounding solution is shown in Fig. 32, with a wavelength red shift of 0.07 ± 0.03 nm observed. In order to determine the resolution of the MDR sensing system, a series of liquids with known refractive indices needed to be tested. From Table 3 we can



Fig. 31 Stability of the MDR spectrum for a 90 μm polystyrene microsphere where (a) shows the identification of the peak positions at t = 0 and (b) plots the shift in wavelength of the peak position from time t = 0 to t = 60 min

generate a glycerin-water solution with a refractive index ranging from 1.33 to 1.474. Four solutions were prepared using this table: the first had a glycerin concentration of 7% (n = 1.341), the second had a glycerin concentration of 15% (n = 1.351), the third was 31.85% (n = 1.373), and the fourth was water (n = 1.333). The observations of the MDR spectrum were performed with a single 90 μm glass microsphere for each of the refractive index solutions. As with the water-to-ethanol experiment, each solution is introduced in single drops and the MDR spectrum is observed on the CRO; this is repeated until no further additions of solution result in


Fig. 32 Comparison of MDR spectrum between ethanol and water surrounding medium for a single 90 μm glass microsphere

Table 3 Refractive index of glycerin-water solutions at 20 °C

Water %   Glycerin %   Refractive index nD
68.15     31.85        1.373
85        15           1.351
93        7            1.341
100       0            1.333

an observable shift in the wavelength position of the MDR spectrum, which is then recorded. The MDR spectrum for each solution is presented in Fig. 33. The corresponding shifts in the peak position of the MDR spectrum are recorded and plotted in Fig. 34. The trend lines are a linear extrapolation, both forward and backward, to cover the entire refractive index range that is reasonable for the setup used in these experiments. The shifts in the MDR peaks correspond to a sensor sensitivity of 9.66 × 10⁻² RIU, where the sensitivity of the sensor is defined as Δn/Δλ, which determines the minimum detectable refractive index variation in this system as a function of the shift in the MDR spectra.
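The sensitivity extraction described above amounts to a linear fit of peak wavelength against refractive index, followed by dividing the minimum detectable wavelength shift by the slope. A minimal sketch; the peak wavelengths below are hypothetical placeholders, not the values read from Fig. 34:

```python
# (refractive index, peak wavelength in nm) -- hypothetical readings, for illustration
points = [
    (1.333, 778.10),
    (1.341, 778.18),
    (1.351, 778.28),
    (1.373, 778.51),
]

def linear_fit(data):
    """Ordinary least-squares fit y = a + b * x; returns (intercept, slope)."""
    n = len(data)
    sx = sum(x for x, _ in data)
    sy = sum(y for _, y in data)
    sxx = sum(x * x for x, _ in data)
    sxy = sum(x * y for x, y in data)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

intercept, slope = linear_fit(points)  # slope in nm per RIU
resolution = 0.03 / slope              # RIU resolvable for an assumed 0.03 nm shift floor
```

The smaller the detectable wavelength shift (set by laser linewidth, scan step, and cavity Q), the finer the refractive index resolution for a given slope.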

Summary

The integration of an SPR-based optical manipulation technique and an MDR sensing technique has strong potential as a noninvasive, nondestructive, all-optical sensing system. This technique shows particular promise in extending the sensing regime away from the surface of the device by the use of micrometer-



Fig. 33 MDR spectrum response to changes in local refractive index via change in surrounding medium for a single 90 μm glass microsphere

Fig. 34 Plot showing position of the wavelength peaks from Fig. 33 versus refractive index

sized spheres, which under MDR coupling act as sensing elements for all regions within the near field of the sphere. The use of an SPR technique allows the microsphere to be trapped at an arbitrarily defined position within the device, provided illumination of the area is achievable at angles greater than the critical angle, which would allow the technique to be performed within nearly


any arbitrarily designed microfluidic device, representing a significant increase in freedom of design over other integrated MDR sensing systems. The potential of an SPR-based optical trap is demonstrated inside a microfluidic device fabricated by direct laser cutting and hot embossing in a PMMA polymer substrate. The surface plasmon technique was selected for its ease of design, its ability to trap over a large (hundreds of microns) area, and its compatibility with MDR coupling systems. The particle trapping efficiency (PTE) of 4, 10, and 15 μm polystyrene microspheres is characterized for both patterned and unpatterned gold surfaces at both on- and off-resonance incident angles. On resonance, a 40% increase in the PTE of both 4 μm (0–40%) and 15 μm (50–90%) microspheres is observed at higher incident powers (40 and 60 mW, respectively) via the patterning of the metal surface, while 50–60% increases in the PTE of 10 μm microspheres are observed for lower incident powers (20 and 40 mW). Off resonance, no PTE is observed for almost all incident intensities and microsphere sizes, offering the potential to act as an on/off switch for a continuous illumination source. The patterning of the metal surface was demonstrated to act as a method of localizing the optical trap, with the enhancement of the PTE being observed only for particles trapped within the 100 × 100 μm patterned region of the detection area, reducing the trapping volume by a factor of 4. As the location of the patterning of the metal surface, or of the incident focal spot, is arbitrary, SPR trapping is demonstrated to be an excellent technique for trapping a particle anywhere within a microfluidic device. It was shown that both polystyrene and soda lime microspheres possess stable MDR mode profiles over long-term use, as a maximum wavelength shift of 0.04 nm was observed for microspheres resting in both water and ethanol solutions for a period of 1 h.
The investigation of the stability of the MDR modes of a glass microsphere under continuous illumination was undertaken; no temperature-induced wavelength shift was observed with the microsphere. A very high Q-factor of 11,124 was observed for a 90 μm glass microsphere in water solution, and the resolution of such a system was determined to be 9.66 × 10⁻² RIU. The integration of an SPR trap with an MDR sensing technique for an arbitrarily localized sensing system was demonstrated, with a single 90 μm glass microsphere observed to be trapped within one of several 200 × 200 μm patterned regions in a microfluidic device. The MDR mode from the microsphere is then collected with an observed Q value of 2,924 and a visibility of 0.267. Environmental sensing is observed in a microsphere held against a 20 μm/s flow via SPR trapping; a refractive index shift of 0.04 is observed in the MDR spectra with a resolution of 7.75 × 10⁻² RIU. To our knowledge this is the first MDR sensing performed under such coupling and trapping conditions. When the SPR trap was turned off, the microsphere was observed to be removed by flow forces from the detection region of the system. This demonstrates the potential of this technique to position a sensing element at an arbitrary location within a microfluidic device, perform high-resolution sensing experiments, and then remove the sensing element to prevent interference in the experiment under investigation.
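The observation of no temperature-induced shift can be given a rough quantitative bound. With a 0.03 nm wavelength-shift detection floor and a thermo-optic coefficient for water of dn/dT ≈ −1 × 10⁻⁴ K⁻¹ (an assumed literature value, not a figure from this chapter), the smallest resolvable temperature change would be about half a kelvin:

```python
def temperature_resolution(min_shift, wavelength, n_medium, dn_dT):
    """Smallest resolvable temperature change for a given wavelength-shift floor.

    d_lambda = lambda * dn / n  =>  dn_min = n * d_lambda_min / lambda,
    dT_min = dn_min / |dn/dT|.
    """
    dn_min = n_medium * min_shift / wavelength
    return dn_min / abs(dn_dT)

# ~0.5 K for a 0.03 nm floor at 778 nm in water (dn/dT is an assumed value)
dT_min = temperature_resolution(0.03, 778.0, 1.33, 1e-4)
```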


References

1. Mark M, Crain J, Douglas J et al (2009) Fully integrated three-dimensional electrodes for electrochemical detection in microchips: fabrication, characterization, and applications. Anal Chem 81:4762–4769
2. Mandenius C (2000) Electronic noses for bioreactor monitoring. Adv Biochem Eng Biotechnol 66:65–82
3. Navrtil M, Norberg A, Lembrn L, Mandenius C (2005) On-line multi-analyzer monitoring of biomass, glucose and acetate for growth rate control of a Vibrio cholerae fed-batch cultivation. J Biotechnol 115:67–79
4. Wannemacher R, Quinten M, Pack A (1999) Evanescent-wave scattering in near-field optical microscopy. J Microsc 194:260–264
5. Lambeck PV (1999) Remote opto-chemical sensing with extreme sensitivity: design, fabrication and performance of a pigtailed integrated optical phase-modulated Mach-Zehnder interferometer system. Sens Actuators B 61:100–127
6. Tanyeri M, Nichkova M, Hammock BD, Kennedy IM (2005) Chemical and biological sensing through optical resonances in microcavities. In: Imaging, manipulation, and analysis of biomolecules and cells: fundamentals and applications III, vol 5699. Proceedings of the SPIE, San Jose, pp 227–236
7. Mandai S, Serey X, Erickson D (2010) Nanomanipulation using silicon photonic crystal resonators. Nano Lett 10:99–104
8. Huh YS, Chung AJ, Erickson D (2009) Surface enhanced Raman spectroscopy and its application to molecular and cellular analysis. Microfluid Nanofluid 6:285–297
9. Homola J, Koudela I, Yee SS (1999) Surface plasmon resonance sensors based on diffraction gratings and prism couplers: sensitivity comparison. Sens Actuators B 54:16–24
10. Garcia-Chavez V, Spalding GC, Dholakia K (2005) Near-field optical manipulation by using evanescent waves and surface plasmon polaritons. In: Optical trapping and optical micromanipulation II, vol 5930. Proceedings of the SPIE, San Diego, pp 1–10
11. Fang Q, Kim DP, Li X, Yoon TH, Li Y (2011) Facile fabrication of a rigid and chemically resistant micromixer system from photocurable inorganic polymer by static liquid photolithography (SLP). Lab Chip 11:2779–2784
12. Arnold A, Shopova SI (2011) Whispering gallery mode biosensor: fulfilling the promise of single virus detection without labels. In: Biophotonics: spectroscopy, imaging, sensing and manipulation. Springer, Dordrecht, pp 237–259
13. Lezec HJ, Thio T (2004) Diffracted evanescent wave model for enhanced and suppressed optical transmission through subwavelength hole arrays. Opt Express 12:3629–3651
14. Luo J, Zhuang X, Yao J (2012) Enlarging the evanescent field scope by nanoparticle scattering for nanoparticle sensing of optical fiber sensors. J Nanoeng Nanosyst 226:39–43
15. Dienerowitz M, Gibson G, Bowman R, Padgett M (2011) Holographic tweezers: a platform for plasmonics. In: Optical trapping and optical micromanipulation VIII, vol 8097. Proceedings of the SPIE, San Diego. doi:10.1117/12.894695
16. Bowman R, Wright A, Padgett M (2010) An SLM-based Shack-Hartmann wavefront sensor for aberration correction in optical tweezers. J Opt 12:124004
17. Zhang J, Lin H, Sun JP, Feng X, Gillis K, Moldover M (2010) Cylindrical acoustic resonator for the re-determination of the Boltzmann constant. Int J Thermophys 31:1273–1293
18. Brouhard G, Schek H, Hunt AJ (1981) Advanced optical tweezers for the study of cellular and molecular biomechanics. Phys Rev Lett 47:1927–1930
19. Reece P, Garcia-Chavez V, Dholakia K (2006) Near-field optical micromanipulation with cavity enhanced evanescent waves. Appl Phys Lett 88:221116
20. Righini M, Girard C, Quidant R (2008) Light-induced manipulation with surface plasmons. J Opt A 10:093001


21. Righini M, Volpe G, Girard C, Petrov D, Quidant R (2008) Surface plasmon optical tweezers: tunable optical manipulation in the femtonewton range. Phys Rev Lett 100:186804
22. Rindzevicius T, Alaverdyan Y, Sepulveda B et al (2007) Nanohole plasmons in optically thin gold films. J Phys Chem C 111:1207–1212
23. Lalanne P, Rodier JC, Hugonin J (2005) Surface plasmons of metallic surfaces perforated by nanohole arrays. J Opt A 7:422–426
24. Miller R, Malyarchuk V, Lienau C (2003) Three-dimensional theory on light-induced near-field dynamics in a metal film with a periodic array of nanoholes. Phys Rev B 68:2054151–2054159
25. Cesario J, Quidant R, Badenes G, Enoch S (2005) Electromagnetic coupling between a metal nanoparticle grating and a metallic surface. Opt Lett 30:3404–3406
26. Johnson PB, Christy RW (1972) Optical constants of the noble metals. Phys Rev B 6:4370–4379
27. Thormann E, Simonsen AC, Hansen PL, Mouritsen O (2008) Interactions between a polystyrene particle and hydrophilic and hydrophobic surfaces in aqueous solutions. Langmuir 24:7278–7284
28. Green R, Davies M, Roberts C, Tendler S (1998) A surface plasmon resonance study of albumin adsorption to PEO-PPO-PEO triblock copolymers. J Biomed Mat Res 42:165–171
29. Boxshall K, Wu MH, Cui Z, Cui ZF, Watts J, Baker MA (2006) Simple surface treatments to modify protein adsorption and cell attachment properties within a poly(dimethylsiloxane) micro-bioreactor. Surf Interface Anal 38:198–201
30. Garcia-Chavez V, Quidant R, Reece P, Badenes G, Torner L, Dholakia K (2006) Extended organization of colloidal microparticles by surface plasmon polariton excitation. Phys Rev B 73:1–5
31. Ashkin A, Dziedzic JM (1981) Observation of optical resonances of dielectric spheres by light scattering. Appl Opt 20:1803–1814
32. Caldas P, Jorge P, Araujo FM et al (2010) Interrogation of microresonators using multimode fibers. In: 4th European workshop on optical fiber sensors, vol 7653, Porto. doi:10.1117/12.866355
33. Morrish D, Gan X, Gu M (2004) Morphology-dependent resonance induced by two-photon excitation in a micro-sphere trapped by a femtosecond pulsed laser. Opt Express 12:4198–4202
34. Rahman A (2011) Temperature sensor based on dielectric optical microresonator. Opt Fib Technol 17:536–540
35. Poon AW, Courvoisier F, Chang RK (2001) Multimode resonances in square-shaped optical microcavities. Opt Lett 26:632–634
36. Zamora V, Dez A, Andrs MV, Gimeno B (2011) Cylindrical optical microcavities: basic properties and sensor applications. Photonics Nanostruct Fundam Appl 9:149–158
37. Kippenberg TJ, Spillane SM, Vahala KJ (2004) Demonstration of ultra-high-Q small mode volume toroid microcavities on a chip. Appl Phys Lett 85:6113–6115
38. Teraoka I, Arnold S (2009) Resonance shifts of counter-propagating whispering-gallery modes: degenerate perturbation theory and application to resonator sensors with axial symmetry. J Opt Soc Am B 26:1321–1329
39. Savchenkov A, Matsko A, Maleki L (2006) White-light whispering gallery mode resonators. Opt Lett 31:92–94
40. Cai M, Painter O, Vahala KJ, Sercel PC (2000) Fiber-coupled microsphere laser. Opt Lett 25:1430–1432
41. Elliott G, Murugan G, Wilkinson J, Zervas MN, Hewak DW (2010) Chalcogenide glass microsphere laser. Opt Express 18:26720–26727
42. Dumeige Y, Fron P (2006) Whispering-gallery-mode analysis of phase-matched doubly resonant second-harmonic generation. Phys Rev A 74:063804
43. Dominguez-Juarez JL, Kozyreff G, Martorell J (2011) Whispering gallery microresonators for second harmonic light generation from a low number of small molecules. Nat Commun 2:254–257


44. Serpengzel A, Poon A (2011) Optical processes in microparticles and nanostructures. World Scientific Publishing, Singapore
45. Vollmer F, Arnold S (2008) Whispering-gallery-mode biosensing: label-free detection down to single molecules. Nat Methods 5:591–596
46. Yoshie T, Tang L, Su SY (2011) Optical microcavity: sensing down to single molecules and atoms. Sensors 11:1972–1991
47. Gervinskas G, Day D, Juodkazis S (2011) High-precision interferometric monitoring of polymer swelling using a simple optofluidic sensor. Sens Actuators B 159:39–43

Part V Emerging Biophotonic Materials and Devices

Functional Metal Nanocrystals for Biomedical Applications

27

Lei Shao and Jianfang Wang

Contents
Introduction
Growth of Gold Nanocrystals
  Growth from Gold Sources
  Shape and Size Tuning
Functionalization
  Surface Functionalization
  Shell Encapsulation
Labeling and Imaging
  X-Ray Computer Tomography Contrast Agent
  Scattering-Based Labeling and Imaging
  Nonlinear Optical Properties for Labeling/Imaging
  Photothermal and Photoacoustic Imaging
Plasmonic Biosensing
  Refractive Index-Based Sensing
  Colorimetric Sensing Controlled by Assembly of Gold Nanocrystals
  Photoluminescence Quenching for Biosensing
  Surface-Enhanced Raman Scattering for Biosensing
Photothermal Conversion-Based Therapy and Drug/Gene Release
  Photothermal Conversion Properties of Gold Nanocrystals
  Photothermal Therapy
  Photothermal Conversion-Controlled Drug/Gene Delivery and Release
Conclusion and Outlook
References

L. Shao • J. Wang (*) Department of Physics, The Chinese University of Hong Kong, Shatin, Hong Kong SAR, China e-mail: [email protected]; [email protected] # Springer Science+Business Media Dordrecht 2017 A.H.-P. Ho et al. (eds.), Handbook of Photonics for Biomedical Engineering, DOI 10.1007/978-94-007-5052-4_34


Abstract

Gold nanocrystals can be used in various biomedical applications owing to their excellent chemical and physical properties, and they show promising potential for future clinical practice. In particular, in contrast with small molecules and bulk materials, Au nanocrystals exhibit fascinating plasmonic properties, which give them extremely large light scattering and absorption cross sections, high sensitivity to the surrounding environment, and a remarkable ability to enhance optical signals. These attractive plasmonic characteristics endow Au nanocrystals with unique advantages for medical diagnostic and therapeutic applications. In this chapter, the preparation and functionalization of gold nanocrystals for biomedical applications are briefly introduced. Emphasis is thereafter placed on the use of Au nanocrystals in a variety of biomedical applications, including labeling, imaging, biosensing, and photothermal conversion-based therapy and release of drugs and genes. The principles of the different techniques, as well as the specific plasmonic properties of Au nanocrystals employed for the above applications, are discussed, and application examples are listed as the principles are introduced. The challenges of applying Au nanocrystals in practical biomedical and clinical applications are also discussed.

Keywords

Absorption • Bioimaging • Biolabeling • Biosensing • Drug delivery • Gold nanocrystals • Gold nanorods • Photothermal conversion • Photothermal therapy • Plasmon coupling • Plasmon resonance • Refractive index-based sensing • Scattering • Surface functionalization • Two-photon photoluminescence

Introduction

When metals are divided into fragments with sizes down to 100 nm, the resultant metal nanocrystals have very different electronic, optical, and catalytic properties from their bulk counterparts. Their unique properties, such as size- and shape-dependent optical features, crystalline structure-determined electronic properties, and improved catalytic performance, originate from the large surface area-to-volume ratio and the spatial confinement of the conduction band electrons of metal nanocrystals. Among various metal nanostructures, Au nanocrystals have received extensive attention. They have great potential in biology and medicine due to their nontoxic nature and chemical stability in aqueous environments [1]. Compared with other metal nanostructures, Au nanocrystals have at least three other advantages for biomedical applications. First, reliable and high-yield methods have been developed for preparing Au nanocrystals with different shapes and sizes [2, 3]. Second, various chemical functional groups, such as thiols, phosphines, and amines, possess a certain affinity for the Au surface. One can therefore employ ligands containing such functional groups to modify the surfaces of Au nanocrystals [4]. By introducing additional moieties, such as oligonucleotides,

27 Functional Metal Nanocrystals for Biomedical Applications

proteins, and antibodies, greater functionality can be achieved. Third, and most importantly, Au nanocrystals exhibit remarkable plasmonic properties. They can support localized surface plasmon resonance, which is the collective coherent oscillation of their conduction band electrons. Benefiting from the plasmon resonance, Au nanocrystals have intriguing scattering and absorption properties and impressive electromagnetic field confinement capabilities at their plasmon resonance wavelengths, which can be easily tuned synthetically by varying the shape, size, and chemical composition. The plasmon wavelengths of Au nanocrystals can be finely varied through the visible to near-infrared (NIR) spectral regions, matching the biological transparency window and allowing for deep light penetration into soft tissues. The plasmonic properties of Au nanocrystals make them promising candidates for biomedical applications: (i) their strong scattering is useful for microscopic labeling/imaging-based applications; (ii) their high spectral sensitivity to environmental changes holds great potential for biomolecular sensing applications; (iii) their high NIR absorption enables efficient laser photothermal therapy; and (iv) their large electromagnetic enhancements allow the use of surface-enhanced Raman scattering spectroscopy for bioanalysis. This chapter starts with a discussion of the growth and morphology tuning of Au nanocrystals. Surface functionalization of the grown Au nanocrystals is then briefly introduced. Emphasis is thereafter placed on the principles of using Au nanocrystals in biomedical applications, including labeling, imaging, biosensing, and photothermal conversion-based therapy and release of drugs and genes. Application examples are given as the principles are introduced.
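The sensitivity of the plasmon resonance to the surrounding medium can be illustrated with a back-of-the-envelope calculation. The sketch below uses the quasi-static (Fröhlich) resonance condition for a small metal sphere, Re ε(ω) = −2εm, with a lossless Drude model; the gold parameters (plasma energy ≈9.0 eV, background permittivity ≈9.8) are assumed representative literature values, not figures from this chapter.

```python
import math

# Quasi-static (Frohlich) dipole resonance of a small metal sphere:
# Re[eps_metal] = -2 * eps_medium.  With a lossless Drude model
# eps(w) = eps_inf - wp^2 / w^2, the resonance energy is
#   hw_res = hwp / sqrt(eps_inf + 2 * eps_medium).
HBAR_WP_EV = 9.0   # assumed Drude plasma energy for gold (eV)
EPS_INF = 9.8      # assumed background permittivity (interband screening)

def lspr_wavelength_nm(n_medium: float) -> float:
    """Estimated dipolar LSPR wavelength of a small Au sphere."""
    eps_m = n_medium ** 2
    hw_res_ev = HBAR_WP_EV / math.sqrt(EPS_INF + 2.0 * eps_m)
    return 1239.84 / hw_res_ev   # photon energy (eV) -> wavelength (nm)

for n in (1.00, 1.33, 1.40):     # vacuum, water, protein-rich layer
    print(f"n = {n:.2f}: LSPR ~ {lspr_wavelength_nm(n):.0f} nm")
```

Raising the medium index from 1.00 to 1.40 redshifts the estimated resonance by a few tens of nanometers (to roughly 500 nm in water), which is the effect exploited in refractive index-based sensing.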

Growth of Gold Nanocrystals

Growth from Gold Sources

Reproducible production of Au nanocrystals is of vital importance for their applications. Au nanocrystals for biomedical applications are usually prepared by so-called bottom-up synthetic approaches, where Au nanocrystals are generated through nucleation in aqueous solutions and subsequent overgrowth. As early as 1857, Michael Faraday reported the first procedure for growing colloidal Au nanocrystals, in which gold chloride was reduced by phosphorus to form a ruby-colored fluid composed of fine Au nanoparticles. Since then, various methods have been developed for preparing Au nanocrystals, such as wet-chemistry, electrochemical, sonochemical, solvothermal, microwave-assisted, and photochemical reduction techniques. The shape and size of Au nanocrystals can be finely controlled by state-of-the-art growth methods. Readily obtained Au nanocrystals currently include nanospheres, nanorods, nanoplates, nanocubes, nanopolyhedra, nanoshells, and anisotropic nanostructures with various protrusions [3, 5–8]. In these growth methods, aqueous solvated Au salts serve as the Au source and are reduced by various reducing agents, such as sodium borohydride, ascorbic acid, and small Au clusters, under different external stimuli in


Fig. 1 Gold nanocrystals grown by the seed-mediated method. (a) Schematic illustrating the seed-mediated growth method. The seeds are stabilized by surfactant molecules. (b) Transmission electron microscopy (TEM) image of Au nanospheres [5]. (c) TEM image of Au nanorods [6]. (d) Scanning electron microscopy (SEM) image of tetrahexahedral Au nanocrystals [7]. (e) TEM image of Au nanostars [8] (Reproduced from Refs. [6–8] with permission from the Royal Society of Chemistry and the American Chemical Society)

solutions containing stabilizing agents. The stimulus can trigger or enhance the reduction of the Au salt. The two most commonly used wet-chemistry growth methods for Au nanocrystals are the one-step Turkevich method and the seed-mediated method. In the method first described by Turkevich in 1951, chloroauric acid (HAuCl4) is reduced by citrate in water. Citrate serves as both a reducing agent and an anionic stabilizer. Chemists have made various refinements to the Turkevich method and successfully achieved the production of Au nanospheres with diameters ranging from 16 to 150 nm [1]. Seed-mediated growth (Fig. 1a) is also frequently employed to produce spherical nanoparticles of different sizes and nonspherical Au nanocrystals, such as nanocubes, nanorods, nanopolyhedra, nanoplates, and nanostars (Fig. 1b–e). In the seed-mediated method, the diameter of the grown Au nanospheres can be tuned from 5 to 250 nm by controlling the growth steps and the ratio between the seed and the growth solution [1]. The growth of Au nanocrystals with different shapes can be achieved by selecting different surfactants and ions in the growth solutions. Among all nonspherical Au nanocrystals, Au nanorods have attracted flourishing research interest, mainly because of their easy and robust growth methods and synthetically tunable plasmonic properties over a wide wavelength range. The seed-mediated growth method can give nearly monodisperse Au nanorods with very high yields (the yield of Au nanorods refers to the ratio between the number of Au nanorods and the total number of Au nanocrystals in the growth product) and uniformity [6, 9, 10]. In a typical growth, 3-nm Au nanoparticle seeds are first


prepared by reducing chloroauric acid with borohydride in an aqueous cetyltrimethylammonium bromide (CTAB) solution. The seed solution is then added into the growth solution containing the Au salt precursor, reducing agent, and CTAB. The CTAB surfactant serves as the stabilizing agent to prevent the aggregation of Au nanorods. It also acts as a "soft template" by forming micelles to direct the longitudinal growth of Au nanorods. The detailed mechanism is still under investigation. The seed-mediated growth method can produce Au nanorods with a yield as high as 95%. Additionally, the size and shape of Au nanorods can be tailored by carefully adjusting the growth conditions, such as the composition of the surfactant, the ions contained in the solution, the pH of the growth solution, the seed amount added to the growth solution, and the growth temperature. By varying the growth conditions, as-grown Au nanorods with aspect ratios from 2.4 to 8.5 can be produced. Apart from colloidal nanocrystals with well-defined geometries such as Au nanospheres and nanorods, branched Au nanostars can also be prepared with the seed-mediated method (Fig. 1e) [8]. The unique geometry of Au nanostars endows them with intense scattering, high environmental sensitivity, NIR absorption, and large electromagnetic enhancements. As a result, Au nanostars can be very useful in a number of biomedical applications. The temporal and monetary costs of the wet-chemistry techniques for growing Au nanocrystals are relatively low. The easy and fast preparation of Au nanocrystals has therefore given rise to a few start-up companies, such as Nanopartz and NanoSeedz, that act as commercial suppliers of high-quality Au nanocrystals. The ready availability of high-quality Au nanocrystals further boosts the rapid development of research activities involving colloidal Au nanocrystals.
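The statement that the seed-to-growth-solution ratio sets the final sphere diameter follows from simple mass conservation: if all added gold deposits uniformly on existing seeds with no secondary nucleation (an idealizing assumption), the final diameter scales as the cube root of the total gold per seed. A minimal sketch under that assumption:

```python
def grown_diameter_nm(d_seed_nm: float, au_ratio: float) -> float:
    """Final diameter when `au_ratio` = (Au added)/(Au already in seeds)
    is deposited entirely onto the seeds (no secondary nucleation assumed):
    the particle volume, and hence d^3, grows by the factor (1 + au_ratio)."""
    return d_seed_nm * (1.0 + au_ratio) ** (1.0 / 3.0)

# Growing 5-nm seeds toward larger spheres: the required Au supply
# rises with the cube of the target diameter.
for ratio in (7, 63, 999):
    print(f"Au(added)/Au(seed) = {ratio:4d}: d ~ {grown_diameter_nm(5.0, ratio):.0f} nm")
```

Doubling the diameter thus takes roughly 8 times the gold already in the seeds, and reaching 50 nm from 5-nm seeds takes about 1000 times, which is why multistep growth with fresh seed additions is often more practical than a single large batch.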
Although biomedical applications typically employ Au nanocrystals prepared by the bottom-up synthetic approaches mentioned above, Au nanostructures with high uniformity, controlled particle geometries, and precise spatial arrangements are demanded in a number of on-chip diagnostic and bioanalytical applications, which require the employment of top-down fabrication methods. Two types of top-down methods are usually applied for the preparation of Au nanostructures. The first is based on the removal of gold from pre-deposited Au films according to predesigned patterns. The removal of unwanted gold is achieved by using focused ion beam or various etching techniques. Au nanostructures located at predetermined sites can therefore be produced. The second type of top-down methods employs lithography techniques to make masks on substrates. Au layers are then deposited onto the substrates through physical methods, such as thermal, electron-beam evaporation, or sputtering. Au nanostructures with designed patterns are obtained after the liftoff process. Electron-beam lithography is the most commonly used technique for the fabrication of Au nanostructures. State-of-the-art electron-beam lithography can now produce Au nanostructures ranging from below 10 nm to several hundred nanometers. The limitations of the top-down methods are their time-consuming processes, high cost, and damped plasmonic properties. In this chapter, emphasis will be put on Au nanocrystals prepared by the bottom-up methods due to their much wider adoption in biomedical applications.


Shape and Size Tuning

Fine tuning of the size and shape of colloidal Au nanocrystals in a controlled manner is challenging yet essential for biomedical applications, since the plasmonic properties of Au nanocrystals, such as their resonance wavelength and scattering-to-absorption intensity ratio, are strongly dependent on their geometry (Fig. 2) [6]. For instance, the plasmon wavelength redshifts as the diameter of Au nanospheres is increased (Fig. 2a). The plasmon resonance wavelength corresponding to the electron oscillations along the length axis of Au nanorods exhibits a nearly linear dependence on their aspect ratio (Fig. 2b). The excitation of high-order plasmon modes is strongly dependent on the size and surface curvature of Au nanocrystals [6]. In addition, shape tuning enables researchers to control the exposed crystalline facets of grown Au nanocrystals and thereafter allows for different manners of surface functionalization and overgrowth of other metals. Controlling the surface functionalization manner is important for engineering the assembly and bioconjugation of Au nanocrystals, while overgrowth of other materials on different facets is crucial for catalytic applications [6]. Taking colloidal Au nanorods as an example, tremendous progress has been made on their shape and size tuning. Unlike most top-down methods limited by

Fig. 2 Gold nanocrystals with synthetically tunable shapes and sizes. (a) Au nanospheres. On the right is a photograph of differently sized Au nanosphere samples in aqueous solutions [5]. (b) Au nanorods. On the right is a photograph of differently sized Au nanorod samples in aqueous solutions. The Au nanocrystal dispersions exhibit distinct colors resulting from their strong scattering and absorption at their plasmon wavelengths, which can be varied by synthetically tuning the diameters of Au nanospheres and aspect ratios of Au nanorods (Reproduced from Ref. [5] with permission from John Wiley & Sons)


their 10-nm resolution, fine tuning in solutions usually occurs on scales of several tens of nanometers [6]. First, the size or diameter of Au nanorods can be easily varied by changing the amount ratio between the seeds and the growth solution and the number of growth steps, similar to the growth of Au nanospheres with different diameters. Second, by combining the seed-mediated growth method with subsequent chemical modifications, Au nanorods with various head shapes can be obtained. As-grown Au nanorods prepared by the seed-mediated method usually possess spherical heads. One can tune the head shape by subsequent overgrowth using a slightly modified growth solution, with the as-grown Au nanorods as the seeds. By carefully adjusting the CTAB concentration, the pH of the growth solution, and the amount of the reactants, one is able to grow Au nanorods with spherical ends into dogbone-like nanorods with relatively flat heads, dumbbell-shaped nanorods with spherical heads thicker than the waist, and even cuboidal Au nanocrystals with flat facets [6]. Third, the length-to-diameter aspect ratio of Au nanorods can be adjusted, enabling one to tailor their longitudinal plasmon resonance wavelengths to accommodate different applications. There are mainly two approaches for adjusting the aspect ratio: anisotropic oxidation and transverse overgrowth. In the anisotropic oxidation process, the aspect ratio of Au nanorods can be synthetically tailored over a broad range by reducing their length without changing their diameter or by reducing their diameter without varying their length. The length of Au nanorods can be shortened by thermal or laser heating, cyanide or Au(III) dissolution, or various oxidants, while the nanorod diameter remains nearly unchanged [6]. Selective shortening of Au nanorods is believed to be enabled by the smaller packing density of CTAB molecules at the two ends of the nanorods.
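The near-linear link between aspect ratio and longitudinal plasmon wavelength noted above (see also Fig. 2b) is often captured by empirical fits of the form λ ≈ a·AR + b. The coefficients below (≈95 nm per unit of aspect ratio, ≈420 nm offset, for rods in water) are commonly quoted literature values, assumed here for illustration; they drift for thick rods or other media.

```python
def longitudinal_lspr_nm(aspect_ratio: float) -> float:
    """Approximate empirical linear fit for Au nanorods in water
    (assumed coefficients: ~95 nm per unit aspect ratio, ~420 nm offset)."""
    return 95.0 * aspect_ratio + 420.0

# Sweeping the synthetically accessible aspect-ratio range pushes the
# longitudinal resonance from the red edge of the visible into the NIR.
for ar in (2.4, 4.0, 6.0):
    print(f"AR {ar}: longitudinal LSPR ~ {longitudinal_lspr_nm(ar):.0f} nm")
```

This is why aspect-ratio control is the main handle for placing the nanorod resonance inside the biological transparency window; the linear fit loses accuracy for very large aspect ratios.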
Mild oxidation can be achieved by using environmentally benign oxygen or hydrogen peroxide. Br− ions coming from CTAB play an important role in ensuring the progress of the oxidation reaction. The oxidation rate can be accelerated by increasing the acid concentration or the reaction temperature and stopped by removing the reactants through centrifugation. In another anisotropic oxidation method, where the nanorod diameter is reduced while the length is kept nearly unchanged, preferential etching of the side surface of Au nanorods is realized by selectively capping the nanorod ends with a Ag2O protection layer. This transverse oxidation method can give rise to Au nanorods with longer longitudinal plasmon wavelengths and smaller particle volumes. The overgrowth method, in contrast to anisotropic oxidation, tailors the aspect ratio of Au nanorods by selectively widening their diameter [6]. Small thiol molecules, such as glutathione or cysteine, are first bonded to the ends of Au nanorods to block longitudinal growth. As a result, Au nanorods undergo transverse overgrowth upon the additional supply of the growth solution, producing Au nanorods with larger diameters but nearly unchanged lengths. The transverse overgrowth process is accompanied by the development of the stable {111} facets. As a result, faceted Au nanocrystals will be produced if a sufficient amount of the growth solution is provided.
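The reshaping routes just described move the aspect ratio and particle volume in different directions, which a little geometry makes explicit. The starting dimensions below are hypothetical, and the rods are modeled simply as cylinders with hemispherical end caps.

```python
import math

def rod_metrics(length_nm: float, dia_nm: float):
    """Aspect ratio and volume of a cylinder with hemispherical end caps."""
    r = dia_nm / 2.0
    volume = math.pi * r ** 2 * (length_nm - dia_nm) + (4.0 / 3.0) * math.pi * r ** 3
    return length_nm / dia_nm, volume

cases = {
    "as-grown":              (60.0, 15.0),  # hypothetical starting rod, AR 4
    "length oxidation":      (45.0, 15.0),  # shorter at fixed diameter: AR down
    "transverse oxidation":  (60.0, 12.0),  # thinner at fixed length: AR up, volume down
    "transverse overgrowth": (60.0, 20.0),  # wider at fixed length: AR down, volume up
}
for name, (length, dia) in cases.items():
    ar, vol = rod_metrics(length, dia)
    print(f"{name:22s} AR = {ar:.1f}, volume = {vol / 1e3:5.1f} x 10^3 nm^3")
```

Transverse oxidation is the only route here that raises the aspect ratio (redshifting the longitudinal plasmon) while shrinking the volume, matching the behavior described above for Ag2O-protected etching.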


Functionalization

Some synthetic reagents that play important roles in the growth of various types of Au nanocrystals, for example, the CTAB surfactant, which forms a bilayer structure on the Au surface (Fig. 3a), are toxic to cells. To impart biological compatibility to Au nanocrystals and arm them with added functionalities for particular applications, functionalization of Au nanocrystals with appropriate organic or inorganic species is necessary. Commonly employed functionalization techniques mainly include the formation of covalently bonded monolayers of molecules on the Au surface, secondary functionalization based on molecular interactions between functional molecules and primary molecular anchors pre-adsorbed on the Au surface, and the coating of a biocompatible dielectric or polymer shell on Au nanocrystals.

Surface Functionalization

In biomedical applications, a number of functional molecular linkers and passivating agents are employed to conjugate Au nanocrystals. These molecules are attached to the Au surface through anchoring groups such as thiolate, dithiolate, dithiocarbamate, amine, carboxylate, selenide, isothiocyanate, and phosphine moieties [1]. Among all the surface chemistry methods for functionalizing the Au surface, gold–thiol bonding chemistry is most frequently utilized for completely or partly covering the nanocrystal surface (Fig. 3b, c). Large thiol-terminated polymers with high molecular weights, such as poly(ethylene glycol)s and DNAs, are preferred for functionalization, since small thiol molecules have a small steric effect and cannot overcome the attractive force between Au nanocrystals. Thiolated poly(ethylene glycol) (PEG-SH) is by far the most commonly employed surface ligand for functionalizing Au nanocrystals in biomedical applications. The hydrophilicity and long chain length of PEG-SH endow Au nanocrystals with good dispersibility and high stability in aqueous solutions. PEG-SH also permits Au nanocrystals in aqueous dispersions to be conjugated with lipophilic molecules and prevents the uptake and clearance of Au nanocrystals in biological systems by blocking the adsorption of serum proteins and opsonins [1].

Fig. 3 Schematics showing the functionalization of Au nanocrystals. (a) CTAB bilayer-capped Au nanocrystals. (b) Functionalized Au nanocrystals with the CTAB bilayer completely exchanged. (c) Functionalized Au nanocrystals with the CTAB bilayer partially exchanged. (d) Secondarily functionalized Au nanocrystals [6] (Reproduced from Ref. [6] with permission from the Royal Society of Chemistry)


To “decorate” Au nanocrystals with new functionalities in biomedical applications, secondary functionalization is often performed (Fig. 3d). The secondary functionalization also plays an important role in the assembly of Au nanocrystals into a variety of superstructures. Techniques for realizing the secondary functionalization include electrostatic attraction, antibody/antigen interaction, DNA sequence recognition, etc. For example, one can deposit biological polyelectrolytes and proteins onto the surface of CTAB-capped Au nanocrystals through electrostatic attraction. The adsorbed polyelectrolytes and proteins can prevent the Au nanocrystals from aggregation, and the functionalized Au nanocrystals can have a significantly decreased amount of toxic CTAB released in solutions. The tolerable dose of CTAB-capped Au nanocrystals will, as a result, be very different from that of free CTAB molecules alone. The secondary functionalization technique can also introduce functional groups, such as carboxylic acid and amine groups, for further modification.

Shell Encapsulation

Entrapping Au nanocrystals within a polymer or dielectric shell to form core/shell nanostructures is another popular method for functionalizing Au nanocrystals (Fig. 4). Polymers can be coated onto the Au surface by polymerization reactions on the premodified Au surface or by layer-by-layer deposition techniques. For instance, polyaniline (PANI) shells can be formed on the surface of positively charged, CTAB-capped Au nanocrystals by surfactant-assisted chemical oxidative polymerization (Fig. 4a) [11]. In this process, an anionic surfactant, sodium dodecylsulfate (SDS), is first adsorbed on the positively charged, CTAB-capped Au nanocrystals. SDS adsorption brings aniline molecules to the surface of the Au nanocrystals, and aniline polymerization then leads to the formation of the PANI shell. The polymerization process can be repeated. By increasing the number of the

Fig. 4 Gold nanocrystals encapsulated with polymer or dielectric shell. (a) TEM image of Au nanospheres coated with PANI shell [11]. (b) TEM image of Au nanorods coated with mesoporous silica shell [13] (Reproduced from Ref. [13] with permission from the American Chemical Society)


polymerization cycles, the thickness of the PANI shell can be increased. On the other hand, polyelectrolytes can be attached onto surfactant-stabilized Au nanocrystals by layer-by-layer deposition [1, 6]. One can sequentially deposit negatively and positively charged polyelectrolytes onto the positively charged surface of Au nanocrystals through electrostatic interaction. Commonly employed polyelectrolytes include polyacrylic acid (PAA), polystyrenesulfonate (PSS), poly(diallyldimethylammonium chloride) (PDADMAC), and poly(allylamine hydrochloride) (PAH). PAA and PSS are negatively charged, while PDADMAC and PAH are positively charged. The deposition cycle can be repeated multiple times to form polymer shells with different thicknesses. Layer-by-layer assembly has been found to be very useful in a number of multidrug and gene delivery applications. Among dielectric shell-encapsulated Au nanocrystals, gold/silica core/shell nanostructures are especially attractive. Both solid and mesoporous silica can be coated onto the surface of Au nanocrystals by facile silane chemistry [12, 13]. Raman reporters and fluorescent molecules can be easily embedded into the silica shell during the coating process. The resonant excitation of the plasmon modes of Au nanocrystals remarkably enhances the Raman scattering and fluorescence intensities of these molecules [13–15], making gold/silica core/shell nanostructures highly preferable for bioimaging and biosensing. Mesoporous silica shells (Fig. 4b) further provide large specific surface areas and pore volumes for high drug payloads [16]. The plasmon-induced photothermal conversion properties of the encapsulated Au nanocrystals can facilitate the release of these drugs.

Labeling and Imaging

The unique plasmonic properties of Au nanocrystals, including their large scattering and absorption cross sections and strong electromagnetic field enhancing capability, can be employed to develop Au nanocrystal-facilitated imaging and labeling techniques for biomedical applications. These imaging and labeling techniques can offer higher contrast, improved resolution, deeper penetration, and higher stability compared with conventional fluorescence-based imaging methods.

X-Ray Computed Tomography Contrast Agents

X-ray computed tomography (CT) is a commonly used, cost-effective imaging technique with broad availability for diagnostic applications. Soft tissues can be differentiated from electron-dense bones by X-ray CT owing to their density difference and the resultant difference in X-ray attenuation. Highly water-soluble small iodinated organic molecules are the commonly employed CT contrast enhancers. However, their rapid renal clearance and nonspecific vascular permeation make their imaging time very short [17], and they are unable to increase the contrast between normal and diseased tissues [17]. Gold nanocrystals, as alternative CT contrast enhancers, have been utilized to detect tumors in mice [18, 19].
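Gold's appeal as a CT contrast agent comes largely from its high atomic number. A crude order-of-magnitude estimate of the per-weight advantage over iodine can be made from the approximate Z⁴ scaling of photoelectric absorption per atom; K-edge positions and the polychromatic tube spectrum are ignored, so this is indicative only, not a dosimetric calculation.

```python
# Crude per-weight comparison of gold versus iodine for CT contrast.
# Photoelectric attenuation per atom scales roughly as Z^n (n ~ 4 in the
# diagnostic energy range); dividing by the atomic mass A converts the
# per-atom figure to a per-gram one.  K-edges and the real tube spectrum
# are neglected in this back-of-the-envelope estimate.
Z_AU, A_AU = 79, 196.97   # gold
Z_I, A_I = 53, 126.90     # iodine

ratio = (Z_AU ** 4 / A_AU) / (Z_I ** 4 / A_I)
print(f"per-gram photoelectric ratio Au/I ~ {ratio:.1f}")
```

The crude estimate lands near 3, in the same ballpark as the roughly 2.7-fold contrast advantage usually quoted for gold over iodine once the actual X-ray spectrum is taken into account.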


Fig. 5 Gold nanocrystals as CT contrast enhancers. (a–c) Au nanoparticles of 1.9 ± 0.1 nm in size suspended in phosphate-buffered saline as X-ray contrast agents for in vivo imaging [18]. The X-ray images of mouse hind legs without contrast agents (a), with Au nanoparticles injected (b), and with an equal weight of an iodine contrast agent (c) are given for comparison. The arrow points to the leg with tumor and increased vascularity. The arrowhead points to a vessel of 100 μm in diameter. The scale bar is 5 mm. (d, e) CT images in a rat hepatoma model before (d) and 4 h after (e) the injection of PEG-coated Au nanoparticles (100 mg/mL) into the tail vein [19]. The arrows indicate the hepatoma regions and the arrowhead indicates the aorta. The Au nanoparticles have an average diameter of 31 ± 7.5 nm (Reproduced from Refs. [18] and [19] with permission from the Radiological Society of North America and the American Chemical Society)

With a higher atomic number (Au 79 versus I 53), gold provides about 2.7 times greater contrast per unit weight than iodine and can therefore reduce the patient radiation dose (Fig. 5a–c) [18]. In CT imaging, the amount of Au per unit volume is what matters, irrespective of the particle shape and size. Au nanocrystals can be stabilized in phosphate-buffered saline and gum arabic or functionalized with PEG-SH as biocompatible X-ray CT contrast agents. PEG coating can greatly extend the blood circulation time of Au nanocrystals (>4 h) compared with the iodine contrast agent iopromide (7 days).

Thus, making water-dispersible bioconjugated Si QDs remains an important challenge to be overcome. More recently, Erogbogbo et al. reported the synthesis of water-dispersible Si QDs using phospholipid micelles [46, 116]. The Si QDs were prepared by laser-driven pyrolysis of silane, followed by HF–HNO3 etching. Styrene, octadecene, or ethyl undecylenate was used to functionalize the Si QD surfaces, allowing them to be dispersed in organic solvents. Phospholipid micelles were then used to encapsulate the Si QDs, thereby making them dispersible in water and generating a hydrophilic shell terminated with PEG groups. For in vitro cell-labeling studies, amine-functionalized phospholipid PEGs were used to encapsulate Si QDs, and these encapsulated particles were used as biological luminescent probes. The uptake of micelle-encapsulated Si QDs into pancreatic cancer cells was confirmed by confocal imaging. Later, the same group further improved the Si QD formulation so that it avoided enzymatic degradation, evaded uptake by the reticuloendothelial system (RES), maintained stability in the acidic tumor microenvironment, and produced bright and stable photoluminescence in vivo [86]. More specifically, nude mice bearing subcutaneously implanted Panc-1 tumors were intravenously injected with Si QDs conjugated with RGD peptide [117].
Wavelength-resolved spectral unmixing confirmed the presence of emission from the Si QDs targeted to the tumor vasculature, and the luminescence intensity at the tumor site was maintained for up to 40 h (Fig. 8). Blood assays and histological analysis of tissue sections revealed no sign of systemic or organ toxicity in the treated animals, demonstrating the effectiveness of tumor targeting using the bioconjugated Si QDs. Many studies have pointed out that C-dots have good photostability [118, 119], and they can be made water dispersible for bioimaging applications. For example, Bhunia et al. demonstrated the preparation of C-dots by a carbohydrate carbonization method, and these particles displayed tunable emission ranging from blue to red [67]. Using these particles, TAT peptide- and folate-functionalized C-dots were synthesized and employed for targeted imaging of HeLa cells. Cao et al. reported the synthesis of propionylethylenimine-co-ethylenimine (PPEI-EI)-functionalized C-dots for multiphoton imaging of live cells [120]. Upon incubating the prepared C-dot formulation with live MCF-7 cells for 2 h, the labeled cells became brightly illuminated when exposed to an 800 nm excitation source. The C-dots were observed to label mainly the cell membrane and the cytoplasm of the MCF-7 cells. The as-synthesized C-dots were also applied for imaging of living tissues, demonstrating their potential for small-animal imaging and lymph node mapping applications in the near future. Kong et al. reported the generation of bioconjugated C-dot probes for monitoring the pH gradient in tissues at depths varying from 65 to 185 μm [121]. These particles were synthesized by an electrochemical method and were conjugated with AE–TPY using EDC chemistry for pH sensing. Basically, the constructed C-dot complex is sensitive to pH changes in its environment.
The authors showed that the fluorescence emission intensity of the C-dot–TPY formulation increases as the pH of the environment decreases. These particles were used to

28 Cadmium-Free Quantum Dots for Biophotonic Imaging and Sensing

Fig. 8 Time-dependent in vivo luminescence imaging of Panc-1 tumor-bearing mice injected with 5 mg of (A ~ E) RGD-conjugated Si QDs or (K ~ O) nonconjugated Si QDs. The tumors are indicated by white arrows. Background signals and the unmixed SiQD signals are coded in green and red, respectively. Panels F ~ J and panels P ~ T correspond to the images in panels A ~ E and K ~ O, respectively. Ex vivo images (U, W) and luminescence images (V, X) of tumors harvested at 40 h postinjection from mice treated with (U, V) RGD-conjugated Si QDs or (W, X) nonconjugated SiQDs (Adapted with permission from Ref. [86]. Copyright # 2011, American Chemical Society)
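The "unmixed SiQD signals" referred to in the Fig. 8 caption are obtained by wavelength-resolved spectral unmixing, in which each pixel's spectrum is modeled as a linear combination of reference spectra (tissue autofluorescence plus QD emission) and the abundances are recovered by least squares. The sketch below uses made-up Gaussian stand-ins for the reference spectra, so the numbers are illustrative only:

```python
import numpy as np

# Minimal linear spectral unmixing: measured spectrum = A @ abundances,
# where the columns of A hold reference spectra (here, made-up Gaussian
# stand-ins for tissue autofluorescence and Si QD emission).
wl = np.linspace(500, 900, 81)                      # wavelength axis (nm)
autofluor = np.exp(-((wl - 560) / 60) ** 2)
siqd = np.exp(-((wl - 780) / 40) ** 2)
A = np.column_stack([autofluor, siqd])

truth = np.array([0.7, 0.3])                        # assumed abundances
rng = np.random.default_rng(0)
measured = A @ truth + 0.01 * rng.standard_normal(wl.size)

# Ordinary least squares per pixel; negatives are clipped as a crude
# stand-in for a proper nonnegative least-squares solver.
est, *_ = np.linalg.lstsq(A, measured, rcond=None)
est = np.clip(est, 0.0, None)
print(f"estimated abundances: autofluorescence {est[0]:.2f}, SiQD {est[1]:.2f}")
```

Applied pixel by pixel to a spectral image stack, this separation is what lets the green (background) and red (QD) channels in Fig. 8 be displayed independently.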

label the cancer cells, and subsequently the treated cells were analyzed by 3D two-photon confocal fluorescence imaging, in which the fluorescence intensity changes of the cells were monitored while the pH of the cell culture medium was varied. Huang et al. demonstrated the preparation of green-emitting C-dots functionalized with the near-IR-emitting dye ZW800 and employed them for imaging of tumor-bearing mice [122]. The in vivo NIR fluorescence images showed high tumor-to-background contrast, demonstrating the specificity of the C-dots for the tumor cells. GQDs have optical properties similar to those of C-dots and have also been applied in bioimaging. For instance, Dong et al. prepared a GQD formulation from XC-72 carbon black for cell imaging. The as-prepared QDs were used to stain MCF-7 cells, and the labeled cells were analyzed by confocal laser scanning microscopy. It was observed that the GQDs mostly accumulated in the nucleus, and no cell damage was observed [123]. We envision that GQDs can be used as optical probes for in vivo imaging studies in the near future. However, several limitations of GQDs need to be overcome before they can be successfully applied in vivo. For example, one needs to


B. Zhang et al.

improve their quantum yield and their colloidal and optical stability in biological fluids. To date, there have been several attempts at using GQDs for imaging of living tissues [124] and mice [125], but the results show that further optimization is needed to perfect GQDs for targeted imaging and sensing use.

Nanotoxicity of Cadmium-Free QDs: From Cellular to Primate Studies

Owing to the absence of toxic heavy metals such as cadmium, lead, and mercury as active ingredients, CuInS2, AgInS2, InP, Si, and other heavy-metal-free QDs are of great interest for bioimaging applications. Nevertheless, it is still important to evaluate the cytotoxicity of these QD formulations before they can be translated into clinical research. In general, the preliminary cytotoxicity of these QDs can be evaluated using a cell viability (MTS) assay. Cytotoxicity studies of cadmium-based QDs with different sizes, shapes, and surface coatings have been extensively reported in the literature [16, 126, 127]. However, very few studies have been reported for CuInS2, AgInS2, Ag2S, and InP QDs. For example, Yong et al. demonstrated that cells treated with micelle-encapsulated CuInS2 QDs for 24 h maintained greater than 80% viability even at a particle concentration as high as 195 μg/mL, suggesting the low cytotoxicity associated with these QDs [100]. The authors also compared the cytotoxicity of cadmium-based and cadmium-free QDs, where cysteine-coated CdTe QDs were synthesized and used as a reference for MTS studies. It was observed that the particle concentrations corresponding to 50% cell viability were 100 μg/mL and 300 μg/mL for CdTe and CuInS2 QDs, respectively, in Panc-1 cells. This indicates that CuInS2 QDs can be safely loaded into cells at a higher concentration for bioimaging studies. Recently, Liu et al. investigated the cytotoxicity of AgInS2 QDs [104]. They assessed the cytotoxicity of the AgInS2 NC sample on human pancreatic cancer cells (Panc-1) using the MTS assay. Exposure of the Panc-1 cells to AgInS2 QDs led to an insignificant change in cell viability. The cells treated with the QD formulation maintained greater than 80% viability even at a QD dosage as high as 500 μg/mL, demonstrating the low cytotoxicity of these QDs.
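Concentrations corresponding to 50% viability, like the 100 and 300 μg/mL figures quoted above, are typically read off a dose-response curve fitted to the MTS data. The sketch below fits a Hill-type curve by log-linearization; the data points are synthetic, loosely shaped like the quoted CuInS2 numbers, and are not the published measurements:

```python
import numpy as np

# Estimating the 50%-viability concentration (IC50) from MTS data.
# A Hill-type dose-response, v(c) = 100 / (1 + (c / IC50)^n),
# linearizes to  ln(100/v - 1) = n * ln(c) - n * ln(IC50),
# so an ordinary linear fit recovers both parameters.
conc = np.array([10, 30, 100, 200, 300, 500, 800], dtype=float)  # ug/mL (synthetic)
viab = np.array([97, 93, 82, 65, 52, 35, 22], dtype=float)       # % viable (synthetic)

x = np.log(conc)
y = np.log(100.0 / viab - 1.0)
n, b = np.polyfit(x, y, 1)      # slope = n, intercept = -n * ln(IC50)
ic50 = np.exp(-b / n)
print(f"Hill slope ~ {n:.2f}, IC50 ~ {ic50:.0f} ug/mL")
```

For these synthetic points the fit lands near 300 μg/mL, illustrating how a single summary number is extracted from a full viability-versus-concentration series.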
In the case of InP/ZnS QDs, the viability of treated cells was also in the range of 80–90% relative to that of untreated cells, even at a treatment concentration as high as ~300 μg/mL [106]. It is worth noting that this dosage is at least 30 times the cytotoxic dosage of CdTe QDs. Similarly, Mn-doped ZnS QDs displayed the same trend: the particle concentration that retained 50% cell viability was at least ten times the corresponding dosage of CdSe QDs. More importantly, no injuries were found in the major organs of nude mice after a high dosage of 100 μL of 50 nM QDs was intravenously injected into the small animal [115]. In addition to doped QDs, Hocaoglu et al. demonstrated that NIH/3T3 cells treated with Ag2S QDs at 600 μg/mL exhibited no significant difference from the control group [79]. Later, Zhang et al. investigated the biodistribution of PEGylated Ag2S QDs in mice over 2 months [128]. It was found that the injected QDs first accumulated in the reticuloendothelial system (RES) (e.g.,

28

Cadmium-Free Quantum Dots for Biophotonic Imaging and Sensing

863

spleen and liver) and were then cleared from the body after 60 days. The authors also found no changes in body weight, blood chemistry, or hematological parameters for mice treated with 30 mg/kg Ag2S QDs compared to the control group. All these studies suggest that the Ag2S QD formulation is nontoxic and can be further employed for clinical applications such as tumor-guided surgery, and possibly for therapeutic use as well. More recently, our group investigated the in vivo toxicity of a Si QD formulation [86]. Specifically, we treated mice with Si QDs at a dosage as high as ~380 mg/kg, and no changes in body weight, eating, drinking, exploratory behavior, activity, or physical features were observed over the 3-month evaluation period. More importantly, no abnormalities were detected in the histological analysis of the major organs harvested from the treated mice (Fig. 9). Based on this finding, we continued with

Fig. 9 H&E-stained tissue sections of the heart (A–D), kidney (E–H), liver (I–L), lungs (M–P), and spleen (Q–T) from mice treated with ~380 mg/kg of micelle-encapsulated Si QDs at different time points postinjection. The control group (column 1) was treated with saline only and sacrificed 24 h postinjection (Reprinted with permission from Ref. [86]. Copyright © 2011, American Chemical Society)

864

B. Zhang et al.

a pilot study of the Si QD formulation in nonhuman primates (NHPs) to check whether a similar trend could be observed in this advanced animal model [129]. Body weights were recorded daily, with no significant differences observed between treated and untreated animals. Similarly, the eating, drinking, grooming, exploratory behavior, physical features, neurological status, and urination of the treated animals were normal throughout the evaluation period. Blood chemistry parameters were determined in our study, and no sign of infection or toxic reactions attributable to the Si QDs was found; indicators of liver function showed no abnormalities, and no signs of kidney impairment were observed. More importantly, analyses of histological images from various organs (e.g., the brain, cerebellum, atrium, ventricle, heart muscle, lung, kidney, liver, spleen, renal tubule, and intestine) revealed no discernible sign of nanoparticle-induced changes, and pathologists confirmed that no signs of kidney, liver, or spleen disease or damage were present in these histological images. Several in vitro studies have demonstrated that C-dots are highly biocompatible, with bioinertness comparable to that of PEG molecules [65, 130]. A more recent study showed that the viability of HeLa cells remained over 90% when treated with chemically functionalized C-dots at a concentration of ~500 μg/mL [60]. Yang et al. performed an in vivo C-dot toxicity study using CD-1 mice; isotope ratio analysis revealed that the injected 13C-dots accumulated in the liver and spleen after 6 h of treatment, and no abnormal behavior was observed for mice treated with C-dots at concentrations as high as 40 mg/kg [65]. Huang et al. reported rapid renal clearance of C-dots from the small-animal body; such a fast excretion rate may be attributed to their ultrasmall hydrodynamic diameter of around 4.1 nm [122].
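The link between hydrodynamic size and rapid renal clearance noted for C-dots can be made quantitative: particles below the glomerular filtration threshold (commonly quoted as roughly 5.5 nm) are excreted quickly, and a one-compartment first-order elimination model gives a feel for the time scales. Both the cutoff constant and the half-life used below are illustrative assumptions, not values from the cited studies.

```python
import math

RENAL_CUTOFF_NM = 5.5  # commonly cited glomerular filtration threshold (assumption)

def renally_clearable(hydrodynamic_d_nm: float) -> bool:
    """Crude size-based screen for rapid renal excretion."""
    return hydrodynamic_d_nm < RENAL_CUTOFF_NM

def fraction_remaining(t_days: float, half_life_days: float) -> float:
    """One-compartment first-order elimination: C(t)/C0 = 2^(-t / t_half)."""
    return 2.0 ** (-t_days / half_life_days)

print(renally_clearable(4.1))               # C-dots, ~4.1 nm hydrodynamic diameter
print(f"{fraction_remaining(60, 15):.3f}")  # body burden after 60 days, assumed 15-day half-life
```

Under such a model, an injected dose with an assumed 15-day elimination half-life would be reduced to a few percent by day 60, consistent in spirit with the "cleared after 60 days" observations above.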
In the case of graphene QDs, MC3T3 cells exposed to 400 μg/mL GQDs maintained over 80% viability, suggesting that GQDs and C-dots possess similar biocompatibility [131]. All these results suggest that QD formulations rationally designed from nontoxic compositions (CuInS2, AgInS2, InP, doped Zn chalcogenides, C, and Si) and FDA-approved components can be expected to yield nontoxic multifunctional QDs for biomedical and clinical research applications.

Summary and Future Outlook

In this review, we have summarized the current status of research on engineering cadmium-free QDs for biomedical and medicinal applications. Specifically, we have highlighted current findings on the development of bioconjugated InP, CuInS2, AgInS2, doped Zn chalcogenide, C, and Si QDs for cell labeling, targeted delivery, and tumor imaging, as well as their biodistribution profiles. From the reported data, it is evident that these nanocrystals have much lower cytotoxicity than cadmium-based QDs. While cadmium-free QDs have emerged as promising candidates to replace cadmium-based QDs for biological applications, a number of issues obviously need to be overcome before their full potential can be realized for clinical applications. For example, after coating the QDs with


biocompatible polymers, the dimensions of the QDs increase to a size close to that of a large protein, which may affect their excretion profile. Also, cadmium-free QDs tend to have an emission full width at half maximum 2–3 times larger than that of cadmium-based QDs, which makes it difficult to employ these QDs for highly sensitive multiplexed imaging in vitro and in vivo. In addition, the elemental composition of ternary QDs varies from particle to particle within the same synthesis batch, and it is challenging to isolate particles of identical composition. It is worth noting that most bioimaging experiments are performed using small animal models such as rats and mice, since they are easy to care for and cost-effective. However, imaging of small animals differs from imaging in humans, and experimental imaging procedures for animals cannot be scaled up proportionally to humans. To overcome this challenge, new optical imaging systems and individualized treatment plans for the various QD formulations will be essential for effective imaging of targeted parts of the human body. Naturally, many more studies and investigations are needed to address these issues before cadmium-free QDs can be translated to clinical use. This process may seem slow, but many active and distinguished researchers worldwide are currently optimizing and testing QDs in vivo, work that should eventually lead to mature formulations for human theranostic applications. Judging from the current trend in the QD biomedical research community, there is a continuing demand for new types of cadmium-free QDs, since each type has its own specific biomedical use.
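The multiplexing penalty of a broad emission line can be illustrated by the crosstalk between two emitters modeled as Gaussian bands: the fraction of one emitter's signal leaking into a detection window centered on its neighbor. The peak positions, window, and linewidths below are arbitrary illustrative choices, not values from the studies reviewed here.

```python
import math

def gauss_window_fraction(peak_nm, fwhm_nm, win_lo, win_hi):
    """Fraction of a Gaussian emission band falling inside [win_lo, win_hi] nm."""
    sigma = fwhm_nm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    cdf = lambda x: 0.5 * (1.0 + math.erf((x - peak_nm) / (sigma * math.sqrt(2.0))))
    return cdf(win_hi) - cdf(win_lo)

# Detection window 640-680 nm for an emitter at 660 nm; a neighbor emits at 600 nm.
for fwhm in (30, 90):  # narrow (Cd-based-like) vs. ~3x broader (Cd-free-like) linewidth
    leak = gauss_window_fraction(600, fwhm, 640, 680)
    print(f"FWHM {fwhm} nm: crosstalk into neighbor's window = {leak:.1%}")
```

With these assumed numbers, tripling the linewidth raises the crosstalk from a fraction of a percent to over ten percent, which is why broad cadmium-free emitters are hard to use in tightly spaced multiplexed channels.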
In the near future, we foresee that the colloidal synthesis protocols of cadmium-free QDs will be optimized and standardized to the point where researchers can easily prepare a wide variety of size- and shape-controlled QDs for specific biomedical applications such as siRNA delivery, single-molecule imaging, targeted tumor imaging, and QD–FRET systems for ultrasensitive imaging of cells.

References

1. Alivisatos A (1996) Semiconductor clusters, nanocrystals, and quantum dots. Science 271(5251):933–937
2. Grieve K, Mulvaney P, Grieser F (2000) Synthesis and electronic properties of semiconductor nanoparticles/quantum dots. Curr Opin Coll Interf Sci 5(1):168–172
3. Michalet X et al (2005) Quantum dots for live cells, in vivo imaging, and diagnostics. Science 307(5709):538–544
4. Bruchez M et al (1998) Semiconductor nanocrystals as fluorescent biological labels. Science 281(5385):2013–2016
5. Chan WC, Nie S (1998) Quantum dot bioconjugates for ultrasensitive nonisotopic detection. Science 281(5385):2016–2018
6. Zhang C-Y et al (2005) Single-quantum-dot-based DNA nanosensor. Nat Mater 4(11):826–831
7. Gao X et al (2004) In vivo cancer targeting and imaging with semiconductor quantum dots. Nat Biotechnol 22(8):969–976


8. Dubertret B et al (2002) In vivo imaging of quantum dots encapsulated in phospholipid micelles. Science 298(5599):1759–1762
9. Wu X et al (2002) Immunofluorescent labeling of cancer marker Her2 and other cellular targets with semiconductor quantum dots. Nat Biotechnol 21(1):41–46
10. Bagalkot V et al (2007) Quantum dot-aptamer conjugates for synchronous cancer imaging, therapy, and sensing of drug delivery based on bi-fluorescence resonance energy transfer. Nano Lett 7(10):3065–3070
11. Yang H et al (2006) GdIII-functionalized fluorescent quantum dots as multimodal imaging probes. Adv Mater 18(21):2890–2894
12. Wang S et al (2007) Core/shell quantum dots with high relaxivity and photoluminescence for multimodality imaging. J Am Chem Soc 129(13):3848–3856
13. Chen O et al (2013) Compact high-quality CdSe–CdS core–shell nanocrystals with narrow emission linewidths and suppressed blinking. Nat Mater 12:445–451
14. Dabbousi B et al (1997) (CdSe)ZnS core-shell quantum dots: synthesis and characterization of a size series of highly luminescent nanocrystallites. J Phys Chem B 101(46):9463–9475
15. Zheng Y, Gao S, Ying JY (2007) Synthesis and cell-imaging applications of glutathione-capped CdTe quantum dots. Adv Mater 19(3):376–380
16. Hardman R (2006) A toxicologic review of quantum dots: toxicity depends on physicochemical and environmental factors. Environ Health Perspect 114(2):165
17. Yong K-T et al (2013) Nanotoxicity assessment of quantum dots: from cellular to primate studies. Chem Soc Rev 42(3):1236–1250
18. Derfus AM, Chan WC, Bhatia SN (2004) Probing the cytotoxicity of semiconductor quantum dots. Nano Lett 4(1):11–18
19. Murray CB, Norris DJ, Bawendi MG (1993) Synthesis and characterization of nearly monodisperse CdE (E = sulfur, selenium, tellurium) semiconductor nanocrystallites. J Am Chem Soc 115(19):8706–8715
20. Murray C, Kagan C, Bawendi M (2000) Synthesis and characterization of monodisperse nanocrystals and close-packed nanocrystal assemblies. Ann Rev Mater Sci 30(1):545–610
21. Trindade T, O’Brien P, Pickett NL (2001) Nanocrystalline semiconductors: synthesis, properties, and perspectives. Chem Mater 13(11):3843–3858
22. Samokhvalov P, Artemyev M, Nabiev I (2013) Basic principles and current trends in colloidal synthesis of highly luminescent semiconductor nanocrystals. Chem A Eur J 19(5):1534–1546
23. Park J et al (2007) Synthesis of monodisperse spherical nanocrystals. Angew Chem Int Ed 46(25):4630–4660
24. Xie R, Rutherford M, Peng X (2009) Formation of high-quality I–III–VI semiconductor nanocrystals by tuning relative reactivity of cationic precursors. J Am Chem Soc 131(15):5691–5697
25. Bharali DJ et al (2005) Folate-receptor-mediated delivery of InP quantum dots for bioimaging using confocal and two-photon microscopy. J Am Chem Soc 127(32):11364–11371
26. Ryu E et al (2009) Step-wise synthesis of InP/ZnS core–shell quantum dots and the role of zinc acetate. Chem Mater 21(4):573–575
27. Xie R, Battaglia D, Peng X (2007) Colloidal InP nanocrystals as efficient emitters covering blue to near-infrared. J Am Chem Soc 129(50):15432–15433
28. Kortan A et al (1990) Nucleation and growth of cadmium selenide on zinc sulfide quantum crystallite seeds, and vice versa, in inverse micelle media. J Am Chem Soc 112(4):1327–1332
29. Zimmer JP et al (2006) Size series of small indium arsenide-zinc selenide core-shell nanocrystals and their application to in vivo imaging. J Am Chem Soc 128(8):2526–2527
30. Gerion D et al (2001) Synthesis and properties of biocompatible water-soluble silica-coated CdSe/ZnS semiconductor quantum dots. J Phys Chem B Condens Phase 105(37):8861–8871
31. Fernández-Argüelles MT et al (2007) Synthesis and characterization of polymer-coated quantum dots with integrated acceptor dyes as FRET-based nanoprobes. Nano Lett 7(9):2613–2617


32. Smith AM, Nie S (2008) Minimizing the hydrodynamic size of quantum dots with multifunctional multidentate polymer ligands. J Am Chem Soc 130(34):11278–11279
33. Alivisatos AP, Gu W, Larabell C (2005) Quantum dots as cellular probes. Annu Rev Biomed Eng 7:55–76
34. Wang Y et al (2013) Functionalized quantum dots for biosensing and bioimaging and concerns on toxicity. ACS Appl Mater Interf 5:2786
35. Qian J et al (2007) Imaging pancreatic cancer using surface-functionalized quantum dots. J Phys Chem B 111(25):6969–6972
36. Kumar S et al (2013) Room temperature ferromagnetism in Ni doped ZnS nanoparticles. J Alloys Compd 554:357–362
37. Xie RS et al (2011) Fe:ZnSe semiconductor nanocrystals: synthesis, surface capping, and optical properties. J Alloys Compd 509(7):3314–3318
38. Zou WS et al (2011) Synthesis in aqueous solution and characterisation of a new cobalt-doped ZnS quantum dot as a hybrid ratiometric chemosensor. Anal Chim Acta 708(1–2):134–140
39. Pradhan N et al (2005) An alternative of CdSe nanocrystal emitters: pure and tunable impurity emissions in ZnSe nanocrystals. J Am Chem Soc 127(50):17586–17587
40. Liu N et al (2012) Enhanced luminescence of ZnSe:Eu3+/ZnS core-shell quantum dots. J Non Cryst Solids 358(17):2353–2356
41. Reddy DA et al (2012) Effect of Mn co-doping on the structural, optical and magnetic properties of ZnS:Cr nanoparticles. J Alloys Compd 537:208–215
42. Pradhan N et al (2007) Efficient, stable, small, and water-soluble doped ZnSe nanocrystal emitters as non-cadmium biomedical labels. Nano Lett 7(2):312–317
43. Pradhan N, Peng XG (2007) Efficient and color-tunable Mn-doped ZnSe nanocrystal emitters: control of optical performance via greener synthetic chemistry. J Am Chem Soc 129(11):3339–3347
44. Wolkin M et al (1999) Electronic states and luminescence in porous silicon quantum dots: the role of oxygen. Phys Rev Lett 82(1):197–200
45. Warner JH et al (2005) Water-soluble photoluminescent silicon quantum dots. Angew Chem Int Ed 117(29):4626–4630
46. Erogbogbo F et al (2008) Biocompatible luminescent silicon quantum dots for imaging of cancer cells. ACS Nano 2(5):873–878
47. Park J-H et al (2009) Biodegradable luminescent porous silicon nanoparticles for in vivo applications. Nat Mater 8(4):331–336
48. Belomoin G et al (2002) Observation of a magic discrete family of ultrabright Si nanoparticles. Appl Phys Lett 80(5):841–843
49. Wilcoxon J, Samara G, Provencio P (1999) Optical and electronic properties of Si nanoclusters synthesized in inverse micelles. Phys Rev B 60(4):2704
50. Holmes JD et al (2001) Highly luminescent silicon nanocrystals with discrete optical transitions. J Am Chem Soc 123(16):3743–3748
51. Heath JR (1992) A liquid-solution-phase synthesis of crystalline silicon. Science 258(5085):1131–1133
52. Bley RA, Kauzlarich SM (1996) A low-temperature solution-phase route for the synthesis of silicon nanoclusters. J Am Chem Soc 118(49):12461–12462
53. Bapat A et al (2003) Synthesis of highly oriented, single-crystal silicon nanoparticles in a low-pressure, inductively coupled plasma. J Appl Phys 94(3):1969–1974
54. Littau K et al (1993) A luminescent silicon nanocrystal colloid via a high-temperature aerosol reaction. J Phys Chem 97(6):1224–1230
55. Li X et al (2003) Process for preparing macroscopic quantities of brightly photoluminescent silicon nanoparticles with emission spanning the visible spectrum. Langmuir 19(20):8490–8496
56. Hua F et al (2006) Organically capped silicon nanoparticles with blue photoluminescence prepared by hydrosilylation followed by oxidation. Langmuir 22(9):4363–4370


57. Sun YP et al (2006) Quantum-sized carbon dots for bright and colorful photoluminescence. J Am Chem Soc 128(24):7756–7757
58. Zheng LY et al (2009) Electrochemiluminescence of water-soluble carbon nanocrystals released electrochemically from graphite. J Am Chem Soc 131(13):4564
59. Lu J et al (2009) One-pot synthesis of fluorescent carbon nanoribbons, nanoparticles, and graphene by the exfoliation of graphite in ionic liquids. ACS Nano 3(8):2367–2375
60. Ding H et al (2013) Luminescent carbon quantum dots and their application in cell imaging. New J Chem 37(8):2515–2520
61. Jeong J et al (2012) Color-tunable photoluminescent fullerene nanoparticles. Adv Mater 24(15):1999–2003
62. Luo PG et al (2014) Carbon-based quantum dots for fluorescence imaging of cells and tissues. RSC Adv 4(21):10791–10807
63. Wang F et al (2010) One-step synthesis of highly luminescent carbon dots in noncoordinating solvents. Chem Mater 22(16):4528–4530
64. Sahu S et al (2012) Simple one-step synthesis of highly luminescent carbon dots from orange juice: application as excellent bio-imaging agents. Chem Commun 48(70):8835–8837
65. Yang ST et al (2009) Carbon dots as nontoxic and high-performance fluorescence imaging agents. J Phys Chem C 113(42):18110–18114
66. Wang X et al (2010) Bandgap-like strong fluorescence in functionalized carbon nanoparticles. Angew Chem Int Ed 49(31):5310–5314
67. Bhunia SK et al (2013) Carbon nanoparticle-based fluorescent bioimaging probes. Sci Rep 3:1473
68. Shen JH et al (2012) One-pot hydrothermal synthesis of graphene quantum dots surface-passivated by polyethylene glycol and their photoelectric conversion under near-infrared light. New J Chem 36(1):97–101
69. Lu J et al (2011) Transforming C60 molecules into graphene quantum dots. Nat Nanotechnol 6(4):247–252
70. Krunks M et al (1999) Structural and optical properties of sprayed CuInS2 films. Thin Solid Films 338(1):125–130
71. Torimoto T et al (2007) Facile synthesis of ZnS-AgInS2 solid solution nanoparticles for a color-adjustable luminophore. J Am Chem Soc 129(41):12388–12389
72. Weissleder R (2001) A clearer vision for in vivo imaging. Nat Biotechnol 19(4):316–317
73. Chemseddine A, Weller H (1993) Highly monodisperse quantum sized CdS particles by size selective precipitation. Berichte der Bunsengesellschaft für physikalische Chemie 97(4):636–638
74. Murray CB et al (2001) Colloidal synthesis of nanocrystals and nanocrystal superlattices. IBM J Res Dev 45(1):47–56
75. Nose K et al (2009) Synthesis of ternary CuInS2 nanocrystals; phase determination by complex ligand species. Chem Mater 21(13):2607–2613
76. Jiang P et al (2012) Water-soluble Ag2S quantum dots for near-infrared fluorescence imaging in vivo. Biomaterials 33(20):5130–5135
77. Zhang Y et al (2012) Ag2S quantum dot: a bright and biocompatible fluorescent nanoprobe in the second near-infrared window. ACS Nano 6(5):3695–3702
78. Zhu C-N et al (2013) Ag2Se quantum dots with tunable emission in the second near-infrared window. ACS Appl Mater Interfaces 5(4):1186–1189
79. Hocaoglu I et al (2012) Development of highly luminescent and cytocompatible near-IR-emitting aqueous Ag2S quantum dots. J Mater Chem 22(29):14674–14681
80. Hong G et al (2012) In vivo fluorescence imaging with Ag2S quantum dots in the second near-infrared region. Angew Chem Int Ed 124(39):9956–9959
81. Yamazaki K et al (2000) Long term pulmonary toxicity of indium arsenide and indium phosphide instilled intratracheally in hamsters. J Occup Health Engl Ed 42(4):169–178
82. Wu P, Yan XP (2013) Doped quantum dots for chemo/biosensing and bioimaging. Chem Soc Rev 42(12):5489–5521


83. Yuan X et al (2014) Thermal stability of Mn2+ ion luminescence in Mn-doped core-shell quantum dots. Nanoscale 6(1):300–307
84. Jurbergs D et al (2006) Silicon nanocrystals with ensemble quantum yields exceeding 60%. Appl Phys Lett 88(23):233116-3
85. He GS et al (2008) Two- and three-photon absorption and frequency upconverted emission of silicon quantum dots. Nano Lett 8(9):2688–2692
86. Erogbogbo F et al (2011) In vivo targeted cancer imaging, sentinel lymph node mapping and multi-channel imaging with biocompatible silicon nanocrystals. ACS Nano 5(1):413
87. Fan JY, Chu PK (2010) Group IV nanoparticles: synthesis, properties, and biological applications. Small 6(19):2080–2098
88. Zeng S et al (2014) Nanomaterials enhanced surface plasmon resonance for biological and chemical sensing applications. Chem Soc Rev 43(10):3426–3452
89. Ding C, Zhu A, Tian Y (2013) Functional surface engineering of C-dots for fluorescent biosensing and in vivo bioimaging. Acc Chem Res 47(1):20–30
90. Zhang Z et al (2012) Graphene quantum dots: an emerging material for energy-related applications and beyond. Energy Environ Sci 5(10):8869–8890
91. Wang X et al (2010) Bandgap-like strong fluorescence in functionalized carbon nanoparticles. Angew Chem Int Ed 122(31):5438–5442
92. Cao L et al (2012) Photoluminescence properties of graphene versus other carbon nanomaterials. Acc Chem Res 46(1):171–180
93. Sun H et al (2013) Highly photoluminescent amino-functionalized graphene quantum dots used for sensing copper ions. Chem A Eur J 19(40):13362–13368
94. Li H et al (2010) Water-soluble fluorescent carbon quantum dots and photocatalyst design. Angew Chem Int Ed 49(26):4430–4434
95. Bacon M, Bradley SJ, Nann T (2014) Graphene quantum dots. Part Part Syst Char 31(4):415–428
96. Anilkumar P et al (2011) Toward quantitatively fluorescent carbon-based “quantum” dots. Nanoscale 3(5):2023–2027
97. Zhu L et al (2013) Plant leaf-derived fluorescent carbon dots for sensing, patterning and coding. J Mater Chem C 1(32):4925–4932
98. Krysmann MJ, Kelarakis A, Giannelis EP (2012) Photoluminescent carbogenic nanoparticles directly derived from crude biomass. Green Chem 14(11):3141–3145
99. Li L et al (2009) Highly luminescent CuInS2/ZnS core/shell nanocrystals: cadmium-free quantum dots for in vivo imaging. Chem Mater 21(12):2422–2429
100. Yong K-T et al (2010) Synthesis of ternary CuInS2/ZnS quantum dot bioconjugates and their applications for targeted cancer bioimaging. Integr Biol 2(2–3):121–129
101. Pons T et al (2010) Cadmium-free CuInS2/ZnS quantum dots for sentinel lymph node imaging with reduced toxicity. ACS Nano 4(5):2531–2538
102. Deng D et al (2012) High-quality CuInS2/ZnS quantum dots for in vitro and in vivo bioimaging. Chem Mater 24(15):3029–3037
103. Guo W et al (2013) Synthesis of Zn-Cu-In-S/ZnS core/shell quantum dots with inhibited blueshift photoluminescence and applications for tumor targeted bioimaging. Theranostics 3(2):99–108
104. Liu L et al (2013) Synthesis of luminescent near-infrared AgInS2 nanocrystals as optical probes for in vivo applications. Theranostics 3(2):109–115
105. Wang Y, Yan X-P (2013) Fabrication of vascular endothelial growth factor antibody bioconjugated ultrasmall near-infrared fluorescent Ag2S quantum dots for targeted cancer imaging in vivo. Chem Commun 49(32):3324–3326
106. Yong K-T et al (2009) Imaging pancreatic cancer using bioconjugated InP quantum dots. ACS Nano 3(3):502
107. Low PS, Antony AC (2004) Folate receptor-targeted drugs for cancer and inflammatory diseases. Adv Drug Deliv Rev 56(8):1055


108. Lee RJ, Low PS (1994) Delivery of liposomes into cultured KB cells via folate receptor-mediated endocytosis. J Biol Chem 269(5):3198–3204
109. Antony A (1992) The biological chemistry of folate receptors. Blood 79(11):2807–2820
110. Chang SQ et al (2011) One-step fabrication of biocompatible chitosan-coated ZnS and ZnS:Mn2+ quantum dots via a gamma-radiation route. Nanoscale Res Lett 6:1–7
111. Jayasree A et al (2011) Mannosylated chitosan-zinc sulphide nanocrystals as fluorescent bioprobes for targeted cancer imaging. Carbohydr Polym 85(1):37–43
112. Manzoor K et al (2009) Bio-conjugated luminescent quantum dots of doped ZnS: a cyto-friendly system for targeted cancer imaging. Nanotechnology 20(6):065102
113. Xu ZG et al (2011) Glycopolypeptide-encapsulated Mn-doped ZnS quantum dots for drug delivery: fabrication, characterization, and in vitro assessment. Coll Surf B Biointerf 88(1):51–57
114. Gaceur M et al (2012) Polyol-synthesized Zn0.9Mn0.1S nanoparticles as potential luminescent and magnetic bimodal imaging probes: synthesis, characterization, and toxicity study. J Nanopart Res 14(7):1
115. Yu JH et al (2013) High-resolution three-photon biomedical imaging using doped ZnS nanocrystals. Nat Mater 12(4):359–366
116. Erogbogbo F et al (2010) Biocompatible magnetofluorescent probes: luminescent silicon quantum dots coupled with superparamagnetic iron (III) oxide. ACS Nano 4(9):5131
117. Brooks PC et al (1994) Integrin αvβ3 antagonists promote tumor regression by inducing apoptosis of angiogenic blood vessels. Cell 79(7):1157–1164
118. Peng H, Travas-Sejdic J (2009) Simple aqueous solution route to luminescent carbogenic dots from carbohydrates. Chem Mater 21(23):5563–5565
119. Li X et al (2011) Preparation of carbon quantum dots with tunable photoluminescence by rapid laser passivation in ordinary organic solvents. Chem Commun 47(3):932–934
120. Cao L et al (2007) Carbon dots for multiphoton bioimaging. J Am Chem Soc 129(37):11318–11319
121. Kong B et al (2012) Carbon dot-based inorganic–organic nanosystem for two-photon imaging and biosensing of pH variation in living cells and tissues. Adv Mater 24(43):5844–5848
122. Huang X et al (2013) Effect of injection routes on the biodistribution, clearance, and tumor uptake of carbon dots. ACS Nano 7(7):5684–5693
123. Dong Y et al (2012) One-step and high yield simultaneous preparation of single- and multi-layer graphene quantum dots from CX-72 carbon black. J Mater Chem 22(18):8764–8766
124. Liu Q et al (2013) Strong two-photon-induced fluorescence from photostable, biocompatible nitrogen-doped graphene quantum dots for cellular and deep-tissue imaging. Nano Lett 13(6):2436–2441
125. Wu X et al (2013) Fabrication of highly fluorescent graphene quantum dots using l-glutamic acid for in vitro/in vivo imaging and sensing. J Mater Chem C 1(31):4676–4684
126. Su Y et al (2009) The cytotoxicity of cadmium based, aqueous phase-synthesized, quantum dots and its modulation by surface coating. Biomaterials 30(1):19–25
127. Chen N et al (2012) The cytotoxicity of cadmium-based quantum dots. Biomaterials 33(5):1238–1244
128. Zhang Y et al (2013) Biodistribution, pharmacokinetics and toxicology of Ag2S near-infrared quantum dots in mice. Biomaterials 34(14):3639–3646
129. Liu J et al (2013) Assessing clinical prospects of silicon quantum dots: acute doses in mice prove safe in monkeys. ACS Nano 7:7303
130. Sun Y-P et al (2010) Cytotoxicity evaluations of fluorescent carbon nanoparticles. Nano Life 1(01–02):153–161
131. Zhu S et al (2012) Surface chemistry routes to modulate the photoluminescence of graphene quantum dots: from fluorescence mechanism to up-conversion bioimaging applications. Adv Funct Mater 22(22):4732–4740

Development of Extraordinary Optical Transmission-Based Techniques for Biomedical Applications

29

Seunghun Lee, Hyerin Song, Seonhee Hwang, Jong-ryul Choi, and Kyujung Kim

Contents
Introduction ..... 872
Theoretical Background ..... 873
Fabrication ..... 873
Focused Ion-Beam Lithography ..... 874
Electron-Beam Lithography ..... 875
Nanoimprinting ..... 876
Photolithography ..... 877
Applications ..... 879
EOT Gas Sensing ..... 879
Biological Sensing ..... 884
EOT-Based Imaging Techniques ..... 887
Summary ..... 889
References ..... 891

Abstract

Highly sensitive detection techniques have drawn tremendous interest because they allow the precise tracking of molecular interactions and the observation of dynamics on a nanometric scale. Intracellular and extracellular processes can be measured at the molecular level; thus, highly sensitive techniques advance our understanding of biomolecular events in cellular and subcellular conditions and have been applied to many areas such as cellular and molecular analysis and ex vivo and in vivo observations. In this chapter, we review near-field biosensors that rely on extraordinary optical transmission (EOT) and some recently emerged application techniques based on the localization of surface plasmons. We also describe the fabrication methods for various nanostructures: first, focused ion-beam lithography, which employs high-energy ions to create high-precision nanopatterns; second, electron-beam lithography, which capitalizes on a highly focused electron beam to draw submicron-size patterns on metallic surfaces; third, nanoimprint lithography, suitable for large-scale nanostructure fabrication; and finally, photolithography, feasible for cost-effective fabrication. At the end of this chapter, we introduce some EOT-based applications for the enhancement of sensitivity and techniques that assist high-resolution imaging.

S. Lee • H. Song • K. Kim (*) Department of Cogno-Mechatronics Engineering, Pusan National University, Busan, South Korea e-mail: [email protected]; [email protected]; [email protected]; [email protected]

S. Hwang Department of Advanced Circuit Interconnection, Pusan National University, Busan, South Korea e-mail: [email protected]

J.-r. Choi School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea; Medical Device Development Center, Daegu-Gyeongbuk Medical Innovation Foundation (DGMIF), Daegu, South Korea e-mail: [email protected]

© Springer Science+Business Media Dordrecht 2017 A.H.-P. Ho et al. (eds.), Handbook of Photonics for Biomedical Engineering, DOI 10.1007/978-94-007-5052-4_1

Keywords

Extraordinary Optical Transmission (EOT) • Surface Plasmon Polaritons (SPPs) • Nanoaperture • Nanoplasmonics • Sensors • Imaging Technique

Introduction

Using newly developed measurement and imaging techniques, biomolecular activation and cellular dynamics can be analyzed, and detection of small molecular events in subcellular environments is now possible. A significant restriction in previous optical imaging measurements is the lateral and axial spatial resolution imposed by Abbe's diffraction limit. To detect or observe molecular dynamics on subnanometric scales, the diffraction limit must be overcome. Therefore, new optical techniques that can break the diffraction limit and enhance spatial resolution are desired. Recently, multiple microscopic methods that can replace current techniques have been proposed. Stimulated emission depletion microscopy (STED) [1], which is based on the establishment of narrow excitation light spots, is one such strategy. Photoactivated localization microscopy (PALM) [2] (obtained by the computation of the stochastic photoactivation of fluorescent molecules) and structured illumination microscopy (SIM), which reconstructs high-resolution images from partial images under spatially encoded sinusoidal illumination [3], can also provide good solutions. Additionally, tomographic approaches that reinforce optical-sectioning measurement and imaging capabilities have been introduced: total internal reflection fluorescence microscopy (TIRFM) [4] and selective plane illumination microscopy (SPIM) [5]. These techniques have attracted much attention, but they retain barriers to commercialization such as limited detection, limited imaging speed, experimental complexity, and high cost. Meanwhile, surface plasmon (SP) phenomena can be a


Development of Extraordinary Optical Transmission-Based Techniques. . .


good alternative for imaging enhancements by virtue of nanotechnology [6]. Applying nanotechnology on the surface has many merits, notably resolution improvement because of field relocalization using nanostructures. Additionally, localized field sizes can be designed to be smaller than the diffraction limit. When the area of a localized field is nearly identical to that of a single fluorescent molecule, we can assume that the emitted light comes from a single molecule [5]. The extraordinary transmission phenomenon is one of the representative techniques for the SP-based approach [6]. In this chapter, we summarize various lithographic techniques to fabricate optimized nanoapertures for extraordinary optical transmission (EOT) and introduce interesting applications using EOT samples. E-beam lithography is one method that can perforate nanosized holes. When a high-precision nanohole array is needed, focused ion-beam (FIB) lithography is a better way to achieve that quality. Another strategy, suited to large-area patterning and mass production, is nanoimprint lithography. Some EOT-based applications for the enhancement of sensitivity are also introduced.

Theoretical Background

When light passes through matter, the beam spreads in all directions; how light interacts with other objects is a long-standing problem in optics. In 1944, a fundamental theory was established by Bethe [7], who derived a formula for the optical transmission through a circular hole perforating an infinitely thin perfect electric conductor:

T = (64/27π²) (r/λ)⁴

where λ is the wavelength of the incident light and r is the radius of the hole. According to Bethe's theory, light passing through a subwavelength hole barely exists. However, Ebbesen et al. found that arrays of holes show highly unusual zero-order transmission spectra at wavelengths larger than the array period, without diffraction; this is called extraordinary optical transmission [8]. The surprising feature is the intensity of the transmitted light: according to Bethe's theory, the transmission should be far lower than what was observed. One of the reasons this unusual phenomenon occurs is the coupling of light with SPs, notably on a periodically patterned metallic nanostructure. An SP is a collective oscillation of electrons at the interface between a metal and an insulator [8].
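As a quick numerical illustration of how strongly Bethe's formula suppresses transmission, the expression above can be evaluated directly (a sketch; the hole radius and wavelength below are illustrative, not from the cited works):

```python
import math

def bethe_transmission(radius_nm: float, wavelength_nm: float) -> float:
    """Bethe transmission efficiency T = (64 / (27 * pi^2)) * (r / lambda)^4
    for a circular hole in an infinitely thin perfect electric conductor."""
    return (64.0 / (27.0 * math.pi ** 2)) * (radius_nm / wavelength_nm) ** 4

# A 150-nm-radius hole probed with 600-nm light: transmission is ~1e-3,
# which is why Ebbesen's strongly enhanced transmission was unexpected.
print(f"{bethe_transmission(150.0, 600.0):.2e}")
```

Halving the radius at fixed wavelength reduces T by a factor of 16, reflecting the steep (r/λ)⁴ scaling.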

Fabrication

Although the theoretical fundamentals of EOT through various nanoapertures are well established, accurate fabrication of nanoapertures for EOT and plasmonic applications is required for experimental and industrial implementations. During


the last few decades, several fabricating techniques have been developed and applied to investigate plasmonic nanostructures including nanoapertures for EOT. Several techniques for producing nanohole structures have been developed. Notably, two main concepts suited for the fabrication of nanohole arrays have been introduced: dead-ended nanohole arrays and through-nanohole arrays [9]. Dead-ended nanoholes are used when fluid-containing analyte is moved over the nanohole array; through-nanohole arrays are used for nanoconfinement of analyte. Here, we describe several techniques related to nanohole array-based sensing. In this section, we explore relevant fabrication methods, ranging from techniques used in industrial areas to newly investigated and experimentally applied methods.

Focused Ion-Beam Lithography

FIB lithography was the first method developed that can create high-precision nanohole arrays on a metal surface. The main principle of this method is that ion–material collisions remove target material from the sample. FIB comprises two processes: sputtering (removal of material from the substrate) using high-energy ions and redeposition (relocation onto another surface). The FIB technique has many advantages: FIB can fabricate various nanostructures on the substrate directly, without a mask, a resist, or a chemical developer. Additionally, FIB provides improved resolution compared to photolithography and e-beam lithography because ion beams have smaller wavelengths than optical/UV light and electron beams. Also, by regulating the ion energy, the penetration depth of the ions is easily controlled. Therefore, FIB has been used to fabricate accurate nanostructures, including nanoapertures for plasmonic applications. Figure 1 shows a 300-nm nanohole pattern fabricated by FIB on a silicon nitride substrate. The single nanohole, which has a 100-nm height, is clear and solid, whereas such deep and clear nanohole patterns cannot easily be fabricated by e-beam lithography. Lesuffleur et al. investigated real-time biosensors using periodic nanohole arrays constructed with ion-beam lithographic fabrication, as shown in Fig. 2 [10]. Xue et al. introduced surface plasmon enhanced EOT of gold quasicrystals using

Fig. 1 A SEM image of a 300-nm diameter nanohole fabricated by focused ion-beam lithography


Fig. 2 A SEM image of a fabricated double-hole array using focused ion-beam lithography. The period of each double-hole is 800 nm. An inset graph illustrates the optical transmission through the double-hole array (This figure is used with permission from Ref. [10] by the American Institute of Physics)

focused ion-beam fabrication [11]. Furthermore, ion-beam lithography can be employed to construct complicated nanostructures that cannot be realized by photolithographic and e-beam lithographic methods, such as three-dimensional nanostructures and metamaterials [12, 13]. A drawback of ion-beam lithography is that the time consumed per fabricated unit area is longer than in photolithography and e-beam lithography; this longer time is caused by the small size of the beam used in the serial writing process. Because of this shortcoming, ion-beam lithography is not appropriate for mass fabrication of nanostructures.
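A back-of-envelope estimate illustrates why serial milling scales poorly with area; every number here (sputter rate, beam current, hole count) is purely illustrative and not taken from the references:

```python
def fib_mill_time_s(volume_um3: float, beam_current_nA: float,
                    sputter_rate_um3_per_nC: float = 0.2) -> float:
    """Rough milling time = removed volume / (sputter rate * beam current).
    The sputter rate (um^3 per nC of delivered ions) depends strongly on
    the material and geometry; 0.2 is only an order-of-magnitude guess."""
    return volume_um3 / (sputter_rate_um3_per_nC * beam_current_nA)

# A 10 x 10 um field of 400 holes, each 300 nm wide and 100 nm deep:
removed_volume = 400 * 3.1416 * 0.15 ** 2 * 0.1   # total volume in um^3
print(f"{fib_mill_time_s(removed_volume, beam_current_nA=0.05):.0f} s")
```

Even this tiny field takes minutes of beam time; extending the same serial process to square millimeters quickly becomes impractical, which is the point made above.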

Electron-Beam Lithography

Electron-beam lithography (EBL) is a commonly used technique in the fabrication of metallic nanostructures. The working principle of EBL is identical to that of scanning electron microscopy (SEM); the process uses an electron lens to focus the electron beam. EBL uses focused electron beams to draw the desired micro- and nanopatterns on an electron-beam-sensitive resist above the substrate. The electron beams alter the properties of the resist; therefore, micro- and


Fig. 3 A SEM image of a 250-nm diameter nanohole array on Si glass fabricated using e-beam lithography

nanostructures can be established by selective elimination, via a chemical developer, of the areas that were either exposed or not exposed. EBL is combined with other fabrication procedures, often involving three or more steps. First, a positive or negative resist is spin-coated on a surface; Si glass can be used as the substrate. Second, the soft-baked sample is exposed to an e-beam to create the intended patterns. Third, chemical development exposes the negative resist patterns on the substrate. The final step provides a metallic coating on the surface with intermediate adhesive layers. For example, to obtain an Au nanohole array, a thin layer of Cr (~5 nm) is coated to provide proper adhesion before a layer of Au is deposited (Fig. 3). A remarkable advantage of EBL compared to photolithography is the enhanced fabrication resolution: the size of the focused electron beam is smaller than the diffraction limit in photolithographic fabrication using an optical or UV beam. Although applications to large areas or multiple-sample fabrication remain difficult, e-beam lithography has been employed to fabricate various types of nanoapertures for plasmonic enhancement, including for EOT. For example, Sharpe et al. introduced gold nanoaperture arrays fabricated by e-beam lithography to produce immunobiosensors [14]. Additionally, various nanogratings and nanoapertures (as illustrated in Fig. 4) for plasmonic enhancement of biosensing or of imaging resolution have been constructed by EBL because of its appropriate lithographic resolution and the absence of photomasks [15, 16].

Nanoimprinting

Nanoimprint lithography is a recently investigated fabrication method for constructing nanoscale patterns in a cost-effective and high-throughput manner. During nanoimprinting, designed patterns are created by mechanically stamping a resist with a master structure. Generally, a monomer or polymer resist that can be cured by heat or light exposure is employed in nanoimprint lithography. The remarkable advantages of nanoimprint lithography include the simplicity of the lithographic process, the accessibility of massive nanostructure fabrication, and


Fig. 4 A SEM image of the nanoaperture antenna array fabricated by the process of e-beam lithography and postdevelopment. Nanoapertures with a 300-nm hole diameter and 1-μm period are fabricated into a gold film (film thickness t = 30 nm) (The use of this figure from Ref. [16] is permitted by Wiley-VCH Verlag GmbH & Co. KGaA)

cost efficiency. For these reasons, and with improving fabrication resolution, nanoimprint lithography has been developed to investigate nanoapertures for EOT. For example, Martinez-Perdiguero et al. introduced nanoimprint fabrication to demonstrate gold nanoaperture array-based sensors [17]. Barbillon described high-density plasmonic gold nanodisks on a glass substrate for biomedical sensing applications, produced with soft-UV nanoimprint lithographic fabrication [18].

Photolithography

Photolithography is a fabrication method applied for establishing micro- and nanopatterns on a thin-film substrate. This method transfers light through a microscale photomask to project designed patterns onto a light-sensitive layer called a photoresist. After development of the photoresist carrying the patterned-light-illuminated structures, several chemical processes and the deposition of a specific material result in micro- or nanopatterns on the substrate (Fig. 5a). A main advantage of photolithography is the accessibility of cost-effective, large-area microfabrication. Larger collimated light sources and photomasks expand the photolithographic fabrication area in both research and industrial applications. High-precision motorized nanoscale stages also enable parallel fabrication processes for cost-effective multiple-sample production. One critical issue in the photolithographic fabrication of nanoapertures for EOT is the limited fabrication resolution. The resolution of photolithography is bounded by


the Abbe lateral diffraction limit, and the effective resolution is somewhat coarser because of reflection and scattering. General photolithography using optical or ultraviolet (UV) light can be applied in microfabrication; these processes have resolutions of approximately a few hundred nanometers. To exceed the resolution limit and build nanostructures for plasmonic applications including EOT, several alternative fabrication approaches based on photolithography have been suggested. Kelf et al. fabricated metallic circular nanoapertures to generate localized plasmons using a combination of photo- and nanosphere lithography (Fig. 5b) [19]. Using an identical method, Goncalves et al. described the fabrication of triangular nanoapertures [20]. Henzie et al. introduced phase-shifting photolithography, a high-resolution photolithographic fabrication method using phase-changing light illumination, to investigate plasmonic nanostructures [21].
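A rough sense of this bound can be had from the Abbe form d = λ/(2·NA); the wavelength and numerical aperture below are illustrative values, not taken from the cited works:

```python
def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Abbe lateral resolution d = lambda / (2 * NA)."""
    return wavelength_nm / (2.0 * numerical_aperture)

# i-line UV exposure (365 nm) through an NA = 0.6 projection lens
# cannot resolve features much below ~300 nm, hence the need for
# e-beam, FIB, or phase-shifting approaches at EOT length scales.
print(f"{abbe_limit_nm(365.0, 0.6):.0f} nm")
```

The same estimate explains why electron beams, with far smaller effective wavelengths, reach well below 100 nm.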

Applications

Nanohole array structures have been widely used in multiple fields, such as chemical, medical, and biological sensors for sensitively detecting dielectric changes [22]. They can also be applied to EOT-based super-resolution imaging techniques [23]. Detailed information on EOT-based high-sensitivity sensors and high-resolution imaging techniques is given below.

EOT Gas Sensing

Recently, a chemoselective gas sensor was designed using plasmonic nanohole array structures. Gas molecules are small, which complicates detection, but it is possible to detect them using nanohole arrays [24]. Thus, this plasmonic sensor is designed to detect gaseous compounds sensitively using EOT. Figure 6 shows a schematic design of the gas-detecting sensor composed of periodic nanohole arrays and a voltage-directed assembly. A variety of methods such as e-beam lithography, photolithography, and FIB were used to fabricate the nanohole structures. A nanohole array structure fabricated using e-beam lithography is in each sensor region A, B, and N (see Fig. 6a) [25, 26]. All sensors in Fig. 6a attached to an electrical contact have a nanohole array, and an electrical bias is provided for the voltage-directed assembly process. A different type of molecule was

Fig. 5 (a) Simplified illustration of photolithographic fabrication procedures using positive and negative photoresists. (b) SEM images (viewing angle = 45°) of nanoapertures established by the combination of photo- and nanosphere lithography using 600-nm nanosphere particles. The differences in the embedded thickness (t) cause the properties of the nanoapertures to change accordingly (The use of this figure from Ref. [19] is permitted by the American Physical Society)


Fig. 6 (a) Schematic representation of the different steps for achieving a multiplexed chemoselective sensor array. (b) Assembly process of a series of chemoselective compounds to construct a multiplexed sensor array. Chemoselectivity is given to an individual sensing area using a voltage-directed assembly technique. Since the assembly occurs only in the presence of an applied voltage, separate sensors can be given different chemistries in subsequent steps (The permission of reprinting Fig. 6 from Ref. [25] was granted by the Optical Society)

coupled to the surface chemistry. These nanohole arrays are connected to an electrical contact to supply an electrical bias. Because each sensor region carries a different molecular condition, the device can display chemoselectivity, yielding a multiplexed sensor. The voltage-directed assembly process makes localized sensing possible through electrical isolation; it was implemented using optical lithography to obtain different chemistries. A few gas molecules in the dielectric environment can be adsorbed around the nanohole area, but the resulting change would be very small because the density of adsorbed molecules is low (Fig. 7a); thus, gas chemoselectivity may be difficult to achieve. Figure 7b displays the transmission spectra simulated by the finite-difference time-domain (FDTD) method [27] and highlights the similarity between the simulated and measured transmission spectra. The sample used in the experiment was created in a 50-nm gold film on a silicon substrate; the hole diameter is 200 nm and the pitch between holes is 400 nm. A period of 250–400 nm was considered proper for the hole array over the spectral range of 400–800 nm [25].
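The dependence of the transmission peak on the array period can be sketched with the standard grating-coupling condition for a square lattice, λ(i,j) = (a₀/√(i²+j²))·√(εmεd/(εm+εd)); the permittivity values below are illustrative, not fitted to Ref. [25]:

```python
import math

def eot_peak_nm(period_nm: float, i: int, j: int,
                eps_metal: float, eps_dielectric: float) -> float:
    """Approximate (i, j)-order EOT peak position for a square hole array
    at normal incidence:
        lambda = a0 / sqrt(i^2 + j^2) * sqrt(eps_m*eps_d / (eps_m + eps_d)).
    Only the real part of the metal permittivity is used."""
    return (period_nm / math.sqrt(i ** 2 + j ** 2)) * math.sqrt(
        eps_metal * eps_dielectric / (eps_metal + eps_dielectric))

# 400-nm-period gold array against a glass-like medium (eps_d = 2.25);
# eps_Au = -11.6 is an illustrative mid-visible value.
print(f"{eot_peak_nm(400.0, 1, 0, -11.6, 2.25):.0f} nm")
```

For these numbers the (1,0) peak lands in the red part of the visible spectrum, consistent with the measured transmission window in Fig. 7b; the true peak also depends on hole shape, film thickness, and the substrate.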


Fig. 7 (a) Schematic representation of the difficulty of gas sensing using plasmonics. (b) The results of simulated (blue-solid) and measured (dotted-red) transmission through a nanoscale perforated gold film (The permission of reprinting Fig. 7 from Ref. [25] was granted by the Optical Society)

Fig. 8 (a) Microscope image of two sensor elements each composed of multiple nanohole arrays fabricated using PMMA with a complementary pattern. The two sensor arrays are electrically isolated and both are connected to a gold contact pad. (b) SEM image showing the dimensions of the perforation in the gold film. The nanoholes are 200 nm in diameter and are separated by 400 nm in a square lattice (The permission of reprinting Fig. 8 from Ref. [25] was granted by the Optical Society)

Figure 8a shows the two sensors patterned with periodic nanohole arrays. Each sensor array is electrically isolated and combined with a gold pad not shown in the image. This gold pad, about 10 μm in length, supplies the electrical bias used to conduct the voltage-directed assembly process. The simulation data in Fig. 7b were obtained from multiple nanohole array patterns using a hole diameter of 200 nm, a center-to-center distance of 400 nm (Fig. 8), and a gold film thickness of 50 nm. The SEM image shows typical nanohole array structures patterned using e-beam lithography and completed using a lift-off process (Fig. 8b).


Fig. 9 Schematic representation of the gas chromatography system; a hydrogen carrier gas is used in a bubbler to transport the test molecule to the sensor array and is analyzed downstream by an HP gas chromatograph (The permission of reprinting Fig. 9 from Ref. [25] was granted by the Optical Society)

A simple delivery system is used to test the gas sensing assembly, employing a high-magnification objective to illuminate and collect light from a sensor array. The light passes through a single sensor array, and the beam spot is detected by a CCD camera, allowing real-time imaging. The light is delivered through an optical fiber, so beam splitters simplify the alignment. A bubbler is connected to the nanohole array, and the nanohole array is exposed for 15 min (Fig. 9); as a result, the transmission peak displays a 3-nm shift [25]. This demonstrates that reusable multiplexed gas sensors based on nanohole array structures can detect gaseous compounds. A gas sensor can obtain enhanced sensitivity using a plasmonic metal hole array structure exhibiting the EOT phenomenon [8, 28]. Higher sensitivity and selectivity are required for gas sensing in analytical chemistry, and the application of surface-enhanced infrared absorption (SEIRA) [29–31] with metal hole array structures has been studied. Figure 10 shows the setup for gas detection. A metal hole array with hole diameter c and periodicity a, for nano- to microsized holes, is designed and fabricated through lithography and lift-off [26]. The transmission peak shifts depending on the hole diameter and periodicity, and a proper ratio between c and a can be selected through experiments. Two metal hole array mirrors replace the normal mirrors. The system uses a filament-type infrared light source to deliver black-body-like emission and an oscilloscope to detect the output signal; selectivity can be further developed by employing additional mirrors. The experiments compared three different conditions, because the device has windows on both sides. First, both windows carried patterns (black-framed picture in Fig. 11a). Second, windows with one metal hole array were used (red-framed). Third, bare silicon substrates faced the inside of the gas cell (blue-framed).
The three sets of results were obtained by


Fig. 10 Schematic illustration of the SF6 gas detection set up used in the experiments. The IR light source is an IRS-001C (IRS, Ltd.) and the SF6 detector is a LIM-122 (Infratec, LLC). The gas cell mirrors were replaced with Si substrates carrying silver multihole arrays (MHA) for augmented sensitivity. Size reference: the output diameter of the parabolic mirror is 2.5 cm, the window diameter of the gas cell is 1.5 cm, and a single window on the detector cap measures 3.5 × 2.5 mm² (The permission of reprinting Fig. 10 from Ref. [29] was granted by the Optical Society)

monitoring the output signal versus the gas concentration. The absorbance change (ΔA) is shown in Fig. 11b (I is the detected signal). The signals from the double-sided and one-sided metal hole array configurations are calculated to be 27 and 9 times larger, respectively, than the signal from the silicon-facing configuration [32]. Using 3D FDTD, the transmission spectra and enhancement properties of the metal hole array gas sensor were simulated [32]. Mirrors with a metal hole array of hole diameter c = 1.6 μm and period a = 3.3 μm were used to obtain the data (Fig. 12a). The reflected and transmitted spectra for varying metal-hole-array layer thickness t are shown in Fig. 12b. These spectra display a wider transmission bandwidth for a thinner layer. However, the transmission peak in the experiment was measured to be lower than in the simulation. A comparison between Fig. 12c (t = 100 nm) and Fig. 12d (t = 20 nm) shows the intensity of the extraordinary optical transmission in the xz-plane. The thinner layer provides lower enhancement, and the enhanced region becomes weaker within 100 nm above the metal hole arrays.
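The absorbance change used in Fig. 11 can be computed directly from detector readings; the voltages below are hypothetical, chosen only to show the arithmetic:

```python
import math

def absorbance_change(signal: float, reference: float) -> float:
    """Absorbance change following the convention of Fig. 11:
    dA = log10(I / I0), where I is the detected signal with gas
    present and I0 is the gas-free reference."""
    return math.log10(signal / reference)

# Hypothetical detector readings: 1.00 (no SF6) vs. 0.80 (with SF6).
print(f"{absorbance_change(0.80, 1.00):.3f}")
```

A change of this size on bare silicon windows would be amplified roughly 27-fold by the double-sided MHA mirrors, per the measurements above.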


Fig. 11 (a) Output signal dependence on the SF6 concentration. The insets show schematically the configuration of the gas cell windows (whether the MHA is exposed to the inside of the gas cell or not). (b) Absorption change ΔA = log(I/I0) as a function of SF6 concentration; the hashed region marks the detection threshold at 0.1%. The line is the linear fitting by the least-squares method. The inset figure shows the log-log plot of absorption and concentrations. The lines are drawn as eye guides for the linear dependence (The permission of reprinting Fig. 11 from Ref. [29] was granted by the Optical Society)

Biological Sensing

EOT can be used for biosensing. Because the transmitted light is confined to tens or hundreds of nanometers, sensing can occur in small areas with sensitivity high enough to detect a biomolecule, a biomolecular interaction, or changes in a cell membrane. Selected studies introducing EOT-based biological sensors are described below. Brolo et al. applied periodic subwavelength nanohole arrays in a gold film to an SP resonance system to sensitively monitor the binding of biological molecules to the metallic film surface [33]. Figure 13 shows the transmitted light spectra of normally incident white light through hole arrays in gold after successive surface modifications. Spectrum a in Fig. 13 presents a distinct resonance at 645 nm from the interface between air and the bare metal. Spectrum b in Fig. 13 was obtained after the metallic surface had been modified with a MUA monolayer; the maximum transmission resonance shifted to 650 nm. An additional resonance shift to 654 nm in spectrum c was observed when proteins were adsorbed onto the MUA layer. The sensitivity of the sensor was found to be 400 nm/refractive index unit. Yanik et al. demonstrated a label-free optofluidic nanoplasmonic sensor that can detect intact viruses directly from fluidic biological media at a concentration of 10⁹ PFU/mL. This concentration is clinically relevant, and the process requires minimal sample preparation. This group also demonstrated the detection and


Fig. 12 (a) Simulation layout of the Ag MHA with period a = 3.3 μm and hole diameter c = a/2; the box indicates the footprint of the elementary simulation cell, made periodic through the boundaries. (b) Normalized power reflected (RX) and transmitted (TX) out of a 10-μm-long gas cell realized with Si:MHA mirrors as in the black inset of Fig. 4a, for different thicknesses t of the Ag MHA. (c) xz-plane cross-section of a Si:MHA mirror with t = 100 nm illuminated from the bottom, depicting the E-field intensity enhancement at 10.6 μm wavelength; (d) same for t = 20 nm (The permission of reprinting Fig. 12 from Ref. [29] was granted by the Optical Society)

Fig. 13 Normalized transmission spectra of normally incident white light through an array of subwavelength (200-nm diameter) holes on a 100-nm-thick gold substrate deposited on a glass slide. (a) Bare (clean) Au surface; (b) Au modified with a monolayer of MUA; (c) Au-MUA modified with BSA [33]


Fig. 14 Three-dimensional renderings (not drawn to scale) and the experimental measurements illustrate the detection scheme using optofluidic nanoplasmonic biosensors based on resonance transmissions because of the extraordinary light transmission effect. (a) Detection (immobilized with capturing antibody targeting VSV) and control sensors (unfunctionalized) are shown. (b) VSV attaches only to the antibody-immobilized sensor. (c) No observable shift is detected for the control sensor after the VSV incubation and washing. (d) Accumulation of the VSV because of the capturing by the antibodies is experimentally observed. A large effective refractive index increase results in a strong red-shifting of the plasmonic resonances (~100 nm) [34]

recognition of small enveloped RNA viruses (e.g., vesicular stomatitis virus and pseudotyped Ebola) to large enveloped DNA viruses (e.g., vaccinia virus), spanning a dynamic range of three orders of magnitude [34]. Figure 14a, b shows three-dimensional schematic images of EOT biosensors. The sensing area consists of an aligned nanohole array, and antibodies are immobilized on the hole array to detect the antigens flowing through the fluidic channel. Simultaneously, to compare the resonance peak shift, a nontreated sensing area is prepared. The results are shown in Fig. 14c, d. The measured spectra represent both before (blue line) and after (red line) incubating the virus-containing sample. The distinct resonance peak observed at 690 nm (blue line) with a 25-nm full width at half maximum (FWHM) was measured from the extraordinary


transmitted light through the hole. This transmitted resonance peak (blue line) of the antibody-immobilized detection sensor corresponds to the excitation of the SPP mode at the metal–dielectric interface. After diffusively delivering the analyte containing the virus sample, a strong resonance red-shift (~100 nm) is observed (red line) in the end-point measurement. This strong red-shift results from biomolecules binding to the functionalized surface. For the unfunctionalized control sensors (Fig. 14c), a slight resonance red-shift (~1 nm) is observed compared to the resonance of the blue curve, likely because of nonspecific binding events. This research suggests promising target-specific optofluidic biosensors.
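The reported numbers allow a quick sanity check of expected shifts and of a common plasmonic-sensor figure of merit (sensitivity divided by linewidth); the Δn value here is illustrative, not from either study:

```python
def peak_shift_nm(sensitivity_nm_per_riu: float, delta_n: float) -> float:
    """Expected resonance shift for a bulk refractive-index change."""
    return sensitivity_nm_per_riu * delta_n

def figure_of_merit(sensitivity_nm_per_riu: float, fwhm_nm: float) -> float:
    """Common plasmonic-sensor figure of merit: sensitivity / linewidth."""
    return sensitivity_nm_per_riu / fwhm_nm

# Brolo et al.'s 400 nm/RIU array: an (illustrative) index change of
# 0.01 RIU near the surface would shift the peak by about 4 nm.
print(peak_shift_nm(400.0, 0.01))    # -> 4.0
# With a 25-nm FWHM, as in the resonance reported by Yanik et al.:
print(figure_of_merit(400.0, 25.0))  # -> 16.0
```

A narrower linewidth at fixed sensitivity raises the figure of merit, which is why sharp EOT resonances are attractive for biosensing.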

EOT-Based Imaging Techniques

Although developments in EOT-based super-resolution imaging techniques are not as advanced as EOT-based high-sensitivity sensing methods, several studies have reported meaningful results in EOT-assisted high-resolution imaging, as described below. Docter et al. investigated a novel microscopic method using multiple diffraction-limited illumination spots established by extraordinary optical transmission through metallic grids of nanoscale apertures [35]. In a feasibility study using theoretical calculations, EOT-based structured illumination microscopy was shown to improve imaging resolution compared to confocal microscopy. The study confirmed the resolution enhancements predicted by the simulation by imaging fluorescent samples using an experimental EOT-based structured illumination setup with fabricated nanoscale aperture grids, as in Fig. 15. To acquire fluorescence information at selected axial points, Choi et al. developed an EOT-based axial imaging (EOT-AIM) method using linear arrays of metallic nanoscale apertures [23]. They applied the different EOT penetration depths of nanoscale apertures of different sizes to estimate and reconstruct fluorescence intensities in selected axial regions. First, they calculated the optical near-field of the EOT through the nanoscale aperture array and estimated the EOT penetration depth for each aperture size. The calculated results were then compared with experimental calibration data using a fluorescein-gel matrix on the nanoscale aperture arrays. By imaging living cells expressing a fluorescent indicator and cultured on the nanoapertures, the possibility of live-cell measurements using EOT-based super-resolved fluorescence imaging with axial resolution enhancements (Δz = 25–125 nm) was suggested (Fig. 16). In addition, selected studies established EOT-based multispectral imaging by dimensional expansion (1D → 2D) of EOT-based spectral sensing. Yao et al. introduced a 3D plasmonic crystal and 2D protein-binding mapping that can be determined by EOT-based high-sensitivity spectral sensing [36]. They fabricated a 3D plasmonic device after FDTD simulation-based optimizations to detect protein binding. The plasmonic device was applied on a conventional optical microscope with a


Fig. 15 (a) Experimental setup of a structured illumination microscope based on extraordinary optical transmission through subwavelength nanoaperture arrays. (b) Fluorescence images (xz- and xy-) of a fluorescent solution illuminated by EOT through the nanoapertures. This 3D fluorescence profile consists of 2D images captured in 100-nm scans along the axial (z-) direction (This figure is reprinted with the permission from the Society of Photo-Optical Instrumentation Engineers [35])

charge-coupled device (CCD) camera. By capturing 2D EOT field changes, protein-binding properties (submonolayer quantities of alkanethiols) on the 3D plasmonic crystal can be monitored with high resolution and high sensitivity. Recently, Najiminaini et al. introduced EOT-based high-sensitivity hyperspectral imaging methods using the transmission characteristics of gold nanohole arrays [37]. This hyperspectral device consists of multiple nanoaperture components, each with a unique period for a specific transmission resonance wavelength (Fig. 17). They established a spectral unmixing algorithm to acquire 2D hyperspectral images

29

Development of Extraordinary Optical Transmission-Based Techniques. . .

889

Fig. 16 Extraordinary transmission-based axial imaging for super-resolved living cell analysis. (a) Fluorescence intensity image of a RAW264.7 cell labeled with FITC-conjugated cholera toxin subunit B (FITC-CT-B) and measured by EOT-AIM. The intensity image is overlaid with a wide-field image. (b) Profiles of the fluorescence intensity along the axial direction after normalization. The intensity profiles at j (array number) = 3 and 10 are also shown on the right (The figure is reprinted with permission from WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim [23])

within the wavelength region of 662–832 nm. This device demonstrates the potential of building 2D hyperspectral imaging sensors based on EOT through designed metallic nanoapertures, as well as the potential for various applications in biomedical imaging.
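The spectral unmixing step mentioned above can be illustrated with a minimal linear-mixing sketch. This is a generic least-squares formulation, not the authors' published algorithm; the band responses and abundances below are invented for illustration:

```python
import numpy as np

# Hypothetical transmission signatures of two analytes measured in four
# spectral bands (rows: bands, columns: analytes). Values are illustrative.
S = np.array([
    [0.9, 0.1],
    [0.6, 0.4],
    [0.3, 0.7],
    [0.1, 0.9],
])

# A pixel's measured spectrum is modeled as a linear mix of the signatures.
true_abundance = np.array([0.25, 0.75])
pixel = S @ true_abundance

# Least-squares unmixing recovers the per-analyte abundances.
est, *_ = np.linalg.lstsq(S, pixel, rcond=None)
print(est)  # ≈ [0.25, 0.75]
```

In a snapshot imager, the same inversion would be applied pixel by pixel to assemble the unmixed 2D maps for each spectral band.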

Summary

In this chapter, we reviewed EOT-based sensing and imaging techniques for a variety of applications. Before introducing the applications, we summarized fabrication methods for the nanostructures underlying the EOT phenomenon. Focused ion-beam and electron-beam lithography for highly precise nanostructures, nanoimprint lithography

Fig. 17 Nanoaperture-array-based EOT-implemented multispectral imaging. (a) Experimental schematic of a multispectral imaging platform using EOT through nanoaperture arrays. (b) Raw image of the nanoaperture arrays and unmixed transmission multispectral images of distilled water with 0, 10, 30, and 50 μM concentrations of methylene blue. The selected spectral bands are 662–714, 705–752, 745–794, and 788–832 nm (The reuse of this figure is permitted by the Nature Publishing Group [37])

for mass fabrication of nanostructures, and photolithography for cost-effective fabrication were introduced, and optimized nanostructure samples fabricated by these lithographic techniques were shown for sensing and imaging applications. We then reviewed several interesting EOT-based applications that enhance the sensitivity of biosensors and assist high-resolution imaging. EOT-based approaches using nanostructures have been successfully applied to gas sensing and biosensing, and there have been many novel approaches for high-resolution imaging and optical manipulation. We verified the potential of the plasmonic EOT phenomenon for broad research fields and suggested successful approaches for further research in biosensing and bioimaging.


References

1. Hell SW, Wichmann J (1994) Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy. Opt Lett 19:780–782
2. Hess ST, Girirajan TP, Mason MD (2006) Ultra-high resolution imaging by fluorescence photoactivation localization microscopy. Biophys J 91:4258–4272
3. Gustafsson MG (2000) Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy. J Microsc 198:82–87
4. Kim K, Kim DJ, Cho EJ, Suh JS, Huh YM, Kim D (2009) Nanograting-based plasmon enhancement for total internal reflection fluorescence microscopy of live cells. Nanotechnology 20:015202
5. Huisken J, Swoger J, Del Bene F, Wittbrodt J, Stelzer EH (2004) Optical sectioning deep inside live embryos by selective plane illumination microscopy. Science 305:1007–1009
6. Barnes WL, Dereux A, Ebbesen TW (2003) Surface plasmon subwavelength optics. Nature 424:824–830
7. Bethe HA (1944) Theory of diffraction by small holes. Phys Rev 66:163
8. Ebbesen TW, Lezec HJ, Ghaemi HF, Thio T, Wolff PA (1998) Extraordinary optical transmission through sub-wavelength hole arrays. Nature 391:667–669
9. Escobedo C (2013) On-chip nanohole array based sensing: a review. Lab Chip 13:2445–2463
10. Lesuffleur A, Im H, Lindquist NC, Oh SH (2007) Periodic nanohole arrays with shape-enhanced plasmon resonance as real-time biosensors. Appl Phys Lett 90:243110
11. Xue J, Zhou W, Dong B, Wang X, Chen Y, Huq E, Zeng W, Qu X, Liu R (2009) Surface plasmon enhanced transmission through planar gold quasicrystals fabricated by focused ion beam technique. Microelectron Eng 86:1131–1133
12. Xu T, Lezec HJ (2014) Visible-frequency asymmetric transmission devices incorporating a hyperbolic metamaterial. Nat Commun 5:4141
13. Liu Y, Zhang X (2011) Metamaterials: a new frontier of science and technology. Chem Soc Rev 40:2494–2507
14. Sharpe JC, Mitchell JS, Lin L, Sedoglavich N, Blaikie RJ (2008) Gold nanohole array substrates as immunobiosensors. Anal Chem 80:2244–2249
15. Oh Y, Lee W, Kim Y, Kim D (2014) Self-aligned colocalization of 3D plasmonic nanogap arrays for ultra-sensitive surface plasmon resonance detection. Biosens Bioelectron 51:401–407
16. Kim K, Yajima J, Oh Y, Lee W, Oowada S, Nishizaka T, Kim D (2012) Nanoscale localization sampling based on nanoantenna arrays for super-resolution imaging of fluorescent monomers on sliding microtubules. Small 8:892–900
17. Martinez-Perdiguero J, Retolaza A, Otaduy D, Juarros A, Merino S (2013) Real-time label-free surface plasmon resonance biosensing with gold nanohole arrays fabricated by nanoimprint lithography. Sensors 13:13960–13968
18. Barbillon G (2012) Plasmonic nanostructures prepared by soft UV nanoimprint lithography and their application in biological sensing. Micromachines 3:21–27
19. Kelf TA, Sugawara Y, Cole RM, Baumberg JJ, Abdelsalam ME, Cintra S, Mahajan S, Russell AE, Bartlett PN (2006) Localized and delocalized plasmons in metallic nanovoids. Phys Rev B 74:245415
20. Gonçalves MR, Makaryan T, Enderle F, Wiedemann S, Plettl A, Marti O, Ziemann P (2011) Plasmonic nanostructures fabricated using nanosphere lithography, soft lithography and plasma etching. Beilstein J Nanotechnol 2:448–458
21. Henzie J, Lee J, Lee MH, Hasan W, Odom TW (2009) Nanofabrication of plasmonic structures. Annu Rev Phys Chem 60:147–165
22. Liedberg B, Nylander C, Lundström I (1983) Surface-plasmon resonance for gas-detection and biosensing. Sensors Actuators 4:299–304
23. Genet C, Ebbesen TW (2007) Light in tiny holes. Nature 445:39–46
24. Wright JB, Cicotte KN, Subramania G, Dirk SM, Brener I (2012) Chemoselective gas sensors based on plasmonic nanohole arrays. Opt Mater Express 2:1655–1662


25. Nishijima Y, Nigorinuma H, Rosa L, Juodkazis S (2012) Selective enhancement of infrared absorption with metal hole arrays. Opt Mater Express 2:1367–1377
26. Lumerical. Retrieved from http://www.lumerical.com
27. Martín-Moreno L, García-Vidal FJ, Lezec HJ, Pellerin KM, Thio T, Pendry JB, Ebbesen TW (2001) Theory of extraordinary optical transmission through subwavelength hole arrays. Phys Rev Lett 86:1114–1117
28. Ohta N, Nomura K, Yagi I (2010) Electrochemical modification of surface morphology of Au/Ti bilayer films deposited on a Si prism for in situ surface-enhanced infrared absorption (SEIRA) spectroscopy. Langmuir 26:18097–18104
29. Miyatake H, Hosono E, Osawa M, Okada T (2006) Surface-enhanced infrared absorption spectroscopy using chemically deposited Pd thin film electrodes. Chem Phys Lett 428:451–456
30. Aouani H, Sipova H, Rahmani M, Navarro-Cia M, Hegnerova K, Homola J, Hong M, Maier SA (2013) Ultrasensitive broadband probing of molecular vibrational modes with multifrequency optical antennas. ACS Nano 7:669–675
31. Nishijima Y, Adachi Y, Rosa L, Juodkazis S (2013) Augmented sensitivity of an IR-absorption gas sensor employing a metal hole array. Opt Mater Express 3:968–976
32. Brolo AG, Gordon R, Leathem B, Kavanagh KL (2004) Surface plasmon sensor based on the enhanced light transmission through arrays of nanoholes in gold films. Langmuir 20:4813–4815
33. Yanik AA, Huang M, Kamohara O, Artar A, Geisbert TW, Connor JH, Altug H (2010) An optofluidic nanoplasmonic biosensor for direct detection of live viruses from biological media. Nano Lett 10:4962–4969
34. Docter MW, Van den Berg PM, Alkemade PF, Kutchoukov VG, Piciu OM, Bossche A, Young IT, Garini Y (2007) Structured illumination microscopy using extraordinary transmission through sub-wavelength hole-arrays. J Nanophotonics 1:011665
35. Choi J-R, Kim K, Oh Y, Kim AL, Kim SY, Shin JS, Kim D (2014) Extraordinary transmission-based plasmonic nanoarrays for axially super-resolved cell imaging. Adv Opt Mater 2:48–55
36. Yao J, Stewart ME, Maria J, Lee TW, Gray SK, Rogers JA, Nuzzo RG (2008) Seeing molecules by eye: surface plasmon resonance imaging at visible wavelengths with high spatial resolution and submonolayer sensitivity. Angew Chem Int Ed 120:5091–5095
37. Najiminaini M, Vasefi F, Kaminska B, Carson JJ (2013) Nanohole-array-based device for 2D snapshot multispectral imaging. Sci Rep 3:2589

Miniaturized Fluidic Devices and Their Biophotonic Applications

30

Alana Mauluidy Soehartono, Liying Hong, Guang Yang, Peiyi Song, Hui Kit Stephanie Yap, Kok Ken Chan, Peter Han Joo Chong, and Ken-Tye Yong

Contents

Introduction ..... 894
Miniaturized Fluidic Regimes ..... 895
  Fluidic Manipulation ..... 898
  Fabrication of a Miniaturized Fluidic Network ..... 904
Biophotonic Applications with Miniaturized Devices ..... 905
  Nucleic Acid Optical Mapping ..... 905
  Bioanalysis ..... 907
  Flow Cytometry ..... 912
  Plasmonic Biosensors ..... 914
  Nanoparticle Synthesis ..... 919
Summary and Future Perspectives ..... 928
References ..... 931

Abstract

Miniaturized fluidic devices provide a platform for reaction processes to be scaled down to the milli-, micro-, and nanoscale level. The advantages of using miniaturized devices include the reduction of sample volumes, faster processing rates, automation, portability, low cost, and enhanced detection limits. Bioanalysis, biosensing, bioimaging, and nanoparticle synthesis are some of the important research areas in the biophotonic field, which are often burdened by time-consuming reaction processes, the requirement of large quantities of samples, and cumbersome equipment with a large footprint. As such, scaling down these reaction processes using miniaturized devices is a promising approach to greatly improve the overall sensitivity of bioanalysis and biosensing and to shorten the reaction time for producing high-quality nanoparticles for biophotonic applications. However, scaling down reaction processes in the fluidic domains poses different technical challenges, since the underlying physical phenomena differ from those at the macroscale. In this chapter, we aim to highlight the advancements and challenges in the fabrication of miniaturized devices for biophotonic applications.

A.M. Soehartono (*) • L. Hong • G. Yang • P. Song • H.K.S. Yap • K.K. Chan • K.-T. Yong
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore

P.H.J. Chong
Department of Electrical and Electronic Engineering, Auckland University of Technology, Auckland, New Zealand

© Springer Science+Business Media Dordrecht 2017
A.H.-P. Ho et al. (eds.), Handbook of Photonics for Biomedical Engineering, DOI 10.1007/978-94-007-5052-4_39

Keywords

Miniaturization • Biophotonics • Millifluidic • Microfluidic • Nanofluidic • Lab-on-a-chip

Introduction

From using conventional upright microscopes to study cellular phenomena to the excitation of nanoparticles in cells for tumor therapy, the interaction of light with biological matter is ubiquitous in modern medicine and bioengineering research. Biophotonics is a field that pertains to (i) studying the interactions of light with biological systems and (ii) developing sensitive and high-resolution light-based technologies for these biological studies. Biophotonic optical technologies permit the observation of morphological changes at the cellular and tissue level through characteristics such as reflectivity, scattering, absorption, and chemical changes [1]. To date, the biophotonic community has been continuously developing novel bio-optical approaches for improving the performance of healthcare tools such as diagnostics, therapeutics, drug screening, genome profiling, etc. For example, exploring molecular events and kinetics at the cellular level will certainly shed light on overall reaction mechanisms and speed up the development of new drug therapies for treating different human diseases. Also, the ability to monitor disease progression will allow clinicians to optimize therapeutic plans for patients [2]. However, many of the biophotonic-related reaction processes mentioned above still require bulky and expensive equipment, time-intensive steps, and complex processing protocols, and suffer from low throughput with little-to-no automation. Miniaturized systems, particularly fluidic devices, aim at scaling down macroscale processes to the dimensions of the milli-, micro-, and nanoscale and can also provide solutions to the shortcomings of conventional tabletop processes. Miniaturized fluidic devices are generally used to speed up processing and analysis time with their ability to multiplex and automate operations.
Owing to their relatively small dimensions, smaller amounts of reagents and analyte are needed; this is especially useful in situations where samples are very limited or in


low-resource settings. Being a low-cost alternative to conventional methods, an inexpensive miniaturized device also offers convenience to users and ease of disposability. The origin of miniaturized fluidic devices is a culmination of progress in the fields of molecular analysis, biodefense, molecular biology, and microelectronics [3] into what would become known as the field of microfluidics, currently defined as the study of the precision manipulation of low-volume fluids for various reaction processes. The diversity of its beginnings is a true reflection of the interdisciplinary nature of the microfluidic field: combining physics, chemistry, biochemistry, engineering, and nanotechnology. Furthermore, the introduction of soft lithography in the 1990s by Whitesides' group substantially accelerated the fabrication of various miniaturized systems as a cheaper microfabrication alternative compared to standard photolithography. Thus, it is no surprise that we have witnessed a burst of research developments in the field of microfluidic devices. In this chapter, we aim to provide a broad discussion of the fabrication and applications of miniaturized devices at the milli-, micro-, and nanoscale level. We review underlying principles that affect fluidic manipulation and briefly describe the fabrication techniques. Then, we highlight selected biophotonic applications using miniaturized devices – nucleic acid mapping, bioanalysis, flow cytometry, plasmonic biosensors, and nanoparticle synthesis. We take an application-based approach to our review in view of the fact that many biophotonic applications overlap in their miniaturized fluidic regime. For each of these applications, miniaturization provides a platform that can overcome shortcomings in the current process. For example, optical mapping has benefitted from the development of nanofluidic devices, as nanochannels permit interrogation of longer DNA sequence structures.
Benchtop bioanalytical methods and nanoparticle synthesis tend to be time-consuming and to require bulky equipment and specialized operators, also making them suitable candidates for miniaturization. Due to the scope of our review, we regret that not all related publications could be included, but we wish to direct the reader to the literature for more details on the miniaturization of each regime. Finally, we conclude the chapter with a summary and present our perspective on the future of healthcare with a vision of miniaturized devices.

Miniaturized Fluidic Regimes

Miniaturized fluidic devices permit the manipulation of small fluid volumes down to the order of attoliters (10⁻¹⁸ L) [4], and they can be classified into the millifluidic, microfluidic, and nanofluidic regimes. The prefix of each fluidic regime refers to its critical operational length. Millifluidic systems operate with channel dimensions in the millimeter (mm) range, microfluidics in the micrometer (μm) range, and nanofluidics in the nanometer (nm) range. With all miniaturized devices, an increase in the surface-area-to-volume ratio (SA/V) can be achieved that is unparalleled at the macroscale. Nevertheless, miniaturization is more than simply reducing the length scales of a macroscale process. At smaller length scales, a fluidic channel exhibits properties


which differ from those at the macroscale, bringing about a new set of challenges in device implementation. Fluids confined in small channels tend to exhibit laminar flow, unlike large-scale channels, which exhibit turbulent flow at increased fluid velocities. Laminar flow, while beneficial for its controllability, presents challenges in mixing applications: additional channel designs or micromixers have to be incorporated, as mixing can only be achieved through diffusion in the laminar flow regime. Furthermore, differences in fluidic properties exist between the three regimes that can determine their application suitability. For example, millifluidic channels can achieve a relatively high surface-area-to-volume ratio (SA/V) with a lower pressure drop compared with microfluidics [6]. When used as reactionware for nanoparticle synthesis, this property can alleviate problems such as blockages that would otherwise occur in microfluidic channels. With increasingly smaller dimensions, fundamental physical phenomena are increasingly influenced by the fluid's interactions with its channel boundaries [7], and this is especially true of nanochannels. Fluids at the milli- and microscale can typically be treated as a continuum [8]; at the nanoscale, however, the liquid is considered an ensemble of molecules [9]. Nanochannels are typically fabricated with high aspect ratios (between 5 and 200) [10]. As a result of the higher fluidic resistance in nanochannels, pressure-driven flow is difficult, prompting the need for other mechanisms to drive fluid through the channel. The ultrahigh SA/V that can be achieved in nanochannels means that bulk properties of the fluid are subject to more interfacial effects, such as the electrostatic forces seen in the formation of the electric double layer (EDL).
Other forces arising from nanoscaling, such as solute entropy, van der Waals forces, and hydrophilic repulsion, have similarly been exploited to drive DNA separation, chromatographic separation, and adsorption prevention in nanochannels, respectively [11]. The full potential of nanofluidic devices may not have been realized yet, as the theoretical framework of nanofluidics is not fully developed, unlike its microfluidic counterpart, and remains an active area of investigation. The practical implementation and fabrication tools available also need to be carefully considered. Achieving small-scale features, doing so consistently, and characterizing the features become increasingly difficult as the dimensions approach the nanoscale. In this respect, nanofluidics lacks an easy fabrication method. Millifluidics benefits from its suitability for non-lithographic rapid prototyping using 3D printing. This approach is a cheap and convenient alternative to microfluidic soft lithography, which often uses lithographic patterning to create a mold. However, 3D printing is limited by the resolution of the printer, where submillimeter resolution is typically not consistently achievable. Table 1 highlights a sampling of the differences in fluidic properties and the advantages and limitations of the three regimes. Miniaturized fluidic devices have been applied in a variety of research areas corresponding to the scale of detectable structures (Fig. 1). On the larger end of the spectrum, millifluidics has been reported in reactionware [6, 12, 13] and bioanalysis [14, 15]. The larger channel dimensions are suitable for studies on the cell population scale, with dynamic cell studies involving more realistic tumor spheroid models,


Table 1 Overview of characteristics of miniaturized fluidic devices, their advantages, and limitations

Millifluidic (critical operating length 10⁻³ m)
  Fluidic channel characteristics: Lower surface tension; lower SA/V ratio; higher fluid volumes; liquid considered as a continuum
  Advantages: Cell population scale; non-lithographic rapid prototyping using 3D printing; able to process viscous fluids; suitable for nanoparticle synthesis and large-scale in vitro bioanalysis; developed theoretical framework
  Limitations: Low-resolution features; nonstandard components; nonstandard characterization; limited range of working flow rates

Microfluidic (critical operating length 10⁻⁶ m)
  Fluidic channel characteristics: Higher surface tension; high SA/V ratio; low fluid volumes; liquid considered as a continuum
  Advantages: Single-cell scale; rapid prototyping using soft lithography; wide range of working flow rates; suitable for flow cytometry, sensing, and bioanalysis
  Limitations: Prone to clogging and air bubbles; nonstandard components; nonstandard characterization

Nanofluidic (critical operating length 10⁻⁹ m)
  Fluidic channel characteristics: Ultrahigh SA/V ratio; greater interfacial effects, such as strong electrostatic interaction between the charged surface and ions in the channel (EDL formation); low fluid volumes; liquid considered as an ensemble of molecules
  Advantages: Single macromolecule confinement; valence-based macromolecule separation; suitable for DNA sequencing
  Limitations: Incomplete theoretical framework; nonstandard components; nonstandard characterization; lacks a disruptive fabrication technology; difficult repeatability

small animal models, and embryos [16]. However, larger channels may lead to the use of more reagents and samples. Microfluidic channels, on the other hand, can achieve high SA/V ratios, but at the expense of higher surface tension. Nevertheless, laminar flow can be maintained over a wide working range [17], and thus a host of applications have been reported, such as flow cytometry [18–20], biosensing [21–24], nanoparticle synthesis [25–27], and single-cell studies [28–31]. Since average human cell diameters range from 10 to 30 μm, this regime provides a platform for detecting structures at the single-cell level. Finally, nanofluidic devices are suitable for probing molecular-scale structures such as small bacteria [32, 33], viruses [34–36], and DNA [37–39], with diameters on the order of 200 nm, 75 nm, and 2 nm, respectively. Furthermore, surface effects can be leveraged to perform


Fig. 1 Selected biophotonic applications using different miniaturized devices working at milli-, micro-, and nanoscale level, along with the corresponding detectable biological systems at the length scale

functions such as valence-based ion separation. When compared to the other regimes, microfluidics is the most mature. However, there is a proliferation of research dedicated to millifluidic and nanofluidic studies, as these regimes can overcome some of the application-based limitations seen in microfluidics.
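The SA/V scaling that separates the three regimes can be made concrete with a quick calculation. The channel dimensions below are illustrative choices, not values from the chapter; for a square channel of side d, SA/V reduces to 4/d:

```python
# Surface-area-to-volume ratio of a square channel of side d and length L:
# SA/V = (4*d*L) / (d*d*L) = 4/d, so halving d doubles SA/V.
def sa_to_v(d_m: float) -> float:
    """SA/V ratio (1/m) for a square channel of side d_m meters."""
    return 4.0 / d_m

milli = sa_to_v(1e-3)    # 1 mm channel   -> ~4e3 1/m
micro = sa_to_v(10e-6)   # 10 um channel  -> ~4e5 1/m
nano = sa_to_v(100e-9)   # 100 nm channel -> ~4e7 1/m
print(milli, micro, nano)
```

Each step down in regime therefore gains roughly two orders of magnitude in SA/V, which is why interfacial effects come to dominate in nanochannels.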

Fluidic Manipulation

One of the main assets of miniaturized fluidic devices is the ability to flexibly manipulate fluidic motion. Operations within fluidic devices are built upon a set of fundamental operations for fluid manipulation and transport, such as flow actuation, valving, mixing, and separating. Application of one or more of these components results in a functional device that can perform multiple tasks. Miniaturization permits large-scale integration; for example, thousands of operations can be combined onto a single device, enabling high-throughput testing and analysis. Naturally, the range of applications these operations have spawned is diverse. For instance, to enhance the performance of a surface plasmon resonance (SPR)-based biosensor, a device can simultaneously regulate temperature through integrated micro-heaters and temperature sensors to maintain a uniform temperature distribution and ensure the integrity of the biosensing



Fig. 2 Schematic of a multifunctional SPR sensor, comprising microvalves, micropumps, flow sensors, heaters, and temperature sensors (Reprinted from Ref. [40]. Copyright (2007), with permission from Elsevier)

measurements [40] (Fig. 2). This multifunctional capability has further been exploited in on-chip devices, such as lab-on-a-chip (LoC) devices, which aim to translate conventional benchtop laboratory analysis into a compact device for sample analysis. This idea has spawned alternate iterations in biomedical studies such as tumor-on-a-chip [41] and organs-on-chips [42], such as brain-on-a-chip [43], to name a few. In each of these devices, the purpose is to act as an in vitro model that mimics the true nature of biological systems on a single chip for predicting the outcome of medical diagnosis and therapeutic treatment without the use of any animal models. Here, we briefly look at some of the physical phenomena encountered at the milli-, micro-, and nanoscales before reviewing the unit operators. The theoretical framework of fluidic flow is built on the Navier–Stokes formalism, which describes the effects of fluidic confinement. As an example of this, we look at the Reynolds number, Re. In order to ensure predictable device performance, a well-defined flow is desirable. As small-scale systems offer a marked increase in the SA/V ratio, surface forces dominate over gravitational forces [44, 45]. Re is a dimensionless parameter that dictates the flow regime of a channel, whether laminar or turbulent, and is given by the ratio of inertial forces, due to the velocity of the fluid, to viscous forces, expressed as:

Re = ρVlc/η

where ρ is the fluid density, η is the fluid viscosity, lc is the characteristic length for the flow, and V is the fluid velocity. The effects of scaling are reflected in fluidic flow

900

A.M. Soehartono et al.

through the Re. Inertial forces, decreasing proportionally with the reduction of lc, result in a smaller Re; equivalently, as lc is reduced, viscous forces become increasingly dominant, which is likewise signified by a low Re. A low Re means Stokes flow, translating to laminar flow. For channel flows with Re values approaching 2300, transition flow sets in, and Re numbers much greater than 2300 denote turbulent flow [46]. Predictably, with this characteristic, the flow regime can be manipulated by geometry design. In miniaturized fluidic devices, small channel dimensions, with the critical dimension usually being the channel height, result in Re much less than 100. For example, the predictability of laminar flow has been used to pattern different cell types in a single stream, flowing in parallel in one conduit [47]. Applications where turbulent-like behavior is desirable, such as mixing, are facilitated by modes such as chaotic advection. Although several transport phenomena differ greatly at the nanoscale, the Re generally holds true in channels with one dimension smaller than 100 nm [48]. Fluid flow in channels is most commonly actuated by pressure or electrokinetics. Pressure-driven flow is widely adopted for its simplicity and can be accomplished off-chip, such as by using syringe pumps. However, fluidic resistance may dampen these flows, as a large hydrodynamic pressure may be necessary to actuate the fluids. This is evident when observing the governing equations of an incompressible, Newtonian fluid. The fluidic resistance of a circular tube is expressed as:

R = 8μL/(πr⁴)

where μ is the fluid viscosity, L the channel length, and r the channel radius. As evidenced by the equation, resistance is inversely proportional to the fourth power of the channel radius, so as the channel diameter decreases, resistance rapidly increases. Alternately, electrokinetics, which applies external electric fields to move the fluid through the channel, with mechanisms including electroosmosis and electrophoresis, can be used to drive the fluid flow. Thus, another important concept is the occurrence of the electric double layer (EDL), bringing about surface-charge-governed ion transport [45], which can be found in nanochannels. The changing electrical potential near the surface results in the spontaneous formation of an EDL, comprising the Stern layer and the diffuse layer, as shown in Fig. 3a. Although a bulk solution may have a neutral charge, surface charges are observed at the solid–liquid interface. In this instance, surfaces interacting with an aqueous solution gain a net surface charge. Due to electrostatic attraction, ions opposite in charge to the surface ions (i.e., counter-ions) accumulate along the charged surface to form a shielding layer called the Stern layer. Co-ions are repelled away from this layer, and free ions in the fluid form a diffuse layer. The zeta potential ζ is at the slip plane between the two layers (Fig. 3a). The thickness of such a layer is determined by the Debye length, which is the distance over which the surface charge has decayed to 1/e of its original value toward the bulk value. In aqueous solutions, the Debye length is found to be between 1 and 100 nm [49]. As can be observed from Fig. 3d, in microchannels, the EDL is negligible, as the electrical potential decays to the bulk value, and the surface charges


Fig. 3 (a) Schematic of the electric double layer, showing the accumulation of counter-ions on the charged surface, along with the diffuse layer and Debye length, LD. The bold line indicates the electrical potential profile, ψ, which becomes more significant with shrinking dimensions (Reproduced from Ref. [11] with permission of The Royal Society of Chemistry). A comparison of the surface charge effects on (b) a microchannel versus (c) a nanochannel. In a microchannel, the electrical potential is decayed to bulk value (d), unlike the nanochannel center which has potential at its center (e). Furthermore, (g) the nanochannel shows a higher counter-ion concentration compared to co-ions, unlike (f) the microchannel (Reprinted with permission from Ref. [49]. Copyright (2005) American Chemical Society)

are not significant enough to electrostatically manipulate ions. However, the effects of the EDL become more pronounced as the channel dimensions shrink toward the Debye length, making these effects unique to nanochannels [9, 50]. In a nanochannel, an electrical potential is present even at the center (Fig. 3e). Importantly, the EDL introduces nonuniform motion and electric fields transverse to the flow. The resultant axial and transverse fluxes can thus be used to separate and disperse analyte ions [51]. For example, the speed of small molecules moving through the EDL depends on their valence or molecular weight [10],


A.M. Soehartono et al.

which can be used to separate molecules electrophoretically [52]. Perhaps the most notable application of nanochannel electrophoresis is in DNA sequencing, where application of an electric field can stretch, relax, or recoil DNA molecules [53]. Let us now discuss flow-control operators within miniaturized devices. Control can be achieved through the implementation of components such as valves [54], pumps [55], separators and concentrators [56], and mixers to integrate functionality on a single device [57]. On-chip pumping is necessary for self-contained bioanalytical devices such as those for genomic analysis and immunoassays and complements other components within a micro total analysis system (μTAS) such as microfilters and microreservoirs [58]. Many micropumping schemes exist in the literature; they can be classified as mechanical, like check-valve pumps and peristaltic pumps, or nonmechanical [59], such as electrochemical pumps and the phase transfer pump. Pump performance is characterized by metrics such as flow rate, stability, and efficiency. More recently, an optical pump, driven by optical tweezing, was reported to transport flows of up to 200 fL/s across a microchannel [60]. The pump utilizes the rotation induced in birefringent particles by the transfer of spin angular momentum when the particles are trapped with circularly polarized light. To facilitate fluid movement, two vaterite particles were counter-rotated, as shown in Fig. 4c, and the study observed the movement of a 1 μm silica bead. Surface tension is another phenomenon that has been exploited: a nanofluidic bubble pump drove picoliter volumes through a channel via surface tension-directed gas injection [61]. The mechanisms for valve control include mechanical, pneumatic, and electrokinetic forces. In one pneumatically controlled valve, a dual-layer poly(dimethylsiloxane) (PDMS) substrate was used, with one layer serving as the control layer and the other as the flow layer.
Pressure forces the control layer down, obstructing the fluid flow path in the flow layer. By varying the pressure gradients, different valve switching states can be achieved. Using microvalves, fluid flow into separate chambers can be controlled, creating a serial dilution network [62] to rapidly detect varying concentrations of analyte. While microvalves have been widely reported, nanovalves pose a technical challenge, as existing valves are of micrometer dimensions. Mawatari et al. reported a nonmechanical Laplace valve integrated into a nanochannel, operating on a wettability boundary to generate fluid droplets. The valve is a nanopillar structure formed on the bottom of a nanochannel (Fig. 4a, b-i). By modifying the surface within the channel to be hydrophobic, the nanopillars become more hydrophobic than the flat surface, enabling the valve to withstand pressures up to the Laplace pressure at the liquid surface. When the breakthrough pressure is exceeded, the valve opens and actuates the droplet (Fig. 4b-iv) [54]. In another example, polymer brushes grafted onto a nanochannel were modulated by an external electric field to gate the fluid flow within the channel [63]. Several further strategies exist for separating and combining fluids, among them mixers. Mixers aim to thoroughly and rapidly mix multiple samples in a device [64], which can be accomplished actively or passively. Owing to the typically small Re values in microfluidic devices, mixing occurs mainly through diffusion. When two laminar streams come into contact with


Fig. 4 (a) Schematic showing the operation of a Laplace nanovalve. (b) Droplet generation and actuation through the valve: (i) channel design; (ii) filling the channel with water, the valve closed; (iii) application of pressure, creating a femtoliter droplet; (iv) movement of the droplet through the opened valve at breakthrough pressure (Reprinted with permission from Ref. [54]. Copyright (2012) American Chemical Society). (c) Flow field of the fluid while the birefringent pumps are rotating, and (d) the fluid speed measured at the center of the pump (Reprinted from Ref. [60] with permission of The Royal Society of Chemistry)

each other, mixing will only occur through diffusion, resulting in a slow mixing time. The challenge for mixers is thus to attain efficient mixing rapidly. In milli- and microchannels, passive mixing can be achieved by engineering the channel geometry to increase fluid folding. For example, a serpentine mixer split and recombined flows in successive F-shaped units [65], accumulating overall chaotic advection. Other reported mixers include T-type mixers [66, 67] and vortex mixers [68, 69]. Although nanofluidic devices are still in their infancy, several nanofluidic mixers have been reported. Accounting for the greater influence of the channel walls, mixing was realized with hybrid surfaces, alternating between hydrophilic and hydrophobic patterns on the channel walls of a Y-shaped mixer [70]. Active mixers, on the other hand, rely on external sources to increase the interfacial area, for example, electrokinetic mixers [71, 72] and magnetic mixers [73].
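The scaling arguments above (the fourth-power dependence of hydraulic resistance on radius, nanometric Debye lengths, and slow diffusive mixing) can be checked with a short numerical sketch. The parameter values are illustrative only and are not taken from any of the cited devices:

```python
import math

def hydraulic_resistance(mu, length, r):
    """Hagen-Poiseuille resistance of a circular channel: R = 8*mu*L / (pi*r^4)."""
    return 8.0 * mu * length / (math.pi * r**4)

def debye_length(c_molar, temp=298.0):
    """Debye length (m) of a symmetric 1:1 aqueous electrolyte of molarity c_molar."""
    eps = 78.5 * 8.854e-12          # permittivity of water (F/m)
    k_b, e, n_a = 1.381e-23, 1.602e-19, 6.022e23
    n = c_molar * 1e3 * n_a         # ion number density (1/m^3)
    return math.sqrt(eps * k_b * temp / (2.0 * n * e**2))

def mixing_time(width, diff):
    """Characteristic diffusive mixing time across a stream of width w: t ~ w^2/(2D)."""
    return width**2 / (2.0 * diff)

mu, length = 1e-3, 1e-2             # water-like viscosity (Pa*s), 1 cm channel
# Halving the radius raises the resistance 16-fold (1/r^4 scaling):
print(hydraulic_resistance(mu, length, 50e-6) / hydraulic_resistance(mu, length, 100e-6))
# Debye length at 1 mM ionic strength is on the order of 10 nm:
print(debye_length(1e-3))
# A small molecule (D ~ 5e-10 m^2/s) takes 100x longer to mix across 1 mm than 100 um:
print(mixing_time(1e-3, 5e-10) / mixing_time(1e-4, 5e-10))
```

The 16-fold jump in resistance for a mere halving of the radius is why pressure-driven pumping becomes impractical in nanochannels, motivating the electrokinetic driving schemes discussed above.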


Fabrication of a Miniaturized Fluidic Network

Microchannels typically consist of a substrate layer and a channel layer. Some of the most common substrates include glass, silicon, and polymers [74]. Early adopters of microfluidic chips opted for hard materials such as glass or silicon, inorganic materials typically used in photolithography [3, 75]. Glass and silicon microfluidic chips can be patterned using surface micromachining; buried-channel techniques, including deep reactive-ion etching (DRIE) and chemical vapor deposition (CVD); or bulk micromachining [76]. These materials are particularly useful for high-temperature processing, high aspect ratio devices (up to 20:1), or electrode integration. However, the fabrication of glass- and silicon-based microfluidic chips requires specialized equipment, hazardous chemicals such as hydrofluoric acid (HF), and clean room conditions, making it expensive and inaccessible for many researchers working on miniaturized devices. Furthermore, since glass and silicon are not gas permeable, long-term cell cultures cannot be maintained inside such chips. More recently, polymers have replaced glass and silicon as the dominant materials for microchannel fabrication due to their low cost and suitability for rapid prototyping, with the most common replication methods being casting and hot embossing [77]. In particular, soft lithography is a non-lithographic microfabrication technique for replicating structures: briefly, an elastomeric polymer is poured over a master mold, after which it is cured and peeled off [78]. Elastomeric polymers are flexible cross-linked polymer chains that can stretch or compress under force. PDMS is a notable elastomer widely used in soft lithography because of its low cost, biocompatibility, optical transparency, and gas permeability [79].
Furthermore, due to its permeability to oxygen, nitrogen, and carbon dioxide, viable cell cultures can be maintained, enabling PDMS chips to be used for long-term cell imaging applications. The master is typically created using standard lithographic techniques such as photolithography, electron beam lithography, and micromachining. Alternatively, millifluidic channel molds can be fabricated using three-dimensional (3D) printers as a low-cost, fast alternative to clean room lithography. However, variable results have been reported with 3D printing, with as much as 40% variation on 3D printed mold designs at submillimeter resolution; this technology may therefore only be suitable for milli-scale or macroscale fluidics [80]. Nanofabrication, however, may not be as amenable to soft lithography, as the resolution limits of conventional photolithography may not reach the critical dimensions required for nanofluidics [81]. Where soft lithography has become synonymous with microfluidics, nanofluidics has yet to see a comparably revolutionary fabrication technique [82], essentially limiting the complexity achievable with nanometric channels. Perhaps the most important development in nanofabrication has been the introduction of nanoimprint lithography (NIL) [83], which works by mechanically deforming a resist material to achieve feature sizes down to 10 nm [84]. Chou et al. devised nanochannels fabricated by NIL [85]. Briefly, a mold with the designed pattern is produced by electron beam


lithography (EBL) followed by reactive-ion etching (RIE). The mold is then pressed against the substrate coated with a resist layer, such as poly(methyl methacrylate) (PMMA). During the imprint step, the resist is heated to a temperature above its glass transition temperature. The mold is then removed, and the pattern on the resist layer is fully developed by RIE with oxygen. Other nanolithography techniques reported include focused ion beam, interferometric lithography, and sphere lithography [81].

Biophotonic Applications with Miniaturized Devices

Nucleic Acid Optical Mapping

Fluidic devices with nanochannels have been used to stretch single DNA molecules for length measurements and optical mapping. Advances in the NIL technique enable the fabrication of nanochannels with cross sections smaller than the persistence length of DNA. As a result, labeled single DNA molecules can be stretched and imaged in nanochannels [86]. This is a powerful tool for studying DNA because it isolates a single molecule from the bulk by limiting its degrees of freedom to 1D; imaging and analysis of DNA that is not immobilized to a surface thus become possible [87]. Generally, DNA stretching is achieved in nanochannels by two methods. In the first, DNA molecules flowing through a funnel-like nanochannel stretch spontaneously because of the gradually increasing flow velocity, known as elongational flow [88] (Fig. 5a). In the

Fig. 5 Stretching of single DNA molecules in nanochannels. (a) SEM image of a funnel-like nanochannel for generating elongational flow to drive DNA molecules into the nanochannel for DNA stretching. Scale bar is 1 μm (Reprinted with permission from Ref. [88]. Copyright (2010) American Chemical Society). (b) Schematic and fluorescence image of DNA stretching by electrophoretically driving DNA molecules into the nanochannel (Reprinted by permission from Macmillan Publishers Ltd: Ref. [91])


second method, the motion of DNA molecules is controlled by entropic confinement [89, 90]. For example, in one report, DNA stretching was achieved by electrophoretically driving long DNA molecules into 45 × 45 nm nanochannels [91] (Fig. 5b). The spontaneous stretching of DNA molecules in the confinement of a nanochannel results from the self-repulsion between negatively charged phosphate groups on the DNA backbone [87]. Three main techniques have been utilized to perform optical mapping on stretched DNA: restriction mapping, denaturation mapping, and sequence-specific tagging [92]. In conventional restriction mapping, dsDNA is first cut into small fragments by restriction enzymes, and the fragments are separated by electrophoresis. The length of each DNA fragment is determined by its distance traveled during electrophoresis, after which a restriction map of the DNA can be generated. However, this method does not work well with long DNA molecules and, furthermore, cannot perform single-cell mapping. These problems are conveniently solved when restriction cleavage is performed on a stretched single DNA molecule in a nanochannel. Due to the confinement of the nanochannel, the DNA fragments formed after restriction cleavage are retained at their original positions. They can be fluorescently labeled and imaged by microscopy [93] (Fig. 6a). Moreover, electrophoretic separation of the fragments is no longer necessary, as fragment lengths can be determined either by direct length measurement from microscopy or by measuring the fluorescence intensity of each fragment [92]. This makes the method promising for high-throughput processing. Additionally, the capability of performing single-cell mapping is important for studying genomic heterogeneity within the same cell type, so that differences in disease progression and drug response can be monitored. For example, Gupta et al.
[94] applied this optical mapping technique to study structural variation in the multiple myeloma genome. Denaturation mapping utilizes the differences in local melting temperatures along the length of a long dsDNA molecule to generate a unique barcode for that DNA. On a DNA molecule, AT-rich regions have lower double-helix stability than GC-rich regions, so AT-rich regions begin melting at lower temperatures. Reisner et al. [95] used this technique to generate a grayscale barcode of brighter and darker regions along the length of stretched DNA molecules. Briefly, dsDNA molecules are stained with an intercalating dye and stretched in a nanochannel. When heated, the DNA partially melts according to its base pair sequence, and the intercalating dye at the melted regions diffuses away to generate darker regions, while the unmelted regions remain brighter (Fig. 6b). One advantage of this method is the relatively simple procedure (staining and heating); no enzymatic pretreatment of the DNA is required. Nyberg et al. [96] used an antibiotic with high affinity for AT-rich regions to prevent binding of the intercalating dye there, creating a fluorescence map of AT-rich versus GC-rich regions on the DNA. For sequence-specific tagging, DNA molecules first undergo enzymatic pretreatment before being stretched in a nanochannel. Briefly, nicking enzymes create single-stranded nicks on dsDNA molecules in a sequence-specific manner. Fluorescent nucleotides are then incorporated into the nicking sites by DNA


Fig. 6 Three optical mapping techniques for single DNA analysis. (a) Restriction mapping: stretched DNA molecules are cut into fragments by restriction enzymes, and the locations of the restriction sites appear as holes (indicated by white arrows) along the length of the DNA molecules (Reprinted from Ref. [98], under CC BY 2.0). (b) Denaturation mapping: stretched DNA molecules partially melt when heated. The melted regions appear darker and the unmelted regions brighter, generating a unique barcode for that DNA molecule. Scale bar is 10 μm (Reprinted by permission from Macmillan Publishers Ltd: Ref. [99]). (c) Sequence-specific tagging: sequence-specific nicks are created on DNA samples, after which the nicked sites are refilled with fluorescent nucleotides. The labeled DNA molecules are then stretched in the nanochannel and imaged under the microscope (Reprinted by permission from Macmillan Publishers Ltd: Ref. [91])

polymerase. Finally, the treated DNA molecules are stretched in a nanochannel, and the fluorescent image can be observed [90, 91] (Fig. 6c). The distance between occurrences of the specific sequence can be measured directly from fluorescence imaging to construct an optical map. The positions of the nicking sites along the DNA can also be imaged by other mechanisms, such as detecting fluorescence resonance energy transfer (FRET) between the intercalating dye labeling the DNA and the acceptor dye labeling the nicking sites [97].
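Whichever labeling chemistry is used, the readout is ultimately an intensity profile along the stretched molecule that must be matched against a reference map. The sketch below shows one common way such barcodes are compared, via normalized cross-correlation; it is a toy illustration with synthetic data, not the analysis pipeline of any of the cited studies:

```python
import numpy as np

def normalize(profile):
    """Zero-mean, unit-variance version of an intensity profile (zeros if flat)."""
    p = np.asarray(profile, dtype=float)
    s = p.std()
    return (p - p.mean()) / s if s > 0 else np.zeros_like(p)

def barcode_match(query, reference):
    """Slide a short intensity barcode along a longer reference map and return
    the (offset, Pearson correlation) of the best alignment."""
    q = normalize(query)
    best_off, best_score = 0, -np.inf
    for off in range(len(reference) - len(query) + 1):
        w = normalize(reference[off:off + len(query)])
        score = float(np.dot(q, w)) / len(q)    # Pearson correlation in [-1, 1]
        if score > best_score:
            best_off, best_score = off, score
    return best_off, best_score

# Toy reference map with step-like bright/dark (e.g., GC/AT-like) regions
ref = np.concatenate([np.ones(20), 3 * np.ones(10), np.ones(15), 2 * np.ones(12)])
# A noisy fragment cut from position 25 of the reference
query = ref[25:45] + 0.05 * np.random.default_rng(0).standard_normal(20)
offset, score = barcode_match(query, ref)
print(offset, round(score, 2))      # recovers the true offset of 25
```

In practice the same idea is applied after rescaling for the molecule's stretch factor, since the degree of extension varies from channel to channel.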

Bioanalysis

The development of millifluidic-based bioanalytical devices, often categorized as lab-on-a-chip (LoC) systems, offers a highly efficient platform for cell manipulation, drug


Fig. 7 Design of the fluidic device: (a) schematic diagram of the fluidic device and (b) top-view image of the fabricated device with micrographs of the embryo and larval fish in the chambers (Reprinted from Ref. [101], under CC BY 4.0)

screening, mimicking tumors in vivo, and other applications [100–103]. Miniaturization in biological analysis encourages rapid analysis and the simultaneous operation of multiple assays, requiring low sample and reagent volumes and in turn producing little waste. Regardless of the fabrication method, demand for milli-scale fluidic devices remains exceptionally high, particularly in bioanalytical research. Li et al. developed a microfluidic device to assess the toxicity and teratogenic effect of the antiasthmatic agent aminophylline (Apl) on zebrafish embryos and larvae [101]. The work also demonstrated in situ analysis of the model organism, assessing survival rate, body length, and the hatching rate of the embryos. The fluidic device comprised two layers: a PDMS top layer bonded onto a glass slide. Fabrication was done by scribing the culturing chambers and fluidic channel patterns on a copper-based mold, from which a PDMS replica was molded. A single fabricated chip composed of two units allowed simultaneous analyses, with one unit used for toxicity and teratogenicity experiments and the other for larvae experiments (Fig. 7). Each unit contains a concentration gradient generator (CGG) with a sigmoidal distribution pattern that splits and mixes drug and media to produce a range of drug concentrations flowing into the culturing chambers (C1–C7). With this feature, rapid drug screening at the single-organism level can easily be performed. While many different methods exist to fabricate a fluidic device, it is of interest to ask how device performance is influenced by the choice of fabrication technique and material. A study done by Zhu et al.
used two different fabrication techniques, namely Multi-Jet Modeling (MJM) and stereolithography (SLA), with several types of UV-curable resin to investigate the quality and performance of 3D printed millifluidic devices for in situ analysis of zebrafish embryos. In this work, the fluidic device features a serpentine channel for loading living embryos and medium into an array of traps using hydrodynamic force via small suction channels, as seen in Fig. 8.
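The dilution series delivered by a tree-type CGG of the kind described above can be sketched with a toy mixing model. Here each internal outlet is assumed to mix its two parent streams 1:1 (a simplification; real devices tune channel resistances to shape the profile), which already yields a sigmoid-like distribution across seven chambers, consistent with the sigmoidal pattern mentioned above:

```python
def cgg_outlets(n_outlets, c_inlets=(1.0, 0.0)):
    """Toy model of a tree-type concentration gradient generator: at every
    stage the two outer streams pass straight through, and each internal
    outlet mixes its two parent streams 1:1."""
    conc = list(c_inlets)
    while len(conc) < n_outlets:
        middles = [(a + b) / 2.0 for a, b in zip(conc, conc[1:])]
        conc = [conc[0]] + middles + [conc[-1]]
    return conc

# Relative drug concentrations delivered to seven chambers (C1-C7),
# with pure drug (1.0) and pure medium (0.0) at the two inlets:
print(cgg_outlets(7))   # [1.0, 0.96875, 0.8125, 0.5, 0.1875, 0.03125, 0.0]
```

The steep middle and flat ends of this series are what make such generators convenient for dose-response screening around a suspected threshold concentration.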


Fig. 8 Schematic of the fluidic chip. (a) 2D CAD drawing showing miniaturized traps (T) positioned in an array of rows (R). (b) 3D CAD drawing. (c) Actual fabricated device using the SLA technique (Reproduced from Ref. [100] with permission of AIP Publishing)

Several tests were conducted in the study, including the biocompatibility of the 3D printed microenvironment for long-term monitoring of bioassays on living embryos and the efficiency of trapping embryos in the traps. Based on the results obtained, signs of toxicity were not observed for most of the tested resins at 72 h of incubation. Similar to the other tested chips, embryos placed within the chip fabricated using MJM showed no sign of mortality after 48 h of incubation; however, sudden embryo mortality of more than 75% was observed at 72 h. This observation suggests that the toxicity responses of embryos to water-soluble photo-curable polymer leachates are cumulative, with signs of toxicity appearing only after a certain period of time. In addition, the fabricated devices demonstrated their capability for fluorescence imaging of the zebrafish embryos, although the quality of the captured images is strongly influenced by the choice of resin. For instance, PDMS-fabricated chips can produce high-resolution stacked confocal images for a high-definition view of the embryo specimens. In contrast, some resins were not able to give a clear fluorescence image, as seen in Fig. 9. Further details of chip performance for the other resin types are given in the paper [101]. Overall, this study showed that millifluidic devices are suitable for the analysis of biological specimens.

[Fig. 9 panel labels: PDMS, VisiJet Crystal, Watershed 11122XC, Dreve Fototec 7150 Clear, VisiJet SL Clear, Form Clear]

Fig. 9 Optical transparency of the printed substrate for fluorescence imaging. (a) Fluorescence imaging of zebrafish embryo that was immobilized inside the 3D printed fluidic-based device. (b) High-resolution stacked confocal imaging of zebrafish embryo on devices fabricated using PDMS and soft lithography (left) and stereolithography in VisiJet SL clear resin (right) (Reproduced from Ref. [100] with permission of AIP Publishing)

Apart from this, investigations focusing on millifluidic droplet analyzer devices have been actively carried out in recent years. Millifluidic technologies are particularly appealing for droplet-based analyzer applications such as microbial studies [102, 103]. Baraban et al. developed a millifluidic droplet analyzer (MDA) to monitor the activities of Escherichia coli, including its growth rate and its resistance to the antibiotic cefotaxime. Referring to Fig. 10, the MDA successfully encapsulated bacteria into droplets with a predefined culture medium (e.g., nutrients and antibiotics) while relaying the droplet train to an attached detector block. The detector block, comprising a light source (a mercury lamp) and a photomultiplier tube (PMT), means the MDA is not limited to generating a large number of isolated droplets containing a bacterial strain in a predefined microenvironment but can also measure the minimum inhibitory concentration (MIC) of cefotaxime. A yellow fluorescent protein (YFP) gene was inserted at the galK locus of the bacterial chromosome to correlate the fluorescence signal with the number of cells within each droplet; thus, the MIC of cefotaxime for Escherichia coli growth could be monitored over a time span of 90 h.


Fig. 10 Schematic of the (a) millifluidic-based droplet analyzer, where antibiotics, nutrients, and bacteria are injected and mixed at Cross A; water-in-oil droplets are formed at Cross B using HFE oil. Mineral oil droplets are used as spacers to separate individual droplets. (b) The detector block measures growth of bacteria over time (Reproduced from Ref. [102] with permission of The Royal Society of Chemistry)


Fig. 11 Millifluidic devices for the analysis of microalgae. (a) Generation of uniform droplets containing microalgae, spatially separated by air spacers. (b) Detection block measuring chlorophyll fluorescence emission intensities inside each droplet. (c) Transparent FEP tube wrapped into a coil (Reprinted from Ref. [14], under CC BY 4.0)

On the other hand, Damodaran et al. of the same research group demonstrated an improved version of the MDA capable of monitoring the growth kinetics of microalgae for up to 140 h while conserving their viability and metabolism [14]. The working principle does not deviate much from the previous design; however, newly developed key features include the injection of air spacers (replacing the mineral oil spacers) to separate adjacent algal droplets (Fig. 11). The air spacers not only reduce the chance of algal cross contamination but also extend the stability and life span of the droplets. Moreover, complete isolation of the millidroplets from each other enables precise monitoring of the growth kinetics and size of

912

A.M. Soehartono et al.

individual droplets. Similar to the work presented by Baraban et al., a fluorescence readout was used to measure the chlorophyll fluorescence intensity of each drop and thereby estimate changes in the number of algal cells. Finally, an additional droplet-sorter module was included in the new design to enable the collection of single healthy algal droplets for further experimentation, such as MTT and MIC assay screening.
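In both droplet analyzers the raw observable is a per-droplet fluorescence time series. One minimal way to reduce such a trace to a growth rate, assuming an early exponential phase, is a log-linear fit over the sub-threshold part of the curve; the sketch below uses synthetic logistic data and is an illustration, not the authors' analysis code:

```python
import numpy as np

def growth_rate(t_hours, fluor, early_frac=0.1):
    """Estimate an exponential growth rate (1/h) from a droplet fluorescence
    trace by linear regression of log(F) over the early phase, defined as the
    points below `early_frac` of the plateau (maximum) signal."""
    t = np.asarray(t_hours, dtype=float)
    f = np.asarray(fluor, dtype=float)
    early = f < early_frac * f.max()
    slope, _intercept = np.polyfit(t[early], np.log(f[early]), 1)
    return slope

# Synthetic droplet: logistic growth with true rate 0.5/h, carrying capacity 100
t = np.linspace(0.0, 40.0, 200)
f = 100.0 / (1.0 + 99.0 * np.exp(-0.5 * t))
print(growth_rate(t, f))    # slightly below the true 0.5/h (finite-density bias)
```

Repeating this per droplet across a drug concentration series, the MIC can be read off as the lowest concentration at which the fitted rate collapses toward zero.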

Flow Cytometry

Flow cytometry is a powerful cell analysis tool for cytology, molecular biology, and clinical applications [19, 104–107]; it works by passing a stream of single cells through a focused laser beam at a rate of thousands of cells per second. Optical signals such as fluorescence, forward scatter, and side scatter are detected and translated into morphological information such as cell size, count, and subpopulation. In clinical diagnosis, flow cytometers are the gold standard in HIV diagnostics, used to count CD4+ T lymphocytes [106, 108, 109]. However, conventional flow cytometers are extremely expensive, complex, difficult to operate, and bulky, impeding widespread usage. Microfluidics provides a promising route to low-cost, portable, and easy-to-use flow cytometers [4, 107, 110]. State-of-the-art microfluidic systems enable testing of single cells, which naturally fits the requirements of flow cytometry. A number of microfluidic flow cytometers have been reported [106–113]. In comparison to conventional instruments, microfluidic flow cytometers are much smaller, with cells flowing through on-chip microchannels of micrometer dimensions. Illumination and optical signal detection are conducted through off-chip optics or on-chip optofluidic setups (Figs. 12a, b, and 13c). Microfluidic flow cytometry has several unique advantages, such as the capability of processing small-volume (100 nL–100 μL) and low-density samples [104]. Furthermore, the ability to integrate more sophisticated sample-manipulating functions in a microfluidic device enables screening, capturing, and testing of rare cells in each sample. For example, circulating tumor cells (CTCs) carry information about cancer metastasis, such as their viability, concentration, and phenotype, but are exceedingly rare in a cancer patient's blood sample (1–100 CTCs per 10⁹ blood cells) [115].
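This rarity has direct sampling consequences. If CTCs are Poisson-distributed in blood (an assumption for this back-of-envelope sketch, which also takes roughly 5 × 10⁹ blood cells per mL), the chance of capturing even one target cell depends strongly on the screened volume, which is why enrichment over sizeable blood volumes is needed before cytometry:

```python
import math

def p_detect_at_least_one(cells_per_ml, volume_ml):
    """Probability of capturing at least one target cell from a screened
    volume, assuming the cells are Poisson-distributed: 1 - exp(-lambda)."""
    lam = cells_per_ml * volume_ml
    return 1.0 - math.exp(-lam)

# Assuming ~5e9 blood cells per mL, 1 CTC per 1e9 cells is ~5 CTCs/mL.
print(p_detect_at_least_one(5.0, 0.1))   # a 0.1 mL aliquot often contains no CTC at all
print(p_detect_at_least_one(5.0, 1.0))   # a full mL almost always contains at least one
```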
Researchers have designed microfluidic interfaces for the enrichment of CTCs from blood by removing other blood cells to allow subsequent flow cytometry processing. Reported CTC enrichment methods include affinity-based capture using antibodies [116], deformability-based capture using microstructures [117, 118], and hydrodynamic separation, such as by inertial lift forces in a spiral channel [119, 120]. Besides CTCs, on-chip capturing and imaging of lymphoma cells from cerebrospinal fluid has been reported by Turetsky et al. as a new method to diagnose and characterize central nervous system (CNS) lymphoma [121]. In conventional flow cytometry, a surrounding sheath flow has to be added to the cell solution to achieve focused single-file passage, so that cells traverse the focused laser beam individually [107]. The surrounding sheath flow is achieved with a coaxial injection flow chamber, which is complex and expensive. It is


Fig. 12 Microfluidic flow cytometry. (a) Photograph of a microfluidic flow cytometer with fluidic networks and an integrated optofluidic setup. (b) Microscope image showing the hydrodynamic focusing of fluorescent polystyrene beads (green) and the alignment of optical fibers (125 μm in outer diameter). The FL fiber detects fluorescent light, the SSC fiber detects side scatter light, and the FSC fiber detects forward scatter light. (c) Schematic of a microfluidic flow cytometer with hydrodynamic cell focusing. A, B, C, and D are inputs for sheath flow channels. Cell distributions in cross-sectional planes are shown in insets 1, 2, 3, and 4 (Reprinted from Ref. [114] with permission of AIP Publishing)

important to note that adding a sheath flow significantly reduces the sample concentration, which is undesirable when sample/reagent volumes are limited [104]. In microfluidic flow cytometry, novel approaches to cell focusing have been devised. By defining the microfluidic network, horizontal focusing sheath flows can be added without the need for an external flow chamber (Fig. 12b, c) [107, 122]. For applications that require focusing cells in the vertical direction, microfluidic devices provide a simple yet useful method utilizing the Dean vortex in curved channels [107, 123–125], in which cells are focused into a specific cross-sectional plane (Fig. 12c). Combining the two techniques achieves 3D hydrodynamic cell focusing equivalent to a coaxial sheath flow. All sheath flow channels, along with the other microfluidic structures, can be easily fabricated by soft lithography. Sheathless microfluidic focusing techniques have also been reported, such as bulk acoustic standing waves (BAW) [126], standing surface acoustic waves (SSAW) [127], dielectrophoresis


(DEP) [128], and lateral flow displacement, which uses microstructures to induce flow path changes [129]. However, these techniques are difficult to incorporate into flow cytometry, as they increase system complexity. Conversely, combining several focusing techniques can compensate for the limitations of a single method; for example, hydrodynamic focusing can be incorporated with DEP and SSAW devices to reduce the orthogonal dispersion effect [107]. So far, most microfluidic flow cytometers are based on hydrodynamic cell focusing. Optical detection is another key function in flow cytometry. Most microfluidic flow cytometers integrate components such as lasers, optical fibers, filters, PMTs, and oscilloscopes to mimic the illumination, waveguiding, photodetection, and signal processing of conventional systems [107, 122]. However, in this respect, the miniaturization and integration of the optical system still have room for improvement. Recent demonstrations of on-chip CD4+ T-cell counting using microfluidic flow cytometers suggest another route to optical system integration. Moon et al. [110] and Ozcan et al. [130] presented a lensless shadow imaging technique to observe cells and acquire images with CCD/CMOS detectors integrated with the microfluidics. High-resolution cell imaging, termed microscope-on-a-chip, was demonstrated by Cui et al. [131]. With the development of built-in cameras on smartphones and tablets, there is clear potential for using such compact imaging devices in flow cytometry. Tseng et al. presented a lens-free microscopy technique installed on a cell phone [132]. Zhu et al. also designed a smartphone imaging system for microfluidic chips (Fig. 13) [106]. Furthermore, the image processing and result analysis were accomplished through software available in the smartphone operating system (Android).
It is also worth noting that because cell phones connect to the Internet, results can conveniently be analyzed by doctors or technicians located anywhere in the world. By integrating common equipment such as the optical microscope or CCD/CMOS sensors on smartphones, reliable, high-accuracy cell counting results can be acquired without the need for specialized hospital or laboratory equipment.
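The core image-analysis step behind such cell counters, binarizing the field of view and then counting connected bright objects, can be sketched in a few lines. The image here is synthetic, so this is a minimal illustration of the principle rather than the pipeline of the cited works:

```python
import numpy as np
from scipy import ndimage

def count_cells(image, threshold=0.5, min_size=10):
    """Smooth, binarize, and count connected bright regions."""
    mask = ndimage.gaussian_filter(image, sigma=1) > threshold
    labels, n = ndimage.label(mask)
    sizes = np.bincount(labels.ravel())        # pixel count per label (0 = background)
    return int(np.sum(sizes[1:] >= min_size))  # ignore specks smaller than min_size

# Synthetic "shadow image": four bright cell-like spots on a noisy background.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:200, 0:200]
img = np.zeros((200, 200))
for cy, cx in [(40, 50), (100, 120), (160, 60), (70, 170)]:
    img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 4.0 ** 2))
img += 0.05 * rng.standard_normal(img.shape)   # sensor noise

print(count_cells(img))  # 4
```

Real counters add segmentation of touching cells and per-object feature extraction, but the threshold-and-label core is the same.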

Plasmonic Biosensors

Immunoassays are routinely used in the clinical setting for disease diagnosis and therapeutic drug monitoring. However, these tests are labor-intensive, slow (with long incubation times due to inefficient mass transport), and consume expensive reagents and immunoagents [133]. Miniaturized biosensors are a platform that can eliminate these problems of conventional immunoassays by providing portability, high throughput, reduced reagent usage, automation, and low sample volumes. This is achieved by integrating microfluidic architecture with biosensing components. Biosensing is the analytical interrogation of biological samples by recording the response of a biological receptor upon analyte binding. The main elements of a biosensor are a transducer (mechanical, electrical, or optical) and a biological recognition element that captures an analyte. Analyte binding with the recognition element induces a measurable change in the signal, through variations in the refractive index

30 Miniaturized Fluidic Devices and Their Biophotonic Applications


Fig. 13 CD4+ cells were imaged via lens and CMOS on a smartphone. (a–c) Diagrams of optofluidic designs on the cell phone and cell imaging principle. (d) Photograph of the actual platform (Reprinted with permission from Ref. [106]. Copyright (2011) American Chemical Society)

sensitivity, intensity, or interference patterns. These correspond to optical properties of the sample, such as absorption, scattering, and reflectance. An ideal biosensor has high specificity, sensitivity, and selectivity in addition to multiplexing compatibility, real-time detection, low-cost fabrication, and portability. Detection sensitivity is important in the miniaturization process, as the detection volume scales with the device, and many works are devoted to improving this parameter. SPR has been widely used in the development of label-free biosensors. The excitation of surface plasmon polaritons (SPPs) forms an exponentially decaying evanescent wave of nanometer range that is highly sensitive to the surrounding medium. A change in the refractive index of the medium results in a change in the characteristics of the incident light [134]. This is the basis of SPR biosensing, in which analyte binding to a recognition element changes the resonance coupling of the incident light and the surface wave. SPR measurements can reveal the specificity, affinity, and kinetics of biomolecular interactions, as well as analyte concentration [135]. Transduction is actuated by a thin film, usually gold for its chemical stability and free-electron behavior, functionalized with a biological recognition element [136]. The typical interrogation scheme implements the prism-coupled Kretschmann configuration, where a metal thin film is excited by an incident

916

A.M. Soehartono et al.

source, usually a laser, through a prism, and the reflected light is collected at a detector. The evanescent field is limited to approximately 300 nm from the dielectric interface, making SPR sensing well suited to fluidic miniaturization. Additionally, the complete confinement of the liquid creates a perfectly conformal dielectric environment [137]. SPR imaging permits multiplexed, real-time detection by monitoring intensity or phase changes over a large surface area. Parallel channel or waveguide arrangements allow the interrogation of multiple analytes, reducing analysis time; this is useful in molecular analysis and diagnostic applications such as immunoassays. Ouellet et al. reported a 264-element individually addressable array of patterned gold films with a serial dilution network for binding kinetic measurements. Compared to a conventional plate-based enzyme-linked immunosorbent assay (ELISA), detection and quantitative analysis can be done in 10 min, as opposed to at least 60 min with the ELISA method [138] (Fig. 14a). Luo et al. performed a direct immunoassay for the detection of biotin–bovine serum albumin (BSA), analyzed by an array of gold spots under a network of microfluidic circuitry with a control layer and a flow layer, and found that the limit of detection (LOD) can be as low as 0.21 nM. By implementing a sandwich assay using a gold nanoparticle-labeled antibody, the LOD was improved to 38 pM. In another study, a microfluidic device with built-in temperature regulation studied the two-dimensional spatial phase variation of rabbit immunoglobulin (IgG) adsorption onto an anti-rabbit IgG functionalized film [40]. The analyte was delivered to the detection area through a series of microvalves and pumps. Instead of thin films, localized surface plasmon resonance (LSPR) biosensors are based on the excitation of metallic nanostructures. Their resonance properties depend on shape, size, interparticle distance, and the refractive index of the dielectric medium [139].
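The Kretschmann reflectance dip described above can be reproduced quantitatively with a three-layer Fresnel model. The gold optical constants, film thickness, and wavelength below are typical literature values assumed for illustration:

```python
import numpy as np

def spr_reflectance(theta_deg, n_medium=1.33, n_prism=1.515,
                    n_gold=0.18 + 3.2j, d=50e-9, wavelength=633e-9):
    """p-polarized reflectance of a prism/gold/medium stack
    (Kretschmann configuration, three-layer Fresnel/Airy model)."""
    k0 = 2 * np.pi / wavelength
    eps = np.array([n_prism ** 2, n_gold ** 2, n_medium ** 2], dtype=complex)
    kx = k0 * n_prism * np.sin(np.radians(theta_deg))
    kz = np.sqrt(eps[:, None] * k0 ** 2 - kx ** 2)  # normal wavevector per layer
    # p-polarization interface reflection coefficients
    r01 = (eps[1] * kz[0] - eps[0] * kz[1]) / (eps[1] * kz[0] + eps[0] * kz[1])
    r12 = (eps[2] * kz[1] - eps[1] * kz[2]) / (eps[2] * kz[1] + eps[1] * kz[2])
    phase = np.exp(2j * kz[1] * d)                  # round trip across the gold film
    r = (r01 + r12 * phase) / (1 + r01 * r12 * phase)
    return np.abs(r) ** 2

angles = np.linspace(60, 85, 2501)
dip_water = angles[np.argmin(spr_reflectance(angles, n_medium=1.33))]
dip_bound = angles[np.argmin(spr_reflectance(angles, n_medium=1.35))]
print(dip_water, dip_bound)  # resonance dip moves to a larger angle as n rises
```

Scanning the dip angle (or, equivalently, intensity at a fixed angle) against time is exactly what an SPR instrument records as analyte binds.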
In LSPR, the evanescent field decays over a shorter range (on the order of 100 nm) than in SPR, so the sensing volume is reduced. Several structures have been reported, including nanospheres [140], nanorods [141], nanorings [142], and nanoholes, with patterning achievable through self-assembly, electron beam lithography, and, more recently, NIL. In one example, arrayed gold nanodisks fabricated on glass by nanoimprint lithography were used to detect prostate-specific antigen (PSA) in a sandwich immunoassay; the wavelength-shift amplification due to the gold nanodisks resulted in femtomolar detection [143]. In another example, a multiplex immunoassay was developed to detect six different cytokines simultaneously using barcode channel arrays comprising immobilized gold nanorods [144]. The arrays were patterned using a simple PDMS channel; gold nanorods have been shown to have superior sensitivity over their nanosphere counterparts. Six cytokine concentrations, quantified from the scattering intensity, were measured over 480 sensing spots (Fig. 14b) with sensitivities between 5 and 20 pg/mL from a limited sample volume (1 μL). This assay was used to monitor the inflammatory response of infants after cardiopulmonary bypass surgery. More recently, nanowells and nanoholes have been shown to be attractive sensing platforms that can host LSPR modes, with in-hole sensing of proteins in the attomolar range while confining fluids within their volumes [145–147]. Their relative ease of fabrication through sacrificial array layers or imprinting makes them an attractive



Fig. 14 (a) Microfluidic architecture of an SPR-microfluidic chip (Reproduced from Ref. [138] with permission of The Royal Society of Chemistry). (b) Fabrication of an LSPR multiplex cytokine immunoassay, accompanied by a histogram showing interparticle distance of the nanorods, and the principle of LSPR detection (Reprinted with permission from Ref. [144]. Copyright (2015) American Chemical Society)


platform for high-sensitivity detection. The earliest reported nanohole SPR sensors had dead-end holes, with analytes flowed over the array, and achieved sensitivities of 400 nm per refractive index unit (RIU) [148]. However, it was later found that the innermost parts of the holes provide the highest levels of transduction, with sensitivities of 650 nm/RIU [146], prompting research into efficiently transporting analytes into the holes. Flow-through nanohole array sensors permit targeted delivery of analytes and use extraordinary optical transmission (EOT) as the basis of detection. EOT is a phenomenon in which SPP excitation results in enhanced transmission of light through subwavelength apertures [149], which can be captured with a collinear detector. Yanik et al. demonstrated a lift-off-free fabrication process using EBL and RIE to pattern a nanohole array suspended between two fluidic chambers. The flow-through arrangement ensures analytes are delivered to the holes and is achieved by blocking one of the two inlets/outlets at the top and bottom layers, steering the flow perpendicularly through the array. Both flow-over (diffusive flow) and flow-through (targeted delivery) methods are shown in Fig. 15. In a comparative study, a 14-fold increase in the mass transport rate constant was seen, jumping from 0.0158 min⁻¹ in a flow-over scheme to 0.2193 min⁻¹ in a flow-through scheme [150], with sensitivities of 630 nm/RIU reported. As mentioned before, decreasing dimensions also enable multifunctional operation, including the combination of multiple fluidic regimes. To improve sensitivity at ultralow concentrations, Wang et al. electrokinetically concentrated the target molecules prior to analyte binding in a bead-based immunoassay, using a nanofluidic preconcentrator integrated within a microfluidic channel [152]. Varying the residence time in the preconcentrator enhanced the immunoassay sensitivity by more than 500-fold, from 50 pM to the sub-100 fM range.
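Treating analyte capture as a simple first-order approach to saturation (an idealization of the binding kinetics measured in the comparative study), the two reported rate constants translate directly into assay time:

```python
import numpy as np

# Observed binding rate constants for the two delivery schemes (from the text)
k_over = 0.0158     # 1/min, flow-over (diffusive) delivery
k_through = 0.2193  # 1/min, flow-through (targeted) delivery

def coverage(t_min, k):
    """Fractional surface coverage for first-order binding: 1 - exp(-k t)."""
    return 1.0 - np.exp(-k * t_min)

t90_over = np.log(10) / k_over        # time to reach 90% of saturation
t90_through = np.log(10) / k_through
print(f"90% binding: {t90_over:.0f} min (flow-over) vs "
      f"{t90_through:.1f} min (flow-through), "
      f"{t90_over / t90_through:.1f}x faster")
```

The 14-fold ratio of rate constants thus shortens a roughly two-hour diffusive assay to about ten minutes under targeted delivery.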
Furthermore, coupled with advances in optical technology, monolithic optoelectronic integration could eliminate some of the bulky instrumentation. Optical fiber delivery of the excitation illumination may present challenges for integration with planar fluidic devices, so multimode optical waveguides can be used to this end. As mentioned earlier, an SPR sensing system comprises several components: the source, prism, transducer, and detector. In an example of a fully integrated miniaturized device, an SPR sensor using a planar waveguide with two light-emitting diodes (LEDs) and a photodetector was reported. The dual-LED configuration detects the differential intensity shift at two wavelengths to increase the detection sensitivity [151]. An aperture focuses the LED illumination, with alternating illumination patterns controlled through a microcomputer. The light passes through a prism sheet and polarizer before being confined in the Pyrex waveguide encasing a gold thin-film transducer, and a photodetector records the output light. The schematic and the actual device are shown in Fig. 15c, d. Without needing a laser or spectrograph, this device presents a low-cost and portable alternative to conventional SPR systems. Waveguides also permit interferometric interrogation. In a Mach–Zehnder interferometric waveguide [153], the light path is split into two arms: a reference arm and a sensing arm. The sensing arm contains a sensing area with recognition elements, where evanescent-field interactions with the medium occur. The paths are subsequently recombined and the optical signal detected as an interference pattern. The principle of operation relies on a refractive index change

Fig. 15 Schematic of (a) a flow-over nanohole array and (b) a flow-through nanohole array. Flow through is initiated by the blockage of the inlet/outlet at the top and bottom layer (Reprinted from Ref. [150] with permission of AIP Publishing). (c) Schematic configuration and (d) actual image of the planar optical waveguide (Reprinted from Ref. [151]. Copyright (2005) with permission from Elsevier)

in the medium that changes the effective propagation index of the guided modes, resulting in a phase shift observed in the interference pattern. Integrated with SU-8 microfluidics, the device had a detection limit of 6 × 10⁻⁴ RIU.
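The phase readout just described follows from a one-line relation between the effective-index change and the interferometer output; the arm length and wavelength below are assumed for illustration:

```python
import numpy as np

def mzi_output(dn_eff, arm_length=15e-3, wavelength=633e-9):
    """Normalized output of a Mach-Zehnder waveguide interferometer.

    An effective-index change dn_eff over the sensing arm produces a phase
    difference dphi = 2*pi*L*dn_eff/lambda between the arms, detected as
    I/I0 = cos^2(dphi/2) after recombination.
    """
    dphi = 2 * np.pi * arm_length * dn_eff / wavelength
    return np.cos(dphi / 2) ** 2

# Effective-index change that produces one full output fringe (dphi = 2*pi)
one_fringe = 633e-9 / 15e-3
print(one_fringe)  # ~4.2e-5 of effective index per fringe
```

A long sensing arm therefore converts a tiny index change into a large, easily resolved phase shift, which is why waveguide interferometers reach detection limits in the 10⁻⁴ RIU range and below.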

Nanoparticle Synthesis

Nanoparticles are materials on the order of 1–100 nm in size and have been shown to have various biomedical applications. For example, nanoparticles have been used as


drug delivery vehicles for the treatment of chronic diseases such as cancer, and as imaging probes for the visualization of tumors and the marking of various cells. Increasing miniaturization of laboratory systems and devices has led to the integration of functional components in miniaturized devices known as LoC devices. For nanosynthesis of materials for biomedical applications in particular, these LoC systems possess many characteristics favorable for nanoparticle synthesis. For instance, LoC devices require only small sample amounts, have quick reaction times, and provide unique, controlled microenvironments that are crucial for nanosynthesis. These microenvironments are paramount for controlling key nanoparticle characteristics such as morphology, size, and surface chemistry. Traditionally, in conventional benchtop (BT) nanoparticle synthesis, reactants are added into a three-neck flask. For synthesis protocols where an inert environment is necessary to prevent oxidation of the reactant species, nitrogen (N2) or argon (Ar) gas is fed into the three-neck flask. A magnetic stirrer bar is added to mix the reactants by physical agitation. The reaction mixture can be heated directly on a hot plate or heating mantle, or by immersion in a water or oil bath, with a temperature probe to regulate the temperature of the reaction mixture. For nanoparticle synthesis in miniature channels, on the other hand, the reaction mixtures are fed into the inlets of the chip using tubes connected to syringe pumps. The channels serve as the reaction vessel for microfluidic synthesis as the fluid traverses the chip. There are two main categories of fluid flow in microchannels: continuous flow (single phase) and segmented flow. The continuous flow scheme involves a single uninterrupted fluid stream consisting of the reactant solutions (Fig. 16a).
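Because flow in such channels is laminar, reagents in a continuous stream mix only by transverse diffusion, and a back-of-the-envelope estimate (all values assumed, order of magnitude only) shows why this can demand very long channels or segmented flow:

```python
# Order-of-magnitude mixing estimate for a continuous laminar stream:
# a solute needs roughly t ~ w^2 / (4 D) to diffuse across a channel of
# width w, which at mean speed U corresponds to a downstream length U * t.
w = 200e-6   # channel width, m (typical microchannel, assumed)
U = 1e-2     # mean flow speed, m/s (assumed)
D = 5e-10    # small-molecule diffusivity in water, m^2/s

t_mix = w ** 2 / (4 * D)   # time to mix by diffusion alone, s
L_mix = U * t_mix          # channel length traversed in that time, m
Pe = U * w / D             # Peclet number: convection vs. diffusion
print(t_mix, L_mix, Pe)
```

With these numbers, diffusion alone needs about 20 s and 0.2 m of channel, and the Peclet number of a few thousand confirms that convection dominates; segmented (droplet) flow sidesteps this by recirculating fluid inside each slug.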
Mixing in the continuous flow scheme occurs only via passive diffusion between the co-flowing laminar reagent streams. Another


Fig. 16 (a) Continuous flow, (b) liquid–liquid segmented flow, and (c) liquid–gas fluid slug segmented flow (Reprinted with permission from Ref. [154]. Copyright 2006 John Wiley & Sons, Inc.)


flow regime is the segmented flow scheme, which can be further subdivided into gas–liquid segmented flow and liquid–liquid segmented flow. In contrast to a single fluid stream, gas–liquid segmented flow has the reaction mixtures trapped in liquid slugs separated by gas bubbles (Fig. 16c), while in liquid–liquid segmented flow the reaction mixture forms droplets carried by an immiscible carrier fluid (Fig. 16b). The segmented flow scheme promotes rapid mixing within the discrete droplets in the otherwise laminar flow in the microchannels, and also prevents channel fouling because contact between reactant droplets and the channel walls is minimized compared to the continuous flow regime. The process of nanoparticle formation is best explained using the LaMer plot [156], illustrated in Fig. 17, with three phases. In the first phase (phase I), the monomer concentration of the precursors increases and exceeds the saturation level (S), so the solution becomes supersaturated. To form highly monodisperse nanoparticles, it is essential that no seeds are present; otherwise, heterogeneous nucleation will occur, producing particles of different sizes. At this point, owing to the high activation energy of homogeneous nucleation, no particles form yet, and the monomer concentration continues to build up. In phase II, the nucleation stage, the monomer concentration reaches the critical supersaturation level (SC), and the energy of the system becomes sufficient to overcome the nucleation barrier. This results in a rapid formation of nuclei (i.e., burst nucleation). As a result of the burst nucleation, the monomer concentration falls rapidly until it drops below the critical value SC, and the system enters phase III, the growth stage. In this phase, further nucleation cannot be sustained; the existing particles instead increase in size, constituting the growth process.
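The three LaMer phases can be mimicked with a toy rate model (arbitrary units, illustrative only, not a quantitative nucleation theory):

```python
import numpy as np

# Toy rate model of the LaMer mechanism: a precursor converts to monomer;
# nuclei form only above the critical supersaturation Sc (burst nucleation);
# existing nuclei then grow, consuming monomer, whenever C exceeds S.
S, Sc = 1.0, 2.0          # saturation and critical supersaturation levels
k_r = 0.05                # precursor -> monomer conversion rate
k_nuc, k_grow = 0.5, 0.2  # nucleation and growth rate constants

P, C, N = 10.0, 0.0, 0.0  # precursor, monomer concentration, nuclei count
dt, steps = 0.01, 40000
C_hist = np.empty(steps)
for i in range(steps):
    supply = k_r * P                         # phase I: monomer builds up
    nucleation = k_nuc * max(C - Sc, 0.0)    # phase II: burst above Sc
    growth = k_grow * N * max(C - S, 0.0)    # phase III: growth above S
    P += -supply * dt
    N += nucleation * dt
    C += (supply - nucleation - growth) * dt
    C_hist[i] = C

# C overshoots Sc (nucleation burst), then relaxes back toward S (growth)
print(C_hist.max(), C_hist[-1], N)
```

The simulated concentration trace reproduces the qualitative shape of the LaMer plot: a rise past Sc, a brief nucleation burst, and a slow relaxation toward S as the frozen population of nuclei grows.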
Therefore, burst nucleation is highly desirable for producing monodisperse nanoparticles, as all the nuclei are formed simultaneously and experience identical growth conditions. Although there is much interest in the synthesis of nanoparticles and polymeric materials using microfluidic chips and flow chemistry [154, 157, 158], subsequent applications of these as-synthesized nanoparticles remain rather limited. We have therefore selected a few studies which we hope will generate more interest

Fig. 17 LaMer plot of nanoparticle formation, showing the nucleation (phase II) and growth (phase III) of the nanoparticles (Reproduced from Ref. [155] with permission of The Royal Society of Chemistry)


in the biophotonic applications of these materials and spur further development of nanoparticles using fluidic devices. In the following subsections, three biophotonic applications of nanoparticles synthesized using miniature fluidic chip devices are presented. First, we discuss cadmium telluride (CdTe) quantum dot (QD) synthesis in a microfluidic chip and its application to cell imaging. Next, the microfluidic fabrication of chitosan nanoparticles loaded with the drug paclitaxel for cancer therapy is examined. The last application involves the millifluidic chip synthesis of copper sulfide (Cu2−xS) nanocrystals for laser photothermal therapy.

Bioimaging Using QDs

Quantum dots (QDs) are semiconductor nanoparticles smaller than the exciton Bohr radius of the bulk material. As a consequence of the quantum confinement effect, the emission wavelength of QDs can therefore be tuned by controlling their physical size. The photoluminescence of QDs is the reason these nanoparticles are widely used in bioimaging applications. In contrast to traditional fluorescent organic dyes, which have narrow excitation peaks and broad emission spectra [159], QDs have broad excitation spectra and narrow emission peaks. This unique property enables multiple fluorescent QD markers (with different emission wavelengths) labeling different cellular regions of interest to be observed using a single excitation wavelength [160, 161]. In addition, QDs are more resistant to photobleaching than organic dyes [162], retaining their ability to fluoresce under continuous exposure at the excitation wavelength, thereby allowing studies that require uninterrupted monitoring of fluorescence emission, such as the elucidation of signaling pathways [163] and cancer metastasis [164]. Hu et al. from our group presented a detailed study of the microfluidic chip synthesis of cadmium telluride (CdTe) QDs conjugated with bovine serum albumin (BSA) and folic acid (FA) biomolecules for imaging macrophage and pancreatic cancer cells [165]. In this work, the effects of parameters such as the reaction temperature and channel dimensions (800 mm in length, 200 μm in height, and widths of 200 μm, 400 μm, and 600 μm) on the quality of the QDs produced (Fig. 18a, b) were investigated. In addition, the photoluminescence and photostability of the CdTe–BSA QDs synthesized in the microfluidic (MF) chip and by the conventional benchtop (BT) method were compared. As presented in Fig. 18c, d, the microfluidic chip produced QDs of comparable quality and stability in a drastically shorter time.
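The size dependence of the emission invoked above can be estimated with the Brus effective-mass model; the CdTe bulk parameters below are approximate literature values, so the numbers are indicative only:

```python
import numpy as np

# Physical constants (SI) and approximate CdTe bulk parameters (assumed)
hbar = 1.054571817e-34; c = 2.99792458e8
e = 1.602176634e-19; eps0 = 8.8541878128e-12; m0 = 9.1093837015e-31
Eg = 1.5 * e                   # bulk band gap of CdTe, J
me, mh = 0.11 * m0, 0.35 * m0  # electron / hole effective masses
eps_r = 10.2                   # relative permittivity

def qd_emission_wavelength(radius):
    """Brus effective-mass estimate of the first excitonic transition energy,
    converted to an emission wavelength via lambda = h c / E."""
    confinement = (hbar ** 2 * np.pi ** 2 / (2 * radius ** 2)) * (1 / me + 1 / mh)
    coulomb = 1.786 * e ** 2 / (4 * np.pi * eps0 * eps_r * radius)
    return 2 * np.pi * hbar * c / (Eg + confinement - coulomb)

for r_nm in (2.0, 3.0, 4.0):
    lam = qd_emission_wavelength(r_nm * 1e-9)
    print(f"R = {r_nm:.0f} nm -> emission ~ {lam * 1e9:.0f} nm")
```

Smaller dots emit bluer light, sweeping from green to deep red across this size range, which is exactly the size tunability exploited in the chip synthesis.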
RAW264.7 mouse macrophages and Panc-1 human pancreatic cancer cells were seeded onto cover glasses with Dulbecco's Modified Eagle's Medium (DMEM) in a six-well plate prior to treatment with the QDs. The cells were then treated with the BSA-QD, 3-mercaptopropionic acid QD (MPA-QD), and FA-QD formulations and incubated for 4 h. After incubation, the treated cells were rinsed thrice with phosphate-buffered saline and observed under a microscope. The microscope images in Fig. 19a, c show the RAW264.7 and Panc-1 cells labeled using the as-synthesized BSA-QDs and FA-QDs. The bright-green fluorescence signals are a clear indication that the microfluidic BSA-QDs and FA-QDs


Fig. 18 (a) Simulated and actual microfluidic chips with different channel dimensions, (b) the corresponding experimental results of the microfluidic chip synthesized BSA–CdTe quantum dots (QDs), (c) photoluminescence, and (d) photostability characterizations of the microfluidic (MF) and benchtop (BT) BSA-QDs. (Reproduced from Ref. [165] with permission of The Royal Society of Chemistry)

provide excellent contrast and hence can be used as optical contrast agents for cell imaging. In addition, folic acid (FA) functions as a targeting ligand, as folate receptors are overexpressed in many types of cancer cells. Lastly, the MPA-QDs in the middle row (Fig. 19b) were included, despite MPA not being a biomolecule, to show that uptake of the QDs into the cells occurred via a specific bio-mediated pathway rather than by passive diffusion.

Drug Delivery

Most anticancer drugs, such as camptothecin and paclitaxel, are hydrophobic in nature, making it difficult to deliver them directly to the diseased sites; nanoparticles can therefore be used as drug delivery agents to encapsulate these hydrophobic drugs and transport them to the tumor cells. Majedi et al. [166] used a T-shaped microfluidic chip, in which the mixing channel was 150 μm wide, 60 μm high, and 1 cm long, to assemble chitosan nanoparticles loaded with the cancer chemotherapy drug paclitaxel. Chitosan is a polysaccharide that is popular as a drug delivery carrier because it is biodegradable, easily chemically functionalized, and sensitive to


Fig. 19 Bioimaging of RAW264.7 mouse macrophage cells and Panc-1 human pancreatic cancer cells labeled with the microfluidic (MF) (a) BSA-QDs, (b) MPA-QDs, and (c) FA-QDs. (Reproduced from Ref. [165] with permission of The Royal Society of Chemistry)

pH changes [167]. By injecting the hydrophobically modified chitosan supplement solution (HMCS) together with the paclitaxel chemotherapy drug (the polymeric stream, dark-green region in Fig. 20) and infusing the other two inlets (the carrier streams, light-green regions) with water (an immiscible solution), a tightly focused fluid stream was formed. By varying the relative flow rates of the polymeric stream and the carrier stream, different mixing regimes were attained, yielding nanoparticles with different sizes, compactness, and surface charges. The chitosan nanoparticles were labeled with fluorescein isothiocyanate (FITC) dye so that their cellular uptake could be quantified by measuring the mean fluorescence intensity at different nanoparticle concentrations using flow cytometry. Human breast adenocarcinoma (MCF-7) cells were employed as the cell model in this case study. Figure 21 shows that the microfluidic HMCS nanoparticles exhibited enhanced cellular internalization compared to the bulk-synthesized nanoparticles, which was attributed to the smaller particle sizes, increased compactness, and higher surface charge conferred by the microfluidic synthesis [168, 169].
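The degree of hydrodynamic flow focusing can be estimated from a simple flow-rate balance (assuming a uniform velocity profile and matched viscosities; the flow rates below are hypothetical, not those of the cited study):

```python
def focused_width(channel_width, q_core, q_sheath_each, n_sheath=2):
    """Mass-balance estimate of hydrodynamic focusing: with a uniform
    velocity profile and matched viscosities, the core stream occupies a
    fraction of the channel equal to its fraction of the total flow rate."""
    q_total = q_core + n_sheath * q_sheath_each
    return channel_width * q_core / q_total

# 150-um-wide mixing channel with a core stream focused by two side streams
# at a 1:10 flow-rate ratio (hypothetical rates):
w_focused = focused_width(150e-6, q_core=1.0, q_sheath_each=10.0)
print(f"{w_focused * 1e6:.1f} um")  # 7.1 um
```

Squeezing the polymeric stream to a few micrometers is what shortens the diffusive mixing time and lets the flow-rate ratio dial in the nanoparticle size.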


Fig. 20 T-shaped microfluidic chip for assembly of high molecular weight chitosan supplement (HMCS) nanoparticles via hydrodynamic flow focusing (Reprinted with permission from Ref. [166]. Copyright 2014 John Wiley & Sons, Inc.)

Fig. 21 Fluorescence intensity of fluorescein isothiocyanate (FITC)-labeled chitosan nanoparticles (FITC-HMCS or f-HMCS) indicating the cellular uptake with respect to f-HMCS concentration (Reprinted with permission from Ref. [166]. Copyright 2014 John Wiley & Sons, Inc.)


Fig. 22 Paclitaxel (PTX) drug release profile of the chip synthesized chitosan nanoparticles as a function of the environmental pH to mimic circulation around tumor cells (Reprinted with permission from Ref. [166]. Copyright 2014 John Wiley & Sons, Inc.)

The chitosan nanoparticles synthesized in the microfluidic chip enabled controlled release of the PTX drug over time, with release rates that vary according to the environmental pH. Figure 22 depicts the in vitro simulation of the cellular environment: the nanoparticles circulate in the body (pH = 7.4) maintaining their drug load, encounter tumor cells (pH = 6.5), which increases the drug release rate, and are engulfed by the lysosome after entering the tumor cell (pH = 5.5), where they rapidly off-load the drug. The chip-synthesized chitosan nanoparticles are therefore capable of carrying and delivering the anticancer drug to the targeted tumor cells owing to their pH sensitivity, and the process can be monitored by measuring the fluorescence emission, as the nanoparticles are labeled with FITC dye.
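Such a pH-dependent release profile can be caricatured with a first-order model whose rate constant grows as the environment acidifies; the rate constants below are assumed for illustration and are not fitted to the data in Fig. 22:

```python
import numpy as np

# Hypothetical first-order rate constants (1/h): release speeds up as the
# environment acidifies (circulation -> tumor exterior -> lysosome).
k_by_ph = {7.4: 0.005, 6.5: 0.02, 5.5: 0.08}

def cumulative_release(t_hours, ph, r_max=1.0):
    """Cumulative fraction released: R(t) = R_max * (1 - exp(-k t))."""
    return r_max * (1.0 - np.exp(-k_by_ph[ph] * t_hours))

for ph in (7.4, 6.5, 5.5):
    print(f"pH {ph}: {100 * cumulative_release(24, ph):.0f}% released after 24 h")
```

Even this crude model captures the desired behavior: the payload is largely retained at physiological pH and dumped quickly in the acidic lysosomal environment.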

Photothermal Therapy

Photothermal therapy (PTT) makes use of photothermal agents that convert light energy (laser irradiation) into heat, causing localized temperature elevations. Nanoparticles such as gold nanostructures [170, 171] and carbon-based nanomaterials like graphene [172] and carbon nanotubes [173] are commonly studied photothermal agents, while copper sulfide nanoparticles have only recently gained popularity [62, 174, 175]. The idea underlying photothermal therapy is to have only malignant cells take up the nanoparticles, so that upon near-infrared (NIR) irradiation only the cancerous cells are destroyed, leaving the neighboring healthy tissues unscathed and thereby achieving targeted cancer therapy. Our group recently published work on the synthesis of copper sulfide (Cu2−xS) nanoparticles using a millifluidic chip with channels of millimeter dimensions (1.5 mm wide, 1.5 mm high, and 545 mm in total length) [176], as presented in Fig. 23. This chip produced Cu2−xS nanoparticles of different sizes, morphologies,



Fig. 23 (a) The annotated schematic diagram, (b) actual chip, (c) top view, and (d) cross-sectional view of the millifluidic chip device used for the Cu2−xS nanocrystal synthesis (Reproduced from Ref. [176] with permission of The Royal Society of Chemistry)

and crystal structures by varying factors such as the injection flow rate of the precursors and the molar ratio of the copper and sulfur precursors. The millifluidic chip is simple and cost-effective to fabricate, provides relatively high throughput, and at the same time retains the advantages of microfluidic synthesis, such as laminar flow in the channels [157], distance-to-time spatial resolution of the reaction process [158], and a large surface-area-to-volume ratio for uniform heating. To be used for biological applications, the Cu2−xS nanoparticles had to undergo ligand exchange with L-glutathione (GSH) to transfer them from the initial organic phase to the aqueous phase, ensuring biocompatibility and facilitating their subsequent uptake into cells. RAW264.7 mouse macrophage cells were cultured with DMEM in a six-well plate before 13.5 μM and 27 μM of Cu2−xS–GSH nanoparticles were added. The treated cells were washed thrice with PBS buffer following 4 h of incubation at 37 °C and 5% CO2. These treated cells were then exposed to a 915 nm near-infrared (NIR) fiber laser at power densities of 36.7 W/cm2 and 52.1 W/cm2 for 15 min each, after which the cells were stained with propidium iodide (PI), a fluorescent dye that is impermeable to the cell membrane of a healthy, viable cell. Figure 24 illustrates the results after PI staining, where regions in red indicate that the macrophage cells have lost their plasma membrane integrity, thereby allowing


Fig. 24 Microscope pictures of the effect of varying copper sulfide (Cu2−xS) nanoparticle concentration on RAW264.7 mouse macrophage cells after illumination by near-infrared (NIR) laser light at 915 nm for 15 min at power densities of 36.7 W/cm2 and 52.1 W/cm2. Cell death and damage show up in red after staining with propidium iodide (PI). The inset at the extreme left corner depicts the view of the photothermal therapy setup captured through a NIR viewer (Reproduced from Ref. [176] with permission of The Royal Society of Chemistry)

the dye to diffuse through the cell membranes and stain the cells. At power densities of 36.7 W/cm2 and 52.1 W/cm2, conspicuous regions of cell death were observed, while the controls, untreated macrophage cells (i.e., no Cu2−xS) and cells without NIR irradiation, remained viable. Close examination of the stained areas revealed roughly circular regions, annotated in Fig. 25: at a fixed power density of 52.1 W/cm2, the diameter of the stained region was 571 μm at the lower Cu2−xS concentration of 13.5 μM, compared with 1472 μm at the higher concentration of 27 μM. Despite the limited photothermal conversion efficiency of the as-synthesized spherical Cu2−xS, we believe that higher conversion efficiencies can be achieved by further optimizing the laser irradiation wavelength and employing specially engineered Cu2−xS superstructures, as demonstrated by Tian et al. [174].
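The magnitude of photothermal heating can be gauged with a lumped energy balance; every parameter below is an assumed, order-of-magnitude value, not data from the cited study:

```python
import numpy as np

# Lumped energy balance (illustrative): absorbed laser power heats the
# illuminated medium while Newtonian cooling removes heat, giving
# dT(t) = dT_ss * (1 - exp(-t / tau)).
P = 0.5        # laser power in the spot, W (~52 W/cm^2 over ~1 mm^2, assumed)
eta = 0.3      # fraction absorbed (uptake x photothermal efficiency, assumed)
h = 50.0       # effective heat-transfer coefficient, W/(m^2 K), assumed
A = 1e-4       # heat-loss area, m^2, assumed
m_c = 1e-3 * 4186.0   # thermal mass of ~1 mL water-like medium, J/K

dT_ss = eta * P / (h * A)                  # steady-state temperature rise, K
tau = m_c / (h * A)                        # thermal time constant, s
dT_15min = dT_ss * (1 - np.exp(-15 * 60 / tau))
print(dT_ss, tau / 60, dT_15min)           # ~30 K ceiling, tau ~14 min, ~20 K rise
```

A bulk rise of a few tens of kelvin over a 15 min exposure is ample to push cells past hyperthermic damage thresholds, and local temperatures at the particle surfaces can exceed this average, consistent with the sharply bounded regions of cell death observed.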

Summary and Future Perspectives

Miniaturized devices have emerged as a promising platform for biophotonics, providing advantages such as tunability, portability, and the potential integration of different components. We believe that the combination of milli- and nanofluidic regimes with existing microfluidic systems will provide a comprehensive platform for personalized medicine. In the near future, we predict steady growth in research areas such as diagnostics and therapy, where miniaturized devices have potential but need further refinement to overcome challenges in medical research, diagnostics, nanomedicine, and fabrication technologies.


Fig. 25 Microscope images with annotated circular regions indicating the extent of cell damage as a result of both copper sulfide (Cu2−xS) nanoparticle concentration and the near-infrared (NIR) laser power density (Reproduced from Ref. [176] with permission of The Royal Society of Chemistry)

In medical research, we see two important applications: first, nanofluidics in genomics and, second, micro- and millifluidics in cellomics. Nanofluidics enables the imaging of stretched single DNA molecules, which can be used to generate optical maps of single DNA with high sensitivity and resolution. Current optical gene mapping is limited by DNA length and cannot handle small sample volumes. These challenges can be addressed with nanofluidic devices, which can handle large DNA molecules (tens to hundreds of kilobase pairs). When integrated with existing microfluidic devices for sample preparation, such as cell sorting and lysis, DNA extraction from a single cell becomes possible. Such a tool could broaden current knowledge of the influence of genetics on disease predisposition and help predict individualized drug responses. However, current nanofabrication techniques are still laborious and expensive. Nanofluidics therefore urgently needs simpler and highly reproducible fabrication technologies that achieve nanometric resolution before it can reach critical mass.

In diagnostics and therapeutic monitoring, miniaturized fluidic devices are attractive as a platform for flow cytometry and plasmonic biosensing. To this end, plasmonic biosensors need to reliably detect trace quantities of molecules of interest with high sensitivity and specificity. Although many biomarkers exist, their


effectiveness is contingent on the intrinsic ability of the marker to correctly associate a detectable shift with a diagnosis or outcome. Furthermore, biomarkers should have long-term shelf-life stability. As such, more work needs to be done to understand the fundamentals of the biomarkers. In addition to improving the limit of detection, multiplexed biosensing will allow high-throughput screening. While some works have explored this, it would be of interest to extend multiplexed sensing further. Moreover, by employing multiple biomarkers, it is hoped that detection accuracy can be increased and false positives greatly reduced.

In flow cytometry, low-cost, portable, and easy-to-use flow cytometers would become possible with the development of microfluidic technology. Even so, it is still too early to confidently substitute existing flow cytometers with microfluidic devices in many cell studies or clinical analyses, as throughput, accuracy, system integration, and compatibility with multiple cell types still need to be improved.

In nanomedicine, on-chip reactionware provides a means to develop and customize medications with higher reproducibility and tight size distributions. While there have been tremendous developments using conventional bulk macroscale synthesis methods, translating such synthesis protocols to the microscale is not straightforward because the reaction conditions are intrinsically different. Therefore, we believe that in the coming few years there will be more exploratory work on the synthesis of different nanoparticles using microfluidic and millifluidic technologies. By employing the miniaturized chip synthesis scheme, the reaction can be controlled with very fine, millisecond-scale resolution.
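The millisecond-level control mentioned above follows directly from the chip geometry: in a continuous-flow reactor, the reaction time is set by the residence time t = V/Q, so small swept volumes at moderate flow rates map distance along the channel onto finely resolved reaction time. A minimal sketch with illustrative channel dimensions and flow rate (these example values are our own assumptions, not taken from any specific device in this chapter):

```python
# Residence time in a continuous-flow channel: t = V / Q.
# All dimensions and the flow rate below are illustrative assumptions.

w = 100e-6       # channel width, m
h = 50e-6        # channel height, m
length = 20e-3   # channel length, m (20 mm)

V = w * h * length              # swept volume = 1e-10 m^3 (100 nL)
Q = 10e-9 / 60.0                # 10 uL/min expressed in m^3/s

t_total = V / Q                            # time for fluid to traverse the channel
t_per_100um = t_total * (100e-6 / length)  # reaction-time step per 100 um of channel

print(f"total residence time: {t_total:.2f} s")                 # 0.60 s
print(f"time resolution per 100 um: {t_per_100um * 1e3:.1f} ms")  # 3.0 ms
```

Observing the channel at points 100 μm apart thus samples the reaction at roughly 3 ms intervals, which is the sense in which on-chip synthesis offers millisecond resolution.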
Such fine control offers hope of uncovering novel advantageous characteristics of nanoparticles, such as narrower absorption peaks and higher densities, which might otherwise be challenging to achieve with traditional synthesis methods. While many reported devices perform one part of a process on chip with supporting off-chip components, it is desirable to integrate all operations on-chip, as is the goal of LoC-type devices. New developments in complementary areas such as optoelectronics (for more compact and efficient instruments) and fabrication will enable such platforms to become fully independent, self-contained devices. Furthermore, we expect that as these platforms move toward commercialization and clinical translation, fabrication methods and materials will move into the research spotlight. Despite the ease of rapid prototyping microfluidic devices with PDMS and soft lithography, which predominate in academia, the translation of this material to mass fabrication remains to be seen. Other rapid prototyping methods, such as 3D printing, are attractive options; however, the variety of available materials and their structural integrity and biocompatibility are important factors that require further investigation. Additionally, despite a plethora of publications dedicated to miniaturized fluidic devices, there is a lack of standardization in fluidic topology and characterization, which could hamper substantial research work: without unifying guidelines on fluidic device operation, effort that might have gone into research shifts instead to troubleshooting the experimental setup. Finally, we envision that personalized medicine can be revolutionized by the integration of modular components playing different roles, as shown in Fig. 26.


Fig. 26 Our goal of individualized medicine achieved by the integration of various fluidic components, where the nanofluidic, microfluidic, and millifluidic regimes are abbreviated nF, μF, and mF, respectively

In this way, after a tissue or blood sample is obtained from a patient, diagnostics and treatment can be carried out within the fluidic system while taking the patient's unique genetic makeup into account, thereby elucidating an ideal course of action to treat any disease found. We see great potential for miniature fluidic chips in the abovementioned areas, and continued development to address current limitations is essential to realizing these goals.

References

1. Jürgens M, Mayerhöfer T, Popp J, Lee G, Matthews DL, Wilson BC (2013) Introduction to biophotonics. In: Handbook of biophotonics. Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim
2. Dougherty TJ, Gomer CJ, Henderson BW, Jori G, Kessel D, Korbelik M et al (1998) Photodynamic therapy. J Natl Cancer Inst 90:889–905
3. Whitesides GM (2006) The origins and the future of microfluidics. Nature 442:368–373


4. Song P, Hu R, Tng DJH, Yong K-T (2014) Moving towards individualized medicine with microfluidics technology. RSC Adv 4:11499–11511
5. Prakash S, Yeom J (2014) Introduction, Chapter 1. In: Nanofluidics and microfluidics. William Andrew Publishing, Waltham, pp 1–8
6. Kitson PJ, Rosnes MH, Sans V, Dragone V, Cronin L (2012) Configurable 3D-printed millifluidic and microfluidic ‘lab on a chip’ reactionware devices. Lab Chip 12:3267–3271
7. Prakash S, Karacor MB, Banerjee S (2009) Surface modification in microsystems and nanosystems. Surf Sci Rep 64:233–254
8. Nguyen NT, Wereley ST (2006) Fundamentals and applications of microfluidics, 2nd edn. Available: http://NTUSG.eblib.com.au/patron/FullRecord.aspx?p=286927
9. Hu G, Li D (2007) Multiscale phenomena in microfluidics and nanofluidics. Chem Eng Sci 62:3443–3454
10. Baldessari F, Santiago JG (2006) Electrophoresis in nanochannels: brief review and speculation. J Nanobiotechnol 4:1–6
11. Napoli M, Eijkel JCT, Pennathur S (2010) Nanofluidic technology for biomolecule applications: a critical review. Lab Chip 10:957–985
12. Biswas S, Miller JT, Li Y, Nandakumar K, Kumar CSSR (2012) Developing a millifluidic platform for the synthesis of ultrasmall nanoclusters: ultrasmall copper nanoclusters as a case study. Small 8:688–698
13. Navin CV, Krishna KS, Bovenkamp-Langlois GL, Miller JT, Chattopadhyay S, Shibata T et al (2015) Investigation of the synthesis and characterization of platinum-DMSA nanoparticles using millifluidic chip reactor. Chem Eng J 281:81–86
14. Damodaran SP, Eberhard S, Boitard L, Rodriguez JG, Wang Y, Bremond N et al (2015) A millifluidic study of cell-to-cell heterogeneity in growth-rate and cell-division capability in populations of isogenic cells of Chlamydomonas reinhardtii. PLoS One 10, e0118987
15. Cooper JA Jr, Li W-J, Bailey LO, Hudson SD, Lin-Gibson S, Anseth KS et al (2007) Encapsulated chondrocyte response in a pulsatile flow bioreactor. Acta Biomater 3:13–21
16. Wang WS, Vanapalli SA (2014) Millifluidics as a simple tool to optimize droplet networks: case study on drop traffic in a bifurcated loop. Biomicrofluidics 8:064111
17. Sackmann EK, Fulton AL, Beebe DJ (2014) The present and future role of microfluidics in biomedical research. Nature 507:181–189
18. Wang L, Flanagan LA, Jeon NL, Monuki E, Lee AP (2007) Dielectrophoresis switching with vertical sidewall electrodes for microfluidic flow cytometry. Lab Chip 7:1114–1120
19. Chung TD, Kim HC (2007) Recent advances in miniaturized microfluidic flow cytometry for clinical use. Electrophoresis 28:4511–4520
20. Dongeun H, Wei G, Yoko K, James BG, Shuichi T (2005) Microfluidics for flow cytometric analysis of cells and particles. Physiol Meas 26:R73
21. Prakash S, Pinti M, Bhushan B (2012) Theory, fabrication and applications of microfluidic and nanofluidic biosensors. Philos Trans R Soc Lond A Math Phys Eng Sci 370:2269–2303
22. Srinivasan V, Pamula V, Pollack M, Fair R (2003) A digital microfluidic biosensor for multianalyte detection. In: Micro electro mechanical systems, 2003. MEMS-03 Kyoto. IEEE the sixteenth annual international conference on, 2003, pp 327–330
23. Maeng J-H, Lee B-C, Ko Y-J, Cho W, Ahn Y, Cho N-G et al (2008) A novel microfluidic biosensor based on an electrical detection system for alpha-fetoprotein. Biosens Bioelectron 23:1319–1325
24. Adams AA, Okagbare PI, Feng J, Hupert ML, Patterson D, Göttert J et al (2008) Highly efficient circulating tumor cell isolation from whole blood and label-free enumeration using polymer-based microfluidics with an integrated conductivity sensor. J Am Chem Soc 130:8633–8641
25. Hung L-H, Choi KM, Tseng W-Y, Tan Y-C, Shea KJ, Lee AP (2006) Alternating droplet generation and controlled dynamic droplet fusion in microfluidic device for CdS nanoparticle synthesis. Lab Chip 6:174–178


26. Zhao C-X, He L, Qiao SZ, Middelberg APJ (2011) Nanoparticle synthesis in microreactors. Chem Eng Sci 66:1463–1479
27. Shestopalov I, Tice JD, Ismagilov RF (2004) Multi-step synthesis of nanoparticles performed on millisecond time scale in a microfluidic droplet-based system. Lab Chip 4:316–321
28. Holmes D, Pettigrew D, Reccius CH, Gwyer JD, van Berkel C, Holloway J et al (2009) Leukocyte analysis and differentiation using high speed microfluidic single cell impedance cytometry. Lab Chip 9:2881–2889
29. Wheeler AR, Throndset WR, Whelan RJ, Leach AM, Zare RN, Liao YH et al (2003) Microfluidic device for single-cell analysis. Anal Chem 75:3581–3586
30. Brouzes E, Medkova M, Savenelli N, Marran D, Twardowski M, Hutchison JB et al (2009) Droplet microfluidic technology for single-cell high-throughput screening. Proc Natl Acad Sci 106:14195–14200
31. Carlo DD, Lee LP (2006) Dynamic single-cell analysis for quantitative biology. Anal Chem 78:7918–7925
32. Wang Z, Han T, Jeon T-J, Park S, Kim SM (2013) Rapid detection and quantification of bacteria using an integrated micro/nanofluidic device. Sens Actuators B 178:683–688
33. Jacobson SC, Baker JD, Kysela DT, Brun YV (2015) Integrated microfluidic devices for studying aging and adhesion of individual bacteria. Biophys J 108:371a
34. Harms ZD, Mogensen KB, Nunes PS, Zhou K, Hildenbrand BW, Mitra I et al (2011) Nanofluidic devices with two pores in series for resistive-pulse sensing of single virus capsids. Anal Chem 83:9573–9578
35. Mitra A, Deutsch B, Ignatovich F, Dykes C, Novotny L (2010) Nano-optofluidic detection of single viruses and nanoparticles. ACS Nano 4:1305–1312
36. Hamblin MN, Xuan J, Maynes D, Tolley HD, Belnap DM, Woolley AT et al (2010) Selective trapping and concentration of nanoparticles and viruses in dual-height nanofluidic channels. Lab Chip 10:173–178
37. Balducci A, Mao P, Han J, Doyle PS (2006) Double-stranded DNA diffusion in slitlike nanochannels. Macromolecules 39:6273–6281
38. Reisner W, Morton KJ, Riehn R, Wang YM, Yu Z, Rosen M et al (2005) Statics and dynamics of single DNA molecules confined in nanochannels. Phys Rev Lett 94:196101
39. Walter R, Jonas NP, Robert HA (2012) DNA confinement in nanochannels: physics and biological applications. Rep Prog Phys 75:106601
40. Lee K-H, Su Y-D, Chen S-J, Tseng F-G, Lee G-B (2007) Microfluidic systems integrated with two-dimensional surface plasmon resonance phase imaging systems for microarray immunoassay. Biosens Bioelectron 23:466–472
41. Albanese A, Lam AK, Sykes EA, Rocheleau JV, Chan WC (2013) Tumour-on-a-chip provides an optical window into nanoparticle tissue transport. Nat Commun 4:2718
42. Bhatia SN, Ingber DE (2014) Microfluidic organs-on-chips. Nat Biotechnol 32:760–772
43. Park J, Lee BK, Jeong GS, Hyun JK, Lee CJ, Lee S-H (2015) Three-dimensional brain-on-a-chip with an interstitial level of flow and its application as an in vitro model of Alzheimer’s disease. Lab Chip 15:141–150
44. van den Berg A, Craighead HG, Yang P (2010) From microfluidic applications to nanofluidic phenomena. Chem Soc Rev 39:899–900
45. Schoch RB, Han J, Renaud P (2008) Transport phenomena in nanofluidics. Rev Mod Phys 80:839–883
46. Beebe DJ, Mensing GA, Walker GM (2002) Physics and applications of microfluidics in biology. Annu Rev Biomed Eng 4:261–286
47. Takayama S, McDonald JC, Ostuni E, Liang MN, Kenis PJA, Ismagilov RF et al (1999) Patterning cells and their environments using multiple laminar fluid flows in capillary networks. Proc Natl Acad Sci 96:5545–5548
48. Sparreboom W, van den Berg A, Eijkel JCT (2010) Transport in nanofluidic systems: a review of theory and applications. New J Phys 12:015004


49. Karnik R, Fan R, Yue M, Li D, Yang P, Majumdar A (2005) Electrostatic control of ions and molecules in nanofluidic transistors. Nano Lett 5:943–948
50. Abgrall P, Nguyen NT (2008) Nanofluidic devices and their applications. Anal Chem 80:2326–2341
51. Pennathur S, Santiago JG (2005) Electrokinetic transport in nanochannels. 1. Theory. Anal Chem 77:6772–6781
52. Pennathur S, Santiago JG (2005) Electrokinetic transport in nanochannels. 2. Experiments. Anal Chem 77:6782–6789
53. Mannion JT, Reccius CH, Cross JD, Craighead HG (2006) Conformational analysis of single DNA molecules undergoing entropically induced motion in nanochannels. Biophys J 90:4538–4545
54. Mawatari K, Kubota S, Xu Y, Priest C, Sedev R, Ralston J et al (2012) Femtoliter droplet handling in nanofluidic channels: a Laplace nanovalve. Anal Chem 84:10812–10816
55. Song P, Tng DJH, Hu R, Lin G, Meng E, Yong K-T (2013) An electrochemically actuated MEMS device for individualized drug delivery: an in vitro study. Adv Healthcare Mater 2:1170–1178
56. Bhagat AAS, Hou HW, Li LD, Lim CT, Han J (2011) Pinched flow coupled shear-modulated inertial microfluidics for high-throughput rare blood cell separation. Lab Chip 11:1870–1878
57. Morteza A, John TWY, Mehdi S (2011) System integration in microfluidics. In: Microfluidics and nanofluidics handbook. CRC Press, Boca Raton, pp 269–286
58. Nisar A, Afzulpurkar N, Mahaisavariya B, Tuantranont A (2008) MEMS-based micropumps in drug delivery and biomedical applications. Sens Actuators B 130:917–942
59. Nguyen N-T, Huang X, Chuan TK (2002) MEMS-micropumps: a review. J Fluids Eng 124:384–392
60. Leach J, Mushfique H, di Leonardo R, Padgett M, Cooper J (2006) An optically driven pump for microfluidics. Lab Chip 6:735–739
61. Tas NR, Berenschot JW, Lammerink TSJ, Elwenspoek M, van den Berg A (2002) Nanofluidic bubble pump using surface tension directed gas injection. Anal Chem 74:2224–2227
62. Ouellet E, Lausted C, Lin T, Yang CWT, Hood L, Lagally ET (2010) Parallel microfluidic surface plasmon resonance imaging arrays. Lab Chip 10:581–588
63. Ouyang H, Xia Z, Zhe J (2010) Voltage-controlled flow regulating in nanofluidic channels with charged polymer brushes. Microfluid Nanofluid 9:915–922
64. Lee C-Y, Chang C-L, Wang Y-N, Fu L-M (2011) Microfluidic mixing: a review. Int J Mol Sci 12:3263
65. Kim DS, Lee SH, Kwon TH, Ahn CH (2005) A serpentine laminating micromixer combining splitting/recombination and advection. Lab Chip 5:739–747
66. Bothe D, Stemich C, Warnecke H-J (2006) Fluid mixing in a T-shaped micro-mixer. Chem Eng Sci 61:2950–2958
67. Wong SH, Ward MCL, Wharton CW (2004) Micro T-mixer as a rapid mixing micromixer. Sens Actuators B 100:359–379
68. Che-Hsin L, Chien-Hsiung T, Lung-Ming F (2005) A rapid three-dimensional vortex micromixer utilizing self-rotation effects under low Reynolds number conditions. J Micromech Microeng 15:935
69. Long M, Sprague MA, Grimes AA, Rich BD, Khine M (2009) A simple three-dimensional vortex micromixer. Appl Phys Lett 94:133501
70. Ye Z, Li S, Zhou B, Hui YS, Shen R, Wen W (2014) Nanofluidic mixing via hybrid surface. Appl Phys Lett 105:163501
71. Yu S, Jeon T-J, Kim SM (2012) Active micromixer using electrokinetic effects in the micro/nanochannel junction. Chem Eng J 197:289–294
72. Kim D, Raj A, Zhu L, Masel RI, Shannon MA (2008) Non-equilibrium electrokinetic micro/nano fluidic mixer. Lab Chip 8:625–628
73. Liang-Hsuan L, Kee Suk R, Chang L (2002) A magnetic microstirrer and array for microfluidic mixing. J Microelectromech Syst 11:462–469


74. Lei KF (2015) Materials and fabrication techniques for nano- and microfluidic devices, Chapter 1. In: Microfluidics in detection science: lab-on-a-chip technologies. The Royal Society of Chemistry, Cambridge, pp 1–28
75. Ren K, Zhou J, Wu H (2013) Materials for microfluidic chip fabrication. Acc Chem Res 46:2396–2406
76. Iliescu C, Taylor H, Avram M, Miao J, Franssila S (2012) A practical guide for the fabrication of microfluidic devices using glass and silicon. Biomicrofluidics 6:016505
77. Becker H, Gärtner C (2008) Polymer microfabrication technologies for microfluidic systems. Anal Bioanal Chem 390:89–111
78. Xia Y, Whitesides GM (1998) Soft lithography. Annu Rev Mater Sci 28:153–184
79. Whitesides GM, Ostuni E, Takayama S, Jiang X, Ingber DE (2001) Soft lithography in biology and biochemistry. Annu Rev Biomed Eng 3:335–373
80. Tsuda S, Jaffery H, Doran D, Hezwani M, Robbins PJ, Yoshida M et al (2015) Customizable 3D printed ‘plug and play’ millifluidic devices for programmable fluidics. PLoS One 10, e0141640
81. Duan C, Wang W, Xie Q (2013) Review article: fabrication of nanofluidic devices. Biomicrofluidics 7:026501
82. Bocquet L, Tabeling P (2014) Physics and technological aspects of nanofluidics. Lab Chip 14:3143–3158
83. Guo LJ (2007) Nanoimprint lithography: methods and material requirements. Adv Mater 19:495–513
84. Li D (2008) Nanochannel fabrication. In: Li D (ed) Encyclopedia of microfluidics and nanofluidics. Springer US, Boston, pp 1409–1414
85. Chou SY, Krauss PR, Renstrom PJ (1996) Nanoimprint lithography. J Vac Sci Technol B 14:4129–4133
86. Kim Y, Kim KS, Kounovsky KL, Chang R, Jung GY, dePablo JJ et al (2011) Nanochannel confinement: DNA stretch approaching full contour length. Lab Chip 11:1721–1729
87. Marie R, Kristensen A (2012) Nanofluidic devices towards single DNA molecule sequence mapping. J Biophotonics 5:673–686
88. Cipriany BR, Zhao R, Murphy PJ, Levy SL, Tan CP, Craighead HG et al (2010) Single molecule epigenetic analysis in a nanofluidic channel. Anal Chem 82:2480–2487
89. Tegenfeldt JO, Prinz C, Cao H, Chou S, Reisner WW, Riehn R et al (2004) From the cover: the dynamics of genomic-length DNA molecules in 100-nm channels. Proc Natl Acad Sci U S A 101:10979–10983
90. Das SK, Austin MD, Akana MC, Deshpande P, Cao H, Xiao M (2010) Single molecule linear analysis of DNA in nano-channel labeled with sequence specific fluorescent probes. Nucleic Acids Res 38, e177
91. Lam ET, Hastie A, Lin C, Ehrlich D, Das SK, Austin MD et al (2012) Genome mapping on nanochannel arrays for structural variation analysis and sequence assembly. Nat Biotechnol 30:771–776
92. Friedrich SM, Zec HC, Wang TH (2016) Analysis of single nucleic acid molecules in micro- and nano-fluidics. Lab Chip 16:790–811
93. Miller JM (2013) Whole-genome mapping: a new paradigm in strain-typing technology. J Clin Microbiol 51:1066–1070
94. Gupta A, Place M, Goldstein S, Sarkar D, Zhou S, Potamousis K et al (2015) Single-molecule analysis reveals widespread structural variation in multiple myeloma. Proc Natl Acad Sci U S A 112:7689–7694
95. Reisner W, Larsen NB, Silahtaroglu A, Kristensen A, Tommerup N, Tegenfeldt JO et al (2010) Single-molecule denaturation mapping of DNA in nanofluidic channels. Proc Natl Acad Sci U S A 107:13294–13299
96. Nyberg LK, Persson F, Berg J, Bergstrom J, Fransson E, Olsson L et al (2012) A single-step competitive binding assay for mapping of single DNA molecules. Biochem Biophys Res Commun 417:404–408


97. Jo K, Dhingra DM, Odijk T, de Pablo JJ, Graham MD, Runnheim R et al (2007) A single-molecule barcoding system using nanoslits for DNA analysis. Proc Natl Acad Sci U S A 104:2673–2678
98. Riley MC, Kirkup BC, Johnson JD, Lesho EP, Ockenhouse CF (2011) Rapid whole genome optical mapping of Plasmodium falciparum. Malar J 10:1–8
99. Welch RL, Sladek R, Dewar K, Reisner WW (2012) Denaturation mapping of Saccharomyces cerevisiae. Lab Chip 12:3314–3321
100. Zhu F, Skommer J, Macdonald NP, Friedrich T, Kaslin J, Wlodkowic D (2015) Three-dimensional printed millifluidic devices for zebrafish embryo tests. Biomicrofluidics 9:046502
101. Li Y, Yang F, Chen Z, Shi L, Zhang B, Pan J et al (2014) Zebrafish on a chip: a novel platform for real-time monitoring of drug-induced developmental toxicity. PLoS One 9, e94792
102. Baraban L, Bertholle F, Salverda MLM, Bremond N, Panizza P, Baudry J et al (2011) Millifluidic droplet analyser for microbiology. Lab Chip 11:4057–4062
103. Boitard L, Cottinet D, Bremond N, Baudry J, Bibette J (2015) Growing microbes in millifluidic droplets. Eng Life Sci 15:318–326
104. Piyasena ME, Graves SW (2014) The intersection of flow cytometry with microfluidics and microfabrication. Lab Chip 14:1044–1059
105. Yao B, Luo G-A, Feng X, Wang W, Chen L-X, Wang Y-M (2004) A microfluidic device based on gravity and electric force driving for flow cytometry and fluorescence activated cell sorting. Lab Chip 4:603–607
106. Zhu HY, Mavandadi S, Coskun AF, Yaglidere O, Ozcan A (2011) Optofluidic fluorescent imaging cytometry on a cell phone. Anal Chem 83:6641–6647
107. Mao X, Lin S-CS, Dong C, Huang TJ (2009) Single-layer planar on-chip flow cytometer using microfluidic drifting based three-dimensional (3D) hydrodynamic focusing. Lab Chip 9:1583–1589
108. Cheng XH, Irimia D, Dixon M, Sekine K, Demirci U, Zamir L et al (2007) A microfluidic device for practical label-free CD4+ T cell counting of HIV-infected subjects. Lab Chip 7:170–178
109. Rodriguez WR, Christodoulides N, Floriano PN, Graham S, Mohanty S, Dixon M et al (2005) A microchip CD4 counting method for HIV monitoring in resource-poor settings. PLoS Med 2:663–672
110. Moon S, Keles HO, Ozcan A, Khademhosseini A, Haeggstrom E, Kuritzkes D et al (2009) Integrating microfluidics and lensless imaging for point-of-care testing. Biosens Bioelectron 24:3208–3214
111. Patra B, Peng C-C, Liao W-H, Lee C-H, Tung Y-C (2016) Drug testing and flow cytometry analysis on a large number of uniform sized tumor spheroids using a microfluidic device. Sci Rep 6:21061
112. Simonnet C, Groisman A (2006) High-throughput and high-resolution flow cytometry in molded microfluidic devices. Anal Chem 78:5653–5663
113. Strohm EM, Gnyawali V, Van De Vondervoort M, Daghighi Y, Tsai SS, Kolios MC (2016) Classification of biological cells using a sound wave based flow cytometer. In: SPIE BiOS 2016, vol 9708, pp 97081A–97081A-6
114. Mao X, Nawaz AA, Lin S-CS, Lapsley MI, Zhao Y, McCoy JP et al (2012) An integrated, multiparametric flow cytometry chip using “microfluidic drifting” based three-dimensional hydrodynamic focusing. Biomicrofluidics 6:024113–024113-9
115. Sarioglu AF, Aceto N, Kojic N, Donaldson MC, Zeinali M, Hamza B et al (2015) A microfluidic device for label-free, physical capture of circulating tumor cell clusters. Nat Methods 12:685–691
116. Gleghorn JP, Pratt ED, Denning D, Liu H, Bander NH, Tagawa ST et al (2010) Capture of circulating tumor cells from whole blood of prostate cancer patients using geometrically enhanced differential immunocapture (GEDI) and a prostate-specific antibody. Lab Chip 10:27–29


117. Tan S, Yobas L, Lee G, Ong C, Lim C (2009) Microdevice for trapping circulating tumor cells for cancer diagnostics. In: 13th international conference on biomedical engineering, 2009, pp 774–777
118. Tan SJ, Lakshmi RL, Chen P, Lim W-T, Yobas L, Lim CT (2010) Versatile label free biochip for the detection of circulating tumor cells from peripheral blood in cancer patients. Biosens Bioelectron 26:1701–1705
119. Hou HW, Warkiani ME, Khoo BL, Li ZR, Soo RA, Tan DS-W et al (2013) Isolation and retrieval of circulating tumor cells using centrifugal forces. Sci Rep 3:1259
120. Warkiani ME, Guan G, Luan KB, Lee WC, Bhagat AAS, Chaudhuri PK et al (2014) Slanted spiral microfluidics for the ultra-fast, label-free isolation of circulating tumor cells. Lab Chip 14:128–137
121. Turetsky A, Lee K, Song J, Giedt RJ, Kim E, Kovach AE et al (2015) On chip analysis of CNS lymphoma in cerebrospinal fluid. Theranostics 5:796
122. Guo J, Ma X, Menon NV, Li CM, Zhao Y, Kang Y (2015) Dual fluorescence-activated study of tumor cell apoptosis by an optofluidic system. IEEE J Sel Top Quantum Electron 21:392–398
123. Bhagat AAS, Kuntaegowdanahalli SS, Kaval N, Seliskar CJ, Papautsky I (2010) Inertial microfluidics for sheath-less high-throughput flow cytometry. Biomed Microdevices 12:187–195
124. Di Carlo D, Irimia D, Tompkins RG, Toner M (2007) Continuous inertial focusing, ordering, and separation of particles in microchannels. Proc Natl Acad Sci 104:18892–18897
125. Hur SC, Tse HTK, Di Carlo D (2010) Sheathless inertial cell ordering for extreme throughput flow cytometry. Lab Chip 10:274–280
126. Lenshof A, Magnusson C, Laurell T (2012) Acoustofluidics 8: applications of acoustophoresis in continuous flow microsystems. Lab Chip 12:1210–1223
127. Ding X, Li P, Lin S-CS, Stratton ZS, Nama N, Guo F et al (2013) Surface acoustic wave microfluidics. Lab Chip 13:3626–3649
128. Li M, Li S, Cao W, Li W, Wen W, Alici G (2012) Continuous particle focusing in a waved microchannel using negative dc dielectrophoresis. J Micromech Microeng 22:095001
129. Golden JP, Kim JS, Erickson JS, Hilliard LR, Howell PB, Anderson GP et al (2009) Multi-wavelength microflow cytometer using groove-generated sheath flow. Lab Chip 9:1942–1950
130. Ozcan A, Demirci U (2008) Ultra wide-field lens-free monitoring of cells on-chip. Lab Chip 8:98–106
131. Cui X, Lee LM, Heng X, Zhong W, Sternberg PW, Psaltis D et al (2008) Lensless high-resolution on-chip optofluidic microscopes for Caenorhabditis elegans and cell imaging. Proc Natl Acad Sci 105:10670–10675
132. Tseng D, Mudanyali O, Oztoprak C, Isikman SO, Sencan I, Yaglidere O et al (2010) Lensfree microscopy on a cellphone. Lab Chip 10:1787–1792
133. Lin C-C, Wang J-H, Wu H-W, Lee G-B (2010) Microfluidic immunoassays. J Assoc Lab Autom 15:253–274
134. Zeng S, Baillargeat D, Ho H-P, Yong K-T (2014) Nanomaterials enhanced surface plasmon resonance for biological and chemical sensing applications. Chem Soc Rev 43:3426–3452
135. Hoa XD, Kirk AG, Tabrizian M (2007) Towards integrated and sensitive surface plasmon resonance biosensors: a review of recent progress. Biosens Bioelectron 23:151–160
136. Shankaran DR, Gobi KV, Miura N (2007) Recent advancements in surface plasmon resonance immunosensors for detection of small molecules of biomedical, food and environmental interest. Sens Actuators B 121:158–177
137. Kim J (2012) Joining plasmonics with microfluidics: from convenience to inevitability. Lab Chip 12:3611–3623
138. Luo Y, Yu F, Zare RN (2008) Microfluidic device for immunoassays based on surface plasmon resonance imaging. Lab Chip 8:694–700
139. Sepúlveda B, Angelomé PC, Lechuga LM, Liz-Marzán LM (2009) LSPR-based nanobiosensors. Nano Today 4:244–251


140. Huang C, Bonroy K, Reekmans G, Laureyn W, Verhaegen K, Vlaminck I et al (2009) Localized surface plasmon resonance biosensor integrated with microfluidic chip. Biomed Microdevices 11:893–901
141. Aćimović SS, Ortega MA, Sanz V, Berthelot J, Garcia-Cordero JL, Renger J et al (2014) LSPR chip for parallel, rapid, and sensitive detection of cancer markers in serum. Nano Lett 14:2636–2641
142. Huang C, Ye J, Wang S, Stakenborg T, Lagae L (2012) Gold nanoring as a sensitive plasmonic biosensor for on-chip DNA detection. Appl Phys Lett 100:173114
143. Lee S-W, Lee K-S, Ahn J, Lee J-J, Kim M-G, Shin Y-B (2011) Highly sensitive biosensing using arrays of plasmonic Au nanodisks realized by nanoimprint lithography. ACS Nano 5:897–904
144. Chen P, Chung MT, McHugh W, Nidetz R, Li Y, Fu J et al (2015) Multiplex serum cytokine immunoassay using nanoplasmonic biosensor microarrays. ACS Nano 9:4173–4181
145. De Leebeeck A, Kumar LKS, de Lange V, Sinton D, Gordon R, Brolo AG (2007) On-chip surface-based detection with nanohole arrays. Anal Chem 79:4094–4100
146. Ferreira J, Santos MJL, Rahman MM, Brolo AG, Gordon R, Sinton D et al (2009) Attomolar protein detection using in-hole surface plasmon resonance. J Am Chem Soc 131:436–437
147. Eftekhari F, Escobedo C, Ferreira J, Duan X, Girotto EM, Brolo AG et al (2009) Nanoholes as nanochannels: flow-through plasmonic sensing. Anal Chem 81:4308–4311
148. Brolo AG, Gordon R, Leathem B, Kavanagh KL (2004) Surface plasmon sensor based on the enhanced light transmission through arrays of nanoholes in gold films. Langmuir 20:4813–4815
149. Martín-Moreno L, García-Vidal FJ (2004) Optical transmission through circular hole arrays in optically thick metal films. Opt Express 12:3619–3628
150. Yanik AA, Huang M, Artar A, Chang T-Y, Altug H (2010) Integrated nanoplasmonic-nanofluidic biosensors with targeted delivery of analytes. Appl Phys Lett 96:021101
151. Suzuki A, Kondoh J, Matsui Y, Shiokawa S, Suzuki K (2005) Development of novel optical waveguide surface plasmon resonance (SPR) sensor with dual light emitting diodes. Sens Actuators B 106:383–387
152. Wang Y-C, Han J (2008) Pre-binding dynamic range and sensitivity enhancement for immunosensors using nanofluidic preconcentrator. Lab Chip 8:392–394
153. Sepúlveda B, del Río JS, Moreno M, Blanco FJ, Mayora K, Domínguez C et al (2006) Optical biosensor microsystems based on the integration of highly sensitive Mach–Zehnder interferometer devices. J Opt A Pure Appl Opt 8:S561
154. Song H, Chen DL, Ismagilov RF (2006) Reactions in droplets in microfluidic channels. Angew Chem Int Ed 45:7336–7356
155. Schladt TD, Schneider K, Schild H, Tremel W (2011) Synthesis and bio-functionalization of magnetic nanoparticles for medical diagnosis and treatment. Dalton Trans 40:6315–6343
156. LaMer VK, Dinegar RH (1950) Theory, production and mechanism of formation of monodispersed hydrosols. J Am Chem Soc 72:4847–4854
157. Lohse SE, Eller JR, Sivapalan ST, Plews MR, Murphy CJ (2013) A simple millifluidic benchtop reactor system for the high-throughput synthesis and functionalization of gold nanoparticles with different sizes and shapes. ACS Nano 7:4135–4150
158. Sai Krishna K, Navin CV, Biswas S, Singh V, Ham K, Bovenkamp GL et al (2013) Millifluidics for time-resolved mapping of the growth of gold nanostructures. J Am Chem Soc 135:5450–5456
159. Sharma P, Brown S, Walter G, Santra S, Moudgil B (2006) Nanoparticles for bioimaging. Adv Colloid Interface Sci 123–126:471–485
160. Voura EB, Jaiswal JK, Mattoussi H, Simon SM (2004) Tracking metastatic tumor cell extravasation with quantum dot nanocrystals and fluorescence emission-scanning microscopy. Nat Med 10:993–998
161. Han M, Gao X, Su JZ, Nie S (2001) Quantum-dot-tagged microbeads for multiplexed optical coding of biomolecules. Nat Biotechnol 19:631–635



Index

A
Abbe resolution, 549
Absorption, 223
Absorption, Au nanocrystals, 811
  light absorption, 822, 824, 830
  NIR absorption, 811, 813
  optical absorption, 820
  scattering-to-absorption intensity ratio, 814
  two-photon absorption, 822
Absorption spectra, 75, 79
Acoustic resolution photoacoustic microscopy (AR-PAM), 249
Angular spectrum, 277
Anti-angiogenesis, 677
Antibody detection, 140, 141
Anti-tumor immunity, 673
Anti-tumor vaccine, 675
Aperture synthesis process, 282
Attenuated total reflection (ATR), 126
Au nanocrystals. See Gold nanocrystals
Autofluorescence, 228

B
Back focal plane (BFP), 530
Band gap, 849
Benchtop (BT) nanoparticle synthesis, 920
Bethe's theory, 873
Bioconjugation, 845
Bioimaging, 30, 818
  nonlinear optical properties for, 822
  photothermal and photoacoustic imaging, 823–824
  scattering-based imaging, 820–822
  and sensing with SERS, 32–52
  X-ray computer tomography contrast agent, 818–820

Biolabeling, 818
  dye molecule labeling, 835
  nonlinear optical properties for, 822
  scattering-based labeling, 820–822
Bioluminescence, 226
Biomolecules, 67
Biophotonics, 894
  laser and tissue interaction, 720–721
  optical modulation (see Optical modulation)
Biosensing, 39–52, 62, 154, 440–445, 769, 771, 818
  colorimetric, 827–828
  photoluminescence quenching for, 828–829
  plasmonic, 825–830
  refractive index-based sensing, 825–827
  surface-enhanced Raman scattering for, 829–830
Blind spectral unmixing, 243
Blood flow, 182–184, 200–202

C
Cadmium-based quantum dots, 843
Calcium, 721–722
Cancer diagnosis, 31, 52–54
Cancer metastasis, IVFC in, 20
  intravital confocal microscopy (ICM), 24–25
  real-time detection of, 21–24
  tumor cell circulation, 20–21
Cancer therapy
  antivascular therapy, 198
  diffuse correlation spectroscopy, 189–190
  diffuse fluorescence spectroscopy, 191–192
  diffuse optical imaging, 192–194
  diffuse optical tomography instrument, 196–197

© Springer Science+Business Media Dordrecht 2017 A.H.-P. Ho et al. (eds.), Handbook of Photonics for Biomedical Engineering, DOI 10.1007/978-94-007-5052-4


Cancer therapy (cont.)
  diffuse reflectance spectroscopy, 187–189
  head and neck cancer, 202–203
  multimodal optical instrument, 194–195
  photodynamic therapy, 200
  photon diffusion model, 184–187
  skin cancer, 200–202
Cathodoluminescence, 409, 416, 420–424
Cell death, 671
Cell signalling, 721, 726
Chitosan, 923
Circulating tumor cells, 912
Coherent anti-Stokes Raman scattering (CARS), 602, 604
  and FWM, 468–469
  two-photon fluorescence and harmonic generation microscopy, 489–498
Color-space-time (COST) coding technology, 111
Combination therapy, 676
Confocal fluorescent laser scanning microscope (CLSM), 410, 411
Confocal microscopy, 601
Continuous fluid flow scheme, 920
Cylindrical vector beam, 433, 440–445
Cytotoxicity, 862

D
Denaturation mapping, 906
Dielectric waveguide microscopy, 624–625
Differential phase measurement scheme, 133
Diffuse correlation spectroscopy, 183, 189–190
Diffuse fluorescence spectroscopy, 191–192
Diffuse optical imaging, 192–194
Diffuse optical spectroscopy, 210
Direct electron-beam excitation assisted optical microscope (D-EXA microscope), 416–420
Displacement along z-direction, calibration of, 759–760
Doped-dots, 846
Doppler flowmetry, 258

E
Electric double layer (EDL), 900
Electron beam lithography (EBL), 875–876
Electron-beam excitation assisted (EXA) optical microscopy
  development of, 413–416
  D-EXA microscope, 416–420

Index ELISA. See Enzyme-linked immunosorbent assay (ELISA) Ellipsometry, 516 Elongational flow, 905 Endogenous fluorophore, 483, 489 Enzyme-linked immunosorbent assay (ELISA), 101, 916 Evanescent field, 77 Evanescent waves, 532–536 Ex vivo flow cytometer, 4, 5 Exogenous fluorophores, 227 Extraordinary optical transmission (EOT), 873 biological sensing, 884–887 EBL, 875–876 EOT-based axial imaging (EOT-AIM) method, 887 EOT-based high-sensitivity spectral sensing, 887 EOT-based super-resolved fluorescence imaging, 887 FIB lithography, 874–875 gas sensing, 879–884 nano-imprint lithography, 876–877 photolithography, 877–879 F Fabrication EBL, 875–876 FIB lithography, 874–875 nano-imprint lithography, 876–877 photolithography, 877–879 Fano resonance, 510 Femtosecond-pulsed laser, 718 brain hemodynamics and vasculature in vivo, 722–725 intracellular Ca2+ levels, 721–722 irreversible effects, 726–727 muscle contraction, 725–726 Fiber optical tweezers, 684 Fiber optics, 688 Field enhancement effect, 565 Finite-difference time-domain (FDTD), 420, 880, 883, 887 Flow cytometry (FCM), 912–914. See also In vivo flow cytometry (IVFC) Fluorescence, 227, 547 Fluorescence anisotropy imaging (FAIM), 381 Fluorescence based IVFC, 6–7 basic principle, 8–12 cell detection mechanism, 9 data processing and analysis, 12–14 two-color two-channel IVFC, 10

Index Fluorescence detection, 95 Fluorescence enhancement, 368 Fluorescence microscopy, 365, 368 Fluorescence molecular tomography (FMT), 233 Fluorescence spectroscopy, 381 Focused ion beam (FIB) lithography, 874–875 Force constant, 733 Förster resonance energy transfer (FRET), 360, 379 Forward model, 231 Frequency-domain analysis, 309 G Gold nanocrystals advantages, 810, 811 biolabeling and bioimaging, 818–824 drug/gene delivery and release, 833–836 growth from gold sources, 811–813 photothermal conversion properties, 830–832 photothermal therapy, 832–833 plasmonic biosensing, 825–830 shape and size tuning, 814–815 shell encapsulation, 817–818 surface functionalization, 816–817 Gold nanoparticles, 136 Goos-Hänchen (GH) effect, 618 Green’s function, 234 H Harmonic generation, 602 Hemoglobin, 487–489 Hemoglobin oxygen saturation (SO2) 257 High resolution optical microscope cathodoluminescence, analysis of, 420–424 CLSM, 410 EXA optical microscopy (see Electron-beam excitation assisted (EXA) optical microscopy) LSMs, 409 PALM, 412 Hot colloidal synthesis, 844 Hybrid mode, 166 Hydrodynamic focusing, 113 Hyperlens, 537 I Image based IVFC, 8 Image reconstruction, 237 Indium tin oxide (ITO), 527

943 Interference reflection microscopy (IRM), 615 Interferometric microscope, 275 Interferometry approach, 132 Intracellular dynamics, 337 Intravital confocal microscopy (ICM), 24–25 In vivo flow cytometry (IVFC), 4–6 fluorescence based IVFC, 6–7 fluorescence based IVFC, 8–14 image based IVFC, 8 photoacoustic based IVFC, 7 photoacoustic flow cytometry (PAFC), 15–20 photothermal based IVFC, 7 Raman based IVFC, 7 J Jablonski diagram, 548 K Kinesin-microtubule system, 761–762 Kretschmann configuration, 125, 126, 514 L Label-free cell-based assay, 342 Label-free imaging, 464, 468, 498 hemoglobin fluorescence label-free microvascular imaging, 487–489 motivation of, 474–475 Label free SERS Detection, 33–39 Lab-on-a-chip, 695 LaMer plot, 921 Laser driven aerosol reactor, 846, 847 Laser scanning microscopes (LSM), 409, 411 Lensless microendoscopy by single fiber (LMSF), 290 Light scattering, 92 effect, 272 Light-tissue interaction low-density plasma, 720 reactive oxygen species, 721 Linewidth broadening, 163, 171 Localized surface plasmon, 449 Localized surface plasmon resonance (LSPR) biosensors, 916 LoC devices, 920 Low-density plasma, 719, 720

944 M Mean-squared displacement (MSD), 338 Metabolic Rate of Oxygen Consumption (MRO2), 259 Metal-enhanced fluorescence (MEF), 561 Metalens, 537 Michelson interferometer, 134 Microalgae, 911 Microcavity, 149 Microfabrication, 772 Microfluidic(s) morphology dependent resonance sensing, 787–800 SPR based optical manipulation (see Surface plasmon resonance (SPR)) Microfluidic image cytometry (MIC), 98 Microlaser, 170 Microrheology living cells, 743 polymer solutions, 739 Millifluidic droplet analyzer (MDA), 910 Miniaturized fluidic devices applications, 896–898 bioanalysis, 907–912 definition, 895 fabrication, 904–905 flow cytometry, 912–914 fluidic manipulation, 898–903 implementation and fabrication tools, 896 nanoparticle synthesis, 919–928 nucleic acid optical mapping, 905–907 origin, 895 photothermal therapy, 926–928 plasmonic biosensors, 914–919 surface-area-to-volume ratio, 895 uses, 894 Minimal inhibitory concentration (MIC) of cefotaxime, 910 Modal sensing, 597 Mode shift, 156, 157 Mode splitting, 156 Molecular sensors, 338 Monitoring and predicting response, 207 Monte Carlo method, 226 Monte Carlo simulation, 420 Multi-jet modelling (MJM), 909 Multi-modal imaging of C.elegans, 494–495 challenges of, 475–476 hamster oral tissue in vivo, 490 motivation of, 474–475 3T3 cells, lipid metabolism of, 495–498

Index Multiphoton microscopy, 601 Multiplex sensing and imaging, 45, 50–52, 55, 56 Multispectral optoacoustic tomography (MSOT), 242, 244 N Nanofabrication, 904 Nanofluidic mixers, 903 Nano-imprint lithography, 876–877 Nanoparticle detection, 161 Nanoparticle trapping, 168 Nanoparticles, 667 Nanoscale localization sampling (NLS), 573 Nanostructures metallic, 262 organic, 262 Near-field optical scanning microscopes (NSOMs), 408, 425 Nonlinear optical microscopy advantages of, 471–474 CARS see Coherent anti-stokes Raman scattering (CARS) cell and tissue imaging, 483–487 challenges of, 475–476 excitation efficiency, maximization of, 477–479 excitation source, 476–477 hemoglobin fluorescence label-free microvascular imaging, 487–489 motivation, 474–475 multi-channel photomultiplier tube, 480 photothermal and photoaccoustic phenomenon, 469 pump-probe transient absorption, 471 SFG and SHG, 467–468 THG, 468 time-correlated single photon counting, 480, 481 TPEF, 464–467 Non-local complex modulus, 739 Non-thermal forces, 746 Nucleic acid optical mapping, 905–907 O One-pot synthesis approach, 846 Open eigenchannel, 295 Optical detection, 914 Optical heterodyne, 127

Optical modulation
  brain hemodynamics and vasculature in vivo, 722–725
  intracellular Ca2+ levels, 721–722
  irreversible effects, 726–727
  muscle contraction, 725–726
Optical resolution photoacoustic microscopy (OR-PAM), 251
Optical transfer functions (OTFs), 411, 626
Optical trapping, 608, 709, 756, 771, 772, 776, 783, 802
  with 3D tracking, 762–765
Optical tweezers, 732. See also Optical trapping
Optical waveguides, 286
Optoacoustic tomography, 235
Optofluidics, 768
Organic dyes, 261
Oxygenation, 182
Oxygen metabolism, 212
Oxygen saturation, 320

P
Paclitaxel, 923
Parallax, 756
Penetration depth, 505, 506
Photoacoustic microscopy, optical resolution, 251
Photoacoustic based IVFC, 7
Photoacoustic computed tomography (PACT), 252
Photoacoustic flow cytometry (PAFC)
  basic principle, 15
  biomedical applications of PAFC, 18–20
  in vivo PAFC setup, 15–18
Photoacoustic microscopy, 313
  acoustic resolution, 249
Photoacoustics (PA), 303
Photo-activated light microscopy (PALM), 872
Photo-activated localization microscope (PALM), 412
Photodynamic therapy (PDT), 658
Photoelastic modulator, 130
Photolithography, 877–879
Photoluminescence, 846
Photonic bandgap (PBG), 63
Photonic crystal fibers (PCFs), 61
Photonics, 769
Photoresist, 877
Photosensitizer, 658
Photothermal based IVFC, 7

Photothermal conversion, 822
  based therapy, 830–833
  controlled drug/gene delivery and release, 833–836
  properties of gold nanocrystals, 830–832
Photothermal therapy, 811, 824, 832–833, 926–928
Plasmon coupling, 826, 828, 830
Plasmon-enhanced microscopy. See Metal-enhanced fluorescence (MEF)
Plasmon resonance, 811, 826, 830
Plasmonic tweezers, 445–449
Plasmonic(s), 368, 772
  biosensors, 914–919
  enhancement, 165
Plasmonic microscopy
  basics of, 430–436
  SPP standing wave illumination fluorescence microscope, 437–439
  supersensitivity and super dynamic range biosensors, 440–445
  surface plasmon-assisted gap-mode Raman, 449–457
  trapping metallic particles, 445–449
Plasmonics-based spatially activated light microscopy (PSALM), 576
Point spread function (PSF), 626
Polarimetry, 129
Poynting vector, 512

Q
Quantum dots, 922

R
Radiative transfer equation (RTE), 225
Raman based IVFC, 7
Raman reporters, 33, 40, 41, 43, 45, 48, 56
Reactive oxygen species (ROS), 660, 670, 721
Red blood cell aggregation, 316
Red blood cell morphology and geometry, 315
Reflection interference contrast microscopy (RICM), 623
Refractive index, 504
Refractive index-based sensing, 825–827
Resolution, 78
Resonant waveguides (RWGs)
  dielectric waveguide microscopy, 624–625
  interference reflection microscopy, 623
  SPR (see Surface plasmon resonance (SPR))

Resonant waveguides (RWGs) (cont.)
  total internal reflection, 618–621
  waveguide based sensors, 615–616
  waveguide microscopy, structured illumination, 626–629
Response function, 736
Restriction mapping, 906
Reynolds number, 899

S
Scanning electron microscope (SEM), 414
Scanning SPR microscope (SSPRM), 637, 639, 644, 646
Scattering
  labeling and imaging, 820–822
  surface-enhanced Raman scattering, 829–830
Second harmonic generation (SHG), 467–468, 473, 489–498
Selective plane illumination microscopy (SPIM), 605
Semiconductor nanoparticles, 842
Sensitivity, 79
Sequence-specific tagging, 906
Serpentine mixer, 903
SERS. See Surface enhanced Raman scattering (SERS)
Shack-Hartmann wavefront sensor, 595
Shot noise, 332
Soft lithography, 904
Spatial light modulator (SLM), 520
Spatial phase stability, 334
Spatially-modulated emission technique, 102
Spectral-domain optical coherence phase microscopy (SD-OCPM), 327
SP-enhanced random activation imaging, 569
Stern layer, 900
Stimulated emission depletion (STED) microscopy, 605, 872
Subwavelength trapping, 707
Sum frequency generation (SFG), 467–468
Supercontinuum, 463, 476, 482, 483, 491
Superfocusing, 704
Super-resolution, 437–439
  microscopy, 550
Support vector machine (SVM), 108
Surface enhanced Raman scattering (SERS), 30–32, 67, 70, 449–457
  detection, label free, 33–39
  labels, detection with, 39–52
  SERS nanotags, 45–52
  clinical applications, 52–54

Surface enhanced Raman spectroscopy (SERS), 93
Surface functionalization, of Au nanocrystals, 816–817
Surface plasmon microscopy (SPM), 553
Surface plasmon polaritons
  description, 430
  interference pattern, 432
  standing wave illumination fluorescence microscope, 437–439
Surface plasmon resonance (SPR), 94, 554, 616–617, 775–777
  for biological applications, 637–646
  dynamic fluid environment, 781–783
  imaging, 437–439
  patterned metallic surface, 783–787
  principles of, 629–633
  sensitivity of, 633–637
  static fluid environment, 777–780
Surface plasmon resonance waveguide (SPRWG) microscopy, 642–646
Surface plasmons (SPs), 66, 72, 703, 872–873
  confocal SP microscopy, 524–530
  and evanescent wave properties, 532–536
  exotic methods, 537–538
  heterodyne and confocal detection, 522–523
  heterodyne microscopy for, 523–524
  non-confocal, 515–522
  prism based excitation, 514–515
  transmission coefficients, 512
  widefield confocal SP imaging, 530–532
Surface-enhanced infrared absorption (SEIRA), 882

T
Temporal phase stability, 332
Theranostic systems, 264
Third harmonic generation (THG), 468
3-D single particle tracking optical system
  construction, 757–758
  custom-ordered optical device, 758
  design, 756–757
  implementation, 758
  kinesin-microtubule system, 761–762
  mini-stage equipped with piezo-actuator, 759
  proteins inside a single cell, 762
  schematic illustration, 757
Three-dimensional stem-cell viability assessment, 343
Time- and spectral-resolved detection, 463

Time-averaged displacement (TAD), 338
Time-correlated single photon counting (TCSPC), 371, 382
Time-resolved fluorescence anisotropy imaging (TR-FAIM), 379, 380
Total hemoglobin concentration (HbT), 257
Total internal reflection (TIR), 62, 126, 618–621
Total internal reflection fluorescence (TIRF), 381
Total internal reflection fluorescence microscopy (TIRFM), 559, 872
Total internal reflection imaging microscopy (TIRM), 533, 535, 623
Transmission eigenchannels, 297
Tumor targeting, 665
Turbid lens imaging (TLI) method, 273
Two-particle microrheology, 742

Two-photon excitation fluorescence (TPEF), 464–467, 474, 488–498
Two-photon photoluminescence, 822

W
Wavefront modulators, 592
Wavefront sensing technique, 272
Waveguide evanescent field scattering (WEFS) microscopy, 625
Whispering gallery mode (WGM), 147
Wide-field endoscopy, 290

Z
Zernike polynomials, 588
Zn chalcogenide, 846
Zonal sensing, 598
Zweifach-Fung effect, 101