Handbook of Photonics for Biomedical Engineering 9789400761742

Development of Extraordinary Optical Transmission-Based Techniques for Biomedical Applications

Seunghun Lee, Hyerin Song, Seonhee Hwang, Jong-ryul Choi, and Kyujung Kim

Contents

Introduction
Theoretical Background
Fabrication
  Focused Ion-Beam Lithography
  Electron-Beam Lithography
  Nanoimprinting
  Photolithography
Applications
  EOT Gas Sensing
  Biological Sensing
  EOT-Based Imaging Techniques
Summary
References

Abstract

Highly sensitive detection techniques have drawn tremendous interest because they allow the precise tracking of molecular interactions and the observation of dynamics on a nanometric scale. Intracellular and extracellular processes can be measured at the molecular level; thus, highly sensitive techniques advance our understanding of biomolecular events in cellular and subcellular conditions and have been applied to many areas, such as cellular and molecular analysis and ex vivo and in vivo observations. In this chapter, we review near-field biosensors that rely on extraordinary optical transmission (EOT) and several recently developed application techniques based on the localization of surface plasmons. We also describe fabrication methods for producing the required nanostructures: first, focused ion-beam lithography, which employs high-energy ions to create high-precision nanopatterns; second, electron-beam lithography, which uses a tightly focused electron beam to draw submicron patterns on metallic surfaces; third, nanoimprint lithography, which is suited to mass nanostructure fabrication; and finally, photolithography, which enables cost-effective fabrication. At the end of the chapter, we introduce EOT-based applications for sensitivity enhancement and techniques that assist high-resolution imaging.

S. Lee • H. Song • K. Kim (*) Department of Cogno-Mechatronics Engineering, Pusan National University, Busan, South Korea; e-mail: [email protected]; [email protected]; [email protected]; [email protected]
S. Hwang Department of Advanced Circuit Interconnection, Pusan National University, Busan, South Korea; e-mail: [email protected]
J.-r. Choi Medical Device Development Center, Daegu-Gyeongbuk Medical Innovation Foundation (DGMIF), Daegu, South Korea; e-mail: [email protected]
# Springer Science+Business Media Dordrecht 2015. A. H.-P. Ho et al. (eds.), Handbook of Photonics for Biomedical Engineering, DOI 10.1007/978-94-007-6174-2_1-1

Keywords

Extraordinary Optical Transmission (EOT) • Surface Plasmon Polaritons (SPPs) • Nanoaperture • Nanoplasmonics • Sensors • Imaging Techniques

Introduction

Using newly developed measurement and imaging techniques, biomolecular activation and cellular dynamics can be analyzed, and the detection of small molecular events in subcellular environments is now possible. A significant restriction in earlier optical imaging measurements is the lateral and axial spatial resolution imposed by Abbe's diffraction limit. To detect or observe molecular dynamics on nanometric scales, the diffraction limit must be overcome. Therefore, new optical techniques that can break the diffraction limit and enhance spatial resolution are desired. Recently, several microscopic methods that can replace current techniques have been proposed. Stimulated emission depletion (STED) microscopy [1], which is based on the generation of narrow excitation light spots, is one such strategy. Photoactivated localization microscopy (PALM) [2], obtained by computation over the stochastic photoactivation of fluorescent molecules, and structured illumination microscopy (SIM), which reconstructs high-resolution images from partial images under spatially encoded sinusoidal illumination [3], can also provide good solutions. Additionally, tomographic systems that reinforce sectioning and imaging capabilities have been introduced: total internal reflection fluorescence microscopy (TIRFM) [4] and selective plane illumination microscopy (SPIM) [5]. These techniques have attracted attention, but they retain barriers to commercialization, such as limited detection, limited imaging speed, experimental complexity, and high cost. Meanwhile, surface plasmon (SP) phenomena can be a


good alternative for imaging enhancement by virtue of nanotechnology [6]. Applying nanotechnology to the surface has many merits, notably resolution improvement through field relocalization at nanostructures. Additionally, localized fields can be designed to be smaller than the diffraction limit. When the area of a localized field is nearly identical to that of a single fluorescent molecule, the emitted light can be attributed to a single molecule [5]. Extraordinary optical transmission is one representative SP-based approach [6]. In this chapter, we summarize the lithographic techniques used to fabricate optimized nanoapertures for extraordinary optical transmission (EOT) and introduce notable applications of EOT samples. E-beam lithography is one method that can perforate nanoscale holes. When a high-precision nanohole array is needed, focused ion-beam (FIB) lithography is better suited to deliver that quality. Another strategy, for large-area patterning, is nanoimprint lithography, which is suitable for mass production. Finally, several EOT-based applications for the enhancement of sensitivity are introduced.
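As a sense of scale for the diffraction limit discussed above, Abbe's lateral limit d = λ/(2·NA) can be evaluated directly. The sketch below uses illustrative wavelength and numerical-aperture values (not taken from this chapter):

```python
# Abbe's lateral diffraction limit: d = wavelength / (2 * NA).
# Parameters below are illustrative, not from the chapter.
def abbe_limit(wavelength_nm: float, numerical_aperture: float) -> float:
    """Smallest resolvable lateral distance, in nanometers."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Green light through a high-NA oil-immersion objective:
d = abbe_limit(532.0, 1.4)
print(f"Abbe limit: {d:.0f} nm")  # ~190 nm, far above single-molecule scales
```

Even with a high-NA objective, the resolvable distance remains near 200 nm, which motivates the sub-diffraction approaches surveyed in this chapter.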

Theoretical Background

When light passes through matter, the beam spreads in all directions; how light interacts with objects is a long-standing problem in optics. In 1944, Bethe published a fundamental theory [7]: he derived a formula for the optical transmission through a circular hole perforating an infinitely thin, perfectly conducting screen,

T = (64 / 27π²) (r / λ)⁴

where λ is the wavelength of the incident light and r is the radius of the hole. According to Bethe's theory, light passing through a subwavelength hole should be almost undetectable. However, Ebbesen et al. found that arrays of holes show highly unusual zero-order transmission spectra at wavelengths larger than the array period, without diffraction; this is called extraordinary optical transmission [8]. The surprising aspect is the intensity of the transmitted light: according to Bethe's theory, the transmission should be far lower than what is observed. One reason this unusual phenomenon occurs is the coupling of light with SPs, notably on periodically patterned metallic nanostructures. An SP is a collective oscillation of electrons at the boundary between a metal and an insulator [8].
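The severity of Bethe's r⁴/λ⁴ scaling is easy to see numerically. The sketch below evaluates the formula from this section for an illustrative subwavelength hole (the specific radius and wavelength are assumptions for illustration):

```python
# Bethe's transmission through a subwavelength circular hole in an
# infinitely thin perfect conductor: T = (64 / (27 * pi**2)) * (r / lam)**4.
import math

def bethe_transmission(radius_nm: float, wavelength_nm: float) -> float:
    return (64.0 / (27.0 * math.pi ** 2)) * (radius_nm / wavelength_nm) ** 4

# A 50-nm-radius hole probed with 600-nm light (illustrative numbers):
t = bethe_transmission(50.0, 600.0)
print(f"T = {t:.2e}")  # the (r/lambda)^4 scaling makes T vanishingly small
```

Halving the hole radius reduces the predicted transmission sixteen-fold, which is why the bright transmission Ebbesen et al. observed from hole arrays was so unexpected.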

Fabrication

Although the theoretical fundamentals of EOT through various nanoapertures are well established, accurate fabrication of nanoapertures is required for experimental and industrial implementations of EOT and plasmonic applications. During


the last few decades, several fabrication techniques have been developed and applied to investigate plasmonic nanostructures, including nanoapertures for EOT. Notably, two main structure types suited for nanohole-array fabrication have been introduced: dead-ended nanohole arrays and through-nanohole arrays [9]. Dead-ended nanoholes are used when a fluid containing the analyte is moved over the nanohole array; through-nanohole arrays are used for nanoconfinement of the analyte. In this section, we describe several fabrication methods relevant to nanohole array-based sensing, ranging from techniques used industrially to newly investigated, experimentally applied methods.

Focused Ion-Beam Lithography

FIB lithography was the first method developed that can create high-precision nanohole arrays on a metal surface. Its main principle is that ion–material collisions remove target material from the sample. FIB comprises two processes: sputtering (removal of material from the substrate) using high-energy ions and redeposition (relocation onto another surface). The FIB technique has many advantages: it can fabricate various nanostructures directly on the substrate without a mask, a resist, or a chemical developer. Additionally, FIB provides improved resolution compared to photolithography and e-beam lithography because ions have smaller wavelengths than optical/UV light and electron beams. Also, by regulating the ion energy, the penetration depth of the ions is easily controlled. Therefore, FIB has been used to fabricate accurate nanostructures, including nanoapertures for plasmonic applications. Figure 1 shows a 300-nm nanohole pattern fabricated by FIB on a silicon nitride substrate. The single nanohole, which has a 100-nm depth, is clear and solid; such deep, clean nanohole patterns cannot easily be fabricated by e-beam lithography. Lesuffleur et al. investigated real-time biosensors using periodic nanohole arrays constructed by ion-beam lithographic fabrication, as shown in Fig. 2 [10]. Xue et al. introduced surface plasmon-enhanced EOT of gold quasicrystals using focused ion-beam fabrication [11]. Furthermore, ion-beam lithography can be employed to construct complicated nanostructures that cannot be realized by photolithographic and e-beam lithographic methods, such as three-dimensional nanostructures and metamaterials [12, 13]. A drawback of ion-beam lithography is that the time consumed per fabricated unit area is longer than in photolithography and e-beam lithography, a consequence of the small beam size used in the writing process. Because of this limitation, ion-beam lithography is not appropriate for mass fabrication of nanostructures.


Fig. 1 A SEM image of a 300-nm diameter nanohole fabricated by focused ion-beam lithography

Fig. 2 A SEM image of a fabricated double-hole array using focused ion-beam lithography. The period of each double-hole is 800 nm. An inset graph illustrates the optical transmission through the double-hole array (This figure is used with permission from Ref. [10] by the American Institute of Physics)

Electron-Beam Lithography

Electron-beam lithography (EBL) is a commonly used technique for fabricating metallic nanostructures. Its main principle is identical to that of scanning electron microscopy (SEM): the process uses electron lenses to focus an electron beam. EBL uses the focused beam to draw the desired micro-


Fig. 3 A SEM image of a 250-nm diameter nanohole array on Si glass fabricated using e-beam lithography

and nanopatterns on an electron-beam-sensitive resist atop the substrate. The electron beam alters the properties of the resist; micro- and nanostructures are then established by selectively eliminating the areas that were exposed (or not exposed) using a chemical developer. EBL is combined with other fabrication procedures, often involving more than three steps. First, a positive or negative resist is spin-coated onto a surface; Si glass can be used as the surface. Second, the soft-baked sample is exposed to an e-beam to create the intended patterns. Third, chemical development exposes the resist patterns on the substrate. The final step provides a metallic coating on the surface with an intermediate adhesive layer. For example, to obtain an Au nanohole array, a thin layer of Cr (~5 nm) is coated to provide proper adhesion before a layer of Au is deposited (Fig. 3). A remarkable advantage of EBL over photolithography is the enhanced fabrication resolution: the focused electron beam is smaller than the diffraction limit of photolithographic fabrication using an optical or UV beam. Although application to large areas or multiple-sample fabrication remains difficult, e-beam lithography has been employed to fabricate various types of nanoapertures for plasmonic enhancement, including for EOT. For example, Sharpe et al. introduced gold nanoaperture arrays fabricated by e-beam lithography to produce immunobiosensors [14]. Additionally, various nanogratings and nanoapertures (as illustrated in Fig. 4) for plasmonic enhancement of biosensing or imaging resolution have been constructed by EBL because of its appropriate lithographic resolution and the absence of photomasks [15, 16].

Nanoimprinting

Nanoimprint lithography is a more recently investigated fabrication method for constructing nanoscale patterns in a cost-effective, high-throughput manner. During nanoimprinting, designed patterns are created by mechanically stamping a resist with a master structure. Generally, a monomer or polymer resist that can be cured by heat or light exposure is employed in nanoimprint lithography.


Fig. 4 A SEM image of the nanoaperture antenna array fabricated by e-beam lithography and postdevelopment. Nanoapertures with a 300-nm hole diameter and 1-μm period are fabricated into a gold film (tfilm = 30 nm) (The use of this figure from Ref. [16] is permitted by Wiley-VCH Verlag GmbH & Co. KGaA)

The remarkable advantages of nanoimprint lithography include the simplicity of the lithographic process, its suitability for mass nanostructure fabrication, and its cost efficiency. For these reasons, and with improving fabrication resolution, nanoimprint lithography has been applied to nanoapertures for EOT. For example, Martinez-Perdiguero et al. introduced nanoimprint fabrication to demonstrate gold nanoaperture array-based sensors [17]. Barbillon described high-density plasmonic gold nanodisks on a glass substrate for biomedical sensing applications, produced by soft-UV nanoimprint lithography [18].

Photolithography

Photolithography is a fabrication method for establishing micro- and nanopatterns on a thin-film substrate. The method transfers light through a microscale photomask to project designed patterns onto a light-sensitive layer called a photoresist. After development of the photoresist carrying the pattern-illuminated structures, several chemical processes and the deposition of a specific material yield micro- or nanopatterns on the substrate (Fig. 5a). A main advantage of photolithography is the accessibility of cost-effective, large-area microfabrication. Larger collimated light sources and photomasks expand the photolithographic fabrication area in both research and industrial applications. High-precision motorized nanoscale stages also enable parallel fabrication processes for cost-effective multiple-sample production.

Fig. 5 (a) Simplified illustration of photolithographic fabrication procedures using positive and negative photoresists. (b) SEM images (viewing angle = 45°) of nanoapertures established by the combination of photo- and nanosphere lithography using 600-nm nanosphere particles. Differences in the embedded thickness (t) relevantly change the properties of the nanoapertures (The use of this figure from Ref. [19] is permitted by the American Physical Society)


One critical issue in the photolithographic fabrication of nanoapertures for EOT is the limited fabrication resolution. The resolution of photolithography is bounded by Abbe's lateral diffraction limit, and the effective resolution is somewhat broader because of reflection and scattering. General photolithography using optical or ultraviolet (UV) light can be applied in microfabrication, with resolutions of approximately a few hundred nanometers. To exceed this resolution limit and build nanostructures for plasmonic applications, including EOT, several alternative fabrication approaches based on photolithography have been suggested. Kelf et al. fabricated metallic circular nanoapertures generating localized plasmons by combining photo- and nanosphere lithography (Fig. 5b) [19]. Using an identical method, Goncalves et al. described the fabrication of triangular nanoapertures [20]. Henzie et al. introduced phase-shifting photolithography, a high-resolution photolithographic fabrication method using phase-shifted light illumination, to investigate plasmonic nanostructures [21].
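The "few hundred nanometers" resolution quoted above can be sketched with the Rayleigh criterion for projection lithography, CD = k₁·λ/NA. The k₁ factor and numerical aperture below are illustrative assumptions, not values from this chapter:

```python
# Minimum printable feature (critical dimension) in projection
# photolithography, CD = k1 * wavelength / NA (Rayleigh criterion).
# k1 = 0.5 and NA = 0.6 are illustrative assumptions.
def critical_dimension(wavelength_nm: float, na: float, k1: float = 0.5) -> float:
    return k1 * wavelength_nm / na

# i-line UV exposure (365 nm) with a modest projection lens:
cd = critical_dimension(365.0, na=0.6)
print(f"CD = {cd:.0f} nm")  # a few hundred nanometers, consistent with the text
```

With these parameters the critical dimension lands near 300 nm, which is why sub-100-nm nanoapertures require the hybrid approaches (nanosphere or phase-shifting lithography) cited above.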

Applications

Nanohole array structures have been widely used in many fields, such as chemical, medical, and biological sensors, to sensitively detect dielectric changes [22]. They can also be applied to EOT-based super-resolution imaging techniques [23]. Detailed information on EOT-based high-sensitivity sensors and high-resolution imaging techniques is given below.

EOT Gas Sensing

Recently, a chemoselective gas sensor was designed using plasmonic nanohole array structures. Gas molecules are small, which complicates detection, but it is possible to detect them using nanohole arrays [24]; thus, this plasmonic sensor is designed to detect gaseous compounds sensitively using EOT. Figure 6 shows a schematic design of the gas-detecting sensor, composed of periodic nanohole arrays and a voltage-directed assembly process. A variety of methods, such as e-beam lithography, photolithography, and FIB, were used to fabricate the nanohole structures. A nanohole array structure fabricated by e-beam lithography lies in each sensor region A, B, ..., N (see Fig. 6a) [25, 26]. Each nanohole array in Fig. 6a is connected to an electrical contact, which supplies the electrical bias for the voltage-directed assembly process. Because assembly occurs only where a voltage is applied, a different type of molecule can be attached to the surface chemistry of each region; all sensor regions thus carry different molecular conditions and can display chemoselectivity, forming a multiplexed sensor. The voltage-directed assembly process makes localized sensing possible through electrical isolation, and it was implemented using optical lithography to obtain the different chemistries.


Fig. 6 (a) Schematic representation of the different steps for achieving a multiplexed chemoselective sensor array. (b) Assembly process of a series of chemoselective compounds to construct a multiplexed sensor array. Chemoselectivity is given to an individual sensing area using a voltage-directed assembly technique. Since the assembly occurs only in the presence of an applied voltage, separate sensors can be given different chemistries in subsequent steps (The permission of reprinting Fig. 6 from Ref. [25] was granted by the Optical Society)

Fig. 7 (a) Schematic representation of the difficulty of gas sensing using plasmonics. (b) Simulated (blue solid) and measured (red dotted) transmission through a nanoscale-perforated gold film (The permission of reprinting Fig. 7 from Ref. [25] was granted by the Optical Society)


Fig. 8 (a) Microscope image of two sensor elements, each composed of multiple nanohole arrays fabricated using PMMA with a complementary pattern. The two sensor arrays are electrically isolated, and both are connected to a gold contact pad. (b) SEM image showing the dimensions of the perforation in the gold film. The nanoholes are 200 nm in diameter and are separated by 400 nm in a square lattice (The permission of reprinting Fig. 8 from Ref. [25] was granted by the Optical Society)

A few gas molecules in the dielectric environment can adsorb around the nanohole area, but the resulting change is very small because the density of adsorbed molecules is low (Fig. 7a); gas chemoselectivity is therefore difficult. Figure 7b displays transmission spectra simulated by the finite-difference time-domain (FDTD) method [27] and highlights the similarity between the simulated and measured spectra. The sample used in the experiment was a 50-nm gold film on a silicon substrate; the hole diameter is 200 nm, and the pitch between holes is 400 nm. A period of 250–400 nm was considered proper for the hole array over a spectral window of 400–800 nm [25]. Figure 8a shows two sensors patterned with periodic nanohole arrays. Each sensor array is electrically isolated and connected to a gold pad (not shown in the image). This gold pad, about 10 μm in length, supplies the electrical bias used to conduct the voltage-directed assembly process. The simulation data in Fig. 7b were obtained for multiple nanohole array patterns with a diameter of 200 nm, a center-to-center distance of 400 nm (Fig. 8), and a gold film thickness of 50 nm. The SEM image shows a typical nanohole array structure patterned by e-beam lithography and completed with a lift-off process (Fig. 8b). A simple delivery system is used to test the gas-sensing assembly, employing a high-magnification objective to illuminate and collect light from a sensor array. The light passes through a single sensor array, and the beam spot is detected by a CCD camera, allowing real-time imaging. The light is delivered through an optical fiber, so beam splitters simplify the alignment. A bubbler is connected to the


Fig. 9 Schematic representation of the gas chromatography system. A hydrogen carrier gas is used in a bubbler to transport the test molecule to the sensor array and is analyzed downstream by an HP gas chromatograph (The permission of reprinting Fig. 9 from Ref. [25] was granted by the Optical Society)

nanohole array, and the nanohole array is exposed for 15 min (Fig. 9); the transmission peak then displays a 3-nm shift [25]. This demonstrates that reusable multiplexed gas sensors based on nanohole array structures can detect gaseous compounds. A gas sensor can achieve enhanced sensitivity using a plasmonic metal hole array structure that exhibits the EOT phenomenon [8, 28]. Higher sensitivity and selectivity are required for gas sensing in analytical chemistry, and the application of surface-enhanced infrared absorption (SEIRA) [29–31] with metal hole array structures has been studied. Figure 10 shows the setup for gas detection. A metal hole array with hole diameter c and periodicity a, for nano- to microsized holes, is designed and fabricated through lithography and lift-off [26]. The transmission peak shifts depending on the hole diameter and periodicity, and a proper ratio between c and a can be selected through experiments. Two metal hole array mirrors replace the normal mirrors. The system uses a filament-type infrared light source to deliver black-body-like emission and an oscilloscope to detect the output signal. The device can gain selectivity by employing additional mirrors. Because the device has windows on both sides, the experiments used three different configurations to compare results. First, both windows carried metal hole array patterns (black-framed picture in Fig. 11a). Second, only one window carried a metal hole array (red-framed). Third, bare silicon substrates faced the inside of the gas cell (blue-framed). The three sets of results were obtained by monitoring the output signal versus the gas concentration. The absorbance change (ΔA) is shown in Fig. 11b (I is the detected signal). The signals from the double-sided and one-sided metal hole arrays are calculated to be 27 and 9 times larger, respectively, than the signal from the silicon-facing configuration [32].
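The absorbance change used to compare the three configurations follows the definition in Fig. 11, ΔA = log(I/I₀). The sketch below evaluates it for made-up illustrative detector readings (the intensity values are assumptions, chosen only to show how the 27× enhancement figure is formed):

```python
# Absorbance change from detector signals, dA = log10(I / I0), as in Fig. 11.
# The intensity values here are hypothetical illustrative readings.
import math

def absorbance_change(intensity: float, reference: float) -> float:
    return math.log10(intensity / reference)

I0 = 1.00       # reference signal with no analyte gas
I_mha = 0.70    # hypothetical signal, both MHA windows exposed
I_si = 0.987    # hypothetical signal, bare Si facing inside

dA_mha = absorbance_change(I_mha, I0)
dA_si = absorbance_change(I_si, I0)
print(f"enhancement ~= {dA_mha / dA_si:.0f}x")  # -> "enhancement ~= 27x" for these numbers
```

The ratio of the two ΔA values, not their absolute magnitudes, is what the 27-fold and 9-fold enhancement figures quantify.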
Using 3D FDTD, the transmission spectra and enhancement properties of the metal hole array gas sensor were simulated [32]. Mirrors with a metal hole array of


Fig. 10 Schematic illustration of the SF6 gas detection setup used in the experiments. The IR light source is an IRS-001C (IRS, Ltd.) and the SF6 detector is a LIM-122 (Infratec, LLC). The gas cell mirrors were replaced with Si substrates carrying silver multihole arrays (MHA) for augmented sensitivity. Size reference: the output diameter of the parabolic mirror is 2.5 cm, the window diameter of the gas cell is 1.5 cm, and a single window on the detector cap measures 3.5 × 2.5 mm² (The permission of reprinting Fig. 10 from Ref. [29] was granted by the Optical Society)

hole diameter c = 1.6 μm and period a = 3.3 μm were used to obtain the data (Fig. 12a). The reflected and transmitted spectra for varying metal hole array layer thickness t are shown in Fig. 12b; a thinner layer yields a wider transmission bandwidth. However, the transmission peak measured in the experiment was lower than the simulated result. A comparison between Fig. 12c (t = 100 nm) and Fig. 12d (t = 20 nm) shows the intensity of the extraordinary optical transmission in the xz-plane: the thinner layer provides lower enhancement, and the enhanced-field region weakens within 100 nm above the metal hole arrays.

Biological Sensing

EOT can be used for biosensing. Because the transmitted light is confined to regions of tens or hundreds of nanometers, sensing can occur in small areas with sensitivity high enough to detect a biomolecule, a biomolecular interaction, or changes in a cell membrane. Selected studies introducing EOT-based biological sensors


Fig. 11 (a) Output signal dependence on the SF6 concentration. The insets show schematically the configuration of the gas cell windows (whether the MHA is exposed to the inside of the gas cell or not). (b) Absorption change ΔA = log(I/I0) as a function of SF6 concentration; the hashed region marks the detection threshold at 0.1 %. The line is the linear fitting by the least-squares method. The inset figure shows the log-log plot of absorption and concentrations. The lines are drawn as eye guides for the linear dependence (The permission of reprinting Fig. 11 from Ref. [29] was granted by the Optical Society)

are described below. Brolo et al. applied periodic subwavelength nanohole arrays in a gold film as an SP resonance sensing platform to monitor the binding of biological molecules to the metallic film surface [33]. Figure 13 shows the transmission spectra of normally incident white light through hole arrays in gold after successive surface modifications. Spectrum a in Fig. 13 presents a distinct resonance at 645 nm from the interface between air and the bare metal. Spectrum b was obtained after the metallic surface had been modified with a MUA monolayer; the maximum transmission resonance shifted to 650 nm. A further resonance shift, to 654 nm (spectrum c), was observed when proteins adsorbed to the MUA layer. The sensitivity of the sensor was found to be 400 nm per refractive index unit. Yanik et al. demonstrated a label-free optofluidic nanoplasmonic sensor that can detect intact viruses directly from fluidic biological media at concentrations of ~10⁹ PFU/mL, a clinically relevant range, with minimal sample preparation. The group also demonstrated the detection and recognition of viruses ranging from small enveloped RNA viruses (e.g., vesicular stomatitis virus and pseudotyped Ebola) to a large enveloped DNA virus (e.g., vaccinia virus), spanning a dynamic range of three orders of magnitude [34]. Figure 14a, b show three-dimensional schematics of the EOT biosensors. The sensing area consists of an aligned nanohole array, and antibodies are immobilized on the hole array to capture antigens flowing through the fluidic channel. A nonfunctionalized sensing area is prepared simultaneously for comparison of the resonance peak shift. The results are shown in Fig. 14c, d. The measured spectra
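The reported bulk sensitivity of 400 nm/RIU [33] lets one translate the observed peak shifts into effective refractive-index changes. The sketch below does exactly that for the MUA and protein steps; treating a thin adsorbed layer as an effective bulk index change is a simplification, and the helper name is ours:

```python
# Convert an EOT resonance shift into an effective refractive-index change
# using the reported bulk sensitivity S = 400 nm/RIU [33]. Treating surface
# binding as a bulk index change is a simplifying assumption.
def index_change(shift_nm: float, sensitivity_nm_per_riu: float = 400.0) -> float:
    return shift_nm / sensitivity_nm_per_riu

dn_mua = index_change(650.0 - 645.0)  # MUA monolayer: 645 nm -> 650 nm
dn_bsa = index_change(654.0 - 650.0)  # adsorbed protein: 650 nm -> 654 nm
print(f"MUA: dn ~ {dn_mua:.4f} RIU; protein: dn ~ {dn_bsa:.4f} RIU")
```

The 5-nm and 4-nm shifts correspond to effective index changes of roughly 0.01 RIU, illustrating how modest spectral shifts encode monolayer-scale binding events.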


Fig. 12 (a) Simulation layout of the Ag MHA with period a = 3.3 μm and hole diameter c = a/2; the box indicates the footprint of the elementary simulation cell, made periodic through the boundaries. (b) Normalized power reflected (RX) and transmitted (TX) out of a 10-μm-long gas cell realized with Si:MHA mirrors as in the black inset of Fig. 4a, for different thicknesses t of the Ag MHA. (c) xz-plane cross section of a Si:MHA mirror with t = 100 nm illuminated from the bottom, depicting the E-field intensity enhancement at 10.6-μm wavelength; (d) the same for t = 20 nm (The permission of reprinting Fig. 12 from Ref. [29] was granted by the Optical Society)

Fig. 13 Normalized transmission spectra of normally incident white light through an array of subwavelength (200-nm diameter) holes in a 100-nm-thick gold film deposited on a glass slide. (a) Bare (clean) Au surface; (b) Au modified with a monolayer of MUA; (c) Au-MUA modified with BSA [33]


Fig. 14 Three-dimensional renderings (not drawn to scale) and experimental measurements illustrating the detection scheme of optofluidic nanoplasmonic biosensors based on resonant transmission due to the extraordinary light transmission effect. (a) Detection sensor (immobilized with capturing antibody targeting VSV) and control sensor (unfunctionalized). (b) VSV attaches only to the antibody-immobilized sensor. (c) No observable shift is detected for the control sensor after the VSV incubation and washing. (d) Accumulation of the VSV through capture by the antibodies is experimentally observed. A large effective refractive index increase results in a strong red-shifting of the plasmonic resonances (~100 nm) [34]

represent both before (blue line) and after (red line) incubating the virus-containing sample. The distinct resonance peak observed at 690 nm (blue line) with 25 nm of full width at half maximum (FWHM) was measured from the extraordinary transmitted light through the hole. This transmitted resonance peak (blue line) of the antibody immobilized detection sensor corresponds to the excitation of SPP mode at the metal-dielectric interface. After diffusively delivering the analyte containing the virus sample, a strong resonance red-shift (100 nm) is observed (red line) in the end point measurement. This strong red-shift results from biomolecules binding to the functionalized surface. For the unfunctionalized control sensors (Fig. 14c), a slight resonance red shifting (1 nm) is observed when compared to the resonance of the blue curve, likely because of nonspecific binding events. This research suggests promising specific target detectable optofluidic biosensors.
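As a back-of-the-envelope illustration (not part of the cited studies' analysis), the quoted bulk sensitivity lets a resonance shift be converted into an effective refractive-index change. The function name is illustrative; the numbers echo the Fig. 13 peaks and the 400 nm/RIU sensitivity quoted above:

```python
# Convert an EOT resonance peak shift into an effective refractive-index
# change using the quoted bulk sensitivity (illustrative helper, not the
# authors' analysis code).
def index_change_from_shift(peak_before_nm, peak_after_nm, sensitivity_nm_per_riu):
    """Return the effective refractive-index change implied by a peak shift."""
    return (peak_after_nm - peak_before_nm) / sensitivity_nm_per_riu

# Bare Au -> MUA monolayer -> BSA adsorption (peak positions from Fig. 13).
dn_mua = index_change_from_shift(645.0, 650.0, 400.0)   # MUA step
dn_bsa = index_change_from_shift(650.0, 654.0, 400.0)   # BSA step
print(f"MUA step: {dn_mua:.4f} RIU, BSA step: {dn_bsa:.4f} RIU")
```

The 5-nm and 4-nm shifts thus correspond to effective index changes of roughly 0.0125 and 0.0100 RIU at the sensor surface.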


Fig. 15 (a) Experimental setup of a structured illumination microscope based on extraordinary optical transmission through subwavelength nanoaperture arrays. (b) Fluorescence images (xz- and xy-) of a fluorescent solution that was illuminated by EOT through the nanoapertures. This 3D fluorescence profile consists of 2D images captured at 100-nm steps in the axial (z-) direction (This figure is reprinted with the permission from the Society of Photo-Optical Instrumentation Engineers [35])

EOT-Based Imaging Techniques

Although the development of EOT-based super-resolution imaging techniques is not as advanced as that of EOT-based high-sensitivity sensing methods, several studies have reported meaningful results in EOT-assisted high-resolution imaging, as described below. Docter et al. investigated a novel microscopic method using multiple diffraction-limited illumination spots that were established by extraordinary optical transmission through metallic grids of nanoscale apertures [35].


Fig. 16 Extraordinary transmission-based axial imaging for super-resolved living cell analysis. (a) Fluorescence intensity image of a RAW264.7 cell that was labeled with FITC-conjugated cholera toxin subunit B (FITC-CT-B) and measured by EOT-AIM. The intensity image is overlaid with a wide-field image. (b) Profiles of the fluorescence intensity along the axial direction after normalization. The intensity profiles at j (array number) = 3 and 10 are also shown on the right (The figure is reprinted with permission from WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim [23])

In a feasibility study using theoretical calculations, EOT-based structured illumination microscopy was shown to improve imaging resolution compared to confocal microscopy. The study confirmed the resolution enhancements predicted by the simulation by imaging fluorescent samples using the experimental EOT-based structured illumination setup with fabricated nanoscale aperture grids, as shown in Fig. 15. To acquire fluorescence information at selected axial points, Choi et al. developed an EOT-based axial imaging (EOT-AIM) method using linear arrays of metallic nanoscale apertures [23]. They applied the different EOT penetration depths through nanoscale apertures of different sizes to estimate and reconstruct fluorescence intensities of selected axial regions. First, they calculated the optical near-field of the EOT through the nanoscale aperture array and estimated the EOT penetration depth for each aperture size. The calculated results were then compared with experimental calibration data using a fluorescein-gel matrix on the nanoscale aperture arrays. By imaging living cells that were labeled with a fluorescent indicator and cultured on the nanoapertures, the possibility of live-cell measurements using EOT-based super-resolved fluorescence imaging with axial resolution enhancements (Δz = 25–125 nm) was demonstrated (Fig. 16). Furthermore, several studies established EOT-based multispectral imaging by dimensional expansion (1D → 2D) of EOT-based spectral sensing. Yao et al. introduced a 3D plasmonic crystal with which 2D protein binding maps can be determined by EOT-based high-sensitivity spectral sensing [36]. They fabricated a 3D plasmonic device after FDTD simulation-based optimizations to detect protein binding.

Fig. 17 Nanoaperture-array-based EOT-implemented multispectral imaging. (a) Experimental schematic of a multispectral imaging platform using EOT through nanoaperture arrays. (b) Raw image of nanoaperture arrays and unmixed transmission multispectral images of distilled water with 0, 10, 30, and 50 μM concentrations of methylene blue. The selected spectral bands are 662–714, 705–752, 745–794, and 788–832 nm (The reuse of this figure is permitted by the Nature Publishing Group [37])
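The inversion idea behind the EOT-AIM approach described above — apertures of different sizes have different evanescent penetration depths, so their total fluorescence signals jointly constrain the axial intensity profile — can be sketched as a small linear-inverse problem. This is a toy numerical sketch with assumed penetration depths and a synthetic fluorophore layer, not the authors' algorithm:

```python
import numpy as np

# Illustrative (assumed) EOT penetration depths for three aperture sizes, in nm.
depths = np.array([60.0, 90.0, 140.0])
z = np.linspace(0.0, 300.0, 31)              # axial sample points (nm)
dz = z[1] - z[0]

# Forward model: aperture i integrates fluorescence weighted by an
# exponentially decaying excitation field, F_i = sum_j exp(-z_j / d_i) * C_j * dz.
A = np.exp(-z[None, :] / depths[:, None]) * dz

# Synthetic axial fluorophore distribution: a thin layer centered at 100 nm.
c_true = np.exp(-((z - 100.0) ** 2) / (2.0 * 25.0 ** 2))
f_meas = A @ c_true                          # simulated per-aperture signals

# Invert on a coarse 3-layer axial basis (3 unknowns for 3 measurements).
layers = np.stack([(z < 100.0), (z >= 100.0) & (z < 200.0), (z >= 200.0)]).astype(float)
A_coarse = A @ layers.T
c_coarse, *_ = np.linalg.lstsq(A_coarse, f_meas, rcond=None)
print(c_coarse)                              # coarse axial profile estimate
```

In practice the number of aperture sizes, the depth values, and the regularization of the inversion would all come from the calibration described in the text.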


The plasmonic device was applied on a conventional optical microscope with a charge-coupled device (CCD) camera. By capturing 2D EOT field changes, protein binding properties (submonolayer quantities of alkanethiols) on the 3D plasmonic crystal can be monitored with high resolution and high sensitivity. Recently, Najiminaini et al. introduced an EOT-based high-sensitivity hyperspectral imaging method using the transmission characteristics of gold nanohole arrays [37]. This hyperspectral device consists of multiple nanoaperture components, each with a unique period for a specific transmission resonance wavelength (Fig. 17). They established a spectral unmixing algorithm to acquire 2D hyperspectral images within the wavelength region of 662–832 nm. This device demonstrates the potential to build 2D hyperspectral imaging sensors based on EOT through designed metallic nanoapertures, with various possible applications in biomedical imaging.
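Linear spectral unmixing of the kind such a multi-band device requires can be sketched as a least-squares inversion per pixel. The endmember spectra below are invented for illustration and do not correspond to measured methylene blue data:

```python
import numpy as np

# Rows: four EOT transmission bands; columns: assumed endmember spectra of two
# absorbers (e.g., a dye and background) -- illustrative numbers only.
endmembers = np.array([
    [0.9, 0.2],
    [0.7, 0.3],
    [0.4, 0.5],
    [0.1, 0.8],
])

# A measured 4-band pixel is modeled as a linear mix of the endmembers.
abundance_true = np.array([0.3, 0.6])
pixel = endmembers @ abundance_true

# Unmix by ordinary least squares (a non-negative solver would be used in practice).
abundance_est, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
print(abundance_est)   # recovers approximately [0.3, 0.6]
```

Applied to every pixel of the raw nanoaperture-array image, this yields one abundance map per component, which is the essence of the 2D snapshot multispectral scheme.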

Summary

In this chapter, we reviewed EOT-based sensing and imaging techniques for varied applications. Before introducing the applications, we summarized fabrication methods of nanostructures for the EOT phenomenon: focused ion-beam and electron-beam lithography for highly precise nanostructures, nanoimprint lithography for massive nanostructure fabrication, and photolithography for cost-effective fabrication were introduced, and optimized nanosamples fabricated by those lithographic techniques were shown for sensing and imaging applications. We then reviewed several interesting applications based on EOT to enhance the sensitivity of biosensors and to assist high-resolution imaging. EOT-based approaches using nanostructures have been successfully applied to gas and biological sensing, and there have been many novel approaches for high-resolution imaging and optical manipulation. We verified the potential of the plasmonic EOT phenomenon for broad research fields and suggested successful approaches for further research in biosensing and bioimaging.

References

1. Hell SW, Wichmann J (1994) Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy. Opt Lett 19:780–782
2. Hess ST, Girirajan TP, Mason MD (2006) Ultra-high resolution imaging by fluorescence photoactivation localization microscopy. Biophys J 91:4258–4272
3. Gustafsson MG (2000) Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy. J Microsc 198:82–87
4. Kim K, Kim DJ, Cho EJ, Suh JS, Huh YM, Kim D (2009) Nanograting-based plasmon enhancement for total internal reflection fluorescence microscopy of live cells. Nanotechnology 20:015202
5. Huisken J, Swoger J, Del Bene F, Wittbrodt J, Stelzer EH (2004) Optical sectioning deep inside live embryos by selective plane illumination microscopy. Science 305:1007–1009
6. Barnes WL, Dereux A, Ebbesen TW (2003) Surface plasmon subwavelength optics. Nature 424:824–830


7. Bethe HA (1944) Theory of diffraction by small holes. Phys Rev 66:163
8. Ebbesen TW, Lezec HJ, Ghaemi HF, Thio T, Wolff PA (1998) Extraordinary optical transmission through sub-wavelength hole arrays. Nature 391:667–669
9. Escobedo C (2013) On-chip nanohole array based sensing: a review. Lab Chip 13:2445–2463
10. Lesuffleur A, Im H, Lindquist NC, Oh SH (2007) Periodic nanohole arrays with shape-enhanced plasmon resonance as real-time biosensors. Appl Phys Lett 90:243110
11. Xue J, Zhou W, Dong B, Wang X, Chen Y, Huq E, Zeng W, Qu X, Liu R (2009) Surface plasmon enhanced transmission through planar gold quasicrystals fabricated by focused ion beam technique. Microelectron Eng 86:1131–1133
12. Xu T, Lezec HJ (2014) Visible-frequency asymmetric transmission devices incorporating a hyperbolic metamaterial. Nat Commun 5:4141
13. Liu Y, Zhang X (2011) Metamaterials: a new frontier of science and technology. Chem Soc Rev 40:2494–2507
14. Sharpe JC, Mitchell JS, Lin L, Sedoglavich N, Blaikie RJ (2008) Gold nanohole array substrates as immunobiosensors. Anal Chem 80:2244–2249
15. Oh Y, Lee W, Kim Y, Kim D (2014) Self-aligned colocalization of 3D plasmonic nanogap arrays for ultra-sensitive surface plasmon resonance detection. Biosens Bioelectron 51:401–407
16. Kim K, Yajima J, Oh Y, Lee W, Oowada S, Nishizaka T, Kim D (2012) Nanoscale localization sampling based on nanoantenna arrays for super-resolution imaging of fluorescent monomers on sliding microtubules. Small 8:892–900
17. Martinez-Perdiguero J, Retolaza A, Otaduy D, Juarros A, Merino S (2013) Real-time label-free surface plasmon resonance biosensing with gold nanohole arrays fabricated by nanoimprint lithography. Sensors 13:13960–13968
18. Barbillon G (2012) Plasmonic nanostructures prepared by soft UV nanoimprint lithography and their application in biological sensing. Micromachines 3:21–27
19. Kelf TA, Sugawara Y, Cole RM, Baumberg JJ, Abdelsalam ME, Cintra S, Mahajan S, Russell AE, Bartlett PN (2006) Localized and delocalized plasmons in metallic nanovoids. Phys Rev B 74:245415
20. Gonçalves MR, Makaryan T, Enderle F, Wiedemann S, Plettl A, Marti O, Ziemann P (2011) Plasmonic nanostructures fabricated using nanosphere lithography, soft lithography and plasma etching. Beilstein J Nanotechnol 2:448–458
21. Henzie J, Lee J, Lee MH, Hasan W, Odom TW (2009) Nanofabrication of plasmonic structures. Annu Rev Phys Chem 60:147–165
22. Liedberg B, Nylander C, Lundström I (1983) Surface plasmon resonance for gas detection and biosensing. Sensors Actuators 4:299–304
23. Genet C, Ebbesen TW (2007) Light in tiny holes. Nature 445:39–46
24. Wright JB, Cicotte KN, Subramania G, Dirk SM, Brener I (2012) Chemoselective gas sensors based on plasmonic nanohole arrays. Opt Mater Express 2:1655–1662
25. Nishijima Y, Nigorinuma H, Rosa L, Juodkazis S (2012) Selective enhancement of infrared absorption with metal hole arrays. Opt Mater Express 2:1367–1377
26. Lumerical. Retrieved from http://www.lumerical.com
27. Martín-Moreno L, García-Vidal FJ, Lezec HJ, Pellerin KM, Thio T, Pendry JB, Ebbesen TW (2001) Theory of extraordinary optical transmission through subwavelength hole arrays. Phys Rev Lett 86:1114–1117
28. Ohta N, Nomura K, Yagi I (2010) Electrochemical modification of surface morphology of Au/Ti bilayer films deposited on a Si prism for in situ surface-enhanced infrared absorption (SEIRA) spectroscopy. Langmuir 26:1897–18104
29. Miyatake H, Hosono E, Osawa M, Okada T (2006) Surface-enhanced infrared absorption spectroscopy using chemically deposited Pd thin film electrodes. Chem Phys Lett 428:451–456
30. Aouani H, Sipova H, Rahmani M, Navarro-Cia M, Hegnerova K, Homola J, Hong M, Maier SA (2013) Ultrasensitive broadband probing of molecular vibrational modes with multifrequency optical antennas. ACS Nano 7:669–675


31. Nishijima Y, Adachi Y, Rosa L, Juodkazis S (2013) Augmented sensitivity of an IR-absorption gas sensor employing a metal hole array. Opt Mater Express 3:968–976
32. Brolo AG, Gordon R, Leathem B, Kavanagh KL (2004) Surface plasmon sensor based on the enhanced light transmission through arrays of nanoholes in gold films. Langmuir 20:4813–4815
33. Yanik AA, Huang M, Kamohara O, Artar A, Geisbert TW, Connor JH, Altug H (2010) An optofluidic nanoplasmonic biosensor for direct detection of live viruses from biological media. Nano Lett 10:4962–4969
34. Docter MW, Van den Berg PM, Alkemade PF, Kutchoukov VG, Piciu OM, Bossche A, Young IT, Garini Y (2007) Structured illumination microscopy using extraordinary transmission through sub-wavelength hole-arrays. J Nanophotonics 1:011665
35. Choi J-R, Kim K, Oh Y, Kim AL, Kim SY, Shin JS, Kim D (2014) Extraordinary transmission-based plasmonic nanoarrays for axially super-resolved cell imaging. Adv Opt Mater 2:48–55
36. Yao J, Stewart ME, Maria J, Lee TW, Gray SK, Rogers JA, Nuzzo RG (2008) Seeing molecules by eye: surface plasmon resonance imaging at visible wavelengths with high spatial resolution and submonolayer sensitivity. Angew Chem Int Ed 120:5091–5095
37. Najiminaini M, Vasefi F, Kaminska B, Carson JJ (2013) Nanohole-array-based device for 2D snapshot multispectral imaging. Sci Rep 3:2589

3-D Single Particle Tracking Using Dual Images Divided by Prism: Method and Application to Optical Trapping Takanobu A. Katoh, Shoko Fujimura, and Takayuki Nishizaka

Contents
Introduction
Design Rationale for 3-D Detection
Construction
Ready-to-Use Implement
Calibration of the Displacement Along z-Direction
Application
Optical Trapping Equipped with 3-D Tracking
Concluding Remarks
References

Abstract

We describe here a three-dimensional optical tracking method that is realized with a simple optical component, a quadrangular wedge prism. Two additional lenses located between a conventional optical microscope and a camera enable single particles to be tracked in 3-D. Because of the simplicity of its rationale and construction, any laboratory equipped with a 2-D tracking method, under fluorescence, phase-contrast, bright-field, or dark-field illumination, can adopt our method with the same analysis procedure and thus the same precision. Applications to a molecular motor, the kinesin-microtubule system, and to optical trapping are also demonstrated, verifying the advantage of our approach for assessing the movement of tiny objects, with sizes ranging from tens of nanometers to a few microns, in aqueous solution.

T.A. Katoh • S. Fujimura • T. Nishizaka (*) Department of Physics, Gakushuin University, Tokyo, Japan e-mail: [email protected]; [email protected]; [email protected]

© Springer Science+Business Media Dordrecht 2015 A. H.-P. Ho et al. (eds.), Handbook of Photonics for Biomedical Engineering, DOI 10.1007/978-94-007-6174-2_2-1


Keywords

3-D single particle tracking • Optical system construction • Custom-ordered optical device • Design • Implementation • Kinesin-microtubule system • Mini-stage equipped with piezo-actuator • Proteins inside a single cell • Schematic illustration • Calibration of displacement along z-direction • Optical trapping with 3-D tracking • Optical tweezers • Parallax

Introduction

In the past four decades, the technique called optical trapping (also referred to as "optical tweezers"), originally designed by Arthur Ashkin [1–4], has been developed into a set of useful applications for manipulating micron-sized particles under optical microscopes (see excellent reviews [5–8] for more details). By tracking single trapped particles with a precision of tens of nanometers, the optical trap also works as a force transducer: meticulous and reliable calibration allows us to measure the external force imposed on a trapped particle through its displacement from the trap center. However, the direction of force measurement is limited to 2-D, simply because the displacement of a specimen can be detected only in the sample plane, defined as the plane perpendicular to the optical axis of the objective lens. Although trapping of particles is achieved in all directions, only 2-D force is measured under conventional optical microscopes. This limitation causes inaccurate estimation of tiny forces, such as those produced by a single bacterium or even by single motor molecules. In this chapter, a simple but powerful approach is introduced to break this limitation. One optical component, a wedge prism, can extend conventional 2-D detection into the 3-D world [9]. By combining the prismatic setup with optical trapping, 3-D force measurement is now possible on any microscope equipped with a trapping laser. This approach will be a powerful tool in research fields such as cell biology, biophysics, biomedical engineering, and single-molecule physiology.

Design Rationale for 3-D Detection

The way that human beings detect the depth of objects with their eyes is called "parallax." Using the different geometry of multiple detectors (two eyes) arranged in the horizontal direction, any movement of an object is converted into 3-D cognition at once. The trick is very simple: intuitively speaking, the directions of motion projected onto the two detectors are opposite when an object moves away from you; the object moves slightly rightward when you watch it with only your right eye, whereas it moves leftward when watched with your left eye. If a researcher sets up the same arrangement for a specimen and a camera, any movement parallel to the optical axis can be converted into a relative horizontal displacement, as in the case of two eyes.


Our group decided to use a single optical component, a wedge prism, for this task [9]. The prism is located at the back focal plane of the objective lens (BFP) and divides the beam from the point source into two components of light (Fig. 1a). The two separated beams are projected onto a single camera plate and work as "two eyes" because the projected positions are perpendicularly displaced. When the light source is displaced parallel to the optical axis, the directions of displacement of the two images of the source are opposite, as in the case of two horizontally aligned eyes. In this way, z-movement of a specimen is converted into a relative x-directed displacement between the two half-separated images. X- and y-movements are simultaneously detected as the average positions of the two images. Taken together, the absolute position of any particle that appears as a point light source is determined in three dimensions, with precision similar to that of 2-D detection without the prism.
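The bookkeeping just described can be written down directly: the lateral position is the average of the two spot centroids, and the axial position follows from their relative x-displacement via a calibration factor. This is a minimal sketch; dx_ref and the slope are placeholders that must come from a per-objective calibration (0.66 echoes the example value quoted later in this chapter):

```python
def position_3d(spot_top, spot_bottom, dx_ref=0.0, slope=0.66):
    """Recover (x, y, z) from the two prism-split images of one emitter.

    spot_top, spot_bottom: (x, y) centroids of the two half-images (same units).
    dx_ref: relative x-displacement measured for an in-focus emitter.
    slope: calibration factor converting relative x-displacement into z
           (placeholder value; it must be calibrated for each objective).
    """
    # Lateral position: average of the two projected spots.
    x = (spot_top[0] + spot_bottom[0]) / 2.0
    y = (spot_top[1] + spot_bottom[1]) / 2.0
    # Axial position: the two spots move in opposite x-directions with z.
    dx = spot_top[0] - spot_bottom[0]
    z = (dx - dx_ref) / slope
    return x, y, z

print(position_3d((105.0, 50.0), (95.0, 250.0), dx_ref=10.0))
# An in-focus emitter (dx equal to dx_ref) gives z = 0.
```

The sign convention of dx and of the slope must be fixed consistently with the calibration scan, a point the authors return to when discussing handedness.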

Construction

Because a single objective lens comprises multiple lenses inside a metal tube and the BFP is located inside the tube, the precise arrangement of any optical component at the BFP is impossible in most cases. Instead, the BFP is focused outside the camera port of the microscope as an equivalent BFP (eBFP) by setting a convex lens (L1) with focal length f1. By locating the lens at a distance f1 from the equivalent sample plane (eSP), the eBFP appears at a distance f1 from L1. Subsequently, the other lens (L2), with focal length f2, focuses the image on the camera plane (Fig. 1b). The prism is set between the two lenses. When the prism is carefully positioned such that its edge lies just at the center of the beam flux from the specimen, two spots with the same intensity appear on one camera plate. To achieve the above optical setup, a custom-made prism with a shallow angle that matches both f2 and the size of the camera plate (dx × dy) is needed. The distance between the two spots should be half of dy to maximize the observation area when tracking specimens under the microscope. Additionally, a linear translator equipped with a Mitutoyo micrometer head is needed for precise adjustment of the prism along the x-direction. The position can be easily adjusted by watching single emitters, such as fluorescent beads attached to the glass substrate, so as to produce two equal signals. For z-adjustment of the prism, fine micrometers are not recommended because the eBFP can be determined only on the millimeter scale at the above magnification. One easy trick is to project a signature component of the condenser-lens unit located above the objective, such as the ring slit of the phase-contrast microscope, onto thin paper. By quickly moving the paper along the z-direction by hand, the precise position of the eBFP can be recognized directly by eye as the position where the ring pattern becomes sharpest. The precision is limited to the submillimeter scale with this procedure, which is presumably enough to reproduce the same prism arrangement (Fig. 1b).
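For a first-pass estimate when specifying such a custom prism, the thin-prism deflection formula δ ≈ (n − 1)α can be combined with the angle-to-position mapping of L2. Assuming one half-beam passes undeviated while the other is deflected by δ, the spot separation on the camera is roughly f2·tan(δ). This back-of-the-envelope sketch, with an assumed glass index and camera size, is not the authors' design calculation:

```python
import math

def wedge_angle_deg(f2_mm, spot_separation_mm, n_glass=1.51):
    """Thin-prism estimate of the wedge angle giving a desired spot separation.

    Assumes one half-beam is undeviated and the other is deflected by
    delta ~ (n - 1) * alpha, with lens L2 (focal length f2) converting that
    angular deviation into a displacement of about f2 * tan(delta) on the camera.
    All inputs here (f2, camera size, glass index) are illustrative assumptions.
    """
    delta = math.atan(spot_separation_mm / f2_mm)   # required deflection angle
    return math.degrees(delta / (n_glass - 1.0))

# Example: f2 = 200 mm, camera plate dy = 13.3 mm, target separation dy/2.
print(f"{wedge_angle_deg(200.0, 13.3 / 2.0):.2f} degrees")
```

A shallow angle of a few degrees comes out of such numbers, consistent with the "shallow angle" custom prism the text describes; the final specification would of course follow the exact split geometry of the quadrangular prism.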

Fig. 1 (a) Schematic of the 3-D tracking optical system developed by our group. The beam from a single emitter located at the right plane is divided into two components of light (the blue and green paths) by the single wedge prism located between two lenses. The beams are projected onto two separate regions of a single camera plate (left). The beam runs from −z to +z. (b) Optical design at the camera port. In the case that the microscope has an infinity optical system, the equivalent back focal plane (eBFP) is located at a distance f1 from L1. The wedge prism should be set at the eBFP in order to divide the beam flux precisely in half



Fig. 2 Micrograph of the custom-ordered optical device attached to the camera port of an inverted microscope (Ti-E, Nikon). The adjustment block for the prism is equipped in the middle cuboid. The front cuboid (right in the micrograph) has a switchable beam splitter inside, which enables an additional channel with a different wavelength to be detected by the other camera

Ready-to-Use Implement

Our 3-D method is simple, and thus its construction is straightforward. An implement including all optics in one device is useful and reliable for reproducible calibrations and outputs. Our group developed two types of custom-ordered implements for Nikon microscopes: one for the TE2000E and the other for the Ti-E (Fig. 2). Two lenses are fixed in the device, and the prism can be adjusted by three translators while maintaining the light interception. There are two additional unique tools inside. (1) The prism can be removed from the optical path by a simple click mechanism, which allows the system to be transformed into a conventional microscope for other users who do not need 3-D tracking. (2) An optical filter can be located between L1 and the prism, which allows the signal to be split into two different wavelengths of light by setting a desirable beam splitter. Users can get two channels simultaneously with two different cameras, one for 3-D tracking and the other for conventional illumination (such as phase-contrast or fluorescence microscopy). With this implementation, our group succeeded in dismantling a subunit from a single protein immobilized on the glass surface while watching a single fluorescent probe labeled on the protein (Naito et al., manuscript in preparation).


Fig. 3 Micrograph of the mini-stage equipped with the piezo actuator (Physik Instrumente; the cube depicted by the yellow dotted line) for axial displacement of the sample position. The sample is placed at the middle hole, keeping the same height as in the real measurement situation, in which a conventional annular plate for the commercial stage is used

Calibration of the Displacement Along z-Direction

The relative x-displacement between the two images separated by the prism does not simply correspond to the real displacement in the z-direction, and so a calibration between the two values (Δx and z) is needed. To estimate the absolute movement of the sample in a sample chamber filled with solution, the calibration factor, which is the function that converts Δx into z, should be fixed. For this purpose, a custom-made mini-stage (Fig. 3), by which the position of the sample is displaced with nanometer accuracy in the direction of the optical axis, was constructed. To avoid a gap in the position of the BFP between calibration and measurement, the objective lens should, ideally, be located at exactly the same position; the mini-stage is designed to keep the height of the sample plane equivalent with the help of L-shaped plates. The relationship between the z-position and the relative displacement between the two images is almost linear, as exemplified in Fig. 4. The slope is 0.66 in this case, and the value depends on the tracking algorithm and the type of objective. One approach is 2-D Gaussian fitting to each separated emitter, even though the emitters show a fan-like profile as the image is defocused. The alternative is estimation of the centroid of each emitter, but the subtraction value may interfere with the spatial resolution, especially when the signal becomes low as the image is defocused. Ideally, a pattern-matching algorithm with reconstruction of single emitters, as generally used for defocused imaging, will be applied in the future to track the divided emitters in the defocused situation.

Fig. 4 (a) Image of a single fluorescent bead with a diameter of 0.5 μm attached to the glass surface, observed under the 3-D tracking system. Because the image of the single emitter is divided into two by the prism located at the equivalent back focal plane, two spots are projected onto two separate regions of the single camera plate. The two images move in opposite x-directions when the distance between the glass and the objective changes, i.e., the top emitter moves from left to right when the sample moves from −0.65 to +0.65 μm, while the bottom emitter moves from right to left. (b) Calibration for the 3-D tracking. The abscissa is controlled by the piezo actuator, and the ordinate was determined from the relative position between the separated emitters. The black and red lines show the data when the sample moves upward and downward, respectively. The cyan curve shows a linear fit of the points in the range of ±0.50 μm on the black line



The slope in Fig. 4b is taken as the calibration factor for the real measurement, with which the real displacement is simply calculated by applying the linear relationship. Typically, the real displacement is determined within a range of ±2 μm, but the range depends on the specimen used and, most importantly, on the numerical aperture of the objective lens. Note that each objective has its own inherent calibration factor, even when the optics behind the camera port are identical.
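In practice the calibration reduces to fitting Δx against the piezo-commanded z over the linear range and inverting the slope. A synthetic sketch, where the 0.66 slope and the noise level are made-up numbers mimicking the example above:

```python
import numpy as np

# Synthetic calibration scan: piezo z-positions (um) and the measured relative
# x-displacement between the two split images (um), with assumed readout noise.
rng = np.random.default_rng(0)
z_piezo = np.linspace(-0.5, 0.5, 21)
dx_meas = 0.66 * z_piezo + rng.normal(0.0, 0.005, z_piezo.size)

# Linear fit over the calibration range; the slope is the calibration factor.
slope, offset = np.polyfit(z_piezo, dx_meas, 1)

def z_from_dx(dx):
    """Convert a measured relative displacement into a real z-position."""
    return (dx - offset) / slope

print(f"calibration factor = {slope:.3f}")
```

As the text stresses, the sign of the slope encodes the geometry of the scan (whether the objective or the sample was moved), so the calibration and the measurement must use consistent conventions.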

Application

The prismatic optical tracking described here is now established as one of the general approaches to tracking particles in 3-D [10]. It has been successfully applied to two biomolecule samples so far [9, 11]. The first is the kinesin-microtubule system. The microtubule is a filament structure with a length of tens of microns in cells or in an in vitro motility assay and is composed of subunit proteins called α- and β-tubulin that are alternately arranged in the longitudinal direction. The diameter of the microtubule is only about 25 nm, and thus the microtubule is fluorescently labeled to be visualized under an optical microscope in a conventional assay. The molecular motor kinesin literally walks along the microtubule, using its two catalytic domains, called heads, in a hand-over-hand manner [12]. To address the genuine properties of the catalytic core of kinesin, it should be truncated into a single-headed form. With this idea, our research group designed a new experimental setup by which the movement of the microtubule surface is precisely tracked (Fig. 5a). A quantum dot (QD) was specifically attached to the surface of the microtubule, and the trace of the single QD was directly reconstructed as a 3-D plot. Interestingly, the microtubule exhibits a corkscrewing motion while sliding on lawns of the recombinant [13]. The handedness of the rotation was directly quantified with the above method [9], as represented in the 3-D plot (Fig. 5b). The oscillation curve in the x-z plot also allows the pitch value to be quantified with sub-nanometer-scale resolution (Fig. 5c). Note that the radius of the corkscrewing motion was only 20 nm, as the diameter of the microtubule is about 25 nm. These data also serve as a good validation that our 3-D tracking method enables nanometer-scale displacements of biomolecules to be detected. The second application was the tracking of specific proteins inside a single cell [11].
Yeast was employed, as it is now established as one of the main biological resources. In this contribution, a QD-conjugated protein, the prion Sup35, was prepared. The dynamics of the prions were directly visualized qualitatively inside living yeast cells, and a unique appearance of the diffusional motion, perhaps originating from the pattern of prion aggregation, was typically observed. Through these applications, we noticed two pitfalls in developing a totally new 3-D method that had not been anticipated during the design. First, precise determination of the handedness required several complex steps. When we estimated the calibration factor, the position of the emitter was moved by the mini-stage equipped with the piezoelectric actuator. Note that the direction of movement of the emitter is effectively inverted when the objective is moved instead: when the objective is displaced upward, the emitter moves downward, which is realized by shrinkage of the actuator during calibration. These configurations directly couple to the definition of the plus and minus signs of the calibration factor. Additionally, the image acquired under an inverted microscope is always a mirror image. This simple fact often escapes our memory because the handedness does not matter in most 2-D observations, which do not include any information along the optical axis. In the case that observers do not know the orientation from which they are watching the glass slide, they do not need to define the front and rear sides of the glass. The problem of mirroring emerges only in the case of 3-D reconstructions to determine the handedness of movements or structures of biomolecules.

Fig. 5 (a) Schematic of the modified in vitro motility assay, which enables the movement of the microtubule surface to be tracked through the quantum dot. In the presence of ATP, the kinesin recombinant fused to gelsolin slides the microtubule. The recombinant is immobilized to the glass substrate through an antibody. (b) 3-D trajectory of the quantum dot. (c) y-z and x-z plots of a

Optical Trapping Equipped with 3-D Tracking
As demonstrated above, 3-D localization with nanometer accuracy can be achieved using a single optical device, a wedge prism, with precise calibration. One may expect that this feature can readily be combined with other techniques, especially those limited by 2-D observation. Optical trapping has been used to capture and manipulate small particles, typically 100 nm to 10 μm in size, and even a single laser beam focused at the sample plane exerts trapping forces not only in the xy-plane but also along the optical z-axis. When researchers use optical trapping as a force transducer to measure biological forces, such as the gliding force of a single bacterium [14] or the unbinding force of a molecular motor [15], the direction of the force measurement was limited to 2-D although particles were captured in 3-D, simply because there was no conventional way to track particles in 3-D. As the dimension problem is solved with the method presented here, optical assemblies including optical trapping can be extended to 3-D systems with a single prism. A representative diagram of combined optical trapping and 3-D tracking is shown in Fig. 6. Our system typically includes two observation channels: the fluorescence image (green and blue beams leading to the EMCCD camera) and an additional illumination channel from the condenser lens (the red line), such as a phase-contrast or DIC image. The two channels are split toward two different cameras by a beam splitter located outside the microscope. Two dichroic mirrors are located between the objective and the focusing lens: one introduces the IR laser that provides the optical trapping, and the other introduces the excitation light for the fluorescent probe. Their optical axes should be aligned exactly to the optical axis of the objective to realize ideal optical trapping, observation, and tracking.
In the setup shown in Fig. 6, the fluorescence image is used as the channel to track the particle in 3-D, because fluorescent beads appear symmetrical even when the image is defocused. Note that this choice is not a strict requirement: if a researcher needs the fluorescence channel to observe other features of the specimen, it can be directed to the sCMOS camera by switching the beam splitter located


Fig. 6 Schematic of the optical system including the technique to track the particle in three dimensions (3-D unit) and the optical trap. The infrared laser path is shown by the yellow line. The image of the fluorescent bead held in the optical trap is captured by the EMCCD camera, while another channel, such as a DIC image of a bacterium or an organelle in cells, is captured simultaneously by the sCMOS camera. Any additional illumination that is conventional and commercially available for an inverted microscope can easily be added to this system

outside the camera port. Such flexibility is the advantage of our 3-D system, in which only a few optical devices are added to a conventional microscope. Typical trajectories of a single trapped bead are shown in Fig. 7. The optical trap is known to act as a Hookean elastic spring along the x- and y-directions, i.e., the force applied to the trapped object is proportional to its displacement from the trap center. In other words, the movement of the trapped particle is restricted under the potential (1/2)κi(xi − xi0)², where κi and xi0 are the trap spring constant and the trap center, respectively, along the ith axis. When a low laser power is applied to the trap, the bead fluctuates, and the probability of its position follows the Boltzmann distribution under this potential. In our typical setup, the spring constant along the z-axis is nearly a quarter of those along the x- and y-axes. This feature probably originates from multiple unmodifiable factors, such as the characteristics of the objective and the axial magnification at the tip of the optical fiber. Although the shape of the trapping potential is asymmetrical, the force imposed on the trapped particle can be precisely determined with the 3-D localization method, under the assumption that the force can be decomposed into three components along the x-, y-, and z-axes. Histograms of the position distribution tell us the limits of displacement within which the three spring constants apply for a given objective lens (Fig. 7b).
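The spring constants κi themselves can be estimated from the recorded positional fluctuations: by the equipartition theorem for a harmonic potential, κi = kBT/⟨(xi − xi0)²⟩. A minimal Python sketch on synthetic data (the trap stiffness, temperature, and sample count are illustrative assumptions, not values from the experiments above):

```python
import random
import statistics

KB = 1.380649e-23  # Boltzmann constant (J/K)

def trap_stiffness(positions_m, temperature_k=298.0):
    """Estimate the trap spring constant (N/m) along one axis from
    bead position fluctuations via the equipartition theorem:
    (1/2) * kappa * <(x - x0)^2> = (1/2) * kB * T.
    """
    var = statistics.pvariance(positions_m)
    return KB * temperature_k / var

# Synthetic example: Boltzmann-distributed positions in a harmonic
# trap with kappa = 1e-5 N/m at T = 298 K (position standard
# deviation sigma = sqrt(kB*T/kappa), about 20 nm).
random.seed(0)
kappa_true = 1e-5
sigma = (KB * 298.0 / kappa_true) ** 0.5
x = [random.gauss(0.0, sigma) for _ in range(20000)]
kappa_est = trap_stiffness(x)  # close to 1e-5 N/m
```

Repeating this for the x-, y-, and z-traces separately yields the three spring constants discussed above.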

Fig. 7 Validation of the trapping force of the optical trap along three dimensions. (a) Typical x-, y-, and z-time courses of the trapped bead under high (left) and low (right) laser powers. (b) The bead trapped at the low laser power in (a) was further analyzed to build histograms. Because the probability of localization should follow the Boltzmann distribution, the histogram is converted into an energy diagram assuming that the potential of the optical trap acts as a spring



Concluding Remarks
The three-dimensional tracking method described here has a structural advantage in the simplicity of its assembly of optical components. Research groups that have already developed methods to track particles in 2-D can use exactly the same approaches for capturing sequential images and analyzing data with our 3-D method. The only additional equipment they need is a prism located between the microscope and the camera, and two lenses to produce both an equivalent back focal plane and a sample plane outside the camera port. Most importantly, the rationale is so simple that any type of illumination, such as bright-field, dark-field, phase-contrast, and fluorescence, can be extended to 3-D localization. In principle, tracking can be applied to a broad range of samples, from bacterial locomotion to single-molecule tracking, as exemplified by the molecular motor [9] and the prion protein in yeast cells [11]. Because our method can be used with all types of light microscopy, it is likely to become an important new tool for areas of study ranging from cell biology to single-molecule biophysics. Possible applications include high-speed imaging with nanometer and submillisecond resolution, simultaneous tracking of multiple particles, and, finally, single-fluorophore tracking. Conventional experimental setups can easily be turned into 3-D systems without remodeling. This versatility indicates that new applications in various optical microscopes are feasible, including three-dimensional super-resolution microscopes.
Acknowledgments This study was supported in part by the Funding Program for Next Generation World-Leading Researchers Grant LR033 (to T. N.) from the Japan Society for the Promotion of Science and by Grants-in-Aid for Scientific Research on Innovative Areas “Harmonized Supramolecular Motility Machinery and Its Diversity” (Grant 24117002 to T. N.), “Fluctuation & Structure” (Grant 26103527 to T. N.)
and “Cilia & Centrosomes” (Grant 87003306 to T.N.) from the Ministry of Education, Culture, Sports, Science, and Technology of Japan.

References
1. Ashkin A (1970) Acceleration and trapping of particles by radiation pressure. Phys Rev Lett 24:156–159
2. Ashkin A (1984) Stable radiation-pressure particle traps using alternating light beams. Opt Lett 9:454
3. Ashkin A, Dziedzic JM (1985) Observation of radiation-pressure trapping of particles by alternating light beams. Phys Rev Lett 54:1245–1248
4. Ashkin A, Dziedzic JM, Bjorkholm JE, Chu S (1986) Observation of a single-beam gradient force optical trap for dielectric particles. Opt Lett 11:288–290
5. Molloy JE, Padgett MJ (2002) Lights, action: optical tweezers. Contemp Phys 55:241–258
6. Neuman KC, Abbondanzieri EA, Block SM (2005) Measurement of the effective focal shift in an optical trap. Opt Lett 30:1318–1320
7. Moffitt JR, Chemla YR, Smith SB, Bustamante C (2008) Recent advances in optical tweezers. Annu Rev Biochem 77:205–228
8. Marago OM, Jones PH, Gucciardi PG, Volpe G, Ferrari AC (2013) Optical trapping and manipulation of nanostructures. Nat Nanotechnol 8:807–819


9. Yajima J, Mizutani K, Nishizaka T (2008) A torque component present in mitotic kinesin Eg5 revealed by three-dimensional tracking. Nat Struct Mol Biol 15:1119–1121
10. Deschout H et al (2014) Precisely and accurately localizing single emitters in fluorescence microscopy. Nat Methods 11:253–266
11. Tsuji T et al (2011) Single-particle tracking of quantum dot-conjugated prion proteins inside yeast cells. Biochem Biophys Res Commun 405:638–643
12. Yildiz A, Tomishige M, Vale RD, Selvin PR (2004) Kinesin walks hand-over-hand. Science 303:676–678
13. Yajima J, Cross RA (2005) A torque component in the kinesin-1 power stroke. Nat Chem Biol 1:338–341
14. Miyata M, Ryu WS, Berg HC (2002) Force and velocity of Mycoplasma mobile gliding. J Bacteriol 184:1827–1831
15. Nishizaka T, Miyata H, Yoshikawa H, Ishiwata S, Kinosita K Jr (1995) Unbinding force of a single motor molecule of muscle measured using optical tweezers. Nature 377:251–254

Handbook of Photonics for Biomedical Engineering DOI 10.1007/978-94-007-6174-2_4-1 © Springer Science+Business Media Dordrecht 2014

Surface Plasmon-Enhanced Super-Localization Microscopy
Youngjin Oh, Jong-ryul Choi, Wonju Lee and Donghyun Kim*
School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea

Abstract
Super-resolution microscopy has drawn tremendous interest because it allows precise tracking of molecular interactions and observation of dynamics on a nanometer scale. Intracellular and extracellular processes can be measured at the molecular level; thus, super-resolution techniques help in the understanding of biomolecular events in cellular and sub-cellular conditions and have been applied to many areas such as cellular and molecular analysis and ex vivo and in vivo observation. In this chapter, we review near-field imaging that relies on evanescent waves, such as TIRFM, with an emphasis on super-resolution microscopy techniques that have recently emerged based on the excitation and localization of SP. In particular, three approaches are detailed: first, SUPRA imaging, which employs the electromagnetic localization of near fields by random nanopatterns; second, NLS, which capitalizes on nanoscale fluorescence sampling at periodic nanoapertures; and finally, PSALM, which depends on temporal switching of amplified local fields for enhancement of imaging resolution. The resolution typically achieved by these techniques is laterally below 100 nm and is closely related to the size of a near-field hot spot and the nanostructures used to localize SP. We expect the achievable imaging resolution to decrease significantly in the near future.

Keywords Surface plasmon; Localization; Fluorescence; Microscopy; Super-resolution

Abbreviations
AFM Atomic force microscopy
FDTD Finite-difference time domain
FWHM Full width at half maximum
NA Numerical aperture
NLS Nanoscale localization sampling
PALM Photo-activated localization microscopy
PSALM Plasmonics-based spatially activated light microscopy
PSF Point-spread function
PSIM Plasmonic structured illumination microscopy
RCWA Rigorous coupled-wave analysis
SEM Scanning electron microscopy
SIM Structured illumination microscopy
SP Surface plasmon

*Email: [email protected]


SPM Surface plasmon microscopy
SPP Surface plasmon polariton
SPR Surface plasmon resonance
SPRi Surface plasmon resonance imaging
STED Stimulated emission depletion
STORM Stochastic optical reconstruction microscopy
SUPRA SP-enhanced random activation
TIR Total internal reflection
TIRF Total internal reflection fluorescence
TIRFM Total internal reflection fluorescence microscopy

Introduction
Human visual acuity is limited to approximately 100 μm due to the finite NA of the human eye. A microscope is an instrument that allows visualization of objects too small to see with the naked eye, such as cells, viruses, metal grains, and even molecules. The first optical microscope was developed around the 1590s by two Dutch eyeglass makers, Hans Lippershey and Zacharias Janssen. In 1625, the term “microscope” was coined for Galilei’s compound optical microscope [1]. Following the analyses of biological samples by Robert Hooke and Antonie van Leeuwenhoek, commercial manufacturing of optical microscopes started in the nineteenth century. Since then, diverse microscopy and imaging methods have been developed. Microscopy contributes to the progress of many scientific fields. The fields influenced most by the development of microscopy may be the biomedical sciences, because biological objects, e.g., viruses, that in the past either went unnoticed or were regarded as impossible to observe have come to the attention of researchers by way of various microscopy techniques [2, 3]. For example, electron microscopy, such as SEM and transmission electron microscopy (TEM) [4], allows much better resolution than optical microscopy, as shown in Fig. 1. Through electron microscopy, magnification by more than a million times has been realized, with a resolution on the order of a few nanometers [5, 6]. However, electron microscopy techniques have limited applicability and are difficult to use because of the environmental constraints imposed by the measurement process, e.g., the need for vacuum and high voltage. These limitations can be critical in the areas of biological and biomedical engineering, where cells and tissues are typically maintained in a humid incubator in liquid buffer or gel form if they are to be measured alive. In contrast, optical microscopy allows live observation of biological objects, albeit at reduced imaging resolution. To satisfy various

Fig. 1 Microscopy techniques and resolution


Fig. 2 Jablonski diagram: after a fluorescent molecule absorbs a high-energy photon (λex), it is excited. The system relaxes non-radiatively and eventually returns to the ground state, emitting a photon at a longer wavelength (λem)

needs and requirements, numerous optical microscopy techniques have been developed, with various functions serving different types of samples. For optical microscopy, fluorescence is widely used because it allows functional imaging on a potentially molecular scale by labeling with appropriate fluorescent molecules; e.g., specific cellular components may be observed through molecule-specific labeling. Also, the combination of fluorescence with proper light microscopy provides contrast sufficient to observe structures inside a live sample in real time. Fluorescence is luminescence in which a fluorescent substance absorbs excitation light and emits, or fluoresces, light at a longer wavelength (lower energy), i.e., λex < λem, as shown in the Jablonski diagram of Fig. 2 [7]. Note that photon energy E and wavelength λ are related by E = hc/λ, where h is the Planck constant and c represents the speed of light in vacuum. Excitation and emission wavelengths depend on the energy level structure of fluorescent molecules, or fluorophores. When a fluorophore absorbs incident light energy, it enters an excited state and loses energy to the environment by non-radiative processes until releasing the energy radiatively as fluorescence. Since the emitted light energy is lower than the incident light energy, fluorescent light is always redshifted compared to the excitation light, producing the Stokes shift. Absorption and emission may take place between multiple sublevels within the ground and excited states, so light absorption and emission occur not at discrete wavelengths but over a continuous spectral range. A larger Stokes shift makes it easier to separate emitted fluorescence from excitation light in fluorescence microscopy. Fluorescence is excited largely using organic dye molecules, while inorganic materials such as quantum dots are increasingly popular.
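The relation E = hc/λ can be checked numerically. A small Python sketch (the 488-nm/520-nm fluorophore wavelengths are illustrative values, not from the text):

```python
# Photon energy E = h*c/lambda, expressed in electron-volts.
H = 6.62607015e-34    # Planck constant (J*s)
C = 2.99792458e8      # speed of light in vacuum (m/s)
EV = 1.602176634e-19  # 1 eV in joules

def photon_energy_ev(wavelength_m):
    return H * C / wavelength_m / EV

# Example: a fluorophore excited at 488 nm emitting at 520 nm.
e_ex = photon_energy_ev(488e-9)  # ~2.54 eV
e_em = photon_energy_ev(520e-9)  # ~2.38 eV (redshifted: e_em < e_ex)
stokes_shift_nm = 520 - 488      # 32 nm
```

The emission photon always carries less energy than the excitation photon, which is exactly the Stokes shift described above.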
The extensive availability of fluorescent materials makes fluorescent imaging techniques highly useful not just in biological imaging but also for samples like drugs and vitamins, making the scope of fluorescence microscopy even larger. As stated earlier, optical microscopy has limited imaging resolution. The lateral resolution of an optical imaging system can be defined as the ability to resolve two adjacent self-luminous points located in the lateral plane. When the two points are too close, their images form a continuous intensity distribution and cannot be distinguished. As a numerical measure of the resolution, the Rayleigh criterion was introduced, defining the resolution achievable by an imaging system as the distance between two points at which the peak of the image arising from one point collocates with the first minimum of the image arising from the adjacent point object [8]. In this case, in the absence of aberration effects, the lateral image resolution (d) is determined as




d = 0.61λ / (n sin u) = 0.61λ / NA    (1)

and the imaging system is referred to as diffraction limited. Here, n and NA represent the refractive index and the numerical aperture of an objective lens, and u is the half-angle of the cone of light at the focal point projected by the lens. Equation 1 suggests that the lateral resolution of an optical system improves as NA increases and as the wavelength λ decreases. For example, for λ = 488 nm (blue) and with an oil-immersion lens at NA = 1.6, a microscope can optically resolve points separated by about 200 nm. When the intensity dip between two adjacent self-luminous points becomes zero, the Sparrow resolution limit is defined such that

d = 0.5λ / NA.    (2)

On the other hand, the Abbe resolution is associated with the diffraction caused both by the objective and by the object itself, according to which images of two adjacent points with spacing d can be resolved if the nearest diffraction orders of the points are distinguished by the objective [9]. Therefore, the resolution depends on both the imaging and illumination apertures and is given by

d = λ / (NAobjective + NAcondenser).    (3)

Unaided human eyes have an angular resolving power of approximately 1.5 arc minutes. For unmagnified objects at a distance of 250 mm from the eye (the eye’s minimum focal distance), 1.5 arc minutes converts to a resolution of deye = 100 μm. A microscope objective alone does not usually provide sufficient magnification. If combined with an ocular or eyepiece, the resolving power of a microscope is then given by

d = deye / M = deye / (Mobj × Meyepiece).    (4)

Mobj and Meyepiece are the magnifications provided by the objective and the eyepiece, respectively, and contribute to the overall magnification M in series. In the Sparrow limit of the lateral resolution, the minimum microscope magnification is given by

Mmin = 2 deye NA / λ.    (5)

Mmin is the magnification required of an imaging system to reach the diffraction limit and is calculated from Eq. 5 as about 250–500 NA (depending on wavelength). At lower magnification, an image becomes brighter due to a larger field of view, though at a worse resolution. Contrary to the common misconception that a combination of objective and eyepiece with higher magnification would provide better resolution, resolution does not improve in proportion to the magnification once it exceeds about 1,000 NA (typical) due to sampling limits. A higher magnification would result in image degradation rather than improved image clarity. The useful magnification of a microscope is in the range of 500 NA to 1,000 NA. Usually, any magnification above 1,000 NA is called empty magnification [10], i.e., the highest useful magnification of a microscope is approximately 1,500 for an oil-immersion microscope objective with NA = 1.5.
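Equations 1–5 are straightforward to evaluate. A brief Python sketch (the 488-nm, NA = 1.6 case reproduces the worked example in the text; the function names are ours):

```python
def rayleigh(wavelength, na):
    """Rayleigh lateral resolution, Eq. 1: d = 0.61*lambda/NA."""
    return 0.61 * wavelength / na

def sparrow(wavelength, na):
    """Sparrow limit, Eq. 2: d = 0.5*lambda/NA."""
    return 0.5 * wavelength / na

def abbe(wavelength, na_objective, na_condenser):
    """Abbe resolution, Eq. 3: d = lambda/(NA_obj + NA_cond)."""
    return wavelength / (na_objective + na_condenser)

def min_magnification(wavelength, na, d_eye=100e-6):
    """Minimum magnification to reach the Sparrow limit, Eq. 5."""
    return 2 * d_eye * na / wavelength

# Worked example from the text: 488 nm, oil immersion, NA = 1.6.
d1 = rayleigh(488e-9, 1.6)          # ~186 nm, i.e., "about 200 nm"
m = min_magnification(488e-9, 1.6)  # ~656, within the 250-500*NA band
```

Note that the Sparrow distance is always smaller than the Rayleigh distance for the same lens, consistent with its definition as the separation at which the intensity dip just vanishes.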


To break through the resolution limit of optical microscopy, numerous imaging techniques have been attempted. These techniques are broadly termed super-resolution microscopy. One of the conventional super-resolution techniques is TIRFM, based on evanescent waves. Fluorophores as contrast agents are excited within the penetration depth of an evanescent wave, which ranges from 100 to 200 nm. This provides high axial resolution below the diffraction limit in an extremely simple way, while remaining diffraction-limited laterally [11, 12]. Thus, many super-resolution techniques are in fact built upon TIRFM. More details of TIRFM are described in the next section. Emerging super-resolution imaging techniques include STED microscopy, which reduces the PSF of incident light to 20 nm or smaller for improved resolution [13–16]. The resolution through this technique improves by 5–10 times over the diffraction limit. STED microscopy is based on the depletion of high-energy states at the excited spot when treated with a pulsed optical signal. STED microscopy was used to visualize the glycoprotein distribution on the surface of individual viruses [17] and tissue regions 30 μm deep in live brain tissue at 60-nm resolution for in vivo analysis [18]. Also characterized were transverse tubules (TTs) with nanometric resolution for investigation of TT remodeling during heart failure [19]. In contrast to STED microscopy, PALM and STORM rely upon repeated stochastic photoactivation of fluorescent molecules [20–22]. Measured images are stacked up through subsequent image post-processing. STORM was used to observe HIV viral infection [23], where the molecular distribution of structural proteins was quantified before and after infection of cells. On the other hand, the live motion of viruses was analyzed using STORM for molecular tracking and measurement of viral activity. Also, STORM revealed the structure of pericentriolar material, an amorphous sub-cellular protein structure [24].
A drastically different approach to super-resolution microscopy was introduced as SIM, a wide-field technique in which patterned illumination, generated by a diffraction grating or a spatial light modulator, is superimposed on the sample while acquiring images [25–28]. The sinusoidally patterned illumination is shifted and rotated during the image-acquisition steps of each image set. Through post-processing with a reconstruction algorithm based on the imposed patterned illumination, high-frequency information can be derived from the raw image set. Therefore, reconstructed images have enhanced lateral and axial resolution compared to images obtained by conventional wide-field microscopy. Since the first introduction [29], saturated-excitation-based SIM (SSIM) was developed and employed in various in vitro and in vivo biomedical imaging applications [30–32]. Recently, the feasibility of integrating SIM into lab-on-a-chip applications was also reported [33]. These super-resolution techniques are not perfect. For example, STED microscopy typically requires a light source under pulsed operation, although continuous-wave light sources have been employed [34], and is slow due to the need for scanning. PALM and STORM also take a long time for image acquisition associated with stochastic fluorescence excitation and thus are potentially not appropriate for observing fast dynamics. In this chapter, we present various approaches of plasmon-enhanced microscopy that can potentially resolve the issues raised in the super-resolution techniques described above. For this goal, section “Theoretical Background” describes the background needed to understand SP and SP-enhanced microscopy. Section “Conventional Microscopy Techniques Based on SP” describes SP-related microscopy techniques such as SPM, SPRi, TIRFM, plasmon-enhanced microscopy, and SP-enhanced imaging.
Section “Plasmon-Enhanced Super-Localization Microscopy” details SP-enhanced super-localization techniques for microscopy below the diffraction limit, which is followed by recently emerging super-resolution techniques. Finally, section “Summary” concludes the chapter.


Theoretical Background
SP refers to coherent electron density oscillations formed at the interface between dielectric and metallic materials under the TIR condition. If the real part of the metal permittivity is negative and its magnitude is larger than the dielectric permittivity, a phase shift of π is produced at the interface in conjunction with the excitation of SP, leading to longitudinal electron concentration waves, as illustrated in Fig. 3. SP is excited by p-polarized incident light, i.e., the electric field oscillates in the plane of incidence [35]. Coupled with incident light and guided along the metal-dielectric interface, SP creates electromagnetic waves, or SPPs. The SPP wavelength is shorter than that of the incident light, and thus SPPs can provide significant field localization and spatial confinement [36]. The dispersion relation of SP can be calculated from Maxwell’s equations with appropriate boundary conditions. When SP travels in the x-direction parallel to the surface, the wave vectors of an electromagnetic wave at the interface obey

kxd = kxm  (x-direction)    (6)

εm kzd = εd kzm  (z-direction)    (7)

where kd and km represent the wave vectors on the dielectric and metal sides, εd and εm are the dielectric and metal permittivities, respectively, and ω is the angular frequency of light. Putting Eqs. 6 and 7 into the wave equation leads to

(kxd)² + (kzd)² = εd (ω/c)²    (8)

(kxm)² + (kzm)² = εm (ω/c)²    (9)

The dispersion relation between the SP momentum (ksp) and the incident light energy is obtained by rearranging Eqs. 8 and 9 as

Fig. 3 Illustration of SPP formation and axial geometry. ns and nd represent the refractive indices of the substrate and the dielectric ambience


Fig. 4 Dispersion relation of SPs and incident light

Fig. 5 Evanescent field amplitude in the dielectric medium. E0 refers to the amplitude at the surface (z = 0). Penetration depth p corresponds to the axial distance where the field amplitude is equal to E0/e

ksp = (ω/c) √(εm εd / (εm + εd))    (10)

Figure 4 shows the dispersion relation of SP as well as those of light waves in air and glass. SP is excited when the incident light is momentum-matched to the SP. In other words, excitation of SP occurs at the frequency at which the dispersion relations of SP and incident light coincide. At the frequency of the horizontal asymptote (ωsp), the real part ε′m equals −εd, to which a very high SP momentum (ksp) corresponds. The dispersion relation of SP does not cross that of light in air, while light in a high refractive index medium crosses the dispersion relation of SP, giving rise to SPR [37]. The dependence of SPR on the medium permittivity lays the basis for SP sensing [38–41]. SP propagates along the interface and decays laterally with a propagation distance L given by

L = 1/k″sp    (11)

where k″sp represents the imaginary part of the SP wave vector at a metal-dielectric interface. Note that SPR occurs under TIR, and the plasmonic dipoles formed at the interface produce a shallow evanescent wave, a field that decays exponentially in amplitude with distance from the surface, as shown in Fig. 5; i.e., the field amplitude of an evanescent wave produced under TIR takes the following form:

E(z) = E0 e^(−z/p)    (12)


where E0 is the field amplitude at the surface. The penetration depth p is expressed as

p = λ0 / [4π √(ns² sin²θ − nd²)]    (13)

where λ0 is the wavelength of the incident light in vacuum, and ns and nd are the refractive indices of the dielectric substrate and the ambience. The penetration depth is independent of light polarization and decreases as the angle of incidence θ increases. Molecular layers on the metal thin film affect the penetration depth, although the change is usually negligible [42]. Specific expressions of the electromagnetic fields (Ex, Hy, and Ez) can be easily derived and are provided elsewhere [43, 44].

Conventional Microscopy Techniques Based on SP

SPM and SPR Imaging

SPM is a label-free imaging technique, first attempted in the 1980s, that couples the excitation of SPP with microscopy [45]. SPM uses incident light at a fixed angle (typically just below the SPR angle) and wavelength to measure the changes in reflectivity (ΔR) that occur when an SPR curve shifts upon molecular interactions on the surface, as shown in Fig. 6 [46]. SPM was often used to visualize and quantify cell/substrate contacts [47]. Although SPM makes it convenient to measure structural changes within the penetration depth from the surface without any labels, it has been relatively little used in microscopy applications, because the lateral resolution is dominated by the propagation length of SP, which is a few micrometers [48–51]. For this reason, the image resolution tends to be coarser than the diffraction limit, leading to images smeared along the SP propagation direction, as presented in Fig. 7, which compares interference contrast and SPM images of a goldfish glial cell [46]. Changes in the reflectivity image are clearly visible in Fig. 7c as a result of molecular interactions on a substrate. Intensive efforts have been made to shorten the SP propagation length and thus improve the lateral resolution of SPM. For example, an objective lens type SPM was implemented based on angle-resolved imaging (Fig. 8) [52, 53]. A prism-based SPM physically limits the magnification and overall NA of an imaging system, thus providing poor spatial resolution. However, an objective-based SPM can ensure a high lateral imaging resolution, on the order of 300 nm, using a high-NA objective lens. Also, the sample position and optical paths do not change in an objective type SPM system when scanning the incident angle, which allows pixel-by-pixel tracking of acquired images.
The resolution of SPM was also found to improve by optimizing object orientation [54] or by taking advantage of high-NA immersion objectives [52, 55], scanning SPM configurations [56, 57], wide-field interferometry [58], and locally excited SPP modes [59]. Alternatively, surface enhancement by nanostructures has been attempted to modulate the SP propagation length and thereby improve the lateral imaging resolution of SPM [60]. A large part of the efforts on nanostructure-based enhancement of SP was previously focused on improving the detection sensitivity of SPR biosensing by nanostructure-mediated excitation and localization of SP [61–64]. SP localization leads to shorter SP propagation, and thus enhanced imaging resolution may be obtained for SPM. To quantify the degree of resolution improvement, SPM was performed experimentally on nanogratings. The material and nanograting thickness (dg) were adjusted to maximize the effect of the nanograting on the SP propagation length. Figure 9 shows clearly that the resolution is particularly poor on silver, because the propagation length of SP in silver is much longer (L = 21.3 μm) than in gold (L = 6.3 μm). The


Fig. 6 (a) Typical reflectance curve in angle-scanning SPR measurement. In SPRi, the angle remains fixed while binding is measured in terms of the change in reflectivity (ΔR). (b) Schematic of prism-coupled SPRi (Reprinted with permission of Cell Press from [46])

image quality along the vertical axis is worse than along the horizontal axis, because the wave-vector component of incident light along the vertical axis drives the propagation of SP in the same direction. The grating vector is parallel to the direction of SP propagation, i.e., the vertical axis, to modulate the SP propagation length. With resolution measured by the transition distance corresponding to a 10–90 % intensity change across the pattern boundary, the intensity profiles shown in Fig. 9 confirm the enhancement of resolution [60]. The transition distance was measured to be 5.2 µm for a gold nanograting (dg = 10 nm), in contrast to 10 µm on a thin gold film. The enhancement is more effective on a thicker nanograting (dg = 20 nm), which produced a transition distance of 4.6 µm, i.e., a 2.2-fold enhancement. The enhancement was more significant with silver, in which the propagation length of SP is much longer. With dg = 20 nm, the transition distance on a silver nanograting was measured to be 5.5 µm versus 18.4 µm on a silver thin film, i.e., use of the nanograting improved the resolution by 3.3 times. Note that


Fig. 7 (a) Interference contrast image and (b) corresponding SPM image of a goldfish glial cell on an Al film substrate. (c) The same cell with different angles of illumination. (d–f) Reflectivity curves for locations 1–4 (marked in (a)): (d) bare substrate (□); (e) thick, organelle-rich part of the lamellipodium (◇); (f) thin part of the lamellipodium (△ and ○). The dashed line in (d) is a fit to the undisturbed surface plasmon curve of the bare substrate and is replotted in (e) and (f) as a reference. The solid lines in (e) and (f) are the calculated plasmon curves for regions of the central (e) and peripheral lamellipodium (f). Scale bar in (b) = 100 µm (Reprinted with permission of Cell Press from [46])

there is no propagation of SP along the horizontal axis; therefore, an image remains diffraction-limited horizontally. In general, SPM has seen relatively limited use in imaging applications. However, the technique has been extensively used for high-throughput SPR sensing, usually known as SPRi [65–71]. SPRi is a technique that, similarly to SPM, measures reflectance changes of an evanescent field established by SP, for the simultaneous analysis of arrays of molecules. SPRi has been applied to detecting various biomolecular interactions involving, for example, DNA, RNA, and antibodies on arrays, since it was first used to study molecular monolayers [50].
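The 10–90 % transition-distance metric used above to quantify SPM resolution can be computed directly from a measured edge profile. A minimal sketch with a synthetic, illustrative edge profile:

```python
import numpy as np

def transition_distance(x_um, intensity):
    """Distance over which a monotonically rising edge profile climbs from
    10% to 90% of its full intensity swing (the resolution metric used for
    the SPR images of the pattern boundary)."""
    i = np.asarray(intensity, dtype=float)
    norm = (i - i.min()) / (i.max() - i.min())
    x10 = np.interp(0.10, norm, x_um)  # norm must be monotonically increasing
    x90 = np.interp(0.90, norm, x_um)
    return x90 - x10

# Synthetic tanh-shaped edge centered at 10 um with a 2-um width parameter
x = np.linspace(0.0, 20.0, 2001)
profile = 0.5 * (1.0 + np.tanh((x - 10.0) / 2.0))
d = transition_distance(x, profile)  # ~4.4 um for this profile
```

A sharper edge in the reconstructed image directly yields a smaller transition distance, which is why this metric tracks the grating-induced resolution improvement.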


Fig. 8 (a) Schematic of the optical setup. A p-polarized laser beam is injected onto a 47-nm-thick gold-coated glass coverslip through an oil-immersion objective, and the reflected light is measured by a CCD camera. (b) The entire bottom cell membrane and part of the top cell membrane in the cell-edge regions are located within the typical detection depth of the SPM (Reprinted with permission of Macmillan Magazines, Ltd. from [53])

TIRFM

Evanescent fields exist at a surface under TIR, and TIRFM relies on the excitation of fluorophores in the evanescent field near the surface [72–77]. TIRFM provides extremely fine depth resolution, on the order of 100 nm, using a very simple optical setup and also allows depth-resolved contrast by avoiding excitation of fluorescent agents located far from the surface, as shown in Fig. 10. There are two common types of TIRFM, as presented in Fig. 11: (1) illumination through a prism or (2) illumination through an objective lens [78]. Prism-based TIRFM is easily set up because it requires a simple optical configuration, and it allows well-controlled light incidence; with a right-angle prism, the angular spread of incident light can be minimized. If a prism-based setup is adopted for plasmon-enhanced microscopy, fluorescence is acquired directly without suffering additional damping through a metal film. On the other hand, objective-lens-based TIRFM uses an optical setup in which illumination and acquisition share an objective lens and are located on the same side, with target samples placed on the surface of a transparent substrate. When an objective lens is used for TIRFM, its NA tends to be relatively high, e.g., the apochromatic lenses of Olympus (APO 100×, NA 1.65) or Nikon (APO 60×, NA 1.49), because TIR should be maintained when illuminating biological samples such as live cells. Since the refractive index of live cells is typically between 1.33 and 1.38, the NA should be higher than 1.38. Obviously, surface structures of a substrate can modify the properties of TIRFM. For instance, layers of different dielectric materials were deposited to produce an enhanced field at the surface available for increased fluorescence.
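The TIR condition and the ~100-nm depth resolution quoted above follow from the standard evanescent-wave relations. A sketch with assumed indices (glass n = 1.52, live cell n = 1.38) and 488-nm illumination:

```python
import math

def critical_angle_deg(n_glass, n_sample):
    """Smallest incidence angle (in the glass) at which TIR occurs."""
    return math.degrees(math.asin(n_sample / n_glass))

def penetration_depth_nm(wavelength_nm, n_glass, n_sample, theta_deg):
    """1/e intensity decay depth of the evanescent field beyond the TIR
    interface: d = lambda / (4*pi*sqrt(n1^2 sin^2(theta) - n2^2))."""
    s = n_glass * math.sin(math.radians(theta_deg))
    return wavelength_nm / (4.0 * math.pi * math.sqrt(s**2 - n_sample**2))

# The objective must deliver n_glass*sin(theta) = NA > 1.38 for live cells
theta_c = critical_angle_deg(1.52, 1.38)          # ~65 degrees
d = penetration_depth_nm(488, 1.52, 1.38, theta_c + 3.0)  # ~130 nm
```

The decay depth shrinks rapidly as the incidence angle moves past the critical angle, which is why the illumination angle (equivalently, the usable NA) controls the optical sectioning depth.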
In this line of research, a two-layer structure of Al2O3 and SiO2 thin films was designed based on reflectance calculation using Fresnel coefficients and was deposited on an SF10 glass substrate to provide maximal field enhancement for 442-nm excitation at a reasonable angle of incidence [79]. Maximum field enhancement in terms of the ratio of electromagnetic field intensity at surface to that of an incident wave was 56.2 and 19.5 for TE and TM polarization, respectively. In this case, the ratio of the field intensity with dielectric films to that of a conventional structure was obtained as 8.5 and 3.0. The experimental verification of this work was performed by imaging A431 human epithelial carcinoma cells using quantum dots for fluorescence excitation. Figure 12 shows the TIRFM image of A431 cells on a bilayer substrate in comparison with a reference image on a bare glass substrate without dielectric thin films and clearly

[Fig. 9 intensity profiles, panels (i) and (j): normalized intensity versus axial distance (µm) for thin-film controls and for nanogratings with dg = 10 and 20 nm, along the x- and y-axes]
Fig. 9 SEM images of reference squares on thin films of (a) gold and (b) silver. SPR images of (c) the gold and (d) the silver film. SPR images of the reference pattern on the L = 400-nm nanograting sample with dg = 10 nm for (e) gold and (f) silver, and with dg = 20 nm for (g) gold and (h) silver. The lines represent the paths for the intensity profiles presented in (i) and (j) (Reprinted with permission of the Optical Society of America from [60])

confirms enhanced fluorescence [79]. In contrast, for bright-field images of the cells, the dielectric films did not make a noticeable difference. Although this work focused on the enhanced evanescent-wave amplitude, it is expected that the same approach can be used to optimize axial resolution over conventional TIRFM without degrading lateral imaging resolution. The advantages of TIRFM include an excellent signal-to-noise ratio and relatively low photobleaching, in addition to axial super-resolution. In contrast to confocal microscopy, which presents enhanced fluorescence contrast relative to epi-fluorescence, TIRFM can provide detailed


Fig. 10 Only fluorophores in the evanescent field are excited as indicated in green color. For TIR, the refractive index of the sample (nd) must be lower than the index of refraction of the coverslip (ns)

Fig. 11 TIRFM configurations (a) prism-based TIRFM and (b) objective lens-based TIRFM (L.S light source, P polarizer, M mirror, O objective lens, F optical filter, D.M dichroic mirror)

information regarding molecular dynamics near the plasma membrane, as shown in Fig. 13 [12]. TIRFM is widely used for studying cell adhesion and the dynamics of membrane-bound molecules [80–83]. For example, TIRFM was used to visualize microtubules distributed throughout the cytoplasm near the cell surface for studying, e.g., cortical microtubule attachment and stabilization [84]. In addition, multicolor TIRFM allows visualization of single kinesin molecules moving on individual microtubule tracks, to determine the degree to which kinesin preferentially moves on posttranslationally modified tubulin [85]. Use of TIRFM in cells can be valuable not only for research on cortical events but also for investigating the overall microtubule organization and dynamics in the vicinity of the cortex, as shown in Fig. 14.

Plasmon-Enhanced Microscopy

Excitation of SP is accompanied by an evanescent wave. Fluorescence microscopy using SP-associated evanescent waves is called plasmon-enhanced microscopy or metal-enhanced microscopy. The nature of evanescent waves and metal-enhanced fluorescence (MEF) is quite similar to TIRF in that fluorophores in the excited state interact with localized electromagnetic fields induced in the near-field. Oftentimes, the presence of a metal thin film enhances the evanescent-wave amplitude and thus the excited fluorescence [86, 87]. An important distinction from conventional TIRFM is that the evanescent wave is polarization dependent. Because metal surfaces can increase the radiative decay of fluorophores and the extent of resonance energy transfer caused by interactions of fluorophores with free electrons, fluorescence quenching may occur because of


Fig. 12 (a) Bright-field and (b) TIRF images of A431 cells on a thin-film sample for TE polarization, compared with those on a reference sample without thin films, respectively, in (c) and (d) (Reprinted with permission of the Optical Society of America from [79])

Fig. 13 (a) Epithelial cell expressing vesicular stomatitis virus glycoprotein tagged with yellow fluorescent protein and targeted to the plasma membrane. (b) Schematic of the structures imaged in (a). A tubular transport container approaches the plasma membrane, fuses, and then disconnects. Scale bar: 2 µm (Reprinted with permission of Macmillan Magazines, Ltd. from [12])

non-radiative energy transfer between fluorescent dyes and the metal film when the fluorescent molecules are at short distances, and also because of changes in the radiative decay rates [88]. The quantum yield (Q0) and lifetime (τ0) of fluorophores in free space are given by Q0 = Γ/(Γ + knr) and τ0 = 1/(Γ + knr), where Γ denotes the radiative decay rate and knr the non-radiative decay rate. The presence of a metal surface increases the radiative rate by an additional radiative decay channel into the metal (Γm). In this case, the quantum yield (Qm) and lifetime (τm) of a fluorophore near the metal surface can be estimated as Qm = (Γ + Γm)/(Γ + Γm + knr) and τm = 1/(Γ + Γm + knr). In other words, a higher Γm increases Qm while the lifetime decreases. The radiative decay rate and lifetime can be adjusted by the refractive index of the material. The intensity difference of heavily labeled human serum albumin on glass and Ag nanoislands is


Fig. 14 Use of TIRFM to enhance signal-to-noise ratio. Images of a CHO cell transiently expressing microtubule plus end marker EB1-YFP were obtained in the same focal plane but with different types of illumination. (a, b) Images were obtained in a TIRF mode with a high angle of incidence and low penetration depth. (c, d) Images were obtained in TIRF mode with a lower angle of incidence and higher penetration depth, compared to (a, b). (e, f) Images were obtained in the epi-fluorescence mode (Reprinted with permission of Elsevier from [84])

presented in Fig. 15 [89]. The MEF is dramatic, as can be observed from the nearly invisible intensity on quartz (left-hand side) and the bright image on Ag nanoislands (right-hand side). MEF has been applied in nanophotonics [90] and optical spectroscopy [91] for single-molecule sensing [92–96]. Experimentally, novel metal nanostructures and nanoparticles were used for single-molecule detection via MEF [97]. It was also shown that metal-particle-conjugated fluorescent dyes can exhibit enhanced fluorescence intensity [98, 99]. Plasmon-enhanced microscopy can take advantage of waveguide structures on the metal surface for enhanced resolution [100].
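The quantum-yield and lifetime relations above are straightforward to evaluate. A sketch with illustrative (not measured) rate values:

```python
def quantum_yield(gamma, k_nr, gamma_m=0.0):
    """Q = (Gamma + Gamma_m) / (Gamma + Gamma_m + k_nr)."""
    return (gamma + gamma_m) / (gamma + gamma_m + k_nr)

def lifetime(gamma, k_nr, gamma_m=0.0):
    """tau = 1 / (Gamma + Gamma_m + k_nr); all rates in the same units (e.g., 1/ns)."""
    return 1.0 / (gamma + gamma_m + k_nr)

# Free space: Gamma = 0.1 /ns, k_nr = 0.9 /ns
q0, tau0 = quantum_yield(0.1, 0.9), lifetime(0.1, 0.9)    # Q0 = 0.10, tau0 = 1.0 ns
# Near metal, with an assumed metal-induced radiative rate Gamma_m = 0.9 /ns
qm, taum = quantum_yield(0.1, 0.9, 0.9), lifetime(0.1, 0.9, 0.9)
```

A larger Γm simultaneously raises the quantum yield and shortens the lifetime, which is the signature of radiative-rate (as opposed to excitation-field) enhancement near a metal.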

Fig. 15 Photograph of fluorescein-labeled HSA (molar ratio of fluorescein/HSA near 7) on quartz (left) and on an SIF (right) as observed with 430-nm excitation and a 480-nm long-pass filter (Reprinted with permission of Elsevier from [89])

Plasmon-Enhanced Super-Localization Microscopy

A wide variety of nanostructures has been investigated with regard to the excitation and localization of SP to produce dramatically amplified, localized near-field electromagnetic waves, also called hot spots. The localization of hot spots created at the surface of nanostructures has been explored for super-resolution microscopy. Formation of hot spots in the near-field is associated with strong localization of plasmonic dipoles at edges, ridges, or vertices due to the lightning-rod effect [101, 102]. Some of the simple nanostructures that may be used for field localization are shown in Fig. 16. Such nanostructures can be fabricated by various techniques, for example, electron-beam lithography, focused ion-beam milling, and nanoimprinting. Compared to the conventional, diffraction-limited imaging techniques described earlier, the use of localized plasmonic near-fields can produce resolution enhancement for imaging biomolecules, because a hot spot provides a field much smaller than the diffraction limit for sampling target fluorescence. A well-designed hot spot should meet the following requirements to be useful for super-localization microscopy: (1) the optimum hot spot has the smallest FWHM, to provide a sub-diffraction-limited PSF; (2) a circularly symmetric shape is desired, because asymmetric near-field hot spots would distort the reconstructed images and degrade the imaging resolution, as the FWHM of the long axis of a hot spot dominates the achievable resolution; and (3) TIR should be maintained under all circumstances. In addition, a sufficiently fine grid of hot spots provides a spatial frequency exceeding, by more than a factor of two, the frequency stipulated by the diffraction limit, as required by the Nyquist theorem. In practice, other issues can also arise; for instance, secondary peaks may exist, which contribute to background noise.
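The grid-fineness requirement translates into a simple bound on hot-spot pitch. A sketch with an illustrative wavelength and NA (not values from the cited works):

```python
def max_hotspot_pitch_nm(wavelength_nm, na):
    """Nyquist-style bound: sampling by hot spots should be at least twice as
    fine as the diffraction-limited spot d = lambda / (2 NA), so the grid
    pitch should not exceed d / 2."""
    diffraction_limit_nm = wavelength_nm / (2.0 * na)
    return diffraction_limit_nm / 2.0

pitch = max_hotspot_pitch_nm(488, 1.49)  # ~82 nm for 488-nm light at NA 1.49
```

Any coarser grid under-samples the object at the scale the diffraction limit already resolves, forfeiting the benefit of the localized fields.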

Numerical Calculation of Near-Field Localization

Various nanostructures, such as bow-ties and C- or T-shapes, have been considered to form a hot spot for diverse applications including surface-enhanced Raman spectroscopy (SERS). In this part, we describe a recent report on near-field hot spots that may be formed by relatively simple nanostructures of circular, rhombic, and square shape [103]. For the numerical investigation, many geometrical parameters, such as array period, pattern size, depth, and thickness of an underlying layer, were varied. For the calculation, RCWA in 3D was used with periodic boundary conditions. All nanostructures were assumed to lie on a BK7 glass substrate with a 2-nm-thick chromium adhesion layer


Fig. 16 Scanning electron microscope (SEM) image of various nanostructures: (a) linear nanograting, (b) nano-rings, and (c) nanoapertures of squares and (d) triangles. The patterns were fabricated by electron-beam lithography

and a gold film. For SP excitation, TM-polarized light at λ = 488 nm, which is suited to GFP fluorescence excitation, was incident at a fixed angle of θi = 60°. In the range of parameters considered, the optimized nanostructure was found to be periodic squares with 50-nm sides. These square nanoholes were shown to provide a spot size of 5,830 nm² (= 53 nm × 110 nm), which is significantly smaller than a diffraction-limited spot. While this study was limited to simple nanopatterns, a gap-based nanostructure typically produces even smaller near-field localization [104–108].
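To put that spot size in perspective, it can be compared with the area of a diffraction-limited spot. A rough sketch (the 488-nm wavelength comes from the study above; the NA is an assumed value):

```python
import math

def diffraction_limited_area_nm2(wavelength_nm, na):
    """Area of a circular spot with diameter d = lambda / (2 NA)."""
    d = wavelength_nm / (2.0 * na)
    return math.pi * (d / 2.0) ** 2

hot_spot_area = 53 * 110                                   # 5,830 nm^2, the RCWA result above
far_field_area = diffraction_limited_area_nm2(488, 1.49)   # ~21,000 nm^2
ratio = far_field_area / hot_spot_area                     # sampling area several times smaller
```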

Field Enhancement Effect

Before we address super-resolution microscopy based on near-field localization, it is appropriate to note that the spatial localization is invariably accompanied by significant field enhancement. The field enhancement can be produced in many ways using different nanopatterns for a variety of applications. Recently, the use of grating-type sub-wavelength nanostructures was explored for stronger fluorescence emission by near-field localization [109]. Numerical computation by the FDTD method found that enhanced localization is induced by a larger dg and that a thicker grating makes the evanescent field less uniform and more localized, as shown in Fig. 17. Also, a silver nanograting presented stronger field localization than gold. On the other hand, a silver nanograting with dg = 10 nm and L = 300 nm, shown in Fig. 18, was found to exhibit relatively uniform field enhancement and yet reasonably strong plasmonic fields at the surface. It is interesting that a 10–20-nm-thick structure is sufficient to support SP, in contrast to the 40–50-nm-thick metal films used in traditional SPR structures. The field enhancement was verified quantitatively by exciting and imaging fluorescent microbeads distributed randomly on grating samples and on a bare prism control. For cell imaging experiments, A431 human epithelial carcinoma cells were cultured and quantum-dot images of A431 cells were acquired, as shown in Fig. 19. The fluorescence emission from cells on


Fig. 17 Surface near-field intensity calculated by FDTD methods: (a) conventional TIRF on a silver film, (b) gold at dg = 10 nm and L = 300 nm, (c) dg = 20 nm and L = 300 nm, (d) silver at dg = 10 nm and L = 100 nm, (e) dg = 10 nm and L = 300 nm, and (f) silver at dg = 20 nm and L = 300 nm (Reprinted with permission of IOP Publishing from [109])

Fig. 18 SEM image of a silver grating (dg = 10 nm and L = 300 nm) (Reprinted with permission of IOP Publishing from [109])

the test sample was visibly enhanced in intensity compared with the images on the control, in a manner that is, in general, quantitatively consistent. For the cell images in Fig. 19, slight degradation appeared in conjunction with the mismatch between the optimal quantum-dot excitation spectrum and the design wavelength of the nanograting. The implication of these results is that field enhancement accompanies the near-field localization. If desired, the enhanced fields may be utilized for specific applications, although photobleaching may be aggravated by stronger fluorescence excitation.


Fig. 19 TIRFM images of A431 cells: (a), (b) on a bare control substrate and (c), (d) on a sub-wavelength grating sample, with ×25 digital magnification (Reprinted with permission of IOP Publishing from [109])

Super-Resolution Microscopy

Localized near-field hot spots can be utilized for various applications including super-resolution microscopy [110–116]. Here, we focus on super-resolution imaging based on near-field localization and explore related issues such as materials, nanopatterns, optics, and image deconvolution processes. The rationale behind the use of near-field localization for super-resolution microscopy is that, if a localized hot spot is optimized to be just small enough to excite a single molecule, fluorophore excitation in a diffraction-limited spot indicates the existence of a single target molecule, and the measured fluorescence image can then be post-processed to produce super-resolved images. However, it should be noted that simple use of a hot spot does not improve image resolution, because the PSF is determined not by the near-field characteristics but by the far-field properties. Many studies have tried to use nanoplasmonic localized near-fields that are modulated by nanostructures and to temporally sample the fluorescence excited by the near-fields. A recent study reports the implementation of highly vertical nanostructures for the generation of a field that decays quickly in a cell, allowing selective visualization of intracellular fluorophores [110]. Though the electromagnetic confinement here is not based on plasmonic field localization, this work is still worth noting because it shares the nature of localization-based, spatially selective detection. For the confinement of electromagnetic waves, the authors produced a transparent dielectric nanostructure with a diameter smaller than the light wavelength to restrict the propagation of light while generating an evanescent wave along the vertical surface within about 1 µm of depth into the sample interior. A highly confined observation volume was created for single-molecule detection and sensitive optical measurement of dyes or proteins of interest in the live-cell environment.
The vertical nanostructures were illuminated in transmission, as shown in Fig. 20a. Epi-fluorescence excitation at 488 nm, shown in Fig. 20b, revealed diffuse fluorescence over the entire cell. The lack of light confinement in the epi-illumination mode caused the fluorescence intensity at the nanostructure to be lower than that of the immediate surroundings. In contrast, selective excitation of the vertical


Fig. 20 Fluorescence imaging using vertical nanostructure illumination in live cells. (a) Transmitted light imaging reveals the locations of nanostructures. (b) Fluorescence imaging by epi-illumination. (c) Nanostructure illumination excites only those fluorescent molecules that are very close to nanostructures inside the cell. Scale bar: 10 µm (Reprinted with permission of the National Academy of Sciences, USA from [110])

Fig. 21 Antibody-labeled nanopillars simultaneously recruit and illuminate proteins of interest in live cells. (a) White-light imaging reveals the locations of the nanostructures. (b) Fluorescence imaging by epi-illumination shows the shape of a COS7 cell expressing GFP-synaptobrevin. (c) Nanopillar illumination shows an extremely clean signal, colocalizing perfectly with the nanopillars inside the cell area. (d–f) Zoomed-in images show that nanopillar locations usually have brighter fluorescence compared with surrounding areas. Scale bar: 10 µm (Reprinted with permission of the National Academy of Sciences, USA from [110])

nanostructures by trans-illumination provided a highly localized fluorescence signal around the pillars, as shown in Fig. 20c. Also, nanostructures were coated with antibodies for targeted illumination of intracellular proteins. Figure 21 shows COS7 cell lines transfected with a plasmid encoding the GFP-fused transmembrane protein synaptobrevin and nanostructures modified with antibodies against GFP. Figure 21c, f show bright fluorescence signals associated with colocalization of anti-GFP antibodies with the


nanostructures inside the cell. The experiment indicates that vertical confinement nanostructures can not only function as an array of localized light sources inside a cell but also probe local cellular events of interest. On the other hand, vertical nanostructures are intrusive and may affect intracellular molecular dynamics and cell viability.

SUPRA Imaging

Nanolithography techniques used to define nanostructures for the localization of near-fields are often too costly and involve low-throughput processes. Obviously, a single process that completes the fabrication of nanopatterns as a whole is desirable. For this reason, temperature-annealed nanoislands were used as a template for super-localization microscopy, which is called SUPRA imaging [117]. The nanoislands form a spatially random distribution of hot spots on the metal surface that is determined by the geometric parameters of the islands in a complicated way. SUPRA imaging thus depends on the excitation of SPs in random nanoisland patterns for enhanced lateral imaging resolution. If fluorophores distributed on a substrate are imaged by conventional microscopy, multiple molecules excited in a field of view cannot be distinguished due to the diffraction limit. In SUPRA imaging, fluorophores are excited by the hot spots created at nanoislands. If the distribution of hot spots can be controlled such that approximately a single hot spot exists in a field of view, and if the hot spot is small enough to excite a single molecule, single-molecule imaging becomes a possibility. In this sense, how well one can control the distribution of the nanoislands and the near-fields is critical for the performance of SUPRA imaging. For the preparation of nanoisland samples, a thin silver film was annealed at 175 °C for 5 min, whereby the film was transformed into nanoislands. The size distribution of the nanoislands can be adjusted to some degree by adjusting the film thickness (df).
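The island statistics discussed in this section are summarized by a log-normal separation density (Eq. 14). A sketch evaluating that density with assumed parameters (sc and ws chosen purely for illustration) and checking that it integrates to unity:

```python
import numpy as np

def lognormal_pdf(s, s_c, w_s):
    """p(s) = exp(-[ln(s/s_c) / (sqrt(2)*w_s)]**2) / (sqrt(2*pi) * s * w_s)."""
    return (np.exp(-(np.log(s / s_c) / (np.sqrt(2.0) * w_s)) ** 2)
            / (np.sqrt(2.0 * np.pi) * s * w_s))

s = np.linspace(1.0, 2000.0, 200001)        # island separations in nm
p = lognormal_pdf(s, s_c=80.0, w_s=0.4)     # assumed median 80 nm, log-width 0.4
area = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(s))  # trapezoidal integral, ~1.0
```

The heavy right tail of the log-normal form means a minority of island pairs are separated far more widely than the median, which is relevant when aiming for roughly one hot spot per field of view.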
Figure 22a, b present SEM and AFM images of nanoislands as a result of temperature annealing,

Fig. 22 (a) SEM and (b) AFM images of temperature-annealed nanoislands. (c) SEM image of an A549 cell attached on the nanoislands (Reprinted with permission of Wiley & Sons, Inc. from [117])


Fig. 23 Geometrical distribution of the fabricated nanoislands: (a) nanoisland size (b) separation, and (c) numerically calculated near-field distribution by RCWA (Reprinted with permission of Wiley & Sons, Inc. from [117])

when the initial silver thin-film thickness was df = 10 nm. An image of an A549 cell on the synthesized nanoislands is also shown in Fig. 22c. For df = 10 and 20 nm, the average island size was obtained as 112 and 118 nm, respectively, and the size was found to follow a normal distribution, as shown in Fig. 23a. On the other hand, the distribution of the separation between islands was measured to fit a log-normal probability density function p(s) given by:

p(s) = [1/(√(2π) s ws)] exp{−[ln(s/sc)/(√2 ws)]²}   (14)

Here, s, sc, and ws denote the separation between islands, the average separation, and its deviation, respectively. The near-field distribution created by the nanoislands was calculated by RCWA (Fig. 23c) and clearly shows that each field of view, represented by a square in the figure, contains one hot spot or none, although the size of a hot spot ranges from 50 to 100 nm in FWHM, much larger than the molecular scale. SUPRA imaging was experimentally confirmed by imaging receptor-mediated endocytosis of adenovirus through specific targeting of coxsackievirus and adenovirus receptors (CARs) at the A549 cell membrane. Imaging the pathways of adenovirus particles across the cell membrane is an ideal application for SUPRA imaging, because an adenovirus is approximately 90 nm in diameter, similar to the size of a hot spot formed by the nanoislands. Figure 24a, e present bright-field images of the cell line taken 30 min after the injection of adenoviruses on a thin-film control and on nanoislands, respectively. Figure 24b, c show conventional plasmon-enhanced microscopy images measured 15 and 30 min after injection on the thin-film control. In contrast, Fig. 24f, g are the corresponding images by SUPRA imaging on nanoislands. The images taken 15 min after injection in Fig. 24b, f indicate that


Fig. 24 Images obtained after endocytosis of adenovirus into A549 cells. (a) Bright-field image of the cell and (b) plasmon-enhanced microscopy images of the thin-film control: 15 and (c) 30 min after the injection of adenoviruses. The same images in (e) bright-field and (f), (g) SUPRA imaging on the nanoisland sample. (d), (h) Magnified images of squares shown in (c) and (g). (i) Fluorescence intensity profiles along the lines shown in (d) and (h) (Reprinted with permission of Wiley & Sons, Inc. from [117])

cell morphology tends to be maintained, whereas the cells become swollen after 30 min, as shown in Fig. 24c, g, as a result of internalization of the adenoviruses. Comparison of the conventional plasmon-enhanced microscopy images taken on the thin-film control in Fig. 24c, d with the SUPRA images on the nanoisland sample in Fig. 24g, h at the same particle concentration confirms the improved resolution. The intensity profiles presented in Fig. 24i contrast the resolution obtained in the images of adenovirus particles. SUPRA imaging provides well-defined intensity characteristics, with an FWHM measured to be 143 nm. This is approximately in line with the scale of the convolution between a hot spot and an adenovirus. In other words, the result confirms the detection of a single virus particle.

NLS

Despite the advantages of SUPRA imaging, its random nature makes image deconvolution very difficult, because the exact locations and shapes of the near-field hot spots cannot be fully determined unless near-field detection is performed in advance. For this reason, periodic nanostructures with known kernel shapes have been used for localization-based super-resolution microscopy. As an example, suppose that closely located fluorescent molecules are excited by a localized near-field hot spot while they move at a constant speed. Conventional imaging techniques cannot distinguish individual molecular fluorescence because of the diffraction limit. However, fluorescence sampling with time progression based on the hot spot can provide information at an improved resolution, as long as the kernel shape is known a priori and sufficiently small. In other words, a localized field


Fig. 25 (a) Experimental schematics for the NLS. (b) SEM image of the nanoaperture-type antenna arrays (ϕ = 300 nm and L = 1 µm). The inset shows a magnified and tilted SEM image of the nanoaperture arrays (Reprinted with permission of Wiley & Sons from [113])

temporally samples the movement of fluorescent molecules for enhanced lateral resolution in the NLS [113]. The schematic for the NLS is shown in Fig. 25a. The effective resolution of the NLS is determined by the kernel size, which should be much smaller than the diffraction-limited PSF. In this regard, optimal design of the nanostructures is important in the NLS. The experimental feasibility of the NLS was demonstrated by imaging the sliding of microtubules in vitro that were fluorescently labeled with rhodamine. Microtubules participate in many cellular processes, e.g., cell division and vesicular transport, that involve molecular movements in a cell. In this work, the circular nanoapertures shown in Fig. 25b were implemented with diameters (ϕ) of 300–500 nm and array periods (L) in the range of 1–3 µm. The highest field localization was obtained with nanoapertures of ϕ = 300 nm and L = 1 µm, as shown in Fig. 26. For image reconstruction, the near-field distribution was first calculated by RCWA. The theoretically calculated result was consistent with the experimental results. It is clear that hot spots exist in the near-fields, as shown in Fig. 26, for the nanoaperture structures with ϕ = 300 nm and L = 1 µm at an angle of incidence of 70°. The shape of a hot spot created by the nanostructure was half-elliptical due to the oblique light incidence. The FWHM of the hot spot was 39 nm (width) and 135 nm (height). The acquired fluorescence intensity can be processed for reconstruction into a super-resolved image through serial convolution with the near-field kernel, i.e., an image can be reconstructed by

I(r; t) = Σ_{m,n} K(r) ⊗ i_{m,n}(t) δ(r − r_{m,n})   (15)

where I(r; t) is the image, K(r) is the near-field kernel formed by a single nanostructure, and i_{m,n}(t) is the intensity measured at each of the 2D nanostructure elements in an array indexed by m and n. K(r) acts as a point spread function as a result of the finite size of the near-field kernel.
Therefore, the product of i_{m,n}(t) with δ(r − r_{m,n}) represents the light intensity measured at each nanostructure element in an array. Equation 15 presumes that the locations of the nanostructures, and of the hot spots formed by them, can be exactly determined and that the distribution of hot spots is spatially fixed. Equation 15 can be simplified as


Handbook of Photonics for Biomedical Engineering DOI 10.1007/978-94-007-6174-2_4-1 # Springer Science+Business Media Dordrecht 2014

Fig. 26 Numerically calculated near-field distribution for a light incidence angle of 70° and a nanoaperture of φ = 300 nm and Λ = 1 μm. (a) The near-field hot spot is 39 nm in width (short axis) and 135 nm in height (long axis). The x and y axes are in nm. Dotted lines represent the profile along which the field intensities are provided. (b) Near-field intensity distribution of the nanoaperture arrays (Reprinted with permission of Wiley & Sons from [113])

I(r) = Σ_k K(r) ⊗ [i_{a,b}(kΔt) δ(r − kΔt·v − r_{a,b})]    (16)

i_{a,b}(kΔt) is the temporally measured intensity of microtubular fluorescence at the k-th time step; Δt and v are the length of a time step and the sliding speed of a microtubule. Each time a microtubule is sampled, it is displaced by vΔt. During image acquisition, the CCD shutter speed was 0.0517 s. The sampling of fluorescence was performed by taking the peak fluorescence intensity measured at each acquisition. The reconstructed images of microtubules by the NLS are presented in Fig. 27 in comparison with a reference control image of microtubules captured on a gold film (thickness = 10 nm) using conventional TIRFM. As expected, the reference image of Fig. 27a looks coarse due to the diffraction limit, which makes it difficult to resolve details of a microtubule; the diffraction-limited lateral resolution is estimated to be 250 nm. In contrast, a dramatic enhancement in image clarity and resolution is observed in the NLS image of Fig. 27b. The intensity profile shown in Fig. 27c allows the resolution to be estimated to be on the order of 70–80 nm in the direction of the movement, which was determined by the finite kernel size and the CCD sampling rate. Note that the nanoapertures create a hot spot whose size is half the distance that a microtubule travels in a single acquisition time step. On the other hand, the CCD acquires an image by integration, so the switching operation broadens the kernel by a factor of two. This process effectively increases the kernel size and potentially degrades the image resolution; therefore, all the space within a microtubule was completely filled in. The resolution depends on the direction of movement because of the ellipticity of the kernel shape: in the direction orthogonal to the movement, the measured resolution was approximately 135 nm. Overall, it is suggested that the resolution can be enhanced further by reducing the size of the hot-spot kernel and increasing the sampling rate, i.e., the shutter speed.
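The summation in Eq. 16 can be sketched numerically. The snippet below is an illustrative 1D toy model, not the authors' implementation: the near-field kernel K(r) is approximated as a Gaussian with the ~39-nm FWHM quoted above, and the time-step intensities, speed, and positions are hypothetical values chosen only to mimic the scales in the text (v·Δt ≈ 40 nm per frame).

```python
import numpy as np

def gaussian_kernel(x, fwhm):
    """1D near-field kernel K(r), approximated here as a Gaussian of given FWHM (nm)."""
    sigma = fwhm / 2.355
    return np.exp(-x**2 / (2 * sigma**2))

def reconstruct_nls(intensities, dt, v, r_ab, grid, kernel_fwhm):
    """Eq. 16 sketch: sum over time steps k, placing K(r) at r_ab + k*dt*v,
    weighted by the measured intensity i_ab(k*dt)."""
    image = np.zeros_like(grid)
    for k, i_k in enumerate(intensities):
        center = r_ab + k * dt * v  # hot-spot sampling position at step k
        image += i_k * gaussian_kernel(grid - center, kernel_fwhm)
    return image

grid = np.linspace(0.0, 1000.0, 2001)        # 1D spatial grid, nm
intensities = [0.2, 0.9, 1.0, 0.8, 0.3]      # hypothetical peak intensities per frame
img = reconstruct_nls(intensities, dt=0.05, v=800.0, r_ab=100.0,
                      grid=grid, kernel_fwhm=39.0)  # v*dt = 40 nm per step
```

Because the 40-nm displacement per step is comparable to the 39-nm kernel width, successive kernels overlap and the reconstructed trace of the moving fluorophore is filled in continuously, which is the "completely filled" behavior described above.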


Fig. 27 Conventional and reconstructed images by TIRFM and NLS. (a) Microtubules on a 10-nm-thick gold film measured over the microscopic field of view by TIRFM. (b) Reconstructed microtubule image by the NLS using a nanoaperture array of φ = 300 nm and Λ = 1 μm. The lateral resolution is 76 nm in the direction of movement (135 nm orthogonally). Insets show the magnified images. (c) Fluorescence intensity profiles across the line in the circle of image (b) (Reprinted with permission of Wiley & Sons from [113])

PSALM While the NLS offers great versatility for imaging moving molecules at an enhanced resolution, it is limited for molecules that do not move or that move at an unknown speed. For general super-resolution microscopy based on near-field localization, the nanostructure arrays that create hot spots need to be fine, with an array period much smaller than the light wavelength. Unfortunately, nanostructures cannot be placed too close to each other, as the localized near-fields would then merge. This limitation can be circumvented by taking advantage of extremely fine meshes of nanopatterns and temporally switching on only a subset of them, so that the excited hot spots remain far enough apart to avoid coupling. PSALM implements this temporal switching of hot spots for enhanced super-resolution microscopy [118]. The optical setup for PSALM is shown in Fig. 28. The main idea is spatially switched field localization by temporal variation of the incident light path. Figure 28a presents light incidence at a wave vector k_in(θ), which creates a well-defined near-field hot spot at the edge of a nanostructure. A symmetric hot spot can be produced at the opposite side of the nanostructure with light incidence at the opposite wave vector −k_in(θ). This causes an excited hot spot to be displaced by a preset distance (the nanograting ridge size, in this case). For adequate temporal sampling, the switching can be performed at a time interval much shorter than the characteristic time associated with a specific molecular interaction. Super-resolution microscopy is possible as long as the distance between switched hot spots is below the diffraction limit, assuming that the hot spots are sufficiently separated from each other. Conceptually, PSALM can be regarded as sampling target fluorescence with multiple incident light wave vectors to increase imaging resolution.
While the imaging PSF does not improve, the lateral resolution of PSALM is dominated by the dimensions of the nanostructures used for near-field excitation and by the precision of the control of light incidence. PSALM also bears resemblance to super-resolution techniques based on stochastic photoactivation, such as STORM. However, PSALM can be much faster, since the spatial switching can be performed rapidly. The drawback of PSALM is that super-resolved information can be obtained only where hot spots can be excited and switched. For the proof of concept, 2D grating-type nanostructures were fabricated to excite and localize evanescent fields; the evanescent fields were therefore localized only two dimensionally. For full 3D


Fig. 28 (a) Schematic illustration of PSALM. Switching of light incidence between L1 and L2 produces spatial switching of hot spots between HS1 and HS2. (b) Experimental configuration (CO collimator, PO polarizer, M mirrors, RM rotating mirror, OB objective, and F filter). (c) SEM image of the fabricated nanograting (Reprinted with permission of the Optical Society of America from [118])

localization, different types of nanostructures can be utilized. Figure 28a presents a schematic of 2D PSALM, where the light incidence is switched between angles of θin = +60° and −60°. At θin = +60°, a localized hot spot of approximately 30 nm FWHM is formed on the left-hand side of the grating ridge. With light incidence at θin = −60°, the hot spot is symmetrically switched to the other side. For experimental verification of PSALM, a prism-based TIRFM was set up with adjustable light incidence, as shown in Fig. 28b. The switching speed was determined by the CCD frame rate to be approximately 160 ms/frame. Nonlinear Gaussian least-squares fits were used for the image deconvolution process. Figure 28c shows an SEM image of the nanograting used for the experiment. The nanograting was fabricated by electron-beam lithography, and the samples were made of gold. The thickness of the nanograting sample was 40 nm with a 300-nm period. According to the scheme, the ridge length is an approximate measure of the imaging resolution and was determined by SEM to be 100 nm, with an overall grating fill factor of 33%. The imaging performance of PSALM was assessed by imaging fluorescent nanobeads (φ = 40 nm) with 488-nm excitation and 560-nm emission. Figure 29a presents a raw image of the fluorescent nanobeads (φ = 40 nm) measured by plasmon-enhanced TIRFM. The image looks blurred and the beads cannot be resolved because they are diffraction limited. Each pixel image size is approximately 230 nm, slightly larger than the diffraction limit. However, if the fluorescent beads are imaged by PSALM on the nanograting, they can be resolved at the same magnification, as shown in Fig. 29b. Note that the enhancement works only in one direction, because the localization was based on a linear grating; imaging remains diffraction limited in the direction parallel to the grating. Therefore, the images of beads in Fig. 29b are elliptical rather than circular.
The image can be made isotropic if 3D nanostructures that


Fig. 29 Images of fluorescent beads taken by (a) TIRFM and (b) PSALM. Also shown in insets are the magnified images of beads (marked with arrows) measured by TIRFM and PSALM. The bar in the insets is 300 nm long. For (b), grating wires are directed vertically. Intensity variations during angular switching: (c) TIRFM, and PSALM with a bead (d) on the left side of a grating ridge and (e) on both sides. Blue squares and red circles represent light incidence with kx = k0 sin(+60°) and kx = k0 sin(−60°), respectively, where k0 is the light wave vector in free space (Reprinted with permission of the Optical Society of America from [118])

compensate for the ellipticity due to the oblique light incidence are used to excite and localize the SPs. The peak-to-peak distance between fluorescence signals was measured to be about 90–100 nm, which is in excellent agreement with the length of a grating ridge. Figure 29c–e presents the intensity variation of a bead during the switching. As shown in Fig. 29c, the angular switching does not affect conventional images; the measured intensity remains almost constant regardless of the angular switching. In contrast, Fig. 29d shows that light incidence at θin = −60° excites a bead located on the right-hand side of a grating ridge, and the bead is turned off at θin = +60°. Figure 29e shows the measured intensity when beads are expected to exist on both sides of a nanograting ridge. In this case, a smaller intensity difference was measured during the angular switching; the difference itself was associated with the distance from the grating and with the relative dipole orientations.
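The spatial-switching principle can be illustrated with a small toy model. The sketch below is a hypothetical 1D idealization, not the published analysis: a grating with the 300-nm period and ~100-nm ridge length quoted above is assumed to place hot spots on the left ridge edges at one incidence angle and on the right edges at the opposite angle, and a single bead position is chosen for illustration.

```python
import numpy as np

PERIOD = 300.0  # nm, nanograting period (from the text)
RIDGE = 100.0   # nm, ridge length ~ attainable resolution (from the text)

def hotspot_positions(n_ridges, angle_sign):
    """Hot-spot centers for one incidence angle: left ridge edges for +1,
    right ridge edges (shifted by the ridge length) for -1."""
    edges = np.arange(n_ridges) * PERIOD
    return edges if angle_sign > 0 else edges + RIDGE

def psalm_sample(fluorophore_x, n_ridges, fwhm=30.0):
    """Per-ridge fluorescence readout for the two switched incidence angles,
    modeling each ~30-nm-FWHM hot spot as a Gaussian excitation profile."""
    sigma = fwhm / 2.355
    frames = {}
    for sign in (+1, -1):
        spots = hotspot_positions(n_ridges, sign)
        frames[sign] = np.exp(-(spots - fluorophore_x) ** 2 / (2 * sigma**2))
    return frames

# Hypothetical bead 10 nm from the left edge of the second ridge (x = 300 nm):
frames = psalm_sample(fluorophore_x=310.0, n_ridges=4)
best_plus = hotspot_positions(4, +1)[frames[+1].argmax()]
```

In this toy model the bead lights up strongly at one angle and is effectively switched off at the other, so comparing the two frames localizes it to within roughly one ridge length, mirroring the on/off behavior of Fig. 29d.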

Summary Thanks to super-resolution microscopy, the understanding of molecular events in the cellular environment has greatly improved. Super-resolution imaging can benefit many scientific fields, including biology and biomedical engineering. A new generation of super-resolution microscopy techniques is revolutionizing biomedical research, allowing nanometer-scale observation of molecules within cells. In this chapter, particular emphasis has been placed on SP-enhanced super-localization of biomolecules to produce super-resolution images. SP-enhanced super-localization techniques


achieve three-dimensional sub-diffraction-limited localization of light fields in two ways: axially, the dependence on evanescent waves provides extremely fine depth resolution; laterally, surface nanostructures modulate near-fields to form localized hot spots that are smaller than 100 nm. Various techniques were described, such as SUPRA imaging, NLS, and PSALM. The size of a hot spot is typically (50–100 nm)² in the lateral plane and 100 nm axially, which we expect to decrease significantly in the near future. While we have primarily focused on super-resolution microscopy, the range of potential applications is limited only by imagination and can be extended further to optical sensing, molecular manipulation, and nanofabrication, to name a few.

References 1. Gould SJ (2000) The lying stones of Marrakech: penultimate reflections in natural history. Jonathan Cape, London 2. Rutherford E, Martin C, Murphy PA, Arkwright JA, Barnard JE, Smith KM, Gye WE, Ledingham JCG, Salaman RN, Twort FW, Andrewes CH, Douglas SR, Hindle E, Brierley WB, Boycott AE (1929) Discussion on “Ultra-microscopic viruses infecting animals and plants”. Proc R Soc Lond Ser B 104(733):537–560 3. Koch R (1876) Untersuchungen über Bakterien: V. Die Ätiologie der Milzbrand-Krankheit, begründet auf die Entwicklungsgeschichte des Bacillus anthracis. Cohns Beitr Biol Pflanz 2(2):277–310 4. Ardenne M (1938) Das Elektronen-Rastermikroskop. Z Phys 109(9–10):553–572 5. Nebesářová J, Vancová M (2007) How to observe small biological objects in low voltage electron microscope. Microsc Microanal 13(S03):248–249 6. Drummy LF, Yang J, Martin DC (2004) Low-voltage electron microscopy of polymer and organic molecular thin films. Ultramicroscopy 99(4):247–256 7. Valeur B, Berberan-Santos MN (2012) Molecular fluorescence: principles and applications, 2nd edn. Wiley-VCH, Weinheim 8. Born M, Wolf E (1999) Principles of optics, 7th edn. Cambridge University Press, Cambridge 9. Abbe E (1870) Beiträge zur Theorie des Mikroskops und der mikroskopischen Wahrnehmung. Arch Mikrosk Anat 9:413–420 10. Heintzmann R, Ficz G (2006) Breaking the resolution limit in light microscopy. Brief Funct Genomic Proteomic 5(4):289–301 11. Axelrod D (1981) Cell-substrate contacts illuminated by total internal reflection fluorescence. J Cell Biol 89(1):141–145 12. Steyer JA, Almers W (2001) A real-time view of life within 100 nm of the plasma membrane. Nat Rev Mol Cell Biol 2(4):268–275 13. Hell SW, Wichmann J (1994) Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy. Opt Lett 19(11):780–782 14. Hell SW (2007) Far-field optical nanoscopy. Science 316(5828):1153–1158 15.
Sahl SJ, Leutenegger M, Hilbert M, Hell SW, Eggeling C (2010) Fast molecular tracking maps nanoscale dynamics of plasma membrane lipids. Proc Natl Acad Sci U S A 107(15):6829–6834 16. Nägerl UV, Bonhoeffer T (2010) Imaging living synapses at the nanoscale by STED microscopy. J Neurosci 30(28):9341–9346


17. Chojnacki J, Staudt T, Glass B, Bingen P, Engelhardt J, Anders M, Schneider J, Muller B, Hell SW, Krausslich HG (2012) Maturation-dependent HIV-1 surface protein redistribution revealed by fluorescence nanoscopy. Science 338(6106):524–528 18. Takasaki KT, Ding JB, Sabatini BL (2013) Live-cell superresolution imaging by pulsed STED two-photon excitation microscopy. Biophys J 104(4):770–777 19. Wagner E, Lauterbach MA, Kohl T, Westphal V, Williams GS, Steinbrecher JH, Streich JH, Korff B, Tuan HT, Hagen B, Luther S, Hasenfuss G, Parlitz U, Jafri MS, Hell SW, Lederer WJ, Lehnart SE (2012) Stimulated emission depletion live-cell super-resolution imaging shows proliferative remodeling of T-tubule membrane structures after myocardial infarction. Circ Res 111(4):402–414 20. Betzig E, Patterson GH, Sougrat R, Lindwasser OW, Olenych S, Bonifacino JS, Davidson MW, Lippincott-Schwartz J, Hess HF (2006) Imaging intracellular fluorescent proteins at nanometer resolution. Science 313(5793):1642–1645 21. Rust MJ, Bates M, Zhuang X (2006) Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nat Methods 3(10):793–795 22. Gould TJ, Verkhusha VV, Hess ST (2009) Imaging biological structures with fluorescence photoactivation localization microscopy. Nat Protoc 4(3):291–308 23. Pereira CF, Rossy J, Owen DM, Mak J, Gaus K (2012) HIV taken by STORM: superresolution fluorescence microscopy of a viral infection. Virol J 9:84 24. Mennella V, Keszthelyi B, McDonald KL, Chhun B, Kan F, Rogers GC, Huang B, Agard DA (2012) Subdiffraction-resolution fluorescence microscopy reveals a domain of the centrosome critical for pericentriolar material organization. Nat Cell Biol 14(11):1159–1168 25. Bailey B, Farkas DL, Taylor DL, Lanni F (1993) Enhancement of axial resolution in fluorescence microscopy by standing-wave excitation. Nature 366(6450):44–48 26. 
Neil MAA, Juskaitis R, Wilson T (1997) Method of obtaining optical sectioning by using structured light in a conventional microscope. Opt Lett 22(24):1905–1907 27. Gustafsson MG (2000) Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy. J Microsc 198(Pt 2):82–87 28. Choi JR, Kim D (2012) Enhanced image reconstruction of three-dimensional fluorescent assays by subtractive structured-light illumination microscopy. J Opt Soc Am A 29(10):2165–2173 29. Gustafsson MGL (2005) Nonlinear structured-illumination microscopy: wide-field fluorescence imaging with theoretically unlimited resolution. Proc Natl Acad Sci U S A 102(37):13081–13086 30. Shao L, Kner P, Rego EH, Gustafsson MG (2011) Super-resolution 3D microscopy of live whole cells using structured illumination. Nat Methods 8(12):1044–1046 31. York AG, Parekh SH, Dalle Nogare D, Fischer RS, Temprine K, Mione M, Chitnis AB, Combs CA, Shroff H (2012) Resolution doubling in live, multicellular organisms via multifocal structured illumination microscopy. Nat Methods 9(7):749–754 32. Fiolka R, Shao L, Rego EH, Davidson MW, Gustafsson MG (2012) Time-lapse two-color 3D imaging of live cells with doubled resolution using structured illumination. Proc Natl Acad Sci U S A 109(14):5311–5315 33. Arpali SA, Arpali C, Coskun AF, Chiang HH, Ozcan A (2012) High-throughput screening of large volumes of whole blood using structured illumination and fluorescent on-chip imaging. Lab Chip 12(23):4968–4971 34. Rankin BR, Hell SW (2009) STED microscopy with a MHz pulsed stimulated-Raman-scattering source. Opt Express 17(18):15679–15684


35. Takahara J, Kobayashi T (2004) Low-dimensional optical waves and nano-optical circuits. Opt Photon News 15(10):54–59 36. Zayats AV, Smolyaninov II, Maradudin AA (2005) Nano-optics of surface plasmon polaritons. Phys Rep 408(3–4):131–314 37. Raether H (1988) Surface plasmons on smooth and rough surfaces and on gratings. Springer, New York 38. Kim Y, Chung K, Lee W, Kim DH, Kim D (2012) Nanogap-based dielectric-specific colocalization for highly sensitive surface plasmon resonance detection of biotin-streptavidin interactions. Appl Phys Lett 101(23):233701 39. Oh Y, Lee W, Kim D (2011) Colocalization of gold nanoparticle-conjugated DNA hybridization for enhanced surface plasmon detection using nanograting antennas. Opt Lett 36(8):1353–1355 40. Kim K, Kim DJ, Moon S, Kim D, Byun KM (2009) Localized surface plasmon resonance detection of layered biointeractions on metallic subwavelength nanogratings. Nanotechnology 20(31):315501 41. Ma K, Kim DJ, Kim K, Moon S, Kim D (2010) Target-localized nanograting-based surface plasmon resonance detection toward label-free molecular biosensing. IEEE J Sel Top Quantum Electron 16(4):1004–1014 42. Yoon SJ, Kim D (2007) Thin-film-based field penetration engineering for surface plasmon resonance biosensing. J Opt Soc Am A 24(9):2543–2549 43. Lakowicz JR (1991) Topics in fluorescence spectroscopy. Plenum Press, New York 44. De Fornel F (2001) Evanescent waves: from Newtonian optics to atomic optics. Springer, Berlin 45. Rothenhäusler B, Knoll W (1988) Surface–plasmon microscopy. Nature 332(6165):615–617 46. Giebel K, Bechinger C, Herminghaus S, Riedel M, Leiderer P, Weiland U, Bastmeyer M (1999) Imaging of cell/substrate contacts of living cells with surface plasmon resonance microscopy. Biophys J 76(1):509–516 47. De Bruijn HE, Kooyman RP, Greve J (1993) Surface plasmon resonance microscopy: improvement of the resolution by rotation of the object. Appl Opt 32(13):2426–2430 48.
Boozer C, Kim G, Cong S, Guan H, Londergan T (2006) Looking towards label-free biomolecular interaction analysis in a high-throughput format: a review of new surface plasmon resonance technologies. Curr Opin Biotechnol 17(4):400–405 49. Campbell CT, Kim G (2007) SPR microscopy and its applications to high-throughput analyses of biomolecular binding events and their kinetics. Biomaterials 28(15):2380–2392 50. Hickel W, Kamp D, Knoll W (1989) Surface-plasmon microscopy. Nature 339(6221):186 51. Barnes WL, Dereux A, Ebbesen TW (2003) Surface plasmon subwavelength optics. Nature 424(6950):824–830 52. Huang B, Yu F, Zare RN (2007) Surface plasmon resonance imaging using a high numerical aperture microscope objective. Anal Chem 79(7):2979–2983 53. Wang W, Yang Y, Wang S, Nagaraj VJ, Liu Q, Wu J, Tao N (2012) Label-free measuring and mapping of binding kinetics of membrane proteins in single living cells. Nat Chem 4(10):846–853 54. Berger CEH, Kooyman RPH, Greve J (1994) Resolution in surface plasmon microscopy. Rev Sci Instrum 65(9):2829–2836 55. Somekh MG, Liu S, Velinov TS, See CW (2000) High-resolution scanning surface-plasmon microscopy. Appl Opt 39(34):6279–6287


56. Tanaka T, Yamamoto S (2003) Laser-scanning surface plasmon polariton resonance microscopy with multiple photodetectors. Appl Opt 42(19):4002–4007 57. Berguiga L, Zhang S, Argoul F, Elezgaray J (2007) High-resolution surface-plasmon imaging in air and in water: V(z) curve and operating conditions. Opt Lett 32(5):509–511 58. Somekh MG, Stabler G, Liu S, Zhang J, See CW (2009) Wide-field high-resolution surface-plasmon interference microscopy. Opt Lett 34(20):3110–3112 59. Bouhelier A, Ignatovich F, Bruyant A, Huang C, Colas des Francs G, Weeber JC, Dereux A, Wiederrecht GP, Novotny L (2007) Surface plasmon interference excited by tightly focused laser beams. Opt Lett 32(17):2535–2537 60. Kim DJ, Kim D (2010) Subwavelength grating-based nanoplasmonic modulation for surface plasmon resonance imaging with enhanced resolution. J Opt Soc Am B 27(6):1252–1259 61. Byun KM, Kim S, Kim D (2005) Design study of highly sensitive nanowire-enhanced surface plasmon resonance biosensors using rigorous coupled wave analysis. Opt Express 13(10):3737–3742 62. Kim K, Yoon SJ, Kim D (2006) Nanowire-based enhancement of localized surface plasmon resonance for highly sensitive detection: a theoretical study. Opt Express 14(25):12419–12431 63. Byun KM, Yoon SJ, Kim D, Kim SJ (2007) Experimental study of sensitivity enhancement in surface plasmon resonance biosensors by use of periodic metallic nanowires. Opt Lett 32(13):1902–1904 64. Malic L, Cui B, Veres T, Tabrizian M (2007) Enhanced surface plasmon resonance imaging detection of DNA hybridization on periodic gold nanoposts. Opt Lett 32(21):3092–3094 65. Brockman JM, Frutos AG, Corn RM (1999) A multistep chemical modification procedure to create DNA arrays on gold surfaces for the study of protein–DNA interactions with surface plasmon resonance imaging. J Am Chem Soc 121(35):8044–8051 66. Blow N (2009) Proteins and proteomics: life on the surface. Nat Methods 6(5):389–393 67.
Wark AW, Lee HJ, Corn RM (2005) Long-range surface plasmon resonance imaging for bioaffinity sensors. Anal Chem 77(13):3904–3907 68. Willets KA, Van Duyne RP (2007) Localized surface plasmon resonance spectroscopy and sensing. Annu Rev Phys Chem 58(1):267–297 69. Yu F, Knoll W (2004) Immunosensor with self-referencing based on surface plasmon diffraction. Anal Chem 76(7):1971–1975 70. Liebermann T, Knoll W (2000) Surface-plasmon field-enhanced fluorescence spectroscopy. Colloids Surf A 171(1–3):115–130 71. Yu F, Yao D, Knoll W (2003) Surface plasmon field-enhanced fluorescence spectroscopy studies of the interaction between an antibody and its surface-coupled antigen. Anal Chem 75(11):2610–2617 72. Millis BA (2012) Evanescent-wave field imaging: an introduction to total internal reflection fluorescence microscopy. Methods Mol Biol 823:295–309 73. Mertz J (2000) Radiative absorption, fluorescence, and scattering of a classical dipole near a lossless interface: a unified description. J Opt Soc Am B 17(11):1906–1913 74. Rohrbach A (2000) Observing secretory granules with a multiangle evanescent wave microscope. Biophys J 78(5):2641–2654 75. Axelrod D (2001) Total internal reflection fluorescence microscopy in cell biology. Traffic 2(11):764–774 76. Axelrod D, Burghardt TP, Thompson NL (1984) Total internal reflection fluorescence. Annu Rev Biophys Bioeng 13:247–268


77. Schneckenburger H (2005) Total internal reflection fluorescence microscopy: technical innovations and novel applications. Curr Opin Biotechnol 16(1):13–18 78. Toomre D, Manstein DJ (2001) Lighting up the cell surface with evanescent wave microscopy. Trends Cell Biol 11(7):298–303 79. Kim K, Cho EJ, Huh YM, Kim D (2007) Thin-film-based sensitivity enhancement for total internal reflection fluorescence live-cell imaging. Opt Lett 32(21):3062–3064 80. Lang T, Wacker I, Steyer J, Kaether C, Wunderlich I, Soldati T, Gerdes HH, Almers W (1997) Ca2+-triggered peptide secretion in single cells imaged with green fluorescent protein and evanescent-wave microscopy. Neuron 18(6):857–863 81. Fiolka R, Belyaev Y, Ewers H, Stemmer A (2008) Even illumination in total internal reflection fluorescence microscopy using laser light. Microsc Res Technol 71(1):45–50 82. Mattheyses AL, Simon SM, Rappoport JZ (2010) Imaging with total internal reflection fluorescence microscopy for the cell biologist. J Cell Sci 123(21):3621–3628 83. Jouvenet N, Neil SJ, Bess C, Johnson MC, Virgen CA, Simon SM, Bieniasz PD (2006) Plasma membrane is the site of productive HIV-1 particle assembly. PLoS Biol 4(12):e435 84. Grigoriev I, Akhmanova A (2010) Microtubule dynamics at the cell cortex probed by TIRF microscopy. Methods Cell Biol 97:91–109 85. Engel BD, Lechtreck KF, Sakai T, Ikebe M, Witman GB, Marshall WF (2009) Total internal reflection fluorescence (TIRF) microscopy of Chlamydomonas flagella. Methods Cell Biol 93:157–177 86. Kaiser R, Lévy Y, Vansteenkiste N, Aspect A, Seifert W, Leipold D, Mlynek J (1994) Resonant enhancement of evanescent waves with a thin dielectric waveguide. Opt Commun 104(4–6):234–240 87. Ke PC, Gan XS, Szajman J, Schilders S, Gu M (1997) Optimizing the strength of an evanescent wave generated from a prism coated with a double-layer thin-film stack. Bioimaging 5(1):1–8 88. Lakowicz JR (2001) Radiative decay engineering: biophysical and biomedical applications. 
Anal Biochem 298(1):1–24 89. Lakowicz JR, Malicka J, D’Auria S, Gryczynski I (2003) Release of the self-quenching of fluorescence near silver metallic surfaces. Anal Biochem 320:13–20 90. Ozbay E (2006) Plasmonics: merging photonics and electronics at nanoscale dimensions. Science 311(5758):189–193 91. Lee J, Hernandez P, Govorov AO, Kotov NA (2007) Exciton-plasmon interactions in molecular spring assemblies of nanowires and wavelength-based protein detection. Nat Mater 6(4):291–295 92. Hong G, Tabakman SM, Welsher K, Wang H, Wang X, Dai H (2010) Metal-enhanced fluorescence of carbon nanotubes. J Am Chem Soc 132(45):15920–15923 93. Dubertret B, Calame M, Libchaber AJ (2001) Single-mismatch detection using gold-quenched fluorescent oligonucleotides. Nat Biotechnol 19(4):365–370 94. Ekgasit S, Thammacharoen C, Yu F, Knoll W (2004) Evanescent field in surface plasmon resonance and surface plasmon field-enhanced fluorescence spectroscopies. Anal Chem 76(8):2210–2219 95. Futamata M, Maruyama Y, Ishikawa M (2003) Local electric field and scattering cross section of Ag nanoparticles under surface plasmon resonance by finite difference time domain method. J Phys Chem B 107(31):7607–7617 96. Aslan K, Huang J, Wilson GM, Geddes CD (2006) Metal-enhanced fluorescence-based RNA sensing. J Am Chem Soc 128(13):4206–4207


97. Zhang J, Fu Y, Chowdhury MH, Lakowicz JR (2007) Metal-enhanced single-molecule fluorescence on silver particle monomer and dimer: coupling effect between metal particles. Nano Lett 7(7):2101–2107 98. Sokolov K, Follen M, Aaron J, Pavlova I, Malpica A, Lotan R, Richards-Kortum R (2003) Real-time vital optical imaging of precancer using anti-epidermal growth factor receptor antibodies conjugated to gold nanoparticles. Cancer Res 63(9):1999–2004 99. Kyriacou SV, Brownlow WJ, Xu XH (2004) Using nanoparticle optics assay for direct observation of the function of antimicrobial agents in single live bacterial cells. Biochemistry 43(1):140–147 100. Edel JB, Wu M, Baird B, Craighead HG (2005) High spatial resolution observation of single-molecule dynamics in living cell membranes. Biophys J 88(6):L43–L45 101. Kottmann J, Martin O, Smith D, Schultz S (2000) Spectral response of plasmon resonant nanoparticles with a non-regular shape. Opt Express 6(11):213–219 102. Kottmann JP, Martin OJF, Smith DR, Schultz S (2001) Plasmon resonances of silver nanowires with a nonregular cross section. Phys Rev B 64(23):235402 103. Lee W, Kim K, Kim D (2012) Electromagnetic near-field nanoantennas for subdiffraction-limited surface plasmon-enhanced light microscopy. IEEE J Sel Top Quantum Electron 18(6):1684–1691 104. Galloway CM, Kreuzer MP, Acimovic SS, Volpe G, Correia M, Petersen SB, Neves-Petersen MT, Quidant R (2013) Plasmon-assisted delivery of single nano-objects in an optical hot-spot. Nano Lett 13(9):4299–4304 105. Liu ZQ, Liu GQ, Zhou HQ, Liu XS, Huang K, Chen YH, Fu GL (2013) Near-unity transparency of a continuous metal film via cooperative effects of double plasmonic arrays. Nanotechnology 24(15):155203 106. Neubrech F, Weber D, Katzmann J, Huck C, Toma A, Di Fabrizio E, Pucci A, Härtling T (2012) Infrared optical properties of nanoantenna dimers with photochemically narrowed gaps in the 5 nm regime. ACS Nano 6(8):7326–7332 107.
Suh JY, Kim CH, Zhou W, Huntington MD, Co DT, Wasielewski MR, Odom TW (2012) Plasmonic bowtie nanolaser arrays. Nano Lett 12(11):5769–5774 108. Ye J, Van Dorpe P (2012) Plasmonic behaviors of gold dimers perturbed by a single nanoparticle in the gap. Nanoscale 4(22):7205–7211 109. Kim K, Kim DJ, Cho E-J, Suh J-S, Huh Y-M, Kim D (2009) Nanograting-based plasmon enhancement for total internal reflection fluorescence microscopy of live cells. Nanotechnology 20(1):015202 110. Xie C, Hanson L, Cui Y, Cui B (2011) Vertical nanopillars for highly localized fluorescence imaging. Proc Natl Acad Sci U S A 108(10):3894–3899 111. Fromm DP, Sundaramurthy A, Schuck PJ, Kino G, Moerner WE (2004) Gap-dependent optical coupling of single “bowtie” nanoantennas resonant in the visible. Nano Lett 4(5):957–961 112. Jin EX, Xu X (2006) Enhanced optical near field from a bowtie aperture. Appl Phys Lett 88(15):153110 113. Kim K, Yajima J, Oh Y, Lee W, Oowada S, Nishizaka T, Kim D (2012) Nanoscale localization sampling based on nanoantenna arrays for super-resolution imaging of fluorescent monomers on sliding microtubules. Small 8(6):892–900 114. Schnell M, Garcia-Etxarri A, Huber AJ, Crozier K, Aizpurua J, Hillenbrand R (2009) Controlling the near-field oscillations of loaded plasmonic nanoantennas. Nat Photon 3(5):287–291


115. Genevet P, Tetienne J-P, Gatzogiannis E, Blanchard R, Kats MA, Scully MO, Capasso F (2010) Large enhancement of nonlinear optical phenomena by plasmonic nanocavity gratings. Nano Lett 10(12):4880–4883 116. Stranahan SM, Willets KA (2010) Super-resolution optical imaging of single-molecule SERS hot spots. Nano Lett 10(9):3777–3784 117. Kim K, Choi JW, Ma K, Lee R, Yoo KH, Yun CO, Kim D (2010) Nanoisland-based random activation of fluorescence for visualizing endocytotic internalization of adenovirus. Small 6(12):1293–1299 118. Kim K, Oh Y, Lee W, Kim D (2010) Plasmonics-based spatially activated light microscopy for super-resolution imaging of molecular fluorescence. Opt Lett 35(20):3501–3503

Page 35 of 35

Novel Plasmonic Microscopy: Principle and Applications Xiaocong Yuan and Changjun Min

Contents
Introduction: Basis of the Plasmonic Microscopy
Plasmonic Microscopy for Imaging, Sensing, Trapping, and Raman Spectroscopy
  Wide-Field Super-Resolution SPP Standing Wave Illumination Fluorescence Microscope with Plasmonic Structures/Optical Vortices
  Supersensitivity and Super Dynamic Range Biosensors Based on Cylindrical Vector Beam
  Trapping Metallic Particles by Plasmonic Tweezers Based on Cylindrical Vector Beam
  Radially Polarized Beam for Propagating Surface Plasmon-Assisted Gap-Mode Raman Spectroscopy
Summary
References

Abstract

In this chapter, we introduce a novel dynamic, all-optically controlled surface plasmon polariton (SPP) high-performance multifunctional optical microscope, combining optical microscopic imaging, biosensing, plasmonic tweezers, and surface-enhanced Raman scattering (SERS) in a single microscopic system. This optical microscope achieves super-resolved imaging, ultrahigh sensitivity for molecule detection, and real-time monitoring of the reaction processes of biological samples, fulfilling the requirement of multiparameter, multi-perspective, real-time, in situ measurement of biological samples. This chapter includes:

X. Yuan (*) • C. Min
Nanophotonics Research Centre & Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, College of Optoelectronic Engineering, Shenzhen University, Shenzhen, China
e-mail: [email protected]; [email protected]
© Springer Science+Business Media Dordrecht 2016
A. H.-P. Ho et al. (eds.), Handbook of Photonics for Biomedical Engineering, DOI 10.1007/978-94-007-6174-2_5-1


1. Using the phase shift of SPP standing waves to achieve super-resolution wide-field microscopic imaging
2. Developing novel biosensors with supersensitivity and super dynamic range, based on differential interference between radially and azimuthally polarized beams in a microscopic configuration
3. Theoretical analysis and experimental demonstration of plasmonic tweezers in a microscopic configuration for trapping metallic particles, and their applications in SERS
4. Building real-time, controllable, high-sensitivity novel SERS detection systems based on the coupling between an SPP virtual probe and the localized surface plasmon (LSP) resonance of metallic nanoparticles

Keywords

Surface plasmon polaritons • Localized surface plasmon • Plasmonic microscopy • Cylindrical vector beam • Super-resolution • Surface plasmon resonance imaging • Biosensor • Plasmonic tweezers • Surface-enhanced Raman spectroscopy

Introduction: Basis of the Plasmonic Microscopy

Typical structural sizes of photonic devices in dielectric materials are much greater than those of electronic devices. Surface plasmon polariton (SPP) waves make it possible for photonic devices to function at subwavelength dimensions. Plasmonic components can therefore form the building blocks of chip-based optical device technologies for various applications in imaging [1], spectroscopy [2], chemical/biological detection [3], and others. SPP waves are essentially electromagnetic surface waves confined to the dielectric/metal interface; once purposively manipulated in such devices, they can act as the excitation source in plasmonic microscopy.

One of the main aspects of SPP micro-optics is the possibility to concentrate the SPP field at different points (i.e., SPP sources) of integrated plasmonic structures. These points can be formed by properly arranged surface defects milled into metal films (e.g., curved or straight slits) or by using special incident light beams (e.g., radially or azimuthally polarized beams). Diffraction through subwavelength slits in a metal film can directly couple the momentum of far-field illumination to the SPP, under the condition that [4]

k_spp = k_∥ ± mG    (1)

where k_∥ is the illuminated wave vector component along the metal surface, m is an integer, and the structure momentum G = 2π/Λ is varied by the surface structure pitch Λ. For a flat metallic surface, resonant coupling between the SPP and incident light can be fulfilled in total reflection configurations [4]. Resonant excitation occurs when the illuminated wave vector component along the metal surface, k_∥, matches that of the SPP:

k_∥ = (ω₀/c) √ε_d sin θ_spp = k_spp    (2)

where ω₀ is the plasmon frequency, θ_spp is the surface plasmon resonance (SPR) angle, and ε_d is the dielectric constant of the dielectric material. Every SPP generated from those points obeys the following rule: the wave vector of the SPP is matched by the metallic surface according to its dispersion relation and therefore depends on the frequency of the incident light and the dielectric functions of the metal and the dielectric [5]. By solving Maxwell's equations under the appropriate boundary conditions, for a given metallic material, k_spp can be expressed as

k_spp = k₀ √(ε_m ε_d / (ε_m + ε_d))    (3)

where k₀ is the wave vector of the incident light and ε_m is the permittivity of the metal.

In this chapter, we mainly introduce two kinds of surface plasmon wave modulation which act as functionalized modules in plasmonic microscopy: (i) a high-resolution 2D plasmonic fan-out pattern realized by subwavelength slit arrays and (ii) an SPP interference pattern formed by optical vector-vortex beams.

(i) High-resolution 2D plasmonic fan-out pattern [6]
As depicted in Fig. 1a, our proposed structure consists of a silver (Ag) thin film, subwavelength slit arrays as SPP generators, and quartz glass as the substrate. The thickness of the Ag film is 100 nm, and the pitch of the slit-array structure is 610 nm in order to match the SPP momentum for normally incident illumination at a wavelength of 633 nm, as described by Eq. 1. The optimized width of each slit is 265 nm in order to obtain the highest SPP generation efficiency. The structure is fabricated by electron-beam lithography (EBL) followed by thermal evaporation and lift-off processes. A scanning electron microscopy (SEM) image of the structure is shown in Fig. 1b. Separation between two

Fig. 1 (a) Schematic of the subwavelength slit-array structure. (b) SEM image of the structure. From Ref. [6]. Permission from Optical Society of America


Fig. 2 2D near-field images of the electric field distributions for polarization along (a) the diagonal direction, (b) the x direction, and (c) the y direction. Insets show the SPP field intensity distribution in the structure's center area. The white arrows indicate the incident polarization direction. From Ref. [6]. Permission from Optical Society of America

parallel slit arrays is set as 6 μm, which is much smaller than the SPP propagation length on Ag, in order to sustain the uniformity of the fan-out pattern. Figure 2a shows the experimental NSOM result for a 9 × 18 plasmonic fan-out array. When linearly polarized light (aligned in the diagonal direction) is incident on the sample, the four perpendicular counter-propagating SPP waves in both ±x and ±y directions interfere with each other, and a 2D standing SPP pattern, assembled into the fan-out dot array, is generated in the center area of the structure. When linearly polarized light along the x or y direction is used instead, a 1D interference pattern forms in the ±x or ±y direction, as shown in Fig. 2b and c, respectively. The plasmonic fan-out spots, with a full width at half maximum (FWHM) of 0.34λ₀, are optimized through various design parameters associated with the subwavelength slits as well as the polarization states.

(ii) SPP interference pattern formed by optical vector-vortex beams [7, 8]
The concept of an "optical vortex" (OV) was first introduced by Nye and Berry [9] in 1974. The optical phase of these modes varies by 2πl when circling the beam axis once, where l is an integer, positive or negative, called the vortex topological charge. On the axis, that is, at the core of the vortex, the phase is undefined and the optical field must vanish (for nonzero l), thus giving rise to the characteristic doughnut shape of the beam intensity cross section (see Fig. 3). Optical vortex beams are characterized by a helical wave front, as shown in Fig. 3.
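As a side check, the momentum-matching relations in Eqs. 1–3 can be evaluated numerically. The sketch below assumes a literature value for the silver permittivity near 633 nm and a glass coupling index of 1.515 — neither is specified in this chapter — and reproduces a grating pitch close to the 610 nm design value quoted above.

```python
import numpy as np

# Numerical check of Eqs. 1-3. The Ag permittivity and prism index are
# assumed literature values, not parameters from this chapter.
lam0 = 633e-9                  # free-space wavelength (m)
k0 = 2 * np.pi / lam0          # free-space wave number
eps_m = -18.3 + 0.5j           # Ag permittivity near 633 nm (assumed)
eps_d = 1.0                    # air above the film

# Eq. 3: SPP wave vector on the metal/dielectric interface
k_spp = k0 * np.sqrt(eps_m * eps_d / (eps_m + eps_d))
lam_spp = 2 * np.pi / k_spp.real            # SPP wavelength

# Eq. 1 at normal incidence (k_parallel = 0, m = 1): the grating must
# supply the full SPP momentum, so the pitch equals the SPP wavelength.
pitch = 2 * np.pi / k_spp.real

# Eq. 2: resonance angle for prism (total-reflection) coupling
n_prism = 1.515                # glass index (assumed)
theta_spr = np.degrees(np.arcsin(k_spp.real / (k0 * n_prism)))

print(f"SPP wavelength / grating pitch: {lam_spp * 1e9:.0f} nm")
print(f"prism-coupling SPR angle: {theta_spr:.1f} deg")
```

With these assumed permittivities the pitch comes out near 615 nm, consistent with the 610 nm used for the fabricated slit arrays.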


Fig. 3 Schematics of the vortex-beam optical field. Shown are the wave front helical structures for vortex topological charges l = 1 (a, b) and l = 3 (c, d). In the latter case, the wave front is composed of three intertwined helical surfaces, here shown in different colors for clarity. In the last example, the associated doughnut-shaped transverse intensity distribution is shown

Fig. 4 Three examples of cylindrical vector beams. Red arrows indicate the local state of polarization, which is linear, but its orientation changes with azimuthal angle

Vector beams have a spatially variant state of polarization. Radially polarized (RP), azimuthally polarized (AP), and spirally polarized (SP) beams are particular cases of this kind of cylindrical vector beam (CVB), as schematically illustrated in Fig. 4. The OV can be conveniently generated from an ordinary light beam by modulation with a spiral phase plate (as shown in Fig. 5). In Fig. 5, a Gaussian laser beam is modulated by a phase of exp(ilφ), where l denotes the topological charge. It can also be generated by diffraction on a suitable pitchfork hologram


Fig. 5 Schematic of the spiral phase plate; in this case, the incident light is a Gaussian beam

displayed on a spatial light modulator (SLM) [8]. The optical vortex has an intensity profile consisting of a primary intensity ring accompanied by concentric outer rings of diminishing intensity. The radius R_l of the primary intensity ring has been shown to depend on the topological charge, R_l ∝ (l + 1)^(1/2) for l > 0. Radial polarization is the sum of two circularly polarized beams, the first with a left spiral phase pattern (known as a vortex beam of topological charge l = +1) and the second with the opposite, right spiral phase pattern (i.e., topological charge l = −1). Further, a variety of techniques produce radial and azimuthal polarization, including interferometric systems [10], specially designed subwavelength structures [11], liquid crystal devices [12], and inhomogeneous birefringent elements named Q-plates [13]. Commercial solutions are available, such as a radial polarization converter [14] and patterned polarizers. In this chapter, we mainly choose the wave train compression method to produce AP/RP beams of high purity [15]. Basically, a spiral phase plate with the functionality of wave train compression and a radial/azimuthal polarization selector (i.e., phase mask) is used to transform linearly polarized light into the CVB. A great deal of attention has recently been given to practical applications of optical vector-vortex beams, based on the fact that they can produce very small focal spots. When incorporated in a plasmonic configuration, the small focal spots can generate even smaller focused SPP spots, which can improve spatial resolution, for instance, in the plasmonic microscopy described herein. As shown in Fig. 6, when the vector-vortex beam converges toward the geometric focus, it gives rise to a diffraction-limited spot containing a large spectrum of wave vectors limited by the numerical aperture (NA) of the lens, including two sets of diametrically opposed plane waves with incident angles of ±θ_spp; thus two counter-propagating SPP waves of ±k_spp are generated.
In this case, a 45-nm gold film is deposited on a glass substrate. The incidence wavelength is 532 nm, and the theoretical SPR angle θ_spp is approximately 47°. R_l represents the radius of the primary


Fig. 6 (a) Illustration of SPP generated by OV beams focused on the Au thin film under the resonant condition. (b) Numerical simulation result for the surface plasmon intensity excited by an optical vortex beam at an Au/air interface. (c) SPP virtual probe generated through radially polarized beam illumination. From Ref. [7]. Permission from Optical Society of America

intensity ring of the CVB. The fringe period λ_spp/2 ≈ 242 nm in Fig. 6b is in agreement with the analytical result from Eqs. 2 and 3. Figure 6c shows the SPP field generated by radial polarization illumination, which exhibits a very sharp spot and thus can be used as an SPP virtual probe in plasmonic microscopy. In the experiments on SPP excitation by vector-vortex beams, an SLM was used to generate an OV beam of λ = 532 nm (20 mW). Fluorescent polystyrene microspheres in air (diameter of 44 nm, peak emission at 560 nm, Molecular Probes) are excited by the localized SPP. The fluorescence emission


Fig. 7 (a) Computer-generated hologram, (b) OV beam, (c) excitation of fluorescence by the SPP generated by the OV beam (with metal), and (d) excitation of fluorescence by the OV beam (without metal). From Ref. [8]. Permission from American Institute of Physics

is collected by a total internal reflection fluorescence (TIRF) lens, and the emitted light images are relayed to a charge-coupled device (CCD) camera. The encoded topological charge of the phase mask, shown in Fig. 7a, is designed to match the SPP resonance criterion. Figure 7b, c, and d show the OV beam, the excitation of fluorescence by the SPP generated by the OV beam (with metal), and the excitation of fluorescence by the OV beam (without metal), respectively. Under the SPP resonance condition, we observe that fluorescence is excited not only around the ring forming the circumference of the circle bounding the vortex core but also from fluorescent polystyrene microspheres within the core itself.
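The spiral-phase modulation exp(ilφ) described above is easy to sketch numerically. The toy example below (grid size and beam waist are illustrative choices, not experimental parameters) builds a charge-3 vortex from a Gaussian beam and confirms the dark core of the doughnut in the focal plane.

```python
import numpy as np

# Toy sketch of OV generation by a spiral phase plate (cf. Fig. 5).
# Grid size and beam waist are illustrative, not the chapter's values.
N = 256
x = np.linspace(-4, 4, N)
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)
PHI = np.arctan2(Y, X)

l = 3                                   # topological charge
gauss = np.exp(-R**2)                   # incident Gaussian amplitude
vortex = gauss * np.exp(1j * l * PHI)   # field after the spiral phase plate

# Far field (focal plane) via Fourier transform of the masked beam
far = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(vortex)))
I = np.abs(far)**2

center = I[N // 2, N // 2]              # on-axis intensity (vortex core)
peak = I.max()                          # intensity on the doughnut ring
print(f"on-axis / peak intensity: {center / peak:.2e}")
```

The on-axis intensity vanishes (up to floating-point error) for any nonzero l, which is the doughnut-shaped cross section discussed above.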


Plasmonic Microscopy for Imaging, Sensing, Trapping, and Raman Spectroscopy

Wide-Field Super-Resolution SPP Standing Wave Illumination Fluorescence Microscope with Plasmonic Structures/Optical Vortices

The theoretical basis for the resolution enhancement of the SPP standing wave illumination fluorescence microscope (SPP-SWIFM) is the formation of a modulated excitation field containing super-diffraction-limited spatial frequency components [16, 17]. In order to realize a two-dimensional enhanced SPP standing wave fluorescence image, six intermediate fluorescence images taken with x- and y-directional phase-shifted standing wave illumination are necessary [16]. Once the six fluorescence images are obtained through the microscope by successive phase modulation of the excited SP waves, they are superimposed to reconstruct the final super-diffraction-limited image. With a subwavelength-sized slit-array coupling geometry or an OV, an SPP standing wave interference pattern can be generated laterally across a sample. The SPP field then excites the fluorescent stains to form an image, which is collected by a traditional microscope. In this section, fluorescent polystyrene microspheres (diameter 20 nm, peak emission at 645 nm) are deposited on the metal surface as nanoscale targets, and the wavelength of the beam is 633 nm. A schematic of the proposed SPP-SWIFM based on the plasmonic structure is shown in Fig. 8. The experimental platform is identical to the system described in the Introduction. The SPP standing wave patterns are generated by focusing the incident laser beam at the back focal plane of the oil-immersion objective (NA = 1.42). Fluorescence imaging is collected using the same objective and captured by a CCD camera. A notch filter blocks the illumination light from entering the CCD. A half-wave plate is rotated to generate perpendicular one-dimensional standing wave patterns in the central area of the grating structures.
The binary phase plate, made of rectangular-shaped SU-8 photoresist with a uniform thickness of 350 nm, is inserted into the optical path corresponding to one of the grating slit arrays to shift the initial phase of the excited SPP. In this approach, a 2π/3 phase shift is chosen for the highest signal-to-noise ratio [18]. As a result, the SPP standing wave fringes shift by 1/3 of a pitch. From NSOM measurements, we observe standing waves along two perpendicular directions, which offers a unique opportunity to realize two-dimensional resolution improvement. The FWHM of the SPP fringes is 155 ± 5 nm. The intensity contrast of the SPP interference in the center area is close to 0.87, and the overall area still presents a contrast larger than 0.68. In the fluorescence imaging experiment, the fluorescence emission near the metal surface is directly coupled back at the SPP angle, which results in a doughnut-shaped point spread function (PSF) [19]. Background emission from fluorophores far away from the silver surface is not detected because they do not couple with the SPP. A deconvolution algorithm is applied to convert the


Fig. 8 A schematic diagram of the SP-SWF microscope. From Ref. [16]. Permission from Springer

original doughnut-shaped PSF into a single-peaked PSF. Then, six images with phase-shifted SPP standing wave illumination are superimposed to obtain the high-resolution image. From the experimental result in Fig. 9b2, the FWHM of the two-dimensional PSF of the SPP-SWIFM is approximately 172 ± 5 nm (0.28λ_spp). Compared with the standard high-NA fluorescence image in Fig. 9a2, the SPP-SWIF image achieves a twofold resolution improvement and lower background noise owing to the high-contrast SPP standing wave fringes. Another high-resolution SPP-SWIFM is based on the plasmonic standing wave generated by an OV [20]. Figure 10b shows a schematic diagram of the experimental setup. The OV is produced by SLM modulation. In order to generate the resolution-enhanced SPP-SWIFM image, at least three intermediate standing wave fluorescence images need to be captured at their respective phases, as described for standing wave total internal reflection fluorescence (SW-TIRF) imaging [18]. The corresponding phase shift generated by the OV (l = 1–4) is approximately equal to 0, 2π/5, 4π/5, and 6π/5, respectively.
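The arithmetic behind combining phase-shifted standing-wave images can be illustrated in one dimension. In the toy sketch below (the sample profile and fringe period are illustrative, not the chapter's data), three raw images taken with fringe phases 0, 2π/3, and 4π/3 are combined so as to separate the uniform wide-field term from the fringe-modulated order that carries the down-shifted high spatial frequencies.

```python
import numpy as np

# Toy 1D model of phase-stepped standing-wave illumination.
x = np.linspace(0, 10e-6, 2048)           # 10 um field of view
k_fr = 2 * np.pi / 500e-9                 # fringe wave number (assumed)
sample = np.exp(-((x - 5e-6) / 1e-6)**2)  # fluorophore density (toy)

phases = [0, 2 * np.pi / 3, 4 * np.pi / 3]
raw = [sample * (1 + np.cos(k_fr * x + p)) for p in phases]

# Plain sum of the three raw images: the fringes cancel, leaving a
# uniform (wide-field) exposure, 3 * sample.
widefield = sum(raw)

# Phase-weighted sum isolates the fringe-modulated order:
# (2/3) * sum_n D_n * exp(-i*phi_n) = sample * exp(i * k_fr * x),
# so its modulus recovers the sample density exactly.
order = (2 / 3) * sum(r * np.exp(-1j * p) for r, p in zip(raw, phases))
err = np.max(np.abs(np.abs(order) - sample))
print(f"max reconstruction error of |order|: {err:.2e}")
```

In the real SPP-SWIFM the modulated order is additionally shifted back in frequency space and merged with the wide-field band, but the phase-stepping separation shown here is the core step.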


Fig. 9 (a1) Experimental results of standard high-NA fluorescence images. (b1) Reconstructed two-dimensional enhanced-resolution SPP-SWF image. (a2, b2) Cross sections of PSF profiles at a selected region of interest from (a1) and (b1). From Ref. [16]. Permission from Springer

In the experiment, we observe doughnut-shaped images in Fig. 11a1 when the fluorescence excitation light couples back to the CCD via the metal surface. Therefore, the deconvolution algorithm is applied to convert the original doughnut-shaped PSFs into single-lobed PSFs, as shown in Fig. 11b1, using the surface plasmon-coupled emission (SPCE) PSF kernel [21]. Subsequently, after application of the SW-TIRF algorithm [18], the resolution-enhanced SPP-SWIFM image is shown in Fig. 11c1. The PSF profile in Fig. 11c2 demonstrates that the FWHM of the SPP-SWIFM is more than a factor of 2 narrower than that of the deconvolved surface plasmon resonance fluorescence (SPRF) PSF in Fig. 11b2.
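Richardson–Lucy deconvolution of the kind applied above can be sketched in a few lines. This 1D toy uses a Gaussian PSF and two point emitters as stand-ins for the actual SPCE kernel and fluorescence data; it shows how the multiplicative iteration sharpens a blurred image while preserving nonnegativity.

```python
import numpy as np

def richardson_lucy(data, psf, iters=50):
    """Plain 1D Richardson-Lucy iteration (noiseless toy version)."""
    est = np.full_like(data, data.mean())       # flat initial estimate
    psf_flip = psf[::-1]                        # correlation kernel
    for _ in range(iters):
        conv = np.convolve(est, psf, mode='same')
        ratio = data / np.maximum(conv, 1e-12)  # avoid division by zero
        est = est * np.convolve(ratio, psf_flip, mode='same')
    return est

# Toy data: two point emitters blurred by a Gaussian PSF (stand-ins for
# the SPCE kernel and the measured image).
xpsf = np.arange(201)
psf = np.exp(-0.5 * ((xpsf - 100) / 6.0)**2)
psf /= psf.sum()
obj = np.zeros(400)
obj[180] = obj[220] = 1.0
blurred = np.convolve(obj, psf, mode='same')

restored = richardson_lucy(blurred, psf)
print(f"blurred peak:  {blurred.max():.3f}")
print(f"restored peak: {restored.max():.3f}")  # higher: peaks sharpened
```

The published pipeline additionally uses the measured SPCE kernel and a linear variant of the algorithm; this sketch only illustrates the iteration itself.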



Fig. 10 (a) Optical configuration of the SPP generated by an OV focused on the Ag thin film. The interference pattern is used to excite the fluorescent beads deposited in the dark core of the OV. (b) Schematic diagram of the experimental setup. From Ref. [20]. Permission from American Institute of Physics

Supersensitivity and Super Dynamic Range Biosensors Based on Cylindrical Vector Beam

Conventional SPR sensing techniques are primarily based on intensity, angular, and spectral interrogation. The common drawback of these techniques is their limited detection resolution, around the order of merely 10⁻⁶ refractive index units (RIU) [22]. Recently, measurement of the SPR phase shift has been demonstrated to improve sensitivity significantly. For example, differential phase measurement in a Mach–Zehnder interferometer has achieved 5.5 × 10⁻⁸ RIU [23]. It is noted, however, that the steep phase change occurs only in a small region within the plasmon resonance dip, where the limited dynamic range of 10⁻³–10⁻⁴ RIU is too narrow for practical applications. Recently, it was demonstrated that a detection resolution of 2.2 × 10⁻⁷ RIU and a dynamic range of 0.06 RIU can be achieved simultaneously by combining phase detection and angular interrogation [24]. These techniques are all based on differential phase measurement between p- and s-polarizations in an attenuated total reflection (ATR) configuration, which is confined to a fixed angle or a relatively small angular range. We


Fig. 11 (a1) Original SPRF image with doughnut-shaped PSF, (b1) deconvolved SPRF image with the SPCE PSF kernel, and (c1) SPP-SWIFM image after applying the SW-TIRF algorithm and linear Richardson–Lucy deconvolution. (a2–c2) Comparison of PSF profiles at a selected region of interest in (a1–c1). From Ref. [20]. Permission from American Institute of Physics

propose a new scheme of phase-sensitive SPR (pSPR) that achieves an ultrawide dynamic range without sacrificing sensitivity [38]. In this section we introduce a novel pSPR biosensor based on differential phase measurement between two cylindrical vector beams, namely, RP and AP beams. The proposed pSPR scheme measures the differential phase between radial and azimuthal polarizations in an inverted microscope. Since the signal beam focused by a TIRF lens contains the entire angular range from 0° to the maximum angle of the lens, the dynamic range can be further improved compared with the ATR configuration. Moreover, with the technique of differential phase measurement between RP and AP employed in our system, a very high sensitivity is obtained simultaneously. The schematic of the experimental setup is illustrated in Fig. 12. In this scheme, a linearly polarized 780-nm laser source is used to generate the required half-RP and half-AP beam. This is followed by a Michelson interferometer in which two interferometers (for the radial and azimuthal polarizations) operate in parallel. In the signal arm, focused by the TIRF lens (Olympus 100×, NA 1.49), the RP beam (TM polarized) is modified by the surface plasmon wave after exciting the SPP, while the AP beam (TE polarized) is merely reflected by the metal–dielectric interface without significant disturbance. In the reference arm, both RP and AP beams are directly reflected by the reference mirror, which is mounted on the end of a piezoelectric transducer (PZT) with a periodic linear phase shift. The phase changes of both the radial and azimuthal polarizations at the SPR angle are measured simultaneously. The sensor surface is a 55-nm gold film. In the output path, a polarization analyzer is used to separate the exit beam into RP and AP light.
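The near-zero TM reflectance at the SPR angle that the RP beam probes can be modeled with a standard three-layer (glass/gold/sample) Fresnel calculation. In the hedged sketch below, the gold permittivity at 632.8 nm is an assumed literature value; the layer parameters otherwise follow the simulation settings quoted later for Fig. 13 (NA 1.49 side index 1.515, 45-nm gold, aqueous sample).

```python
import numpy as np

# Three-layer TM (p-polarization) Fresnel model of the SPR reflectance
# dip; the dark ring in the back focal plane sits at this minimum.
# eps_m is an assumed literature value for gold at 632.8 nm.
lam0 = 632.8e-9
k0 = 2 * np.pi / lam0
eps_p = 1.515**2          # glass / immersion-oil side
eps_m = -11.7 + 1.2j      # gold at 632.8 nm (assumed)
eps_s = 1.333**2          # water sample
d = 45e-9                 # gold film thickness

def reflectance_tm(theta_deg):
    th = np.radians(np.asarray(theta_deg, dtype=float))
    kx2 = eps_p * np.sin(th)**2
    # normal wave-vector components in each layer (decaying branch)
    kz = [k0 * np.sqrt(np.asarray(e - kx2, dtype=complex))
          for e in (eps_p, eps_m, eps_s)]
    def rp(kzi, kzj, ei, ej):          # TM single-interface coefficient
        return (ej * kzi - ei * kzj) / (ej * kzi + ei * kzj)
    r12 = rp(kz[0], kz[1], eps_p, eps_m)
    r23 = rp(kz[1], kz[2], eps_m, eps_s)
    phase = np.exp(2j * kz[1] * d)     # round trip through the film
    r = (r12 + r23 * phase) / (1 + r12 * r23 * phase)
    return np.abs(r)**2

theta = np.linspace(30, 85, 2000)
R = reflectance_tm(theta)
theta_dip = theta[np.argmin(R)]
print(f"SPR dip (dark-ring) angle: {theta_dip:.1f} deg, R_min = {R.min():.3f}")
```

With these assumptions the dip falls in the low-70° range for a water sample; the AP (TE) beam sees no such dip, which is the contrast the differential phase measurement exploits.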


Fig. 12 Experimental setup for measuring the differential phase between RP and AP in an inverted microscope configuration. From Ref. [38]. Permission from American Institute of Physics

Two synchronized CCDs (1280 × 960 pixels) are used to capture the signal. The digitized intensity data are then processed in a personal computer to deliver the differential phase information with a fringe analysis and phase extraction program. In theory, the signal beam focused by a TIRF microscope objective contains the entire angular range from 0° to the maximum angle given by the numerical aperture, leading to a dynamic range of 0.41 RIU, which is over seven times wider than the best result of the ATR pSPR sensor. Moreover, with the technique of differential phase measurement between radial and azimuthal polarizations employed in our configuration, a high sensitivity of 9.05 × 10⁻⁸ RIU/0.1° can in principle be achieved simultaneously [25]. It is experimentally verified that a high sensitivity of 7.385 × 10⁻⁷ RIU/0.1° is achieved and that the system has the potential to reach a dynamic range as wide as 0.35 RIU. The simulated interference patterns of the RP and AP beams are shown in Fig. 13a and b, respectively. Figure 13c and d show the cross sections of Fig. 13a and b, respectively, revealing the abrupt intensity break of the focused RP interference pattern produced by the excitation of the SPP, and the relatively constant interference intensity distribution of the AP beam, for which the SPP is not excited. Figure 14a and b show the images captured with the CCDs from the back focal plane of the lens for samples of air (n = 1) and water (n = 1.333), respectively. Since the position of the dark ring corresponds to the SPR angle, changing the sample medium (and hence the refractive index) shifts the dark ring, i.e., changes its radius. By combining the detection of dark-ring radius and SPR



Fig. 13 Simulated interference patterns of (a) the RP beam and (b) the AP beam. The corresponding intensity curves of (a) and (b) are shown in (c) and (d), respectively. The blue circle in (c) indicates the intensity break in the interference pattern at the SPR angle where the dark ring is located. Simulation parameters are NA = 1.49, λ = 632.8 nm, sample RI = 1.3 RIU, and gold film thickness = 45 nm. From Ref. [38]. Permission from American Institute of Physics

phase, we can achieve a large dynamic range while keeping the high sensitivity. The experimentally captured interference patterns, shown in Fig. 14c and d for samples of air and water, respectively, are in good agreement with the simulated results in Fig. 13. Given the rotational symmetry about the optical axis, the probe RP beam reflected off the metal–dielectric interface contains a dark ring in the angular region corresponding to the near-zero reflectance at the SPR angle. Consequently, with the amplitude and angle modulation of the RP beam at the resonance angle, this dark ring persists in the interference pattern of the RP beam (Figs. 13a, b and 14c, d), which is the biggest difference from that of the AP beam. To verify the sensitivity of the proposed system, NaCl solutions with concentrations ranging from 0.05 % to 0.25 % in increments of 0.05 % were prepared. According to the empirical formula for the refractive index of NaCl solutions, n = 1.3331 + 0.00185 × C (%), where C is the concentration of the NaCl solution, the corresponding RI lies between 1.3331925 and 1.3335625. Figure 15a shows typical waveforms obtained from the experimental setup during the periodic movement of the PZT. A typical response for different concentrations of NaCl solutions is shown in Fig. 15b, where each data point was obtained by averaging the differential phases extracted from four


Fig. 14 The reflected intensity distribution at the back focal plane of the objective lens for the signal beam (the radially polarized beam) for samples of (a) air (refractive index 1.00 RIU) and (b) water (refractive index around 1.3331 RIU). (c) and (d) are interference patterns of the radially polarized beam corresponding to (a) and (b), respectively. The dark rings resulting from SPP excitation are highlighted with red circles. The incident wavelength is 632.8 nm, and the gold film deposited on the glass substrate has a thickness of approximately 45 nm. From Ref. [38]. Permission from American Institute of Physics

successive measurements. We can see that the phase undergoes an abrupt change of 50.1° within a tiny refractive index range of 0.00037 RIU, resulting in a high sensitivity of 7.385 × 10⁻⁷ RIU/0.1°. To perform the differential phase measurement, multiple synchronized meters are required, which leads to greater difficulty in component alignment and in the data processing procedures. Moreover, compared to p- and s-polarizations, which are split simply by a Wollaston prism, the separation of RP and AP beams requires complex polarization components that introduce more deviations. These drawbacks strongly limit the accuracy of the SPR phase measurement as well as the maximum sensitivity. We therefore propose a novel plasmonic petal-shaped vector beam for differential phase measurement in a microscopic SPR biosensor [26], as shown in Fig. 16, realizing an exact common-path differential interferometer. Such a beam is sectioned into alternating RP and AP components across the beam cross section, which enables measurement with a single beam instead of separate measurements for the RP and AP beams, and thus greatly simplifies the excitation, detection, and data processing procedures and increases the accuracy of the phase measurement. We verified experimentally the performance of the proposed


Fig. 15 (a) Typical waveforms obtained from the experimental setup during the periodic movement of the PZT. When detecting a certain refractive index of a sample, the information is extracted from a certain circle of pixels where the dark ring is located. With the periodic movement of the PZT driven by a triangular wave, the interference rings expand or shrink periodically, generating two sinusoidal waveforms on the pixels corresponding to the SPR angle. (b) Phase response for NaCl solutions with concentrations ranging from 0.05 % to 0.25 %. Here, each data point was obtained by averaging the differential phases extracted from four successive measurements. From Ref. [38]. Permission from American Institute of Physics


Fig. 16 (a) Polarization and intensity distribution of plasmonic petal-shaped vector beam. (b) Intensity distribution of reflection captured by CCD camera at the back focal plane of the objective lens

system and achieved an extremely high sensitivity of 9.5 × 10⁻⁸ RIU/0.1°, which is about an eightfold improvement over our previous result.
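The empirical NaCl refractive-index relation used in the sensitivity test above can be checked with a short script; the endpoint values and the 0.00037 RIU span match the figures quoted in the text.

```python
# Empirical relation from the text: n = 1.3331 + 0.00185 * C,
# with C the NaCl concentration in percent.
def nacl_index(c_percent):
    return 1.3331 + 0.00185 * c_percent

lo, hi = nacl_index(0.05), nacl_index(0.25)
print(f"RI range: {lo:.7f} to {hi:.7f}")  # 1.3331925 to 1.3335625
print(f"span: {hi - lo:.7f} RIU")         # 0.0003700 RIU
```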

Trapping Metallic Particles by Plasmonic Tweezers Based on Cylindrical Vector Beam

Scattering forces in focused light beams push metallic particles away. Thus, trapping metallic particles with conventional optical tweezers, especially those of Mie particle


X. Yuan and C. Min

[Fig. 17 labels: (a) illuminating light, SPP virtual probe, gold particles, gold film, glass, oil-immersion objective lens, RP beam, x–y–z axes; (b) laser, reflector, polarizer, telescope, spiral phase plate, filter, λ/4 plate, dichroic mirror, lens, objective lens, CCD, computer, PR]
Fig. 17 (a) Schematic of trapping metallic particles by a SPP virtual probe. Bottom yellow arrows indicate the polarization directions of the RP beam; blue arrows indicate the direction of the force on each gold particle in the SPP virtual probe field. (b) Experimental setup of the focused plasmonic trapping system. The incident wavelength is λ0 = 1064 nm, the thickness of the thin gold film is 45 nm, and the refractive index of the glass substrate is 1.515. From Ref. [28]
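The Kretschmann-type geometry in the caption can be checked numerically: the SPP effective index on the metal–water interface sets both the SPP wavelength and the dark-ring (SPR) angle inside the glass. A hedged sketch; the gold permittivity at 1064 nm below is an assumed literature-style value, not a number from this chapter:

```python
import numpy as np

# Rough check of the caption's geometry: SPP effective index on a
# gold-water interface and the corresponding SPR angle in the glass.
# eps_gold is an assumed value, not taken from the chapter.
eps_gold = -48.0 + 3.6j          # assumed Au permittivity near 1064 nm
n_water, n_glass = 1.33, 1.515   # water superstrate, glass substrate
lam0 = 1064e-9                   # incident wavelength (m)

eps_d = n_water ** 2
n_spp = np.sqrt(eps_gold * eps_d / (eps_gold + eps_d))  # SPP effective index
lam_spp = lam0 / np.real(n_spp)                         # SPP wavelength
theta_spr = np.degrees(np.arcsin(np.real(n_spp) / n_glass))
print(f"SPP wavelength ~{lam_spp * 1e9:.0f} nm, SPR angle ~{theta_spr:.1f} deg")
```

With these assumed permittivities the SPP wavelength comes out in the hundreds of nanometers, the same regime as the 754 nm value quoted in the text; the exact figure depends on the permittivity model used.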

size, is difficult. Here we introduce a mechanism by which metallic particles are attracted and trapped by plasmonic tweezers when surface plasmons are excited and focused by a RP beam in a high-numerical-aperture microscopic configuration. To the best of our knowledge, the experiments described here represent the first demonstration of plasmonic trapping of metallic particles with diameters above 500 nm. Compared with other plasmonic tweezers based on metallic microdisks or bowtie structures [27], this setup benefits from the structureless excitation of SPP in a dynamic configuration, thereby reducing the need to fabricate nanometer-sized complex structures [28]. The schematic of the plasmonic tweezers is shown in Fig. 17a. The theoretical full width at half maximum of the virtual probe is 261 nm (0.245λ0, where λ0 = 1064 nm is the incident wavelength), and the wavelength of the excited SPP is 754 nm. Gold particles diffused in water and injected into an in-house-fabricated chamber on a gold film are manipulated by the plasmonic virtual probe. The detailed experimental setup is presented in Fig. 17b. Sequential snapshots extracted from CCD video (Fig. 18a and b) reveal that the plasmonic virtual probe attracts nearby particles as soon as the SPP field encounters them. Particles can also be dragged by the virtual probe over large distances (>5 μm from the center), indicating that the force is sufficiently strong for practical manipulation. Further trapping experiments with dielectric and metallic particles in the diameter range 0.5–2.2 μm demonstrate that the plasmonic tweezers can trap both particle types over an effective size range from nanometers to micrometers. Opposing behavior is observed when the metal-coated glass plate is replaced by a bare glass plate (Fig. 18c): particles are pushed away when the focused optical beam falls on the plate. Meanwhile, the polarization rotator (PR) system (Fig. 17b) is able to switch between RP and AP; as the SPP are


Fig. 18 Comparison of plasmonic and optical tweezers experiments. (a) Successive images of gold particles (diameter of 1 ± 0.1 μm) trapped by the focused plasmonic tweezers, recorded using a CCD camera. The image sequence runs from top left to bottom right with a time interval between images of 1/5 s. The black arrow indicates the direction of particle motion, and the black cross


sensitive to the polarization of the incident light, one can therefore control the presence of the virtual probe. As seen in Fig. 18d and e, about ten Au particles can be manipulated dynamically to pattern the letter “N” and a triangle “Δ.” To understand the experimental observations, we performed a numerical analysis, based on an FDTD calculation and the Maxwell stress tensor (MST) method [29], of the forces involved in these two experimental configurations. The resultant electric field distributions and Poynting vectors, together with the total, gradient, and scattering forces exerted on the particle, are presented for comparison in Fig. 19. In Fig. 19a and b, typical field patterns are shown for the focused plasmonic and optical tweezers in the horizontal (x–y) plane above the gold–water and glass–water interfaces, respectively. The total force, composed of gradient and scattering forces (Fig. 19c and d), points mainly leftward and rightward in the horizontal direction, respectively, in agreement with the experiments. As displayed in Fig. 19e and f, both gradient forces tend to attract particles toward the center, although the plasmonic tweezers produce a much stronger force due to the SPP-enhanced electric field. In Fig. 19g and h, the scattering force for the optical tweezers opposes the gradient force, whereas the scattering force for the plasmonic tweezers augments the gradient force in the horizontal direction. As plotted in Fig. 20a, the black dots represent the experimental data points; they follow the same trend as the numerical results when close to the virtual probe. In Fig. 20b, we also plot the one-dimensional distribution of forces along the vertical direction (Fz), which differs from the horizontal forces (Fx): the former increases exponentially in magnitude until close to the gold film, while the latter reaches a maximum and then decreases in magnitude close to the SPP virtual probe center.
For Fx, both force components (gradient and scattering) follow the same trend, while they show opposite behavior in Fz. In addition, OV-based plasmonic trapping experiments with gold micrometer-sized particles can also be performed, and the particle rotation is visualized. This is used for direct visualization of the orbital angular momentum of the plasmonic vortices (PV) by rotating gold particles circumferentially around the PV center [30], as shown in Fig. 21. When the topological charge of the radially polarized OV changes from 2 to 5, the particle rotation changes accordingly, with an enlarged radius and increased speed: the rotation radius under charge 2 and 5 is 0.44 and 0.77 μm, with rotation speeds of 2.0 and 2.3 μm/s, respectively.
Fig. 18 (continued) indicates the position of the plasmonic virtual probe. (b) Successive images of gold particles (diameter of 2.2 ± 0.1 μm) trapped by the focused plasmonic tweezers with a time interval of 1/2 s. (c) Successive images of gold particles (diameter of 1 ± 0.1 μm) pushed by optical tweezers with a time interval of 21/15 s. The incident beam and experimental configuration are the same as for (a) but without the metal film on the chamber surface. The light spot indicated by the black cross moves from left to right, pushing the particle as indicated by a black arrow. (d, e) Patterns of the letter “N” and triangle “Δ” constructed from gold particles (diameter of 1 ± 0.1 μm) in the focused plasmonic tweezers. The length of all scale bars (black line in lower right corner) is 3 μm. From Ref. [28]
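The split into gradient and scattering components discussed above can be illustrated with a toy dipole (Rayleigh) model. This is only a sketch: the Mie-sized particles of the experiment require the full Maxwell-stress-tensor treatment used in the paper, and all values below (permittivities, spot size, offsets) are assumed for illustration:

```python
import numpy as np

# Toy dipole estimate of the two force components: a gradient force pulling
# a small gold sphere toward high intensity, and a scattering force pushing
# it along the propagation direction. All numbers are assumed.
lam0 = 1064e-9
n_med = 1.33
k = 2 * np.pi * n_med / lam0          # wavenumber in water
radius = 50e-9                        # Rayleigh-regime particle radius
eps_p, eps_m = -48.0 + 3.6j, n_med ** 2  # assumed Au and water permittivities

# Clausius-Mossotti polarizability (volume-normalized units)
alpha = 4 * np.pi * radius ** 3 * (eps_p - eps_m) / (eps_p + 2 * eps_m)

w0 = 400e-9                           # assumed Gaussian spot radius
x = 200e-9                            # particle offset from the focus
intensity = np.exp(-2 * x ** 2 / w0 ** 2)       # normalized |E|^2
grad_intensity = intensity * (-4 * x / w0 ** 2)

f_grad = 0.5 * np.real(alpha) * grad_intensity  # toward the focus (negative x)
f_scat = (k ** 4 / (6 * np.pi)) * abs(alpha) ** 2 * intensity  # along +x
print(f_grad < 0, f_scat > 0)
```

Even in this crude model, Re(α) is positive for gold at 1064 nm, so the gradient force is attractive; whether a particle is trapped or expelled then depends on how the scattering force is directed, which is exactly what distinguishes the plasmonic from the plain optical configuration in Fig. 19.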


Fig. 19 Comparison of forces in plasmonic and optical tweezers. Distribution of electric field intensity (background) and Poynting vector (green arrows) in the horizontal x–y plane of (a) the


To explain the experimental results, a further theoretical investigation is conducted with the 3D FDTD method using the full-wave module of the commercial RSOFT software. Figure 22a shows the amplitude distribution of the Ez component, with a primary peak ring accompanied by a series of concentric outer rings of gradually diminishing amplitude; Fig. 22b shows the time-averaged Poynting vectors in the same observation plane, where an azimuthal energy flow can be clearly seen. According to the vector description of the PV, the major component of the Poynting vector should be the azimuthal one, while the longitudinal component should be zero because the evanescent field does not carry any energy flow along the decaying direction. The azimuthal energy-flow pattern in Fig. 22b indicates that an orbital angular momentum (OAM) has been established inside the PV, which enables plasmonic trapping of gold micrometer-sized particles. This plasmonic tweezer system is also used in a particle–film system to observe the surface-enhanced Raman scattering (SERS) signal from a dynamic plasmonic gap mode induced by the hybridization of the OV-based SPP on the film and the localized surface plasmon of the particle [31].
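The ring structure of the Ez amplitude can be sketched with the common Bessel-function model of a plasmonic vortex, Ez ~ J_l(k_spp·r)·exp(i·l·φ): the primary bright ring then grows with topological charge l, consistent with the larger rotation radius observed at higher charge. The model form is our assumption; k_spp uses the 754-nm SPP wavelength quoted earlier:

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind

# Sketch: primary-ring radius of a plasmonic vortex modeled as
# Ez ~ J_l(k_spp * r) * exp(i * l * phi). The model is an assumption.
lam_spp = 754e-9
k_spp = 2 * np.pi / lam_spp

def primary_ring_radius(l, r_max=3e-6, n_pts=30000):
    """Radius of the first (largest) maximum of |J_l(k_spp r)|."""
    r = np.linspace(1e-9, r_max, n_pts)
    return r[np.argmax(np.abs(jv(l, k_spp * r)))]

r2 = primary_ring_radius(2)
r5 = primary_ring_radius(5)
print(f"charge 2: {r2 * 1e6:.2f} um, charge 5: {r5 * 1e6:.2f} um")
```

The charge-5 ring radius from this model lands close to the 0.77 μm orbit reported in the experiment, while the charge-2 value is of the same order as the measured 0.44 μm; the residual discrepancy is expected for so simple a field model.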

Radially Polarized Beam for Propagating Surface Plasmon-Assisted Gap-Mode Raman Spectroscopy
The excitation of plasmon resonances in noble metal nanoparticles (NPs), termed localized surface plasmons (LSP), can produce an electromagnetic field that is localized around the particle surface and tremendously enhanced compared to the incident field [32, 33]. This enhancement is widely accepted as the dominant contribution to SERS, a powerful analytical tool for chemical and biological sensing applications. Pioneering experimental studies demonstrated that nanoparticle–plane gap-mode systems are capable of providing remarkably reproducible SERS-active hot spots [34]. Recent experimental work demonstrated that the emitted Raman radiation is also able to couple to SPs via periodic metallic structures [35] or nanoantennas [36], forming a “beam-shaped” Raman scattering.
Fig. 19 (continued) focused plasmonic tweezers and (b) the focused optical tweezers, where the x–y plane is 50 nm above the gold–water interface in (a) and the glass–water interface in (b), respectively. (c), (e), and (g) show the total force, gradient force, and scattering force, respectively (green arrows), on a gold particle (diameter of 1 μm) in the vertical x–z plane for the plasmonic tweezers. The background is the electric field intensity, while the white lines indicate the spherical particle and the gold film. The particle, 1 μm in diameter, is 50 nm above the gold surface and 300 nm away from the SPP peak at the horizontal center. (d), (f), and (h) show cases similar to (c), (e), and (g), respectively, but for the optical tweezers without the metal film, with the particle 600 nm away from the horizontal center. The white arrow starting from the center of the sphere in (c–h) denotes the resultant force on the particle. The length of all scale bars (white line in lower right corner) is 1 μm. Other simulation parameters are as follows: the incident wavelength is λ0 = 1064 nm, the thickness of the gold film is 45 nm, and the refractive index of the glass substrate is 1.515. From Ref. [28]


Fig. 20 Horizontal and vertical forces versus particle position in plasmonic tweezers. (a) Horizontal force (Fx) and (b) vertical force (Fz) exerted on a metallic particle in the focused plasmonic tweezers at different particle positions. The abscissa represents the distance between the particle center and the central peak of the SPP virtual probe. All solid curves are theoretical results based on FDTD and MST calculations; the black dots represent experimental data. For all theoretical results the particle diameter is 1 μm, and for all experimental results only particles of diameter 1 ± 0.1 μm were chosen. From Ref. [28]

Fig. 21 Successive frames of video recordings that show the rotation of a single gold particle trapped by PVs of l = 2 (top row) and l = 5 (bottom row). The circular arrow denotes the rotation direction, and the dashed circles show the orbital trajectory from Ref. [30]. Permission from Optical Society of America

This is called surface plasmon-coupled emission (SPCE) [37], which is able to improve the collection efficiency of SERS. Herein, we introduce a tightly focused RP beam for a propagating surface plasmon-assisted nanosphere–plane gap-mode SERS system. As shown in Fig. 23, the RP excitation beam was obtained with the approach mentioned above (i.e., by using a spiral phase element and a radial-type analyzer). A high-numerical-aperture (NA) oil-immersion objective lens is used to tightly focus the incident beam onto the sample to excite SPP and to collect the SPCE of SERS. Raman signals in the transmission direction are collected with another objective lens. A black-and-


Fig. 22 (a) Amplitude distributions of the Ez component with topological charge of 2. (b) The time-averaged Poynting vector near the primary ring of the PV from Ref. [30]. Permission from Optical Society of America

Fig. 23 (a) Experimental setup for investigating the emission pattern of SERS. (b) A detailed view of the sample, a sandwiched structure composed of an isolated silver nanosphere (60-nm diameter) immobilized on a thin silver film (55-nm thickness), with 4-mba molecules sandwiched between them. (c) SERS spectrum of 4-mba molecules emitted from a single nanosphere–film junction with and without SPP excitation. The integration time is 5 s. (d) Reflected laser beam profile obtained at the back Fourier plane of the objective lens (NA = 1.49), with a sharp dark ring representing the excitation of SPP from all azimuthal directions. (e) Emission pattern of SERS at the SPP excitation angles, forming an SPCE ring. From Ref. [33]. Permission from American Institute of Physics


white CCD camera (CCD1) is used in the transmission direction, at the back image plane, to obtain the Raman image of individual nanospheres, while a color camera (CCD2) placed at the back Fourier plane records either the reflected laser beam (without a filter before CCD2) or the emission pattern of SERS (with a filter before CCD2, as shown in Fig. 23a with a dashed rectangle). Raman spectra originating from the sample are analyzed with a TEC-cooled spectrometer in both the transmission and reflection directions (CCD2 is replaced with the spectrometer when collecting the Raman spectrum in the reflection direction). The sample is a sandwiched structure composed of an isolated silver nanosphere immobilized on a thin silver film, with 4-mercaptobenzoic acid (4-mba) molecules sandwiched between them (Fig. 23b). It is prepared as follows:
1. A silver film about 55 nm thick is first formed by electron-beam deposition onto a cleaned glass coverslip.
2. The coated substrate is then immersed for around 30 min in a 10⁻³ M ethanolic 4-mba solution to form a self-assembled monolayer of 4-mba molecules, and subsequently rinsed with ethanol and DI water to remove excess molecules from the surface.
3. A droplet of diluted silver colloid (diameter 60 nm, 100 μl, 10⁹ nanospheres/ml) is applied onto the 4-mba/Ag plane and allowed to evaporate naturally, followed by water rinsing and air drying at ambient temperature.
In the experiment, SPP waves are first excited by a tightly focused radially polarized beam. The SPP subsequently interact with the silver NPs on the silver film, leading to a plasmon-hybridized gap mode with an electric field significantly enhanced at the NP–film junction. The enhanced electric field is then used to excite SERS of the 4-mba molecules sitting at the junction (Fig. 23c).
We refer to all the Stokes scattering as marked, since all the Raman peaks change in the same proportion with the excitation intensity and integration time. The significantly enhanced transmittance near the SPP excitation angle results in the well-known field-enhancement effect of SPP. Correspondingly, at the back Fourier plane, as incident radiation near the SPP excitation angle is strongly coupled to SPP at the silver–air interface, a sharp dark ring can clearly be seen in the reflected beam (Fig. 23d), owing to the full-beam p-polarization of radially polarized light. Due to the scattering nature of SERS, the evanescent radiation component with in-plane wave vector equal to kspp is able to couple to SPP and eventually reradiates into the glass side at the SPP excitation angle, forming an SPCE ring at the back Fourier plane (Fig. 23e). We now turn to the role SPP plays in our gap-mode SERS system, which is analogous to the role LSP plays in a conventional SERS system. In a conventional system, the power of the enhanced Raman radiation can be calculated as P ∝ N·σ_SERS·|E_loc/E_0|⁴·|E_0|² [36], where N is the number of Stokes-active scatterers within the hotspot, σ_SERS the scattering cross section, and E_loc and E_0 the amplitudes of the enhanced and incident electric fields, respectively. The contribution from LSP is captured by the fourth-order factor, which is due to the enhancement of both the


Fig. 24 (a) Optical scheme of the propagating surface plasmon-assisted gap-mode system for SERS. (b)–(e) Raman images of silver nanospheres obtained by the CCD camera when the sample is located exactly at the focal plane of the objective lens: (b) with propagating surface plasmon (PSP) excitation and (c) without PSP excitation; (d) demonstration of the on-focal-plane excitation configuration, in which the incident beam is focused onto the sample into a minimum spot; (e) reflected beam obtained by the CCD camera at the back focal plane. The sharp dark ring indicates that the incident beam at this angle is efficiently coupled to the PSP at the silver–air interface. From Ref. [32]. Permission from Springer

incident and emitted light fields. In our system, the incident light is first enhanced by the excitation of SPP at the silver–air interface, as illustrated in Fig. 23b. The emitted Raman radiation is finally enhanced through the SPCE, as shown in Fig. 23c. LSP can be excited with light of appropriate frequency and polarization, irrespective of its wave vector. As a result, the enhancement factors for the incoming and emitted light fields are very close and hence can be consolidated, leading to a fourth-order effect. The excitation condition for an SPP, however, is strongly wave-vector dependent, as expressed by the wave-vector matching condition. This additional condition results in excitation of SPP by the incident light, and emission of the Raman scattering, at fixed angles (i.e., the SPP excitation angles) in the attenuated-total-reflection configuration. Thus, the total power of the Raman radiation collected can be expressed as P ∝ N·σ_SERS·|E_loc/E_SPP|⁴·|E_SPP|²·CE(SPCE). The fourth-order factor describes the enhancement induced by the plasmon-hybridized gap mode, while the last two terms represent the enhancement from SPP, in terms of the SPP excitation with incident light and the collection-efficiency improvement through SPCE, respectively. In addition to collecting the SERS signal in the SPCE configuration, impressive Raman enhancement can also be obtained from air-side detection. Figure 24a shows another gap-mode Raman spectroscopy system based on RP excitation [32]. In this system, a 532-nm radially polarized beam is used as the illumination to generate the tightly focused focal region of SPP. A 45-nm-thick silver film deposited onto a cleaned cover glass by electron-beam evaporation with


Fig. 25 (a) Electric field distribution around the silver nanosphere with 532-nm illumination. An incident beam of 16-μm width was used, which ensures the excitation of PSP. (b) Raman spectra of R6G molecules at various radially polarized beam sizes, with a–f representing increasing beam size. The laser power impinging on the sample is approximately 500 μW. An integration time of 1 s was used to collect the Raman signals. From Ref. [32]. Permission from Springer

60-nm-diameter silver nanospheres assembled on it is used as the sample. Raman signals from rhodamine 6G (R6G) molecules adsorbed on the silver nanospheres were collected with an Ocean Optics TEC-cooled Raman spectrometer (TE65000). A CCD camera was used to capture the Raman images of nanospheres in the transmitted direction or the reflected beam at the back focal plane. A submonolayer of the silver nanoparticles was prepared with most of the particles isolated and uniformly distributed. Figure 24b presents the Raman image of silver nanospheres with PSP excited under the on-focal-plane configuration, captured when the dark ring just appears in the reflected beam as the incident beam size is increased, while Fig. 24c, which gives the Raman image of silver nanospheres without the PSP excited, is captured when the dark ring just disappears as the beam size is decreased. The comparison between Fig. 24b and c clearly indicates that the existence of PSP leads to a significant improvement of the Raman enhancement. The on-focal-plane excitation configuration is illustrated in Fig. 24d, and a sharp dark ring can clearly be seen in the reflected beam in Fig. 24e, showing that the incident beam at this angle is efficiently coupled to the PSP mode. A 2D contour map of the electric field distribution around the silver NP at 532-nm incident wavelength is shown in Fig. 25a. As expected, the electric field is significantly concentrated and enhanced at the NP–plane junction. Raman spectra of R6G molecules adsorbed on silver nanospheres at various radially polarized beam sizes are obtained, as shown in Fig. 25b. A sample with much lower NP density is prepared by reducing the silver colloid concentration and immersion time. As can be seen in Fig. 25b, when the radially polarized beam size is too small to cover the SPR angle, PSP cannot be excited and hence no Raman signals can be detected, owing to the limited Raman enhancement, as indicated by the black curve, which is obtained when the beam size is controlled such that the dark ring at the back focal plane just disappears. As the incident beam size increases (red and green curves), the dark ring

28

[Fig. 26 plot residue; recoverable information: (c) normalized intensity vs. lateral position (μm), measured vs. calculated, FWHM 185 nm; (d) FWHM (nm) counts over 50 measurements, mean vs. calculated]
Fig. 26 Mapping surface plasmon standing waves with gap-mode SERS imaging. (a) Measured plasmonic near-field distribution produced by a tightly focused radially polarized beam (RPB) (a1), compared with the calculated perpendicular (a2) and in-plane (a3) electric field components. (b) The corresponding mappings of surface plasmon fields generated under a tightly focused linearly polarized Gaussian beam (LPGB) (b1), a circularly polarized beam (CPB) (b2), and a linearly polarized optical vortex beam (LPVB) of charge “1” (b3), further verifying the perpendicular-field sensitivity of the proposed method and the capability of the RPB to produce a strongly confined perpendicular field at the center. (c) Cross-section comparison between the measured plasmon field (a1) and the calculated perpendicular field component (a2). (d) Statistical measurement of the lateral field extension of the central spot with 50 randomly selected nanospheres on the metal film, verifying the accuracy and reproducibility of the measurement. The incident wavelength is 532 nm in all cases. The area of each contour map is 4 μm × 4 μm. From Ref. [39]

appears, meaning that the PSP is excited and interacts with the nanospheres on the film surface. The Raman enhancement under these circumstances is significantly improved, and remarkable Raman signals can be detected. A further increase in beam size does not contribute to the Raman enhancement, since the marginal beam cannot couple to the PSP because its incident angle exceeds the SPR angle. The optimal condition occurs when the dark ring has just fully appeared, with the Raman spectrum shown by the green curve. A tenfold magnification of the black curve indicates that the Raman intensity for the smaller beam size is still two times weaker than that of the green curve, which means that introducing propagating surface plasmon excitation based on the TIRF configuration yields an approximately 20-fold improvement in the Raman enhancement. Owing to the confinement and enhancement of the electromagnetic field in the gap between the nanoparticle and the metal film, the Raman signal is significantly enhanced. This field-distribution dependency affords an ingenious approach to mapping the SPP field (Fig. 26) [39]. Herein, we reverse our target from the acquisition of the Raman signal to SPP field mapping; the surface plasmon-assisted gap-mode Raman spectroscopy thus becomes a converter of near-field information to the far field. The intensity of the SERS signal is directly determined by the


perpendicular component of the SPP field, which is its dominant part. With the proposed scheme, the FWHM of the perpendicular component of the SPP is shown to be subwavelength. The plasmonic tweezers for trapping metallic particles mentioned previously are based on exactly the same configuration as the plasmon-assisted gap-mode Raman spectroscopy, except for the signal-collection part. We can therefore employ the plasmonic tweezers to actively control the SERS system. The combination of plasmonic tweezers and SERS provides surface plasmon-assisted gap-mode Raman spectroscopy with a reproducible and dynamic SERS substrate [31]. Compared with immobilized nanoparticles, this design provides real-time dynamic positioning and scanning functions, which are crucial for scanning SERS imaging.
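The enhancement bookkeeping written earlier, P ∝ N·σ_SERS·|E_loc/E_SPP|⁴·|E_SPP|²·CE(SPCE), can be tallied numerically. A minimal sketch; all field-enhancement and collection values below are illustrative assumptions, not measured numbers from this chapter:

```python
# Bookkeeping sketch of the collected Raman power:
# P ~ N * sigma_SERS * |E_loc/E_SPP|^4 * |E_SPP|^2 * CE(SPCE).
# Every numeric value below is an illustrative assumption.
def sers_power(n_scatterers, sigma, gap_ratio, spp_field, ce_spce):
    """Relative collected Raman power in the gap-mode + SPP + SPCE picture."""
    return n_scatterers * sigma * gap_ratio ** 4 * spp_field ** 2 * ce_spce

baseline = sers_power(1, 1.0, 1.0, 1.0, 1.0)   # no plasmonic enhancement
enhanced = sers_power(1, 1.0, 30.0, 4.0, 0.5)  # assumed gap x30, SPP field x4
print(f"relative enhancement ~{enhanced / baseline:.1e}")
```

The fourth power on the gap-field ratio is why the gap mode dominates the budget: even a modest field ratio of 30 contributes roughly six orders of magnitude before the SPP-excitation and SPCE-collection terms are counted.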

Summary
In summary, we have shown how to manipulate SPP to form high-resolution SPP spots or spot arrays. One approach is based on subwavelength slit arrays; another employs optical vector-vortex beams. As SPP sources, these two configurations find use in a multifunctional optical microscope combining optical microscopic imaging, biosensing, plasmonic tweezers, and SERS in a single microscopic system. In imaging applications, we have shown that a fan-out array with a FWHM of 0.34 λ0 can be generated on a thin metal film. The localized high-resolution SPP spot array is confined in an intrinsically thin region of the interface. Numerical studies and experimental results demonstrate that the reconstructed fluorescence image can reach 0.28 λSP in two dimensions, nearly a factor-of-two enhancement over conventional high-NA optical imaging. Optical vortex beams can be used to excite fluorescent particles through the localized SPP generated in the middle of the ring, and the PSF of SW-SPRF is more than twofold narrower than that of typical TIRF. For sensing applications, a novel plasmonic phase-sensitive SPR sensing technique with an ultrawide dynamic range was proposed in a microscopic configuration, based on differential phase measurement between the RP and AP components. The proposed system achieved an extremely high sensitivity of 9.5 × 10⁻⁸ RIU/0.1° and a dynamic range of 0.41 RIU, over seven times wider than the best result of the ATR pSPR sensor. In trapping and SERS applications, OV-based plasmonic tweezers can trap and manipulate various metallic objects from the nanometer to the micrometer scale. In addition to trapping, this platform can also image the perpendicular component of the SPP and simultaneously perform dynamic SERS measurements, showing very high SERS enhancement and feasibility for SPCE-assisted SERS.
Acknowledgments This work was supported by the National Natural Science Foundation of China under Grant Nos. 61138003, 61427819, and 61422506, and by the National Key Basic Research Program of China (973) under Grant No. 2015CB352004.


References
1. Kawata S, Inouye Y, Verma P (2009) Plasmonics for near-field nano-imaging and superlensing. Nat Photonics 3(7):388–394
2. Pang L, Hwang GM, Slutsky B, Fainman Y (2007) Spectral sensitivity of two-dimensional nanohole array surface plasmon polariton resonance sensor. Appl Phys Lett 91(12):123112
3. Gao D, Chen W, Mulchandani A, Schultz JS (2007) Detection of tumor markers based on extinction spectra of visible light passing through gold nanoholes. Appl Phys Lett 90(7):073901
4. Raether H (1988) Surface plasmons on smooth and rough surfaces and on gratings. Springer, Berlin
5. Barnes WL, Dereux A, Ebbesen TW (2003) Surface plasmon subwavelength optics. Nature 424(6950):824–830
6. Qian W, Jing B, Yuan X-C (2010) High-resolution 2D plasmonic fan-out realized by subwavelength slit arrays. Opt Express 18:2662–2667
7. Tan PS, Yuan X-C, Lin J, Wang Q, Burge RE (2008) Analysis of surface plasmon interference pattern formed by optical vortex beams. Opt Express 16:18451–18456
8. Tan PS, Yuan X-C, Lin J, Wang Q, Mei T, Burge RE, Mu GG (2008) Surface plasmon polaritons generated by optical vortex beams. Appl Phys Lett 92:111108
9. Nye JF, Berry MV (1974) Dislocations in wave trains. Proc R Soc Lond A 336:165–190
10. Passilly N, de Saint Denis R, Aït-Ameur K, Treussart F, Hierle R, Roch J-F (2005) Simple interferometric technique for generation of a radially polarized light beam. J Opt Soc Am A 22:984–991
11. Bomzon Z, Kleiner V, Hasman E (2001) Formation of radially and azimuthally polarized light using space-variant subwavelength metal stripe gratings. Appl Phys Lett 79:1587–1589
12. Davis JA, McNamara DE, Cottrell DM, Sonehara T (2000) Two-dimensional polarization encoding with a phase-only liquid-crystal spatial light modulator. Appl Opt 39:1549–1554
13. Cardano F, Karimi E, Slussarenko S, Marrucci L, de Lisio C, Santamato E (2012) Polarization pattern of vector vortex beams generated by q-plates with different topological charges. Appl Opt 51:C1–C8
14. Manufacturer of a radial polarization converter. http://www.arcoptix.com/
15. Moh KJ, Yuan X-C, Bu J, Burge RE, Gao BZ (2007) Generating radial or azimuthal polarization by axial sampling of circularly polarized vortex beams. Appl Opt 46:7544–7551
16. Wang Q, Bu J, Tan PS, Yuan GH, Teng JH, Wang H, Yuan XC (2012) Subwavelength-sized plasmonic structures for wide-field optical microscopic imaging with super-resolution. Plasmonics 7(3):427–433
17. Cragg GE, So PTC (2000) Lateral resolution enhancement with standing evanescent waves. Opt Lett 25:46–48
18. So PTC, Kwon H-S, Dong CY (2001) Resolution enhancement in standing-wave total internal reflection microscopy: a point-spread-function engineering approach. J Opt Soc Am A 18:2833–2845
19. Lakowicz JR, Malicka J, Gryczynski I, Gryczynski Z (2003) Directional surface plasmon-coupled emission: a new method for high sensitivity detection. Biochem Biophys Res Commun 307:435–439
20. Tan PS, Yuan X-C, Yuan GH, Wang Q (2010) High-resolution wide-field standing-wave surface plasmon resonance fluorescence microscopy with optical vortices. Appl Phys Lett 97:241109
21. Tang WT, Chung E, Kim YH, So PTC, Sheppard CJR (2007) Investigation of the point spread function of surface plasmon-coupled emission microscopy. Opt Express 15:4634
22. Huang YH, Ho HP, Wu SY, Kong SK (2012) Detecting phase shifts in surface plasmon resonance: a review. Adv Opt Technol 2012:471957
23. Wu SY, Ho HP, Law WC, Lin CL (2004) Highly sensitive differential phase-sensitive surface plasmon resonance (SPR) biosensor based on Mach–Zehnder configuration. Opt Lett 29:2378


24. Huang YH, Ho HP, Wu SY, Kong SK, Wong WW, Shum P (2011) Phase sensitive SPR sensor for wide dynamic range detection. Opt Lett 36:4092
25. Wang R, Zhang C, Yang Y, Zhu S, Yuan X-C (2012) Focused cylindrical vector beam assisted microscopic pSPR biosensor with an ultra wide dynamic range. Opt Lett 37:2091–2093
26. Wang R, Du LP, Zhang CL, Man ZS, Wang YJ, Wei SB, Min CJ, Zhu SW, Yuan X-C (2013) A plasmonic petal-shaped beam for a microscopic phase sensitive SPR biosensor with ultrahigh sensitivity. Opt Lett 38:4770–4773
27. Zhang W, Huang L, Santschi C, Martin OJF (2010) Trapping and sensing 10 nm metal nanoparticles using plasmonic dipole antennas. Nano Lett 10:1006–1011
28. Min CJ, Shen Z, Shen JF, Zhang YQ, Fang H, Yuan GH, Du LP, Zhu SW, Lei T, Yuan XC (2013) Focused plasmonic trapping of metallic particles. Nat Commun 4:2891
29. Griffiths DJ (1998) Introduction to electrodynamics. Prentice Hall, Upper Saddle River
30. Shen Z, Hu ZJ, Yuan GH, Min CJ, Fang H, Yuan XC (2012) Visualizing orbital angular momentum of plasmonic vortices. Opt Lett 37:4627
31. Shen JF, Wang J, Zhang CJ, Min CJ, Fang H, Du LP, Zhu SW, Yuan X-C (2013) Dynamic plasmonic tweezers enabled single-particle-film-system gap-mode surface-enhanced Raman scattering. Appl Phys Lett 103:191119
32. Du L, Yuan G, Tang D, Yuan XC (2011) Tightly focused radially polarized beam for propagating surface plasmon-assisted gap-mode Raman spectroscopy. Plasmonics 6(4):651–657
33. Du L, Tang D, Yuan G, Wei S, Yuan X (2013) Emission pattern of surface-enhanced Raman scattering from single nanoparticle-film junction. Appl Phys Lett 102:081117
34. Park W-H, Ahn S-H, Kim ZH (2008) Surface-enhanced Raman scattering from a single nanoparticle–plane junction. ChemPhysChem 9(17):2491–2494
35. Chu YZ, Zhu WQ, Wang DX, Crozier KB (2011) Beamed Raman: directional excitation and emission enhancement in a plasmonic crystal double resonance SERS substrate. Opt Express 19:20054
36. Ahmed A, Gordon R (2011) Directivity enhanced Raman spectroscopy using nanoantennas. Nano Lett 11:1800
37. Calander N (2004) Theory and simulation of surface plasmon-coupled directional emission from fluorophores at planar structures. Anal Chem 76:2168
38. Zhang CL, Wang R, Min CJ, Zhu SW, Yuan X-C (2013) Experimental approach to the microscopic phase-sensitive surface plasmon resonance biosensor. Appl Phys Lett 102:011114
39. Du LP, Lei DY, Yuan GH, Fang H, Zhang X, Wang Q, Tang DY, Min CJ, Maier SA, Yuan XC (2013) Mapping plasmonic near-field profiles and interferences by surface-enhanced Raman scattering. Sci Rep 3:3064

Handbook of Photonics for Biomedical Engineering DOI 10.1007/978-94-007-6174-2_6-1 # Springer Science+Business Media Dordrecht 2014

Optical-Tweezers-Based Microrheology of Soft Materials and Living Cells

Ming-Tzo Wei(a), Olga Latinovic(b), Lawrence A. Hough(c), Yin-Quan Chen(d), H. Daniel Ou-Yang(a,e)* and Arthur Chiou(d,f)*

(a) Bioengineering Program, Lehigh University, Bethlehem, PA, USA
(b) Institute of Human Virology, University of Maryland School of Medicine, Baltimore, MD, USA
(c) Complex Assemblies of Soft Matter Lab, UMI 3254 CNRS/UPENN/Rhodia, Bristol, PA, USA
(d) Institute of Biophotonics, National Yang-Ming University, Taipei, Taiwan
(e) Department of Physics, Lehigh University, Bethlehem, PA, USA
(f) Biophotonics & Molecular Imaging Research Center, National Yang-Ming University, Taipei, Taiwan

Introduction

Optical tweezers [1] use a highly focused laser beam to form a stable trap that confines one or more micron- or nano-sized particles in three-dimensional space, enabling noninvasive manipulation, without any mechanical contact, of microscopic probe particles embedded in a sample. Since their first demonstration in 1986 by Ashkin et al. [2], single-beam optical tweezers have been used to manipulate microscopic objects such as colloidal particles [3], biomolecules [4, 5], and biological cells [6-9]. In addition, optical tweezers have been used as pico-Newton force transducers to measure the strength of molecular bonds [10] and to determine the transmission of forces in the microscopic environment of complex fluids [11-14]. Combining the ability to manipulate microparticles with force measurement, optical tweezers have been used to study the micromechanical properties of soft materials [15, 16], such as colloidal crystals [17-20], liquid crystals [21-23], carbon nanotube suspensions [24], actin-coated lipid vesicles [25-27], living cells [28-33], cytoskeletal networks [34-37], DNA networks [38, 39], polymer solutions [40-42], collagen gels [43, 44], human erythrocyte membranes [45-49], and even individual strands of DNA [5, 50]. Depending on the length scales of the local homogeneity of samples, two different oscillatory optical-tweezers-based approaches have been developed to measure the micromechanical properties of soft materials [12, 14, 40, 43, 51, 52]. In the first approach, optical tweezers are used to hold and oscillate a probe particle, setting the particle into forced oscillation. The amplitude and phase of the oscillating particle (relative to the oscillating laser beam) are used to determine the local mechanical properties of the medium surrounding the probe particle.
The second approach uses two optical tweezers, each holding an individual probe particle; one of the optical tweezers is oscillatory, while the other is stationary [53, 54]. The relative motions of the two particles are then used to determine the mechanical properties of the material between them. For materials that are inhomogeneous on length scales comparable to the distance between the two probe particles, the measured mechanical properties are expected to be a function of that distance. By varying the separation between the two particles, two-particle microrheology can be used to measure the average mechanical properties of the medium between the particles and to explore the length scales of the mechanical inhomogeneity, which can reveal important insights into the heterogeneity of biological samples [35, 53, 55, 56].

*Email: [email protected] *Email: [email protected]


The subsequent sections of this chapter are arranged as follows. In section "Basic Principles of Oscillatory Optical-Tweezers-Based Techniques," we begin with an overview of the basic principles of oscillatory optical tweezers and the calibration of the optical force constant in the linear restoring force regime. We then illustrate the applications of these techniques to determine the micromechanical properties of polymer solutions in section "Microrheology of Polymer Solutions" and of living cells in section "Microrheology of Living Cells." The relationship between the correlated motions of two colloidal probe particles and the viscoelastic properties of the material between the two particles is also addressed in section "Microrheology of Polymer Solutions." We end the chapter by summarizing recent progress and future prospects of microrheology in section "Summary and Conclusions."

Basic Principles of Oscillatory Optical-Tweezers-Based Techniques

In this section, we describe how the motion of a particle, set into forced oscillation by oscillatory optical tweezers, is used to determine the mechanical properties of the surrounding medium from the amplitude and phase of the particle's displacement. The oscillatory optical-tweezers-based technique allows us to use a lock-in amplifier to determine the displacement amplitude and phase of the oscillating particle with an improved signal-to-noise ratio by filtering out noncoherent particle motions [51, 57, 58]. A calibration of the force constant (k_OT) of the optical tweezers in the linear restoring force regime is accomplished by fitting the measured amplitude and phase of a particle's displacement in a liquid of known viscosity. Once the trapping force constant k_OT is known, the micromechanical properties of soft materials can be measured and characterized in terms of the elastic modulus G′(ω) and the viscous modulus G″(ω) as a function of angular frequency ω. The principle can be extended to determine the mechanical properties of a viscoelastic soft material between two probe particles.

Calibration of the Force Constant of Optical Tweezers

With proper calibration [59-68], optical tweezers can be used as a convenient force transducer. Optical forces on a trapped particle can be calibrated by several different approaches. In the fluid-drag approach, a trapped particle is dragged along a direction perpendicular to the optical axis with a viscous force that is increased gradually until the particle escapes from the optical tweezers [59, 64, 69]. In the thermal-fluctuation approach, one can deduce the force constant either by analyzing the spatial distribution of the Brownian motion of an individual particle in optical tweezers [70-74] or by analyzing the power spectrum of the thermal fluctuations [60, 75]. In the forced-harmonic-oscillator approach, the force constant is calculated from the frequency-dependent amplitude and phase of a particle that is harmonically driven by oscillatory optical tweezers in a liquid with a known viscosity [51, 57]. In this section, we describe the forced-harmonic-oscillator approach.

The motion of a particle trapped and oscillated by oscillatory optical tweezers in a viscous medium is determined by the viscous drag force experienced by the particle and the force imparted by the optical tweezers. We describe the optical trapping force by Hooke's law, with an optical force constant k_OT, in the linear restoring force regime; here, the optical force is proportional to the displacement as long as the displacement remains sufficiently small [76]. Figure 1 shows the forces on a particle of radius a in an oscillatory optical tweezers residing in a simple viscous liquid. In the linear restoring force regime, the oscillatory optical tweezers can be modeled as a quadratic potential well, oscillating with constant


Fig. 1 Force diagram of the forces on a particle trapped by an oscillatory optical tweezers

amplitude A. For a simple viscous liquid, the drag force is the Stokes drag, −6πηaẋ, where ẋ is the velocity of the particle and η is the shear viscosity of the liquid. The force exerted on the particle by the optical tweezers is simply the force constant multiplied by the distance between the center of the particle and the center of the trap. A second, stationary laser beam, focused by a low-numerical-aperture lens and with an optical intensity much weaker than that of the trapping beam, is used to track the position of the particle. The equation of motion of the particle is

$$ m\ddot{x}(\omega,t) = -6\pi\eta a\,\dot{x}(\omega,t) + k_{OT}\left(Ae^{i\omega t} - x(\omega,t)\right) \tag{1a} $$

where m is the mass of the particle, x is the displacement of the particle from the center of the tracking beam, and A is the amplitude of the oscillatory optical tweezers with angular frequency ω. Since the Reynolds number of the particle motion is low, 6πηa|ẋ| ≫ m|ẍ|, one can ignore the inertial term, mẍ, and Eq. 1a simplifies to Eq. 1b for a micron-sized particle at the oscillation frequencies ω = 1-6,000 rad/s discussed here:

$$ 6\pi\eta a\,\dot{x}(\omega,t) + k_{OT}\,x(\omega,t) = k_{OT}\,Ae^{i\omega t} \tag{1b} $$

A steady-state solution of Eq. 1b can be written in the form x(ω,t) = D(ω)e^{i(ωt−δ(ω))}, where the displacement amplitude D and the relative phase δ can be obtained experimentally by the use of a lock-in amplifier. The amplitude D(ω) of the displacement of the particle is given by

$$ D(\omega) = \frac{A\,k_{OT}}{\sqrt{k_{OT}^2 + (6\pi\eta a\omega)^2}} \tag{2a} $$

and the phase shift δ(ω) is given by

$$ \delta(\omega) = \tan^{-1}\!\left(\frac{6\pi\eta a\omega}{k_{OT}}\right) \tag{2b} $$

The amplitude of the displacement has two distinctive regimes. At low frequencies, the amplitude of the particle's displacement takes on the form D(ω) = A; in this regime, the elastic force of the trap dominates the motion of the particle. At high frequencies, the viscous damping force dominates the motion of the particle, and the amplitude of the particle's displacement takes the



Fig. 2 (a) The normalized amplitude and (b) the relative phase as a function of the oscillation frequency of a 1.5 μm diameter polystyrene particle in deionized water in an oscillatory optical tweezers. The open circles represent the experimental data and the solid lines are the fits with the optical force constant k_OT as the only fitting parameter in Eqs. 2a and 2b

form D(ω) = A k_OT/(6πηaω). Over the entire angular frequency range, the tangent of the phase is a linear function of angular frequency, tan δ(ω) = 6πηaω/k_OT. The optical force constant k_OT is determined from Eqs. 2a and 2b [29, 76, 77], where the viscosity η of water is taken to be 0.9548 mPa·s at 22 °C. Figure 2a, b depicts experimental data for the angular-frequency-dependent amplitude D(ω) and phase shift δ(ω), respectively, of an oscillating polystyrene particle (diameter = 1.5 μm) in deionized water. The open circles represent the experimental data, and the solid lines are the fits with the optical force constant k_OT as the only fitting parameter. The force constant k_OT determined from the best fit is 15.65 ± 0.52 pN/μm from the amplitude data and 14.70 ± 0.42 pN/μm from the phase data at a trapping power of 6 mW.

Optical forces on colloidal particles have been calculated using mainly three different models: (a) the ray-optics model [61, 78], a good approximation for particle radius a > optical wavelength λ; (b) the electromagnetic-field model [79, 80], a good approximation for particle radius a < optical wavelength λ; and (c) the generalized Mie theory, which is applicable when the particle size is comparable to the wavelength of the trapping laser [62, 81, 82]. In the linear restoring force regime, optical force constants (per mW of optical trapping power) as a function of particle size, obtained from the generalized Mie theory (the black solid line), are in good agreement with the experimental data (the red dots), as shown in Fig. 3. When the particle diameter approaches the trapping laser wavelength (λ ≈ 800 nm in water), the transverse linear optical spring constant reaches


Fig. 3 The experimental (the dots) and the theoretical (the solid line) results for optical force constant at 1 mW versus particle size. The means and the standard deviations are obtained by repeating each experiment ten times under identical experimental conditions

a maximum, which agrees with the theoretical result. This is also consistent with the experimental results reported earlier [73, 74] obtained by tracking the Brownian motion of a trapped particle.
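To make the fitting step concrete, the model of Eqs. 2a and 2b can be exercised numerically. The following Python sketch (all parameter values are illustrative assumptions, not the chapter's data) generates the amplitude and phase responses for a bead in water and recovers the trap stiffness from the fact that tan δ(ω) = 6πηaω/k_OT is linear in ω:

```python
import numpy as np

# Forced-harmonic-oscillator calibration sketch (Eqs. 2a, 2b).
# All numerical values (bead size, viscosity, stiffness) are
# illustrative assumptions, not data from the chapter.

eta = 0.9548e-3        # viscosity of water at 22 C [Pa s]
a = 0.75e-6            # particle radius [m] (1.5 um diameter bead)
k_OT = 15.0e-6         # "true" trap stiffness [N/m] = 15 pN/um
A = 100e-9             # trap oscillation amplitude [m]

omega = np.logspace(0, np.log10(6000), 50)   # rad/s
gamma = 6 * np.pi * eta * a                  # Stokes drag coefficient

# Model responses (Eqs. 2a and 2b)
D = A * k_OT / np.sqrt(k_OT**2 + (gamma * omega)**2)
delta = np.arctan(gamma * omega / k_OT)

# tan(delta) is linear in omega with slope gamma / k_OT, so a
# least-squares line through the origin recovers the stiffness.
slope = np.sum(omega * np.tan(delta)) / np.sum(omega**2)
k_fit = gamma / slope

print(k_fit * 1e6)   # ~15 pN/um, matching the assumed stiffness
```

In an actual calibration, the measured D(ω) and δ(ω) would each be fitted with k_OT as the only free parameter, as in Fig. 2.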

Response Function for an Oscillating Particle in a Viscoelastic Medium

To relate the motion of a single particle to the mechanical properties of the material surrounding the particle, the complex response function is used to deduce the viscoelasticity of the material. The complex response function, α*(ω), is the ratio of the displacement of a particle to the external force on the particle: α*(ω) = x(ω)/F(ω), where x(ω) is the angular-frequency-dependent displacement of the particle and F(ω) is the external force on the particle. The response function from Eq. 1, ignoring the inertia of the particle, can be expressed as

$$ \alpha^*(\omega) = \frac{x(\omega)}{F(\omega)} = \frac{1}{i\omega 6\pi\eta a + k_{OT}} \tag{3} $$

A single spherical particle trapped by optical tweezers in a viscous medium can be thought of as a model viscoelastic medium with an effective complex shear modulus, G*_eff(ω) = 1/[6πa α*(ω)]. Conventionally, the viscoelasticity of a complex fluid is characterized in terms of the complex shear modulus [83] G*(ω) = iωη*(ω) = G′(ω) + iG″(ω), where η*(ω) is the complex viscosity and G′(ω) and G″(ω) are the storage (or elastic) modulus and the loss (or viscous) modulus, respectively. Using this nomenclature, the relationship between G*_eff(ω) and G*(ω) can be expressed as

$$ G^*_{\mathrm{eff}}(\omega) = \frac{1}{6\pi a\,\alpha^*(\omega)} = \frac{k_{OT}}{6\pi a} + i\omega\eta^*(\omega) = \frac{k_{OT}}{6\pi a} + G^*(\omega) = \frac{k_{OT}}{6\pi a} + G'(\omega) + iG''(\omega) \tag{4} $$

where the first term on the right-hand side of Eq. 4 represents the contribution from the elasticity of the optical tweezers and the remaining terms represent the viscoelasticity of the material surrounding the probe particle.
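As a consistency check on Eqs. 3 and 4, the following Python sketch (illustrative parameter values) verifies that, for a Newtonian fluid, the deduced G*_eff(ω) separates into the constant trap term k_OT/(6πa) plus a purely viscous G″(ω) = ωη:

```python
import numpy as np

# Numerical check of Eqs. 3 and 4 for a purely viscous liquid:
# G*_eff(w) deduced from alpha*(w) should reduce to
# k_OT/(6 pi a) + i w eta. Parameter values are illustrative.

eta = 0.9548e-3     # [Pa s]
a = 0.75e-6         # [m]
k_OT = 15.0e-6      # [N/m]
omega = np.logspace(0, 3, 20)

alpha = 1.0 / (1j * omega * 6 * np.pi * eta * a + k_OT)   # Eq. 3
G_eff = 1.0 / (6 * np.pi * a * alpha)                     # Eq. 4

trap_elasticity = k_OT / (6 * np.pi * a)   # trap's contribution [Pa]
G_loss = G_eff.imag                        # viscous modulus G''(w)

# For a Newtonian fluid: G' is the trap term alone, G'' = w * eta.
print(np.allclose(G_eff.real, trap_elasticity))   # True
print(np.allclose(G_loss, omega * eta))           # True
```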


Fig. 4 A schematic diagram of two particles in a viscoelastic liquid with complex response function α*; each particle is held in one of two separate quadratic potential wells at a distance R apart

When a particle in a fluid is subjected to an oscillating force, the effective complex shear modulus can be deduced from the angular-frequency-dependent in-phase and out-of-phase motions relative to the oscillating force as follows:

$$ G^*_{\mathrm{eff}}(\omega) = \frac{1}{6\pi a\,\alpha^*(\omega)} = \frac{F}{6\pi a\,x(\omega)} = \frac{A\,k_{OT}}{6\pi a\,x(\omega)} = \frac{A\,k_{OT}}{6\pi a\,D(\omega)}\left(\cos\delta(\omega) + i\sin\delta(\omega)\right) \tag{5} $$

From Eqs. 4 and 5, the storage and loss moduli, G′(ω) and G″(ω), of the material surrounding the probe particle can be determined from the phase shift δ and amplitude D of the particle (in response to the oscillatory optical tweezers) via the following relationships:

$$ G^*(\omega) = \frac{k_{OT}}{6\pi a}\left(\frac{A}{x(\omega)} - 1\right) = G'(\omega) + iG''(\omega) \tag{6a} $$

and

$$ G'(\omega) = \frac{k_{OT}}{6\pi a}\left(\frac{A\cos\delta(\omega)}{D(\omega)} - 1\right), \qquad G''(\omega) = \frac{k_{OT}}{6\pi a}\,\frac{A\sin\delta(\omega)}{D(\omega)} \tag{6b} $$
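The conversion from lock-in observables to moduli in Eq. 6b can be checked numerically: synthetic amplitude and phase data for a bead in water, generated from Eqs. 2a and 2b, should return G′ = 0 and G″ = ωη. A minimal Python sketch, with illustrative parameter values:

```python
import numpy as np

# Sketch of Eq. 6b: turn measured amplitude D(w) and phase d(w)
# into G'(w) and G''(w). Synthetic data for a bead in water
# (Eqs. 2a, 2b) should give G' = 0 and G'' = w * eta.
# All numerical parameters are illustrative assumptions.

eta = 0.9548e-3
a = 0.75e-6
k_OT = 15.0e-6
A = 100e-9
omega = np.logspace(0, 3, 20)
gamma = 6 * np.pi * eta * a

# "Measured" lock-in outputs for a Newtonian fluid
D = A * k_OT / np.sqrt(k_OT**2 + (gamma * omega)**2)
delta = np.arctan(gamma * omega / k_OT)

# Eq. 6b
G_storage = k_OT / (6 * np.pi * a) * (A * np.cos(delta) / D - 1)
G_loss = k_OT / (6 * np.pi * a) * (A * np.sin(delta) / D)

print(np.allclose(G_storage, 0))          # True: water stores no elastic energy
print(np.allclose(G_loss, omega * eta))   # True: G'' = w * eta
```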

Response Tensor for Coupled Oscillation of Two Particles in a Viscoelastic Medium

Much like the single-particle case, the formalism used to describe the hydrodynamic coupling between two particles in a simple viscous liquid can be extended to deduce the viscoelasticity of the medium. Consider a viscoelastic fluid in which two identical particles are held by two optical tweezers in two separate quadratic potential wells; under the condition that one of the optical traps is oscillatory and the other is stationary, the equations of motion of the particles (x1 and x2) are given by (neglecting the inertia of the particles) [11, 12]

$$ x_1(\omega,t) = \alpha^*_{11}(\omega)\left[k_{OT1}\left(Ae^{i\omega t} - x_1(\omega,t)\right)\right] + \alpha^*_{12}(\omega)\left(-k_{OT2}\,x_2(\omega,t)\right) \tag{7a} $$

$$ x_2(\omega,t) = \alpha^*_{21}(\omega)\left[k_{OT1}\left(Ae^{i\omega t} - x_1(\omega,t)\right)\right] + \alpha^*_{22}(\omega)\left(-k_{OT2}\,x_2(\omega,t)\right) \tag{7b} $$

where A is the amplitude of the oscillatory optical tweezers with angular frequency ω and k_OT is the force constant of the optical trap. To allow for possible inhomogeneity in the fluid, we introduce α*11(ω) = 1/(6πaG*11), α*12(ω) = 1/(4πRG*12), α*21(ω) = 1/(4πRG*21), and α*22(ω) = 1/(6πaG*22). Here, R is the distance between the particles at the initial position, G*11 is the local complex shear modulus of the fluid surrounding the particle in the oscillatory trap, G*12 and G*21 are the nonlocal complex shear moduli of the fluid between the two particles, and G*22 is the local complex shear


modulus of the fluid surrounding the particle in the stationary trap, as shown in Fig. 4. By symmetry, G*12 = G*21 and α*12 = α*21. Viscoelasticity is taken into account by requiring that the viscosity is complex and a function of frequency. For simplicity, in what follows, we assume that the force constants of the two optical traps are equal, i.e., k_OT1 = k_OT2 = k_OT.

The purpose of measuring the correlated motions of two particles arises from the fact that these motions can be used to determine the nonlocal mechanical properties, G*12(ω), between the two probe particles. While the local mechanical properties, G*11(ω) and G*22(ω), can be determined by measuring the motion of a single probe particle, the nonlocal properties are expected to vary as a function of the distance between the two particles in an inhomogeneous viscoelastic medium [53, 54]. Thus, by varying the distance between the two particles and measuring the correlated motions, one can actively measure the average mechanical properties between the two particles to probe different length scales of the inhomogeneity.

To solve Eqs. 7a and 7b for the local and nonlocal mechanical properties, we assume that the distance between the particles is large enough (R ≫ a) that the particle in the stationary trap does not affect the motion of the particle in the oscillatory trap except through the coupling by the medium in between, as prescribed by G*12(ω). This weak-coupling approximation is justified by the assumption that the amplitude of the oscillation of the particle in the stationary trap is so small that its feedback effect on the other particle is a higher-order effect that can be neglected. Thus, the last term on the right-hand side of Eq. 7a can be ignored, and the local complex shear modulus becomes

$$ G^*_{11}(\omega) = \frac{k_{OT}}{6\pi a}\left(\frac{A}{x_1(\omega)} - 1\right) \tag{8} $$

The above expression for the local mechanical properties around each probe particle is identical to the corresponding expressions for the storage and loss moduli given in the previous section. The nonlocal complex modulus is given by

$$ G^*_{12}(\omega) = \frac{k_{OT}}{4\pi R}\left(\frac{A}{x_2(\omega)} - \frac{2x_1(\omega)}{x_2(\omega)} + \frac{x_1^2(\omega)}{A\,x_2(\omega)}\right) \tag{9} $$

where x_1(ω) = D_1(ω)e^{−iδ_1(ω)} and x_2(ω) = D_2(ω)e^{−iδ_2(ω)} are the displacements of the particles in the oscillatory optical tweezers and the stationary optical tweezers, respectively. The mechanical properties of the medium between the two particles can thus be determined by measuring the motions of the particles, x_1(ω) and x_2(ω). Explicitly, the nonlocal storage and loss moduli are given by

$$ G'_{12}(\omega) = \frac{k_{OT}}{4\pi R}\left(\frac{A\cos\delta_2(\omega)}{D_2(\omega)} + \frac{D_1^2(\omega)\cos\left(\delta_2(\omega) - 2\delta_1(\omega)\right)}{A\,D_2(\omega)} - \frac{2D_1(\omega)\cos\left(\delta_2(\omega) - \delta_1(\omega)\right)}{D_2(\omega)}\right) \tag{10a} $$

$$ G''_{12}(\omega) = \frac{k_{OT}}{4\pi R}\left(\frac{A\sin\delta_2(\omega)}{D_2(\omega)} + \frac{D_1^2(\omega)\sin\left(\delta_2(\omega) - 2\delta_1(\omega)\right)}{A\,D_2(\omega)} - \frac{2D_1(\omega)\sin\left(\delta_2(\omega) - \delta_1(\omega)\right)}{D_2(\omega)}\right) \tag{10b} $$
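Equations 10a and 10b are simply the real and imaginary parts of Eq. 9 with x_j(ω) = D_j(ω)e^{−iδ_j(ω)}. The following Python sketch checks this decomposition numerically, using arbitrary illustrative values for the trap parameters and the measured amplitudes and phases:

```python
import numpy as np

# Consistency check: the complex nonlocal modulus (Eq. 9) versus its
# explicit storage/loss decomposition (Eqs. 10a, 10b). All numbers
# below are arbitrary illustrative values, not measured data.

k_OT = 15.0e-6   # trap stiffness [N/m]
A = 100e-9       # drive amplitude [m]
R = 5.0e-6       # particle separation [m]

D1, d1 = 80e-9, 0.3    # amplitude/phase of the driven particle
D2, d2 = 10e-9, 0.9    # amplitude/phase of the particle in the static trap

x1 = D1 * np.exp(-1j * d1)
x2 = D2 * np.exp(-1j * d2)

pref = k_OT / (4 * np.pi * R)

# Eq. 9
G12 = pref * (A / x2 - 2 * x1 / x2 + x1**2 / (A * x2))

# Eqs. 10a and 10b
G12_storage = pref * (A * np.cos(d2) / D2
                      + D1**2 * np.cos(d2 - 2 * d1) / (A * D2)
                      - 2 * D1 * np.cos(d2 - d1) / D2)
G12_loss = pref * (A * np.sin(d2) / D2
                   + D1**2 * np.sin(d2 - 2 * d1) / (A * D2)
                   - 2 * D1 * np.sin(d2 - d1) / D2)

print(np.isclose(G12.real, G12_storage))  # True
print(np.isclose(G12.imag, G12_loss))     # True
```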


Fig. 5 The experimental results for the storage modulus G′ (solid symbols) and the loss modulus G″ (open symbols) as a function of frequency obtained by the active oscillatory-optical-tweezers approach (□) and the passive particle-tracking approach (solid lines) with a 1.5 μm diameter polystyrene particle suspended in a 20 wt% 100 kg/mol PEO solution. The dotted lines represent G′(ω) and G″(ω) based on the Maxwell model. The inset shows the imaginary part of the compliance function α″(ω) measured by the passive particle-tracking approach (without optical tweezers; solid line) and by the active oscillatory approach (squares)

Microrheology of Polymer Solutions

In this section, we consider the mechanical properties of polymer solutions measured by optical tweezers. This approach allows us to measure the storage and loss moduli in the range of approximately 10⁻¹ to 10⁴ dyne/cm² for an optical trapping force constant k_OT of around 0.01 dyne/cm. In section "Single-Particle Measurements in a Viscoelastic Medium," we consider the single-particle active microrheology of a homogeneous polymer solution. The results are consistent with the corresponding bulk mechanical properties measured by a conventional rheometer and with the micromechanical properties obtained from the passive microrheological approach [84]. However, the experimental results of single-particle microrheology of inhomogeneous soft materials differ from the corresponding macroscopic mechanical properties. In section "Two-Particle Measurements in a Viscoelastic Medium," we consider the application of two-particle active microrheology to study the effects of microscopically heterogeneous mechanical properties of a polymer solution between two probe particles.

Single-Particle Measurements in a Viscoelastic Medium

In this section, we present, as a specific example, the application of oscillatory optical tweezers to study the micromechanical properties of an aqueous solution of polyethylene oxide (PEO; Mw = 100 kg/mol). The results are in good agreement with the bulk mechanical properties measured by Dasgupta et al. [85]. For a 20 wt% solution of PEO in water, the average mesh size of the polymer network is much smaller than the size of the probe particle (1.5 μm diameter polystyrene spheres); thus, the intrinsic inhomogeneity of the network should not affect the micromechanical properties. Although PEO has been shown to adsorb onto the surface of polystyrene particles, adsorption is not an issue because the thickness of the adsorbed polymer layer has been determined to be approximately 24 nm [86]. The microrheological studies reported by Dasgupta et al. [85] indicate that the frequency dependence of the viscoelasticity of the polymer solution agrees with their macroscopic measurements for all surface treatments and particle sizes.

Figure 5 shows a comparison of the mechanical properties of a 20 wt% solution of PEO in water as measured by the active oscillatory-optical-tweezers approach versus the corresponding results


obtained by the passive particle-tracking approach (without optical tweezers). In the active approach, the complex shear modulus can be measured directly, as prescribed by Eq. 6; in the passive approach, the complex shear modulus can be determined by tracking the Brownian motion and using the fluctuation-dissipation theorem (FDT) [87, 88] or the generalized Stokes-Einstein relation (GSER) [89]. According to the fluctuation-dissipation theorem, the imaginary part of the complex response function α″(ω) can be written as

$$ \alpha''(\omega) = \frac{\omega\,C(\omega)}{2k_B T} \tag{11} $$

where C(ω) is the power spectral density analyzed from the particle displacement fluctuations, k_B is the Boltzmann constant, and T is the absolute temperature. The imaginary part of the complex response function α″(ω) deduced from Eq. 11 agrees with the corresponding results obtained by the active approach in this equilibrium system. According to the Kramers-Kronig relation (KKR), the real part of the compliance function α′(ω) can be expressed as

$$ \alpha'(\omega) = \frac{2}{\pi}\,P\!\int_0^\infty \frac{\xi\,\alpha''(\xi)}{\xi^2 - \omega^2}\,d\xi = \frac{2}{\pi}\int_0^\infty \cos(\omega t)\,dt \int_0^\infty \alpha''(\xi)\sin(\xi t)\,d\xi \tag{12} $$

The reciprocal of α*(ω) is the complex shear modulus G*(ω) = G′(ω) + iG″(ω), where G′(ω) and G″(ω) are given by

$$ G'(\omega) = \frac{1}{6\pi a}\,\frac{\alpha'(\omega)}{\alpha'(\omega)^2 + \alpha''(\omega)^2} \tag{13a} $$

$$ G''(\omega) = \frac{1}{6\pi a}\,\frac{\alpha''(\omega)}{\alpha'(\omega)^2 + \alpha''(\omega)^2} \tag{13b} $$
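The principal-value integral in Eq. 12 can be evaluated numerically. The sketch below (Python) does so for a model Lorentzian response, α*(ω) = 1/(k − iωγ) with k = γ = 1 in arbitrary units, for which α′(ω) = 1/(1 + ω²) and α″(ω) = ω/(1 + ω²) are known in closed form; the subtraction trick used to tame the singularity, and all numerical parameters, are implementation choices for this illustration. Truncating the integration range at a finite upper limit introduces a small systematic error.

```python
import numpy as np

# Numerical Kramers-Kronig reconstruction (Eq. 12) for a model
# Lorentzian response with alpha''(w) = w / (1 + w^2), whose real
# part is alpha'(w) = 1 / (1 + w^2) analytically.

def alpha_imag(xi):
    return xi / (1.0 + xi**2)

def kk_real(w, xi_max=500.0, n=50_000):
    # Principal value handled by subtracting w*alpha''(w)/(xi^2 - w^2),
    # whose PV integral over (0, inf) vanishes; the midpoint rule keeps
    # the grid points away from xi = w.
    h = xi_max / n
    xi = (np.arange(n) + 0.5) * h
    c = w * alpha_imag(w)
    f = (xi * alpha_imag(xi) - c) / (xi**2 - w**2)
    return (2.0 / np.pi) * np.sum(f) * h

w = 2.0
print(kk_real(w), 1.0 / (1.0 + w**2))   # both ~0.2
```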

The loss modulus G″(ω) obtained by the active and passive approaches agrees well. However, the storage modulus G′(ω) measured by the active approach and by the passive approach does not agree in the high- and low-frequency regimes, because the lower and upper bounds of the frequency range of the integral in Eq. 12 are replaced by finite values (6 rad/s and 6,000 rad/s, respectively). In Fig. 5, the results indicate that the microrheological properties of semidilute PEO solutions agree with the bulk mechanical properties [85] within the frequency range of 6-100 rad/s accessible by both techniques [77]. For angular frequencies in the range of 100-6,000 rad/s, a comparison can be made between the micromechanical properties determined by the oscillatory optical tweezers and by dynamic light scattering [85]. The results in Fig. 5 indicate that the polymer solution has liquid-like behavior (G″ > G′) in the lower-frequency regime and solid-like behavior (G′ > G″) in the higher-frequency regime. The experimental results for the mechanical properties as a function of frequency can be fitted to the Maxwell model, represented by a purely viscous damper and a purely elastic spring connected in series:

$$ G' + iG'' = G_\infty\left(\frac{\tau^2\omega^2 + i\tau\omega}{1 + \tau^2\omega^2}\right) \tag{14} $$


Fig. 6 A comparison between (a) the local storage modulus G′11(ω) and the nonlocal storage modulus G′12(ω) and (b) the local loss modulus G″11(ω) and the nonlocal loss modulus G″12(ω), at several particle separations, for a 20 wt% solution of PEO. In the legends in the lower right, R is the distance between the two probe particles and a is the particle radius

with two adjustable parameters, τ and G∞, where τ is the relaxation time of the system and G∞ is the plateau modulus. According to the theory of rubber elasticity [90], G∞ = νk_BT, where ν is the number of elastically active chains in the network per unit volume. The rheology is well described by the Maxwell model [91, 92], with fitting parameters G∞ = 10,578 dyne/cm² and τ = 1 ms (the dotted lines in Fig. 5).
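Equation 14 with the fitted parameters quoted above can be evaluated directly. The Python sketch below checks that the model reproduces the liquid-like regime (G″ > G′) below the crossover at ω = 1/τ and the solid-like regime (G′ > G″) above it:

```python
import numpy as np

# Maxwell model (Eq. 14) with the fitted parameters quoted in the text.
G_inf = 10578.0     # plateau modulus [dyne/cm^2]
tau = 1e-3          # relaxation time [s]

def maxwell(omega):
    wt = tau * omega
    G_storage = G_inf * wt**2 / (1 + wt**2)   # elastic part
    G_loss = G_inf * wt / (1 + wt**2)         # viscous part
    return G_storage, G_loss

# Liquid-like (G'' > G') below the crossover at omega = 1/tau = 1000 rad/s,
# solid-like (G' > G'') above it.
Gs_lo, Gl_lo = maxwell(100.0)
Gs_hi, Gl_hi = maxwell(6000.0)
print(Gl_lo > Gs_lo, Gs_hi > Gl_hi)   # True True
```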

Two-Particle Measurements in a Viscoelastic Medium

Single-particle microrheology can be extended to two-particle microrheology to investigate mechanical inhomogeneities in soft materials at length scales comparable to the distance between the probes. Two-particle microrheology contributes particularly to the understanding of the mechanical properties of biological materials [35, 93]. In this section, we present the micromechanical properties of a 20 wt% polyethylene oxide solution (Mw = 100 kg/mol) surrounding a single probe particle and between two particles. As noted earlier, the PEO solution is homogeneous on length scales comparable to the size (1.5 μm diameter) of the probe particle, meaning that the local mechanical properties of the medium surrounding the probe particle are comparable to the bulk mechanical properties. Figure 6 shows the nonlocal storage modulus and loss modulus at several length scales (i.e., distances between the two particles). The nonlocal mechanical properties probed by the two-particle microrheological approach agree reasonably well with the results obtained by the single-particle microrheological approach, as well as with the bulk viscoelastic properties, over the accessible frequency range of the bulk rheometer.


Fig. 7 A sketch of an oscillatory optical-tweezers-based cytorheometer with an intracellular granular structure (lamellar body, right circle) or an extracellular antibody-coated particle (left circle)

In conclusion, since the two-particle microrheology technique allows the distance between the two particles to be varied systematically, the nonlocal micromechanical properties of the medium between two probe particles can be compared with the local micromechanical properties surrounding a single probe particle to study the homogeneity of the medium; for homogeneous polymer solutions, the local and nonlocal micromechanical properties agree well with the bulk mechanical properties, as expected. For systems in mechanical equilibrium, good agreement was achieved between the measurements of the imaginary part of the response function α″(ω) by the active (single- and two-particle) and passive approaches. We discuss nonequilibrium systems in the next section.

Microrheology of Living Cells

Microrheology of living cells can be used to gain insight into the inhomogeneous structure and dynamics of the cytoskeleton [56, 94-96]. As a living polymer network, the cytoskeleton is constantly polymerizing and depolymerizing, with activity that depends on the biological functions of the cell [97]. It is known that the cytoskeleton maintains the cell shape and regulates cellular mechanics, but there is also evidence indicating that the cytoskeletal network may contribute to other important cellular functions. It has been shown that the cytoskeleton is directly connected to the nucleus and that external shear-stress stimuli [98] can lead to cytoskeletal reorganization as well as to modulations of gene expression in cells [99]. It is also well known that the differentiation of stem cells depends on the attachment of the cell to a substrate [100-105]. Knowledge of the role of mechanical forces in cellular signaling pathways, together with quantification of how the transmission of mechanical forces through the cytoskeleton affects the micromechanical properties, can provide a better understanding of the complex system of cellular signaling pathways [99].

Comparative Study of Extracellular and Intracellular Microrheology

The ability to measure mechanical properties at the subcellular level is important for the study of mechanotransduction. Rotational optical tweezers [106], which often require a spherical birefringent probe particle, enable highly localized measurements because the probe particle does not change position in the surrounding medium. In contrast, oscillatory optical tweezers, which do not require the probe particle to be birefringent, enable the trapping of an endogenous intracellular organelle as a probe to measure the intracellular microrheological properties. In this section, we present a comparison of measurements of cellular mechanical properties using a probe particle located exterior to the plasma membrane and an intracellular probe endogenous to the cell, as shown in Fig. 7, where


[Fig. 8: two log-log panels, (a) and (b), plotting G′ or G″ (dyne cm⁻²) versus frequency (Hz) over roughly 1–1000 Hz]

Fig. 8 The storage modulus G′ (solid symbols) and the loss modulus G″ (open symbols) of cells, as a function of frequency (ω), probed with (a) an anti-integrin-conjugated silica particle attached to the plasma membrane and (b) an intracellular organelle. In both (a) and (b), the dashed line represents a power-law fit to G′ (Note: G″(ω) does not follow the power law)

a micron-sized endogenous intracellular organelle is shown on the right and an extracellular 1.5 μm silica particle, attached to the cytoskeleton through transmembrane integrin receptors, is shown on the left on top of the cell membrane [29, 30]. Figure 8a, b shows microrheology data obtained with extracellular and intracellular probes, respectively. Both the storage modulus (G′) and the magnitude of the complex shear modulus (|G*|) followed a weak power-law dependence on frequency. This behavior has been attributed to a distribution of relaxation times in the soft material. Fabry et al. interpret cells as soft glassy materials, in which disorder and metastability may be essential features underlying the cell's mechanical functions [107]. The exponents of the power-law dependence from the intra- and extracellular measurements are similar; however, the differences in the magnitudes of the moduli from the two measurements are statistically significant. The larger moduli measured with the external particles might be partly due to the extensional stiffness or other mechanical properties of the plasma membrane. Although using an intracellular organelle as a probe provides a direct measurement of intracellular local mechanical properties, the optical force constant of the optical tweezers, k_OT, is determined by assuming that the indices of refraction of the probe and the surrounding material are known, which introduces uncertainty into the measured mechanical properties.
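The weak power-law behavior G′(ω) ∝ ω^β can be extracted from measured moduli by a linear fit in log-log space. A minimal sketch of this fit; the function name and the synthetic data are illustrative assumptions, not values from the chapter:

```python
import numpy as np

def powerlaw_fit(freq_hz, g_prime):
    """Fit G'(w) = A * w**beta by linear least squares in log-log space.

    Returns (A, beta); beta is the power-law exponent.
    """
    logf = np.log10(freq_hz)
    logg = np.log10(g_prime)
    beta, log_a = np.polyfit(logf, logg, 1)  # slope = exponent, intercept = log10(A)
    return 10.0 ** log_a, beta

# Synthetic moduli obeying a weak power law (beta ~ 0.2 is typical of
# soft glassy rheology; the prefactor 500 dyne/cm^2 is arbitrary).
freq = np.logspace(0, 3, 20)        # 1 Hz to 1 kHz
g_prime = 500.0 * freq ** 0.2       # dyne/cm^2, illustrative
A, beta = powerlaw_fit(freq, g_prime)
print(round(A), round(beta, 3))     # prints: 500 0.2
```

On real data one would fit only G′, since, as noted above, G″(ω) does not follow the power law.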

Comparative Study of Active and Passive Cellular Microrheologies

Cells generate and react to forces through a nonequilibrium network of cytoskeletal filaments and motor proteins [108, 109]. The mechanical properties of active cytoskeletal networks can change due to intracellular tension created by the active motors [35, 110–117]. Motor activity is central to how nonequilibrium systems [118] may be created in living cells [33, 119]. Nonequilibrium mechanical behavior has been observed in an in vitro model system consisting of an actin network with embedded myosin motors [34] and has also been observed in living cells [32, 33]. In this section, we discuss the use of combined active and passive microrheological approaches to characterize intracellular forces and mechanical properties in such nonequilibrium biological systems. The contribution of active motor proteins to cellular mechanical properties is assessed by comparing the data obtained by the active and passive approaches against the predictions of the fluctuation-dissipation theorem. For active microrheology (AMR), the mechanical properties can be directly determined from the experimentally measured particle displacement magnitude and phase shift (Eq. 6) [29]. By the fluctuation-dissipation theorem (Eq. 11), the theoretical value of the thermal fluctuations (C_thermal) is given by 2k_BTα″/ω. For passive microrheology



Fig. 9 Experimental results on violation of the fluctuation-dissipation theorem (the ratio of the measured spectrum C to the thermal prediction 2k_BTα″/ω) as a function of frequency, probed with (a) an endosome and (b) an engulfed microparticle

Table 1 A comparison of the measurements of cellular nonthermal forces

(PMR), the fluctuations of a probe particle in a living cell were tracked by a CCD camera. In an equilibrium system, where only thermal forces act on the probe, the power spectral density of the displacement fluctuations is directly related to the value of the thermal fluctuations (2k_BTα″/ω) measured by AMR. Violation of the fluctuation-dissipation theorem is defined as the ratio of the experimental power spectrum (C) measured by passive microrheology to the thermal fluctuations (2k_BTα″/ω) measured by active microrheology. The measurements using an endosome [120] and an engulfed micron-sized polystyrene particle as a probe (Wei et al.) are shown in Fig. 9a, b, respectively. In previous studies, this ratio was also defined as the ratio of an effective temperature of the nonequilibrium system to the bath temperature [32, 33]. At frequencies lower than 10 Hz, the ratios obtained via an engulfed microparticle are much smaller than the corresponding values obtained via an endosome [95], presumably due to the different nonthermal fluctuations induced by different molecular motors (i.e., actin motors vs. microtubule motors). At frequencies higher than 10 Hz, the AMR and PMR results agree, indicating the frequency limit of the nonequilibrium dynamics caused by molecular motors. These results are qualitatively consistent with previous studies using either a probe particle attached to the cellular cortex [32, 93] or a probe particle bound inside a cell [33]. A "nonthermal force" ⟨f_a²⟩ [35], caused by active driving forces (e.g., motor activities), can be obtained from the extra fluctuations, i.e., the difference between the total fluctuation spectrum C measured by PMR and the thermal fluctuations (2k_BTα″/ω) estimated by AMR:

⟨f_a²⟩ = C − C_thermal = C − 2k_BTα″/ω

(15)
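Once the PMR spectrum C(ω) and the imaginary part of the AMR response, α″(ω), are sampled on a common frequency grid, the violation ratio and the extra fluctuations of Eq. 15 reduce to elementwise arithmetic. A minimal sketch; the function names and all numerical values are illustrative assumptions, not data from the chapter:

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def thermal_spectrum(alpha_imag, omega, T=310.0):
    """FDT prediction for the fluctuation spectrum: C_thermal = 2 kB T alpha''(w) / w (Eq. 11)."""
    return 2.0 * kB * T * alpha_imag / omega

def violation_ratio(C, alpha_imag, omega, T=310.0):
    """Ratio of the PMR spectrum to the FDT prediction; equal to 1 in equilibrium."""
    return C / thermal_spectrum(alpha_imag, omega, T)

def nonthermal_fluctuations(C, alpha_imag, omega, T=310.0):
    """Extra fluctuations <f_a^2> = C - C_thermal (Eq. 15)."""
    return C - thermal_spectrum(alpha_imag, omega, T)

# Illustrative numbers: the measured spectrum exceeds the thermal prediction
# at low frequency (motor-driven fluctuations) and agrees at high frequency.
omega = np.array([1.0, 10.0, 100.0])      # rad/s
alpha_imag = np.array([1e6, 1e5, 1e4])    # imaginary response, m/N (illustrative)
C_th = thermal_spectrum(alpha_imag, omega)
C = C_th * np.array([10.0, 2.0, 1.0])     # illustrative PMR spectrum
print(violation_ratio(C, alpha_imag, omega))
```

In practice C(ω) would come from a PSD estimate (e.g., Welch's method) of the camera-tracked particle positions, and α″(ω) from the AMR amplitude and phase data.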



The cellular nonthermal forces, probed using either a particle attached to the cellular cortex [32, 93] or a particle bound inside a cell [33], can be studied by both local and nonlocal microrheology [33]. Compared with previous reports [32, 33, 93], as summarized in Table 1, the measurements using an engulfed particle (1 μm-diameter polystyrene) as a probe show that the intracellular force is smaller than the tension on the cellular cortex. This result indicates that intracellular motors might be weaker or less active than motors on the cellular cortex. Whereas extracellular probes attached to the cytoskeleton provide measurements of global cell mechanical properties, intracellular probes provide direct measurements of intracellular mechanical properties. The latter may be more useful for investigating the microrheology of intracellular heterogeneity and temporal fluctuations. Comparisons of passive and active intracellular microrheologies allow thermal and nonthermal fluctuations to be distinguished in a nonequilibrium system. Studying intracellular nonthermal forces and mechanical properties would help advance our current understanding of how cells sense and respond to their mechanical environment, leading to new designs in biomaterials and advancing our understanding of diseases linked to cellular mechanotransduction [99, 121–125].

Summary and Conclusions

This chapter describes several experiments that use oscillatory optical tweezers to determine the viscoelasticity of mechanical systems with complex shear moduli ranging from 10⁻¹ to 10⁴ dyne/cm² over a wide frequency range (10⁻¹ < ω < 10³ rad/s). The measurements of micromechanical properties of semidilute polymer solutions show that the techniques yield results consistent with bulk properties when the length scale of inhomogeneity intrinsic to the polymer network is much smaller than the size of the probe particle. The results are also in good agreement with measurements by the passive approach for equilibrium mechanical systems. The two-particle technique allows the microscopic inhomogeneous mechanical properties to be studied at length scales set by the distance between the probe particles. Microrheology inside living cells, using an engulfed microparticle or an endosome as a probe, demonstrates the possibility of investigating intracellular heterogeneity and temporal fluctuations of viscoelasticity, as well as nonthermal forces in nonequilibrium cellular systems, from which important biomedical implications can be expected.

References

1. Ashkin A (1970) Acceleration and trapping of particles by radiation pressure. Phys Rev Lett 24(4):156–159
2. Ashkin A, Dziedzic J, Bjorkholm J, Chu S (1986) Observation of a single-beam gradient force optical trap for dielectric particles. Opt Lett 11(5):288–290
3. Ashkin A (1997) Optical trapping and manipulation of neutral particles using lasers. Proc Natl Acad Sci U S A 94:4853–4860
4. Svoboda K, Schmidt CF, Schnapp BJ, Block SM (1993) Direct observation of kinesin stepping by optical trapping interferometry. Nature 365:721–727
5. Lien C-H, Wei M-T, Tseng T-Y, Lee C-D, Wang C, Wang T-F, Ou-Yang HD, Chiou A (2009) Probing the dynamic differential stiffness of dsDNA interacting with RecA in the enthalpic regime. Opt Express 17(22):20376–20385


6. Ashkin A, Dziedzic JM (1987) Optical trapping and manipulation of single cells using infrared laser beams. Nature 330:769–771
7. Svoboda K, Schmidt CF, Branton D, Block SM (1992) Conformation and elasticity of the isolated red blood cell membrane skeleton. Biophys J 63:784–793
8. Liu S-L, Karmenyan A, Wei M-T, Huang C-C, Lin C-H, Chiou A (2007) Optical forced oscillation for the study of lectin-glycoprotein interaction at the cellular membrane of a Chinese hamster ovary cell. Opt Express 15(5):2713–2723
9. Wei M-T, Hua K-F, Hsu J, Karmenyan A, Tseng K-Y, Wong C-H, Hsu H-Y, Chiou A (2007) The interaction of lipopolysaccharide with membrane receptors on macrophages pretreated with extract of Reishi polysaccharides measured by optical tweezers. Opt Express 15(17):11020–11032
10. Stout AL (2001) Detection and characterization of individual intermolecular bonds using optical tweezers. Biophys J 80:2976–2986
11. Meiners J-C, Quake SR (1999) Direct measurement of hydrodynamic cross correlations between two particles in an external potential. Phys Rev Lett 82(10):2211–2214
12. Hough LA, Ou-Yang HD (2002) Correlated motions of two hydrodynamically coupled particles confined in separate quadratic potential wells. Phys Rev E 65:021906
13. Henderson S, Mitchell S, Bartlett P (2002) Propagation of hydrodynamic interactions in colloidal suspensions. Phys Rev Lett 88:088302
14. Ou-Yang HD, Wei M-T (2010) Complex fluids: probing mechanical properties of biological systems with optical tweezers. Annu Rev Phys Chem 61:421–440
15. Yao A, Tassieri M, Padgett M, Cooper J (2009) Microrheology with optical tweezers. Lab Chip 9:2568–2575
16. Preece D, Warren R, Evans RML, Gibson GM, Padgett MJ, Cooper JM, Tassieri M (2011) Optical tweezers: wideband microrheology. J Opt 13:044022
17. Pertsinidis A, Ling XS (2001) Equilibrium configurations and energetics of point defects in two-dimensional colloidal crystals. Phys Rev Lett 87(9):098303
18. Crocker JC, Grier DG (1994) Microscopic measurement of the pair interaction potential of charge-stabilized colloid. Phys Rev Lett 73(2):352–355
19. En A-R, Díaz-Leyva P, Arauz-Lara JL (2005) Microrheology from rotational diffusion of colloidal particles. Phys Rev Lett 94:106001
20. Wilson LG, Harrison AW, Poon WCK, Puertas AM (2011) Microrheology and the fluctuation theorem in dense colloids. EPL 93:58007
21. Murazawa N, Juodkazis S, Tanamura Y, Misawa H (2006) Rheology measurement at liquid-crystal/water interface using laser tweezers. Jpn J Appl Phys 45(2A):977–982
22. Koenig GM Jr, Ong R, Cortes AD, Antonio Moreno-Razo J, Pablo JJ, Abbott NL (2009) Single nanoparticle tracking reveals influence of chemical functionality of nanoparticles on local ordering of liquid crystals and nanoparticle diffusion coefficients. Nano Lett 9(7):2794–2801
23. Mizuno D, Kimura Y, Hayakawa R (2004) Electrophoretic microrheology of a dilute lamellar phase: relaxation mechanisms in frequency-dependent mobility of nanometer-sized particles between soft membranes. Phys Rev E 70:011509
24. Hough LA, Islam MF, Janmey PA, Yodh AG (2004) Viscoelasticity of single wall carbon nanotube suspensions. Phys Rev Lett 93(16):168102
25. Helfer E, Harlepp S, Bourdieu L, Robert J, MacKintosh FC, Chatenay D (2001) Viscoelastic properties of actin-coated membranes. Phys Rev E 63:021904
26. Helfer E, Harlepp S, Bourdieu L, Robert J, MacKintosh FC, Chatenay D (2000) Microrheology of biopolymer-membrane complexes. Phys Rev Lett 85:457–460


27. Helfer E, Harlepp S, Bourdieu L, Robert J, MacKintosh FC, Chatenay D (2001) Buckling of actin-coated membranes under application of a local force. Phys Rev Lett 87(8):088103
28. Yanai M, Butler JP, Suzuki T, Kanda A, Kurachi M, Tashiro H, Sasaki H (1999) Intracellular elasticity and viscosity in the body, leading, and trailing regions of locomoting neutrophils. Am J Physiol Cell Physiol 277:C432–C440
29. Wei M-T, Zaorski A, Yalcin HC, Wang J, Hallow M, Ghadiali SN, Chiou A, Ou-Yang HD (2008) A comparative study of living cell micromechanical properties by oscillatory optical tweezers. Opt Express 16(12):8594–8603
30. Yalcin HC, Hallow KM, Wang J, Wei M-T, Ou-Yang HD, Ghadiali SN (2009) Influence of cytoskeletal structure and mechanics on epithelial cell injury during cyclic airway reopening. Am J Physiol Lung Cell Mol Physiol 297:L881–L891
31. Balland M, Desprat N, Icard D, Féréol S, Asnacios A, Browaeys J, Hénon S, Gallet F (2006) Power laws in microrheology experiments on living cells: comparative analysis and modeling. Phys Rev E 74:021911
32. Gallet F, Arcizet D, Bohec P, Richert A (2009) Power spectrum of out-of-equilibrium forces in living cells: amplitude and frequency dependence. Soft Matter 5:2947–2953
33. Wilhelm C (2008) Out-of-equilibrium microrheology inside living cells. Phys Rev Lett 101:028101
34. Mizuno D, Tardin C, Schmidt CF, MacKintosh FC (2007) Nonequilibrium mechanics of active cytoskeletal networks. Science 315:370–373
35. Mizuno D, Head DA, MacKintosh FC, Schmidt CF (2008) Active and passive microrheology in equilibrium and nonequilibrium systems. Macromolecules 41(19):7194–7202
36. Mofrad MRK (2009) Rheology of the cytoskeleton. Annu Rev Fluid Mech 41:433–453
37. Pelletier V, Gal N, Fournier P, Kilfoil ML (2009) Microrheology of microtubule solutions and actin-microtubule composite networks. Phys Rev Lett 102:188303
38. Zhu X, Kundukad B, van der Maarel JRC (2008) Viscoelasticity of entangled λ-phage DNA solutions. J Chem Phys 129:185103
39. Mason TG, Ganesan K, van Zanten JH, Wirtz D, Kuo SC (1997) Particle tracking microrheology of complex fluids. Phys Rev Lett 79:3282–3285
40. Hough LA, Ou-Yang HD (2006) Viscoelasticity of aqueous telechelic poly(ethylene oxide) solutions: relaxation and structure. Phys Rev E 73:031802
41. Chiang C-C, Wei M-T, Chen Y-Q, Yen P-W, Huang Y-C, Chen J-Y, Lavastre O, Guillaume H, Guillaume D, Chiou A (2011) Optical tweezers based active microrheology of sodium polystyrene sulfonate (NaPSS). Opt Express 19(9):8847–8854
42. Lee H, Shin Y, Kim ST, Reinherz EL, Lang MJ (2012) Stochastic optical active rheology. Appl Phys Lett 101:031902
43. Latinovic O, Hough LA, Ou-Yang HD (2010) Structural and micromechanical characterization of type I collagen gels. J Biomech 43:500–506
44. Shayegan M, Forde NR (2013) Microrheological characterization of collagen systems: from molecular solutions to fibrillar gels. PLoS One 8(8):e70590
45. Hénon S, Lenormand G, Richert A, Gallet F (1999) A new determination of the shear modulus of the human erythrocyte membrane using optical tweezers. Biophys J 76:1145–1151
46. Rancourt-Grenier S, Wei M-T, Bai J-J, Chiou A, Bareil PP, Duval P-L, Sheng Y (2010) Dynamic deformation of red blood cell in dual-trap optical tweezers. Opt Express 18(10):10462–10472



47. Lim CT, Dao M, Suresh S, Sow CH, Chew KT (2004) Large deformation of living cells using laser traps. Acta Mater 52:1837–1845
48. Dao M, Lim CT, Suresh S (2003) Mechanics of the human red blood cell deformed by optical tweezers. J Mech Phys Solid 51:2259–2280
49. Lyubin EV, Khokhlova MD, Skryabina MN, Fedyanin AA (2012) Cellular viscoelasticity probed by active rheology in optical tweezers. J Biomed Opt 17(10):101510
50. Meiners J-C, Quake SR (2000) Femtonewton force spectroscopy of single extended DNA molecules. Phys Rev Lett 84(21):5014–5017
51. Hough LA, Ou-Yang HD (1999) A new probe for mechanical testing of nanostructures in soft materials. J Nanopart Res 1:495–499
52. Wei M-T (2014) Microrheology of soft matter and living cells in equilibrium and nonequilibrium systems. Ph.D. thesis, Bioengineering, Lehigh University, Bethlehem
53. Crocker JC, Valentine MT, Weeks ER, Gisler T, Kaplan PD, Yodh AG, Weitz DA (2000) Two-point microrheology of inhomogeneous soft materials. Phys Rev Lett 85(4):888–891
54. Levine AJ, Lubensky TC (2000) One- and two-particle microrheology. Phys Rev Lett 85:1774–1777
55. Hoffman BD, Crocker JC (2009) Cell mechanics: dissecting the physical responses of cells to force. Annu Rev Biomed Eng 11:259–288
56. Hoffman BD, Massiera G, Citters KMV, Crocker JC (2006) The consensus mechanics of cultured mammalian cells. Proc Natl Acad Sci U S A 103(27):10259–10264
57. Valentine MT, Dewalt LE, Ou-Yang HD (1996) Forces on a colloidal particle in a polymer solution: a study using optical tweezers. J Phys Condens Matter 8:9477–9482
58. Ou-Yang HD (1999) Design and applications of oscillating optical tweezers for direct measurements of colloidal forces. In: Farinato RS, Dubin PL (eds) Colloid–polymer interactions: from fundamentals to practice. Wiley, New York
59. Wright WH, Sonek GJ, Berns MW (1993) Radiation trapping forces on microspheres with optical tweezers. Appl Phys Lett 63(9):715–717
60. Ghislain LP, Switz NA, Webb WW (1994) Measurement of small forces using an optical trap. Rev Sci Instrum 65(9):2762–2768
61. Ashkin A (1992) Forces of a single-beam gradient laser trap on a dielectric sphere in the ray optics regime. Biophys J 61:569–582
62. Mazolli A, Neto PAM, Nussenzveig HM (2003) Theory of trapping forces in optical tweezers. Proc R Soc Lond A 459:3021–3041
63. Richardson AC, Reihani SNS, Oddershede LB (2008) Non-harmonic potential of a single beam optical trap. Opt Express 16(20):15709–15717
64. Merenda F, Boer G, Rohner J, Delacrétaz GD, Salathé R-P (2006) Escape trajectories of single-beam optically trapped micro-particles in a transverse fluid flow. Opt Express 14(4):1685–1699
65. Greenleaf WJ, Woodside MT, Abbondanzieri EA, Block SM (2005) Passive all-optical force clamp for high-resolution laser trapping. Phys Rev Lett 95:208102
66. Neves AAR, Fontes A, Pozzo LY, Thomaz AA, Chillce E, Rodriguez E, Barbosa LC, Cesar CL (2006) Electromagnetic forces for an arbitrary optical trapping of a spherical dielectric. Opt Express 14(26):13101–13106
67. Jahnel M, Behrndt M, Jannasch A, Schäffer E, Grill SW (2011) Measuring the complete force field of an optical trap. Opt Lett 36(7):1260–1262
68. Ling L, Zhou F, Huang L, Guo H, Li Z, Li Z-Y (2011) Perturbation between two traps in dual-trap optical tweezers. J Appl Phys 109:083116


69. Huang C-C, Wang C-F, Mehta DS, Chiou A (2001) Optical tweezers as sub-pico-newton force transducers. Opt Commun 195:41–48
70. Rohrbach A, Kress H, Stelzer EHK (2003) Three-dimensional tracking of small spheres in focused laser beams: influence of the detection angular aperture. Opt Lett 28(6):411–413
71. Rohrbach A, Tischer C, Neumayer D, Florin E-L, Stelzer EHK (2004) Trapping and tracking a local probe with a photonic force microscope. Rev Sci Instrum 75(6):2197–2210
72. Rohrbach A (2005) Stiffness of optical traps: quantitative agreement between experiment and electromagnetic theory. Phys Rev Lett 95(16):168102
73. Wei M-T, Yang K-T, Karmenyan A, Chiou A (2006) Three-dimensional optical force field on a Chinese hamster ovary cell in a fiber-optical dual-beam trap. Opt Express 14(7):3056–3064
74. Wei M-T, Chiou A (2005) Three-dimensional tracking of Brownian motion of a particle trapped in optical tweezers with a pair of orthogonal tracking beams and the determination of the associated optical force constants. Opt Express 13(15):5798–5806
75. Ghislain LP, Webb WW (1993) Scanning-force microscope based on an optical trap. Opt Lett 18(19):1678–1680
76. Wei M-T, Ng J, Chan CT, Chiou A, Ou-Yang HD (2012) Transverse force profiles of individual dielectric particles in an optical trap. In: SPIE optics photonics, San Diego
77. Latinovic O (2010) Micromechanics and structure of soft and biological materials: an optical tweezers study. Verlag Dr. Müller Publishing, Saarbrücken
78. Wright WH, Sonek GJ, Berns MW (1994) Parametric study of the forces on microspheres held by optical tweezers. Appl Optics 33(9):1735–1748
79. Barton JP, Alexander DR, Schaub SA (1989) Theoretical determination of net radiation force and torque for a spherical particle illuminated by a focused laser beam. J Appl Phys 66:4594–4602
80. Zemánek P, Jonáš A, Šrámek L, Liška M (1998) Optical trapping of Rayleigh particles using a Gaussian standing wave. Opt Commun 151:273–285
81. Ganic D, Gan X, Gu M (2004) Exact radiation trapping force calculation based on vectorial diffraction theory. Opt Express 12(12):2670–2675
82. Viana NB, Mazolli A, Neto PAM, Nussenzveig HM (2006) Absolute calibration of optical tweezers. Appl Phys Lett 88:131110
83. Ferry JD (1970) Viscoelastic properties of polymers. Wiley, New York
84. Brau RR, Ferrer JM, Lee H, Castro CE, Tam BK, Tarsa PB, Matsudaira P, Boyce MC, Kamm R, Lang MJ (2007) Passive and active microrheology with optical tweezers. J Opt A: Pure Appl Opt 9:S103–S112
85. Dasgupta BR, Tee S-Y, Crocker JC, Frisken BJ, Weitz DA (2002) Microrheology of polyethylene oxide using diffusing wave spectroscopy and single scattering. Phys Rev E 65:051505
86. Huang Y, Santore MM (2002) Dynamics in adsorbed layers of associative polymers in the limit of strong backbone-surface attractions. Langmuir 18(6):2158–2165
87. Gittes F, MacKintosh FC (1998) Dynamic shear modulus of a semiflexible polymer network. Phys Rev E 58(2):R1241–R1244
88. Schnurr B, Gittes F, MacKintosh FC, Schmidt CF (1997) Determining microscopic viscoelasticity in flexible and semiflexible polymer networks from thermal fluctuations. Macromolecules 30(25):7781–7792
89. Mason TG, Weitz DA (1995) Optical measurements of frequency-dependent linear viscoelastic moduli of complex fluids. Phys Rev Lett 74(7):1250–1253



90. Green MS, Tobolsky AV (1946) A new approach to the theory of relaxing polymeric media. J Chem Phys 14:80. doi:10.1063/1.1724109
91. Annable T, Buscall R, Ettelaie R, Whittlestone D (1993) The rheology of solutions of associating polymers: comparison of experimental behavior with transient network theory. J Rheol 37:695–727
92. Pham QT, Russel WB, Thibeault JC, Lau W (1999) Polymeric and colloidal modes of relaxation in latex dispersions containing associative triblock copolymers. J Rheol 43:1599–1616
93. Mizuno D, Bacabac R, Tardin C, Head D, Schmidt CF (2009) High-resolution probing of cellular force transmission. Phys Rev Lett 102:168102
94. Hale CM, Sun SX, Wirtz D (2009) Resolving the role of actomyosin contractility in cell microrheology. PLoS One 4(9):e7054
95. Robert D, Nguyen T-H, Fo G, Wilhelm C (2010) In vivo determination of fluctuating forces during endosome trafficking using a combination of active and passive microrheology. PLoS One 5(4):e10046
96. Kollmannsberger P, Fabry B (2011) Linear and nonlinear rheology of living cells. Annu Rev Mater Res 41:75–97
97. Aratyn-Schaus Y, Oakes PW, Gardel ML (2011) Dynamic and structural signatures of lamellar actomyosin force generation. Mol Biol Cell 22:1330–1339
98. Wang N, Butler JP, Ingber DE (1993) Mechanotransduction across the cell surface and through the cytoskeleton. Science 260:1124–1127
99. Wang Y, Botvinick EL, Zhao Y, Berns MW, Usami S, Tsien RY, Chien S (2005) Visualizing the mechanical activation of Src. Nature 434:1040–1045
100. Engler AJ, Sen S, Sweeney HL, Discher DE (2006) Matrix elasticity directs stem cell lineage specification. Cell 126(4):677–689. doi:10.1016/j.cell.2006.06.044
101. Wang N, Tolić-Nørrelykke IM, Chen J, Mijailovich SM, Butler JP, Fredberg JJ, Stamenović D (2002) Cell prestress. I. Stiffness and prestress are closely associated in adherent contractile cells. Am J Physiol Cell Physiol 282:C606–C616
102. Byfield FJ, Wen Q, Levental I, Nordstrom K, Arratia PE, Miller RT, Janmey PA (2009) Absence of filamin A prevents cells from responding to stiffness gradients on gels coated with collagen but not fibronectin. Biophys J 96:5095–5102
103. Trichet L, Digabel JL, Hawkins RJ, Vedula SRK, Gupta M, Ribrault C, Hersen P, Voituriez R, Ladoux B (2012) Evidence of a large-scale mechanosensing mechanism for cellular adaptation to substrate stiffness. Proc Natl Acad Sci U S A 109(18):6933–6938
104. Han SJ, Bielawski KS, Ting LH, Rodriguez ML, Sniadecki NJ (2012) Decoupling substrate stiffness, spread area, and micropost density: a close spatial relationship between traction forces and focal adhesions. Biophys J 103(4):640–648
105. Tee S-Y, Fu J, Chen CS, Janmey PA (2011) Cell shape and substrate rigidity both regulate cell stiffness. Biophys J 100(5):L25–L27
106. Bishop AI, Nieminen TA, Heckenberg NR, Rubinsztein-Dunlop H (2004) Optical microrheology using rotating laser-trapped particles. Phys Rev Lett 92(19):198104
107. Fabry B, Maksym GN, Butler JP, Glogauer M, Navajas D, Fredberg JJ (2001) Scaling the microrheology of living cells. Phys Rev Lett 87(14):148102
108. Koenderink GH, Dogic Z, Nakamura F, Bendix PM, MacKintosh FC, Hartwig JH, Stossel TP, Weitz DA (2009) An active biopolymer network controlled by molecular motors. Proc Natl Acad Sci U S A 106(36):15192–15197


109. Silva MS, Depken M, Stuhrmann B, Korsten M, MacKintosh FC, Koenderink GH (2011) Active multistage coarsening of actin networks driven by myosin motors. Proc Natl Acad Sci U S A 108(23):9408–9413
110. John K, Caillerie D, Peyla P, Raoult A, Misbah C (2013) Nonlinear elasticity of cross-linked networks. Phys Rev E 87:042721
111. Reymann A-C, Boujemaa-Paterski R, Martiel J-L, Guérin C, Cao W, Chin HF, Cruz EMDL, Théry M, Blanchoin L (2012) Actin network architecture can determine myosin motor activity. Science 336(6086):1310–1314
112. Stuhrmann B, Silva MS, Depken M, MacKintosh FC, Koenderink GH (2012) Nonequilibrium fluctuations of a remodeling in vitro cytoskeleton. Phys Rev E 86:020901(R)
113. Lau AWC, Hoffman BD, Davies A, Crocker JC, Lubensky TC (2003) Microrheology, stress fluctuations, and active behavior of living cells. Phys Rev Lett 91:198101
114. MacKintosh FC, Levine AJ (2008) Nonequilibrium mechanics and dynamics of motor-activated gels. Phys Rev Lett 100:018104
115. Brangwynne CP, Koenderink GH, MacKintosh FC, Weitz DA (2008) Nonequilibrium microtubule fluctuations in a model cytoskeleton. Phys Rev Lett 100:118104
116. Kollmannsberger P, Mierke CT, Fabry B (2011) Nonlinear viscoelasticity of adherent cells is controlled by cytoskeletal tension. Soft Matter 7:3127–3132
117. Fernández P, Pullarkat PA, Ott A (2006) A master relation defines the nonlinear viscoelasticity of single fibroblasts. Biophys J 90:3796–3805
118. Yao NY, Broedersz CP, Depken M, Becker DJ, Pollak MR, MacKintosh FC, Weitz DA (2013) Stress-enhanced gelation: a dynamic nonlinearity of elasticity. Phys Rev Lett 110:018103
119. Bruno L, Salierno M, Wetzler DE, Despósito MA, Levi V (2011) Mechanical properties of organelles driven by microtubule-dependent molecular motors in living cells. PLoS One 6(4):e18332
120. Wei M-T, Ou-Yang HD (2010) Thermal and non-thermal intracellular mechanical fluctuations of living cells. In: SPIE optics photonics, San Diego, p 77621L
121. Chien S (2007) Mechanotransduction and endothelial cell homeostasis: the wisdom of the cell. Am J Physiol Heart Circ Physiol 292:H1209–H1224
122. Chen CS (2008) Mechanotransduction – a field pulling together? J Cell Sci 121(20):3285–3291
123. Wang N, Tytell JD, Ingber DE (2009) Mechanotransduction at a distance: mechanically coupling the extracellular matrix with the nucleus. Nat Rev Mol Cell Biol 10:75–82
124. Parker KK, Ingber DE (2007) Extracellular matrix, mechanotransduction and structural hierarchies in heart tissue engineering. Philos Trans R Soc B 2114:1–13
125. Alamo JC, Norwich GN, Li Y-sJ, Lasheras JC, Chien S (2008) Anisotropic rheology and directional mechanotransduction in vascular endothelial cells. Proc Natl Acad Sci U S A 105(40):15411–15416


Handbook of Photonics for Biomedical Engineering DOI 10.1007/978-94-007-6174-2_7-1 # Springer Science+Business Media Dordrecht 2014

Cadmium-Free Quantum Dots for Biophotonic Imaging and Sensing

Butian Zhang^a, Yucheng Wang^a, Rui Hu^a, Indrajit Roy^b and Ken-Tye Yong^a*
^a School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore
^b Department of Chemistry, University of Delhi, Delhi, India

Abstract

The use of cadmium-based quantum dots (QDs) in biomedical applications has made substantial progress during the past few years. However, several environmental, clinical, and toxicological groups have raised serious concerns about cadmium-related toxicity and are doubtful about using cadmium-based QDs for clinical research. Studies have shown that some cadmium-based QD formulations induce in vitro and in vivo toxicity when they degrade and release cadmium ions in the biological environment. This concern has prompted the QD community to explore new ways to design and develop the next generation of cadmium-free QDs to replace the commonly used heavy-metal-based nanocrystals in biological sciences research and applications. With the advancement of solution-phase synthesis methods, a series of QDs based on indium phosphide (InP), copper indium sulfide (CuInS2), silver indium sulfide (AgInS2), silver sulfide (Ag2S), doped Zn chalcogenides, carbon (C), and silicon (Si) have been successfully developed; they bear a strong resemblance to cadmium-based QDs in terms of optical properties and colloidal stability. Many research groups have started to use these cadmium-free QDs and to evaluate them in various in vitro and in vivo models. However, some challenges remain to be overcome before bioconjugated cadmium-free QD formulations are ready for biomedical applications and translational medicine research. In this review, our aim is to provide an overview and discussion of the current findings and challenges in designing and applying colloidal cadmium-free nanocrystals as the next generation of optical nanoprobes for theranostic use.
In particular, we highlight the current trend in the synthesis and surface modification of cadmium-free QDs, the use of bioconjugated cadmiumfree QDs for in vitro and in vivo imaging and sensing, surface-cell labeling with QDs, biodistribution of cadmium-free QDs, and the potential toxicity of cadmium-free QDs from cellular to nonhuman primate models. Such information will be viable in generating a set of guidelines for engineering clinically usable QDs for applications ranging from optical image-guided surgery to targeted stem cell therapy research.

Keywords Quantum dots; Biophotonics; Nanomedicine; Biomedical engineering; Nanotoxicity

Introduction To date, semiconductor nanoparticles are regarded as powerful building blocks for future healthcare innovations. As the size of a semiconductor particle approaches the

*Email: [email protected]


nanometer region, their optical and electronic properties become significantly different from those of the bulk form [1]. For this reason, these semiconductor nanoparticles, also known as quantum dots (QDs), exhibit size-tunable band-edge absorption and emission in the visible and near-infrared (NIR) regions, narrow emission and broad absorption bands, large absorption cross sections, and very high photostability [2]. These unique optical properties arise from the quantum confinement of excitons. They make QDs interesting and novel tools for applications such as imaging and sensing [3]. For example, the size-tunable luminescence of QDs can be applied to multiplex imaging. The narrow luminescence bands of QDs are useful for identifying a multitude of biomolecules during dynamic cell imaging. Finally, the bright and highly photostable luminescence of QDs allows one to perform ultrasensitive and prolonged bioimaging at the single-molecule level. The first demonstration of utilizing QDs for biological study came in 1998, when Alivisatos’s and Nie’s groups showed that QDs can be made water dispersible and conjugated with biomolecules for imaging of live cells [4, 5]. Following this breakthrough, QDs have received great attention and have been constantly used in various biological and biomedical studies, such as DNA sensing, live-cell imaging, drug delivery, targeted tumor imaging, and multimodal imaging of animals [6–12]. However, almost 90 % of these studies were accomplished using colloidal cadmium-based QDs. Through the systematic tailoring of various synthesis parameters during nanoparticle growth by many research groups, guidelines for fabricating highly luminescent cadmium-based QDs can be found in many published reports [13–15].
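The size dependence of the bandgap described above can be estimated to first order with the effective-mass (Brus) model, E(R) ≈ Eg,bulk + ħ²π²/(2R²)(1/mₑ + 1/mₕ) − 1.8e²/(4πεε₀R). A minimal sketch follows; the CdSe bulk gap, effective masses, and dielectric constant are commonly quoted literature values used purely for illustration, not figures taken from this chapter:

```python
import math

# Physical constants (SI)
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M0   = 9.1093837015e-31  # electron rest mass, kg
E_CH = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def brus_gap(radius_nm, bulk_gap_eV, me_rel, mh_rel, eps_rel):
    """First-order Brus estimate of the size-dependent bandgap (in eV)."""
    r = radius_nm * 1e-9
    # Confinement term: raises the gap as 1/R^2
    confinement = (HBAR**2 * math.pi**2 / (2 * r**2)) \
        * (1 / (me_rel * M0) + 1 / (mh_rel * M0)) / E_CH
    # Electron-hole Coulomb attraction: lowers the gap as 1/R
    coulomb = 1.8 * E_CH / (4 * math.pi * eps_rel * EPS0 * r)
    return bulk_gap_eV + confinement - coulomb

# Illustrative parameters for CdSe (assumed values: Eg = 1.74 eV,
# me = 0.13 m0, mh = 0.45 m0, eps = 10.6)
for r_nm in (1.5, 2.0, 3.0):
    eg = brus_gap(r_nm, 1.74, 0.13, 0.45, 10.6)
    print(f"R = {r_nm} nm -> Eg ~ {eg:.2f} eV, emission ~ {1239.84/eg:.0f} nm")
```

The confinement term dominates at small radii and blue-shifts the gap, which is why smaller dots emit at shorter wavelengths; the model is only a first-order estimate and overestimates the shift for very small particles.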
So far, cadmium-based QDs have been commonly employed for biological and clinical research owing to the wide availability of cadmium precursors, the well-studied knowledge of their crystal growth, and their size-tunable luminescence ranging from the ultraviolet (UV)-visible to the near-infrared (NIR) range. Due to the widespread and rapidly growing biomedical applications of cadmium-based QDs, their toxicity has been a subject of intense discussion in the last few years [16, 17]. More importantly, studies have demonstrated that some cadmium-based QDs can induce toxicity when they degrade and release cadmium ions into the biological environment [18]. Therefore, researchers ranging from biomedical engineers to clinicians have repeatedly raised the following questions in various reviews and articles: (i) What happens if QDs degrade in the body? (ii) What is the biodistribution of QDs in vivo? (iii) Can these particles be excreted from the body after performing their task? (iv) What is the long-term fate of QDs in the body if they are not excreted? All these questions have raised serious concerns about translating cadmium-based QDs to clinical research applications. To date, there are two general approaches to overcoming these challenges: either passivating the surface of cadmium-based QDs with a biocompatible and long-lasting polymeric layer to prevent the breakdown of the particles, or creating cadmium-free QDs. Many researchers are currently focused on the latter approach and have started to develop cadmium-free QDs using greener chemistry, employing them as substitutes for cadmium-based QDs in some specific biological applications. This review is intended to provide an account of the current research status of the controlled synthesis of colloidal cadmium-free QDs for biomedical applications.
We give an overview of the colloidal synthesis methods for cadmium-free QD preparation in section “Methods for Synthesizing Quantum Dots.” Section “Electronic and Optical Property of Various Types of Cadmium-Free Quantum Dots” summarizes currently available types of cadmium-free QDs and discusses the synthesis of bioconjugated QDs. Section “Bioconjugated Cadmium-Free Quantum Dots for Biomedical Applications” presents bioconjugated cadmium-free QDs for bioimaging and biosensing applications. Section “Nanotoxicity of Cadmium-Free QDs: From Cellular to Primate Studies” reports the current findings on the toxicity of cadmium-free QDs in vitro and in vivo. Finally, section “Summary



and Future Outlook” briefly discusses the conclusion and future outlook of cadmium-free QDs for translational clinical research.

Methods for Synthesizing Quantum Dots In the early 1990s, cadmium-based QDs such as CdSe QDs were generally synthesized by the pyrolysis of dimethyl cadmium (CdMe2) and trioctylphosphine selenide (TOPSe) in a surfactant mixture of trioctylphosphine and trioctylphosphine oxide [19]. This approach is also known as hot colloidal synthesis. The QD synthesis was performed by injecting the cadmium and selenium precursors together into the surfactant mixture at 300 °C, followed by growing the QDs at a relatively lower temperature (such as 250 °C) [14]. This approach was able to create monodispersed QDs. The method was further extended to make other types of cadmium-based QDs, thereby establishing the fundamental protocols for synthesizing various colloidal QDs [20, 21]. For example, CdS or CdTe QDs can be prepared by replacing TOPSe with TOPS or TOPTe. The use of pyrophoric and volatile CdMe2 was critical to preparing highly crystalline CdSe and CdTe QDs. However, it also made the method complex and dangerous, particularly for beginners. Later, the method was significantly improved by using alternative cadmium precursors, such as cadmium chloride, cadmium oxide, cadmium acetate, and cadmium carbonate; QDs of similarly high quality were produced with all of these precursors. To date, the synthesis of cadmium-based QDs with different shapes and sizes is rather simple and straightforward: one can achieve this by manipulating the precursors, temperatures, and chelating ligands in the reaction mixture [22, 23]. More recently, the hot colloidal synthesis method was further extended and applied to making I–III–VI and III–V QDs such as CuInS2, AgInS2, and InP QDs [24–26]. For example, Peng’s group has developed a set of synthetic chemistry parameters for fabricating CuInS2 and AgInS2 QDs using three independent precursors: indium fatty acid salts, copper acetate (or silver nitrate), and sulfur powder [24].
The key to successfully preparing nearly monodispersed CuInS2 or AgInS2 nanocrystals with excellent optical properties was to balance the reactivity of the two cationic precursors with their corresponding ligands, the reaction solution composition, the reaction temperature, and the reaction time. To improve the quantum yield of the QDs, methods were also developed for growing a thin layer of high-bandgap semiconductor shell around the CuInS2 and AgInS2 nanocrystals. Here, we provide a brief description of preparing CuInS2 QDs using the hot colloidal synthesis method; the same method can easily be used to prepare AgInS2 QDs by replacing the copper nitrate with silver nitrate. Briefly, an oleylamine–sulfur solution was prepared in advance by dissolving sulfur in oleylamine. Separately, indium acetate, copper nitrate, oleic acid, and stearic acid were dissolved in octadecene. The mixture was heated to 180 °C under argon flow, followed by injection of dodecanethiol and the oleylamine–sulfur solution into the hot reaction mixture. The reaction mixture was held at 180 °C, and then an aliquot was removed by syringe and injected into a large volume of organic solvent (e.g., toluene or hexane) at room temperature to quench the reaction. The QDs were separated from the solution by the addition of ethanol and centrifugation. The QD precipitate could be redispersed in various organic solvents, such as hexane, toluene, and chloroform. Among the III–V semiconductors, InP is the only one that offers optical properties similar to those of CdSe QDs, but without intrinsic toxicity, as InP does not contain elements such as cadmium, mercury, arsenic, or selenium. However, fabrication of high-quality InP QDs is challenging. Many of the currently made InP QDs display poor emission efficiency, larger size distributions, and poor


colloidal stability. More importantly, the synthesis protocol for InP QDs is more complex than that for CdSe QDs, which makes it difficult to repeat the same experiment and obtain the same quality of InP QDs. Recently, Peng’s group reported some useful solutions to these challenges [27]. The group demonstrated that the growth of InP QDs is typically performed below 190 °C, and no pumping of the reaction system is needed. No size sorting was needed to obtain monodisperse InP QDs. Briefly, the authors demonstrated that InP QDs can be prepared in a one-pot synthesis reaction. In the initial phase, both tris(trimethylsilyl)phosphine and 1-octylamine were dissolved in octadecene to prepare the phosphorus precursor. In a typical synthesis, indium acetate, myristic acid, and octadecene were loaded into a three-neck flask. The mixture was heated to 190 °C under argon flow, and then the tris(trimethylsilyl)phosphine/amine solution was injected into the hot reaction mixture. It is worth noting that tris(trimethylsilyl)phosphine is a highly toxic and pyrophoric substance, and we urge users to handle it with care. Later, the reaction temperature was lowered to 178 °C to continue the growth of the InP QDs. The growth of the QDs could be monitored by taking aliquots at different reaction time points for absorption and emission measurements. It is well accepted that as-prepared QDs are not suitable for bioconjugation and bioimaging applications. This is because their surface is not accessible for bioconjugation reactions, their optical properties are unstable, and their surfaces are capped with hydrophobic moieties that prevent them from being dispersible in water. Thus, the QD surface needs to be modified and made water dispersible for biological applications.
In practice, one can achieve this in two steps: (i) growing a thin layer of nontoxic shell on the QDs and (ii) functionalizing the surface of the core/shell QDs with hydrophilic molecules. Growing a shell on QDs is of paramount importance for biological applications. For example, the shell protects the QD core against surface oxidation and prevents leaching and dissolution of the QDs in the form of heavy metal ions. More importantly, the shell can serve as a unique platform for bioconjugation. Some reports have clearly demonstrated that putting a shell on core QDs not only helps slow the degradation of the QDs but also significantly enhances their luminescence quantum yield, which is useful for single-molecule imaging or sensing applications. In general, higher-bandgap semiconductor materials such as zinc sulfide and zinc selenide are used for shell coating to reduce the surface defects of the core, thereby improving the optical properties of the QDs [28, 29]. In addition, alternative strategies such as thiol, polymer, or silica coatings have been used to passivate the QD surface for bioconjugation [30–33]. Today, bioconjugated core/shell QDs have become a powerful tool for imaging of DNA, organelles, cells, tissues, and even live animals [34]. Both covalent and noncovalent conjugations are used to link QDs with antibodies, proteins, peptides, aptamers, nucleic acids, etc., for targeted delivery applications. For instance, antibody-conjugated QDs can be used for targeted labeling of cancer cells when the cancer cells overexpress the corresponding receptor antigens [35]. So far, core/shell CdSe/ZnS and CdTe/ZnS QDs have been most commonly used in bioimaging and biosensing due to their tunable and photostable luminescence in the visible and NIR regions.
Wide-bandgap II–VI compounds, the Zn chalcogenides (ZnE, E = O, S, Se), are generally used as shell materials for passivating lower-bandgap QD cores to form core/shell nanocrystals such as CdSe/ZnS and CdTe/ZnSe. More recently, they have become an attractive option for preparing cadmium-free QDs. For instance, by doping Zn chalcogenide nanocrystals with various transition metal ions (Mn2+, Cu2+, Fe2+, Co2+, and Ni2+) [36–39] or rare earth ions (Cr3+, Eu3+, Tb3+, and Er3+) [40, 41], one can produce cadmium-free QDs with a desirable emission peak in the visible wavelength range. Such QDs are commonly referred to as doped dots (d-dots) in the QD community. So far, Mn- and Cu-doped ZnSe/S QDs are the two most extensively studied systems within the


category of d-dots. Peng’s group has developed nucleation- and growth-doping strategies for obtaining d-dots with high yield and excellent optical properties [39]. In these strategies, the reactivity of the host/dopant precursors and the temperature are carefully controlled to decouple the doping from the nucleation and growth of the host, ensuring that all the nanocrystals are uniformly doped with the required ions. Because the photoluminescence of d-dots is strongly dependent on the dopants’ distribution and concentration within the QD matrix [42, 43], these decoupling techniques allow one to prepare high-quality, uniform d-dots in a one-pot synthesis. For example, in a typical synthesis of Mn-doped ZnSe QDs, a mixture of manganese stearate, tributylphosphine, octadecylamine, and Se in octadecene was first kept at 280 °C for 1 h to allow the formation of MnSe nuclei. After that, the temperature was lowered to 180 °C to quench the nucleation of MnSe; the lower temperature significantly reduces the reactivity of the manganese precursor in the reaction mixture, thereby stopping the formation of new MnSe nuclei. In the last step, zinc acetate was introduced into the reaction system to support the growth of ZnSe shells over the MnSe cores. Under these mild reaction conditions, further doping is suppressed, but the reaction between the Zn precursor and the MnSe nuclei can still proceed. Besides the I–III–VI, II–VI, and III–V direct-bandgap semiconductor QDs mentioned above, silicon (Si) QDs, a group IV indirect-bandgap semiconductor nanocrystal, have been widely studied over the past few years due to their unique optical properties as fluorescent labels for bioimaging applications [44–47].
Several approaches have been used to synthesize Si QDs, such as fracturing of porous silicon by ultrasonication [48], inverse micellar growth [49], thermal decomposition of organosilane precursors in supercritical solvents [50], solution-phase methods [51, 52], plasma decomposition of silane [53], and high-temperature aerosol reaction [54]. However, most of these methods are not able to produce high-quality QDs with desirable emission peaks. Recently,

Fig. 1 Schematic illustration of laser-driven aerosol reactor for the mass production of Si QDs (Reprinted with permission from Ref. [55]. Copyright # 2003, American Chemical Society)



Swihart’s group demonstrated that silicon QDs with bright visible photoluminescence can be prepared by a combined vapor-phase and solution-phase process, using only inexpensive chemicals [55, 56]. A CO2 laser is used to induce pyrolysis of silane to produce Si QDs at high yield (Fig. 1). Si QDs with an average size of 5 nm can be synthesized by this method. Further etching these QDs with mixtures of hydrofluoric acid (HF) and nitric acid (HNO3) can reduce their size and also passivate their surface so that they exhibit bright visible luminescence at room temperature. The emission peak of the QDs can be tuned from 800 nm to below 500 nm by manipulating the etching time and conditions. Carbon-based QDs are another type of group IV nanocrystal. For example, carbon dots (C-dots) and graphene quantum dots (GQDs) are newly developed fluorescent nanocrystals that belong to this group [57]. Due to the wide availability of bulk carbon in various forms, C-dots can be prepared using graphite [58, 59], carbon nanotubes [60], fullerenes [61], carbohydrates, and various food products [62] as the main sources. Several research groups have proposed methods to produce high-quality C-dots, including laser ablation [57], electrochemical exfoliation [59], hot colloidal synthesis [63], and hydrothermal processing [64]. The main drawbacks of C-dots are their relatively low quantum yield (QY), non-tunable emission wavelengths, and the complex synthesis steps involved. Some innovative protocols have been developed to overcome some of these issues. Recently, Wang et al. demonstrated the preparation of green-emitting C-dots with a QY close to 60 % [57, 65]. In a typical reaction, nitric acid-treated carbon soot was refluxed in neat thionyl chloride solution to generate acyl chlorides on the carbon surface.
Afterwards, the carbon particle sample was mixed with PEG molecules, and the reaction mixture was heated to 110 °C and vigorously stirred under nitrogen for 3 days. This process generates PEGylated C-dots, but the total yield is limited. To enhance the yield of C-dots, the PEGylated C-dots (QY 16–20 %) were loaded into an aqueous gel column for fractionation. The fractionated C-dots sample featured a well-defined absorption shoulder and strong green fluorescence emission with a QY as high as 55–60 %. The authors suggested that a relatively uniform radiative fluorescence process takes place in the fractionated sample, which was further confirmed by both QY and lifetime measurements [66]. To shift the C-dot emission spectra to longer wavelengths, Bhunia et al. developed a carbohydrate carbonization procedure to produce C-dots with tunable emission from the blue to the red range (Fig. 2). Compared to previously prepared blue–green-emitting C-dot samples, the C-dots fabricated by Bhunia’s group displayed brighter emission in the red spectral region [67]. In addition to C-dots, GQDs can be prepared by hydrothermal reduction of graphene oxide [68], electrochemical reduction of carbon nanotubes [60], and cage opening of fullerene [69]. The quantum yield of GQDs is generally lower than that of C-dots, which makes it very challenging to employ them for imaging applications. Shen et al. reported the preparation of GQDs using a hydrothermal method. Briefly, graphene oxide was treated with nitric acid and heated at 70 °C for 24 h. Next, the graphene fragments and PEG mixture was autoclaved at 200 °C for 24 h. The PEGylated GQDs showed higher fluorescence than the unfunctionalized GQDs [68].
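The QY figures quoted throughout this section are typically obtained by the standard relative method against a reference fluorophore, QY = QY_ref × (m/m_ref) × (n/n_ref)², where m is the gradient of integrated emission intensity versus absorbance. A minimal sketch of that relation; the reference dye, its QY, and the gradient numbers below are hypothetical illustration values, not data from this chapter:

```python
def relative_qy(qy_ref, grad_sample, grad_ref, n_sample=1.333, n_ref=1.333):
    """Relative quantum yield from the gradients (slopes) of integrated
    emission intensity vs. absorbance, corrected for refractive index."""
    return qy_ref * (grad_sample / grad_ref) * (n_sample / n_ref) ** 2

# Hypothetical example: quinine sulfate reference (QY ~0.546 is the
# commonly quoted value in 0.1 M H2SO4); gradients are made-up numbers.
qy = relative_qy(qy_ref=0.546, grad_sample=1.2e6, grad_ref=1.1e6)
print(f"Sample QY ~ {qy:.2f}")
```

Measuring the gradient over several dilutions, rather than a single intensity/absorbance pair, reduces inner-filter and concentration errors, which is why the gradient form is preferred in practice.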

Electronic and Optical Property of Various Types of Cadmium-Free Quantum Dots CuInS2 and AgInS2 are direct-bandgap I–III–VI semiconductors with bandgaps of 1.45 eV and 1.8 eV, respectively [70, 71]. Based on these semiconductor properties, it is possible to synthesize CuInS2 and AgInS2 QDs with high extinction coefficients and emission peaks ranging from the visible to the NIR region. The synthesis of I–III–VI QDs is certainly not a new research subject in the area of


Fig. 2 Digital image of gram scale solid C-dots samples, their solutions under appropriate excitations and the absorption (solid lines), excitation (dashed lines), and emission (color lines) spectra (Reprinted with permission from Ref. [67]. Copyright # 2013, Nature Publishing Group)

colloidal nanoparticle synthesis. Several methods have been developed for fabricating I–III–VI QDs, such as hydrothermal techniques, single-source precursor routes, and hot injection techniques. In comparison to the cadmium-based QDs reported in the literature, the control of size and size distribution, as well as the optical properties, of I–III–VI QDs is less well documented, since the synthesis protocol has not yet been standardized. Compared to cadmium-based QDs (e.g., CdSe and CdTe), CuInS2 and AgInS2 QDs cover a broader color window, including the NIR window that is the most important for in vivo imaging [72]. More importantly, CuInS2 and AgInS2 QDs are more suitable for in vitro and in vivo applications because they do not contain cadmium, lead, mercury, selenium, or arsenic, elements commonly used to make the first generation of QDs. So far, there has been substantial investigation of targeted live-cell and small-animal imaging using well-developed CdSe/ZnS QDs and other QDs such as CdTe/ZnSe and CdTe/CdSe. However, there is limited literature reporting the use of CuInS2 or AgInS2 QDs for in vitro and in vivo applications, presumably due to their complex synthesis protocols. It has been reported that CuInS2 and AgInS2 QDs have a broader full width at half maximum in their spectra and a lower quantum yield in


comparison to other types of QDs fabricated through the same colloidal synthesis technique. There are ways to substantially improve the quantum yield and narrow the full width at half maximum of CuInS2 and AgInS2 QDs. To reduce their polydispersity, one can use a size-selective precipitation technique to separate larger particles from smaller ones [73, 74]; basically, this separation depends on the weight of the particles. This approach captures a selective size distribution of QDs, thereby narrowing their spectral full width at half maximum. To enhance the quantum yield of CuInS2 and AgInS2 QDs, one can grow an additional ZnS layer on the QD surface. If both of these approaches are optimized, highly crystalline and nearly monodispersed QDs with a narrow full width at half maximum can be produced. Another challenge in synthesizing high-quality I–III–VI QDs is maintaining the same ternary composition from particle to particle. This is not an easy task, and current findings indicate that the total composition of group I and III elements does vary among particles [75]; as a result, samples with either cubic or wurtzite crystal structure are produced. Ag2S and Ag2Se are I–VI semiconductor materials with bandgaps of 0.9–1.1 eV and 0.15 eV, respectively [76]. Based on these electronic properties, one is able to make luminescent I–VI nanocrystals with emission peaks ranging from the visible to the NIR-II region (1.0–1.4 μm), which is promising for high-contrast in vivo imaging of deep tissues [77, 78]. Currently, only a handful of manuscripts in the literature report the synthesis and applications of Ag2S QDs. So far, the hot colloidal synthesis method has been commonly used to produce Ag2S QDs with emission peaks extending to 1,200 nm, and excellent colloidal stability can be achieved by functionalizing the QD surface with six-armed PEG molecules [77].
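The bulk bandgaps quoted above set the approximate long-wavelength limit of band-edge emission through λ ≈ hc/Eg; quantum confinement then blue-shifts the emission for smaller dots. A quick sketch of this conversion, using the bandgap values given in the text:

```python
HC_EV_NM = 1239.84  # h*c expressed in eV·nm

def gap_to_wavelength_nm(bandgap_eV):
    """Approximate band-edge emission wavelength (nm) for a direct-gap
    semiconductor with the given bandgap in eV."""
    return HC_EV_NM / bandgap_eV

# Bulk bandgaps quoted in the text (Ag2S taken at the 1.0 eV end of its range)
for name, eg in [("CuInS2", 1.45), ("AgInS2", 1.8), ("Ag2S", 1.0)]:
    print(f"{name}: Eg = {eg} eV -> ~{gap_to_wavelength_nm(eg):.0f} nm")
```

The ~1,240 nm result for a 1.0 eV gap is consistent with the Ag2S emission peaks extending to 1,200 nm mentioned above, and the 1.45 eV gap of CuInS2 places its band-edge emission (~855 nm) squarely in the NIR imaging window.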
In addition to the hot colloidal synthesis method, Ag2S QDs can also be prepared directly in the aqueous phase. It was reported that Ag2S QDs synthesized in the aqueous phase displayed an ultrasmall size and narrow size distribution compared to particles prepared in the organic phase. Also, their emission profile can be easily manipulated by changing the reaction time [76] and the precursor ratio [79] in the reaction system. However, the as-prepared QDs possess a relatively broad full width at half maximum, a low quantum yield, and long luminescence lifetimes (>100 ns). These QDs have reasonable optical and colloidal stability when dispersed in physiological buffers. The authors also used the orange-emitting QDs for biological labeling of baculoviral vectors (BVs). The QDs were electrostatically attached to the BVs by simply mixing them together, followed by an overnight incubation of the mixture. To confirm that the BVs had attached to the QDs, HeLa cells were treated with the QD/BV mixture. Fluorescence imaging showed orange emission from the treated cells; in contrast, no fluorescence signal was detected from HeLa cells treated with QDs alone. More recently, our group reported the synthesis and surface functionalization of AgInS2 nanocrystals emitting in the NIR range for in vitro and in vivo imaging applications [104]. The triblock copolymer Pluronic F127 was used to encapsulate the QDs, which made them dispersible in biological buffers. By employing a whole-body small-animal optical imaging setup, the authors were able to use the AgInS2 QD formulation for passive targeted delivery to the tumor site. The results showed that the majority of the QDs were removed from the blood circulation within an hour of injection and accumulated in the liver and spleen as time progressed.
More importantly, the authors performed histological analysis of the acute toxicity induced by intravenous administration of AgInS2 QDs on the major organs of mice, such as the liver, lung, spleen, kidney, brain, and heart. It was found that mice treated with micelle-encapsulated AgInS2 QDs did not show any inflammation of the liver, kidney, lung, or heart, demonstrating the low toxicity of the formulation. The ultrasmall crystal size, NIR-emitting luminescence, and high quantum yield thus make AgInS2 QDs a good candidate as an optical probe for cancer imaging and sensing. For obtaining an optimized signal-to-noise ratio in in vivo imaging, Ag2S QDs with emission extending into the NIR-II (1.0–1.4 μm) region are more suitable contrast agents than NIR-I-emitting probes. This is because the NIR-II region contains negligible background signals (autofluorescence, tissue scattering, etc.) [77], and more importantly, the emission of QDs can penetrate deeper into tissue at this wavelength range [80]. Jiang et al. reported the synthesis of MPA-capped Ag2S QDs in the aqueous phase with tunable emission ranging from 510 to 1,221 nm. These Ag2S QDs (910 nm emission) were ultrasmall in size, and they were tested in an in vivo imaging study. It was observed that the luminescence from these particles can be easily differentiated from the tissue autofluorescence background [76]. In another separate in vivo


Fig. 5 Time-dependent NIR-II fluorescence images of 4T1 tumor-bearing mice injected with Ag2S QDs. (a) NIR-II fluorescence image of a 4T1 tumor-bearing mouse before injection of 6PEG–Ag2S QDs. (b–g) NIR-II fluorescence images of the 4T1 tumor-bearing mouse at various time points after injection of 6PEG–Ag2S QDs. (h) A principal component analysis (PCA) overlaid image based on the continuous NIR-II fluorescence images: lungs (blue), kidneys (red), tumor (green). Yellow arrows indicate the 4T1 murine breast cancer tumor (Reprinted with permission from Ref. [80]. Copyright # 2012, Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

imaging study, Hong et al. demonstrated the use of PEGylated Ag2S QDs as “tracking space probes” for investigating the circulation of particles in blood (Fig. 5) [80]. The authors showed that the circulation of Ag2S QDs in mice can last for up to 210 s. After circulating through the heart and lungs, the Ag2S QDs were observed to migrate to the tumor region at 15 s, where an initial signature of luminescence intensity was identified. A continued build-up of luminescence intensity in the tumor region was observed over a period of a few hours, which was attributed to the enhanced permeability and retention (EPR) effect of the solid tumor. Using a similar concept, Wang et al. reported the synthesis of antibody-conjugated Ag2S QDs for in vivo tumor imaging [105]. The BSA-coated Ag2S QDs were first synthesized in aqueous solution and subsequently conjugated with anti-vascular endothelial growth factor (anti-VEGF) antibody for targeted delivery. Both anti-VEGF–Ag2S and BSA–Ag2S QDs were tested in tumor-bearing mice for targeting VEGF-positive cancer cells. Although both QDs demonstrated tumor uptake, the anti-VEGF–Ag2S QD formulation showed a longer retention time (more than 24 h) in the tumor site compared to the BSA-coated QD formulation. In the case of InP QDs, these nanocrystals have mainly been used for in vitro studies, and very few in vivo studies have been reported so far, suggesting that some challenges remain to be resolved before they can be applied in vivo. Bharali et al. reported a facile method to produce water-dispersible InP/ZnS core/shell QDs with a luminescence efficiency of 15 %, and these QDs were further conjugated with folic acid for targeted delivery to cancer cells [25]. Folate receptors (FRs) have been commonly used for drug targeting to tumor cells both in vitro and in vivo [107–109].
In general, FRs are overexpressed in many types of human cancer cells, including malignancies of the ovary, mammary gland, lung, kidney, brain, colon, prostate, nose, and throat, but are minimally expressed in normal tissues. The authors demonstrated the use of folic acid-conjugated InP QD bioconjugates for confocal and two-photon imaging of KB cancer cells. Later, the same group used InP/ZnS QDs as targeted optical probes for labeling human pancreatic cancer cells (Fig. 6) [106]. Antibodies such as anti-claudin 4 and anti-PSCA, whose



Fig. 6 Confocal microscopic images of (a) MiaPaCa-2 cells treated with anti-claudin 4 (AC4)-conjugated InP/ZnS QDs; (b) MiaPaCa-2 cells treated with unconjugated InP/ZnS QDs; (c) MiaPaCa-2 cells treated with anti-PSCA-conjugated InP/ZnS QDs; (d) XPA3 cells treated with AC4-conjugated InP/ZnS QDs; and (e) KB cells (a human nasopharyngeal epidermal carcinoma cell line that does not express AC4 receptors) treated with AC4-conjugated InP/ZnS QDs. Cell nuclei were stained with Hoechst 33342 (blue), and emission from the InP/ZnS QDs is coded red (Reprinted with permission from Ref. [106]. Copyright # 2009, American Chemical Society)

corresponding antigen receptors are known to be overexpressed in both primary and metastatic pancreatic cancer, were utilized to make these bioconjugated QDs. With confocal microscopy and localized spectroscopy, the authors demonstrated receptor-mediated uptake of the QD–antibody bioconjugates into pancreatic cancer cells. More importantly, they discovered that the InP/ZnS QDs have a very low cytotoxic effect on the cells, demonstrating that InP QDs can serve as excellent optical probes for targeted bioimaging in vivo. Our group has recently shown that it is possible to use surface-modified InP QDs as nanoprobes for in vivo imaging. More specifically, we fabricated multifunctional nanoprobes based on InP/ZnS QDs for high-contrast multimodal imaging of cancer in vivo. These nanoprobes were prepared by encapsulating InP/ZnS QDs within phospholipid micelles covalently linked with DOTA-chelated Gd3+; the luminescent InP/ZnS QDs and the Gd3+ chelates can be used for optical and magnetic resonance imaging, respectively. Employing in vivo optical imaging of mice bearing pancreatic cancer xenografts, we have shown that systemically delivered anti-claudin 4-conjugated QD nanoprobes can target and label the tumors with high-contrast signals. These studies indicate that InP/ZnS QDs have the potential to be translated into clinical applications ranging from targeted multimodal diagnosis to drug delivery therapy of cancer. Owing to the unique optical properties of doped Zn-based QDs, such as their relatively long lifetime, high photostability, and biocompatibility, these particles have become an excellent alternative candidate to replace Cd-based QDs in future biophotonic research. Many reported manuscripts have demonstrated the potential of these particles for bioimaging without observing any toxic impact from the prepared nanoformulations. For example, Kang et al.
demonstrated the preparation of chitosan-coated Mn-doped ZnSe QDs for imaging of Hep G2 and PANC-1 cells [110], and no cytotoxicity was observed from the prepared formulation. Later, mannosylated chitosan QDs were also reported to be useful candidates for targeted cellular labeling, with much higher biocompatibility than conventional chitosan-coated QDs

Handbook of Photonics for Biomedical Engineering DOI 10.1007/978-94-007-6174-2_7-1 # Springer Science+Business Media Dordrecht 2014

[111]. Manzoor et al. reported the preparation of folic acid (FA)-conjugated Mn-doped ZnS QDs for selective labeling of KB cancer cells; this was achieved because the as-synthesized bioconjugates have a high affinity for the folate receptors (FR) that are overexpressed on the KB cell line [112]. Doped ZnS and ZnSe QDs are also attractive candidates for drug delivery applications owing to their low cytotoxicity. For example, Xu et al. demonstrated the preparation of nanosized drug carriers based on a glycopolypeptide-coated Mn-doped ZnS QD formulation for efficient loading and controlled release of ibuprofen [113]. In addition to optical probing applications, Mn-doped QDs are also being used as multimodal contrast agents for in vitro and in vivo fluorescence and magnetic resonance (MR) imaging. Wang et al. [12] synthesized core/shell CdSe/ZnMnS QDs and applied these particles to multimodal imaging of cells without relying on any paramagnetic agents (e.g., Gd3+ chelates or iron-oxide particles). Because of the paramagnetic nature of the Mn dopants in the ZnS shell, the core/shell structures exhibit high relaxivity, with r1 values in the range of 11–18 mM⁻¹ s⁻¹, which is sufficient to produce contrast in MR imaging. Even though a CdSe QD was involved in preparing these multimodal imaging probes, this example still demonstrates the potential of Mn-doped QDs to be further developed as cadmium-free contrast agents for multimodal bioimaging applications. It is worth noting that Gaceur et al. recently reported the synthesis of heavily doped MnZnS QDs; these particles exhibit blue–green emission and high relaxivity (r1 = 20 mM⁻¹ s⁻¹) under MRI evaluation, but the bright Mn emission was not observable, which may be due to concentration quenching of the system [114]. In addition to in vitro experiments, Yu et al. have successfully applied Mn-doped ZnS QDs to high-resolution in vivo tumor-targeted imaging using a multiphoton excitation technique [115].
Using the prepared RGD peptide-conjugated QD formulation, the authors were able to visualize the accumulated QDs in tumor vasculature located below the dermis at depths of around 100 μm. It is worth noting that the enhanced Stokes shift and the large three-photon cross section of d-dots enable one to discriminate the three-photon-excited photoluminescence (3PL) of d-dots from the background signal of two-photon autofluorescence (2PL) (Fig. 7).

Silicon QDs are expected to be less toxic than cadmium-based QDs, but they have not been widely studied in biological systems because it is challenging to keep them optically and colloidally stable in biological buffers over long periods (>7 days). Thus, making water-dispersible bioconjugated Si QDs remains an important challenge to be overcome. More recently, Erogbogbo et al. reported the synthesis of water-dispersible Si QDs using phospholipid micelles [46, 116]. The Si QDs were prepared by laser-driven pyrolysis of silane followed by HF–HNO3 etching. Styrene, octadecene, or ethyl undecylenate was used to functionalize the Si QD surfaces, allowing them to be dispersed in organic solvents. Phospholipid micelles were then used to encapsulate the Si QDs, making them dispersible in water and generating a hydrophilic shell terminated with PEG groups. For in vitro cell-labeling studies, amine-functionalized phospholipid PEGs were used to encapsulate Si QDs, and these encapsulated particles were used as biological luminescent probes. The uptake of micelle-encapsulated Si QDs into pancreatic cancer cells was confirmed by confocal imaging. Later, the same group further improved the Si QD formulation so that it avoided enzymatic degradation, evaded uptake by the reticuloendothelial system (RES), maintained stability in the acidic tumor microenvironment, and produced bright and stable photoluminescence in vivo [86].
More specifically, nude mice bearing subcutaneously implanted Panc-1 tumors were intravenously injected with Si QDs conjugated with RGD peptide [117]. Wavelength-resolved spectral unmixing confirmed the presence of emission from Si QDs targeted to the tumor vasculature. The luminescence intensity at the tumor site was maintained for up to 40 h (Fig. 8). Blood assays and histological analysis of tissue sections revealed no signs of systemic or organ toxicity in the treated animals. This demonstrated the effectiveness of tumor targeting using the bioconjugated Si QDs.
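Wavelength-resolved spectral unmixing of this kind amounts to an ordinary linear least-squares fit of each measured pixel spectrum against known reference spectra (here, QD emission and tissue autofluorescence). The basis spectra and mixing weights below are purely illustrative assumptions, not data from Ref. [86]:

```python
import numpy as np

# Hypothetical basis spectra (arbitrary units) sampled at 5 wavelength bins:
# a QD emission band and a broad, slowly varying tissue autofluorescence.
qd = np.array([0.05, 0.20, 1.00, 0.30, 0.05])
af = np.array([0.60, 0.55, 0.50, 0.45, 0.40])

# Simulated "measured" pixel spectrum: 0.7 parts QD + 0.3 parts autofluorescence.
measured = 0.7 * qd + 0.3 * af

# Linear unmixing: solve measured ≈ A @ weights in the least-squares sense.
A = np.column_stack([qd, af])
weights, *_ = np.linalg.lstsq(A, measured, rcond=None)
print(weights)  # ≈ [0.7, 0.3]
```

Applied pixel by pixel, the recovered QD weight forms the "unmixed" image (coded red in Fig. 8), while the autofluorescence weight forms the background channel.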

Fig. 7 In vivo three-photon imaging of Mn:ZnS QD–RGD conjugates targeted to tumor. (a) Spectral image of tumor vasculature below the base of the dermis. (b) Normalized spectra of 3PL of the QDs (orange, region 1 in a), tissue autofluorescence (green, region 2 in a), and second-harmonic generation from collagen fiber (blue). (c) Photostability comparison of Mn:ZnS QDs with common fluorophores at an excitation power of 10 mW. (d and e) In situ spectral deconvolution images of a tumor targeted by the QDs, showing the endothelial lining (d) and extravasation of the QDs (e). (f) Comparison between a multiphoton micrograph (i) and a one-photon confocal laser scanning micrograph (ii) of the tumor vasculature targeted by Mn:ZnS QD–RGD–FITC conjugates. (g) Comparison between 3PL of Mn:ZnS QDs (i) and 2PL of FITC (ii) acquired from spectral deconvolution of f(i) (Reprinted with permission from Ref. [115]. Copyright © 2013, Nature Publishing Group)




Fig. 8 Time-dependent in vivo luminescence imaging of Panc-1 tumor-bearing mice injected with 5 mg of (A ~ E) RGD-conjugated Si QDs or (K ~ O) nonconjugated Si QDs. The tumors are indicated by white arrows. Background signals and the unmixed Si QD signals are coded in green and red, respectively. Panels F ~ J and panels P ~ T correspond to the images in panels A ~ E and K ~ O, respectively. Ex vivo images (U, W) and luminescence images (V, X) of tumors harvested at 40 h postinjection from mice treated with (U, V) RGD-conjugated Si QDs or (W, X) nonconjugated Si QDs (Adapted with permission from Ref. [86]. Copyright © 2011, American Chemical Society)

Many studies have pointed out that C-dots have good photostability [118, 119], and they can be made water dispersible for bioimaging applications. For example, Bhunia et al. demonstrated the preparation of C-dots by a carbohydrate carbonization method, and these particles displayed tunable emission ranging from blue to red [67]. From these particles, TAT peptide- and folate-functionalized C-dots were synthesized and employed for targeted imaging of HeLa cells. Cao et al. reported the synthesis of propionylethylenimine-co-ethylenimine (PPEI-EI)-functionalized C-dots for multiphoton imaging of live cells [120]. Upon incubating the prepared C-dot formulation with live MCF-7 cells for 2 h, the labeled cells became brightly illuminated when exposed to an 800 nm excitation source. The C-dots were observed to label mainly the cell membrane and the cytoplasm of the MCF-7 cells. The as-synthesized C-dots were also applied to imaging of living tissues, demonstrating their potential for small-animal imaging and lymph node mapping applications in the near future. Kong et al. reported the generation of bioconjugated C-dot probes for monitoring the pH gradient in tissues at depths varying from 65 to 185 μm [121]. These particles were synthesized by an electrochemical method, and they were conjugated with AE–TPY using EDC chemistry for pH sensing. The constructed C-dot complex is sensitive to pH changes in its environment: the authors showed that the fluorescence emission intensity of the C-dot–TPY formulation increases as the environmental pH decreases. These particles were used to label cancer cells, and the treated cells were subsequently analyzed by 3D two-photon confocal fluorescence imaging, monitoring the fluorescence intensity changes of the cells as the pH of the cell culture medium was varied. Huang et al.
demonstrated the preparation of green-emitting C-dots functionalized with the near-IR-emitting dye ZW800 and



employed them for imaging of tumor-bearing mice [122]. The in vivo NIR fluorescence images showed high tumor-to-background contrast, demonstrating the specificity of the C-dots for the tumor cells. GQDs have optical properties similar to those of C-dots and have also been applied to bioimaging. For instance, Dong et al. prepared a GQD formulation from XC-72 carbon black for cell imaging. The as-prepared QDs were used to stain MCF-7 cells, and the labeled cells were analyzed by confocal laser scanning microscopy. It was observed that the GQDs mostly accumulated in the nucleus, and no cell damage was observed [123]. We envision that GQDs can be used as optical probes for in vivo imaging studies in the near future. However, several limitations of GQDs need to be overcome before they can be successfully applied in vivo; for example, their quantum yield and their colloidal and optical stability in biological fluids need to be improved. To date, there have been several attempts to use GQDs for imaging of living tissues [124] and mice [125], but the results show that further optimization is needed before GQDs are ready for targeted imaging and sensing use.
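The intensity-based pH readout described earlier for the C-dot–TPY probes (Kong et al. [121]) reduces to a calibration curve of emission intensity versus pH that is inverted to recover pH from a measured intensity. The calibration points below are illustrative assumptions only, not values from Ref. [121]:

```python
import numpy as np

# Hypothetical calibration: C-dot-TPY emission intensity (normalized) rises
# as pH falls. Values are illustrative, not measured data.
ph = np.array([5.0, 5.5, 6.0, 6.5, 7.0, 7.5])
intensity = np.array([1.00, 0.90, 0.80, 0.70, 0.60, 0.50])

# Fit a straight line I = a*pH + b over the calibration range, then invert it.
a, b = np.polyfit(ph, intensity, 1)

def ph_from_intensity(i):
    """Estimate pH from a measured (normalized) emission intensity."""
    return (i - b) / a

print(round(ph_from_intensity(0.75), 2))  # ≈ 6.25
```

In practice the calibration would be established in buffers of known pH, and a ratiometric reference channel is often added to correct for probe concentration and excitation variations.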

Nanotoxicity of Cadmium-Free QDs: From Cellular to Primate Studies

Owing to the absence of toxic heavy metals such as cadmium, lead, and mercury as active ingredients, CuInS2, AgInS2, InP, Si, and other heavy-metal-free QDs are of extreme interest for bioimaging applications. Nevertheless, it is still important to evaluate the cytotoxicity of these QD formulations before they can be translated into clinical research. In general, the preliminary cytotoxicity of these QDs can be evaluated using a cell viability (MTS) assay. Cytotoxicity studies of cadmium-based QDs with different sizes, shapes, and surface coatings have been extensively reported in the literature [16, 126, 127]. However, very few studies have been reported for CuInS2, AgInS2, Ag2S, and InP QDs. For example, Yong et al. demonstrated that cells treated with micelle-encapsulated CuInS2 QDs for 24 h maintained greater than 80 % viability even at a particle concentration as high as 195 μg/mL, suggesting the low cytotoxicity of these QDs [100]. The authors also compared the cytotoxicity of cadmium-based and cadmium-free QDs, where cysteine-coated CdTe QDs were synthesized and used as a reference for the MTS studies. It was observed that the particle concentrations corresponding to 50 % cell viability in Panc-1 cells were 100 μg/mL and 300 μg/mL for CdTe and CuInS2 QDs, respectively. This indicates that CuInS2 QDs can be safely loaded into cells at a higher concentration for bioimaging studies. Recently, Liu et al. investigated the cytotoxicity of AgInS2 QDs [104]: they assessed the cytotoxicity of an AgInS2 NC sample on human pancreatic cancer cells (Panc-1) using the MTS assay. Exposure of the Panc-1 cells to AgInS2 QDs led to an insignificant change in cell viability. The cells treated with the QD formulation maintained greater than 80 % viability even at a QD dosage as high as 500 μg/mL, demonstrating the low cytotoxicity of these QDs.
In the case of InP/ZnS QDs, the viability of treated cells was also in the range of 80–90 % relative to that of untreated cells, even at a treatment concentration as high as ~300 μg/mL [106]. It is worth noting that this dosage is at least 30 times higher than the cytotoxic dosage of CdTe QDs. Similarly, Mn-doped ZnS QDs displayed the same trend: it was reported that the particle concentration retaining 50 % cell viability is at least ten times the corresponding dosage for CdSe QDs. More importantly, no injuries were found in the major organs of nude mice when a high dosage of 100 μl of 50 nM QDs was intravenously injected into the small animal [115]. In addition to doped QDs, Hocaoglu et al. demonstrated that NIH/3T3 cells treated with Ag2S QDs at 600 μg/mL exhibited no significant difference from the control group [79]. Later, Zhang et al. investigated the biodistribution of PEGylated Ag2S QDs in mice over 2 months [128]. It was found that the


Fig. 9 H&E-stained tissue sections of the heart (A ~ D), kidney (E ~ H), liver (I ~ L), lungs (M ~ P), and spleen (Q ~ T) from mice treated with ~380 mg/kg of micelle-encapsulated Si QDs at different time points postinjection. The control group (column 1) was treated with saline only and sacrificed 24 h postinjection (Reprinted with permission from Ref. [86]. Copyright © 2011, American Chemical Society)

injected QDs first accumulated in the RES (e.g., the spleen and liver) and were then cleared from the body after 60 days. Also, the authors found no changes in the body weight, blood, or hematological parameters of mice treated with 30 mg/kg Ag2S QDs compared with the control group. All these studies suggest that Ag2S QD formulations are nontoxic and can be further employed for clinical applications such as image-guided tumor surgery, and possibly for therapy as well. More recently, our group investigated the in vivo toxicity of a Si QD formulation [86]. Specifically, we treated mice with Si QDs at a dosage as high as ~380 mg/kg, and no changes in body weight, eating, drinking, exploratory behavior, activity, or physical features were observed over the 3-month evaluation period. More importantly, no abnormalities were detected in the histological analysis of the major organs harvested from the treated mice (Fig. 9). Based on these findings, we continued with a pilot study of the Si QD formulation in nonhuman primates (NHPs) to check whether a similar trend could be observed in this advanced animal model [129]. Body weights of the animals were recorded daily, with no significant differences observed between treated and untreated animals. Similarly, the eating, drinking, grooming, exploratory behavior, physical features, neurological status, and urination of the treated animals were normal throughout the evaluation period. The blood chemistry parameters of the animals were also determined in our study; no signs of infection or toxic reactions attributable to the Si QDs were found. Indicators of liver function showed no abnormalities, and no signs of kidney impairment were



observed. More importantly, histological analyses of various organs (e.g., the brain, cerebellum, atrium, ventricle, heart muscle, lung, kidney, liver, spleen, renal tubule, and intestine) of the animals revealed no discernible signs of nanoparticle-induced changes. Pathologists confirmed that no signs of kidney, liver, or spleen disease or damage were present in these histological images. Several in vitro studies have demonstrated that C-dots are highly biocompatible, with a bioinertness comparable to that of PEG molecules [65, 130]. A more recent study has shown that the viability of HeLa cells remained over 90 % when they were treated with chemically functionalized C-dots at concentrations of ~500 μg/mL [60]. Yang et al. performed an in vivo C-dot toxicity study using CD-1 mice. Isotope ratio analysis revealed that the injected 13C-dots accumulated in the liver and spleen after 6 h of treatment. No abnormal behavior was observed in mice treated with C-dots at dosages as high as 40 mg/kg [65]. Huang et al. reported rapid renal clearance of C-dots from the small-animal body; such a fast excretion rate may be attributed to their ultrasmall hydrodynamic diameter of around 4.1 nm [122]. In the case of graphene QDs, it was observed that MC3T3 cells exposed to 400 μg/mL GQDs maintained over 80 % viability, suggesting that GQDs and C-dots possess similar biocompatibility [131]. All these results suggest that rationally designed QD formulations based on nontoxic compositions (CuInS2, AgInS2, InP, doped Zn chalcogenides, C, and Si) and FDA-approved components can be expected to yield nontoxic multifunctional QDs for biomedical and clinical research applications.
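The dose-response comparisons above (e.g., the dose at which viability falls to 50 %) reduce to a simple interpolation over MTS assay data. The dose-viability table below is hypothetical, not the measured data of Refs. [100] or [104]:

```python
import numpy as np

# Hypothetical MTS dose-response data: % viability vs particle dose (ug/mL).
# Numbers are illustrative only, not measured values from the cited studies.
dose = np.array([0, 50, 100, 200, 400, 600], dtype=float)
viability = np.array([100, 95, 85, 70, 55, 40], dtype=float)

def dose_at_viability(target, dose, viability):
    """Linearly interpolate the dose at which viability crosses `target` (%)."""
    # np.interp needs increasing x, so interpolate dose as a function of
    # viability after sorting the points by viability.
    order = np.argsort(viability)
    return np.interp(target, viability[order], dose[order])

print(dose_at_viability(50, dose, viability))  # dose (ug/mL) at 50 % viability
```

Comparing this crossing dose between a cadmium-free QD and a CdTe or CdSe reference treated identically gives the "N times higher dosage" figures quoted in the text.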

Summary and Future Outlook

In this review, we have summarized the current status of research on engineering cadmium-free QDs for biomedical and medicinal applications. Specifically, we have highlighted current findings on the development of bioconjugated InP, CuInS2, AgInS2, doped Zn chalcogenide, C, and Si QDs for cell labeling, targeted delivery, and tumor imaging, as well as their biodistribution profiles. From the reported data, it is evident that these nanocrystals have much lower cytotoxicity than cadmium-based QDs. While cadmium-free QDs have emerged as promising candidates to replace cadmium-based QDs in biological applications, a number of issues obviously need to be overcome before their full potential can be realized in clinical applications. For example, after the QDs are coated with biocompatible polymers, their dimensions increase to nearly the size of a large protein, which may affect their excretion profile. Also, cadmium-free QDs tend to have emission linewidths (full width at half maximum) 2–3 times larger than those of cadmium-based QDs; this makes it difficult to employ them for highly sensitive multiplex imaging in vitro and in vivo. In addition, the elemental composition of ternary QDs varies from particle to particle within the same synthesis batch, and it is challenging to isolate particles having the same composition. It is worth noting that most bioimaging experiments are performed using small animal models such as rats and mice, since they are easy to care for and cost-effective. However, imaging of small animals differs from imaging in humans, and the experimental imaging procedures for animals cannot simply be scaled up proportionally to humans.
To overcome this challenge, new optical imaging setups and individualized treatment plans for the various QD formulations are essential for effective imaging of targeted parts of the human body. Naturally, many more studies and investigations are needed to address these issues before these cadmium-free QDs can be translated into clinical use. This process may seem slow, but we should not give up hope, since many active and distinguished researchers worldwide are currently optimizing and testing QDs in vivo, which will eventually lead to perfected


formulations for human theranostic applications. Judging from current trends in the QD biomedical research community, there is a continuing demand for new types of cadmium-free QDs, since each type of cadmium-free QD has its own specific biomedical use. In the near future, we foresee that the colloidal synthesis protocols of cadmium-free QDs will be optimized and standardized, reaching a plateau stage where researchers can easily prepare a wide variety of size- and shape-controlled QDs for specific biomedical applications such as siRNA delivery, single-molecule imaging, targeted tumor imaging, and QD–FRET systems for ultrasensitive imaging of cells.

References
1. Alivisatos A (1996) Semiconductor clusters, nanocrystals, and quantum dots. Science 271(5251):933–937
2. Grieve K, Mulvaney P, Grieser F (2000) Synthesis and electronic properties of semiconductor nanoparticles/quantum dots. Curr Opin Coll Interf Sci 5(1):168–172
3. Michalet X et al (2005) Quantum dots for live cells, in vivo imaging, and diagnostics. Science 307(5709):538–544
4. Bruchez M et al (1998) Semiconductor nanocrystals as fluorescent biological labels. Science 281(5385):2013–2016
5. Chan WC, Nie S (1998) Quantum dot bioconjugates for ultrasensitive nonisotopic detection. Science 281(5385):2016–2018
6. Zhang C-Y et al (2005) Single-quantum-dot-based DNA nanosensor. Nat Mater 4(11):826–831
7. Gao X et al (2004) In vivo cancer targeting and imaging with semiconductor quantum dots. Nat Biotechnol 22(8):969–976
8. Dubertret B et al (2002) In vivo imaging of quantum dots encapsulated in phospholipid micelles. Science 298(5599):1759–1762
9. Wu X et al (2002) Immunofluorescent labeling of cancer marker Her2 and other cellular targets with semiconductor quantum dots. Nat Biotechnol 21(1):41–46
10. Bagalkot V et al (2007) Quantum dot-aptamer conjugates for synchronous cancer imaging, therapy, and sensing of drug delivery based on bi-fluorescence resonance energy transfer. Nano Lett 7(10):3065–3070
11. Yang H et al (2006) GdIII-functionalized fluorescent quantum dots as multimodal imaging probes. Adv Mater 18(21):2890–2894
12. Wang S et al (2007) Core/shell quantum dots with high relaxivity and photoluminescence for multimodality imaging. J Am Chem Soc 129(13):3848–3856
13. Chen O et al (2013) Compact high-quality CdSe–CdS core–shell nanocrystals with narrow emission linewidths and suppressed blinking. Nat Mater 12:445–451
14. Dabbousi B et al (1997) (CdSe)ZnS core-shell quantum dots: synthesis and characterization of a size series of highly luminescent nanocrystallites. J Phys Chem B 101(46):9463–9475
15. Zheng Y, Gao S, Ying JY (2007) Synthesis and cell-imaging applications of glutathione-capped CdTe quantum dots. Adv Mater 19(3):376–380
16. Hardman R (2006) A toxicologic review of quantum dots: toxicity depends on physicochemical and environmental factors. Environ Health Perspect 114(2):165
17. Yong K-T et al (2013) Nanotoxicity assessment of quantum dots: from cellular to primate studies. Chem Soc Rev 42(3):1236–1250


18. Derfus AM, Chan WC, Bhatia SN (2004) Probing the cytotoxicity of semiconductor quantum dots. Nano Lett 4(1):11–18
19. Murray CB, Norris DJ, Bawendi MG (1993) Synthesis and characterization of nearly monodisperse CdE (E = sulfur, selenium, tellurium) semiconductor nanocrystallites. J Am Chem Soc 115(19):8706–8715
20. Murray C, Kagan C, Bawendi M (2000) Synthesis and characterization of monodisperse nanocrystals and close-packed nanocrystal assemblies. Ann Rev Mater Sci 30(1):545–610
21. Trindade T, O'Brien P, Pickett NL (2001) Nanocrystalline semiconductors: synthesis, properties, and perspectives. Chem Mater 13(11):3843–3858
22. Samokhvalov P, Artemyev M, Nabiev I (2013) Basic principles and current trends in colloidal synthesis of highly luminescent semiconductor nanocrystals. Chem Eur J 19(5):1534–1546
23. Park J et al (2007) Synthesis of monodisperse spherical nanocrystals. Angew Chem Int Ed 46(25):4630–4660
24. Xie R, Rutherford M, Peng X (2009) Formation of high-quality I–III–VI semiconductor nanocrystals by tuning relative reactivity of cationic precursors. J Am Chem Soc 131(15):5691–5697
25. Bharali DJ et al (2005) Folate-receptor-mediated delivery of InP quantum dots for bioimaging using confocal and two-photon microscopy. J Am Chem Soc 127(32):11364–11371
26. Ryu E et al (2009) Step-wise synthesis of InP/ZnS core–shell quantum dots and the role of zinc acetate. Chem Mater 21(4):573–575
27. Xie R, Battaglia D, Peng X (2007) Colloidal InP nanocrystals as efficient emitters covering blue to near-infrared. J Am Chem Soc 129(50):15432–15433
28. Kortan A et al (1990) Nucleation and growth of cadmium selenide on zinc sulfide quantum crystallite seeds, and vice versa, in inverse micelle media. J Am Chem Soc 112(4):1327–1332
29. Zimmer JP et al (2006) Size series of small indium arsenide-zinc selenide core-shell nanocrystals and their application to in vivo imaging. J Am Chem Soc 128(8):2526–2527
30. Gerion D et al (2001) Synthesis and properties of biocompatible water-soluble silica-coated CdSe/ZnS semiconductor quantum dots. J Phys Chem B 105(37):8861–8871
31. Fernández-Argüelles MT et al (2007) Synthesis and characterization of polymer-coated quantum dots with integrated acceptor dyes as FRET-based nanoprobes. Nano Lett 7(9):2613–2617
32. Smith AM, Nie S (2008) Minimizing the hydrodynamic size of quantum dots with multifunctional multidentate polymer ligands. J Am Chem Soc 130(34):11278–11279
33. Alivisatos AP, Gu W, Larabell C (2005) Quantum dots as cellular probes. Annu Rev Biomed Eng 7:55–76
34. Wang Y et al (2013) Functionalized quantum dots for biosensing and bioimaging and concerns on toxicity. ACS Appl Mater Interf 5:2786
35. Qian J et al (2007) Imaging pancreatic cancer using surface-functionalized quantum dots. J Phys Chem B 111(25):6969–6972
36. Kumar S et al (2013) Room temperature ferromagnetism in Ni doped ZnS nanoparticles. J Alloys Compd 554:357–362
37. Xie RS et al (2011) Fe:ZnSe semiconductor nanocrystals: synthesis, surface capping, and optical properties. J Alloys Compd 509(7):3314–3318
38. Zou WS et al (2011) Synthesis in aqueous solution and characterisation of a new cobalt-doped ZnS quantum dot as a hybrid ratiometric chemosensor. Anal Chim Acta 708(1–2):134–140
39. Pradhan N et al (2005) An alternative of CdSe nanocrystal emitters: pure and tunable impurity emissions in ZnSe nanocrystals. J Am Chem Soc 127(50):17586–17587


40. Liu N et al (2012) Enhanced luminescence of ZnSe:Eu3+/ZnS core-shell quantum dots. J Non Cryst Solids 358(17):2353–2356
41. Reddy DA et al (2012) Effect of Mn co-doping on the structural, optical and magnetic properties of ZnS:Cr nanoparticles. J Alloys Compd 537:208–215
42. Pradhan N et al (2007) Efficient, stable, small, and water-soluble doped ZnSe nanocrystal emitters as non-cadmium biomedical labels. Nano Lett 7(2):312–317
43. Pradhan N, Peng XG (2007) Efficient and color-tunable Mn-doped ZnSe nanocrystal emitters: control of optical performance via greener synthetic chemistry. J Am Chem Soc 129(11):3339–3347
44. Wolkin M et al (1999) Electronic states and luminescence in porous silicon quantum dots: the role of oxygen. Phys Rev Lett 82(1):197–200
45. Warner JH et al (2005) Water-soluble photoluminescent silicon quantum dots. Angew Chem Int Ed 117(29):4626–4630
46. Erogbogbo F et al (2008) Biocompatible luminescent silicon quantum dots for imaging of cancer cells. ACS Nano 2(5):873–878
47. Park J-H et al (2009) Biodegradable luminescent porous silicon nanoparticles for in vivo applications. Nat Mater 8(4):331–336
48. Belomoin G et al (2002) Observation of a magic discrete family of ultrabright Si nanoparticles. Appl Phys Lett 80(5):841–843
49. Wilcoxon J, Samara G, Provencio P (1999) Optical and electronic properties of Si nanoclusters synthesized in inverse micelles. Phys Rev B 60(4):2704
50. Holmes JD et al (2001) Highly luminescent silicon nanocrystals with discrete optical transitions. J Am Chem Soc 123(16):3743–3748
51. Heath JR (1992) A liquid-solution-phase synthesis of crystalline silicon. Science 258(5085):1131–1133
52. Bley RA, Kauzlarich SM (1996) A low-temperature solution-phase route for the synthesis of silicon nanoclusters. J Am Chem Soc 118(49):12461–12462
53. Bapat A et al (2003) Synthesis of highly oriented, single-crystal silicon nanoparticles in a low-pressure, inductively coupled plasma. J Appl Phys 94(3):1969–1974
54. Littau K et al (1993) A luminescent silicon nanocrystal colloid via a high-temperature aerosol reaction. J Phys Chem 97(6):1224–1230
55. Li X et al (2003) Process for preparing macroscopic quantities of brightly photoluminescent silicon nanoparticles with emission spanning the visible spectrum. Langmuir 19(20):8490–8496
56. Hua F et al (2006) Organically capped silicon nanoparticles with blue photoluminescence prepared by hydrosilylation followed by oxidation. Langmuir 22(9):4363–4370
57. Sun YP et al (2006) Quantum-sized carbon dots for bright and colorful photoluminescence. J Am Chem Soc 128(24):7756–7757
58. Zheng LY et al (2009) Electrochemiluminescence of water-soluble carbon nanocrystals released electrochemically from graphite. J Am Chem Soc 131(13):4564
59. Lu J et al (2009) One-pot synthesis of fluorescent carbon nanoribbons, nanoparticles, and graphene by the exfoliation of graphite in ionic liquids. ACS Nano 3(8):2367–2375
60. Ding H et al (2013) Luminescent carbon quantum dots and their application in cell imaging. New J Chem 37(8):2515–2520
61. Jeong J et al (2012) Color-tunable photoluminescent fullerene nanoparticles. Adv Mater 24(15):1999–2003



62. Luo PG et al (2014) Carbon-based quantum dots for fluorescence imaging of cells and tissues. RSC Adv 4(21):10791–10807
63. Wang F et al (2010) One-step synthesis of highly luminescent carbon dots in noncoordinating solvents. Chem Mater 22(16):4528–4530
64. Sahu S et al (2012) Simple one-step synthesis of highly luminescent carbon dots from orange juice: application as excellent bio-imaging agents. Chem Commun 48(70):8835–8837
65. Yang ST et al (2009) Carbon dots as nontoxic and high-performance fluorescence imaging agents. J Phys Chem C 113(42):18110–18114
66. Wang X et al (2010) Bandgap-like strong fluorescence in functionalized carbon nanoparticles. Angew Chem Int Ed 49(31):5310–5314
67. Bhunia SK et al (2013) Carbon nanoparticle-based fluorescent bioimaging probes. Sci Rep 3:1473
68. Shen JH et al (2012) One-pot hydrothermal synthesis of graphene quantum dots surface-passivated by polyethylene glycol and their photoelectric conversion under near-infrared light. New J Chem 36(1):97–101
69. Lu J et al (2011) Transforming C60 molecules into graphene quantum dots. Nat Nanotechnol 6(4):247–252
70. Krunks M et al (1999) Structural and optical properties of sprayed CuInS2 films. Thin Solid Films 338(1):125–130
71. Torimoto T et al (2007) Facile synthesis of ZnS-AgInS2 solid solution nanoparticles for a color-adjustable luminophore. J Am Chem Soc 129(41):12388–12389
72. Weissleder R (2001) A clearer vision for in vivo imaging. Nat Biotechnol 19(4):316–317
73. Chemseddine A, Weller H (1993) Highly monodisperse quantum sized CdS particles by size selective precipitation. Berichte der Bunsengesellschaft für physikalische Chemie 97(4):636–638
74. Murray CB et al (2001) Colloidal synthesis of nanocrystals and nanocrystal superlattices. IBM J Res Dev 45(1):47–56
75. Nose K et al (2009) Synthesis of ternary CuInS2 nanocrystals: phase determination by complex ligand species. Chem Mater 21(13):2607–2613
76. Jiang P et al (2012) Water-soluble Ag2S quantum dots for near-infrared fluorescence imaging in vivo. Biomaterials 33(20):5130–5135
77. Zhang Y et al (2012) Ag2S quantum dot: a bright and biocompatible fluorescent nanoprobe in the second near-infrared window. ACS Nano 6(5):3695–3702
78. Zhu C-N et al (2013) Ag2Se quantum dots with tunable emission in the second near-infrared window. ACS Appl Mater Interfaces 5(4):1186–1189
79. Hocaoglu I et al (2012) Development of highly luminescent and cytocompatible near-IR-emitting aqueous Ag2S quantum dots. J Mater Chem 22(29):14674–14681
80. Hong G et al (2012) In vivo fluorescence imaging with Ag2S quantum dots in the second near-infrared region. Angew Chem Int Ed 124(39):9956–9959
81. Yamazaki K et al (2000) Long term pulmonary toxicity of indium arsenide and indium phosphide instilled intratracheally in hamsters. J Occup Health Engl Ed 42(4):169–178
82. Wu P, Yan XP (2013) Doped quantum dots for chemo/biosensing and bioimaging. Chem Soc Rev 42(12):5489–5521
83. Yuan X et al (2014) Thermal stability of Mn2+ ion luminescence in Mn-doped core-shell quantum dots. Nanoscale 6(1):300–307
84. Jurbergs D et al (2006) Silicon nanocrystals with ensemble quantum yields exceeding 60%. Appl Phys Lett 88(23):233116
85. He GS et al (2008) Two- and three-photon absorption and frequency upconverted emission of silicon quantum dots. Nano Lett 8(9):2688–2692
86. Erogbogbo F et al (2011) In vivo targeted cancer imaging, sentinel lymph node mapping and multi-channel imaging with biocompatible silicon nanocrystals. ACS Nano 5(1):413
87. Fan JY, Chu PK (2010) Group IV nanoparticles: synthesis, properties, and biological applications. Small 6(19):2080–2098
88. Zeng S et al (2014) Nanomaterials enhanced surface plasmon resonance for biological and chemical sensing applications. Chem Soc Rev 43(10):3426–3452
89. Ding C, Zhu A, Tian Y (2013) Functional surface engineering of C-dots for fluorescent biosensing and in vivo bioimaging. Acc Chem Res 47(1):20–30
90. Zhang Z et al (2012) Graphene quantum dots: an emerging material for energy-related applications and beyond. Energy Environ Sci 5(10):8869–8890
91. Wang X et al (2010) Bandgap-like strong fluorescence in functionalized carbon nanoparticles. Angew Chem Int Ed 122(31):5438–5442
92. Cao L et al (2012) Photoluminescence properties of graphene versus other carbon nanomaterials. Acc Chem Res 46(1):171–180
93. Sun H et al (2013) Highly photoluminescent amino-functionalized graphene quantum dots used for sensing copper ions. Chem Eur J 19(40):13362–13368
94. Li H et al (2010) Water-soluble fluorescent carbon quantum dots and photocatalyst design. Angew Chem Int Ed 49(26):4430–4434
95. Bacon M, Bradley SJ, Nann T (2014) Graphene quantum dots. Part Part Syst Char 31(4):415–428
96. Anilkumar P et al (2011) Toward quantitatively fluorescent carbon-based "quantum" dots. Nanoscale 3(5):2023–2027
97. Zhu L et al (2013) Plant leaf-derived fluorescent carbon dots for sensing, patterning and coding. J Mater Chem C 1(32):4925–4932
98. Krysmann MJ, Kelarakis A, Giannelis EP (2012) Photoluminescent carbogenic nanoparticles directly derived from crude biomass. Green Chem 14(11):3141–3145
99.

Handbook of Photonics for Biomedical Engineering DOI 10.1007/978-94-007-6174-2_7-1 # Springer Science+Business Media Dordrecht 2014

85. He GS et al (2008) Two-and three-photon absorption and frequency upconverted emission of silicon quantum dots. Nano Lett 8(9):2688–2692 86. Erogbogbo F et al (2011) In vivo targeted cancer imaging, sentinel lymph node mapping and multi-channel imaging with biocompatible silicon nanocrystals. ACS Nano 5(1):413 87. Fan JY, Chu PK (2010) Group IV nanoparticles: synthesis, properties, and biological applications. Small 6(19):2080–2098 88. Zeng S et al (2014) Nanomaterials enhanced surface plasmon resonance for biological and chemical sensing applications. Chem Soc Rev 43(10):3426–3452 89. Ding C, Zhu A, Tian Y (2013) Functional surface engineering of C-dots for fluorescent biosensing and in vivo bioimaging. Acc Chem Res 47(1):20–30 90. Zhang Z et al (2012) Graphene quantum dots: an emerging material for energy-related applications and beyond. Energy Environ Sci 5(10):8869–8890 91. Wang X et al (2010) Bandgap-like strong fluorescence in functionalized carbon nanoparticles. Angew Chem Int Ed 122(31):5438–5442 92. Cao L et al (2012) Photoluminescence properties of graphene versus other carbon nanomaterials. Acc Chem Res 46(1):171–180 93. Sun H et al (2013) Highly photoluminescent amino-functionalized graphene quantum dots used for sensing copper ions. Chem A Eur J 19(40):13362–13368 94. Li H et al (2010) Water-soluble fluorescent carbon quantum dots and photocatalyst design. Angew Chem Int Ed 49(26):4430–4434 95. Bacon M, Bradley SJ, Nann T (2014) Graphene quantum dots. Part Part Syst Char 31(4):415–428 96. Anilkumar P et al (2011) Toward quantitatively fluorescent carbon-based “quantum” dots. Nanoscale 3(5):2023–2027 97. Zhu L et al (2013) Plant leaf-derived fluorescent carbon dots for sensing, patterning and coding. J Mater Chem C 1(32):4925–4932 98. Krysmann MJ, Kelarakis A, Giannelis EP (2012) Photoluminescent carbogenic nanoparticles directly derived from crude biomass. Green Chem 14(11):3141–3145 99. 
Li L et al (2009) Highly luminescent CuInS2/ZnS core/shell nanocrystals: cadmium-free quantum dots for in vivo imaging. Chem Mater 21(12):2422–2429 100. Yong K-T et al (2010) Synthesis of ternary CuInS2/ZnS quantum dot bioconjugates and their applications for targeted cancer bioimaging. Integr Biol 2(2–3):121–129 101. Pons T et al (2010) Cadmium-free CuInS2/ZnS quantum dots for sentinel lymph node imaging with reduced toxicity. ACS Nano 4(5):2531–2538 102. Deng D et al (2012) High-quality CuInS2/ZnS quantum dots for in vitro and in vivo bioimaging. Chem Mater 24(15):3029–3037 103. Guo W et al (2013) Synthesis of Zn-Cu-In-S/ZnS core/shell quantum dots with inhibited blueshift photoluminescence and applications for tumor targeted bioimaging. Theranostics 3(2):99–108 104. Liu L et al (2013) Synthesis of luminescent near-infrared AgInS2 nanocrystals as optical probes for in vivo applications. Theranostics 3(2):109–115 105. Wang Y, Yan X-P (2013) Fabrication of vascular endothelial growth factor antibody bioconjugated ultrasmall near-infrared fluorescent Ag2S quantum dots for targeted cancer imaging in vivo. Chem Commun 49(32):3324–3326

Page 25 of 27

Handbook of Photonics for Biomedical Engineering DOI 10.1007/978-94-007-6174-2_7-1 # Springer Science+Business Media Dordrecht 2014

106. Yong K-T et al (2009) Imaging pancreatic cancer using bioconjugated InP quantum dots. ACS Nano 3(3):502 107. Low PS, Antony AC (2004) Folate receptor-targeted drugs for cancer and inflammatory diseases. Adv Drug Deliv Rev 56(8):1055 108. Lee RJ, Low PS (1994) Delivery of liposomes into cultured KB cells via folate receptormediated endocytosis. J Biol Chem 269(5):3198–3204 109. Antony A (1992) The biological chemistry of folate receptors. Blood 79(11):2807–2820 110. Chang SQ et al (2011) One-step fabrication of biocompatible chitosan-coated ZnS and ZnS: Mn2+ quantum dots via a gamma-radiation route. Nanoscale Res Lett 6:1–7 111. Jayasree A et al (2011) Mannosylated chitosan-zinc sulphide nanocrystals as fluorescent bioprobes for targeted cancer imaging. Carbohydr Polym 85(1):37–43 112. Manzoor K et al (2009) Bio-conjugated luminescent quantum dots of doped ZnS: a cytofriendly system for targeted cancer imaging. Nanotechnology 20(6):065102 113. Xu ZG et al (2011) Glycopolypeptide-encapsulated Mn-doped ZnS quantum dots for drug delivery: fabrication, characterization, and in vitro assessment. Coll Surf B Biointerf 88(1):51–57 114. Gaceur M et al (2012) Polyol-synthesized Zn0.9Mn0.1S nanoparticles as potential luminescent and magnetic bimodal imaging probes: synthesis, characterization, and toxicity study. J Nanopart Res 14(7):1 115. Yu JH et al (2013) High-resolution three-photon biomedical imaging using doped ZnS nanocrystals. Nat Mater 12(4):359–366 116. Erogbogbo F et al (2010) Biocompatible magnetofluorescent probes: luminescent silicon quantum dots coupled with superparamagnetic iron (III) oxide. ACS Nano 4(9):5131 117. Brooks PC et al (1994) Integrin avb3 antagonists promote tumor regression by inducing apoptosis of angiogenic blood vessels. Cell 79(7):1157–1164 118. Peng H, Travas-Sejdic J (2009) Simple aqueous solution route to luminescent carbogenic dots from carbohydrates. Chem Mater 21(23):5563–5565 119. 
Li X et al (2011) Preparation of carbon quantum dots with tunable photoluminescence by rapid laser passivation in ordinary organic solvents. Chem Commun 47(3):932–934 120. Cao L et al (2007) Carbon dots for multiphoton bioimaging. J Am Chem Soc 129(37):11318–11319 121. Kong B et al (2012) Carbon dot-based inorganic–organic nanosystem for two-photon imaging and biosensing of pH variation in living cells and tissues. Adv Mater 24(43):5844–5848 122. Huang X et al (2013) Effect of injection routes on the biodistribution, clearance, and tumor uptake of carbon dots. ACS Nano 7(7):5684–5693 123. Dong Y et al (2012) One-step and high yield simultaneous preparation of single- and multilayer graphene quantum dots from CX-72 carbon black. J Mater Chem 22(18):8764–8766 124. Liu Q et al (2013) Strong two-photon-induced fluorescence from photostable, biocompatible nitrogen-doped graphene quantum dots for cellular and deep-tissue imaging. Nano Lett 13(6):2436–2441 125. Wu X et al (2013) Fabrication of highly fluorescent graphene quantum dots using l-glutamic acid for in vitro/in vivo imaging and sensing. J Mater Chem C 1(31):4676–4684 126. Su Y et al (2009) The cytotoxicity of cadmium based, aqueous phase–synthesized, quantum dots and its modulation by surface coating. Biomaterials 30(1):19–25

Page 26 of 27

Handbook of Photonics for Biomedical Engineering DOI 10.1007/978-94-007-6174-2_7-1 # Springer Science+Business Media Dordrecht 2014

127. Chen N et al (2012) The cytotoxicity of cadmium-based quantum dots. Biomaterials 33(5):1238–1244 128. Zhang Y et al (2013) Biodistribution, pharmacokinetics and toxicology of Ag2S near-infrared quantum dots in mice. Biomaterials 34(14):3639–3646 129. Liu J et al (2013) Assessing clinical prospects of silicon quantum dots: acute doses in mice prove safe in monkeys. ACS Nano 7:7303 130. SUN Y-P et al (2010) Cytotoxicity evaluations of fluorescent carbon nanoparticles. Nano Life 01((01n02)):153–161 131. Zhu S et al (2012) Surface chemistry routes to modulate the photoluminescence of graphene quantum dots: from fluorescence mechanism to up-conversion bioimaging applications. Adv Funct Mater 22(22):4732–4740

Page 27 of 27

Handbook of Photonics for Biomedical Engineering DOI 10.1007/978-94-007-6174-2_8-2 # Springer Science+Business Media Dordrecht 2013

Photonic Crystal Fiber-Based Biosensors
Xia Yu*, Derrick Yong and Yating Zhang
Precision Measurements Group, Singapore Institute of Manufacturing Technology, Singapore, Singapore

Abstract
Photonic crystal fibers (PCFs) are an emerging class of optical fibers that offer a diversity of features beyond what conventional optical fibers can provide. Owing to their unique geometric structure and light-guiding properties, PCFs show outstanding potential for microliter- or even nanoliter-volume biosensing. In this chapter we briefly review applications of PCFs in developing compact and robust biosensors. This research subject has recently attracted much attention owing to the gradually maturing fabrication techniques for fiber microstructures, as well as the development of surface-processing techniques that enable activation of fiber microstructures with functional materials. In particular, we consider two sensor types: surface-modified and unmodified PCF biosensors. For the first type, we focus mainly on biomolecule-decorated microstructures and metalized air-hole arrays. Sensors functionalized with bioreceptors typically employ the fluorescence of dye labels attached to the targeted biomolecules to track specific events, such as DNA hybridization and protein binding. Metallization of the air holes of a PCF with nanoscale films or particle aggregates introduces a physical phenomenon, the surface plasmon effect, that further strengthens the optical field interacting with bio-samples, which is of great significance for biological analysis at ultralow concentrations and even at the single-molecule level. The second sensor type relies directly on light absorption by aqueous bio-samples in the air channels of the PCF. To elaborate the contribution of the PCF in this sensing mode, two sensor implementations are presented below, and the influence of fiber structure on absorption-based sensing performance is detailed.

Keywords Photonic crystal fiber; Total internal reflection; Photonic bandgap; Biosensors; Evanescent field; Surface plasmon; Surface-enhanced Raman scattering; Absorption; Resolution; Sensitivity; Biomolecular

Introduction

Introduction of PCF

Conventional optical fibers allow the propagation of light along their length by confining it within the core. Guidance in such fibers is attained via total internal reflection (TIR), which requires the refractive index of the core to be higher than that of the cladding. To obtain this higher refractive index, the core is doped, and to further achieve single-mode propagation, a narrower core

*Email: [email protected]


Fig. 1 Schematics of four basic types of PCF: index-guiding, hollow-core, all-solid, and hybrid (silica background with air holes and/or high-index doped rods)

is necessary. However, doping raises attenuation, and a tighter core limits the permissible optical power and elicits undesired nonlinear interactions over long fiber lengths [1]. The idea of trapping light within a hollow fiber core emerged in 1991: a hollow core surrounded by a microscopic periodic lattice of air holes in the silica cladding, forming a photonic crystal structure [2]. This photonic crystal structure relies on the regular arrangement of microstructures to fundamentally alter the material's optical properties. Fibers possessing such structures are known as photonic crystal fibers (PCFs). The photonic crystal structure gives rise to a highly wavelength-dependent cladding index in the PCF, providing a host of customizable properties. Over the past years, PCFs have demonstrated their superiority over conventional optical fibers in many respects, leading to the emergence of numerous novel applications, particularly in sensing. In addition, they possess untapped potential to further surpass conventional optical fibers as research in the field advances [3]. Diverse designs for PCFs have arisen over the years and are mainly classified into four basic types [4]: (a) index-guiding PCF (solid core surrounded by a periodic array of air holes), (b) hollow-core PCF (hollow core surrounded by a periodic array of air holes), (c) all-solid photonic bandgap (PBG) fiber (solid core surrounded by a periodic array of high-index rods), and (d) hybrid PCF (solid core surrounded by a periodic array of air holes and high-index rods). The simplest form of guidance within a PCF occurs in index-guiding PCF, where light is guided by modified TIR [5–7]. As illustrated in Fig. 1, this form of PCF incorporates a solid core (an introduced defect in the periodic array of air holes) within a photonic crystal structure that constitutes the cladding. This configuration results in a higher refractive index in the core, allowing light to be confined within it.
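The single-mode trade-off described above can be made concrete with the normalized frequency (V-number) of a step-index fiber, V = (2πa/λ)·√(n_core² − n_clad²), which must stay below ≈2.405 for single-mode guidance. A minimal sketch in Python, with illustrative (assumed) values for a lightly doped telecom-style fiber:

```python
import math

def v_number(core_radius_um, wavelength_um, n_core, n_clad):
    """Normalized frequency of a step-index fiber; single-mode if V < 2.405."""
    na = math.sqrt(n_core**2 - n_clad**2)  # numerical aperture
    return 2 * math.pi * core_radius_um / wavelength_um * na

# Illustrative (assumed) values: 4.1 um core radius, 1550 nm light,
# lightly doped core over a pure-silica cladding.
v = v_number(4.1, 1.55, 1.4504, 1.4447)
single_mode = v < 2.405
```

Raising the index contrast (heavier doping) pushes V up and forces a narrower core to stay single-mode, which is precisely the trade-off that PCFs sidestep with their wavelength-dependent cladding index.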
Although TIR is ideal for light guidance, it cannot avoid the scattering losses and intrinsic absorption that accompany propagation within a solid core. Without a solid core, however, light cannot be guided via TIR. In simple hollow fibers, guidance depends on external reflection, which is highly leaky and multimodal [8]. In contrast, hollow-core PCFs are able to guide light within the hollow core without leakage. Also shown in Fig. 1 is a hollow-core PCF, which has a hollow core situated within a photonic crystal structure that again constitutes the cladding. This configuration traps certain bandwidths of light in the hollow core by photonic bandgap (PBG) effects of the cladding, instead of TIR, and enables single-mode guidance [9]. Unlike conventional optical fibers, hollow-core PCFs allow light to be guided in an air-filled space (the hollow core), permitting minimal attenuation over extended lengths. Guidance is possible owing to coherent Bragg scattering, whereby specific bandwidths of light are prevented from escaping into the cladding and are confined within the hollow core. Since only certain bands of light are confined and guided, these fibers are also known as PBG fibers. Furthermore, the photonic crystal structure of the cladding effectively encloses more than 99 % of the optical power within the hollow core, enabling low-loss propagation of light along the PCF [10]. As in index-guiding PCFs, loss in hollow-core PCFs remains a significant issue; nevertheless, hollow-core PCFs offer the best prospects for exceptionally low-loss fibers, given that light propagates within the air-filled hollow core.



Fig. 2 Stack-and-draw process for PCF fabrication: (1) capillaries are drawn from larger tubes; (2) the stack is manually assembled at macroscopic scale on a hexagonally shaped jig; (3) the stack is inserted into a tube, the remaining gaps are filled with pure silica rods, and the tube is drawn into a millimeter-wide preform; (4) the preform is further drawn into fiber using a fiber-drawing tower under optimally set temperature and drawing rate

Losses as low as 1.2 dB km⁻¹ have been reported [10]. Nevertheless, theoretical predictions indicate potential losses on the order of 0.1 dB km⁻¹, which is essentially lower than that of conventional optical fibers [4]. Fabrication is essential in the realization of modeled PCF designs. Compared with conventional fiber fabrication, PCF fabrication is a less sophisticated process: in brief, silica capillaries are stacked, fused, and eventually drawn into fibers [1]. This technique offers high flexibility by enabling a variety of sophisticated lattices to be assembled. The process is illustrated in Fig. 2 [4]. Drawn fibers are verified via high-precision microscopy and are further polymer coated to improve their mechanical properties. PCF fabrication with various materials through extrusion [11–13], drilling [14], and built-in casting [15] has also been reported.
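To put these loss figures in perspective, attenuation in dB/km converts to a transmitted power fraction via T = 10^(−αL/10); the 10 km span below is an assumed length chosen only for illustration:

```python
def transmitted_fraction(alpha_db_per_km, length_km):
    """Fraction of launched power remaining after propagation."""
    return 10 ** (-alpha_db_per_km * length_km / 10)

# Reported hollow-core PCF loss vs. the predicted limit, over 10 km:
t_reported = transmitted_fraction(1.2, 10)   # roughly 6 % of the power remains
t_predicted = transmitted_fraction(0.1, 10)  # roughly 79 % remains
```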

Introduction of PCF-Based Biosensors
Fiber optic technology has permitted the miniaturization of numerous sensors, leading to the vast exploration of fiber optic-based sensors over the past few decades. PCF has provided new grounds for enhanced sensing capabilities with fiber optics. The novel manipulation and guidance of light within PCFs have distinctly heightened performance in terms of precision as well as accuracy. In particular, the modes in PCF, namely, the core, cladding, and hybrid modes, are sensitive to ambient conditions. These modes can thus provide the required data, either individually or collectively [16]. As reported, changes in strain and temperature readily produce a response in all modes, whereas changes in the surrounding medium affect only certain cladding modes. Further post-processing techniques, such as infiltration with gases [17] and liquids [18], coating with metal films [19, 20] and metallic nanoparticles [21, 22], as well as tapering [23], have also been investigated to increase the sensitivity of PCF-based sensors. Surface-modified and unmodified PCF-based biosensors are discussed in this chapter.

Surface-Modified PCF Biosensors
An important characteristic of PCF is that its array of air holes provides an exceptionally high surface-area-to-volume ratio. Surface modifications performed on the walls of these air holes thus further elevate the sensitivity of a PCF-based sensor [24].
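As a rough illustration of this ratio, a cylindrical air channel of radius r has a wall-area-to-volume ratio of 2/r, independent of its length. A sketch using the 5.4 µm hole diameter of the SCPCF described later in this chapter (the comparison depth of 1 mm is an assumption for illustration):

```python
def cylinder_surface_to_volume(radius_um):
    """Wall area / enclosed volume of a cylindrical channel = 2/r (per um)."""
    return 2 / radius_um

# A 5.4 um diameter air hole gives ~0.74 um^-1 (~7.4e5 m^-1), orders of
# magnitude above a flat sample cell of, say, 1 mm depth (~2e3 m^-1).
ratio = cylinder_surface_to_volume(5.4 / 2)
```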



Fig. 3 Schematics of optical fiber layout in biochip [25]

Surface Modification by Biomolecules
The specificity of deoxyribonucleic acid (DNA) hybridization and of protein-protein interactions forms the basis of optical sensing via surface functionalization with biomolecules. Although the interaction is between biomolecules immobilized on surfaces and biomolecules introduced in analytes, detection in PCF is optics based; therefore, these biomolecules are often tagged with fluorophores. Rindorf et al. have reported such a PCF-based sensor, in which the sensing component (a surface-functionalized PCF) is incorporated into a biochip [25]. The reported PCF had single-stranded DNA (ssDNA) immobilized on the walls of its air holes. These ssDNAs were complementary to Cy3 (fluorophore)-labeled target ssDNAs. Hence, when hybridization occurs between this pair of ssDNAs, the resultant double-stranded DNA possesses a Cy3 molecule. Light is coupled into the PCF from an input multimode optical fiber (MMF) and collected via an output MMF, as illustrated in Fig. 3. Cy3 that is trapped within the PCF owing to DNA hybridization thus absorbs a certain band of the input light, and this absorbance is reflected in the collected output spectrum. If noncomplementary ssDNA is introduced instead, DNA hybridization does not occur, no Cy3 is trapped, and hence no absorbance is observed. In addition, the unique incorporation of PCF into biochips has facilitated the infiltration of the PCF with desired samples. Apart from surface modification with DNA, protein immobilization in PCF has also been reported. Similar to DNA hybridization, proteins have specific interactions with other proteins, as in antigen-antibody assays. Specifically, an estrogen receptor (ER, the antigen) from breast cancer cells, immobilized within a length of PCF, is conjugated with an anti-ER antibody [26]. As in [25], fluorescent dye labels are involved in the sensing process.
In this case, however, instead of labeling the target protein with a fluorophore, a secondary antibody, able to bind the anti-ER, is labeled. Input light corresponding to the excitation wavelength of the fluorophore is used, and successful binding of anti-ER to ER is characterized by absorbance and corresponding emission peaks. This detection regime was able to identify 20 pg of ER in a 50 nL sample, enabling highly sensitive detection of breast cancer indicators even with extremely low sample volumes. In contrast, DNA hybridization detection without fluorescence labels has also been reported in several articles. Label-free detection was experimentally performed on a PCF with long-period gratings (PCF-LPG) [27]. A layer of biomolecules immobilized on the walls of the air holes elicited resonant wavelength shifts and further enabled the layer's thickness to be estimated from experimental data. Sensing was also theoretically demonstrated in a hollow-core PCF Bragg fiber [28]. Transmission properties of the Bragg fiber were altered by hybridization of DNA at the



walls of the PCF's air holes; similar to Ref. [27], the thickness of the DNA layer was also quantifiable based on the reported theoretical model. Beyond binding-governed detection regimes, surface modifications have also enabled pH sensing in PCF-based devices. Specifically, pH sensing has been successfully demonstrated in PCFs surface modified with pH-sensitive polysaccharide-based films comprising cellulose acetate doped with a pH-sensitive fluorescent dye, eosin [29]. This surface-modified microstructured polymer optical fiber probe also exhibits a surfactant-modifiable pH response range and highlights the feasibility of organic meshes as indicator carriers.
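The absorbance readout used in the dye-labeled hybridization scheme above can be sketched as a simple Beer-Lambert comparison of output spectra with and without hybridization. The ~550 nm Cy3 band, the threshold, and the toy spectra are illustrative assumptions, not values from Ref. [25]:

```python
import math

def absorbance(i_reference, i_sample):
    """Beer-Lambert absorbance A = -log10(I_sample / I_reference)."""
    return -math.log10(i_sample / i_reference)

def hybridized(ref_spectrum, out_spectrum, band_nm=550, threshold=0.1):
    """Flag hybridization if absorbance in the dye band exceeds a threshold.

    Spectra are dicts {wavelength_nm: intensity}; band and threshold
    are illustrative assumptions.
    """
    return absorbance(ref_spectrum[band_nm], out_spectrum[band_nm]) > threshold

ref = {550: 1.00, 650: 1.00}               # spectrum with no sample bound
complementary = {550: 0.40, 650: 0.98}     # Cy3 trapped: strong dip at 550 nm
noncomplementary = {550: 0.97, 650: 0.98}  # no Cy3: spectrum nearly unchanged
```

Here `hybridized(ref, complementary)` flags a binding event, while `hybridized(ref, noncomplementary)` does not, mirroring the two outcomes described in the text.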

Surface Modification by Metal
Surface plasmon resonance (SPR) has empowered optical sensing with remarkably high sensitivities. Typically, SPR sensing is performed in the Kretschmann-Raether configuration, where p-polarized light passes through a prism and is reflected from a thin layer of metal (usually gold or silver). At the point of reflection, evanescent waves formed at the metal-dielectric interface are phase matched with plasmonic waves at the metal-analyte interface, resulting in the formation of surface plasmon waves. Minute changes in the analyte's refractive index are reflected in changes in the amplitude or phase of the reflected light. To miniaturize this technology, it has been combined with optical fibers, as comprehensively discussed in [30]. PCF has eased the issues of phase matching and plasmon excitation by providing a Gaussian-like core mode and metal-coated air-hole walls, respectively. Furthermore, the microfluidics enabled by its air holes adds practicality. A reported theoretical study yields a refractive index sensitivity of 10⁻⁴ refractive index units (RIU) [19], providing possibilities for sensing changes in biological analytes and deeming PCF ideal as an SPR fiber optic biosensor. Besides coating the walls of the air holes with a continuous metal layer, metallic nanoparticle coatings have also been explored. Metallic nanoparticles serve as substrates for surface-enhanced Raman scattering (SERS), which is highly molecule specific and able to enhance the Raman signal by factors between 10⁶ and 10¹⁵. The analyte solution is infiltrated via simple capillary action, allowing the confined optical modes in the PCF's air holes to interact with it and with the immobilized gold nanoparticles to obtain measurable SERS signals, providing a possible platform for the detection of biomolecules. Generally, two types of PCF have been studied intensively for SERS applications, namely, solid-core PCF (SCPCF) [31–34] and hollow-core PCF (HCPCF) [35–38].
Their distinct guidance properties yield different modes of light-sample interaction. In SCPCF, the Raman signal is generated from sample molecules within the evanescent field, whereas in HCPCF, the excitation light confined in the hollow core interacts directly with the sample there. HCPCF brings ultrahigh efficiency of surface plasmon excitation, but it has one major limitation: a rather narrow transmission window. This effect becomes especially significant after analyte infiltration, as the fiber loses its PBG properties. This restricts the wavelength of the excitation light and hinders applications to samples with Raman wavelengths outside the transmission band. In contrast, SCPCF has a much broader wavelength range to accommodate diverse sensing regimes. However, the major challenge for SCPCF-based SERS sensors is their lower sensitivity, which arises from the smaller volume of light-sample interaction. For both PCF templates, great efforts have been made to further improve SERS capabilities. For instance, to alleviate the restriction of the bandgap effect in HCPCF, a selective-filling approach was proposed to optimize the guidance properties of the fiber while preserving high SERS sensitivity [36]. A few works have also been devoted to enhancing SERS efficiency in SCPCF, from the viewpoint of structural design as well as optimization of the coating process [22, 34].
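The phase-matching requirement underlying the plasmon excitation discussed above can be illustrated by comparing the surface-plasmon effective index, n_spp = Re√(ε_m ε_d/(ε_m + ε_d)), with the analyte's index: because n_spp exceeds it, a beam propagating freely in the analyte can never match the plasmon, and a prism or a guided fiber mode must supply the extra momentum. A sketch with an approximate (assumed) literature value for gold near 633 nm:

```python
import cmath

def spp_effective_index(eps_metal, eps_dielectric):
    """Real part of the surface-plasmon-polariton effective index."""
    return cmath.sqrt(eps_metal * eps_dielectric /
                      (eps_metal + eps_dielectric)).real

# Assumed values: gold permittivity ~ -11.7 + 1.2j near 633 nm,
# aqueous analyte with n = 1.33 (eps = n**2).
n_spp = spp_effective_index(-11.7 + 1.2j, 1.33**2)
needs_coupler = n_spp > 1.33  # free propagation in water cannot phase match
```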



Fig. 4 (a) Micrograph of SCPCF cross section; circles A and B indicate the launching positions for core launch and offset launch, respectively. (b) Micrograph of SCPCF axial profile after multilayer deposition of Au NPs. (c) Absorbance spectrum of prepared Au NPs (inset: SEM micrograph of Au NPs) (Reproduced from Ref. [47])

The reproducible deposition of metallic nanoparticles inside the air holes mainly involves three methods. The first is stabilization with a positive surfactant, for example, hexadecyltrimethylammonium bromide, which readily adsorbs metal nanoparticles by opposite-charge affinity [21]. The second is silanization of the silica walls of the air holes: the silane chemically binds to the air-hole surface and provides coupling sites for metal nanoparticles [39]. Another technique worth mentioning is high-pressure chemical deposition, as reported in [32]: a silver precursor complex was delivered into the fiber holes under high pressure, and thermal reduction of the precursor was rigorously controlled to form an annular deposition of silver nanoparticles. In the remainder of this section, a new method of SERS substrate fabrication, multilayered deposition [40, 41], and recent progress in using an offset launch method to amplify the SERS efficiency of an SCPCF-based Raman sensor are presented.

Sensor Fabrication
The SCPCF has a silica core and three rings of air holes arranged hexagonally, as illustrated in Fig. 4a. The diameter of the air holes is 5.4 µm and the hole-to-hole distance is 9.6 µm. The preparation of the gold colloid follows the recipe reported by Kumar et al. [42]. Briefly, 50 ml of HAuCl4 solution (2.54 × 10⁻⁴ M) was brought to a boil under vigorous stirring. Approximately 1 ml of sodium citrate solution (38.8 mM) was then quickly added to the mixture. Upon a color change, heating was maintained for another 10 min, and the solution was subsequently allowed to cool to room temperature under stirring. This gold reduction procedure produces Au NPs approximately 15 nm in diameter, with an absorption peak at 519 nm in the UV-vis spectrum, corresponding to the excitation wavelength of their localized surface plasmons (shown in Fig. 4c). The fiber was cut into segments about 8 cm in length, with both ends carefully cleaved.
Before modification, a portion of the jacket (3 cm) was stripped off from the fiber tip. Multilayer deposition of



Fig. 5 Schematic of multilayered Au NP deposition on the inner walls of the SCPCF air holes via the "layer-by-layer" deposition method

Au NPs in the air holes of the fiber was carried out in the following steps: (1) The fiber was first cleaned with freshly prepared piranha solution (30 % H2O2 and 98 % H2SO4 mixed in a volumetric ratio of 1:3). The solution was pumped into the fiber's air holes with a custom-built pressure cell and allowed to react for 30 min. (2) The air holes were then flushed thoroughly with deionized (DI) water and dried in N2 gas. (3) Once cleaned, the air holes were infiltrated with a 3 % (by volume) solution of APTMS in methanol and allowed to react for approximately 12 h. This step functionalized the air-hole surfaces with amine groups (shown in Fig. 5). (4) The fiber was then flushed with copious amounts of methanol and dried. (5) Next, the Au NP solution was infiltrated and allowed to react with the amine-functionalized surface for another 24 h. The strong affinity between amine groups and Au NPs enabled adherence of the Au NPs to the silica surface. (6) Finally, the first round of deposition ended with continuous flushing with DI water and drying under N2 gas. Subsequent Au NP layers were added by alternating steps (3)–(6) until the desired number of layers was obtained. After eight cycles of Au NP deposition, a wine-red color was observed in the air holes of the fiber under a microscope at 100× magnification (Fig. 4b). To identify this red substance, a portion of the fiber was characterized under an SEM/EDX system. The fiber was cracked with tweezers to expose the inner surface of the air holes (Fig. 6a). As shown in Fig. 6b, a distribution of nanoparticle sizes and shapes exists on the silica wall; specifically, the particle size varies between 15 and 60 nm. An EDX investigation (Fig. 6c) verified that the particles attached to the air-hole walls are indeed Au NPs. This nonuniformity in particle size arises from aggregation of the Au NPs as more deposition layers were added [38].
The Au NPs, together with their clusters, create the SERS-active area around the silica core.
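The alternating deposition cycle described above can be summarized as a simple loop; the step names are paraphrases of the protocol, and the cycle structure is the only thing this sketch asserts:

```python
def deposition_protocol(n_layers):
    """Build the ordered step list for multilayer Au NP deposition."""
    steps = [
        "piranha clean in pressure cell, 30 min",  # performed once
        "flush with DI water, dry in N2",          # performed once
    ]
    for _ in range(n_layers):  # alternating silanization and Au NP deposition
        steps += [
            "infiltrate 3 % APTMS in methanol, ~12 h",
            "flush with methanol, dry",
            "infiltrate Au NP colloid, ~24 h",
            "flush with DI water, dry in N2",
        ]
    return steps

protocol = deposition_protocol(8)  # eight cycles, as in the reported probe
```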



Fig. 6 (a) SEM micrograph of air holes deposited with Au NPs. (b) Magnified view of the region marked with a red circle in (a). (c) Elemental spectrum of (b) collected with EDX (Reproduced from Ref. [47])

Sensor Characterization and Application
To test the SERS performance of the probe, Rhodamine B (RhB) solution, a commonly used Raman dye, was chosen as the sample for analysis. RhB was first infiltrated into the air holes under pressure, which required only a few seconds. The fiber was then mounted on a holder and placed under the microscope of a Raman spectrometer (Renishaw), where a 50× objective lens (NA 0.75) was used to couple the excitation laser (a 785 nm diode laser with 3 mW power) into the fiber and simultaneously collect the backscattered Raman emission. Here, Raman emission was generated from RhB molecules adsorbed onto the SERS substrate, the deposited Au NPs. Since the Raman spectrometer allows visualization of the fiber end face, the launch position of the laser source (spot size of 3 µm) could be accurately determined. Utilizing this feature, different measurement methodologies were adopted. One approach was to focus the laser source directly into the solid fiber core (circle A in Fig. 4a), the most common method mentioned in the literature [31–34]. An alternative involves illumination of a sample-filled air hole in the second layer of the cladding (circle B in Fig. 4a), subsequently referred to as offset launch. All SERS spectra were acquired with three accumulations of 10 s exposures. Figure 7 presents the Raman spectra collected under the abovementioned launching conditions, core launch and offset launch, for RhB solutions of 10⁻⁵ M concentration. All spectra were vertically separated for clarity but were not scaled relative to one another. Curves A and B correspond to the signals collected from the SERS probe under core and offset launch, with the silica Raman background removed. For comparison, the Raman light in an uncoated SCPCF was also measured (curves C and D).
Note that regardless of the launch method, no Raman signals were observed from the uncoated SCPCF, whereas in the presence of Au NPs a set of pronounced Raman peaks appeared. The Raman peaks mainly spread from 1,100 to 1,700 cm⁻¹, consistent with the RhB Raman signatures reported in the literature [43]. Notably, for offset launch the SERS spectrum displays better-resolved peaks (e.g., peaks located at 1,076, 1,123, and 1,593 cm⁻¹) with a higher signal-to-noise ratio than that from the core-launch scheme. In particular, the Raman intensity at 1,504 cm⁻¹ increases from 2,695 counts to 4,500 counts as the launch point switches from the core to the air hole, demonstrating the potential for achieving higher sensitivity by offset launch.


Handbook of Photonics for Biomedical Engineering DOI 10.1007/978-94-007-6174-2_8-2 # Springer Science+Business Media Dordrecht 2013

Fig. 7 Raman spectra of 10⁻⁵ M RhB in fabricated SCPCF Raman probe (Reproduced from Ref. [47])

Fig. 8 SERS spectra of 10⁻⁷ M RhB in fabricated SCPCF Raman probe (Reproduced from Ref. [47])

Further investigation of the SERS enhancement was conducted at a lower sample concentration of 10⁻⁷ M. The Raman signal corresponding to the core launch is "drowned" by the noise, as shown in Fig. 8. However, the SERS peaks for offset launch are still clearly distinguishable despite the noise of the system. These results thus highlight the advantage of offset launch in improving the sensing capability of the SERS probe. The achieved detection limit of 10⁻⁷ M is comparable with most existing SERS sensors [44]. A qualitative analysis of the difference in SERS signal strength between the two launching conditions, based on the light distribution within the liquid-filled SCPCF, was carried out using the finite element


Fig. 9 Normalized power distribution of fundamental modes as a function of radial position (corresponding to white line in respective insets: 2D intensity plots) excited under (a) core launch and (b) offset launch at excitation wavelength of 785 nm (Reproduced from Ref. [47])

method as follows. Since the concentration of the RhB solution is sufficiently low, the refractive index of the solution is assumed to be that of water (1.33). The infiltrated analyte thus does not disrupt the fiber guidance via TIR. The calculated power distribution of the fundamental core modes is shown in Fig. 9. The normalized power distribution under core launch (Fig. 9a) shows that the low-loss core-guided mode has most of its energy confined in the silica fiber core, with only a small amount of power located in the liquid-filled regions surrounding the core. This corresponds to the evanescent field that can interact with the Au NPs, thereby exciting surface plasmons along the fiber length. Since the precise 3D geometry of the SERS substrate is unknown, the gold clusters in each hole were assumed to occupy a thin layer, 60 nm thick, on the inner wall of the air hole. Only the modal field penetrating these layers contributes to the surface plasmon excitation. Using the method described in [45], the power fraction over these active regions was calculated to be 0.0063 %.
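The order of magnitude of such small power fractions can be illustrated with a deliberately simplified sketch: the fundamental mode is approximated as a scalar Gaussian and the gold-cluster layer as a thin annulus at the radius of the first hole wall. The mode radius, wall radius, and layer thickness below are hypothetical values, not those of the full-vector calculation in [45], which accounts for the true hole geometry and therefore yields a much smaller fraction.

```python
import math

def power_fraction_annulus(w_um, r1_um, r2_um):
    """Fraction of a scalar Gaussian mode's power (1/e^2 radius w) falling
    in the annulus r1 <= r <= r2, from the closed-form radial integral of
    I(r) proportional to exp(-2 r^2 / w^2)."""
    f = lambda r: math.exp(-2.0 * r * r / (w_um * w_um))
    return f(r1_um) - f(r2_um)

# Hypothetical geometry: mode radius 2.5 um, first ring of hole walls at
# 3.6 um, and a 60 nm gold-cluster layer on the wall.
frac = power_fraction_annulus(2.5, 3.6, 3.66)  # a small fraction of a percent
```

Even this optimistic axisymmetric toy places well under 1 % of the modal power in the layer; restricting the overlap to the actual coated wall segments reduces the fraction further.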


In offset launch, the excited mode displays distinctly different propagation behavior, as shown in Fig. 9b. The power is mainly localized in the silica region of the side waveguide, which consists of a partially liquid-filled core and six surrounding channels. Here, the effective sensing region covers 17 holes, with a total sample volume of 30 nL, and a power fraction of 0.049 % was estimated for plasmonic excitation, eight times that of the core-launch scheme. The increased power fraction is speculated to arise from the much smaller core-cladding index contrast of the side waveguide, which yields a larger field overlap with the Au NPs. However, the theoretically predicted improvement may be offset by three factors: (i) deviation of the mode intensity profile from a Gaussian shape lowers the coupling efficiency of a standard Gaussian laser beam compared to core launch, reducing the power entering the fiber probe; (ii) the larger gold-coated area involved in Raman enhancement under offset launch induces a stronger absorption loss of the Raman light; and (iii) light propagating in the side waveguide suffers a higher confinement loss, as can be seen from the power leakage toward the adjacent solid fiber core (Fig. 9b) and the outermost silica cladding. To some extent, such a launching method prevents the Raman light generated along the fiber from being well guided and collected by the objective lens. The finally detected Raman signal strength is therefore a trade-off between the amplified SERS effect and these sources of loss. Nevertheless, offset launch still exhibits an obvious advantage over the common method of core launch.
The design flexibility of PCF cross-sectional structures and the layer-by-layer Au NP deposition technology provide several degrees of freedom that can be manipulated to achieve optimal performance, namely, the refractive index of the fiber material, the size of the air holes, the fiber length, the size of the Au NPs, and the number of deposition cycles. Offset launch has similarly been applied to a hollow-core polymer PCF SERS probe, as detailed in Cox et al.'s work [35]. In their experiment the laser source was randomly focused into the cladding air holes, but the collected SERS signal was observed to be much weaker than that obtained by core launch. This was attributed to the perfect light-confinement property of the kagome lattice cladding, which confines light within the large hollow core but lacks an efficient mechanism of mode guidance in the ultrathin silica struts or the hollow capillaries of the cladding [46].

Conclusion A simple method of offset launch has been demonstrated as an efficient way to amplify the SERS effect within an SCPCF [47]. This was achieved by shifting the launching point from the solid fiber core to a sample-filled air hole in the cladding. The stronger SERS signal observed was attributed to the significant increase in the overlap between the excited mode and the Au NPs. The offset launch-induced enhancement was also demonstrated to yield an improved detection limit of 10⁻⁷ M for sample volumes as low as 30 nL. Lastly, the amplification mechanism also contributes to the design principles for future structural optimization of fiber SERS probes.

Surface Unmodified PCF Biosensors As previously discussed, fiber parameters strongly influence the guidance of light within PCFs. A strongly penetrating PCF was reported to propagate visible light within the silica walls with evanescent waves in its air holes for interaction with the infiltrated analyte [48, 49]. Its functionality was demonstrated through the detection of Cy5-labeled DNA molecules via absorbance spectrum analysis. In such PCFs, the extent of the evanescent field overlapping with the infiltrations is superior to conventional evanescent field-dependent spectroscopy devices [50]. The results show that the enhancement of sensitivity is attributed to a long effective interaction length while requiring


only submicroliter sample volumes. Recently, liquid-core waveguide (LCW) cells have been widely used to minimize the loss of light. In an LCW cell, light propagates through the LCW fluid by TIR because the tube (cladding) has a lower refractive index than the fluid (core). Such cells are now widely used for long path-length spectrophotometry [51]. However, the construction material for LCW cells, Teflon® AF, is one of the most expensive commercial polymers. Moreover, as Teflon® AF is highly gas permeable, it poses the problem of evaporation of the internal solution. In the literature, different index-guiding PCFs have been proposed as miniaturized waveguide flow cells, particularly for long path-length axial absorbance measurement. This offers several advantages over conventional absorption detection methods. As light propagates along the fiber, the optical path equals the length of the fiber and could, in theory, be any desired length. Sensitivity is therefore enhanced by the extraordinarily long optical path. Moreover, stray-light effects due to the increased path length are eliminated by the waveguiding nature of PCFs. Furthermore, compared to Teflon® AF, fibers are a much cheaper alternative, and the small cross-sectional area of the air holes minimizes reagent consumption. Lastly, the robust and flexible nature of PCFs renders them extremely suitable for on-chip integration.

Operating Principle and Theoretical Analysis Absorption spectroscopy is the most widely used detection method. It refers to a range of techniques employing the interaction of electromagnetic radiation with matter. As defined by the Beer-Lambert law [52], the absorbance A of light by a sample is proportional to the optical path length b, the chromophore concentration c, and its molar absorption coefficient ε. In PCFs, the fraction of light interacting with the aqueous solution, Phole, also determines the absorbance. The term Phole is therefore included in the calculation of absorbance in the form A = log(I0/It) = εbcPhole + α, where α is the attenuation of the PCF without absorption in dB. In absorption spectroscopy, the intensity of a beam of light measured before and after interaction with a sample is related by I = I0 exp(−kλL), where I0 is the intensity of the incident light, I is the intensity of the light passed through the layer of the substance, L is the thickness of the substance layer (the path length), and kλ is the extinction coefficient, which depends on the type of substance and the wavelength of the incident light. Different molecules absorb radiation of different wavelengths, and an absorption spectrum shows a number of absorption bands corresponding to structural groups within the molecule. Thus, by knowing the shape of the absorption spectrum, the optical path length, and the amount of radiation absorbed, one can determine the structure and concentration of the compound. Such spectra are often obtained with a spectrophotometer, where the sample, often liquid, is contained in an optical container called a flow cell. The cell is placed into the spectrophotometer and allowed to interact with the light beam at varying wavelengths. In chemical sensing, solid-core PCFs based on index guiding show some advantages over their hollow-core counterparts governed by PBG [53–57]. First, they support a broader spectral band.
Second, the requirements for accurate control of the air-hole size and the periodicity of the holes are less stringent in the former, which increases the fabrication tolerance. Figure 10 shows the scanning electron micrographs of these two structures. For structure A, the average lattice period Λ is 5.2 μm and the average diameter of the air holes in the cladding d is 3.2 μm. Since the index of the solid core is higher than that of the air-hole cladding, modified total internal reflection governs the guidance of light. In structure B, instead of filling the center of the preform with a solid rod, a hole with a diameter of 3.1 μm is introduced; its Λ is 6.6 μm and d is 6.2 μm. Similarly, as the average index of the defected core is still higher than that of the cladding, the guiding mechanism is likewise governed by modified TIR. The structures shown in Fig. 10 were subsequently modeled via the full-vector beam propagation method [58], and transparent boundary conditions were used to enable


Fig. 10 SEM micrograph of two PCF cross-sections. Structure A, solid-core PCF; Structure B, hollow-core PCF. Corresponding modal field distributions (Top: evanescent field; Bottom: core mode) are shown on the right (Reproduced from Refs. [54] and [56])

the analysis of leaky modes. At every point of internal reflection at the silica-hole interface, a small portion of the field penetrates and decays exponentially. By inserting a solution into the fiber holes, the core and cladding refractive indices increase with the solution's refractive index. The fiber thus experiences higher leakage because the difference between the average indices of the core and cladding decreases. The corresponding core mode and the evanescent field, which penetrates into the holey regions with infiltrations, are also shown in Fig. 10. Taking structure A as an example, most of the guided light's power was calculated to be confined within the solid region of the core, with only a fraction extending into the holey region (with a refractive index of 1.33 at 510 nm). Moreover, the evanescent field is further enhanced as the refractive index of the infiltrated solution increases. As shown in the inset of Fig. 11a, upon increasing the index of the infiltrated material from n = 1 (air) to n = 1.4, the ratio of evanescent field intensity in the holey region to the total confinement power increases from 0.3 % to 1.7 %. Similar theoretical results were obtained for structure B. As follows from the Beer-Lambert law, the absorbance in PCF is wavelength dependent because the fraction of light interacting with the aqueous solution, Phole, is a strong function of wavelength. Moreover, the cladding mode itself is highly wavelength dependent [59] – as the wavelength increases, there is higher leakage into the air holes and the evanescent field increases accordingly. Therefore, the fiber is expected to have better performance in absorbance detection at longer


Fig. 11 Calculated percentage of evanescent field intensity in the holey region relative to the total confinement power (a) at different wavelengths (inset: with various indices of the infiltrations) and (b) with the change of air-filling fraction, i.e., different air-hole diameters d and lattice periods Λ (Reproduced from Ref. [56])

wavelengths. Furthermore, as the microstructure array is mainly characterized by two parameters, namely, the air-hole diameter d and the pitch Λ, more leakage occurs upon an increase in d, and thus the interaction with the infiltrated liquid can be enhanced. This is also verified by calculation, where the evanescent field increases with d/Λ, as shown in Fig. 11b.
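Before turning to the experiments, the absorbance relation A = εbcPhole + α introduced above can be sketched numerically. All numbers below (molar absorption coefficient, fiber length, concentration, power fraction) are hypothetical illustrations, not values from the measurements:

```python
def pcf_absorbance(eps, b_cm, c_mol, p_hole, alpha=0.0):
    """Beer-Lambert absorbance in a liquid-filled PCF:
    A = eps * b * c * P_hole + alpha, where only the fraction P_hole of
    the guided power overlaps the infiltrated analyte."""
    return eps * b_cm * c_mol * p_hole + alpha

def transmitted_intensity(i0, absorbance):
    """Transmitted intensity from absorbance, A = log10(I0 / It)."""
    return i0 / (10.0 ** absorbance)

# Hypothetical case: eps = 5 Mol^-1 cm^-1, 30 cm fiber, 0.5 Mol analyte,
# 1.7 % of the guided power in the holey region, negligible excess loss.
A = pcf_absorbance(eps=5.0, b_cm=30.0, c_mol=0.5, p_hole=0.017)
It = transmitted_intensity(100.0, A)
```

The Phole factor is what distinguishes the PCF from a plain cuvette: at a fixed path length, doubling the evanescent overlap doubles the measured absorbance.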

Experimental Results and Analysis The setup for axial absorbance measurement is shown in Fig. 12. The PCF was supported and aligned using two 3-axis translation stages with V-grooved fiber holders. Light from a broadband halogen light source (Ocean Optics, Dunedin, Florida, USA) with a customized fiber FC connector pigtail was coupled to the end face of the PCF, with its alignment adjusted under a CCD camera. The output port, on the other hand, comprised a fiber core coupled into a USB2000 miniature spectrometer (Ocean Optics, Dunedin, Florida, USA) by means of a 20× objective lens and a collimator. The spectrometer employs the "OOIBase32" software for measurement of the transmission spectrum with a resolution of 0.2 nm. Here, cobalt(II) chloride (CoCl2·6H2O) (Sigma Aldrich, Missouri, USA) solution was used as the absorbing material. CoCl2 is a crystalline solid with pale


Fig. 12 Experimental setup for absorbance measurement. The fiber was supported and aligned by a pair of 3-axis translation stages with V-grooved fiber holders. Light was coupled into the fiber core by free-space butt-coupling with the aid of a CCD camera. The end face of the PCF was coupled to an Ocean Optics USB2000 miniature spectrometer using a focusing lens and a collimator (Reproduced from Ref. [57])

rose color when hydrated; it absorbs visible wavelengths ranging from 450 to 580 nm with maximum absorbance at 510 nm. Through progressive dilution with deionized water, aqueous samples with CoCl2 concentrations between 10 and 500 mM were prepared. A 30 cm length of PCF was subsequently infiltrated with CoCl2 solution by the capillary effect. A new length of fiber was used for each infiltration of a different CoCl2 concentration, with complete infiltration verified under the microscope. A background transmission spectrum, with an infiltration of deionized water, was also recorded as a reference. Taking structure B as an example, the corresponding absorbance spectra shown in Fig. 13a are the transmission curves of CoCl2 solution with the background subtracted. As the CoCl2 concentration increased, the absorbance intensity increased correspondingly. From the absorption spectra, an absorption window of CoCl2 in PCFs was observed between 450 and 580 nm, indicating that the PCF could serve as a sample container for absorption spectrometry. It was also noted that the maximum absorbance of CoCl2 in the PCF occurred at approximately 530 nm, about 20 nm away from the absorption maximum of 510 nm in a conventional spectrophotometer [56]. This was due to the inherent attenuation property of the PCF. Calibration of CoCl2 was done at 510 nm with concentrations varying in the range of 10–500 mMol. A fitting curve was generated to determine the sensitivity for absorption detection, presented in Fig. 13b. A sensitivity of approximately 0.4 Mol⁻¹ and excellent linearity with R² = 0.9681 were obtained; this also provides a calibration curve for determining an unknown sample concentration. A linear fit to the experimental data from structure B gives an absorbance sensitivity of 1.6 Mol⁻¹ and a coefficient of determination R² of 0.996, which shows the excellent linearity of the absorbance response, as shown in Fig. 14.
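The calibration procedure described above amounts to an ordinary least-squares fit of absorbance against concentration, with the slope giving the sensitivity. A minimal sketch with hypothetical data points chosen to mimic a structure-B-like response (they are not the measured values):

```python
def linear_fit(xs, ys):
    """Ordinary least squares y = m*x + b with coefficient of determination R^2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    m = sxy / sxx
    b = my - m * mx
    ss_res = sum((y - (m * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return m, b, 1.0 - ss_res / ss_tot

# Hypothetical calibration points: concentration (Mol) vs absorbance (a.u.)
conc = [0.01, 0.02, 0.05, 0.1, 0.2, 0.5]
absb = [0.05, 0.06, 0.11, 0.19, 0.35, 0.83]
m, b, r2 = linear_fit(conc, absb)  # m is the sensitivity in Mol^-1, here ~1.6
```

The intercept b absorbs the wavelength-independent fiber attenuation, so only the slope is used as the sensitivity figure.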
The sensitivity of structure B is about four times larger than that of structure A due to the central air hole, which enhances the evanescent field. The deviation between the linear fitting curve and the experimental data is attributed to the alignment resolution and possible scattering at the fiber end faces. Repetition of the absorbance measurements could also potentially reduce the uncertainty in the sensitivity. Alternatively, a perpendicular measurement was carried out on the same structure B samples by launching the light vertically onto the fiber cladding. The transmitted light was collected by the miniature spectrometer placed just beneath the fiber, with the polyimide coating removed to create a detection window. The measured sensitivity was 0.1175 Mol⁻¹, about 14 times lower than that from the longitudinal measurement. Furthermore, the R² value was only 0.878. Such a deviation is relatively significant, especially for the low-concentration samples, and might be due to light deflection at the multiple layers of the hole-silica interface. To better illustrate the effect of measurement direction, perpendicular detection using a single capillary tube was also conducted. The capillary tube can be viewed as a single microchannel, with the same cross-sectional area as the central hole of the PCF, filled with CoCl2. The resultant data, plotted in the same figure,



Fig. 13 (a) Absorbance spectra of CoCl2 solution with concentrations from 10 to 500 mMol; 30 cm PCF samples were used for each infiltration, and absorption at wavelengths ranging from 400 to 800 nm was observed. Experiments were repeated three times for each concentration. (b) Calibration curve for axial absorbance detection in the range of 10–500 mMol (Reproduced from Ref. [54])

exhibited a sensitivity of 0.0248 Mol⁻¹; the longitudinal detection sensitivity is thus about 64 times higher than that of this perpendicular capillary measurement, as shown in Fig. 14. Moreover, the measurable absorbance range in the perpendicular direction is at least one order of magnitude lower, which constrains the sensing resolution. The effect of path length on absorption was also investigated by reducing the PCF length from 45 to 33 cm, with the fiber filled with 0.5 Mol CoCl2 solution. The results, illustrated in Fig. 15, show the dependence of absorbance on PCF length: the absorbance at 510 nm increased from 0.06 to 0.27 when the fiber length increased from 33 to 45 cm. This calibration plot also demonstrated an excellent linear change of absorbance, with R² of 0.9868, in agreement with the Beer-Lambert law, which states that absorbance is linearly dependent on path length. Sensitivity could thus be improved



Fig. 14 Measurement and linear fitting of absorbance data at 510 nm for samples with different concentrations using longitudinal and perpendicular detection techniques, respectively (Reproduced from Ref. [54])


Fig. 15 Calibration plot for PCF with fiber lengths varying from 34 to 45 cm. The fiber was filled with 500 mMol CoCl2 solution (Reproduced from Ref. [56])

by increasing the length of PCF to detect even lower concentrations of absorbing analytes; however, this was not demonstrated in this work due to the limited fiber samples. PCFs could be widely used either as stand-alone flow cells or integrated into microfluidic chips to detect absorbing species such as ions, alkaloids, and biomolecules. Due to their robustness and flexibility, PCFs can easily be coiled up to minimize their footprint, rendering them suitable for microchip absorbance detection. Bending (or coiling) and temperature are the typical external factors that may influence the guiding properties and the interaction strength. By bending the fiber with diameters varying from 80 to 20 mm, the normalized absorbance change increased from approximately 0.3 % to 8 %. Relating the bending diameter to absorbance reveals a sensitivity on the order of 10⁻³ mm⁻¹, as shown in Fig. 16a. As the fiber is bent, the cladding modes tend to radiate



Fig. 16 Calibration plots for PCF with (a) bending diameters varying from 20 to 80 mm and (b) temperatures ranging from 30 °C to 70 °C. The fiber was filled with 500 mMol CoCl2 solution, with a length of 30 cm

outward when the phase-matching condition occurs. However, the guided-mode profile in an optical fiber waveguide is relatively stable, which results in the stable bending performance. The effect of temperature was also studied by placing the fiber in a water bath heated between 30 °C and 70 °C on a hot plate. The results exhibit a thermal stability of 10⁻⁴ °C⁻¹, as shown in Fig. 16b. This is likely attributable to the homogeneity of the PCF material: PCFs are typically made of only a single material (fused silica), so a lower and more uniform thermal expansion coefficient can be expected. Thermal stability has likewise been observed in many PCF-based devices such as gratings [60], interferometers [61], and lasers [62]. The pure silica material in PCF is also chemically and biologically inert compared with polymer counterparts [63] and prevents the evaporation of water, which makes PCF a suitable candidate for chemical sensing.
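Taking the reported orders of magnitude at face value (bending sensitivity of roughly 10⁻³ mm⁻¹ and thermal response of roughly 10⁻⁴ °C⁻¹), a back-of-envelope bound on the absorbance drift over the tested ranges can be sketched; the two slope values below are assumptions pinned only to those orders of magnitude, not the fitted values:

```python
def worst_case_drift(sensitivity_per_unit, span):
    """Bound on the absorbance change of a linear response over a span."""
    return abs(sensitivity_per_unit) * span

# Assumed slopes matching the reported orders of magnitude:
bend_drift = worst_case_drift(1.0e-3, 80 - 20)   # bending diameter span in mm
temp_drift = worst_case_drift(1.0e-4, 70 - 30)   # temperature span in deg C
```

Under these assumptions the bending-induced drift exceeds the thermal one by more than an order of magnitude, consistent with the qualitative conclusion that thermal stability is the stronger suit of the PCF cell.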

Conclusion

An evanescent field absorption sensor (EFAS) using a short length of PCF has been demonstrated. The results show that the enhancement of sensitivity is attributed to the possibility of achieving a long interaction length while remaining compact and requiring only submicroliter sample volumes. The evanescent field in the liquid-infiltrated fiber was theoretically analyzed in a systematic manner. By infiltrating the microstructured waveguides with CoCl2 solutions, sensitive absorption detection from the transmission spectrum was obtained. Excellent linearity was observed between absorbance and liquid concentration as well as with PCF length. The comparison of measurement results from the two different structures reveals that the central air hole can effectively enhance the evanescent field and hence significantly improve the absorption sensitivity. Here, an absorption sensitivity of up to 1.6 Mol⁻¹ has been achieved. In addition, the sensitivity of the longitudinal detection method was observed to be at least 60 times higher than that of the perpendicular measurement technique. The temperature- and bending-related experiments further revealed good thermal and bending stability, highlighting the PCF's potential as a candidate for quantitative analysis of analytes even in harsh environments.


References
1. Russell PS (2001) A neat idea. IEE Rev 57:19–23
2. Russell PS (2003) Photonic crystal fibers. Science 299:358–362
3. Knight JC (2003) Photonic crystal fibers. Nature 424:847–851
4. Cerqueira A (2010) Recent progress and novel applications of photonic crystal fibers. Rep Prog Phys 73:024401–024421
5. Knight JC, Birks TA, Russell PS, Atkin DM (1996) All-silica single-mode optical fiber with photonic crystal cladding. Opt Lett 21:1547–1549
6. Birks TA, Knight JC, Russell PS (1997) Endlessly single-mode photonic crystal fiber. Opt Lett 22:961–963
7. Kurokawa K, Nakajima K, Tsujikawa K, Yamamoto T, Tajima K (2009) Ultra-wideband transmission over low loss PCF. J Lightw Technol 27:1653–1662
8. Dai J, Harrington JA (1997) High-peak-power, pulsed CO2 laser delivery by hollow glass waveguides. Appl Optics 36:5072–5077
9. Cregan RF, Mangan BJ, Knight JC, Birks TA, Russell PS, Roberts PJ, Allan DC (1999) Single-mode photonic band gap guidance of light in air. Science 285:1537–1539
10. Roberts PJ, Couny F, Sabert H, Mangan BJ, Williams DP, Farr L, Mason MW, Tomlinson A, Birks TA, Knight JC, Russell PS (2005) Ultimate low loss of hollow-core photonic crystal fibers. Opt Express 13:236–244
11. Allan DC, West JA, Fajardo JC, Gallagher MT, Koch KW, Borrelli NF (2001) Photonic crystal fibers: effective index and band gap guidance. In: Photonic crystals and light localization in the 21st century, vol 563. Springer Netherlands, Dordrecht, pp 305–320
12. Kumar VVRK, George AK, Reeves WH, Knight JC, Russell PS (2002) Extruded soft glass photonic crystal fiber for ultrabroad supercontinuum generation. Opt Express 10:1520–1525
13. Monro TM, Kiang KM, Lee JH, Frampton K, Yusoff Z, Moore R, Tucknott J, Hewak DW, Rutt HN, Richardson DJ (2002) High nonlinearity extruded single-mode holey optical fibers. Paper presented at the optical fiber communication conference, Anaheim, 17 Mar 2002
14. Feng X, Mairaj AK, Hewak DW, Monroe TM (2004) Towards high-index glass based monomode holey fiber with large mode area. Electron Lett 40:167–169
15. Mori A, Shikano K, Enbutsu K, Oikawa KM, Kato KN, Aozasa S (2004) 1.5 μm band zero-dispersion shifted tellurite photonic crystal fibre with a nonlinear coefficient of 657 W⁻¹ km⁻¹. Paper presented at the 30th European conference on optical communication, Stockholm, 5–9 Sept 2004
16. Chen C, Laronche A, Bouwmans G, Bigot L, Quiquempois Y, Albert J (2008) Sensitivity of photonic crystal fiber modes to temperature, strain and external refractive index. Opt Express 16:9645–9653
17. Benabid F, Couny F, Knight JC, Birks TA, Russell PS (2005) Compact, stable and efficient all-fibre gas cells using hollow-core photonic crystal fibres. Nature 434:488–491
18. Woliński TR, Ertman S, Lesiak P, Domański AW, Czapla A, Dąbrowski R, Nowinowski-Kruszelnicki E, Wójcik J (2006) Photonic liquid crystal fibers – a new challenge for fiber optics and liquid crystals photonics. Opt Electron Rev 14:329–334
19. Hassani A, Skorobogatiy M (2006) Design of the microstructured optical fiber-based surface plasmon resonance sensors with enhanced microfluidics. Opt Express 14:11616–11621
20. Yu X, Zhang Y, Pan SS, Shum P, Yan M, Leviatan Y, Li CM (2010) A selectively coated photonic crystal fiber based surface plasmon resonance sensor. J Opt 12:015055


21. Yan H, Gu C, Yang CX, Liu J, Jin GF, Zhang JT, Hou LT, Yao Y (2006) Hollow core photonic crystal fiber surface-enhanced Raman probe. Appl Phys Lett 89:204101
22. Yan H, Liu J, Yang CX, Jin GF, Gu C, Hou LT (2008) Novel index-guided photonic crystal fiber surface-enhanced Raman scattering probe. Opt Express 16:8300–8305
23. Wadsworth W, Witkowska A, Leon-Saval S, Birks TA (2005) Hole inflation and tapering of stock photonic crystal fibres. Opt Express 13:6541–6549
24. François A, Ebendorff-Heidepriem A, Monro TM (2009) Comparison of surface functionalization processes for optical fibre biosensing applications. Paper presented at the 20th international conference on optical fibre sensors, Edinburgh, 7–8 Oct 2009
25. Rindorf L, Høiby PE, Jensen JB, Pedersen LH, Bang O, Geschke O (2006) Towards biochips using microstructured optical fiber sensors. Anal Bioanal Chem 385:1370–1375
26. Padmanabhan S, Shinoj VK, Murukeshan VM, Padmanabhan P (2010) Highly sensitive optical detection of specific protein in breast cancer cells using microstructured fiber in extremely low sample volume. J Biomed Opt 15:017005
27. Rindorf L, Jensen JB, Dufva M, Pedersen LH, Høiby PE (2006) Photonic crystal fiber long-period gratings for biochemical sensing. Opt Express 14:8224–8231
28. Passaro D, Foroni M, Poli F, Cucinotta A, Selleri S, Lægsgaard J, Bjarklev AO (2008) All-silica hollow-core microstructured Bragg fibers for biosensor application. IEEE Sens J 8:1280–1286
29. Yang XH, Wang LL (2007) Fluorescence pH probe based on microstructured polymer optical fiber. Opt Express 15:16478–16483
30. Sharma AK, Jha R, Gupta BD (2007) Fiber-optic sensors based on surface plasmon resonance: a comprehensive review. IEEE Sens J 7:1118–1129
31. Xie Z, Lu Y, Wei H, Yan J, Wang P, Ming H (2009) Broad spectral photonic crystal fiber surface enhanced Raman scattering probe. Appl Phys B 95:751–755
32. Peacock AC, Amezcua-Correa A, Yang J, Sazio PJA, Howdle SM (2008) Highly surface enhanced Raman scattering using microstructured optical fibers with enhanced plasmonic interactions. Appl Phys Lett 92:141113
33. Oo MKK, Han Y, Martini R, Sukhishvili S, Du H (2009) Forward-propagating surface-enhanced Raman scattering and intensity distribution in photonic crystal fiber with immobilized Ag nanoparticles. Opt Lett 34:968–970
34. Oo MKK, Han Y, Kanka J, Sukhishvili S, Du H (2010) Structure fits the purpose: photonic crystal fibers for evanescent-field surface-enhanced Raman spectroscopy. Opt Lett 35:968–970
35. Cox FM, Argyros A, Large MCJ, Kalluri S (2007) Surface enhanced Raman scattering in a hollow core microstructured optical fiber. Opt Express 15:13675–13681
36. Zhang Y, Shi C, Gu C, Seballos L, Zhang J (2007) Liquid core photonic crystal fiber sensor based on surface enhanced Raman scattering. Appl Phys Lett 90:193504
37. Yang X, Shi C, Newhouse R, Zhang J, Gu C (2011) Hollow-core photonic crystal fibers for surface-enhanced Raman scattering probes. Int J Opt 754610:1–11
38. Han Y, Tan S, Oo MKK, Pristinski D, Sukhishvili S, Du H (2010) Towards full-length accumulative surface-enhanced Raman scattering-active photonic crystal fibers. Adv Mater 22:2647–2651
39. Schroder K, Csaki A, Schwuchow A, Jahn F, Strelau K, Latka I, Henkel T, Malsch D, Schuster K, Weber K, Schneider T, Moller R, Fritzsche W (2012) Functionalization of microstructured optical fibers by internal nanoparticle mono-layers for plasmonic biosensor applications. IEEE Sens J 12:218–224
40. Addison CJ, Brolo AG (2006) Nanoparticle-containing structures as a substrate for surface-enhanced Raman scattering. Langmuir 22:8696–8702


41. Andrade GFS, Fan M, Brolo AG (2010) Multilayer silver nanoparticles-modified optical fiber tip for high performance SERS remote sensing. Biosens Bioelectron 25:2270–2275
42. Kumar S, Aaron J, Sokolov K (2008) Directional conjugation of antibodies to nanoparticles for synthesis of multiplexed optical contrast agents with both delivery and targeting moieties. Nat Protoc 3:314–320
43. Zhang J, Li X, Sun X, Li Y (2005) Surface enhanced Raman scattering effects of silver colloids with different shapes. J Phys Chem B 109:12544–12548
44. Fan M, Andrade GFS, Brolo AG (2011) A review on the fabrication of substrates for surface enhanced Raman spectroscopy and their applications in analytical chemistry. Anal Chim Acta 693:7–25
45. Warren-Smith SC, Afshar S, Monro TM (2008) Theoretical study of liquid-immersed exposed-core microstructured optical fibers for sensing. Opt Express 16:9034–9045
46. Argyros A, Pla J (2007) Hollow-core polymer fibers with a Kagome lattice: potential for transmission in the infrared. Opt Express 15:7713–7719
47. Zhang YT, Yong D, Yu X, Xia L, Liu DM, Zhang Y (2013) Amplification of surface-enhanced Raman scattering in photonic crystal fiber using offset launch method. Plasmonics 8:209–215
48. Jensen JB, Pedersen LH, Høiby PE, Nielsen LB, Hansen TP, Folkenberg JR, Riishede J, Noordegraaf D, Nielsen K, Carlsen A, Bjarklev A (2004) Photonic crystal fiber based evanescent-wave sensor for detection of biomolecules in aqueous solutions. Opt Lett 29:1974–1976
49. Monro TM, Belardi W, Furusawa K, Baggett JC, Broderick NGR, Richardson DJ (2001) Sensing with microstructured optical fibres. Meas Sci Technol 12:854–858
50. Fini JM (2004) Microstructure fibres for optical sensing in gases and liquids. Meas Sci Technol 15:1120–1128
51. Tao SQ, Winstead CB, Xian H, Soni K (2002) A highly sensitive hexachromium monitor using water core optical fiber with UV LED. J Environ Monit 4:815–818
52. Smolka S, Barth M, Benson O (2007) Highly efficient fluorescence sensing with hollow core photonic crystal fibers. Opt Express 15:12783–12791
53. Martelli C, Canning J, Stocks D, Crossley MJ (2006) Water-soluble porphyrin detection in a pure-silica photonic crystal fiber. Opt Lett 31:2100–2102
54. Yu X, Sun Y, Ren GB, Shum P, Ngo NQ, Kwok YC (2008) Evanescent field absorption sensor using a pure-silica defected-core photonic crystal fiber. IEEE Photon Technol Lett 20:336–338
55. Sun Y, Yu X, Nguyen N-T, Shum P, Kwok YC (2008) Long path-length axial absorption detection in photonic crystal fiber. J Anal Chem 80:4220–4224
56. Yu X, Kwok YC, Khairudin NA, Shum P (2009) Absorption detection of Cobalt(II) ions in an index-guiding microstructured optical fiber. Sens Actuators B 137:462–466
57. Yu X, Zhang Y, Kwok YC, Shum P (2010) Highly-sensitive photonic crystal fiber based absorption spectroscopy. Sens Actuators B 145:110–113
58. Feit MD, Fleck JA (1980) Computation of mode properties in optical fiber waveguides by a propagating beam method. Appl Optics 19:1154–1164
59. Knight JC, Birks TA, Russel PJ (1996) All-silica single-mode optical fiber with photonic crystal cladding. Opt Lett 21:1547–1549
60. Zhu Y, Shum P, Bay H, Yan M, Yu X, Hu J, Hao J, Lu C (2005) Strain-insensitive and high-temperature long-period gratings inscribed in photonic crystal fiber. Opt Lett 30:367–369
61. Zhao CL, Yang XF, Lu C, Jin W, Demokan MS (2004) Temperature-insensitive interferometer using a highly birefringent photonic crystal fiber loop mirror. IEEE Photon Technol Lett 16:2535–2537


62. Yu X, Liu D, Dong H, Fu S, Dong X, Tang M, Shum P, Ngo NQ (2006) Temperature stability improvement of a multi-wavelength Sagnac loop fiber laser by using HiBi-MOF as birefringent component. Opt Eng 45:044201–044204
63. Cordeiro CMB, Franco MAR, Chesini G, Barretto ECS, Lwin R, Brito Cruz CH, Large MC (2006) Microstructured-core optical fibre for evanescent sensing applications. Opt Express 14:13056–13063


Nonlinear Multimodal Optical Imaging

Yan Zeng, Qiqi Sun, and Jianan Y. Qu

Contents
Introduction
Historical Overview of Optical Microscopy
Nonlinear Optical Processes
The Unique Advantages of Nonlinear Optical Microscopy and the Motivation to Develop Nonlinear Multimodal Optical Microscopy
The Unique Advantages of Nonlinear Optical Microscopy
Motivation of Multimodal and Label-Free Imaging
Nonlinear Multimodal Microscopy System
The Challenges of the Nonlinear Multimodal Microscopy System
Instrumentation
Multicolor Excitation Two-Photon Fluorescence and Harmonic Generation Microscopy
Integrated Multimodal Coherent Anti-Stokes Raman Scattering, Two-Photon Fluorescence, and Harmonic Generation Microscopy
Summary
References

Abstract

The constant evolution of optical microscopy over the past century has been driven by the desire to improve spatial resolution and image contrast, with the goal of better characterizing ever smaller biological specimens. Innovation in optical microscopy technology has proven to be a driving force in the development of biology and medicine. In particular, advanced nonlinear optical microscopes have unique advantages over traditional microscopy approaches: intrinsic three-dimensional (3D) imaging with …

… >1.0 ns when it binds to protein. In contrast, FAD has a short (…

… >10 μm/s in both transverse directions, the bacterium remained trapped. During the experiment, the trapped bacterium remained alive, as it was observed to continuously wiggle inside the trap. The lowest optical power needed to ensure a stable 3D trap was ~0.8 mW. When the power was decreased further, the 3D trap became unstable, but we found that the bacteria could still be trapped in two dimensions on the cover glass. At a power of 0.9 mW,

Handbook of Photonics for Biomedical Engineering DOI 10.1007/978-94-007-6174-2_22-2 © Springer Science+Business Media Dordrecht 2014

Fig. 17 Fluorescence images of a 520-nm fluorescent polystyrene bead trapped in 3D (labeled "T") by the fiber-based SP lens (Media 1) [26]. A reference bead (labeled "R") is attached to the cover glass. The next movement is labeled at the bottom of each image. With the bead trapped, the cover glass was moved along +y (a, b), −x, and −y (b, c). The fiber, and with it the trap, was then lifted up along +z (c, d), followed by cover glass movements along +y (d, e) and −y (e, f). The bead remained trapped during these movements and could be seen when the focal plane was lifted up to the trap level (f, g). (h) The schematic of the experimental setup. The optical power at the fiber end face was 1.5 mW

Fig. 18 (Color online) Images of 3D trapping of a bacterium with the fiber SP lens (Media 2) [26]. The white arrows indicate the bacterium, and the black arrows point to a reference silica bead. (a, b) A free bacterium was trapped. (c–e) The bacterium was lifted up in the vertical direction while the focal plane was on the cover glass. (e, f) The focal plane was brought to the plane where the bacterium was located, while the reference bead became out of focus. (f–h) The water was moved along +y and +x while the bacterium remained trapped. The optical power at the fiber end face was 0.9 mW

stable 3D trapping of bacteria for several hours was achieved. Long-term, stable trapping of biological specimens without physical contact or photodamage is especially useful for investigation techniques that require a long acquisition time, such as Raman spectroscopy [46]. It should be noted that trapping live bacteria is more challenging than trapping polystyrene or silica beads, because bacteria have a low refractive index (~1.38 over the visible wavelength band, compared to 1.58–1.60 for polystyrene beads and ~1.45 for silica beads), small dimensions, and fast motility. Moreover, live bacteria can propel themselves; trapping a live bacterium therefore requires much higher optical power than trapping a dead one [47]. To enable a stable 3D trap for bacteria, the lowest optical power used with conventional optical tweezers is a couple of mW (3–6 mW in Ref. [47] and 6 mW in Ref. [48]). The much smaller optical power used in this work suggests that the SP lens-based fiber tweezers have better trapping efficiency than the reported conventional optical tweezers. It should also be noted that we did not observe the obvious convection or thermophoresis reported in [41] when the polystyrene beads or bacteria were trapped more than 20 μm above the cover glass.
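The refractive-index argument above can be made quantitative in the Rayleigh (dipole) approximation, where the optical gradient force scales with the Clausius–Mossotti factor (m² − 1)/(m² + 2), m being the ratio of particle to medium refractive index. The sketch below uses the indices quoted in the text and assumes water (n = 1.33) as the medium; it is an order-of-magnitude illustration for particles small compared with the wavelength, not the authors' analysis.

```python
# Rayleigh-regime estimate of relative trapping strength: the gradient
# force scales with the Clausius-Mossotti factor (m^2 - 1)/(m^2 + 2),
# where m = n_particle / n_medium.  Particle indices are those quoted
# in the text; n_water = 1.33 is an assumed value for the medium.

def clausius_mossotti(n_particle, n_medium=1.33):
    m = n_particle / n_medium
    return (m**2 - 1.0) / (m**2 + 2.0)

for name, n in [("bacterium", 1.38), ("silica bead", 1.45),
                ("polystyrene bead", 1.59)]:
    print(f"{name:17s} n = {n:.2f}  CM factor = {clausius_mossotti(n):.3f}")
```

The factor for a bacterium comes out roughly five times smaller than for a polystyrene bead, consistent with the much higher power needed to hold live cells (bacteria are at the edge of the Rayleigh regime, so this is only indicative).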


This implies that the heating effect due to SP lens absorption does not significantly affect the trap, which can be explained by the fact that the trap was located away from the fiber end face where the heat was generated. However, when the fiber lens was brought much closer to the cover glass (…

… >> 1), the phase fluctuation becomes Δφ = (n/s)·sin(Δθ). The time average of the squared phase fluctuation is then

⟨Δφ²⟩ = ⟨n²/s²⟩·⟨sin²(Δθ)⟩ = 1/(2·SNR)    (13)

where SNR = ⟨s²/n²⟩. Note that the temporal phase noise of SD-OCPM is inversely proportional to the SNR; higher phase sensitivity can therefore be achieved under high-SNR conditions [12,

Fig. 6 SD-OCPM raster-scans the probe beam across the specimen to acquire an image of the specimen

Handbook of Photonics for Biomedical Engineering DOI 10.1007/978-94-007-6174-2_25-1 © Springer Science+Business Media Dordrecht 2014

Fig. 7 Beam distributions on sample for different normalized displacements (Reprinted with permission from [15] © MIT 2008)

19]. Experimentally, an optical path-length sensitivity of ~53 pm has been achieved at an SNR of 63.4 dB, whereas the theoretical sensitivity was expected to be ~30 pm. External disturbances such as mechanical vibration during the measurement may account for the difference between the theoretical and measured phase sensitivity [10].

Spatial Phase Stability
The spatial phase noise of SD-OCPM mainly comes from raster scanning of the probe beam. A step-and-acquire strategy may reduce such noise, but it compromises the image acquisition speed. To understand the effect of raster scanning on phase stability, consider a Gaussian beam scanned along the direction x by an amount Δx over a random scattering medium during the integration time of the spectrometer (Fig. 6). The beam distribution on the sample, G(x, y), can then be regarded as the convolution of the Gaussian beam function and a rect function:

G(x, y) = A(Δx) · g(x, y) ⊗ rect(x/Δx)    (14)

where g(x, y) = exp[−4 ln 2 · (x² + y²)/d₀²], with d₀ the full-width at half-maximum (FWHM) of the intensity distribution. A(Δx) is a normalization factor introduced such that

S = ∫∫ G(x, y) dx dy = ∫∫ g(x, y) dx dy    (15)

where the integrals run over the entire x–y plane. The intensity distribution g(x, y), rather than the field distribution of the probe beam, is used in order to take into account the mode function imposed by fiber coupling: the amplitude of the backscattered light received by the fiber is determined by an overlap integral between the scattered field and the mode function, resulting in a dependence on the intensity profile. According to Yun et al. [20], the illumination area is enlarged by the scanning during the integration time. Figure 7 presents the effective intensity distribution of the beam for four different values of the normalized displacement, Δx/d₀. Beam enlargement degrades not only the spatial resolution of an image but also the SNR, since the signal acquired at a given position during beam scanning is collected for only a fraction of the integration time.
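The convolution in Eq. 14 is easy to evaluate numerically to see how much the effective illumination spot grows with the scan distance. The sketch below is a minimal model (lengths in units of d₀, arbitrary grid resolution), not the computation behind Fig. 7:

```python
import numpy as np

# Numerical sketch of Eq. 14: the effective illumination profile during
# lateral scanning is the Gaussian intensity profile g(x) convolved with
# a rect of width Dx (the scan distance per integration time).
d0 = 1.0
x = np.linspace(-10.0, 10.0, 4001)
step = x[1] - x[0]
g = np.exp(-4.0 * np.log(2.0) * x**2 / d0**2)   # Gaussian, FWHM = d0

def effective_profile(scan_dx):
    if scan_dx == 0:
        return g / g.max()
    rect = (np.abs(x) <= scan_dx / 2.0).astype(float)   # rect(x/Dx)
    eff = np.convolve(g, rect, mode="same") * step
    return eff / eff.max()

def fwhm(profile):
    above = x[profile >= 0.5]
    return above[-1] - above[0]

for scan in (0.0, 1.0, 2.0, 4.0):               # the Dx/d0 values of Fig. 7
    print(f"Dx/d0 = {scan:.0f}: effective FWHM = "
          f"{fwhm(effective_profile(scan)):.2f} d0")
```

As in Fig. 7, the profile is essentially unchanged for Δx/d₀ ≲ 1 and approaches a flat-top of width Δx for large displacements.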


Fig. 8 SNR reduction caused by the lateral scanning. The SNR was calculated based on the average of 100 A-line profiles on titanium oxide mixed amorphous glass (Reprinted with permission from [15] © MIT 2008)

Figure 8 shows the SNR reduction due to lateral beam scanning, obtained from both theory and experiment. The SNR decreases as the beam enlargement increases.
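Returning to the temporal phase noise of Eq. 13, the 1/(2·SNR) scaling can be verified with a short Monte Carlo sketch. The final lines also reproduce the ~30 pm theoretical path-length sensitivity quoted above for the 63.4 dB experimental SNR, assuming a double-pass geometry and an 800 nm center wavelength (the wavelength is an assumption here, borrowed from the simulation parameters of Fig. 9):

```python
import numpy as np

# Monte Carlo check of Eq. 13: for a complex signal of amplitude s with
# additive complex Gaussian noise, the phase variance tends to 1/(2*SNR),
# where SNR = <s^2/n^2> and <n^2> is the total noise power.
rng = np.random.default_rng(0)

def phase_variance(snr_db, n_trials=200_000):
    snr = 10.0 ** (snr_db / 10.0)
    s = 1.0                                  # signal amplitude
    sigma = s / np.sqrt(2.0 * snr)           # per-quadrature noise std
    noise = sigma * (rng.standard_normal(n_trials)
                     + 1j * rng.standard_normal(n_trials))
    return np.var(np.angle(s + noise))

for snr_db in (20, 30, 40):
    snr = 10.0 ** (snr_db / 10.0)
    print(f"SNR {snr_db} dB: measured {phase_variance(snr_db):.2e}, "
          f"theory {1.0 / (2.0 * snr):.2e}")

# Path-length sensitivity at the experimental SNR of 63.4 dB, assuming
# double-pass detection and an 800 nm center wavelength (assumed value):
phi_rms = np.sqrt(1.0 / (2.0 * 10.0 ** 6.34))
path_sens = phi_rms * 800e-9 / (4.0 * np.pi)
print(f"predicted path-length sensitivity: {path_sens * 1e12:.0f} pm")
```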

Spatial Resolution in SD-OCPM
The lateral resolution of SD-OCPM can be estimated by a method similar to that used for confocal microscopes. The core of the single-mode fiber interferometer acts as the pinhole of a confocal microscope, and the lateral response of diffraction-limited SD-OCPM can therefore be calculated as [21]

I(ν) = [2·J₁(ν)/ν]⁴    (16)

where ν = k·r·NA and J₁(ν) is the Bessel function of the first kind. The lateral resolution of SD-OCPM can be characterized experimentally, for example, by imaging a USAF resolution target; the chromium steps in the target yield the step response of the system, from which the point spread function (PSF) can be calculated. Unlike the lateral resolution, the axial resolution of SD-OCPM is determined not only by the confocal gating but also by the coherence gating of the interferometer [22]. Since the coherence gating is independent of the confocal gating, their PSFs multiply, leading to an improvement in axial resolution. The confocal gating, I(Δz), is given by



I(Δz) = [sin(k·NA²·Δz/4) / (k·NA²·Δz/4)]⁴    (17)

Figure 9 shows the result of a numerical simulation evaluating the improvement in axial resolution due to coherence gating. The multiplication of the coherence and confocal gates clearly improves the axial resolution.
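The gating product can be reproduced in a few lines. The sketch below assumes the sinc⁴ confocal-gate model of Eq. 17 and a Gaussian coherence gate whose FWHM follows from the source bandwidth; NA = 0.5, λ₀ = 800 nm, and Δλ = 130 nm are the simulation parameters quoted in the Fig. 9 caption, while the Gaussian coherence-gate model is an assumption.

```python
import numpy as np

# Axial gating in SD-OCPM (cf. Fig. 9): the confocal gate of Eq. 17
# multiplies the coherence gate of the interferometer.
NA, lam0, dlam = 0.5, 800e-9, 130e-9
k = 2.0 * np.pi / lam0
z = np.linspace(-10e-6, 10e-6, 20001)

u = k * NA**2 * z / 4.0
confocal = np.sinc(u / np.pi) ** 4                  # [sin(u)/u]^4

l_c = (2.0 * np.log(2.0) / np.pi) * lam0**2 / dlam  # coherence length (FWHM)
coherence = np.exp(-4.0 * np.log(2.0) * z**2 / l_c**2)

effective = confocal * coherence                    # combined axial PSF

def fwhm(profile):
    zz = z[profile >= 0.5 * profile.max()]
    return zz[-1] - zz[0]

for name, p in (("confocal", confocal), ("coherence", coherence),
                ("combined", effective)):
    print(f"{name:9s} gate FWHM = {fwhm(p) * 1e6:.2f} um")
```

With these parameters the coherence gate is narrower than the confocal gate, and their product is narrower still, mirroring the behavior in Fig. 9.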


Fig. 9 Axial resolution calculated in numerical simulation. (a) Axial PSFs for confocal and coherence gating; (b) effective axial PSF. The simulation was performed for a system with a 0.5 NA objective lens and light centered at 800 nm with 130 nm FWHM (Reprinted with permission from [15] © MIT 2008)

SD-OCPM for Quantitative Biology
Owing to these unique features, SD-OCPM has been widely employed in diverse applications in molecular and cellular biology. This section presents representative examples in which SD-OCPM was utilized as a tool for studying biological processes.

Measurement of Localized Intracellular Dynamics
Intracellular dynamics such as molecular cargo transport and cytoskeletal rearrangement are fundamental processes that support a broad range of cell functions, including cell migration and mitosis. The ability to detect intracellular movement enables the study of a myriad of cell-signaling processes and functions. Measurement of intracellular structural changes has mainly been performed by tracking target structures or particles tagged with exogenous tracers [23]. These techniques allow the location of target molecules and transitions between intracellular dynamic regimes to be detected. However, the exogenous contrast agents may interfere with cellular activity, leading to inaccurate measurement of intrinsic intracellular processes [24]. Dynamic light scattering (DLS) is a label-free method for measuring the movement of scatterers in suspension [25]. DLS measures particle dynamics noninvasively with high sensitivity, and the study of diffusive properties inside biological specimens by combining DLS with microscopy has been demonstrated at high resolution [26]. Unlike conventional DLS, field-based DLS (F-DLS) utilizes both the amplitude and phase information of the scattered light, thereby extracting the mean-squared displacement (MSD) and time-averaged displacement (TAD) of scattering particles with much higher sensitivity [27]. The MSD is the time-averaged variance of the scatterer displacement, which gives access to the viscoelastic properties of a sample; the TAD provides a statistical means to measure the directional transport dynamics of scattering structures. A study demonstrating the ability to measure F-DLS with SD-OCPM has recently been reported [27], in which SD-OCPM enabled high-sensitivity measurement of intracellular dynamics. Figure 10 shows the experimental result of F-DLS performed with SD-OCPM [27].
The experimental setup was similar to the configuration introduced in the section on the SD-OCPM configuration. The intensity image of human ovarian cancer cells (OVCAR-5) at a specific depth could be clearly visualized by SD-OCPM, as illustrated in Fig. 10a. The amplitude and phase fluctuations were measured at that depth (Fig. 10b), and the F-DLS analysis was performed to obtain the MSD and


Fig. 10 Result of F-DLS analysis on OVCAR-5 cells, measured with SD-OCPM. (a) Scattering intensity image of OVCAR-5 cells at a depth of ~3.4 μm above the cell–glass interface. Highly scattering structures such as cytoskeletons and organelles may account for the image contrast. (b) Amplitude and phase fluctuations measured at the point indicated in (a). (c, d) Calculated MSD and TAD based on averaged data on live OVCAR-5 cells. The transition from random to directional dynamics can be observed both in the MSD (c) and TAD (inset in (d)). The mean exponents of random and directional movement in the MSD were measured to be ~0.45 and ~0.73, respectively. (e, f) MSD and TAD measurement on fixed OVCAR-5 cells. No directional movement was detected, while smaller random movement was observed compared to the live-cell measurements (Reprinted with permission from [27] © OSA 2010)

TAD (Fig. 10c, d). A transition from random to directional movements was observed at a timescale of ~0.01 s in both the MSD and the TAD. For fixed cells, no directional movement was detected (Fig. 10e, f). The measured intracellular motion is a consequence of intrinsic cellular activities, and therefore the introduction of chemicals to the cells may lead to different behaviors. The researchers performed F-DLS analysis on colchicine-treated and ATP-depleted OVCAR-5 cells; Fig. 11 shows the experimental results. The directional dynamics on the longer timescale (1–5 s) were found to be smaller than those of control cells, while the movements on short timescales showed no notable changes (Fig. 11b, c). Measurements over multiple experiments (Fig. 11d, e) also showed larger random movement for the treated cells. Most approaches for detecting intracellular dynamics involve tracers such as fluorescent probes and microbeads; this study demonstrated combined F-DLS and SD-OCPM as a label-free, quantitative measurement tool for investigating localized intracellular dynamics.
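The MSD-exponent analysis used above can be illustrated on synthetic trajectories: a purely diffusive track gives an exponent near 1, while added directional transport raises the exponent toward 2 at long lag times. The parameters below are arbitrary demo values, not those of the OVCAR-5 measurements (which showed subdiffusive exponents of ~0.45 and ~0.73):

```python
import numpy as np

# MSD-exponent illustration on synthetic 1-D trajectories (assumed demo
# parameters): Brownian motion gives MSD ~ tau^1; a constant drift
# raises the apparent exponent at long lag times.
rng = np.random.default_rng(1)
n, dt = 20_000, 1e-3                              # steps and time step [s]
diffusive = np.cumsum(rng.standard_normal(n) * np.sqrt(dt))
directed = diffusive + 5.0 * dt * np.arange(n)    # + drift of 5 units/s

def msd_exponent(traj, lags):
    msd = np.array([np.mean((traj[l:] - traj[:-l]) ** 2) for l in lags])
    slope, _ = np.polyfit(np.log(lags * dt), np.log(msd), 1)
    return slope

lags = np.unique(np.logspace(0, 3, 20).astype(int))
print("diffusive MSD exponent:", round(msd_exponent(diffusive, lags), 2))
print("directed  MSD exponent:", round(msd_exponent(directed, lags), 2))
```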



Fig. 11 F-DLS results for control, colchicine-treated, and ATP-depleted OVCAR-5 cells. (a) Averages of log-transformed MSDs. (b, c) The mean exponents on short and long timescales. (d) TAD profiles of each experiment. Directional movements are observed in control cells. (e) Velocity correlation coefficient as a function of time delay, averaged over all the measurements under each condition. The colchicine-treated and ATP-depleted cells showed more significant disruption of directional intracellular dynamics than untreated cells (Reprinted with permission from [27] © OSA 2010)

Molecular Biosensor
Molecular sensors capable of measuring molecular interactions are of great importance in biology, medicine, and environmental monitoring [28]. Generally, molecular sensors use secondary contrast agents to label the captured molecules, which are subsequently detected by fluorescence or reagents. These methods, however, require extra time and cost, the labeling efficiency depends on the affinity of the label for the target protein, and the secondary agents may alter the characteristics of the proteins of interest [29]. Label-free molecular detection would therefore enable more precise and accurate studies in molecular biology. The association and dissociation of proteins on a planar sensor surface alter the optical thickness of the surface, and since SD-OCPM can measure optical thickness variations with sub-nanometer sensitivity, it is a viable means for molecular assays. Multiplexed dynamic measurement of protein interactions has been achieved with SD-OCPM by Joo et al. [28]. The SD-OCPM setup for a label-free protein microarray assay is illustrated in Fig. 12a. It employs an upright microscope configuration to image multiple interactions across the sensor surface and detects the interference between the light reflected from the medium–SiO2 and SiO2–Si interfaces (Fig. 12b). By examining the interference signal, one can detect the adsorption of proteins on the SiO2 surface, as such a change leads to a frequency shift of the interference fringes (Fig. 12c, d). Joo et al. functionalized the sensor surface with multiple spots of probe proteins and examined the phase signal changes under the introduction of various analytes. Figure 13 shows the time-lapsed phase profiles, averaged over multiple spots for each interaction. The phase shift of each spot was measured relative to the background SiO2 surface.
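The fringe-shift readout of Fig. 12 can be captured by a toy two-beam interference model: the reflections from the medium–SiO2 and SiO2–Si interfaces produce spectral fringes whose phase tracks the optical thickness of the layer. The layer index and thickness and the 1 nm adsorbed optical path below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Toy model of the sensor readout: two-beam spectral fringes from a thin
# layer of optical thickness n*d; protein adsorption adds optical path
# and shifts the fringe phase.  All layer parameters are assumed.
n_sio2, d = 1.46, 1.0e-6
lam = np.linspace(750e-9, 850e-9, 2000)
k = 2.0 * np.pi / lam

def fringe_field(optical_length):
    # complex fringe term exp(2 i k L); intensity fringes follow its real part
    return np.exp(2j * k * optical_length)

before = fringe_field(n_sio2 * d)
after = fringe_field(n_sio2 * d + 1e-9)     # + 1 nm adsorbed optical path

dphi = np.angle(np.mean(after / before))    # fringe phase shift (mean over k)
print(f"fringe phase shift for 1 nm adsorption: {dphi * 1e3:.1f} mrad")
```

A 1 nm change of optical path produces a phase shift of order 10 mrad, comfortably above the sub-milliradian noise floor discussed earlier.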
In the experiment, bovine serum albumin (BSA), human serum albumin (HSA), rabbit immunoglobulin G (IgG), and mouse IgG were coated on the protein array, and subsequent introductions of rabbit-originated anti-HSA (aHSA), goat-originated anti-rabbit IgG (aRO), and goat-originated anti-mouse IgG (aMO) were


Fig. 12 (a) SD-OCPM configuration for the protein microarray assay. (b) The interference light includes the reflections from the top and bottom surfaces of the SiO2 layer. (c, d) The change of the fringes caused by molecular adsorption, in the wavelength and optical path-length domains, respectively (Reprinted with permission from [28] © Elsevier 2009)

Fig. 13 Change of accumulated mass density detected by SD-OCPM (Reprinted with permission from [28] © Elsevier 2009)

performed to induce protein adsorption. The different affinities between the analytes and the probe proteins could be clearly observed. The key parameters of the chemical interactions, the kinetic association and dissociation coefficients, could also be obtained from the measurements on the protein spots [28]. Figure 14a shows the sensorgram for the BSA and HSA spots upon aHSA introduction. The coefficients averaged over five spots were found to be ka = (1.37 ± 0.08) × 10⁴ M⁻¹ s⁻¹ and kd = (7.80 ± 0.99) × 10⁻⁵ s⁻¹ for

Fig. 14 Kinetic association and dissociation parameters for BSA and HSA spots with aHSA, based on measurements with SD-OCPM. (a) Sensorgram for each protein spot. (b, c) Calculated association and dissociation coefficients (Reprinted with permission from [28] © Elsevier 2009)

BSA, and ka = (2.61 ± 0.05) × 10⁴ M⁻¹ s⁻¹ and kd = (3.92 ± 0.50) × 10⁻⁵ s⁻¹ for HSA. These results were consistent with prior studies to within ~20 % [30].
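The reported coefficients correspond to a simple 1:1 Langmuir binding model, dR/dt = ka·C·(Rmax − R) − kd·R, whose association phase is a single exponential with observed rate kobs = ka·C + kd. The sketch below simulates such a sensorgram with the BSA coefficients from the text (the analyte concentration C and the capacity Rmax are assumed values) and recovers kobs along with the equilibrium dissociation constant KD = kd/ka:

```python
import numpy as np

# 1:1 Langmuir kinetics behind the sensorgram analysis.
# ka, kd are the BSA/aHSA coefficients reported in the text;
# the analyte concentration C and capacity Rmax are assumed.
ka, kd = 1.37e4, 7.80e-5        # M^-1 s^-1 and s^-1 (BSA spots)
C, Rmax = 100e-9, 1.0           # 100 nM analyte, normalized capacity

k_obs = ka * C + kd             # observed association rate
t = np.linspace(0.0, 2000.0, 2001)
Req = Rmax * ka * C / k_obs     # equilibrium response at this concentration
R = Req * (1.0 - np.exp(-k_obs * t))      # association-phase sensorgram

# Recover k_obs: log(Req - R) decays linearly with slope -k_obs.
slope, _ = np.polyfit(t, np.log(Req - R), 1)
k_fit = -slope

print(f"k_obs = {k_obs:.3e} s^-1, recovered = {k_fit:.3e} s^-1")
print(f"equilibrium K_D = kd/ka = {kd / ka * 1e9:.1f} nM")
```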

Cell-Based Assay with SD-OCPM
Quantitative measurement of cellular responses to external stimuli is significant for the development of new cellular drugs. Since the cellular response to a stimulus is not homogeneous [31], the ability to monitor a massive number of cells is crucial for understanding the general behavior of live cells. To avoid potential cell toxicity from secondary agents [32], various label-free cell assay systems have been developed, including electrode-cell impedance sensing (ECIS) [33], resonant waveguide grating (RWG) biosensors [34], and photonic crystal biosensors [35]. These methods allow measurement of cell responses to drugs without additional agents, but they require dedicated cell substrates patterned with metallic electrodes or nanostructures. A simple and cost-effective label-free cell assay has been realized with SD-OCPM [36]. In contrast to other label-free sensors, SD-OCPM allows depth-resolved phase measurement with high phase stability, so dynamic intracellular changes induced by chemicals can be measured at a specific depth. Indeed, in Ref. [36], a label-free cell-based assay with SD-OCPM was demonstrated with commercially available glass-bottom microtiter plates. Figure 15 shows the experimental configuration of the SD-OCPM cell monitoring system and a scenario for a cell-based assay. The introduction of a stimulant induces relocation of intracellular structures or changes in adhesion near the cell–substrate interface. These changes cause a fringe shift in the interference spectrum, which can be measured with nanometer-level sensitivity. Figure 16 shows representative results of the SD-OCPM cell assay with human breast cancer cells (MCF-7). The cells were treated with 2-picolinic acid (2-PA) and histamine. Introduction of 2-PA resulted in a decrease in cell adhesion and intracellular mass proximal to the cell–glass interface. On



Fig. 15 (a) Experimental configuration of the SD-OCPM cell assay system. (b) Scenario of cell response detection with SD-OCPM (Reprinted with permission from [36] © SPIE 2014)

the other hand, the histamine-treated cells exhibited the opposite behavior. These results were consistent with prior studies [37–39]. Cellular responses to the chemicals at different concentrations were also measured; as shown in Fig. 17, the relationship between the magnitude of the cell response and the drug concentration was found to be linear. SD-OCPM has a distinctive advantage in measuring cellular responses compared with other quantitative phase imaging techniques such as holographic [40], Hilbert phase [41], and Fourier phase microscopy [42]. Those methods can image unstained live cells with high contrast and sensitivity, but they do not offer depth-sectioning capability, as they operate with the light transmitted through the sample. SD-OCPM, on the other hand, enables localization of the measurement volume in depth, owing to its short coherence gating. Measuring dynamic mass redistribution inside cells necessitates this depth-resolved detection capability. In this regard, SD-OCPM is an attractive technique for label-free cell-based assays.

[Figure 16 panels: intensity and phase (Δφ) images at successive time points, with time traces of the phase change over 0–30 min for (a) and 0–40 min for (b).]

Fig. 16 (a) Experimental result of MCF-7 cells upon 2-PA introduction. (b) Experimental result of the histamine introduction case (Reprinted with permission from [36] © SPIE 2014)

Three-Dimensional Stem-Cell Viability Assessment

Conventional viability assays are based on destructive techniques such as the dye exclusion assay. The "happiness" of cells can also be assessed by measuring focal adhesion and cell shape, using various technologies including ECIS and RWG biosensors. Yet these techniques are limited to measurements on 2D cultured cells, whose conditions differ from in vivo systems. Monitoring 3D cultured cells plays an important role in regenerative medicine and tissue engineering, for cells in 3D structures are closer to the actual environment in living tissue

Fig. 17 (a) SD-OCPM phase graph of 2-PA-injected cells at different concentrations. (b) Averaged SD-OCPM phase magnitude vs. 2-PA concentration. (c) Phase measurements of histamine-treated cells at different concentrations. (d) Averaged SD-OCPM phase magnitude vs. histamine concentration. N = 5 indicates the number of measurements averaged for each concentration (Reprinted with permission from [36] © SPIE 2014)
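The linear dose-response relationship summarized in Fig. 17 amounts to fitting a line to averaged phase magnitudes versus concentration. A sketch with made-up numbers (the concentrations and phase values below are illustrative only, not data from Ref. [36]):

```python
import numpy as np

# Hypothetical averaged phase magnitudes (rad) vs. stimulant concentration (mM).
# These numbers are illustrative only, not measurements from Ref. [36].
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])                    # mM
phase = np.array([0.9e-3, 1.8e-3, 3.7e-3, 7.9e-3, 16.2e-3])   # rad

slope, intercept = np.polyfit(conc, phase, 1)
pred = slope * conc + intercept
# Coefficient of determination as a simple linearity check
r2 = 1 - np.sum((phase - pred) ** 2) / np.sum((phase - phase.mean()) ** 2)
print(f"slope = {slope * 1e3:.2f} mrad/mM, R^2 = {r2:.4f}")
```

A high R² over the tested range is what "linear" means operationally here; in practice the fit would be applied to the averaged phase magnitudes of panels (b) and (d).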

Fig. 18 OCPM images of adult stem cells cultured in a 3D scaffold. (a–d) OCPM intensity images of live stem cells (a–c) and fixed cells (d) in 3D scaffolds. (e–h) OCPM phase standard deviation maps of the locations identical to (a–d). Note that the viability of cells is clearly shown. (i–l) Overlaid OCPM intensity and phase images. These provide both depth-resolved structural and cell viability information (Reprinted with permission from [44] © Wiley 2013)

[43]. SD-OCPM has been employed to monitor the viability of stem cells cultured in 3D scaffolds [44]. Reference [44] utilized SD-OCPM to observe cellular dynamics far from the cell-substrate interface. The results of this study are depicted in Fig. 18. The OCPM intensity images show the structural information of the scaffolds and cells (Fig. 18a–d). To monitor the viability of cells, the authors performed lateral scanning at the same location and calculated the standard deviation of phase fluctuations over 50 lateral scans. The results are shown in Fig. 18e–h. The live cells (Fig. 18e–g) exhibited larger variation than the fixed cells (Fig. 18h). This study demonstrates the potential of SD-OCPM to monitor the cell culture process in terms of cell viability.
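The viability metric used in this study — the per-pixel standard deviation of phase over repeated lateral scans — can be sketched as follows. The array sizes and fluctuation amplitudes below are assumed for illustration; only the 50-scan repetition count follows the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def phase_std_map(scans):
    """scans: (n_repeats, n_pixels) array of phase values (rad) at one depth.
    Returns the per-pixel standard deviation across the repeated scans."""
    return np.std(scans, axis=0)

n_repeats, n_pixels = 50, 256   # 50 repeated lateral scans, as in the study
# Simulated phase traces: 'live' pixels fluctuate more than 'fixed' pixels.
# The fluctuation amplitudes (50 mrad vs. 5 mrad) are assumed for illustration.
live = rng.normal(0.0, 50e-3, (n_repeats, n_pixels))
fixed = rng.normal(0.0, 5e-3, (n_repeats, n_pixels))

live_map = phase_std_map(live)
fixed_map = phase_std_map(fixed)
print(f"mean phase std: live = {live_map.mean() * 1e3:.1f} mrad, "
      f"fixed = {fixed_map.mean() * 1e3:.1f} mrad")
```

The resulting map plays the role of Fig. 18e–h: pixels over live cells show a clearly larger standard deviation than pixels over fixed cells.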

Conclusion

As a functional derivative of SD-OCT, SD-OCPM was developed to provide high-resolution, depth-resolved amplitude and phase images of transparent biological specimens and to measure the dynamics of scattering structures with nanometer-level sensitivity. Since its first experimental demonstration [10], SD-OCPM has been widely employed in diverse applications in molecular and cell biology. The applications described here are only a few representative examples, and one can foresee many other areas in which SD-OCPM can be utilized. For example, imaging the nanoscale dynamics of in vitro three-dimensional models of metastatic cancer is one viable application [11]. The 3D models have been useful for recapitulating human disease. These spheroidal tumor cultures, however, can grow in excess of 200 μm in diameter, and it is critical to visualize the growth and treatment dynamics of cancer cells both at the surface and in the interior of the spheroidal models. SD-OCPM can be an ideal live-imaging tool for non-perturbatively visualizing these complex systems, enabling detailed observations of the model at both nodular and subcellular levels. Understanding the mechanism of angiogenesis under various conditions is another potential application. Microengineered devices have been developed to reveal the mechanisms of angiogenesis, but monitoring the angiogenic process nondestructively in these devices remains elusive [15]. SD-OCPM can potentially be utilized to evaluate angiogenic sprouting in a microengineered device: the three-dimensional (3D) distribution of the sprouting vessels and details such as vessel lumens and branching points can be visualized. In order to resolve many questions in biology, technological advances in SD-OCPM should also be made. The point-scanning nature of the current SD-OCPM implementation inherently limits the image acquisition speed, although it provides high spatial resolution.
Recently, ultrahigh-speed line-scan cameras have become commercially available, and thus it is expected that the SD-OCPM imaging speed can be further improved. Full-field SD-OCPM based on a wavelength-swept laser is another strategy to enhance the acquisition speed. Conventional wavelength-swept lasers have been known to exhibit poor phase stability due to the mechanical motion of the tunable filter and the amplified spontaneous emission background of the gain medium [45]. However, recent reports suggested that picometer-level displacement sensitivity can be achieved with Fourier-domain mode-locked lasers at up to a 370 kHz A-line rate [46, 47]. With such an A-line rate, the frame rate of SD-OCPM can be significantly improved. Achieving both high phase stability and high spatial resolution has been challenging in SD-OCPM. In the SD-OCPM implementation described in this chapter, increasing the spatial resolution would decrease the reflection from the reference surface (i.e., the bottom surface of a glass substrate), compromising the signal-to-noise ratio. There have been efforts to resolve this problem. An SD-OCPM implementation based on a Sagnac interferometer was demonstrated [48]. Sharing a galvanometric scanner between the reference and sample arms has also been suggested to eliminate phase noise due to the mechanical scanner in OCM [49]. SD-OCPM represents a novel label-free quantitative microscopy technique. It provides depth-resolved amplitude and phase images, and its displacement sensitivity can be in the nanometer range. With these unique features, along with the technological improvements described above, SD-OCPM is believed to stand out as a powerful tool in biology and medicine. Integration of SD-OCPM with other molecular-contrast imaging techniques such as fluorescence microscopy [21] would further expand its utility in quantitative biology.

References

1. Cooper MA (2002) Optical biosensors in drug discovery. Nat Rev Drug Discov 1(7):515–528
2. Akkin T, Joo C, De Boer JF (2007) Depth-resolved measurement of transient structural changes during action potential propagation. Biophys J 93(4):1347–1353
3. Lee PH (2009) Label free optical biosensor: a tool for G protein coupled receptors pharmacology profiling and inverse agonists identification. J Recept Signal Transduct 29(3–4):146–153
4. Denk W, Strickler JH, Webb WW (1990) Two-photon laser scanning fluorescence microscopy. Science 248(4951):73–76
5. Zernike F (1955) How I discovered phase contrast. Science 121(3141):345–349
6. Ross KFA (1967) Phase contrast and interference microscopy for cell biologists. Edward Arnold, London
7. Cuche E, Bevilacqua F, Depeursinge C (1999) Digital holography for quantitative phase-contrast imaging. Opt Lett 24(5):291–293
8. Popescu G, Deflores LP, Vaughan JC, Badizadegan K, Iwai H, Dasari RR, Feld MS (2004) Fourier phase microscopy for investigation of biological structures and dynamics. Opt Lett 29(21):2503–2505
9. Ikeda T, Popescu G, Dasari RR, Feld MS (2005) Hilbert phase microscopy for investigating fast dynamics in transparent systems. Opt Lett 30(10):1165–1167
10. Joo C, Akkin T, Cense B, Park BH, de Boer JF (2005) Spectral-domain optical coherence phase microscopy for quantitative phase-contrast imaging. Opt Lett 30(16):2131–2133
11. Zhao Y, Chen Z, Saxer C, Xiang S, de Boer JF, Nelson JS (2000) Phase-resolved optical coherence tomography and optical Doppler tomography for imaging blood flow in human skin with fast scanning speed and high velocity sensitivity. Opt Lett 25(2):114–116
12. Choma MA, Ellerbee AK, Yang C, Creazzo TL, Izatt JA (2005) Spectral-domain phase microscopy. Opt Lett 30(10):1162–1164
13. Fercher AF, Drexler W, Hitzenberger CK, Lasser T (2003) Optical coherence tomography – principles and applications. Rep Prog Phys 66(2):239
14. Fercher AF, Hitzenberger CK, Kamp G, El-Zaiat SY (1995) Measurement of intraocular distances by backscattering spectral interferometry. Opt Commun 117(1):43–48
15. Joo C (2008) Spectral-domain optical coherence phase microscopy for quantitative biological studies. Massachusetts Institute of Technology, Cambridge
16. Dorrer C, Belabas N, Likforman J-P, Joffre M (2000) Spectral resolution and sampling issues in Fourier-transform spectral interferometry. J Opt Soc Am B 17(10):1795–1802
17. Mujat M, Chen TC, de Boer JF, Park BH, Cense B (2007) Autocalibration of spectral-domain optical coherence tomography spectrometers for in vivo quantitative retinal nerve fiber layer birefringence determination. J Biomed Opt 12(4):041205
18. De Boer JF, Cense B, Park BH, Pierce MC, Tearney GJ, Bouma BE (2003) Improved signal-to-noise ratio in spectral-domain compared with time-domain optical coherence tomography. Opt Lett 28(21):2067–2069


19. Park B, Pierce MC, Cense B, Yun S-H, Mujat M, Tearney G, Bouma B, de Boer J (2005) Real-time fiber-based multi-functional spectral-domain optical coherence tomography at 1.3 μm. Opt Express 13(11):3931–3944
20. Yun S, Tearney G, De Boer J, Bouma B (2004) Motion artifacts in optical coherence tomography with frequency-domain ranging. Opt Express 12(13):2977–2998
21. Joo C, Kim KH, de Boer JF (2007) Spectral-domain optical coherence phase and multiphoton microscopy. Opt Lett 32(6):623–625
22. Izatt JA, Swanson EA, Fujimoto JG, Hee MR, Owen GM (1994) Optical coherence microscopy in scattering media. Opt Lett 19(8):590–592
23. Yamada S, Wirtz D, Kuo SC (2000) Mechanics of living cells measured by laser tracking microrheology. Biophys J 78(4):1736–1747
24. Van Citters KM, Hoffman BD, Massiera G, Crocker JC (2006) The role of F-actin and myosin in epithelial cell rheology. Biophys J 91(10):3946–3956
25. Berne BJ, Pecora R (2000) Dynamic light scattering: with applications to chemistry, biology, and physics. Courier Dover Publications, New York
26. Nishio I, Peetermans J, Tanaka T (1985) Microscope laser light scattering spectroscopy of single biological cells. Cell Biochem Biophys 7(2):91–105
27. Joo C, Evans CL, Stepinac T, Hasan T, de Boer JF (2010) Diffusive and directional intracellular dynamics measured by field-based dynamic light scattering. Opt Express 18(3):2858–2871
28. Joo C, Özkumur E, Ünlü MS, de Boer JF (2009) Spectral-domain optical coherence phase microscopy for label-free multiplexed protein microarray assay. Biosens Bioelectron 25(2):275
29. MacBeath G (2002) Protein microarrays and proteomics. Nat Genet 32:526–532
30. Özkumur E, Needham JW, Bergstein DA, Gonzalez R, Cabodi M, Gershoni JM, Goldberg BB, Ünlü MS (2008) Label-free and dynamic detection of biomolecular interactions for high-throughput microarray applications. Proc Natl Acad Sci U S A 105(23):7988–7992
31. Ferrie AM, Deichmann OD, Wu Q, Fang Y (2012) High resolution resonant waveguide grating imager for cell cluster analysis under physiological condition. Appl Phys Lett 100(22):223701–223704
32. Shamah SM, Cunningham BT (2011) Label-free cell-based assays using photonic crystal optical biosensors. Analyst 136(6):1090–1102
33. Giaever I, Keese CR (1993) A morphological biosensor for mammalian cells. Nature 366(6455):591
34. Fang Y, Ferrie AM, Fontaine NH, Yuen PK (2005) Characteristics of dynamic mass redistribution of epidermal growth factor receptor signaling in living cells measured with label-free optical biosensors. Anal Chem 77(17):5720–5725
35. Cunningham BT, Li P, Schulz S, Lin B, Baird C, Gerstenmaier J, Genick C, Wang F, Fine E, Laing L (2004) Label-free assays on the BIND system. J Biomol Screen 9(6):481–490
36. Ryu S, Hyun K-A, Heo J, Jung H-I, Joo C (2014) Label-free cell-based assay with spectral-domain optical coherence phase microscopy. J Biomed Opt 19(4):046003
37. Fernandez-Pol J, Bono VH, Johnson GS (1977) Control of growth by picolinic acid: differential response of normal and transformed cells. Proc Natl Acad Sci U S A 74(7):2889–2893
38. Brischwein M, Herrmann S, Vonau W, Berthold F, Grothe H, Motrescu ER, Wolf B (2006) Electric cell-substrate impedance sensing with screen printed electrode structures. Lab Chip 6(6):819–822
39. Fang Y, Ferrie AM (2008) Label-free optical biosensor for ligand-directed functional selectivity acting on β2 adrenoceptor in living cells. FEBS Lett 582(5):558–564


40. Kühn J, Shaffer E, Mena J, Breton B, Parent J, Rappaz B, Chambon M, Emery Y, Magistretti P, Depeursinge C (2013) Label-free cytotoxicity screening assay by digital holographic microscopy. Assay Drug Dev Technol 11(2):101–107
41. Popescu G, Ikeda T, Best CA, Badizadegan K, Dasari RR, Feld MS (2005) Erythrocyte structure and dynamics quantified by Hilbert phase microscopy. J Biomed Opt 10(6):060503
42. Lue N, Choi W, Popescu G, Ikeda T, Dasari RR, Badizadegan K, Feld MS (2007) Quantitative phase imaging of live cells using fast Fourier phase microscopy. Appl Opt 46(10):1836–1842
43. Pampaloni F, Reynaud EG, Stelzer EH (2007) The third dimension bridges the gap between cell culture and live tissue. Nat Rev Mol Cell Biol 8(10):839–845
44. Holmes C, Tabrizian M, Bagnaninchi PO (2013) Motility imaging via optical coherence phase microscopy enables label-free monitoring of tissue growth and viability in 3D tissue-engineering scaffolds. J Tissue Eng Regen Med. doi:10.1002/term.1687
45. Huber R, Wojtkowski M, Taira K, Fujimoto J, Hsu K (2005) Amplified, frequency swept lasers for frequency domain reflectometry and OCT imaging: design and scaling principles. Opt Express 13(9):3513–3528
46. Huber R, Wojtkowski M, Fujimoto J (2006) Fourier Domain Mode Locking (FDML): a new laser operating regime and applications for optical coherence tomography. Opt Express 14(8):3225–3237
47. Adler DC, Huber R, Fujimoto JG (2007) Phase-sensitive optical coherence tomography at up to 370,000 lines per second using buffered Fourier domain mode-locked lasers. Opt Lett 32(6):626–628
48. Helderman F, Haslam B, de Boer JF, de Groot M (2013) Three-dimensional intracellular optical coherence phase imaging. Opt Lett 38(4):431–433
49. Ansari R, Myrtus C, Aherrahrou R, Erdmann J, Schweikard A, Hüttmann G (2014) Ultrahigh-resolution, high-speed spectral domain optical coherence phase microscopy. Opt Lett 39(1):45–47


Handbook of Photonics for Biomedical Engineering DOI 10.1007/978-94-007-6174-2_26-1 © Springer Science+Business Media Dordrecht 2014

Monitoring Cancer Therapy with Diffuse Optical Methods

Ulas Sunar (a, b)* and Daniel J. Rohrbach (b)
(a) Biomedical Engineering, University at Buffalo, Buffalo, NY, USA
(b) Biomedical, Industrial and Human Factors Engineering, Wright State University, Dayton, OH, USA

Abstract

This review focuses on noninvasive monitoring of the therapeutic responses of tumors via assessment of tumor vascular and hemodynamic parameters. Diffuse optical techniques provide a promising means for noninvasive imaging of deep tissues. During the last few years, researchers have focused on developing diffuse optical techniques that provide complementary information with multiple parameters. These techniques permit real-time, noninvasive quantification of tissue hemoglobin concentration, blood oxygen saturation, blood flow, and drug concentration in vivo. The multiparameter approach increases sensitivity and specificity while providing a more complete picture of physiological mechanisms and, ultimately, prediction of the response. The instrumentation is portable and rapid, and it has enabled the study of tissue responses to cancer treatments in a variety of preclinical and clinical settings. After presenting the niche for diffuse optical methods in the Introduction, the basic principles of photon propagation in tissue are provided in section "Theoretical Background." After several instrumentation examples are presented in section "Instruments," preclinical applications are provided in section "Preclinical Applications." In the preclinical cases, examples of antivascular therapy and photodynamic therapy (PDT) in small animals are provided. The effects of an antivascular drug, combretastatin, were monitored continuously and were found to induce a substantial reduction of blood flow and tissue oxygen. The observations of blood flow and oxygenation were then correlated with power Doppler ultrasound and hypoxia biomarker techniques, respectively. Then PDT fluence-rate effects on skin and head and neck cancer models for superficial and deep tissue imaging are provided.
As clinical applications, PDT and chemoradiation monitoring in patients with head and neck cancer and chemotherapy monitoring in breast cancer patients will be presented in section "Clinical Applications." Pilot studies revealed that early changes in diffuse optical parameters correlate well with the end-point clinical responses. Total hemoglobin concentration, blood oxygen saturation, blood flow, and drug consumption during treatment showed variable sensitivity to the therapy for different individuals, emphasizing the need for simultaneous monitoring of multiple tissue parameters for individualized treatment planning.

Keywords

Cancer therapy; Monitoring and predicting response; Blood flow; Oxygenation; Oxygen metabolism; Diffuse optical imaging; Diffuse optical spectroscopy; Diffuse correlation spectroscopy; Diffuse reflectance spectroscopy

This book chapter was invited by Professor Donghyun Kim. *Email: [email protected]


Introduction

Complete and sustained recovery is the main goal of most cancer treatments. Several factors affect treatment planning and outcome, including the tumor type and stage as well as the patient's immune response. Neoadjuvant treatments such as radiation therapy, chemotherapy, or a combination of the two (chemoradiation therapy) are used for advanced-stage tumors to decrease tumor size before surgical resection [1, 2]. This increases the likelihood of full resection and organ preservation, and, in some cases, complete recovery is possible with neoadjuvant therapies. However, some patients do not respond to neoadjuvant therapies. This disparity in response may be due to differences in functional parameters, such as tumor blood flow and oxygenation, between patients, which can lead to variations in treatment dose [3]. The primary factor for determining therapy efficacy is reduction in tumor size, but this can take weeks or months to assess [4]. However, changes in tumor functional parameters (such as tumor blood flow and metabolism) may be seen before changes in tumor size are observed [5, 6]. Using methods that are sensitive to functional changes can help assess treatment efficacy early, allowing the treatment plan to be adjusted accordingly, as shown in Fig. 1. Recent studies have shown the importance of frequent therapy monitoring [7]. Common methods for therapy monitoring are positron emission tomography (PET) [8, 9], dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) [10], and computed tomography (CT) [11, 12]. A common feature shared by some of these techniques is the injection of contrast agents for assessment of vascular parameters. The pharmacokinetics of the contrast agent contains information about tumor vascular parameters such as blood flow and permeability. However, it can be difficult to quantify these parameters [13]. PET images metabolism but requires expensive radiotracers that are not widely available for use as contrast agents.
DCE-MRI is increasingly more available in hospitals, but the cost and immobility of the machines limit the applications. In addition, the pharmacokinetics of the gadolinium-based contrast agents typically used depends on both blood flow and the vascular-permeability/surface-area product, making interpreting the results difficult. CT requires ionizing radiation. Ultrasound (US) and laser Doppler flow (LDF) techniques can also be used for monitoring tumor vascular parameters [14, 15]. LDF is more sensitive to surface vessels [16] and US Doppler is typically more sensitive to larger blood vessels [13], although according to recent reports, there may be higher sensitivity to smaller vessels [17]. For a “reference standard” of tumor oxygenation, oxygen-sensitive microelectrode needles can be used [18], but the technique is invasive and not commonly used in hospitals [8].

Fig. 1 The primary goals of therapy monitoring with diffuse optical methods are to predict therapy outcome and provide feedback to improve therapy efficacy


Fig. 2 (a) Absorption spectra of primary tissue components. Absorption is high over most of the spectral region. (b) Absorption is much lower between 650 nm and 950 nm, and oxygenated and deoxygenated hemoglobin are the primary absorbers. (c) The absorption spectra in log scale with subset showing the changes in measured reflectance with respect to changes in oxygenated and deoxygenated state

Diffuse optical techniques are ideal for monitoring tumor vascular parameters noninvasively and repetitively. They are a fast, portable, and low-cost alternative for monitoring tumor response [19]. This is achieved by quantifying all therapeutically important functional vascular parameters with the exception of blood vessel permeability. Here, examples of preclinical and clinical studies are given demonstrating the utility and value of diffuse optical techniques in preclinical (therapy monitoring and new drug testing in small animals) and clinical (head and neck cancer) studies. The ultimate goal is to routinely apply diffuse optical methods in the clinic. Several clinical case reports and initial results will be presented from head and neck cancer, breast cancer, and skin cancer patients. The early blood flow and oxygenation changes presented herein suggest potential use for daily basis measurements in the early stages of therapy. In fact it has been suggested that the greatest tumor physiological (e.g., hemoglobin concentrations) changes may occur within the first week and that optical methods can pick up these changes [20]. Due to the limitations of the other techniques described earlier, diffuse optical methods have advantages for daily therapy monitoring. Diffuse optics is appropriate for clinical monitoring of cancer therapies because near-infrared (NIR) photons (650–950 nm) can penetrate several centimeters into the tissue. This feature was applied successfully by Cutler in 1929 to examine breast lesions by transillumination [21]. In 1977, Jobsis established the field of oximetry by using a two-wavelength spectroscopic approach to extract in vivo blood oxygenation [22]. Figure 2a shows the optical absorption of primary tissue chromophores. In this range absorption is typically high. However, there is a spectral window between 650 and 950 nm with low absorption (Fig. 2b), allowing for deeper light penetration in tissue. 
In this "therapeutic window," the primary absorbers are oxygenated and deoxygenated hemoglobin (HbO2 and Hb), whose absorption spectra depend on the oxygenation state (Fig. 2c). The oxygenation state provides information about available oxygen and metabolic processes [23]. Diffuse optical methods take advantage of the deep light penetration, sensitivity to oxygenation state, and inexpensive equipment to noninvasively probe living tissue. Most diffuse optical methods extract the optical properties (absorption and scattering coefficients) of living tissue. Tissue absorption contains information about the presence and concentration of biological chromophores, such as hemoglobin, which can be used to quantify physiological responses to therapy [24]. The scattering parameter contains information about the organization, density, and composition of tissue structures, including cells and subcellular organelles [25–27]. A higher organelle population, particularly mitochondria, can indicate higher metabolic activity, a hallmark of rapidly growing tumors. This increase in organelle density typically leads to a higher scattering coefficient for the tumor. Therefore, diffuse optical methods can provide functional and structural information about changes before, during, and after therapy. Blood flow is another important intrinsic contrast that can be measured with diffuse optical methods [28]. Diffusing photons can scatter from moving blood cells, causing fluctuations in the intensity of the collected light over time. In a rigid and static medium, there would be no fluctuations. Measurements of these temporal fluctuations allow one to derive information about blood flow deep within tissue [29, 30]. This technique is called diffuse correlation spectroscopy (DCS). Blood flow measurements from the tumor are particularly helpful for assessing therapeutic response, since blood flow is related to angiogenesis and tumor oxygenation [31–34].
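As a concrete illustration of the chromophore decomposition described above, a two-wavelength Beer–Lambert inversion recovers HbO2 and Hb concentrations from absorption coefficients measured at two NIR wavelengths. The extinction coefficients below are rough, assumed values in the right ballpark for 690 and 830 nm (not authoritative tabulated data), and the measured μa values are hypothetical:

```python
import numpy as np

# Approximate molar extinction coefficients (1/(cm*M)) at two NIR wavelengths;
# rows = wavelengths (690 nm, 830 nm), columns = [HbO2, Hb]. These are rough,
# assumed values for illustration only.
E = np.array([[276.0, 2052.0],    # 690 nm
              [974.0,  693.0]])   # 830 nm
E = E * np.log(10)                # base-10 extinction -> base-e absorption

# Hypothetical measured tissue absorption coefficients (1/cm).
mua = np.array([0.105, 0.123])    # at 690 nm and 830 nm

c_hbo2, c_hb = np.linalg.solve(E, mua)   # chromophore concentrations (M)
thc = c_hbo2 + c_hb                      # total hemoglobin concentration
sto2 = c_hbo2 / thc                      # blood oxygen saturation
print(f"StO2 = {sto2 * 100:.1f}%, THC = {thc * 1e6:.1f} uM")
```

With more than two wavelengths the same decomposition becomes an overdetermined least-squares fit, which also allows water and lipid to be included as chromophores.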
Moreover, exogenously administered agents such as photosensitizers can strongly fluoresce and preferentially accumulate in tumor cells, which can enable detection and therapy at an earlier stage [35, 36]. Noninvasive and repetitive assessment of diffuse optical parameters is an attractive approach for monitoring and optimization of cancer therapies [37–40]. For example, both photodynamic therapy (PDT) and radiation therapy require oxygen for high efficacy; thus, monitoring blood flow and oxygenation can inform oxygen-limited and oxygen-conserving protocols so that clinicians can adapt the treatment for improved efficacy [41]. Similarly, the blood flow parameter can be crucial for delivering cytotoxic agents to tumor cells. With the discovery of new therapeutic (e.g., antivascular and antiangiogenic) drugs, the potential therapeutic benefits of targeting tumor vasculature need to be demonstrated. These drugs need to be tested in preclinical settings before translation into the clinic. Diffuse optical methods can facilitate the clinical translation of these agents rapidly and more effectively by providing frequent assessment of tumor function-related parameters. In the following sections, basic theoretical models of photon diffusion are discussed. First, analytical solutions for semi-infinite media, a common geometry for most preclinical and clinical applications, will be discussed. Next, three spectroscopic point-measurement approaches are introduced: diffuse reflectance spectroscopy for quantifying blood oxygen saturation and blood volume, diffuse correlation spectroscopy for quantifying blood flow, and diffuse fluorescence spectroscopy for quantifying drug concentration in vivo. Diffuse optical imaging concepts with applications for deep and thick tumor imaging are presented. Several preclinical and clinical applications of diffuse optical methods for monitoring cancer therapies are discussed.
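The DCS measurement mentioned above rests on the normalized intensity autocorrelation g2(τ); for Gaussian speckle fields the Siegert relation gives g2(τ) = 1 + β|g1(τ)|². The sketch below estimates g2 from a simulated speckle intensity trace with an assumed decorrelation time — the random process and all numbers are illustrative, not a model of actual tissue dynamics:

```python
import numpy as np

rng = np.random.default_rng(1)

def g2(intensity, max_lag):
    """Normalized intensity autocorrelation g2(tau) = <I(t) I(t+tau)> / <I>^2."""
    i_mean = intensity.mean()
    return np.array([np.mean(intensity[:-lag] * intensity[lag:])
                     for lag in range(1, max_lag + 1)]) / i_mean ** 2

# Simulated single-mode speckle: I = |E|^2 with E a complex Gaussian AR(1)
# process whose field correlation decays as |g1(tau)| = exp(-tau / tau_c).
n, tau_c = 200_000, 20.0          # samples; assumed decorrelation time (in samples)
rho = np.exp(-1.0 / tau_c)
noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
E = np.empty(n, dtype=complex)
E[0] = noise[0]
for t in range(1, n):
    E[t] = rho * E[t - 1] + np.sqrt(1 - rho ** 2) * noise[t]
I = np.abs(E) ** 2

corr = g2(I, max_lag=100)
# Siegert relation with beta = 1 here: g2(tau) ~ 1 + exp(-2 tau / tau_c)
print(f"g2(1)   = {corr[0]:.2f} (theory ~ {1 + np.exp(-2 / tau_c):.2f})")
print(f"g2(100) = {corr[-1]:.2f} (theory ~ 1.00)")
```

In an actual DCS measurement, faster blood flow shortens the decorrelation time of the detected speckle, so fitting the measured g2(τ) decay with a correlation-diffusion model yields a blood flow index.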

Theoretical Background

Photon Diffusion Model

In this section, the basic models of photon propagation in tissue will be discussed. The diffusion approximation of photon propagation in tissue allows analytical solutions for simple geometries, which


Fig. 3 Photon diffusion in tissue. (a) Drawing showing the most sensitive area of photon path between the source and detector fibers. (b) Continuous wave method. (c) Frequency-domain method. (d) Time domain method

can quickly be solved for nearly real-time quantification. The descriptions in this chapter will be brief, so it is recommended that the interested reader consult the excellent reviews cited in this work. The primary focus here is on clinical and preclinical applications with the diffusion approximation and analytical solutions. The semi-infinite (reflectance) approximation is the primary approach used in diffuse optical spectroscopies for clinical applications. The important physical quantities of optical contrast in tissue are the absorption and scattering parameters. In the NIR spectral window (650–950 nm), the tissue absorption parameter (μa) is much lower (10–100 times) than the reduced scattering parameter (μs′); thus light scatters many times before being absorbed. After photons are launched into the tissue from a light source, they travel in a random walk due to the highly scattering medium. Photon propagation in tissue is most generally modeled by the transport equation or with Monte Carlo simulations. Both techniques are computationally intensive, so the photon diffusion approximation is usually applied. When the scattering parameter is much greater than absorption (μs′ ≫ μa), the photon fluence rate Φ(r, t) [W cm⁻² s⁻¹], which is proportional to the photon number density, obeys the time-dependent diffusion equation in a homogeneous medium:

    ∇·(D∇Φ(r, t)) − μa Φ(r, t) − (1/v) ∂Φ(r, t)/∂t = −S(r, t)    (1)

where v is the speed of light in the medium, μa is the absorption coefficient, D = 1/[3(μs′ + μa)] is the photon diffusion coefficient, and S is the photon source term, which gives the number of photons emitted at position r and time t per unit volume per unit time. The reduced scattering parameter μs′ is the reciprocal of the photon random-walk step l = 1/μs′, which is the average distance traveled by a photon before its direction becomes randomized. The diffusion equation can be solved either numerically or analytically, depending on the photon diffusion technique and the boundary conditions. There are three primary photon diffusion techniques commonly used for characterizing the optical properties (μa, μs′) of tissue, as shown in Fig. 3: continuous


Fig. 4 The two primary models for light propagation. (a) Infinite and (b) semi-infinite geometry

wave (CW), frequency domain (FD), and time resolved (TR). The CW technique (Fig. 3b) uses a light source of constant intensity, and diffused light of constant amplitude is collected at the detector. In the FD technique (Fig. 3c), the light intensity is modulated, typically sinusoidally; thus both the amplitude and the phase of the sinusoidal diffusive wave are measured at the detector. The TR technique (Fig. 3d) uses short laser pulses as the source and measures the broadening (dispersion) of these pulses caused by the diffusive medium. These techniques are all related to one another: the TR technique is equivalent to an FD measurement performed over a wide range of modulation frequencies, and the FD technique reduces to the CW technique when the source modulation frequency is zero. CW is the simplest technique, uses relatively inexpensive instrumentation, and allows fast data acquisition. However, because it only measures the amplitude, it has difficulty quantifying both the absorption and scattering parameters. TR has the highest information content, but it needs relatively complex and expensive instrumentation, limiting its clinical applications. FD systems are typically compact and mobile with cost-efficient instruments, and the extra phase information allows the extraction of both the absorption and scattering parameters, as described briefly below. It is helpful to give example solutions of the diffusion equation for special cases and simple geometries so that analytical expressions are available. In the FD approach, the source intensity is modulated sinusoidally, i.e., S(r, t) = S₀ δ(r) exp(−iωt), where S₀ is the source strength and ω is the angular modulation frequency. The fluence rate is then parameterized as Φ(r, t) = Φ₀(r) exp(−iωt), and if the medium is homogeneous and the optical parameters do not change spatially, the diffusion equation reduces to

D∇²Φ₀(r) − (vμ_a − iω)Φ₀(r) = −vS₀ δ(r).    (2)

In an infinite homogeneous medium (Fig. 4a), the only boundary restriction is that the fluence rate goes to zero at large distances from the source. The solution for a sinusoidally modulated source at the origin has the form Φ(r, t) = C exp(ikr)/r · exp(−iωt) [42], where C = vS₀/(4πD) is a constant term, k is the wave number with k² = (−vμ_a + iω)/D, and r is the distance between the source and detector. This analytical solution shows that when the amplitude of the light source is modulated (with angular frequency ω), a macroscopic diffuse photon density wave is generated in the tissue and propagates outward from the source. A detector at distance r measures a decrease in amplitude and a phase shift relative to the intensity-modulated source (Fig. 3c). The amplitude damping and phase shift of the detected diffusive wave depend on the optical properties of the diffuse medium and on the modulation frequency [43]: larger optical parameters and higher modulation frequencies cause more damping and a larger phase shift. The analytical solution also indicates that diffuse photon density waves are scalar waves. It has been demonstrated experimentally that these waves interfere, refract, scatter, and diffract from optical heterogeneities [43, 44].
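A minimal numerical sketch of this solution (not from the chapter): it assumes representative NIR tissue values and the common convention D = v/[3(μ_s′ + μ_a)], so that k carries units of cm⁻¹. The imaginary part of k sets the amplitude damping and the real part the accumulated phase shift.

```python
import cmath

# Complex wavenumber of a diffuse photon density wave. Assumed values:
# typical NIR tissue optical properties and a 100 MHz modulation frequency.
v = 2.2e10                      # speed of light in tissue, cm/s (n ~ 1.37)
mua, musp = 0.1, 10.0           # absorption / reduced scattering, cm^-1
omega = 2 * cmath.pi * 100e6    # angular modulation frequency, rad/s

D = v / (3.0 * (musp + mua))    # photon diffusion coefficient, cm^2/s
k = cmath.sqrt((-v * mua + 1j * omega) / D)   # complex wavenumber, cm^-1

# Phi_0(r) ~ exp(ikr)/r: Im(k) gives the amplitude damping and Re(k)
# the phase lag accumulated per cm of propagation.
assert k.imag > 0 and k.real > 0
damping_per_cm = cmath.exp(-k.imag).real  # amplitude attenuation over 1 cm
phase_per_cm = k.real                     # phase shift per cm, radians

# At omega = 0 (the CW limit) the wave is purely damped: no phase shift,
# and the damping rate reduces to sqrt(3*mua*(musp + mua)).
k_cw = cmath.sqrt((-v * mua + 0j) / D)
assert abs(k_cw.real) < 1e-9
assert k.imag > k_cw.imag       # modulation adds damping, as stated above
```

Raising either the modulation frequency or the optical parameters increases both the damping and the phase shift, consistent with the discussion above.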



Diffuse Reflectance Spectroscopy

Reflectance Solution
The main aim of in vivo spectroscopy is to quantify the optical parameters (unknowns) from measurements such as the fluence rate (knowns) using available experimental techniques. In most medical applications of noninvasive diffuse optical measurements, both the light source and the detector are placed on the tissue surface (Fig. 4b), which creates a physical boundary. Thus, the infinite-medium approximation is not appropriate for these cases, and the semi-infinite approximation is a better approach. In the semi-infinite approximation, the source and detector fibers are arranged on the same surface plane (Fig. 4b). The analytical solution for the semi-infinite geometry can be obtained by using extrapolated-zero boundary conditions and the method of images [45]. An image of the real source is formed by reflecting the real source about the plane of the "extrapolated boundary" (z_b in Fig. 4b). Furthermore, it has been shown that a light beam incident upon the tissue is well represented by a single point source at a depth 1/μ_s′, equal to one effective photon mean free path, which typically has a value of ~1 mm in tissue. This accounts for an effective isotropic photon source at depth 1/μ_s′ even though the photons are actually injected in a single direction at the tissue interface. Using the image source and the superposition principle, the fluence rate at a detector on the surface of the semi-infinite medium is obtained as [46]:

Φ(r) = C [exp(ik r₁(r))/r₁(r) − exp(ik r₂(r))/r₂(r)],    (3)

where C = vS₀/(4πD) is a constant term and r is the source-detector separation along the tissue surface (Fig. 4b). The parameters r₁,₂(r) are shown in Fig. 4b: r₁(r) is the distance from the point of contact of the detector fiber on the tissue surface to the effective source position in the tissue, located a depth 1/μ_s′ directly beneath the source fiber; r₂(r) is the distance between the point of contact of the detector fiber and the image source located 1/μ_s′ + 2z_b directly above the source, where z_b is the distance between the extrapolated boundary and the real physical boundary (the surface of the medium). The diffuse reflectance R_d, defined as the flux out of the tissue surface, is derived using Fick's law, R_d(r, ω, μ_a, μ_s′) = −D∇Φ(r, ω, μ_a, μ_s′) evaluated at the surface. To obtain the optical parameters, the experimentally measured reflectance R_data is fit to the model reflectance by minimizing the difference between R_data and R_d:

min ‖R_data(r, ω, μ_a, μ_s′) − R_d(r, ω, μ_a, μ_s′)‖.    (4)

After obtaining the unknown optical properties at multiple wavelengths, tissue spectroscopy can be utilized to assess physiological parameters, as briefly described in the section below.

Spectroscopy
The wavelength-dependent absorption parameter μ_a(λ) of the chromophores depends on their concentrations (C) and extinction coefficients (ε(λ)) [47]. The main endogenous absorbers in tissue in the red and near-infrared wavelength window (~600–950 nm) are oxyhemoglobin (HbO₂) and deoxyhemoglobin (Hb). Additional background tissue absorption originates mainly from water. Thus, the total absorption parameter of the tissue is assumed to be a linear sum of the constituent hemoglobin components and the background absorption:

μ_a(λ) = ε_HbO₂(λ) C_HbO₂ + ε_Hb(λ) C_Hb + μ_a^back    (5)



Fig. 5 (a) Photons injected by a long-coherence-length laser scatter from static scatterers (black dots) and from moving blood cells, which introduces temporal intensity fluctuations at the detector. (b) The decay rate of the intensity autocorrelation function is related to blood flow: a sharper decay translates to higher blood flow

Here ε(λ) is the extinction coefficient of the given chromophore at the given wavelength, C_HbO₂ and C_Hb are the concentrations of HbO₂ and Hb, respectively, and μ_a^back is the background absorption coefficient. Summing the hemoglobin concentrations gives the total hemoglobin concentration (THC), THC = C_HbO₂ + C_Hb, which is related to the blood volume (BV) in the microvasculature and is commonly used interchangeably with it. Tumors induce new vascular growth (angiogenesis) for nutrition and oxygen supply, which results in higher BV in the tumor compared to the surrounding normal tissue. Tissue oxygen saturation (StO₂) is defined as the ratio of the oxygenated hemoglobin concentration to the total hemoglobin concentration, i.e., StO₂ = C_HbO₂/(C_HbO₂ + C_Hb). Although tissue oximetry principles are similar to those of pulse oximetry, StO₂ originates from the volume average of thick tissue containing capillaries, arterioles, and venules, while pulse oximetry only measures superficial arterial oxygen saturation. Tumors usually have lower blood oxygen saturation, related to the hypoxia induced by their high metabolic rate and oxygen consumption. It should be stressed that the above equation is important because it allows the transition from the "physical parameter" μ_a(λ) to the clinically relevant "physiological parameters" of oxyhemoglobin, deoxyhemoglobin, and total hemoglobin concentration and blood oxygen saturation (C_HbO₂, C_Hb, THC, StO₂). The scattering parameter depends on the scattering amplitude and power of the tissue structures (e.g., mitochondria, organelles) and follows Mie-type behavior, μ_s′(λ) = aλ^(−b), decreasing monotonically over the 600–950 nm wavelength window, where a is the scattering amplitude and b is the scattering power. A hallmark of cancer is uncontrolled cell growth, so tumors typically have more cells and organelles; thus they generally have a higher scattering parameter than the surrounding normal tissue.
While the absorption parameter provides blood-related functional parameters, the scattering parameter provides structural information about the tissue, making diffuse reflectance spectroscopy (DRS) a powerful technique for assessing physical and physiological parameters [48–53].
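To make Eq. 5 concrete, the sketch below forward-simulates μ_a at two wavelengths from known concentrations and then inverts the resulting 2 × 2 linear system to recover the concentrations, THC, and StO₂. The extinction coefficients and concentrations are illustrative placeholders (not tabulated Hb/HbO₂ values), and the background term μ_a^back is neglected.

```python
# Illustrative spectral unmixing via Eq. 5 (background absorption neglected).
# Extinction values are placeholders, NOT tabulated Hb/HbO2 spectra.

def unmix(mua_pair, e_hbo2, e_hb):
    """Solve mua(l_i) = e_HbO2(l_i)*C_HbO2 + e_Hb(l_i)*C_Hb for i = 1, 2
    by Cramer's rule; returns (C_HbO2, C_Hb)."""
    det = e_hbo2[0] * e_hb[1] - e_hbo2[1] * e_hb[0]
    c_hbo2 = (mua_pair[0] * e_hb[1] - mua_pair[1] * e_hb[0]) / det
    c_hb = (e_hbo2[0] * mua_pair[1] - e_hbo2[1] * mua_pair[0]) / det
    return c_hbo2, c_hb

e_hbo2, e_hb = (1.0, 2.5), (3.0, 1.0)  # placeholder extinction pairs at l1, l2
c_true = (0.8, 0.2)                    # C_HbO2, C_Hb in arbitrary units
mua_pair = tuple(eo * c_true[0] + eh * c_true[1] for eo, eh in zip(e_hbo2, e_hb))

c_hbo2, c_hb = unmix(mua_pair, e_hbo2, e_hb)
thc = c_hbo2 + c_hb       # total hemoglobin concentration, THC
sto2 = c_hbo2 / thc       # tissue oxygen saturation, StO2
```

With μ_a measured at more than two wavelengths, the same system becomes overdetermined and is solved in the least-squares sense instead.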

Diffuse Correlation Spectroscopy

In the previous section ("Diffuse Reflectance Spectroscopy"), the optical absorption and scattering parameters, which are static properties of the tissue, were discussed. In this section it is shown that


speckle fluctuations of the diffusing photons can be used to probe dynamical information such as blood flow in the tissue. When diffusing photons scatter from moving scatterers (Fig. 5a), the intensity of the detected light fluctuates in time (Fig. 5b), and the fluctuations are more rapid for faster-moving blood cells. Therefore, one can derive information about motion far below the tissue surface from measurements of the temporal fluctuations of diffusing light. This optical technique extends dynamic light scattering (DLS) in the single-scattering limit [54, 55] to diffusing-wave spectroscopy (DWS) in the multiple-scattering regime, which was introduced for studying the motion of scatterers such as colloids in suspension. The extension of DWS to tissue optics within the diffusion approximation for quantifying blood flow has been implemented relatively recently [30, 43]. Details of the diffuse photon correlation (or diffusing-wave spectroscopy) method can be found elsewhere [56]. Briefly, the measured quantity is the normalized electric field autocorrelation function, g₁(r, τ) = G₁(r, τ)/⟨I⟩, where G₁(r, τ) = ⟨E(r, t)·E*(r, t + τ)⟩ is the electric field autocorrelation function, ⟨I⟩ is the time-averaged diffuse light intensity, r is the source-detector separation, and τ is the time delay. The study of motion in deep tissue is possible because the electric field autocorrelation function satisfies a diffusion equation [4, 29, 30, 42, 57, 58]:

∇ · (D∇G₁(r, τ)) − (vμ_a + (1/3)vμ_s′ k₀² ⟨Δr²(τ)⟩) G₁(r, τ) = −vS δ³(r)    (6)

Analytical solutions are readily available and depend on the nature of the particle motion, which is characterized by the mean-square displacement ⟨Δr²(τ)⟩ of the scattering particles, such as blood cells.
Similar to the diffuse reflectance spectroscopy described in section "Diffuse Reflectance Spectroscopy," in most preclinical and clinical applications measurements are performed in reflectance mode (semi-infinite geometry, Fig. 4b), where the source and detector positions are on the same plane, the tissue surface. In this case the solution again simplifies by using the method of images:

G₁(r, τ) = vS/(4πD) · [exp(−k(τ)r₁)/r₁ − exp(−k(τ)r₂)/r₂]    (7)

where k²(τ) = 3μ_aμ_s′ + μ_s′² k₀² ⟨Δr²(τ)⟩ and r₁, r₂ are defined as before. As can be seen, this has exactly the same form as the solution in Eq. 3, with the main difference being the additional dynamic (motion) term. Here the optical parameters (μ_a, μ_s′) are called "static" parameters, which can be obtained from diffuse reflectance spectroscopy (DRS) measurements as described in the previous section. The main difference originates from the extra term μ_s′² k₀² ⟨Δr²(τ)⟩, which characterizes the motion of the scatterers; k₀ is the wavenumber of light in the medium, and r₁ (r₂) is the distance between the source (image) and the detector on the surface, as described in section "Diffuse Reflectance Spectroscopy." For the random ballistic flow model, ⟨Δr²(τ)⟩ = v²τ², where v² is the second moment of the cell velocity distribution. For diffusive motion, ⟨Δr²(τ)⟩ = 6D_Bτ, where D_B is the effective diffusion coefficient of the tissue scatterers [57]. It has been observed that the diffusion model fits the autocorrelation curves (Fig. 5b) well (significantly better than the random flow model), and αD_B characterizes blood flow in deep tissue in a broad range of studies, including murine and human tumors and brain function [28, 57–64]. Here α is a factor representing the probability that a scattering event in the tissue originates from a moving scatterer (α is generally proportional to the tissue blood volume fraction). Relative blood flow, rBF, is generally reported to describe blood flow changes during therapy: rBF is the blood flow parameter measured relative to its pretreatment value, i.e., rBF = αD_B/αD_B(baseline).
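A minimal sketch of Eq. 7 for the Brownian-motion model, ⟨Δr²(τ)⟩ = 6D_Bτ, in the semi-infinite geometry. All numerical values (wavelength, refractive index, optical properties, αD_B, and the extrapolated-boundary distance z_b = 2/(3μ_s′)) are illustrative assumptions, not values from the chapter.

```python
import math

# Normalized field autocorrelation g1(tau) for Brownian scatterer motion,
# semi-infinite geometry (Eq. 7). All parameter values are assumptions.
mua, musp = 0.1, 10.0               # optical properties, cm^-1
n = 1.37                            # tissue refractive index
k0 = 2 * math.pi * n / 785e-7       # wavenumber of 785 nm light in tissue, cm^-1
rho = 2.0                           # source-detector separation, cm
z0 = 1.0 / musp                     # effective isotropic source depth, cm
zb = 2.0 / (3.0 * musp)             # extrapolated-boundary distance, cm (assumed)
r1 = math.hypot(rho, z0)
r2 = math.hypot(rho, z0 + 2 * zb)

def g1(tau, alpha_db):
    """g1 = G1(tau)/G1(0) with k(tau)^2 = 3*mua*musp + musp^2*k0^2*<dr^2(tau)>."""
    def two_source(k):
        return math.exp(-k * r1) / r1 - math.exp(-k * r2) / r2
    msd = 6.0 * alpha_db * tau                       # Brownian <dr^2(tau)>
    k_tau = math.sqrt(3 * mua * musp + musp ** 2 * k0 ** 2 * msd)
    return two_source(k_tau) / two_source(math.sqrt(3 * mua * musp))

# Larger alpha*DB (faster blood flow) gives a faster decay of g1:
fast = g1(1e-5, 2e-8)   # alpha*DB in cm^2/s, typical order of magnitude
slow = g1(1e-5, 1e-8)
```

In practice αD_B is obtained by fitting measured autocorrelation curves to this model, and rBF is reported as the ratio of the fitted αD_B to its pretreatment baseline.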



Fig. 6 Excitation source propagation, fluorescence generation, and propagation

Diffuse Fluorescence Spectroscopy
Tumor therapy monitoring depends on localizing ("seeing") the tumor before treating it [65]. The success of a therapy depends on the stage and size of the tumor; detecting the tumor at the earliest stage and planning the treatment protocols accordingly will improve the therapeutic outcome significantly. For accurate therapy planning under any treatment approach, the extent of the tumor has to be mapped [66]. Lesion structures, such as micronodules, may not be clinically evident and can result in tumor recurrence. Since some tumors, like oral lesions, usually exhibit a multifocal, wide-field pattern of invasion and occur at diverse sites, tumor demarcation during treatment planning can pose considerable problems for clinicians. One approach is to enhance the tumor contrast for visualizing early cancers via tumor-selective contrast agents. Fluorescence imaging has played a significant role in visualizing oral tumors. Autofluorescence imaging, which utilizes intrinsic tissue fluorophores (e.g., collagen, NADH, and FAD), has proven useful in clinical settings for improved screening of suspicious lesions [67–70]. Since background tissue autofluorescence is weak, exogenously administered fluorescent contrast agents can greatly increase the contrast for tissue demarcation. Photosensitizers (PS) are exogenous fluorophores that have demonstrated clinical utility for the therapy and diagnosis of several early malignancies, such as oral, esophageal, and bladder cancers, as they preferentially accumulate in dysplastic and malignant cells [71–82]. While the administered PS dose affects the overall tumor accumulation [83], there will still be patient-to-patient variation as well as variation within the tumor. In addition, strong absorption and scattering in living tissue distort the raw fluorescence signal (intrinsic and exogenous), confounding the true fluorescence contrast.
Thus, accurate methods to address these issues are needed for a quantitative fluorescence imaging approach [84, 85]. In this section we describe diffuse waves propagating in fluorescent media. Fluorescence in diffuse media is a two-part process: diffuse waves generated by a point source propagate through the medium from the source position to the fluorophore position; the excited fluorophore then acts as a secondary point source of fluorescent diffuse waves, which propagate to the detector (Fig. 6). If we define the fluorescence transfer function as T(r) = εηN(r) for the simple case of CW diffuse waves, where ε is the fluorophore extinction coefficient, η is the fluorescence quantum yield, and N(r) is the fluorophore distribution, then the generated fluorescence wave in the scattering medium is the summation over all fluorophores [44, 86]:

Φ_fl(r_s, r_d) = ∫ Φ(r_s, r) T(r) G(r, r_d) dr    (8)

Here Φ_fl is the fluorescent diffuse photon wave, or simply the fluorescent intensity signal in diffuse media, and G is the Green's function, or propagator. The first term in the integral (Φ(r_s, r))


represents diffuse photon propagation from the excitation source to a fluorophore, and the term T(r)G(r, r_d) represents the propagation of the generated fluorescence signal from the fluorophore to the detector. For diffuse fluorescence spectroscopy (DFS) data analysis, the fluorescence and optical parameter distributions are typically assumed to be homogeneous. The tissue fluorescence signal (Φ_fl) is usually assumed to be a linear combination of the injected drug fluorescence (e.g., PS fluorescence) and the tissue autofluorescence: Φ_fl = C_PS Φ_PS + C_auto Φ_auto, where C_PS and C_auto are the spectral amplitudes of the PS and the autofluorescence, respectively. It should be noted that the extracted spectral amplitudes do not correspond to absolute concentrations, since the raw fluorescence signal is affected by the optical properties at both the excitation and emission wavelengths [87, 88]: a low fluorescence signal can mean either a low fluorophore concentration or strong signal attenuation due to optical absorption and scattering. The ultimate aim of DFS is to quantify the absolute fluorescence (drug) concentration in vivo, and this is a field of active research. One simple approach is to normalize the fluorescence signal with the diffuse reflectance data (R_data) to reduce the effects of optical attenuation on the raw fluorescence signal [89, 90]. Another approach is to normalize the fluorescence signal by the autofluorescence [91, 92]. In addition, small-diameter optical fibers minimize the effect of tissue absorption, allowing improved quantification, as shown by Pogue and Burke [93].
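A toy sketch of the first normalization approach mentioned above: dividing the raw fluorescence by the diffuse reflectance measured at the same source-detector pair. It is idealized in that the attenuation at the excitation and emission wavelengths is lumped into one synthetic factor that cancels exactly; real tissue only approximately satisfies this.

```python
# Toy reflectance normalization of a raw fluorescence signal. Attenuation
# is a single synthetic factor here (an idealization): the same underlying
# fluorophore signal is "measured" under two different attenuation conditions.

def normalized_fluorescence(f_raw, r_data):
    """Ratio of raw fluorescence to diffuse reflectance."""
    return f_raw / r_data

true_signal = 5.0
atten_a, atten_b = 0.9, 0.3          # synthetic tissue attenuation factors
f_a, r_a = true_signal * atten_a, atten_a
f_b, r_b = true_signal * atten_b, atten_b

# Raw signals differ threefold, but the normalized values agree:
ratio_a = normalized_fluorescence(f_a, r_a)
ratio_b = normalized_fluorescence(f_b, r_b)
```

The same idea underlies the "Born ratio" used in fluorescence tomography, where the division additionally cancels unknown source and detector coupling factors.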

Diffuse Optical Imaging
In some cases the tissue can be very heterogeneous, and spectroscopic models that assume tissue homogeneity may not be accurate enough to characterize it. For example, a basal cell carcinoma or melanoma introduces heterogeneity on a relatively homogeneous background tissue. The tumor itself can exhibit significant heterogeneity with respect to oxygen, vascular, or drug distribution, and this heterogeneity can extend along the thickness and depth of the tumor. Apart from demarcation purposes, imaging is also important for therapy monitoring, since a localized tumor response might be very different from the global response. In such cases a homogeneous model cannot accurately recover the heterogeneities. Thus, there has been significant work on volumetric, quantitative imaging of these heterogeneities with optical contrast [63, 94–102]. In diffuse optical tomography (DOT), a three-dimensional reconstruction of the heterogeneities hidden in the tissue volume is obtained from measurements at the tissue surface. This imaging method is similar to X-ray tomography, but it uses much less energetic photons, resulting in high scattering and low resolution. Optical imaging consists of two main parts: forward and inverse models. The forward model describes photon propagation in the heterogeneous medium to predict the measured signal. The region of interest is divided into voxels, and the forward model provides a weight for each voxel, defined as the probability that a photon travels from the source to a given voxel and then from that voxel to the detector. The objective is to use the set of measurements to solve for the tissue optical properties (unknowns). In the inverse model, the difference between the measured and predicted (forward-modeled) signals is minimized to obtain the unknown parameters. For simplicity, it is assumed that only the absorption and/or fluorescence properties exhibit spatial variations.
The other properties, such as the scattering parameter, are assumed to be constant. For solving the heterogeneous diffusion equation (Eq. 1), the Born or Rytov expansion of the photon waves (Φ(r)) is used, and the unknowns are solved for with the algebraic reconstruction technique (ART). The details of the technique have been explained elsewhere [42, 44, 100]; here the basic mathematical approach is described. For the case of absorption heterogeneity, μ_a can be divided into two parts, a homogeneous background (μ_a⁰) and a spatially varying heterogeneous part (δμ_a): μ_a = μ_a⁰ + δμ_a(r). In the Born expansion, the total photon density wave Φ(r_s, r) from a source at r_s measured at r is divided into a linear superposition of its incident (homogeneous) and scattered (heterogeneous



parts): Φ(r_s, r) = Φ₀(r_s, r) + Φ_sc(r_s, r). The Born approximation also assumes that the scattered wave is much smaller than the incident field, i.e., Φ_sc ≪ Φ₀. One then obtains the heterogeneous solution as [44]

Φ_sc(r_s, r_d) = ∫ Φ₀(r_s, r) O(r) G(r − r_d) dr,    (9)

where O(r) = −vδμ_a(r)/D is the heterogeneity term for the case of absorption contrast, and G(r − r_d) = exp(ik|r − r_d|)/(4π|r − r_d|) is the Green's function, or free propagator. The solution implies that photons pass from the source position r_s to some position r, scatter with an amplitude proportional to the heterogeneity (δμ_a), and then propagate from position r to a detector at r_d. In the Rytov approximation, the photon density wave is expanded in exponential form: Φ(r_s, r) = exp(Φ₀(r_s, r) + Φ_sc(r_s, r)). The Rytov approximation places no restriction on the magnitude of the scattered field but assumes that the scattered field is slowly varying and much smaller than the perturbation term, |∇Φ_sc|² ≪ vδμ_a/D; the solution then simplifies to

Φ_sc(r_s, r_d) = (1/Φ₀(r_s, r_d)) ∫ Φ₀(r_s, r) O(r) G(r − r_d) dr,    (10)

where O(r) = −vδμ_a(r)/D is the same heterogeneity term as in the Born case. The Rytov approximation is less restrictive than the Born approximation and has been shown to be more suitable for most biological tissues. The fluorescence solution was already given in Eq. 8. For image reconstruction of the unknown parameter, the region of interest is divided into voxels, and the integral equations are discretized, resulting in a set of linear equations (y = Wx). For example, discretizing the Rytov integral gives

Φ_sc(r_si, r_di) = (1/Φ₀(r_si, r_di)) Σ_{j=1}^{N} Φ₀(r_si, r_j) O(r_j) G(r_j − r_di) h³ = Σ_{j=1}^{N} W_ij^R O(r_j),    (11)

where h³ is the voxel volume and W_ij^R = Φ₀(r_si, r_j) G(r_j − r_di) h³ / Φ₀(r_si, r_di) is the Rytov weight, indicating the relative importance of each voxel.
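A compact sketch of ART (Kaczmarz row-action iterations) applied to the discretized system y = Wx. The weight matrix and measurements below are a tiny synthetic example standing in for the real Rytov weights and scattered-wave data.

```python
# ART (Kaczmarz row-action iterations) for the discretized linear system
# y = W x from Eq. 11. W and y are a tiny synthetic example, not a real
# Rytov weight matrix; x stands for the per-voxel heterogeneity O(r_j).

def art(W, y, n_sweeps=200, relax=0.5):
    """Repeatedly project the estimate onto each measurement hyperplane."""
    x = [0.0] * len(W[0])
    for _ in range(n_sweeps):
        for row, yi in zip(W, y):
            residual = yi - sum(w * xj for w, xj in zip(row, x))
            norm2 = sum(w * w for w in row)
            if norm2 == 0.0:
                continue                      # skip all-zero rows
            step = relax * residual / norm2
            x = [xj + step * w for xj, w in zip(x, row)]
    return x

W = [[1.0, 0.2, 0.0],        # 3 measurements (source-detector pairs)
     [0.1, 1.0, 0.3],        # x 3 voxels
     [0.0, 0.2, 1.0]]
x_true = [0.0, 0.5, 0.1]     # only the middle voxel strongly perturbed
y = [sum(w * xt for w, xt in zip(row, x_true)) for row in W]

x_rec = art(W, y)            # converges to x_true for this consistent system
```

Real DOT problems are much larger and underdetermined, so the relaxation parameter and iteration count act as implicit regularization.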

Instruments
In this section, examples of the instruments used in preclinical and clinical measurements are presented. First, a multimodal spectroscopic instrument that was used in clinical PDT studies is described briefly; then a diffuse optical tomography instrument that can work in absorption and fluorescence modes for small-animal imaging is discussed. Finally, the optical probes utilized for spectroscopic measurements in preclinical and clinical settings are described.

Multimodal Optical Instrument
The multimodal instrument can assess the several parameters that may be required for understanding the complete picture of a therapy's mechanisms. For example, PDT is a relatively complicated therapy involving mechanisms related to oxygen, PS, and light; to understand all three, one needs to quantify the optical, photosensitizer, and oxygen parameters. Optical parameters allow modeling



Fig. 7 (a) Picture of the multimodal clinical optical instrument. (b) Diagram of the instrument. (c) The instrument during measurements in the operating room for PDT monitoring

light distribution in the tissue. To assess the PS distribution and oxygen, one needs to quantify the PS fluorescence, absorption properties, and tissue oxygen saturation. An example of a multimodal instrument was recently described [19]. The instrument performs sequential measurements of blood flow (by the DCS method), optical parameters, blood oxygenation, and blood volume (by the DRS method), and fluorescence (by the DFS method). Figure 7a, b shows the picture and schematic diagram of the instrument, respectively, while Fig. 7c shows the instrument in the operating room for PDT monitoring. The DCS instrument has a long-coherence-length laser (CrystaLaser), four single-photon-counting detectors (SPCD), and a custom-built autocorrelator board. The photodetector outputs are fed into the correlator board, and the intensity autocorrelation functions and photon arrival times are recorded by a computer. After the blood flow measurements, a second laptop initiates the fluorescence and reflectance data acquisition. In absorption mode, broadband diffuse reflectance measurements are taken by illuminating the tissue with a tungsten-halogen lamp and collecting the light with one channel of a two-channel spectrometer. In fluorescence mode, a ~410 nm laser diode excites most photosensitizers in their Soret bands, and the fluorescence spectra, after passing through a 500 nm long-pass filter, are collected with the second channel of the spectrometer. The light sources can be optimized according to the fluorophore of interest.


Fig. 8 Fast and dense sampling whole-body fluorescence tomography instrument. The main parts of the instrument include a fast galvo scanner for different source positions and a CCD camera for detection

Fig. 9 Different methods of optical imaging. (a) A contact probe can be placed directly on the skin. (b) Different source-detector separations define the interrogation volume of the light. (c) Face of a smaller handheld probe for imaging the oral cavity. (d) Picture of the handheld oral probe. (e) Noncontact probes use lenses to project an image of the fiber onto the skin, so that the probe need not touch the surface. (f) Interstitial fibers are placed directly into the tumor and are well suited for deep-seated or thick tumors

Diffuse Optical Tomography Instrument
The DOT instrument can be utilized to monitor physiological parameters, such as THC, or the concentration of fluorescent compounds, such as PSs, in small animals and in humans for deep-seated or thick tumors. These instruments usually work in both absorption and fluorescence modes depending on the chosen filters. If the filter is removed, the instrument images the intrinsic contrast of the absorption and scattering parameters. If a fluorescence long-pass filter is used, the instrument measures the fluorescence contrast of fluorophores, either intrinsic fluorescence (autofluorescence) or exogenously administered fluorophores such as photosensitizers. An example of a small-animal imaging instrument is shown in Fig. 8. The instrument allows dense source and detector sampling with a fast galvo scanner and a CCD detector for improved resolution and sensitivity. In this setup, a laser diode (LD) of the appropriate wavelength to excite the desired fluorophore is directed to a beam splitter (BS), which splits the laser beam in two: one beam goes to a photodiode (PD) to monitor laser intensity fluctuations, and the other is directed to a lens (L1) that focuses it onto a fast galvo scanner. The two-dimensional galvo (XYGal) scans along the x and y dimensions, creating dense source positions; on the detection side, a lens (L2) coupled with a bandpass emission filter (F) and a CCD camera collects the emitted fluorescence light.



Fig. 10 (a) Relative blood flow (rBF) measured by DCS shows the acute effects of CA4P. (b) Mean percent change (±SD) in rBF for N = 9 mice. (c) Microbubble contrast-enhanced power Doppler ultrasound; yellow regions indicate contrast enhancement in perfused blood vessels, with uniform enhancement before CA4P. (d) Blood perfusion is reduced post treatment

Optical Probes
Probe design is very important in diffuse optical techniques: each probe must be designed for its intended application. As in the ultrasound (US) technique, the primary components of the diffuse optical instrument remain the same, but the probe interfacing with the tissue needs to be changed according to the particular physiological application. The source-detector separations should be arranged according to the tumor depth: if a tumor is deep, large source-detector separations are needed, whereas for a superficial tumor a small probe with small source-detector separations is desirable. The golden rule is that the source-detector separation needs to be at least twice the depth of the tumor being investigated. There are three types of optical probes: handheld contact, noncontact, and interstitial probes. Handheld Probe. Figure 9a shows the case of a relatively large probe used in clinical measurements of head and neck tumor patients, where deep photon penetration is required. The fibers were arranged such that the tissue was imaged with many source-detector separations (Fig. 9b). Generally, the probe consists of a simple black pad with fibers placed on it. The pad can be constructed of a plastic or rubber material according to the desired flexibility; the black color eliminates background light leakage. A smaller probe for use in the oral cavity is similar in concept, but the source and detector fibers are held in a stainless steel tube. Figure 9c, d shows the end face and a picture of the probe containing the fibers used for diffuse reflectance (white), diffuse fluorescence (blue), and diffuse correlation (red) spectroscopy measurements. This probe can be used for measuring superficial malignancies by placing its tip directly on the tissue surface. Noncontact Probe. For continuous measurements in animal models to capture dynamical information during therapies such as antivascular therapy and PDT, a noncontact probe (Fig. 9e) is used.
In these cases a noncontact probe is superior because it allows continuous measurements during treatment without shading the surface. It is also more sanitary and does not perturb the surface with pressure. Care should be taken in the probe design to minimize the effects of subject movement. Interstitial Probe. Although the instrument stays the same, the handheld surface probe used for noninvasive measurements is ill suited for interstitial light delivery, and the probe-tissue interface must be changed accordingly. For an "interstitial" probe, source and detector fibers are placed inside a catheter and inserted



Fig. 11 (a) Mean percent change (±SD) of StO2 in mice (N = 5). (b) The hypoxia marker shows no binding for the control mice. (c) Significant binding (shown in red) in the treated tumors, indicating induced hypoxia

directly into the tumor (Fig. 9f). This allows optical methods to be used on deep tumors far below the skin surface and in internal organs.
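As a minimal illustration of the source-detector separation rule of thumb stated above, the following sketch encodes the geometric guideline only (the function name and units are ours, not from the text; actual penetration also depends on the tissue optical properties):

```python
def min_source_detector_separation_mm(tumor_depth_mm: float) -> float:
    """Rule of thumb for diffuse optics probe design: the
    source-detector separation should be at least twice the
    depth of the tumor being interrogated."""
    return 2.0 * tumor_depth_mm

# A tumor 15 mm below the surface calls for separations of >= 30 mm,
# while a 3 mm superficial lesion can be probed with a compact probe.
```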

Preclinical Applications
In this section, several examples of animal studies are presented. Preclinical work is important in drug and experimental therapy development, since each candidate must be tested for adverse effects, optimal dose, and optimized regimens that define the therapeutic dose. These tests need to be performed before translating the concepts to patients in the clinic. Animal studies can also supplement clinical work and provide insight into related biological mechanisms. Several examples from antivascular and photodynamic therapies are shown below.

Monitoring Antivascular Therapy
There is ongoing interest in vascular targeting agents that can modulate the sensitivity and response of tumors by modifying oxygenation and blood flow [103–106]. Monitoring these drugs frequently, and possibly continuously, can therefore provide insights into their working mechanisms and has potential value for their clinical evaluation. Combretastatin A4 phosphate (CA4P) is an example of an antivascular drug that disrupts tumor blood vessels [107–109]. Its effects were tested in K1735 malignant mouse melanoma tumor models. A representative near real-time blood flow trace is shown in Fig. 10a, indicating a continuous blood flow decrease [60]. As Fig. 10b summarizes, across nine mice the average blood flow decreased significantly (p < 0.001), by 64 %, within 1 h. Power Doppler ultrasound images confirmed the optical results. Imaging with a microbubble contrast agent showed that K1735 is a well-perfused tumor model, with nearly the entire tumor enhanced by the microbubbles (yellow areas in Fig. 10c). One hour after injection of CA4P, many of the vessels had shut down and perfusion was substantially reduced, as indicated by the less enhanced vasculature in Fig. 10d. The blood flow decrease was accompanied by a significant (p < 0.002) decrease in blood oxygen saturation (StO2), from 42 % to 14 % in the tumor, due to the reduced blood supply (Fig. 11a). The noninvasive StO2 measurements were confirmed by microscopic analysis of binding of a hypoxia marker, nitroimidazole (EF5). There was no measurable EF5 binding in the control tumor (Fig. 11b), but the treated tumor showed considerable binding (red fluorescence area in Fig. 11c), indicating that CA4P induced substantial hypoxia. It should be noted that the noninvasive optical measurements and the microscopic analysis are only indirectly related, since StO2 quantifies blood oxygen saturation in the microvasculature while hypoxia reflects oxygen levels in the cells.


Fig. 12 Continuous blood flow measurements during PDT. (a) Schematic of the setup for continuous PDT and blood flow measurements. (b) Picture of the noncontact optical probe, which allows continuous measurements while the PDT treatment light is on, with circular beam delivery to the target tumor area while the surrounding normal tissue is shielded. (c) Relative blood flow changes during ALA-PDT at different fluence rates. Error bars represent the standard error over five animals (N = 5)

Monitoring Photodynamic Therapy
PDT uses light to activate a photosensitizer (PS) in the presence of oxygen for cell and tissue destruction. At a specific time after administration, the PS typically accumulates more in the diseased site than in surrounding normal tissue. At the optimal time point of tumor uptake, light at a wavelength determined by the optical absorption properties of the PS is delivered at a predetermined power to activate the PS and create a photodynamic reaction. Due to the specific uptake of the PS and the localized light illumination, photodynamic therapy (PDT) is a local therapy rather than a systemic therapy like chemotherapy, and it can be repeated without the accumulated side effects of conventional therapies such as chemo- or radiation therapy. The efficacy of PDT is greatly affected by the tumor microenvironment [114, 115]. Tissue oxygen level is crucial for effective PDT, since the photochemical reactions necessary for cell killing occur only in the presence of oxygen [116]. Tissue oxygenation is strongly affected by vascular parameters such as blood flow and blood oxygenation. Moreover, there needs to be enough photosensitizer present in the target tissue. During PDT, the PS is consumed dynamically; thus, the efficacy of PDT depends both on the PS level before therapy [117] and on PS consumption during therapy, also called photobleaching [90, 118, 119]. Since most PSs have high fluorescence quantum efficiencies, their fluorescence can be utilized to assess the PS content. Quantifying vascular parameters and PS fluorescence is therefore crucial for monitoring PDT response [120].


Fig. 13 Effective PDT dose, defined as the light dose delivered while blood flow remained higher than 50 % of its initial value. (a) The high fluence rate induced rapid blood flow shutdown during the early phase of the treatment; only ~2.1 J/cm2 of light could be delivered while blood flow was above 50 % of its initial value. (b) The low fluence rate did not induce a rapid blood flow decrease; blood flow remained high for most of the treatment, and an effective dose of ~45 J/cm2 could be delivered while blood flow was sufficiently high (above 50 % of its initial value) during PDT

The incidence of nonmelanoma skin cancers (NMSCs) is increasing drastically worldwide, leading to a growing demand for effective treatment modalities [121, 122]. Conventional approaches such as surgery are unattractive due to significant healing, cosmetic, and functional morbidity as well as high financial costs. Topical 5-aminolevulinic acid (ALA)-based PDT, with efficacy similar to surgery and substantially better cosmetic and functional outcomes, has recently become an attractive treatment option, especially for cases with multiple sites and large areas [123]. PDT is also an emerging treatment option for cancer of the head and neck [124–131]. The size of the treatment beam can be adjusted so that the whole wide-field mucosa in the oral cavity can be treated easily, without the fibrotic complications associated with radiation therapy. For large and thick tumors, interstitial PDT can be applied, similar to brachytherapy [132–138]. In the following subsections, applications for skin and head and neck tumor models are given.
Skin Cancer Monitoring
Many photosensitizers lead to vascular destruction, which is one of the mechanisms by which PDT kills cells and destroys tumors. Since oxygen is crucial for PDT, vascular disruption early in treatment must be identified and prevented for optimized PDT treatment. These vascular effects vary with the PDT regimen, for example with the treatment light fluence rate. Early identification of vascular disruption can allow a time window for adjusting the light in real time to improve the effectiveness of therapy. It is thus desirable to continuously monitor blood flow and find the optimal time at which these vascular effects improve efficacy. Figure 12a, b shows a setup for continuous monitoring of relative blood flow (rBF) during PDT. The treatment laser wavelength (630 nm or 660 nm), which is different from the blood flow laser wavelength of 785 nm, can be chosen based on the desired photosensitizer.
The light shield blocks the treatment light from irradiating the surrounding healthy tissue [139]. The fluence rate is an important determinant of PDT response, including the vascular effects [140–145]. Figure 12c summarizes a preclinical study of colon 26 tumor-bearing mice treated with topical ALA at fluence rates of 10, 35, and 75 mW/cm2. It is clear that ALA-PDT induced acute early vascular changes: a quick initial drop was followed by an increase and a final gradual return toward initial levels at all irradiances. These results show that ALA-PDT induced early blood flow changes and that the changes were fluence rate dependent. The differences in blood flow decrease with respect to fluence rate were statistically significant (p < 0.05). The early decrease may be due to constriction of the blood vessels caused by a lack of nitric oxide, since nitric oxide production decreases under deoxygenated conditions and


Fig. 14 Depth-resolved reconstructions of fluorescence yield images of a mouse tumor and surrounding normal areas. Images at depths Z = 5, 9, and 13 mm are shown from left to right, with Z = 5 mm closest to the source plane and Z = 13 mm closest to the detector plane and the treatment light of the imaging slab. PDT induced substantial, depth-dependent changes in yield contrast, obtained by subtracting "Post-PDT" from "Pre-PDT" at each depth, with the largest change occurring at the slice closest to the treatment light (Z = 13 mm)

deoxygenation can be caused by oxygen consumption during PDT [146]. Following this initial constriction, there may have been a temporary burst of blood flow, resulting in an increasing trend. Since the lowest fluence rate (10 mW/cm2) maintains higher blood flow throughout PDT treatment, it allows more oxygen into the tissue and is thus a more favorable treatment regimen.
Head and Neck Cancer Monitoring
As mentioned in the previous section, most systemic photosensitizers, including HPPH, are vascular disrupting agents that can induce significant changes in blood flow and oxygenation. In this case the fluence rate dependence of the vascular effects can be even more pronounced. Continuous measurements are advantageous for finding the potential time window for optimal PDT. Previous work showed that the kinetics of therapy-induced blood flow changes, measured continuously, might be predictive of PDT response [147]. Figure 13 shows the results of two different fluence rate treatments in a squamous cell carcinoma (SCC) model of head and neck cancer treated with HPPH-PDT, using the same setup and probe shown in Fig. 12a, b. The "effective" PDT dose is defined as the dose delivered while tumor blood flow is greater than 50 % of its initial value, since tissue oxygenation remains relatively high when there is adequate blood supply [148]. Figure 13a shows representative blood flow changes during relatively high fluence rate PDT (75 mW/cm2), which induces significant acute blood flow



Fig. 15 Extracted functional parameters from a head and neck patient before and after PDT. (a) Relative blood flow (rBF (%)). (b) Blood volume fraction (BVF (%)). (c) Blood oxygen saturation (StO2 (%)). (d) HPPH concentration (µM)

shutdown and an effective treatment dose of only about 2 J/cm2, while the administered dose was 100 J/cm2. In contrast, at the lower fluence rate of 14 mW/cm2 (Fig. 13b), blood flow remains relatively high compared to baseline, providing improved tissue oxygenation during PDT with an effective dose of about 45 J/cm2. These are crucial findings with regard to PDT efficacy, since they imply that the effective local dose can be very different from the administered dose and that the blood flow parameter may provide real-time feedback to clinicians for adjusting the treatment light to improve efficacy.
Monitoring Thick Head and Neck Tumors with Fluorescence Tomography
Previous preclinical and clinical studies have demonstrated that HPPH-mediated PDT is an effective treatment option for superficial head and neck lesions in the oral cavity [19, 120]. However, many tumors occur deeper in tissue, or they grow to be very thick, which can make surface illumination and measurement impractical. Interstitial PDT, where catheter-based fibers are inserted directly into the tumor, can overcome this limitation, although treating thicker and deeper tumors remains challenging due to insufficient PS or light distributions. Recently there has been increasing interest in interstitial treatment of larger and deeper tumors, such as those at the base of the tongue or large neck nodes [128, 149, 150]. For effective PDT dosimetry, one needs to know the PS dose in the tissue; thus, it is desirable to know the volumetric PS distribution [31]. It has also been shown that changes in PS distribution (photobleaching) during PDT are related to singlet oxygen generation and thus to PDT efficacy [112]. PS photobleaching can be monitored through changes in drug fluorescence yield. The application of fluorescence diffuse optical tomography (FDOT) for quantifying PDT-induced changes in fluorescence yield in a clinically relevant head and neck tumor model has been reported [151].
Depth-resolved quantitative maps of fluorescence yield were obtained before and after PDT (75 mW/cm2, 10 min, 45 J/cm2). An SCID mouse was inoculated subcutaneously with human head and neck tumor tissue obtained from a patient. Fluorescence and excitation scans were performed pre-PDT, 24 h post injection of HPPH, and the PDT treatment was then delivered. After the PDT treatment was finished, fluorescence and excitation scans were performed again for post-PDT quantification. Figure 14 shows the reconstructed fluorescence yield images at multiple depths (Z = 5, 9, 13 mm) for pre-treatment, post-treatment, and the difference (pre – post) used to determine the PDT-induced changes.


Fig. 16 STAT3 cross-linking analysis from a head and neck patient before and after PDT

Table 1 Performance of individual parameters at discriminating responders from nonresponders (dysplasia group)

                 Pre-PDT                    Changes
                 Sens.    Spec.    AUC      Sens.    Spec.    AUC
Blood flow       72.7     66.7     0.58     45.5     100      0.70
Blood oxy.       100      66.7     0.79     54.5     100      0.79
Blood vol.       45.5     100      0.55     72.7     66.7     0.70
Fluorescence     90.9     66.7     0.67     100      33.3     0.52

The results show that HPPH preferentially accumulated in the tumor, which allowed accurate localization of the tumor. It is also clear that HPPH-mediated PDT induced significant photobleaching and that the photobleaching was depth dependent. Photobleaching was highest (~20 %) close to the detector plane (Z = 13 mm), where the PDT treatment light was administered and thus the light dose at that plane was higher. These results verify that our FDOT system can quantify changes in photosensitizer distribution at different depths and that second-generation photosensitizers such as HPPH are feasible for imaging thick tumors.
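The fluence-rate studies above rely on the "effective" PDT dose: light counted only while relative blood flow stays above 50 % of its baseline value. A minimal sketch of that bookkeeping follows (the function name, uniform sampling interval, and threshold argument are our assumptions, not the authors' code):

```python
def effective_pdt_dose_J_cm2(rbf_rel, fluence_rate_mW_cm2, dt_s=1.0, threshold=0.5):
    """Light dose (J/cm2) accumulated only during samples where relative
    blood flow (rbf_rel, with 1.0 = pretreatment baseline) exceeds
    `threshold`. Assumes uniform sampling every dt_s seconds."""
    effective_seconds = dt_s * sum(1 for f in rbf_rel if f > threshold)
    # mW/cm2 * s = mJ/cm2, so divide by 1000 to get J/cm2
    return fluence_rate_mW_cm2 * effective_seconds / 1000.0

# If flow stays above 50 % of baseline for the full 600 s of a
# 75 mW/cm2 treatment, the whole 45 J/cm2 counts as effective:
# effective_pdt_dose_J_cm2([1.0] * 600, 75.0) -> 45.0
```

If flow collapses early, only the light delivered before the collapse counts, which is how a 100 J/cm2 administered dose can shrink to an effective dose of a few J/cm2.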

Clinical Applications
In this section, several clinical studies are presented that demonstrate the clinical utility of diffuse optical methods for therapy monitoring. The ultimate aim is to provide noninvasive optical biomarkers for assessing therapy response early, so that a clinician can adjust the treatment dose or change the therapy regimen entirely. Since they are portable and more widely available than other modalities, optical methods are advantageous for frequent, bedside monitoring of patients. Examples from PDT, chemoradiation, and chemotherapy are shown in turn.

Monitoring of Photodynamic Therapy of Head and Neck Cancer in the Oral Cavity

The multimodal instrument described in section "Multimodal Optical Instrument" was used to measure drug concentration and vascular parameters such as blood flow and oxygenation in clinical settings of oral cancer, as described previously [19, 152]. The instrument was utilized in a clinical trial of Photochlor (HPPH)-mediated PDT in oral cancer patients [153]. HPPH is a second-generation PS developed at Roswell Park Cancer Institute (RPCI) [83]. It has an in vivo absorption peak at 665 nm that allows enhanced tissue


Fig. 17 Combined three-parameter classifier in predicting the PDT response of the dysplasia group

penetration and less skin phototoxicity compared to Photofrin, the first FDA-approved PS, which has an absorption peak at 630 nm. Figure 15 shows a set of data obtained from noninvasive measurements in a patient with squamous cell carcinoma of the oral cavity. The measurements were performed in the operating room 1 day after systemic administration of HPPH, at pre- and post-PDT. Multiple measurements were obtained by positioning a handheld probe (Fig. 9c, d in section "Diffuse Optical Tomography Instrument") at various locations. The treatment light was delivered by a single quartz lens fiber with a 3 cm beam diameter, slightly larger than the lesion diameter. Figure 15a shows that mean tumor blood flow (rBF (%)) decreased by ~85 % following PDT. These results suggest that HPPH-PDT induced significant vascular changes, consistent with the vascular disrupting effects of HPPH in tumor tissue. The reduction in blood flow was accompanied by changes in blood volume fraction (BVF (%)) and blood oxygen saturation (StO2 (%)). The mean baseline value of BVF was ~2.9 % but decreased to ~1.7 % after PDT (Fig. 15b). The mean StO2 decreased from ~76 % to ~36 % (Fig. 15c). The tumor-to-normal tissue ratio of HPPH uptake was ~2.3 (Fig. 15d), and the HPPH concentration decreased by ~41 % due to photobleaching. In general, changes in surrounding normal tissue were smaller; changes in normal tissue may be due to physiological fluctuations in the operating room or to possible tissue sampling errors originating from point measurements. These changes were supported by a molecular measure of the oxidative photoreaction, obtained by quantifying the cross-linking of signal transducer and activator of transcription 3 (STAT3) [154–156]. Biopsy tissue analyzed from this patient showed 19.0 % STAT3 conversion (Fig. 16), suggesting an effective photoreaction when compared to previous tumor biopsy analyses that showed maximal STAT3 cross-linking with a median of ~12 % [19].
It is desirable to investigate the predictive power of these noninvasive parameters. To evaluate the sensitivity and specificity of each parameter in predicting the clinical response, patients with dysplasia (N = 12) were grouped into responders (N = 7) or nonresponders (N = 5) based on clinical assessment of the response using 3-month post-treatment biopsy results. Responders were defined as showing either complete absence of a visible lesion with negative biopsy or a size reduction of the lesion of more than 50 %; nonresponders had stable disease (size reduction of less than 50 %) or progressive disease. Sensitivity and specificity are defined as the percentage of correctly predicted responders and nonresponders to


Fig. 18 (a) Diagram showing the handheld probe with the scan direction on a neck nodal mass. (b) One-dimensional profile of tumor contrast along the scan direction. (c) Tumor rBF changes during chemoradiation, averaged over seven patients who showed a complete response to the treatment. (d) Average StO2 changes of complete responders. (e) Average THC of complete responders. (f) rBF kinetics of a partial responder. (g) StO2 kinetics of a partial responder. (h) THC kinetics of a partial responder


therapy, respectively. The sensitivity, specificity, and area under the curve (AUC) for the individual parameters (both pre-PDT values and changes) are shown in Table 1. As can be seen, the individual parameters have different sensitivities and specificities in predicting the response. Although their predictive power is quite good, individual parameters alone are not always the best predictors of response. When three parameters related to PDT dose are combined (for example, initial blood oxygen saturation, change in blood flow, and change in HPPH fluorescence), discrimination between responders and nonresponders becomes much stronger. Initial blood oxygen saturation is related to available tissue oxygen, a required element for effective PDT. Blood flow changes in the lesion are related to the effective light dose, and changes in HPPH fluorescence are related to the amount of photosensitizer consumed, both of which are also necessary for effective PDT. A logistic regression model based on the three PDT-related parameters was used to combine them into a single predictor. With this model, a receiver operating characteristic (ROC) curve was calculated (Fig. 17), yielding 100 % sensitivity, 80 % specificity, and an AUC of 0.91 for the dysplasia group, which is considered excellent. These results support the conclusion that diffuse optical spectroscopies permit noninvasive monitoring and prediction of PDT response in clinical settings for head and neck cancer patients.
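The response-prediction metrics used above (sensitivity, specificity, and AUC in Table 1 and Fig. 17) can be computed from predicted labels and classifier scores as follows. This is a generic sketch in plain Python, not the authors' analysis code; in the study the scores would come from the logistic regression over the three PDT-related parameters:

```python
def sensitivity_specificity(y_true, y_pred):
    """y = 1 for responder, 0 for nonresponder. Sensitivity is the
    fraction of true responders predicted correctly; specificity is
    the fraction of true nonresponders predicted correctly."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

def roc_auc(y_true, scores):
    """AUC via its rank interpretation: the probability that a randomly
    chosen responder receives a higher score than a randomly chosen
    nonresponder (ties count as 1/2)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 means the score carries no discriminating information, while 1.0 means the score separates responders from nonresponders perfectly; the combined classifier's 0.91 sits near the top of that range.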

Monitoring of Chemoradiation Therapy of Head and Neck Cancer

Sunar et al. [59] applied DCS and DRS to monitor early relative blood flow (rBF), tissue oxygen saturation (StO2), and total hemoglobin concentration (THC) responses to chemoradiation therapy in patients with superficial head and neck tumor nodes. The noninvasive measurements consisted of pre-therapy measurements as a baseline, followed by weekly measurements until the treatment was completed. The radiation treatment was fractionated on a daily basis over about 7 weeks, and patients were concurrently treated with weekly carboplatin and paclitaxel. The noninvasive DCS/DRS measurements were performed by placing a handheld probe (Fig. 18a) on the tumor and on the forearm muscle (as a control to assess systemic changes). For both DCS and DRS, the largest source-detector separation was 3 cm, with a penetration depth of about 2 cm.

Fig. 19 Percent change in oxyhemoglobin concentration during the first week of chemotherapy in responding and nonresponding patients. The number of tumors measured each day is indicated as n. The largest separation between the two groups occurred on the first day



Fig. 20 (a) A DOT example for chemotherapy monitoring of a complete responder. Only one reconstruction slice is shown for simplicity. The tumor is localized in the THC image contrast, which diminished in the 4th-cycle and post-therapy scans. The dynamic contrast-enhanced MRI image also shows a well-localized tumor mass at pre-therapy and tumor shrinkage in the mid-therapy and post-therapy scans. (b) Relative blood flow obtained with the handheld DCS probe indicates a blood flow decrease with chemotherapy, confirming the tomographic scans

Figure 18a, b shows a representative handheld scan and its corresponding pretreatment rBF contrast along the scan dimension, respectively. When the probe is on the tumor, the tumor-to-normal tissue contrast is about 2.5, and this contrast diminishes as the probe moves away from the tumor. Similar pretreatment contrast can be obtained for THC and StO2. To quantify the therapy-induced changes, one can obtain the mean peak contrast from multiple measurements, acquired by repositioning the handheld probe at similar locations, and track these values over the time course of the treatment. Figure 18c, d, and e shows the mean averages of rBF, StO2, and THC in seven responders (based on clinical evaluation with no evidence of residual cancer). It is clear from the plots that weekly rBF, StO2, and THC showed different kinetics during therapy, with significant early changes during the first 2 weeks. Average rBF increased (52.7 ± 9.7 %) in the first week and decreased (42.4 ± 7.0 %) in the second week. Averaged StO2 increased from a baseline value of (62.9 ± 3.4 %) to (70.4 ± 3.2 %) at the end of


the second week, and averaged THC showed a continuous decrease from a pretreatment value of (80.7 ± 7.0) µM to (73.3 ± 8.3) µM at the end of the second week and to (63.0 ± 8.1) µM at the end of the fourth week of therapy. Figure 18f, g, h shows the changes in these parameters for a patient who was a partial responder to the therapy. The parameters obtained by the noninvasive optical methods showed a substantially different trend from the average trend of the complete responders: rBF showed a continuous increase, while StO2 and THC also increased during the course of the treatment. Pretreatment computed tomography imaging indicated a large necrotic nodal mass initially, and the tumor was still palpable at the end of the treatment, which was also confirmed by the postsurgical pathology. This study suggests that frequent optical measurements may be utilized for therapy monitoring and that early changes in the optical measurements may be indicative of chemoradiation therapy response. One can focus on the first 2 weeks of the treatment with frequent (e.g., daily) measurements, which can potentially lead to predicting the response at the earliest stage.
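The weekly kinetics above are reported as mean percent change from baseline with a standard error across patients. Assuming simple per-patient normalization to the pretreatment value (our assumption of the bookkeeping; function names are illustrative), the computation is:

```python
from statistics import mean, stdev

def percent_change(values, baseline):
    """Express serial measurements of an optical parameter
    (rBF, StO2, THC, ...) as percent change from the
    pretreatment baseline for one patient."""
    return [100.0 * (v - baseline) / baseline for v in values]

def mean_sem(per_patient_values):
    """Mean and standard error of the mean across patients,
    e.g., for one week's percent-change values."""
    n = len(per_patient_values)
    return mean(per_patient_values), stdev(per_patient_values) / n ** 0.5
```

For example, a tumor whose rBF rises from 100 % to 150 % of baseline in week 1 contributes +50 % to that week's group average.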

Monitoring of Chemotherapy of Breast Cancer
There is extensive work on diffuse optical monitoring of breast chemotherapy [5, 20, 157–163], and interested readers can find the details in recent reviews such as those by Choe and Yodh [95] and Choe and Durduran [164]. Here two representative examples are given: one shows the predictive power of diffuse optics parameters, and the other presents a clinical tomographic approach. The University of California Irvine group utilized DRS to monitor therapeutic response in stage II/III neoadjuvant chemotherapy patients [20]. They showed that DRS indices measured within 1 week of therapy could predict therapy response. The best single predictor was THC, with 83 % sensitivity and 100 % specificity, while the combined parameters of THC and water concentration could discriminate responders from nonresponders with 100 % sensitivity and specificity. This study demonstrated the potential use of optical spectroscopy for predicting an individual patient's response. In more recent work, the same group reported that the functional hemodynamic parameter of oxyhemoglobin concentration could discriminate responders from nonresponders on the first day after chemotherapy treatment [161]. They measured several parameters, including the concentrations of oxyhemoglobin, deoxyhemoglobin, and water, in 24 tumors. Figure 19 shows the observed mean percent changes (from baseline) of oxyhemoglobin concentration in responders and nonresponders during the first week of therapy. Oxyhemoglobin concentration increased significantly in partial and complete responders, whereas it showed a decreasing trend in nonresponders. This study indicates a significant potential impact of optical methods on therapy optimization: very early measures of chemotherapy response offer the possibility of altering treatment strategies for nonresponders, improving therapeutic outcomes and reducing toxic effects, with a corresponding improvement in quality of life.
The imaging instrument at the University of Pennsylvania (UPenn) is very similar to the CCD camera-based DOT instrument mentioned in section "Optical Probes," with the main difference being that the setup is housed under a patient bed; the patient lies prone with her breast inside an imaging chamber filled with a matching fluid whose background optical properties are similar to those of breast tissue. The instrument works in a transmission geometry in which the laser scans the source plane while the CCD camera detects the signal at the detection plane. An additional frequency-domain spectroscopy module quantifies the bulk background optical properties used in the image reconstruction algorithm. A female patient with locally advanced breast cancer (poorly differentiated invasive ductal carcinoma) was measured with the UPenn DOT system and the handheld DCS system. DCE-MRI was also performed at pre-therapy, mid-therapy, and post-therapy time points. The chemotherapy consisted of four cycles of a combination of doxorubicin (brand name Adriamycin) and cyclophosphamide (also called AC


treatment) followed by four Taxol cycles at 2-week intervals. This patient was classified as a complete responder, since pathological analysis of the surgical tissue specimen at the end of therapy showed no residual tumor. Figure 20a shows a DOT reconstruction of THC at pre-therapy, indicating a localized tumor mass. This contrast diminished after four cycles of the AC treatment and at the completion of the whole treatment. Apart from this local tumor contrast, the THC values decreased globally, which may be due to the systemic side effects of chemotherapy, which usually decreases hematocrit levels throughout the body. DCE-MRI images also showed a highly localized tumor at a very similar location, but this contrast was less pronounced in the later scans, and the last scan at the end of the treatment did not show contrast enhancement. Figure 20b shows the blood flow changes measured by the DCS system, indicating that tumor blood flow also decreased over the course of the treatment, supporting the THC observations obtained by DOT.

Summary
With the advent of many novel therapeutic approaches, there is a need for testing and standardization of clinical protocols. Diffuse optical methods can provide dose-related parameters as well as vascular and oxygen metabolism-related parameters for assessing therapy response early, giving clinicians a feedback tool for adapting and ultimately optimizing the intervention accordingly. Since the techniques are noninvasive, the instruments portable, and the extracted parameters directly clinically relevant metabolic quantities, the most significant impact diffuse optical methods are expected to provide is prediction of therapy response at the earliest time point, which can translate into survival benefit for patients and a reduction of health-care costs.

Acknowledgments
We would like to thank Dr. Arjun G. Yodh for his supervision and mentorship at the University of Pennsylvania, which initiated most of the work presented here. We also acknowledge Dr. Britton Chance for his excellent mentoring and guidance. We thank Shoko Nioka and Bruce J. Tromberg for their continuous support. Additional thanks go to current and past researchers of the Yodh lab at Penn, particularly Turgut Durduran, Regine Choe, Guoqiang Yu, Chao Zhou, Soren D. Konecky, Kijoon Lee, Hsing-wen Wang, David R. Busch, Alper Corlu, and Leonid Zubkov. U. Sunar acknowledges support from the NCI grants P30CA16056 (Startup grant) and CA55791 (Program Project Grant).

Handbook of Photonics for Biomedical Engineering DOI 10.1007/978-94-007-6174-2_26-1 # Springer Science+Business Media Dordrecht 2014

Handbook of Photonics for Biomedical Engineering DOI 10.1007/978-94-007-6174-2_28-1 # Springer Science+Business Media Dordrecht 2014

Exploiting Complex Media for Biomedical Applications Youngwoon Choi, Moonseok Kim and Wonshik Choi* Department of Physics, Korea University, Seoul, South Korea

Abstract “Turbidity” caused by multiple light scattering distorts the propagation of waves and thus undermines optical imaging. For example, the optical turbidity of translucent biological tissues limits imaging depth and energy transmission. However, recent advances in wavefront-sensing and wavefront-shaping technologies have opened the door to imaging and controlling wave propagation through a complex scattering medium. In this chapter, a method called turbid lens imaging (TLI) is introduced that records a transmission matrix (TM) of a scattering medium, characterizing the input–output response of the medium. Knowledge of this TM allows one to recover the incident wave from the distorted transmitted wave, effectively converting the highly complex medium into useful imaging optics: the image distortion introduced by a scattering medium can be eliminated and a clean object image retrieved. TLI was also adapted for imaging through a multimode optical fiber, itself a scattering medium, for endomicroscopic imaging. Beyond imaging, the TM was used to enhance light energy delivery through a highly scattering medium; this seemingly implausible task was made possible by coupling light into the resonance modes of the medium, called transmission eigenchannels. Together, these studies suggest that TLI will lead to important applications in deep-tissue optical bioimaging and disease treatment.
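The eigenchannel-based energy delivery mentioned above can be illustrated numerically. In the sketch below, the medium is modeled as a random complex Gaussian matrix rather than a measured TM (an assumption for illustration only, not the experimental system of this chapter): the singular value decomposition of the TM yields the eigenchannel input wavefronts, and coupling into the channel with the largest singular value transmits substantially more energy than a random input does.

```python
import numpy as np

rng = np.random.default_rng(0)
n_out, n_in = 64, 64

# Model the medium as a random complex Gaussian transmission matrix t:
# output field = t @ input field (illustrative stand-in for a measured TM).
t = (rng.standard_normal((n_out, n_in))
     + 1j * rng.standard_normal((n_out, n_in))) / np.sqrt(2 * n_in)

# SVD of the TM: the (conjugated) rows of vh are the eigenchannel
# input wavefronts; singular values set each channel's transmittance.
u, s, vh = np.linalg.svd(t)

# Couple into the eigenchannel with the largest singular value.
best_input = vh[0].conj()                        # unit-norm input field
tau_best = np.linalg.norm(t @ best_input) ** 2   # transmittance = s[0]**2

# Compare with an unshaped (random) unit-norm input.
random_input = rng.standard_normal(n_in) + 1j * rng.standard_normal(n_in)
random_input /= np.linalg.norm(random_input)
tau_rand = np.linalg.norm(t @ random_input) ** 2

print(f"transmission, best eigenchannel: {tau_best:.3f}")
print(f"transmission, random input:      {tau_rand:.3f}")
```

For a square Gaussian TM the best eigenchannel transmits roughly four times the average, mirroring the enhancement principle described in the abstract.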

Keywords Turbidity; Scattering medium; Interferometric microscope; Transmission matrix; Turbid lens imaging; Endomicroscopy; Transmission eigenchannel

Introduction Multiple light scattering in disordered media such as biological tissues distorts waves propagating through the media. This has been detrimental to the performance of optical imaging, especially in imaging depth. While there have been efforts to mitigate the effect of light scattering, such as the use of adaptive optics [1], their success has been limited to weakly scattering media. In recent years, there have been studies that not only overcome the effect of light scattering but also exploit multiple scattering in a useful way. For example, it was demonstrated that with proper shaping of an incident wave, a disordered medium can focus a beam more sharply than the diffraction limit [2]. In a recent study, it was also demonstrated that image resolution can be enhanced beyond the diffraction limit with the use of multiple scattering [3]. The rationale behind these studies is that even for a highly scattering medium, the wave distortion is deterministic as long as the

*Email: [email protected]


measurement time is shorter than the time scale of the perturbation of the medium. Shaping the wavefront of an incident wave can then control the propagation of waves through the medium. Two main approaches to wavefront shaping have been pursued. One is to use feedback control of an incident wave so as to induce constructive interference at a single point on the opposite side of a scattering layer. By combining this feedback control with the so-called memory effect, one can scan the focused spot over a limited area and obtain an image of an object underneath the scattering layer [4–7]. The other approach is to use a wavefront-sensing technique to record the input–output response of a disordered medium. Once this so-called TM is measured, the incoming wave containing an object image can be retrieved from the distorted object image recorded in transmission [3, 8–10]. This is equivalent to using the scattering medium as an imaging optic. In our study, we developed a method called TLI that records a TM of a medium by interferometric microscopy [3, 10]. In this chapter, we review the application of TLI to imaging through disordered media and to enhancing wave transport through them [11]. We also introduce the use of a multimode optical fiber as an imaging optic and the implementation of scanner-free single-fiber endoscopic imaging [12]. Compared with studies that demonstrated fluorescence endoscopic imaging by a point-optimization method [13, 14], this method is capable of recording reflectance images.

Theory: Turbid Lens Imaging A disordered medium distorts an object image by multiple scattering, such that the information transmitted through the turbidity is, as recorded, useless. However, if a turbid medium is characterized by measuring its TM, the image distortion can be eliminated by exploiting the deterministic relationship between the distortion and the TM. A turbid medium is then no longer an obstacle to imaging but a useful optical element; moreover, it can offer benefits that conventional optics lacks. In this section we review the details of the TLI method. The basic principle of TLI, presented in terms of the wave nature of light, is described in section “Concepts of TLI.” In section “Experiment,” experimental details for realizing TLI to reconstruct object images are presented. Next, in section “Synthetic Aperture TLI,” we introduce an extended technique for TLI which uses synthetic

Fig. 1 Object wave propagation through a turbid medium. E_SP(x, η) is an object wave composed of multiple angular plane waves, and E_TP(x, y) is the distorted image produced by the turbid medium. SP is the sample plane where an object is placed and TP is the transmission plane where the object image is taken (Modified from Ref. [3])


Fig. 2 Experimental procedure. (a) Schematic of recording a TM for a disordered medium. The incident angle of a plane wave, (θx, θη), is scanned at SP. The transmitted images at TP are recorded as a function of the incident angle. (b) Representative TM elements. Only the amplitude images are shown, but the corresponding phase images are recorded at the same time. (c) Recording of an object image. (d) The distorted image of an object (Modified from Ref. [3])

aperture image processing to enhance the imaging resolution without measuring a TM of enlarged dimension. In section “Benefit of TLI,” the benefits of TLI – breaking the diffraction barrier and enlargement of the field of view – are described.

Concepts of TLI When an object located at the sample plane (SP in Fig. 1) is illuminated with a plane wave propagating along the z-axis, the wavefront of the light is modified, and the wave transmitted through the object carries the object information. This wave at SP can be decomposed into a superposition of plane waves with different propagation angles, as shown in Fig. 1. The complex electric field of the object image at SP can be expressed as follows:

E_{SP}(x, \eta) = \sum_{k_x, k_\eta} A(k_x, k_\eta)\, e^{i(k_x x + k_\eta \eta)},   (1)

where A(k_x, k_η) is the complex superposition coefficient for each plane wave, and k_x and k_η are the wave-vector components along the x and η directions, respectively. The set of A(k_x, k_η) is known as the angular spectrum of the object image. The electric field E_SP enters the first surface of a turbid medium and exits from the opposite surface after propagation. While propagating through the turbid medium, each plane wave is distorted in its own way, independently of the others. Even after all the plane waves are spatially distorted, the relationship among them, A(k_x, k_η), remains unchanged due to the linear response of the medium. This means that the distorted object image E_TP(x, y) at the transmission plane (TP) can be described as a superposition of distorted plane waves with the same coefficients A(k_x, k_η) as at SP. If each plane-wave component e^{i(k_x x + k_η η)} is distorted into a unique speckle pattern E_tr(x, y; k_x, k_η) after transmitting through the turbid medium, then the distorted object image E_TP(x, y) can be written as


E_{TP}(x, y) = \sum_{k_x, k_\eta} A(k_x, k_\eta)\, E_{tr}(x, y; k_x, k_\eta).   (2)

The set of distorted plane waves E_tr(x, y; k_x, k_η), which are in general complex speckle patterns, constitutes the TM of the turbid medium, describing the output response to each input wave [3, 8, 9, 11]. To obtain the original object image E_SP from the distorted image E_TP, all we need is to find the angular spectrum A(k_x, k_η) imprinted in E_TP. For this purpose, one of the TM elements, E_tr(x, y; k_x, k_η), is projected onto E_TP(x, y). The projection operation can be described as

\langle E_{tr}^{*}(x, y; k_x, k_\eta)\, E_{TP}(x, y) \rangle_{x,y}
  = \sum_{k'_x, k'_\eta} A(k'_x, k'_\eta)\, \langle E_{tr}^{*}(x, y; k_x, k_\eta)\, E_{tr}(x, y; k'_x, k'_\eta) \rangle_{x,y}
  = \sum_{k'_x, k'_\eta} A(k'_x, k'_\eta)\, \delta(k'_x - k_x,\, k'_\eta - k_\eta)
  = A(k_x, k_\eta),   (3)

where E*_tr is the complex conjugate of the TM element and ⟨·⟩_{x,y} denotes a spatial average over the entire image plane. Here we assume the orthogonality of the TM elements; its validity has been experimentally verified in Ref. [3]. If we repeat the projection operation for all the measured TM elements, we obtain the angular spectrum of the object image over the same angular extent as the measured TM. Once the angular spectrum is acquired, the original object image at SP can be reconstructed using Eq. 1. Figure 2 schematically shows the experimental procedure for realizing the TLI method. As shown in Fig. 2a, we first illuminate the turbid medium with a plane wave at SP. The transmission image through the medium is then recorded at TP while the illumination angle at SP is scanned. The TM of the medium is constructed as the set of recorded images, as shown in Fig. 2b. Next, an object is placed at SP and illuminated with a plane wave (Fig. 2c), and the distorted object image is taken at TP (Fig. 2d). With the measured TM and the distorted object image, the projection operation of Eq. 3 is applied to extract the angular spectrum of the object. Once we obtain the spectrum, the object image at SP is reconstructed by Eq. 1.
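The bookkeeping of Eqs. 1–3 can be sketched numerically. In this minimal simulation (not the authors' code), a random orthonormalized basis of complex speckle fields stands in for the measured TM; as noted above, the real TM elements are only approximately orthogonal, so the recovery is correspondingly approximate in practice:

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 64        # pixels per side at the transmission plane (TP)
nk = 32          # number of angular components (TM elements) measured

# Stand-in TM: one complex speckle field E_tr(x, y; k) per illumination
# angle, orthonormalized so that the projection of Eq. 3 is exact.
raw = rng.normal(size=(npix * npix, nk)) + 1j * rng.normal(size=(npix * npix, nk))
q, _ = np.linalg.qr(raw)                  # orthonormal columns
tm = q.T.reshape(nk, npix, npix)          # tm[k] plays the role of E_tr

# Object defined by its angular spectrum A(k), as in Eq. 1
a_true = rng.normal(size=nk) + 1j * rng.normal(size=nk)

# Distorted image at TP: superposition of distorted plane waves (Eq. 2)
e_tp = np.tensordot(a_true, tm, axes=1)

# Projection (Eq. 3): overlap of each conjugated TM element with E_TP
a_rec = np.array([np.sum(np.conj(tm[k]) * e_tp) for k in range(nk)])

print(np.allclose(a_rec, a_true))         # True: angular spectrum recovered
```

With the recovered A(k), the object field at SP follows from Eq. 1; the projection reduces to an inner product precisely because the basis here is exactly orthonormal.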

Experiment The experimental setup for TLI is shown in Fig. 3. We implement an interferometric microscope equipped with a dual-axis scanning mirror (GM; Cambridge Technology). The system records complex-field images (both amplitude and phase) of plane waves transmitted through a disordered medium while scanning the angle of the plane waves; the set of recorded images constitutes a TM. As shown in Fig. 3, a laser beam (He–Ne laser: Melles Griot, 05-LHP-827) is split by a beam splitter. One of the beams (sample beam) is sent to the sample and the other to free space (reference beam). In the path of the sample beam, a spatial light modulator (SLM: HOLOEYE, LC-R 2500) and the scanning mirror are each positioned at a conjugate plane of SP. Virtual objects are generated by the SLM for test purposes: when the sample beam is reflected off the SLM, the pattern on the SLM is imprinted on it as amplitude and phase variations in space. The beam is then delivered to SP via a 4-f telescope and forms a test object. Writing a pattern on the SLM is therefore optically equivalent to positioning an object at SP. A high numerical aperture (NA) condenser lens (Nikon, NA = 1.4) is used to cover a wide angular range of illumination, and a high-NA objective lens (Olympus, UPlanSApo, 100×, oil immersion, NA = 1.4) is used to image light transmitted from the turbid


Fig. 3 Experimental setup and image processing. (a) Setup schematic. SLM spatial light modulator, GM galvanometer scanning mirror, C condenser lens, BS beam splitter, M mirror, SP sample plane, TP transmission plane, TP2 conjugate plane of TP. A ZnO layer used as a turbid medium is positioned at the sample stage of a microscope. GM, SLM, and SP are positioned at conjugate planes to each other and TP, TP2, and the camera plane are also all conjugate to each other. (b) A typical interference image of the distorted wave. (c) Zoom-in view of the white box in (b). (d–e) Amplitude (d) and phase (e) images processed from (b) (color bar: phase in radians) (Modified from Ref. [3])

[Fig. 4 plot panels (d–e): normalized amplitude and phase (radian) of the angular spectra versus θx (degree)]
Fig. 4 Reconstruction of a 1-D object hidden under a ZnO layer. (a) 1-D object written on the SLM (scale bar: 10 μm). (b) A distorted image of the structure shown in (a). (c) Reconstructed image from the distorted image in (b). (d–e) Amplitude (d) and phase (e) of the angular spectra with (red) and without (blue) the ZnO layer (Modified from Ref. [3])

medium at TP. ZnO layers positioned between SP and TP serve as disordered media. The image of the transmitted wave is relayed to a camera (RedLake, M3, 500 fps) through a 4-f imaging system. The reference beam travels through free space and arrives as a plane wave at the camera with


Fig. 5 Reconstruction of 2-D object images. (a) The USAF-target-like pattern used as an object. (b) The distorted image of the structure in (a) produced by the ZnO layer. (c) Angular spectrum of the object extracted by the projection operation. Corresponding phase components are also acquired (scale bar: 0.5 μm⁻¹). (d) The reconstructed target image obtained using the angular spectrum in (c). (e) An image of the emblem of Korea University positioned before the ZnO layer. (f) Reconstructed emblem image from the distorted image (Modified from Ref. [3])

an oblique angle to the sample beam, generating an off-axis hologram. The interference image (Fig. 3b) is captured by the camera, and a Hilbert transform [15] is used to process it into an E-field image containing both the amplitude (Fig. 3d) and phase (Fig. 3e) of the sample beam. We first validate the projection operation for retrieving the angular spectrum of an object from a distorted image. For this purpose we prepare a ZnO layer with 15 % total transmission as a disordered medium, and a simple 1-D structure (Fig. 4a) is written on the SLM as a test object. We first record the TM of the medium, E_tr(x, y; θx, θη), as 5,000 images acquired over 10 s while scanning the illumination angle (θx, θη) at SP along the direction perpendicular to the structures in the object. During this procedure a flat pattern with constant amplitude and phase is written on the SLM, which then acts as a mirror. Next, we load the 1-D test object on the SLM and take the distorted image E_TP(x, y) at TP (Fig. 4b). By applying the projection operation of Eq. 3 to each angular component of the measured E_tr(x, y; θx, θη) and E_TP(x, y), we retrieve the A(θx, θη) constituting the object image. The amplitude of the angular spectrum is presented as the red curve in Fig. 4d; the angular spectrum of the object taken without the turbidity is shown as the blue curve for comparison. In Fig. 4e, the phase of the acquired angular spectrum is displayed as well. Both the phase and amplitude of


Fig. 6 Live-cell imaging under a rat skin tissue of 450 μm thickness. (a) A quantitative phase image of a microglia cell extracted from a rat brain. (b) A distorted cell image taken under the skin tissue. (c) Reconstructed image from (b) by TLI. (d) A quantitative phase image of red blood cells. (e) Distorted image of the red blood cells by the skin tissue. (f) Reconstructed cell image (color bar: phase in radians) (Modified from Ref. [3])

the angular spectrum are in good agreement with those of the original object image. Figure 4c is the image reconstructed from the extracted angular spectrum using Eq. 1, in excellent agreement with the original image in Fig. 4a. Note that the complex-field recording of the interferometric imaging method makes it possible to acquire the phase of the angular spectrum, which is critical for reconstructing an object image. We also apply the TLI method to more complicated 2-D objects. We prepare a USAF-target-like pattern on the SLM, as shown in Fig. 5a, at SP. A ZnO layer with 6 % average transmission (25 μm thickness) renders the object structures completely invisible, scrambling them into a random speckle as presented in Fig. 5b. For the TM, we record 20,000 images covering an angular range of illumination corresponding to 0.5 NA. The projection operation is then applied as in the 1-D case. Figure 5c shows the angular spectrum retrieved from the distorted image, and the image reconstructed from this spectrum is presented in Fig. 5d, showing excellent structural correspondence with the original. We repeat the same experiment with a different object, the emblem of Korea University (Fig. 5e), and reconstruct the image from the distortion with good fidelity (Fig. 5f). These image reconstructions for different samples clearly demonstrate the capability of TLI to unscramble the effect of multiple scattering and to convert a turbid medium into a unique lens. Note that TLI measures the TM of a disordered medium against a clean reference by a unique phase-referencing method, and it is thus able to image a real object through a disordered medium, not only virtual ones. To demonstrate this ability, a live biological cell is imaged by TLI through a tissue slice. A microglia cell in a culture medium, shown in Fig. 6a, is used as a sample, and an approximately 0.45 mm thick skin tissue sliced from a rat belly is used as the scattering medium. Recording the TM of the skin tissue takes much less time than the typical drift time of the tissue (a few minutes), thanks to the high-speed recording capability of TLI. We therefore successfully reconstruct a high-resolution, low-noise quantitative phase image of the cell (Fig. 6c) from its distorted image (Fig. 6b) obscured by the tissue slice. We also image red blood cells under the skin tissue: as shown in Fig. 6f, we reconstruct an image of the cells from the distorted image (Fig. 6e).
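The off-axis hologram processing described above (Fourier-domain filtering, equivalent to the Hilbert-transform step of Ref. [15]) can be sketched as follows. This is a generic illustration on a synthetic smooth field, not the chapter's actual processing code; the carrier frequency and filter radius are arbitrary choices:

```python
import numpy as np

n = 128
y, x = np.mgrid[0:n, 0:n]

# Synthetic smooth complex sample field (low bandwidth so the orders separate)
sample = (1 + 0.3 * np.cos(2 * np.pi * 3 * x / n)) * np.exp(
    1j * 0.8 * np.sin(2 * np.pi * 2 * y / n))

fc = 32                                           # off-axis carrier (cycles/frame)
ref = np.exp(1j * 2 * np.pi * fc * x / n)         # tilted plane-wave reference
holo = np.abs(sample + ref) ** 2                  # intensity-only interferogram

# Fourier filtering: isolate the S*conj(R) order at -fc, shift it to
# baseband, and inverse-transform to recover the complex field.
spec = np.fft.fftshift(np.fft.fft2(holo))
kx, ky = x - n // 2, y - n // 2
mask = (kx + fc) ** 2 + ky ** 2 < (fc // 2) ** 2  # window around the -fc order
field = np.fft.ifft2(np.fft.ifftshift(np.roll(spec * mask, fc, axis=1)))

print(np.max(np.abs(field - sample)) < 1e-6)      # amplitude and phase recovered
```

The single-sideband window is what makes the camera's intensity-only record yield a complex E-field image; the sample bandwidth must stay within half the carrier frequency for the orders to separate cleanly.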


Fig. 7 Process of aperture synthesis. (a) The object pattern written on the SLM. The numbers represent scaling factors for the structures. Structure 1: 760 nm line spacing at SP. (b) 0° illumination. The SLM is depicted in transmission geometry for convenience although it works in reflection. SP2: conjugate plane to SP. (c) Distorted sample image for (b). (d) Illumination at an angle of 36.8°. (e) Distorted image for (d). (f–g) Amplitudes of the restored angular spectra of (c) and (e), respectively. Red and blue circles represent 0.6 NA. (h) Amplitude of the synthesized angular spectrum of 500 images. The white circle represents 1.2 NA. Red and blue circles are the shifted apertures of (f) and (g). All angular spectra are shown on a log scale for better visibility (Modified from Ref. [10])

Both cases show fairly good agreement with the images taken without turbidity, shown in Fig. 6a and d, respectively.

Synthetic Aperture TLI

The spatial resolution of TLI is determined by the NA of the TM, NA_tr = sin θ_max, where θ_max is the maximum angle at which the TM is measured. The number of matrix elements to be recorded is proportional to NA_tr². Therefore, the size of the matrix required for high-resolution imaging is extremely large. In this section, we introduce a method that employs the synthetic aperture microscopy technique to make the spatial resolution twice as fine as that given by NA_tr [10]. Only 2.5 % additional data is necessary for this twofold resolution increase, rather than the 300 % more data that would be required to double NA_tr directly. As a result, data acquisition and processing time are significantly reduced. We use the same setup depicted in Fig. 3 for this experiment. First we characterize a turbid medium by recording a TM, E_tr(x, y; θx, θη), at incident angles (θx, θη). The angular coverage of the input angles corresponds to NA_tr = 0.6, with 20,000 complex images recorded in 40 s, much shorter than the typical drift time (10 min) of the turbid medium. As a test object, we use straight patterns similar to a USAF target (Fig. 7a) written on the SLM. The finest lines, marked "1", are designed to have a center-to-center spacing of 760 nm at SP. The dimensions of the other patterns are increased by the scaling factors marked in the image. The image is projected to SP and subsequently distorted by the turbid medium. The distorted image with (θx, θη) = (0, 0) is shown in Fig. 7c. Figure 7f shows the retrieved angular


Fig. 8 Resolution enhancement by the aperture synthesis. (a) Reconstructed image from the angular spectrum in Fig. 7f. (b) Synthesized image with 500 reconstructed images (scale bar: 10 μm). (c–d) Section profiles along the lines in (a) and (b), respectively. All images are shown in amplitude ([10])

spectrum by the projection operation. The angular spectral power distribution along the center axes, associated with the vertical and horizontal structures of the object, is clearly visible. The radius of the red circle is limited by k_pass = NA_tr/λ. Figure 8a is the image reconstructed from the retrieved angular spectrum, which clearly shows the patterns of the object. The smallest structures marked "1" are not resolved, however, because their spacing is smaller than the diffraction limit given by NA_tr = 0.6 (1.29 μm). A TM is the basis of image reconstruction in the TLI method: the extent of the NA of the TM determines the bandwidth of the angular spectrum available for reconstruction. Thus, TLI converts a turbid medium into a lens with NA corresponding to NA_tr. For better resolution, the TM must be measured over a larger angular coverage, i.e., a wider NA_tr. Since the dimension of a TM is proportional to NA_tr², the number of matrix elements becomes extremely large for high-resolution imaging: we measure 20,000 images for 0.6 NA, so we would need 80,000 images for 1.2 NA. To bypass this demanding data acquisition and processing, we employ aperture synthesis. In synthetic aperture imaging, multiple object images are taken at different oblique angles of illumination. Since the object spectrum is shifted by the wave vector of the illumination, the synthesis enlarges the pass band of the angular spectrum [16]. The final aperture therefore becomes NA_img + NA_ill, where NA_img and NA_ill are the NAs of imaging and illumination, respectively. In normal TLI, NA_img equals NA_tr while NA_ill = 0, since the object image is taken at a single illumination angle. Here, however, we illuminate the object at various angles covering NA_ill = 0.6. The schematic in Fig. 7d shows this idea, and Fig. 7e presents a distorted object image taken at an illumination angle of 36.8° (0.6 NA). The retrieved angular spectrum for this case is depicted in Fig. 7g. The red and blue circles have the same radius, k_pass. The brightest spot in each spectrum marks the illumination direction of the incident wave: it is located at the center for normal illumination (Fig. 7f), but off-center for the oblique illumination


Fig. 9 Benefits of TLI. (a) Conventional imaging with an objective lens (LO) and a tube lens (LT). θ_max is the maximum angle that LO can collect. (b) A scattered wave whose angle θ_T exceeds θ_max can be captured after inserting a disordered medium. (c) The scattered waves reach the camera sensor through a multiple scattering process (solid red lines) even though the objective is shifted away from the conventional field of view (gray area) ([3])

(Fig. 7g). Since the two spots have the same physical meaning, namely the DC spectra of the object [17], we shift the spectrum in Fig. 7g so that the two DC spots coincide in the synthesized spectrum, as shown in Fig. 7h. Consequently, the final synthesized aperture reaches 1.2 NA with a TM of only NA_tr = 0.6. For the synthetic aperture TLI we take distorted object images at 500 different angles covering NA_ill = 0.6. After performing the aperture synthesis with all the angular images, we obtain the spectrum with the enlarged pass band shown in Fig. 7h. The white circle in Fig. 7h indicates the pass band given by 1.2 NA, twice as wide as that of the individual spectra in Fig. 7f and g. Figure 8b presents the final object image obtained by the aperture synthesis: the smallest stripes are now clearly resolved. The section profiles in Fig. 8c and d, taken along the lines in Fig. 8a and b, confirm the resolution enhancement. Another advantage of the aperture synthesis is noise reduction. With our experimental parameters, the mean sampling density increases from 20,000/(0.6 NA)² to 20,000 × 500/(1.2 NA)². The background noise in the restored angular spectrum is thus reduced (Fig. 7h). As a result, the reconstructed image also has low background noise, and the image contrast is increased. The ratio of signal contrast to background noise is improved 3.8-fold by the synthesis process.
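The shift-and-merge bookkeeping of the aperture synthesis can be illustrated with a 1-D toy model (an illustration with made-up numbers only; the real data are 2-D complex spectra). Each tilted illumination shifts the object spectrum by the illumination wave vector before it is clipped by the fixed passband of the TM; shifting each retrieved piece back so the DC spots coincide and merging them doubles the covered band, at the cost of a few hundred extra object images rather than a fourfold larger TM:

```python
import numpy as np

n = 256
k = np.arange(n) - n // 2            # spatial-frequency axis
na = 64                              # half-width of a single passband (one NA_tr)

# Hypothetical object spectrum, wider than a single passband
obj = np.exp(-(k / 80.0) ** 2) * np.cos(k / 5.0)

def measure(tilt):
    """Spectrum retrieved through the TM under tilted illumination: the
    object spectrum is shifted by the illumination wave vector, then
    clipped to the fixed passband |k| <= na."""
    out = np.zeros(n)
    for i in range(n):
        if abs(k[i]) <= na and -n // 2 <= k[i] + tilt < n // 2:
            out[i] = obj[k[i] + tilt + n // 2]
    return out

# Aperture synthesis: shift each measurement back by its tilt so the DC
# spots coincide, then merge the pieces into one enlarged passband.
synth = np.zeros(n)
covered = np.zeros(n, dtype=bool)
for tilt in (0, na, -na):
    m = measure(tilt)
    for i in range(n):
        j = k[i] + tilt + n // 2     # un-shifted frequency bin
        if abs(k[i]) <= na and 0 <= j < n:
            synth[j] = m[i]
            covered[j] = True

print(np.count_nonzero(covered))     # 256: full axis covered (vs. 129 per tilt)
print(np.allclose(synth[covered], obj[covered]))  # True
```

The data-cost comparison from the text holds in the same way: doubling NA_tr directly would quadruple the TM (20,000 to 80,000 images), whereas 500 extra object images amount to only 500/20,000 = 2.5 % additional data.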

Benefit of TLI A conventional lens delivers scattered waves from an object to a detector (Fig. 9a). The achievable (diffraction-limited) resolution of this optical system is set by the most oblique angle, θ_max, that the lens can capture. As illustrated in Fig. 9b, however, if a turbid medium is introduced between the object and the imaging system, a wave scattered at an angle θ_T greater than θ_max can be redirected into the detector by multiple scattering. For any incoming waves, the disordered


[Fig. 10 plot panels: angular spectra versus sin θ (e–f) and normalized SNR versus transmission (%) (g), with a "no turbidity, low-NA imaging" reference]
Fig. 10 TLI breaks the diffraction barrier. (a) Imaging with high NA (1.0 NA) and (b) low NA (0.15 NA), respectively. Red arrows indicate invisible structures (scale bar: 10 μm). (c) A distorted object image through a ZnO layer (T = 6 %), taken with low NA. (d) The object image recovered from the distorted image in (c). (e) Angular spectra from the finest lines (red box in (a)). Blue: high NA; green: low NA. The peak indicated by the arrow corresponds to the periodicity of the structure. (f) Angular spectra acquired from the distorted images for turbid media. Red: high turbidity, T = 6 %; black: low turbidity, T = 70 %. (g) Normalized signal-to-noise ratio for various turbid media ([3])

medium diverts outgoing waves over the entire solid angle, so that even a small-NA lens captures a portion of the high-angle waves scattered from the object. Therefore, the insertion of the turbid medium can potentially break the diffraction limit of a conventional lens, provided that the captured waves are appropriately processed. The field of view of an imaging system is also affected by the random scattering of a disordered medium. Conventionally, the field of view is determined by L/M, where L is the size of the image sensor and M is the magnification of the imaging system. An ordinary imaging system is strictly governed by this law, as shown in Fig. 9c: light scattered from a part of an object located outside the field of view cannot reach the camera sensor (dashed blue lines). When a turbid medium is positioned in the path, however, it redirects some of the transmitted waves to the camera via multiple scattering, allowing the detector to collect light from an object outside the conventional field of view. We first demonstrate the resolution benefit of TLI. A test object with 1-D stripes is generated by the SLM. It is imaged without turbidity by a high-NA objective lens (1.0 NA) in a conventional imaging configuration (Fig. 10a). The finest lines indicated by the red box (spatial period 2.5 μm at OP) are well resolved, since the diffraction limit of the imaging is 0.77 μm. The angular spectrum of the structure bounded by the red box reveals two peaks corresponding to the periodicity (blue curve in Fig. 10e). An image of the same object is then taken with a low-NA objective lens (0.15 NA) (Fig. 10b). The finest pattern in the red box, as well as some other fine structures marked by red arrows, disappears due to the lack of resolving power. As shown in Fig. 10e, no peak associated with the finest periodic structure is observed.
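The diffraction-limit values quoted here and in the previous section are consistent with the Rayleigh-type expression d = 1.22 λ/NA at the He-Ne wavelength of 633 nm (our inference; the chapter does not state the formula explicitly):

```python
# Diffraction limit d = 1.22 * wavelength / NA, in micrometers
wavelength = 0.633                        # He-Ne laser line, um
for na in (1.0, 0.6, 0.15):
    print(round(1.22 * wavelength / na, 2))   # 0.77, 1.29, 5.15 um
```

The first two values reproduce the 0.77 μm (1.0 NA) and 1.29 μm (0.6 NA) limits cited in the text; the third is the corresponding limit for the 0.15 NA lens used below.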
A ZnO layer is then introduced as a disordered medium between the low-NA lens and the object, as shown in Fig. 2b. Despite the limited NA of the objective lens, the TM can be recorded up to 0.85 NA. This is


because the disordered medium converts high-angle inputs into low-angle outputs. Without the turbid medium, the measurement of the TM would be limited to an angular extent corresponding to 0.15 NA. Next, the distorted object image through the same ZnO layer is taken. It contains the information of both high-angle and low-angle scattered waves, in the form of speckle patterns with an average size equal to the diffraction limit given by 0.15 NA (Fig. 10c). As in the previous section, the angular spectrum of the object, A(θx, θη), embedded in the distorted image is extracted, and the object image is reconstructed (Fig. 10d). As with the high-NA objective lens, the finest lines are clearly resolved and the spectrum exhibits the associated peaks (Fig. 10f). When recording the TM, the input angle is steered from −53° to 53° (0.85 NA) in 5,000 steps along the direction orthogonal to the stripes in the object. The angular spectrum is thus obtained up to 0.85 NA, although the actual objective lens is limited to 0.15 NA. Consequently, the NA is increased more than fivefold and the spatial resolution is enhanced by the same factor. Note that 0.85 NA is not a fundamental limit: a random medium can capture any input wave, even evanescent waves, as long as the particles constituting the medium are smaller than the wavelength [18]. The reconstruction fidelity of TLI depends on the turbidity. As the turbidity increases, the medium redirects high-angle inputs to low-angle outputs more efficiently. Because the turbid medium deflects the incident wave into various directions, the 0.15 NA objective lens (maximum acceptance angle: 9°) can capture waves scattered from the object at an angle of 27°, specifically the peak indicated by the arrow in Fig. 10f. If we use a thinner turbid medium (T = 70 %) instead of a thicker one (T = 6 %), its angular conversion decreases, and the corresponding peak in the spectrum (black curve in Fig. 10f) is attenuated. For various turbidities, we obtain the signal-to-noise ratio (SNR), defined as the peak height divided by the baseline noise of the spectrum, and normalize it by that of the high-NA object image (Fig. 10g). The measured SNR drops, and the noise in the reconstructed image increases, at lower turbidity. Counterintuitively, this shows that higher turbidity is more favorable for TLI. The other benefit of TLI is the extension of the field of view: an object can be seen even when it is not directly imaged onto the field of view, by means of the scattered light. To demonstrate this concept, we prepare an object with a blank upper part and a periodic pattern in its lower part (Fig. 11a). With ZnO layers in place, we obtain a highly distorted image, as shown in Fig. 11b. We narrow the view field to the upper part only (solid red box) for recording both the distorted sample images and the TM of the disordered media. Without a turbid medium (T = 100 %), the image in the red box contains no information on the object in the blue box, so the object in the blue box is invisible (Fig. 11c). By contrast, when ZnO layers are inserted, the object images in the blue box can be reconstructed from data acquired only in the red box (Fig. 11d and e). As the transmission decreases from 20 % (Fig. 11d) to 6 % (Fig. 11e), the reconstruction range extends further beyond the normal view field. This observation agrees well with the tendency of spot broadening during transmission through disordered media (Fig. 11f).
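The SNR figure of merit used in Fig. 10g can be written down directly. The estimator below (mean magnitude over a peak-free band as the baseline noise) is our assumption; the chapter specifies only peak height divided by baseline noise, normalized to the high-NA case:

```python
import numpy as np

def normalized_snr(spectrum, peak_idx, noise_band, reference_snr):
    """Peak height over baseline noise, normalized to the high-NA SNR."""
    peak = np.abs(spectrum[peak_idx])
    noise = np.abs(spectrum[noise_band]).mean()   # assumed noise estimator
    return (peak / noise) / reference_snr

# Toy spectra: the same structure peak over a weak vs. a strong noise floor,
# mimicking efficient vs. inefficient angular conversion by the medium.
spec_strong = np.full(161, 0.02)     # low baseline noise
spec_weak = np.full(161, 0.10)       # high baseline noise
spec_strong[130] = 1.0
spec_weak[130] = 1.0
ref = 1.0 / 0.02                     # SNR of the turbidity-free, high-NA image
print(normalized_snr(spec_strong, 130, slice(0, 100), ref))
print(normalized_snr(spec_weak, 130, slice(0, 100), ref))
```

The first value is close to 1 (peak as clear as in the high-NA reference) and the second is markedly smaller, the same ordering Fig. 10g reports between high- and low-turbidity media.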

Imaging Through Optical Waveguides Direct image transport through a single optical fiber has been a promising research topic for several decades because of its potential for wide application in various fields, such as clinical endoscopy and industrial and military micro-imaging devices. In particular, the multimode optical fiber has received broad attention for its ability to transport a large amount of information at once via the independent spatial modes of the fiber. In spite of its scientific and economic impact, however, wide-field high-resolution image transport through a single multimode

Handbook of Photonics for Biomedical Engineering DOI 10.1007/978-94-007-6174-2_28-1 # Springer Science+Business Media Dordrecht 2014

[Fig. 11, panels a–f; panel f plots the extended field of view (μm) against spot broadening (μm)]

Fig. 11 Field-of-view extension by TLI. (a) A target object without turbidity and (b) with turbidity. Only the solid red box is the recording area (scale bar: 10 μm). (c-e) Reconstructed images under various turbidities of T = 100 % ((c), no turbidity), T = 20 % (d), and T = 6 % (e). (f) The extended field of view as a function of spot broadening for various turbidities. The extended field of view is estimated as the extent over which the contrast of the structure drops by one half. The HWHM of the transmitted image of a spot (5 μm in diameter) through a disordered medium is used to determine the spot broadening. Representative broadened spot images are shown at the bottom of the figure: from left, T = 100 %, 20 %, and 6 %, respectively (scale bar: 10 μm). Data points correspond to transmissions of T = 100 %, 70 %, 50 %, 30 %, 20 %, 10 %, and 6 %, respectively, from left to right ([3])

fiber has not been accomplished to date. The main barrier is the inevitable image distortion induced by a multimode optical fiber while an image is transported through it. In this section, we apply the TLI method to transport wide-field high-resolution images through optical waveguides. We demonstrate direct image transmission through a single multimode optical fiber with resolution beyond the limit set by the fiber NA. The experimental setup is schematically depicted in Fig. 12a. It is basically the same as the TLI setup shown in Fig. 3, except that a single multimode optical fiber is used as the turbid medium instead of ZnO layers. After being injected into one end of the fiber at SP, the sample beam is guided and delivered to the other end, located at TP. The outgoing light is collected by an objective lens (OL: 40×, ACHN40P, Olympus), and the polarization is selected by a polarizing beam splitter (PBS). The sample beam is further delivered to an imaging camera (M3, RedLake; 500 fps) via a 4-f telescope and another beam splitter (BS2). The reference beam, propagating through free space, is combined with the sample beam at BS2 and generates an interference image at the camera. An electric field image can be extracted from the interference image. In this experiment, a 1 m-long multimode optical fiber (Thorlabs, BFL48-200; 0.48 NA), with 200 μm core diameter and 15 μm clad thickness, in an arbitrary shape is used as the turbid medium. In order to record a TM of the fiber, the sample beam is steered by the GM in such a way that the input angle covers up to 0.48 NA, the maximum NA supported by the fiber. An input with an incident angle larger than 0.48 NA is almost completely attenuated while propagating through the fiber. The TM is constructed with 12,000 output images. Some of the representative TM elements are presented in


Fig. 12 (a) Schematic diagram of the experimental setup. BSi: i-th beam splitter, PBS: polarizing beam splitter, GM: galvanometer mirror, SP: sample plane, TP: transmission plane, OL and OLT: objective lenses. Arrows indicate the direction of polarization. (b) and (c) Representative measured TM elements: amplitude (b) and corresponding phase (c), respectively (color bar, phase in radians; scale bars, 100 μm)

Fig. 12b and c, amplitude and phase, respectively. After measuring the TM, a USAF target (NT55622, Edmund Optics Inc.) is placed at OP, as shown in Fig. 13a. The object image is captured directly by the fiber, as if the fiber itself were a conventional objective lens. After being guided through the fiber, the object image is highly corrupted and the spatial information is completely distorted, similar to the case of the ZnO layers. The object image taken at a normal incidence angle of illumination at TP is shown at the very front of Fig. 13b. From this distorted image, we reconstruct the object image at SP by the TLI technique. After applying the TLI method, the original object image is successfully restored, as shown in Fig. 13c. All structures in group 8 are well resolved, but those in group 9 are not, because the delivery of higher-angle scattering components beyond the fiber NA is forbidden. As mentioned in the previous section, the imaging resolution of TLI is determined by the angular range of the TM measurement [10]. Thus, a wider angular coverage of the TM is a requirement for better resolution. However, for an optical fiber, the fiber NA (NAF) sets the upper limit on the TM measurement. In order to overcome this limit, we use the synthetic aperture method as in the previous section. Specifically, distorted images are taken while the illumination angle is scanned over the angular range corresponding to NAF. By synthesizing all the reconstructed images, the effective NA can be enlarged by a factor of 2. In the experiment, 500 object images were taken at various angles covering up to 0.48 NA, following the scheme shown in Fig. 13a. Some of the representative object images taken at different angles are shown in Fig. 13b. After reconstruction of the individual images using TLI, we coherently synthesize all those images, and the final imaging NA is extended up to 0.96. The diffraction limit of the imaging is now 0.80 μm. Figure 13d shows the synthesized image of the USAF target.
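The coherent aperture synthesis described above can be illustrated with a simplified numerical sketch. As a rough model, we assume each tilted illumination merely shifts the object spectrum by its tilt; the function name, tilt bookkeeping, and grid sizes are ours rather than the authors' implementation:

```python
import numpy as np

def synthesize_aperture(fields, tilts_px):
    """Coherently combine fields reconstructed under tilted illuminations.

    Each tilted illumination shifts the object spectrum by the illumination
    angle, so re-centering each spectrum on a larger grid and summing widens
    the effective aperture. `tilts_px` holds the (kx, ky) shifts in Fourier
    pixels (hypothetical bookkeeping for this sketch).
    """
    n = fields[0].shape[0]
    synth = np.zeros((2 * n, 2 * n), dtype=complex)   # room for the wider aperture
    for field, (kx, ky) in zip(fields, tilts_px):
        spec = np.fft.fftshift(np.fft.fft2(field))
        cx, cy = n + kx, n + ky                       # re-center this spectrum
        synth[cx - n // 2:cx + n // 2, cy - n // 2:cy + n // 2] += spec
    return np.fft.ifft2(np.fft.ifftshift(synth))

# 500 fields covering up to 0.48 NA are combined in the experiment;
# here, a two-field toy:
rng = np.random.default_rng(2)
fields = [rng.standard_normal((32, 32)) + 0j for _ in range(2)]
image = synthesize_aperture(fields, [(0, 0), (8, 0)])
```

Combining the 500 reconstructed fields in this manner doubles the effective aperture, from 0.48 to 0.96 NA.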
Now the smallest structures in group 9, with a center-to-center distance of 1.56 μm, are clearly resolved. To verify the resolution enhancement further, we image polystyrene beads of 1 μm diameter. Figure 13e presents the synthesized image of the beads spread on a standard coverslip. The


Fig. 13 Synthetic TLI through a single multimode optical fiber. (a) Scanning of illumination angles (θx, θy) for synthetic aperture microscopy. (b) Distorted USAF target images taken at different incident angles at SP. (c) Reconstruction result with only the first object image in (b) at (θx, θy) = (0, 0). (d) Synthesized image with 500 reconstructed images. (e) Image of 1 μm beads by synthetic TLI (color bar: arbitrary unit)

Fig. 14 Image of living microglia cells taken through a single multimode fiber. (a) Quantitative phase image of the cells (scale bar, 10 μm; color bar, phase in radians). (b) Simulated DIC image of the same cells as in (a). (c) Same image as (b), but with the focus numerically propagated. Arrows indicate in-focus features for each image (color bar: arbitrary amplitude unit)

individual details are clearly visible, which is not the case if the imaging NA is limited to that of the fiber. We also demonstrate the ability of our method to image transparent biological samples. Live microglia cells immersed in a culture medium are located at SP. The cells were extracted from a rat brain and incubated for 5 days in DMEM (Dulbecco's modified Eagle medium) with 10 % fetal bovine serum. For observation, the cells were plated on a glass slide at a density of about 100 cells/mm2. After taking 500 images, we applied TLI and aperture synthesis sequentially. The quantitative phase image of the cells is presented in Fig. 14a. From this complex field image, we numerically compute a simulated DIC image, as shown in Fig. 14b. The field gradient is greatly emphasized, and the structural details are rendered better than in an image taken by a standard objective lens. The image is further processed and numerically propagated to other depths. As shown in Fig. 14c, after the propagation away from the fiber end, the structures in focus in Fig. 14b become blurred and new


Fig. 15 Experimental scheme for LMSF. The setup is based on an interferometric phase microscope. The interference between the reflected light from the object located at the opposite side of a single multimode fiber and the reference light is recorded by a camera. GM: 2-axis galvanometer scanning mirror, BS1, BS2, and BS3: beam splitters, OL: objective lens, IP: illumination plane of a multimode optical fiber, SP: sample plane (Modified from Ref. [12])

structures come into focus. This result clearly demonstrates the capability of delivering 3-D information of an object through a single multimode fiber.
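The numerical propagation of a complex field to other depths, as used in Fig. 14c, is commonly implemented with the angular spectrum method; the following sketch, with illustrative parameter values in micrometers, is our own rather than the authors' code:

```python
import numpy as np

def refocus(field, dz, wavelength, dx):
    """Propagate a complex field by dz using the angular spectrum method.

    `dz`, `wavelength`, and the pixel pitch `dx` share the same length unit.
    """
    n = field.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    kz_sq = k**2 - (2 * np.pi * FX) ** 2 - (2 * np.pi * FY) ** 2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))      # evanescent components get no phase
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

# Propagating forward and then backward by the same distance recovers the field:
rng = np.random.default_rng(3)
f0 = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
f_back = refocus(refocus(f0, 40.0, 0.633, 0.5), -40.0, 0.633, 0.5)
```

Applied to the reconstructed complex field, this brings structures at other depths into focus without any mechanical scanning.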

Wide-Field Endoscopic Imaging Through a Single Multimode Optical Fiber
Endoscopy is a widely used technique in industrial, military, and medical fields for visualizing inaccessible structures. Optical fibers are essential elements for endoscopy owing to their ability to guide light through a flexible and bent passage. Although fiber bundles play a critical role in the commercial endoscopy market, the realization of a single-fiber endoscope has been elusive for several decades. In the previous section, we demonstrated image delivery through a single multimode fiber. However, endoscopy using a single fiber has the additional requirement that the object image be taken in a reflection configuration. This is the most challenging part, since a twofold distortion (light in and light out) must be resolved when an object image is taken in a reflection geometry. In this section, we review recently developed endoscopic imaging, operating in a reflection mode, through a single multimode optical fiber. A multimode optical fiber is characterized by measuring its TM, and imaging is performed with reflected light. We call this method lensless microendoscopy by a single fiber (LMSF). The setup for LMSF is schematically shown in Fig. 15 (for more details, see [12]). A He–Ne laser (λ = 633 nm) illuminates the input end (at IP) of the fiber via two beam splitters (BS1 and BS2) and a 2-axis galvanometer scanning mirror (GM). The laser beam couples to the fiber and subsequently propagates toward the sample plane (SP), located at the exit of the fiber, to illuminate a target object. Here we use a 1 m-long multimode optical fiber with 200 μm core diameter, 15 μm clad thickness (230 μm in total diameter), and 0.48 NA. As a test object we used either a USAF target or a biological tissue. The light reflected from the object is collected by the same optical fiber.
The light collected by the fiber is guided back through the fiber to IP, and the output image is captured by the camera. The laser beam reflected by BS1 is combined with the beam from the fiber to form an interference image at the camera. Using an off-axis digital holography algorithm, both the amplitude and phase of the image from the fiber are retrieved [15, 19].
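The retrieval of a complex field from an off-axis interferogram can be sketched as follows; the carrier offset, crop size, and synthetic hologram are illustrative assumptions, not the parameters of the actual setup:

```python
import numpy as np

def extract_field(interferogram, carrier_px):
    """Retrieve a complex field from an off-axis interference image.

    The tilted reference places the object term at a carrier frequency in
    Fourier space; cropping that sideband and transforming back yields the
    amplitude and phase. `carrier_px` is the (assumed) carrier offset in
    Fourier pixels.
    """
    n = interferogram.shape[0]
    spec = np.fft.fftshift(np.fft.fft2(interferogram))
    cx, cy = n // 2 + carrier_px, n // 2
    half = carrier_px // 2                       # sideband crop radius
    sideband = spec[cx - half:cx + half, cy - half:cy + half]
    return np.fft.ifft2(np.fft.ifftshift(sideband))

# Synthetic hologram: a weak phase object interfering with a tilted reference.
n = 128
x = np.arange(n)
X, Y = np.meshgrid(x, x, indexing="ij")
obj = np.exp(1j * 0.5 * np.sin(2 * np.pi * Y / 64))   # weak phase object
ref = np.exp(1j * 2 * np.pi * 32 * X / n)             # carrier: 32 px tilt
holo = np.abs(obj + ref) ** 2                         # recorded intensity
field = extract_field(holo, 32)                       # 32x32 complex map
```

Because the object here is phase-only, the recovered amplitude is nearly uniform while the phase carries the object information.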


Fig. 16 Image reconstruction process. (a) Representative images of the measured TM following the scheme shown in (b). (b) Scheme of the separate setup for measuring a TM of the single fiber from SP to IP. (c) Reflected object images taken by the setup shown in Fig. 15 at different illumination angles (θx, θy)S. (d) Reconstructed object images obtained by applying the TLI method to the images in (c). (e) Average of all reconstructed images in (d) (Modified from Ref. [12])

Figure 16c shows typical images of the USAF target recorded at the camera. A twofold distortion scrambles the object image: one part is the distortion of the illumination light on the way in (IP → SP), and the other is the mixing of the light reflected by the object on the way out (SP → IP). To disentangle these two distortions, we employ the speckle imaging method [20–22] and TLI, respectively. First, the application of TLI deals with the distortion SP → IP. As shown in Fig. 16b, we first measure the TM of the fiber from SP to IP. We scan the incident wave over its angle (θx, θy)T to the fiber at SP and record the transmitted image at IP. We measure 15,000 images for the TM, up to 0.22 NA. Figure 16a shows a few representative measured TM elements. The transmitted patterns are random speckles, and the average size of the speckle decreases as we increase the incident angle. This is because high-angle illumination couples mostly to the high-order modes of the fiber. Once we obtain the TM, the object image (each image in Fig. 16c) is reconstructed by the TLI method. Here we use an inversion algorithm with the measured TM, expressed by the relation below:

ESP(x, y) = T⁻¹EIP(x, y),   (4)

where EIP and T⁻¹ represent the recorded image at IP and the inverse of the measured TM, respectively, and ESP is the image at SP. Using Eq. 4, the distortion from SP to IP is reversed. More details about the inversion algorithm can be found in Refs. [11, 12]. The recovered images at SP are shown in Fig. 16d. The images still remain speckled because the illumination light is distorted by the propagation from IP to SP. Next, the distortion of the illumination light at SP is resolved by using the speckle imaging method. At the stage of object imaging, the illumination angle (θx, θy)S at IP is scanned as shown in Fig. 15. The change in the illumination angle at IP varies the illumination light at the target object at SP: a different speckle field is generated as a function of the angle (θx, θy)S. According to the speckle imaging method, a clean object image can be acquired if we average a sufficient number of images recorded under different speckle illuminations [19]. Then the complex speckle


Fig. 17 Scanning operation of LMSF. (a) The fiber end is translated to take images at different sites of the sample. Each image is reconstructed with the same TM measured at the initial position of the fiber. (b) Reconstructed images are stitched to extend the field of view (scale bar: 100 μm) ([12])

patterns are averaged out, leaving a clean object image. To implement this, 500 images are taken at different (θx, θy)S. This is done in 1 s, the actual image acquisition time of our LMSF. Representative object images are shown in Fig. 16c. Then the TLI method is applied to each of them to obtain the object images at SP (each in Fig. 16d). Now each image contains the object information, even though they still look like speckle patterns. By adding all the intensity images in Fig. 16d, the final clear object image shown in Fig. 16e is obtained. Therefore, the combination of the speckle imaging and TLI methods enables single-fiber microendoscopic imaging. The spatial resolution of LMSF is equivalent to that of incoherent imaging, which is twice as good as that of coherent imaging, owing to the speckle illumination. The effective NA is extended to twice that covered by the TM. In our LMSF, a TM is measured up to 0.22 NA; thus the diffraction limit of the imaging is about 1.8 μm. In Fig. 16e, the structures in group 8 of the USAF target, whose smallest element has a peak-to-peak distance of 2.2 μm, are all resolved. The field of view is about the same as the 200 μm diameter of the fiber core. Next, we demonstrate a surveying operation for LMSF, as is done in conventional diagnostic endoscopy. Although LMSF does not support a fully flexible endoscopic operation, because the TM is modified when the fiber is bent, we found that this operation is attainable to a partially flexible degree. To demonstrate the endoscopic searching operation for LMSF, we translated the facing end of the fiber along the surface of the sample (Fig. 17a). A TM is measured only at the initial position of the fiber end, and 500 distorted object images are acquired at each of the different positions on the sample. The initial TM is used repeatedly to reconstruct the multiple object images. Figure 17b shows the composite image of the USAF target obtained by stitching multiple images together.
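A minimal sketch of this two-step reconstruction, combining the TM inversion of Eq. 4 with intensity averaging over speckle illuminations, is given below; the toy TM, object, and regularization value are ours:

```python
import numpy as np

def lmsf_reconstruct(T, images_ip, rcond=1e-2):
    """Sketch of the LMSF pipeline: Eq. 4 inversion plus speckle averaging.

    T maps fields at SP to fields at IP; `images_ip` are the complex fields
    recorded at IP under different illumination angles (500 in the text).
    Each image is mapped back to SP with a regularized pseudo-inverse of
    the TM, and the intensities are averaged to wash out the speckle.
    """
    T_inv = np.linalg.pinv(T, rcond=rcond)        # regularized inverse of the TM
    intensity = np.zeros(T.shape[1])
    for e_ip in images_ip:
        e_sp = T_inv @ e_ip                       # Eq. 4: E_SP = T^-1 E_IP
        intensity += np.abs(e_sp) ** 2
    return intensity / len(images_ip)

# Toy check with a random TM and one bright pixel as the object;
# random phases model the varying speckle illumination.
rng = np.random.default_rng(4)
T = rng.standard_normal((200, 50)) + 1j * rng.standard_normal((200, 50))
obj = np.zeros(50)
obj[10] = 1.0
images = [T @ (obj * np.exp(1j * rng.uniform(0, 2 * np.pi, 50))) for _ in range(20)]
recon = lmsf_reconstruct(T, images)
```

The averaged intensity recovers the object even though every individual reconstruction still carries the speckle phase of its illumination.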
Structures over a wide area, across which the fiber end is translated by about 800 μm horizontally and about 1,000 μm vertically, are all clearly visible. As shown in Fig. 17, LMSF works well under the translation operation even though the movement of the fiber end away from the initial position changes the shape of the fiber. The TM deviates from the initial one, since wave propagation through the fiber is very susceptible to the fiber shape. However, we found that most of the TM elements remain nearly intact under a modification of the fiber shape to a certain extent. In particular, the TM elements with large input angles remain almost the same upon bending of the fiber. To characterize the effect of fiber bending on the TM elements, we compare multiple TMs measured with different bendings of the fiber. As illustrated in Fig. 18a, several TMs are measured up to 0.41 NA while varying the fiber shape. Specifically, we


Fig. 18 The effect of bending on the image reconstruction. (a) Measurement of TMs while varying the fiber shape. The center point in the 30 cm-long middle part of the fiber is translated to control the degree of fiber bending. (b) Cross correlation maps between the TM at the initial position (left) and those at the translated position with respect to the incident angle (scale bar, 0.2 NA; color bar, normalized cross correlation). (c) LMSF images under the translation of the fiber end. The numbers below the figures indicate the displacement of the fiber end with respect to the initial position ([12])

take an approximately 30 cm-long middle section of the fiber and translate its center position up to 5 mm from the initial position. The cross correlation between the initial TM and the one at the translated position is calculated to quantify their similarity. Figure 18b shows the correlation map with respect to the incident angle for various translations. One can notice that the TM elements stay correlated over a wide range of translation. This robustness of the TM to fiber bending has led to successful image reconstruction in LMSF under the endoscopic surveying operation. Finally, we assess the working range of the scanning operation in LMSF. After measuring the TM at the initial position of the fiber, the USAF target is mounted on a translation stage together with the fiber end. Then the stage is moved along the plane of SP; thus we always image the same part of the object during the translation. The range of the translation is set from −5 mm to +5 mm with respect to the initial position. At each position of the translation, we take the object images and process them with the initial TM. As shown in Fig. 18c, the target image is well resolved over the entire range. This verifies that the view field of LMSF can be scanned over at least 10 mm of travel with a reasonable imaging capability. The heart of endoscopy is the ability to visualize biological specimens. To demonstrate the potential of LMSF as an endoscope for imaging biological samples, we image a rat intestine tissue. We loaded the tissue at SP and scanned the fiber end to extend the view field. Figure 19b is the composite image taken by LMSF at a site with multiple villi. The villi are clearly visible, with the external features in excellent agreement with the conventional bright-field image taken in transmission geometry (Fig. 19a).
The contrast of the LMSF image is better than that of the bright-field image because the absorption of the villi is weak for transmission-mode imaging, while the reflection at the rough surface provides relatively strong contrast for LMSF imaging.
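The normalized cross correlation used in Fig. 18b to quantify the similarity between TMs under bending can be sketched as a per-column correlation of two matrices; the toy perturbation standing in for a small bending is our assumption:

```python
import numpy as np

def tm_correlation(tm_ref, tm_bent):
    """Normalized cross correlation between corresponding TM columns.

    Each column is the output field for one incident angle, so this yields
    one correlation value per angle, as plotted in the map of Fig. 18b.
    """
    corr = []
    for a, b in zip(tm_ref.T, tm_bent.T):
        num = np.abs(np.vdot(a, b))                       # |<a, b>|
        den = np.linalg.norm(a) * np.linalg.norm(b)
        corr.append(num / den)
    return np.array(corr)

# A small additive perturbation models a slight bending of the fiber:
rng = np.random.default_rng(5)
tm = rng.standard_normal((64, 10)) + 1j * rng.standard_normal((64, 10))
perturb = 0.1 * (rng.standard_normal((64, 10)) + 1j * rng.standard_normal((64, 10)))
c = tm_correlation(tm, tm + perturb)   # small bending: correlations stay high
```

A correlation near unity for a given input angle means that column of the initial TM can still be used for reconstruction after the bend.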


Fig. 19 Image of villi in a rat intestine tissue taken by LMSF. (a) A composite bright-field image taken by a conventional transmission microscope (scale bar, 100 μm). (b) A composite image by LMSF at the same site. (c) The same image as (b) but after numerical refocusing by 40 μm toward the fiber end. The arrows indicated by A point to the villus in focus in (c), while those indicated by B refer to the villus in focus in (b) ([12])

LMSF can acquire 3-D information of an object without depth scanning of the fiber end. Since LMSF captures complex field maps, ESP(x, y), of a specimen, containing both the amplitude and phase of the reflected waves, we can numerically solve the wave propagation to form a focus at a different depth [22]. Figure 19c is the result of numerically refocusing the image in Fig. 19b by 40 μm toward the fiber end. The villus indicated by A, which used to be blurred in Fig. 19b, comes into focus and its boundary becomes sharp. On the other hand, the villus pointed to by B becomes blurred due to the defocusing. By comparing Fig. 19b and c, we can clearly distinguish the relative depths of multiple villi in the view field. This 3-D imaging ability of LMSF eliminates the need for physical depth scanning and will thus dramatically enhance the speed of volumetric imaging.

Enhancing Wave Transport Through Disordered Media
Using the TM, not only imaging through a disordered medium but also controlling the wave transport through the medium is possible. In this section, we introduce a method of maximizing energy transport through disordered media [11]. According to random matrix theory (RMT), developed in the 1980s, a particular incident wave with a specific pattern can propagate through a disordered medium without loss of energy [23]. Mathematically, this special wave is the eigenvector of a TM with maximum eigenvalue and is often called an open eigenchannel. The underlying physical explanation for the abnormal transmittance of the open eigenchannel is that a proper choice of eigenchannel as the incident wave can induce strong constructive interference of the scattered waves at the opposite side of the medium. One can control the wave interference, either constructively or destructively, for a highly disordered medium by coupling light into individual transmission eigenchannels. In particular, it is possible to selectively generate the eigenchannel of maximum transmittance and achieve a significant transmission enhancement. The method could potentially be useful in facilitating laser radiation therapy and biomedical imaging by enhancing light energy delivery deep into biological tissues. In spite of recent technical advances in wavefront sensing and recording, the injection of waves into single eigenchannels remained a challenge. Efforts have been made to calculate individual eigenchannels from the recorded TM, by Popoff et al. in optics [8] and by Shi et al. in microwaves [24], but the injection of waves into single eigenchannels was not realized. Coupling waves into individual eigenchannels is difficult due to two stringent requirements. First, the TM of the disordered medium must be recorded in the short time before the medium is perturbed. Second,


Fig. 20 Experimental setup for recording a TM and generating each transmission eigenchannel. The main frame of the setup is an off-axis interference microscope with scanning mirrors (GM, Cambridge Technology) installed in the sample beam path. The output of a He–Ne laser is split by a beam splitter (BS1); one of the beams (sample beam) is sent to the sample and the other (reference beam) through free space. The two beams are recombined by another beam splitter (BS2) to form an interference image at the camera (RedLake M3, 500 fps). The interference image is processed to acquire the amplitude and phase maps of the sample beam. A spatial light modulator (SLM, Hamamatsu Photonics, X10468-06), which can generate an arbitrary wavefront, is installed in the sample beam path. A disordered medium is placed between the input plane (IP) and the output plane (OP), and the transmitted image at the OP is delivered to the camera by an objective lens (Olympus UPlanSApo) and a tube lens ([11])

the eigenchannels of highly complicated wavefronts derived from the measured matrix must be precisely generated in the experiment. To overcome these difficulties, the experiment was carried out as follows. The experimental setup is shown in Fig. 20. For the measurement of a TM, the angle (θx, θy) of the plane wave incident on the medium was scanned, and the transmission image was recorded at each incident angle. Figure 21b shows the phase maps of the transmitted waves at representative incident angles. The complex field at the input plane was measured for the same set of incident angles (Fig. 21a). This was done simply by measuring the field after removing the disordered medium. From the two sets of complex field maps, a TM was constructed. The amplitude and phase of the TM are shown in Fig. 21c and d, respectively. If there were no disordered medium, the matrix would be diagonal. In a disordered medium, however, multiple scattering processes cause the incident beam to spread, thereby generating nonzero off-diagonal elements. In order to acquire the transmission eigenchannels and their corresponding transmission eigenvalues, singular value decomposition was performed on the constructed TM. Specifically, the TM was factorized into T = USV*, where S is a rectangular diagonal matrix with nonnegative real numbers, called singular values, on the diagonal, and V* denotes the conjugate transpose of the matrix V. V and U are the unitary matrices whose columns are the transmission eigenchannels at the input and output plane, respectively. Although the eigenchannels were acquired from the measured TM, it is important to implement each eigenchannel in the experiment to verify whether it truly exhibits enhanced or reduced transmission in accordance with its eigenvalue. By using an SLM, an optical wave corresponding to the eigenchannel was experimentally generated and sent to the medium. The SLM used in this experiment operated in a phase-only mode.
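The construction of a TM from such an angle scan can be sketched as follows: each incident angle contributes one column of the matrix, namely the flattened complex output field recorded for that input. The callback and array sizes below are illustrative, not the experimental parameters:

```python
import numpy as np

def measure_tm(record_output_field, n_inputs):
    """Build a transmission matrix column by column.

    `record_output_field(k)` is assumed to return the complex output field
    (as a 2-D array) measured while the k-th plane-wave input angle
    illuminates the medium.
    """
    columns = []
    for k in range(n_inputs):
        field = record_output_field(k)   # complex E-field at the output plane
        columns.append(field.ravel())    # flatten pixels into one column
    return np.stack(columns, axis=1)     # shape: (output pixels, input angles)

# Toy demonstration with pre-generated random "measurements":
rng = np.random.default_rng(1)
fake_fields = rng.standard_normal((100, 8, 8)) + 1j * rng.standard_normal((100, 8, 8))
T = measure_tm(lambda k: fake_fields[k], 100)
# T has shape (64, 100): 64 output pixels, 100 input angles.
```

With no disordered medium present, each output field would be a tilted plane wave and the matrix would be diagonal in the angular basis, as noted above.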
In order to generate eigenchannels having spatial


Fig. 21 Construction of a TM for a disordered medium. (a) Phase maps of the recorded waves acquired in the absence of a disordered medium as a function of incident angle (scale bar, 10 μm). (b) Phase maps of the transmitted waves through a disordered medium as a function of incident angle. (c) Amplitude part of a TM constructed from (a) and (b) (color bar, amplitude in arbitrary units). (d) Phase part of the TM. The same color bar, which represents the phase in radians, applies to the phase maps in (a), (b), and (d) ([11])

Fig. 22 Comparison between the uncontrolled wave and the first eigenchannel. (a) and (b) Transmitted images of the uncontrolled wave and the first eigenchannel through the disordered medium, respectively ([11])

variations in both amplitude and phase, a method was applied in which multiple SLM pixels are combined into a single unit of the target wave [25]. The optical wave of the first eigenchannel was generated by the SLM and sent to the medium. Then the complex field map at the output plane was captured (Fig. 22b). Figure 22a is the output image


Fig. 23 Experimentally measured transmittance of individual eigenchannels. Red circles, measured transmittance when the optical wave of each eigenchannel is generated and illuminated onto the disordered medium. Blue squares, corrected eigenvalues. Black line, mean transmittance of the medium measured under illumination by a plane wave. Green line, mean transmittance when the point-optimization process is performed [11]

when the uncontrolled wave (a plane wave) was sent to the medium. The enhancement factor, defined as the ratio of the transmittance of the first eigenchannel to that of the uncontrolled wave, was 2.8 for this particular sample. When the same experiment was repeated for more than 20 different samples, the transmission enhancement was found to be highly reproducible. The enhancement factor varies somewhat from sample to sample, typically in the range of 2–4; the maximum enhancement factor achieved in the experiment was 4. Eigenchannels other than the first are also implemented, and their transmittances are measured (red circles in Fig. 23). The transmittance decreases monotonically with increasing eigenchannel index, confirming that the first eigenchannel indeed has the maximum transmittance. The experimentally measured transmittance was compared with the eigenvalues (blue squares in Fig. 23) acquired from the recorded TM after accounting for the limited accuracy of wavefront shaping. The experimentally generated eigenchannels faithfully follow the prediction from the measured TM. The residual discrepancy, aside from the wavefront-shaping inaccuracy, might come from mechanical perturbation of the sample and imperfect repeatability of the GM scanning. This method was compared with the previous method proposed by Vellekoop et al. [26], in which the intensity was optimized at a single point on the far side of the medium. Although the previous method shows an increase in the transmittance, the transmittance (green line in Fig. 23) is well below that of the first eigenchannel. This is because the point optimization couples light into other eigenchannels as well as the eigenchannel with maximum eigenvalue [27].
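The eigenchannel decomposition T = USV* and the enhancement-factor comparison can be sketched numerically as follows; the random toy TM stands in for a measured one, and the plane-wave input mirrors the uncontrolled-wave baseline:

```python
import numpy as np

def transmission_eigenchannels(T):
    """Decompose a measured TM by singular value decomposition, T = U S V*.

    Columns of V are the input-side eigenchannels, columns of U the
    output-side ones, and the squared singular values give each channel's
    transmittance. The first column of V is the open-channel candidate to
    display on the SLM for maximum transmission.
    """
    U, S, Vh = np.linalg.svd(T, full_matrices=False)
    return U, S, Vh.conj().T             # columns of V = input eigenchannels

# Random toy TM standing in for a measured one:
rng = np.random.default_rng(6)
T = (rng.standard_normal((128, 32)) + 1j * rng.standard_normal((128, 32))) / np.sqrt(128)
U, S, V = transmission_eigenchannels(T)

first = V[:, 0]                          # first (maximum-transmittance) channel
plane = np.ones(32) / np.sqrt(32)        # uncontrolled plane-wave input
enhancement = np.linalg.norm(T @ first) ** 2 / np.linalg.norm(T @ plane) ** 2
```

For this random toy matrix the enhancement is modest; the experiment reports factors of 2–4 for real samples.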

Conclusion
In this chapter, we discussed the working principle of TLI, a method for retrieving the image information of an object hidden under turbid media. TLI finds the angular spectrum of an object, i.e., the set of angular plane waves constituting the object image, from the distorted image, using the projection algorithm and a TM of the turbid medium. Using the extracted angular spectrum, TLI can reconstruct the distortion-free image at the plane where the object is placed. In the experiments, the TM of a turbid medium was measured in both amplitude and phase by interferometric detection. From this, we successfully restored the set of angular waves for an object with the correct phase and then recovered the object image with high reliability. We also


reconstructed quantitative phase images of living cells under a thick skin tissue as a demonstration of the capability of TLI for imaging real objects. The resolving power of TLI, which is strictly governed by the angular coverage of the TM measurement, is improved twofold with the use of the synthetic aperture microscopic method. After measuring the TM of a turbid medium with 0.6 NA, image reconstruction up to 1.2 NA was demonstrated by synthesizing multiple TLI images. TLI converts a turbid medium into a scattering lens, which has unique advantages mediated by multiple light scattering. The use of a scattering lens, if it is well characterized by a TM measurement, allows conventional imaging optics behind it to overcome the diffraction limit determined by the NA as well as to extend the imaging field of view. We experimentally demonstrated a fivefold resolution enhancement and a lateral extension of the physical field of view after placing a scattering lens in front of an objective lens. TLI can also eliminate the distortion caused by a multimode optical fiber. With the measurement of a TM, TLI can retrieve an object image from the output wave exiting a multimode optical fiber. The achievable imaging resolution, which is given by the maximum angular range of the TM measurement, is limited by the NA of the fiber. However, by combining the synthetic aperture microscopic technique, wide-field image transmission through a multimode optical fiber with resolution exceeding the NA of the fiber was demonstrated. In addition, we demonstrated partially flexible wide-field endoscopic imaging with a single multimode optical fiber without a lens or any scanning element attached to the fiber. After characterizing the transmission property of the optical fiber, the TLI and speckle imaging methods were used to see an object through a curved optical fiber in a reflection configuration.
The image acquisition time was 1 s for 3-D imaging with 12,300 effective image pixels, and the field of view was given by the core diameter of the fiber. The pixel density of LMSF is about 30 times greater than that of a typical fiber bundle (~10 μm diameter per pixel) and can be increased further if the full NA of a TM is measured. Measurement of the TM of a turbid medium also enables us to maximize the delivery of light energy through the medium. Using singular value decomposition (SVD) of the TM, the transmission eigenchannels of the turbid medium were obtained. The light energy delivered through the turbid medium can be controlled by coupling the incident wave to eigenchannels of various eigenvalues. The complex wave of the first eigenchannel, which has the maximum transmittance, was experimentally implemented by the wavefront-shaping technique. By injecting the incident wave into this unique eigenchannel, a 400 % enhancement of the energy transmitted through a turbid medium was observed. Both the recording of waves interacting with turbid media and the shaping of the illumination wavefront have provided numerous opportunities to exploit multiple light scattering for enhancing imaging performance and light energy delivery. However, the studies conducted so far have been largely confined to the transmission mode of detection. For the proposed approaches to make a significant impact on the most practical applications in biomedicine, studies employing the reflection mode of operation should follow. We expect that this will take place in the coming years.
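The eigenchannel analysis described above can be sketched with a numerical SVD. In this illustration a random complex Gaussian matrix stands in for a measured TM; the mode counts and statistics are assumptions, not the experimental values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy transmission matrix (TM) of a turbid medium: a random complex
# Gaussian matrix over 64 output x 64 input modes (an assumption of
# this sketch, not a measured TM).
n_out, n_in = 64, 64
T = (rng.standard_normal((n_out, n_in))
     + 1j * rng.standard_normal((n_out, n_in))) / np.sqrt(2 * n_out)

# SVD: the right singular vectors (conjugated rows of Vh) are the input
# eigenchannels; the squared singular values are their transmittances.
U, s, Vh = np.linalg.svd(T)

def transmittance(e_in):
    """Fraction of (unit) input power transmitted through the medium."""
    e_in = e_in / np.linalg.norm(e_in)
    return np.linalg.norm(T @ e_in) ** 2

best = transmittance(Vh[0].conj())             # first (maximal) eigenchannel
plain = transmittance(np.ones(n_in, complex))  # unshaped plane-wave input

print(f"first-eigenchannel transmittance: {best:.3f}")
print(f"unshaped-input transmittance:     {plain:.3f}")
```

Coupling all incident power into the first eigenchannel yields the largest possible transmittance (the square of the largest singular value), which is the principle behind the wavefront-shaped energy-delivery enhancement reported in the text.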

References
1. Ji N, Milkie DE, Betzig E (2009) Adaptive optics via pupil segmentation for high-resolution imaging in biological tissues. Nat Methods 7:141–147
2. Vellekoop IM, Lagendijk A, Mosk AP (2010) Exploiting disorder for perfect focusing. Nat Photon 4:320–322



3. Choi Y, Yang TD, Fang-Yen C, Kang P, Lee KJ, Dasari RR et al (2011) Overcoming the diffraction limit using multiple light scattering in a highly disordered medium. Phys Rev Lett 107:023902
4. Vellekoop IM, Aegerter CM (2010) Scattered light fluorescence microscopy: imaging through turbid layers. Opt Lett 35:1245–1247
5. van Putten EG, Akbulut D, Bertolotti J, Vos WL, Lagendijk A, Mosk AP (2011) Scattering lens resolves sub-100 nm structures with visible light. Phys Rev Lett 106(19):193905
6. Katz O, Small E, Bromberg Y, Silberberg Y (2011) Focusing and compression of ultrashort pulses through scattering media. Nat Photon 5:372–377
7. Bertolotti J, van Putten EG, Blum C, Lagendijk A, Vos WL, Mosk AP (2012) Non-invasive imaging through opaque scattering layers. Nature 491:232–234
8. Popoff SM, Lerosey G, Carminati R, Fink M, Boccara AC, Gigan S (2010) Measuring the transmission matrix in optics: an approach to the study and control of light propagation in disordered media. Phys Rev Lett 104:100601
9. Popoff S, Lerosey G, Fink M, Boccara AC, Gigan S (2010) Image transmission through an opaque material. Nat Commun 1:81
10. Choi Y, Kim M, Yoon C, Yang TD, Lee KJ, Choi W (2011) Synthetic aperture microscopy for high resolution imaging through a turbid medium. Opt Lett 36:4263–4265
11. Kim M, Choi Y, Yoon C, Choi W, Kim J, Park Q-H et al (2012) Maximal energy transport through disordered media with the implementation of transmission eigenchannels. Nat Photon 6:581
12. Choi Y, Yoon C, Kim M, Yang TD, Fang-Yen C, Dasari RR et al (2012) Scanner-free and wide-field endoscopic imaging by using a single multimode optical fiber. Phys Rev Lett 109:203901
13. Bianchi S, Di Leonardo R (2012) A multi-mode fiber probe for holographic micromanipulation and microscopy. Lab Chip 12:635–639
14. Cizmar T, Dholakia K (2012) Exploiting multimode waveguides for pure fibre-based imaging. Nat Commun 3:1027
15. Ikeda T, Popescu G, Dasari RR, Feld MS (2005) Hilbert phase microscopy for investigating fast dynamics in transparent systems. Opt Lett 30:1165–1167
16. Alexandrov SA, Hillman TR, Gutzler T, Sampson DD (2006) Synthetic aperture Fourier holographic optical microscopy. Phys Rev Lett 97:168102
17. Kim M, Choi Y, Fang-Yen C, Sung YJ, Dasari RR, Feld MS et al (2011) High-speed synthetic aperture microscopy for live cell imaging. Opt Lett 36:148–150
18. Lerosey G, De Rosny J, Tourin A, Fink M (2007) Focusing beyond the diffraction limit with far-field time reversal. Science 315:1120–1122
19. Choi W, Fang-Yen C, Badizadegan K, Oh S, Lue N, Dasari RR et al (2007) Tomographic phase microscopy. Nat Methods 4:717–719
20. Pitter MC, See CW, Somekh MG (2004) Full-field heterodyne interference microscope with spatially incoherent illumination. Opt Lett 29:1200–1202
21. Park Y, Choi W, Yaqoob Z, Dasari R, Badizadegan K, Feld MS (2009) Speckle-field digital holographic microscopy. Opt Express 17:12285–12292
22. Choi Y, Yang TD, Lee KJ, Choi W (2011) Full-field and single-shot quantitative phase microscopy using dynamic speckle illumination. Opt Lett 36:2465–2467
23. Dorokhov ON (1984) On the coexistence of localized and extended electronic states in the metallic phase. Solid State Commun 51:381–384
24. Shi Z, Genack AZ (2012) Transmission eigenvalues and the bare conductance in the crossover to Anderson localization. Phys Rev Lett 108:043901


25. van Putten EG, Vellekoop IM, Mosk AP (2008) Spatial amplitude and phase modulation using commercial twisted nematic LCDs. Appl Optics 47:2076–2081
26. Vellekoop IM, Mosk AP (2008) Universal optimal transmission of light through disordered materials. Phys Rev Lett 101:120601
27. Choi W, Mosk AP, Park QH, Choi W (2011) Transmission eigenchannels in a disordered medium. Phys Rev B 83:134207


Handbook of Photonics for Biomedical Engineering DOI 10.1007/978-94-007-6174-2_29-3 # Springer Science+Business Media Dordrecht 2014

Probing Different Biological Length Scales Using Photoacoustics: From 1 to 1000 MHz
Eno Hysi, Eric M. Strohm, and Michael C. Kolios*
Department of Physics, Ryerson University, Toronto, ON, Canada

Abstract
Photoacoustics (PA) is proving to be a versatile imaging modality combining features of ultrasonic and optical imaging, such as the high resolution of ultrasound (US) with the contrast of optical imaging, while reducing the fundamental depth limitation set by the photon mean free path. This chapter focuses on using PA as a tool for probing various biological length scales by adjusting the frequency of the detected PA signals. As the frequency increases, the PA imaging resolution increases, allowing for the examination of smaller length scales. More specifically, the ability of PA to characterize red blood cells (RBCs) is explored by analysis of the frequency-domain content of the PA signals rather than the PA signal strength typically used. RBCs are one of the most dominant sources of endogenous contrast in PA imaging, and the presented research has focused on probing the effect of RBC orientation, morphology, and pathology at various length scales. Finite element models were developed to investigate the effect of cell orientation and morphology on the features of the PA power spectra over 100 MHz, where significant differences in the simulated and measured power spectra for various RBC orientations have been observed. The shape used to model RBCs (biconcave, oblate ellipsoid, and sphere) was found to also significantly affect the features of the power spectra. A high-frequency (>100 MHz) photoacoustic microscope (PAM) was used alongside clinically relevant US detection frequencies to investigate the clinical feasibility of the technique. SO2 was high (>90 %) in the artery and low (60–80 %) in the vein. Blood flow speed was measured using PA Doppler bandwidth broadening (Fig. 13c); the average blood flow speeds of the artery and vein were 5.5 and 1.8 mm/s, respectively. MRO2 measurements obtained using OR-PAM can be used in early cancer detection (Fig. 13d, e).

PAT for Molecular Imaging
Differences in the intrinsic optical contrasts of substances (e.g., hemoglobin, lipid, melanin, and water) can provide contrast in PAT images without injecting any exogenous materials. However, the imaging depth is shallow when using these intrinsic contrasts because their optical absorption peaks lie in the visible range; the penetration

Handbook of Photonics for Biomedical Engineering DOI 10.1007/978-94-007-6174-2_30-1 # Springer Science+Business Media Dordrecht 2014

Fig. 13 Representative in vivo PAM images of a mouse ear showing (a) HbT, (b) SO2 in a dotted box, and (c) blood flow. (d) Depth-encoded PAM image of the mouse ear containing a tumor. (e) Top: MRO2 variation due to tumor growth in (d). Bottom: averaged SO2 inside and outside the tumor (Reprinted with permission from Ref. [61])

Fig. 14 PA images of sentinel lymph node of a rat. (a) Before injection, (b) 68 min and (c) 251 min after Cu2-xSe injection (Reprinted with permission from Ref. [68])

depth of visible light is significantly decreased by both scattering and strong optical absorption. To increase the imaging depth, several kinds of exogenous contrast agents, such as organic dyes, metallic nanostructures, and organic nanostructures, are used in PAT to increase its capability to detect chemicals in deep tissues [62]. Furthermore, for biological tissues that lack intrinsic optical absorption contrast, exogenous agents can increase the sensitivity, specificity, and contrast of PAT.



Organic Dyes
Organic dyes have been used for PAT in many biomedical applications. Dyes such as methylene blue (MB), lymphazurin blue (LB, isosulfan blue), and indocyanine green (ICG) have already acquired FDA approval for use in humans. MB and LB have been used for biomedical imaging [37–39, 63]. MB is used widely in biology, chemistry, and medicine; in particular, it is utilized to detect sentinel lymph nodes (SLNs) for determining metastasis of breast cancer. Though LB is the only dye approved by the FDA for SLN identification during breast surgery [64], MB is more suitable than LB for SLN biopsy because MB is widely available and inexpensive; moreover, LB causes several side effects such as allergic reactions and skin staining. ICG is nontoxic and water soluble and has its principal optical absorption in the NIR region; it penetrates ~5 mm into biological tissue [65]. ICG has received FDA approval for diagnosing abnormal cardiac, hepatic, and ophthalmic blood flow and function in humans. In addition, ICG can be used as a dual-modal imaging agent for PAT and fluorescence imaging owing to its moderate fluorescence quantum yield (e.g., ~10 % in dimethyl sulfoxide), and such agents can offer a longer circulation time (>4 h) than the iodine contrast agent iopromide. At longer wavelengths (>850 nm), the excited photosensitizer does not have sufficient energy to excite oxygen and produce reactive oxygen species (ROS); at shorter wavelengths it does.

For high-numerical-aperture (>0.8) air objectives, the thickness of the coverslip is crucial, and imperfections of the order of micrometers can impair the objective's performance. The correction collar allows the user to adjust the position of critical lenses inside the objective barrel and overcome any discrepancies in the thickness and refractive index of the coverslip. In practice the user needs a reasonable level of skill, as the focus position tends to shift during correction.
Tissue optical clearing, pioneered by Tuchin and coworkers, has also proved successful in improving image quality in tissue [3]. Here the tissue is immersed in an optical


Handbook of Photonics for Biomedical Engineering DOI 10.1007/978-94-007-6174-2_37-1 # Springer Science+Business Media Dordrecht 2014

Fig. 2 The Zernike polynomials to the fourth order illustrated as phase changes over a unit circle

Table 1 The Zernike polynomials mapped to traditional aberrations found in optical systems

Z_0^0    Piston
Z_1^1    y-axis tilt
Z_1^-1   x-axis tilt
Z_2^2    Astigmatism 45
Z_2^0    Defocus
Z_2^-2   Astigmatism 90
Z_3^3    Trefoil 30
Z_3^1    y-axis coma
Z_3^-1   x-axis coma
Z_3^-3   Trefoil 0
Z_4^4    Tetrafoil
Z_4^2    Second-order astigmatism 0
Z_4^0    Spherical aberration
Z_4^-2   Second-order astigmatism 45
Z_4^-4   Tetrafoil
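The modes in Table 1 can be evaluated with the standard Zernike radial-polynomial formula. The sketch below uses the unnormalized convention and is an illustration, not code from the chapter:

```python
import numpy as np
from math import factorial

def zernike(n, m, rho, theta):
    """Evaluate the Zernike polynomial Z_n^m on the unit disc
    (unnormalized convention, as in Table 1)."""
    ma = abs(m)
    R = np.zeros_like(rho, dtype=float)
    # Standard radial polynomial R_n^|m|(rho)
    for k in range((n - ma) // 2 + 1):
        c = ((-1) ** k * factorial(n - k)
             / (factorial(k)
                * factorial((n + ma) // 2 - k)
                * factorial((n - ma) // 2 - k)))
        R = R + c * rho ** (n - 2 * k)
    # Cosine terms for m >= 0, sine terms for m < 0
    ang = np.cos(ma * theta) if m >= 0 else np.sin(ma * theta)
    return R * ang

rho = np.linspace(0.0, 1.0, 5)
theta = np.zeros_like(rho)
print(zernike(2, 0, rho, theta))  # defocus: 2*rho**2 - 1
print(zernike(4, 0, rho, theta))  # spherical: 6*rho**4 - 6*rho**2 + 1
```

Evaluating each mode over a grid of (rho, theta) points on the unit circle reproduces the phase maps illustrated in Fig. 2.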




Fig. 3 A schematic diagram illustrating the principle of adaptive optics. (a) A planar wavefront is focused by the microscope system producing a spherical wavefront and an optimal focal spot; (b) when focusing through a dielectric interface, for example, a microscope slide, the focal spot is broadened and elongated; (c) when imaging through biological material with a planar wavefront, the resulting focal spot is further distorted; (d) the equal but opposite wavefront distortion is placed on the incoming beam resulting in a spherical wavefront at the focus and restoring the quality of the focal spot to optimal

clearing agent, which normally has a high refractive index similar to that of the scatterers present in the sample. As the optical clearing agent penetrates the extracellular spaces, the effects of scattering are reduced, leading to a more transparent sample. Depending on the tissue, it can take several weeks to produce a transparent sample ready for imaging. More recently, Combs et al. have demonstrated that by using a parabolic mirror to maximize collection efficiency in a multiphoton microscope, they are also able to considerably increase imaging depth. The parabolic mirror and additional collection optics increase the chance of collecting scattered photons, and they report a signal-to-noise enhancement of 8.9 [4, 5]. In this chapter we will focus on a technique called adaptive optics (AO) that was first used in optical astronomy to overcome the aberrations arising from the Earth's atmosphere [6]. Since the turn of the century, AO has been applied to several different microscopy modalities in order to correct for aberrations (see section "Applications and Implementation") [7–11]. AO aims to shape the wavefront of the incoming laser beam in such a way that it counteracts any distortion imposed by the aberrations in the optical path or the sample, hence improving the quality of the focal spot and ultimately the resolution of the system. A schematic can be seen in Fig. 3. At the crux of all AO systems is the method used to determine the wavefront correction required to effectively compensate for the aberrations present. The AO systems currently employed in microscopy can be grouped into direct wavefront sensing and indirect wavefront sensing techniques, each with their own advantages and disadvantages.
This chapter aims to give a flavor of the different approaches that have been implemented, the practicalities of including AO in an imaging system, and the level of imaging improvement that has been achieved in a variety of microscopy modalities.

AO in Astronomy
One of the greatest problems faced in ground-based optical astronomy is the distortion arising from the Earth's atmosphere, which impacts image resolution and quality [12]. These distortions originate from turbulence within the atmosphere, which, in turn, is caused by small temperature variations leading to micro-variations in both the density and the refractive index of the atmosphere. These small changes in refractive index (~10^-6) [6] over large distances (atmosphere depth ~100 km) can build up to cause large refractive index variations. This results in a significant reduction in image


resolution and signal intensity when imaging any celestial object from ground-based optical telescopes. Before the development of adaptive optic systems, telescopes were placed on top of mountains (e.g., Hawaii, La Palma) to reduce the degree of atmospheric turbulence, or into space (e.g., the Hubble Space Telescope) to remove atmospheric disturbance altogether. Therefore any solution which could overcome problems with astronomical seeing due to the atmosphere, as well as improving the detected signal (without increasing telescope mirror sizes), is of enormous benefit. The use of AO was first suggested in 1953 by H. W. Babcock [13], whereby, with the aid of a wavefront sensor and an adaptive element, atmospheric distortions could be detected and compensated. In his original system, Babcock suggested using a revolving knife-edge above an orthicon (a now obsolete television pickup tube) to act as a wavefront sensor, where the electrical signal output controlled the beam intensity of an electron gun. By aiming the spatially varying electron beam at a mirror coated with a thin layer of oil, it was proposed that the phase introduced by the change in thickness of the film would compensate for aberrations. Although the principle was sound, compensating for the turbulence of the atmosphere required several thousand changes per second, which was unachievable at the time. It was not until recent developments in electronics and computing, as well as in wavefront sensing and deformable mirrors, that dynamic aberration correction devices became a practical solution. In astronomy the degree of wavefront correction required is measured using a probe or beacon high in the atmosphere. The beacon is produced using a focused high-power pulsed laser beam to excite sodium atoms in the upper atmosphere.
By detecting the emission using a wavefront sensor, information on the atmospheric distortions from a particular region of sky can be measured and compensated for. This approach is often referred to as the “laser guide star” method.

General AO Systems
AO systems are typically composed of three main components [14] (see Fig. 4):

1. Wavefront sensor – measures the state of the system to be optimized. This can be performed either directly, using a form of wavefront sensor, or indirectly, by measuring a particular fitness parameter which is then sent to a control system.
2. Control system – interprets the signal sent from the sensor into a signal that can control the wavefront modulator and thus compensate for aberrations.
3. Wavefront modulator – the adaptive element that modulates the wavefront to correct for aberrations.

Any subsequent change to the wavefront modulator is detected by the sensor via the closed-loop feedback (Fig. 4) and incorporated in the computer control.
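The interplay of the three components in a closed loop can be sketched on a toy model. The vector "wavefront", the integrator control law, and the gain value below are assumptions for illustration, not a real AO controller:

```python
import numpy as np

# Minimal closed-loop AO sketch over a toy model (all values assumed):
# the "wavefront" is a vector of phase samples, the sensor measures the
# residual aberration after correction, and an integrator control law
# updates the modulator until the residual vanishes.
rng = np.random.default_rng(1)
n = 32
aberration = rng.normal(scale=1.0, size=n)  # static aberration to remove
correction = np.zeros(n)                    # wavefront modulator state
gain = 0.5                                  # integrator gain (<1 for stability)

for _ in range(20):
    residual = aberration - correction      # wavefront sensor measurement
    correction += gain * residual           # control system update

rms = np.sqrt(np.mean((aberration - correction) ** 2))
print(f"residual RMS after 20 loop iterations: {rms:.2e}")
```

With an integrator gain g, the residual shrinks by a factor (1 − g) per loop iteration, which is why the gain must be kept below one for stability.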

Wavefront Modulators
Wavefront modulators are used to spatially impose a phase change on the incoming wavefront. This can be achieved using various types of mirror and liquid crystal display technology. These devices work on the basis of introducing a path length variation or refractive index variation to alter the phase profile of the light incident on the device.




Fig. 4 Examples of two different AO feedback loops. (a) A direct system using a wavefront sensor and (b) an indirect AO with a form of fitness sensor

Deformable membrane mirrors (DMMs) or spatial light modulators (SLMs) are typically used as wavefront modulators (see Fig. 5 for a schematic of the two technologies). DMMs are essentially a thin reflective membrane above an array of electrostatic or magnetic actuators, where local path length changes can be imposed across the wavefront by altering the voltages applied to individual actuators [15]. The device is composed of a continuous membrane, and therefore phase/path length discontinuities are not possible. The maximum wavefront change possible, or "stroke," is device and manufacturer dependent, ranging from less than 10 μm to 50 μm total stroke, with smaller changes possible between neighboring actuators. The cost of the devices reflects the stroke available and ranges from ~€1.5 k for the smaller stroke devices to ~€22 k for the larger stroke devices; in




Fig. 5 (a) An electric field applied to a deformable membrane mirror exerts a local force on the membrane causing it to deform and change the optical path length of the incoming light beam. (b) An electric field applied across a spatial light modulator changes the orientation of the liquid crystal molecules and hence the effective local refractive index altering the phase of the incoming beam

many cases DMMs are the cost-effective option over SLMs. The majority of DMMs are bound at the edges, placing a restriction on the corrections achievable, and they are often "pull"-only devices. When using a "pull"-only device, it is common to start with a defocus offset on the DMM, corrected by a lens earlier in the system, allowing the user to effectively apply negative and positive aberration correction. The two important benefits of DMMs are their light efficiency and speed. In terms of light loss, they are equivalent to placing an extra mirror in the system and can be antireflection coated according to the wavelength being used to further reduce losses. The speed of DMMs is again manufacturer dependent, but commonly they operate at refresh rates of several hundred Hz up to a kHz, and therefore they are ideally suited to algorithm-based AO approaches that rely on the rapid sampling of possible wavefronts. SLMs are holographic, pixelated, liquid crystal devices (typically 512 × 512 pixels) that impose a phase change on the incoming wavefront. Local changes in electric field are applied to the liquid crystal, resulting in changes in refractive index and therefore phase. Using phase wrapping approaches, they can achieve a maximum phase change, or "stroke," of several π across the device and are, in this respect, more powerful than DMMs. It is also possible to produce phase discontinuities, and there are no issues associated with bound edges in the way there can be with DMMs, making an SLM a more flexible option. However, they can be lossy, with a zero-order diffraction efficiency of 65 % not uncommon, although with recent advances in the technology, manufacturers are now quoting closer to 95 %. Their update rate depends on the liquid crystal response time and also on the method for applying and determining the required hologram.
The SLM is often controlled via the computer graphics card as though it were a separate monitor, and therefore SLMs typically operate at ~50 Hz. SLM and graphics card technology is changing rapidly, and update rates of several hundred Hz are now being quoted for nematic liquid crystal devices.
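The phase-wrapping trick mentioned above can be illustrated in a few lines; the parabolic phase profile is an arbitrary example chosen for this sketch:

```python
import numpy as np

# Phase wrapping as used on SLMs (a minimal sketch): a phase profile
# spanning several pi is folded modulo 2*pi into the device's range;
# for monochromatic light the wrapped and unwrapped profiles have the
# same optical effect.
x = np.linspace(-1.0, 1.0, 201)
phase = 8 * np.pi * x**2            # several-pi "lens" profile
wrapped = np.mod(phase, 2 * np.pi)  # what is actually written to the SLM

print(wrapped.max() < 2 * np.pi)    # wrapped profile fits the device stroke
print(np.allclose(np.exp(1j * wrapped), np.exp(1j * phase)))
```

The wrapped pattern is a blazed Fresnel-lens-like structure; the residual losses quoted in the text arise because a real device quantizes and smooths these 2π phase jumps.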



Some optical microscopy experiments, for example, optical tweezers, routinely use SLMs, and therefore in these cases, to avoid introducing further optics, they are the natural correction device. An approach that uses both has also been reported, taking advantage of the large stroke of the SLM and the high speed of the DMM [16]. Whether you use an SLM or a DMM, you are likely to end up including additional optics in your light path to reimage the active region onto the back aperture of the microscope objective, and often onto the laser scanning device too. Care must be taken to expand and then reduce the size of your laser beam so that the full active region of the modulator and the back aperture of the objective are used. Most SLMs and DMMs rely on a beam size of ~12–15 mm, whereas objective back apertures and scanners are usually smaller. With each additional optic introduced, there is a greater risk of introducing further aberration into the optical system.

Wavefront Sensing
In order to determine the correction required to compensate for aberrations in the system, the wavefront must first be measured. This can be achieved either by measuring the surface profile of the wavefront directly or by indirectly inferring it using a form of fitness sensor (see Fig. 4).

Direct Wavefront Sensing
The three main types of direct wavefront sensing approaches are interferometric (e.g., shearing interferometry), geometrical (e.g., Shack-Hartmann [17]), and phase retrieval [18]. Phase retrieval is an algorithm-based approach for finding the phase solution to a measured amplitude function. Probably the most common form of wavefront sensor used in microscopy is the Shack-Hartmann wavefront sensor, which comprises a lenslet array in front of a CCD camera. For a plane, collimated incoming wave, an evenly spaced array of equal-sized spots should be formed on the camera, and any deviation from this can be attributed to aberration. For commercial systems the lenslet array and camera arrangement are pre-calibrated, and the software determines any deviation from a plane wavefront in terms of the contributions of the various Zernike polynomials. In a paper published in 2010, an SLM was used in an optical trapping configuration as both the wavefront correction device and a "virtual" wavefront sensing device [19]. The SLM was initially used to create a lenslet array which was focused by the objective lens onto the sample plane; from here the Zernike polynomials were determined, and then the appropriate correction was applied to the SLM. For a successful direct wavefront sensing approach (see Fig. 4a), a known reference point is required in the sample, similar to the laser guide star used in optical astronomy. An obvious laser guide star is not readily available in microscopy. To resolve this, Azucena et al. injected 1 μm fluorescent beads into a Drosophila embryo sample at the final stages of preparation and imaged them on a Shack-Hartmann wavefront sensor, similar to a laser guide star approach [20, 21]. The wavefront sensor was used in a closed-loop configuration along with a DMM, and they were able to restore the image of a bead at a depth of 100 μm into the embryo sample.
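The lenslet-array principle can be sketched numerically. The focal length, pitch, and defocus-like wavefront below are assumed values for illustration only, not a calibrated sensor model:

```python
import numpy as np

# Toy 1-D Shack-Hartmann model: each lenslet converts the local
# wavefront slope into a focal-spot displacement dx = f_lens * slope
# on the camera behind the lenslet array.
f_lens = 5e-3                       # lenslet focal length [m] (assumed)
pitch = 150e-6                      # lenslet pitch [m] (assumed)
n = 10
x = pitch * (np.arange(n) - n / 2)  # lenslet positions

# A defocus-like wavefront W(x) = x^2 / (2R) has local slope x / R.
R = 0.5                             # radius of curvature [m] (assumed)
true_slope = x / R

spot_shift = f_lens * true_slope    # spot displacements seen on the camera

# Reconstruction: slopes from the measured shifts, then the wavefront
# by summing slopes along the aperture (crude Riemann integration).
slope = spot_shift / f_lens
wavefront = np.cumsum(slope) * pitch
print(slope)
```

Commercial software performs the same slope-to-wavefront step but fits the slope field with Zernike polynomials instead of direct summation.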
Direct wavefront sensing techniques require enough light to detect the shape of the wavefront, which can be difficult to achieve when imaging deep into a biological sample due to the deterioration of the signal as well as the complexity of the aberrations present. In confocal systems, particularly when operating in fluorescence, the collected light may not be of sufficient intensity to make an accurate determination of the wavefront shape.



Feierabend et al. removed the need for a laser guide star by looking at the backscattered light from their highly scattering samples [22]. They were able to distinguish in-focus and out-of-focus light using coherence-gated and phase-shifting interferometry, similar to optical coherence tomography. This works on the principle that unscattered light from the focus has a different arrival time to out-of-focus light that has undergone multiple scattering events. The phase-shifting interferometer not only extracts the in-focus light but also provides information about its phase, and from here the wavefront can be determined. At this point the wavefront contains information equivalent to that which would be retrieved from a laser guide star, albeit without the use of fluorescence, and a Shack-Hartmann-type approach can be used to express the aberrations in terms of Zernike modes.

Indirect Wavefront Sensing
Indirect wavefront sensing usually involves an iterative search routine which alters the shape of the incoming wavefront and maximizes or minimizes a particular property of the image chosen to be directly related to image quality or resolution (i.e., a fitness sensor), for example, intensity. The different approaches can be split into modal or zonal sensing depending on the basis set used for the search routine. For example, a modal approach might use the Zernike polynomials as the basis set, taking advantage of the orthogonality of the modes, which allows each mode to be corrected independently of the others. The modal approach relies on being able to accurately reproduce the required modes without any cross talk. In a zonal sensing approach, the basis set could, for example, be each actuator on a DMM, and here the optimization algorithm becomes increasingly important since it determines how quickly and effectively the search space is explored. In the zonal approach, the speed at which the wavefront can be changed is important, and hence it is usually best suited to systems using a DMM. Both techniques are discussed in more detail in the subsequent sections. Owing to the difficulties of direct wavefront sensing, the majority of research groups have pursued indirect wavefront sensing approaches.

Modal Sensing
The modal approach to indirect wavefront sensing, sometimes referred to as sensor-less wavefront sensing, was pioneered by the Oxford group led by Wilson and Booth [10, 23]. As explained in section "Introduction," a wavefront can be described in terms of a set of modes (e.g., Zernike) which are orthogonal to each other; see Eq. 1. For each mode, equal amounts of negative and positive correction are applied to the wavefront, which when focused lead to two focal spots of differing intensities that can be measured using a confocal pinhole.
The difference in intensity (V_k^+ − V_k^−) between these two spots is proportional to the amount, c_k, of that particular mode present on the original aberrated wavefront:

c_k = (V_k^+ − V_k^−) / V_0    (6)

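The biased-measurement estimator behind Eq. 6 can be exercised on a toy model. The random orthonormal "modes", the quadratic (Maréchal-like) intensity metric, and the bias amplitude are all assumptions of this sketch; under that metric the scheme recovers the coefficients up to a calibration constant of 4 × bias:

```python
import numpy as np

# Toy sensor-less modal correction: the pinhole intensity is modelled
# as V = V0 * (1 - |phi|^2), a Marechal-like metric valid for small
# aberrations (an assumption of this sketch, not the chapter's setup).
rng = np.random.default_rng(2)
n_modes, n_pix = 5, 64

# Orthonormal "Zernike-like" basis: random orthonormal rows via QR.
basis, _ = np.linalg.qr(rng.normal(size=(n_pix, n_modes)))
modes = basis.T                       # modes[k] has unit norm

true_c = np.array([0.05, -0.03, 0.02, 0.04, -0.01])
phi = true_c @ modes                  # aberrated wavefront

V0 = 1.0
def intensity(w):
    return V0 * (1.0 - np.sum(w ** 2))

bias = 0.1
est_c = np.empty(n_modes)
for k in range(n_modes):
    v_plus = intensity(phi + bias * modes[k])   # positive trial wavefront
    v_minus = intensity(phi - bias * modes[k])  # negative trial wavefront
    # Proportionality of Eq. 6, with the calibration constant 4*bias:
    est_c[k] = (v_minus - v_plus) / (4 * bias * V0)

print(est_c)  # matches true_c under this quadratic metric
```

Because the modes are orthonormal, the k-th biased measurement is insensitive to every other coefficient, which is exactly the independence property the text relies on.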
Since the modes are orthogonal, their individual contributions to the final corrected shape can be measured independently of each other. Each mode is applied sequentially and the level of that mode present is measured, allowing the final combined correction to be determined. The great benefit of this approach is the speed at which you can arrive at the required correction, since by considering only orthogonal modes the search space is greatly simplified. The user decides the number of modes they wish to correct for, and for each mode two trial wavefronts are required, plus a measure of the intensity of the focal spot when no correction has been applied, V0. Booth et al. used this routine to correct for 7 Zernike modes (including astigmatism, coma, trefoil, and first-order spherical) in a


fluorescence confocal image of mouse intestine and found that an optimum result was achieved after running through the cycle twice, leading to a total of 28 scans [10]. When using an SLM for indirect modal sensing, the whole process can be made even more time efficient by splitting the wavefront into n paths which are then modified by aberration plates. Reducing the time taken to reach a correction is of particular importance for applications where sample photodamage is an issue. Important in any form of indirect modal wavefront sensing is the accurate representation of the modes. When using a DMM with a small number of actuators and applying analytical modes like Zernike or Lukosz modes, this is a nontrivial task. Wang et al. address this issue by deriving an alternative modal basis set directly from the actuator influence functions, hence avoiding any approximation errors, with their simulations showing a significant improvement [24].

Zonal Sensing
Zonal sensing approaches rely on no prior knowledge of the sample or the modal wavefront components; instead, for example, they work with the set of actuators present on the DMM.

Optimization Algorithms
One common form of zonal sensing uses a DMM with an optimization algorithm. Here the search space is typically more complicated since it has not been reduced via the use of orthogonal modes; it is therefore important to be able to sample a wide range of wavefronts very quickly. A property of the image, directly related to image quality and/or resolution, is selected as the merit factor, and an optimization algorithm is used to either maximize or minimize this property and hence remove aberrations and improve image quality. In order for such an approach to be successful, the choice of merit factor and optimization algorithm has to be carefully considered, particularly in terms of speed and the level of correction required.
The downside of this approach is often the time taken to complete an optimization and the amount of sample photon exposure that occurs during this time. In 2005 Wright et al. looked at a range of optimization algorithms, from stochastic random search algorithms to evolutionary genetic algorithms, in order to determine which worked best in a confocal microscope arrangement [25]. The algorithms were compared in terms of the number of iterations (or time taken) to complete an optimization and the level of improvement in axial resolution achieved. A hill-climbing algorithm that tried each actuator in turn was the quickest but achieved limited improvement since it is a local search technique with a tendency to get stuck on a local maximum; see Fig. 6. The genetic algorithm, based on evolution, started with a random population of DMM shapes and used the individual actuators to represent the genes. The genetic algorithm provided a global search and therefore was best in terms of improvement factor but was poor in terms of the number of iterations required. The authors concluded that a random search optimization, where an actuator is selected at random, changed by a random amount, and the change accepted or rejected depending on its influence on the merit factor, provided a reasonable level of improvement in the fastest time. The intensity of a point in the image, or the average intensity over a small region of interest, is typically used as the merit factor for optimization routines. Although intensity does not directly link to resolution, the effect of "asking" the algorithm to concentrate all the photons and energy into a small region of the image normally results in an increase in resolution and image quality. Different merit factors have been explored, for example, optimizing on contrast or directly on the width of the point spread function and hence resolution, but the major limitation with these approaches is the time taken to determine the merit factor [26]. The intensity of a point in the sample, by contrast, can be read out almost instantaneously.
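The random search strategy described above can be sketched in a few lines; the mirror model, actuator count, step size, and toy merit function below are hypothetical stand-ins for a real DMM and detector readout.

```python
import random

def random_search(measure_merit, n_actuators=37, n_iterations=200,
                  max_step=0.1, seed=0):
    """Zonal random-search optimization of DMM actuator values.

    measure_merit: callable mapping a list of actuator values to a scalar
    merit factor (e.g. the intensity at a point in the image).
    """
    rng = random.Random(seed)
    shape = [0.0] * n_actuators            # start from a flat mirror
    best = measure_merit(shape)
    for _ in range(n_iterations):
        i = rng.randrange(n_actuators)     # pick one actuator at random
        delta = rng.uniform(-max_step, max_step)
        shape[i] += delta                  # perturb it by a random amount
        merit = measure_merit(shape)
        if merit >= best:
            best = merit                   # keep the change if the merit improved
        else:
            shape[i] -= delta              # otherwise reject it
    return shape, best

# Toy merit function: "intensity" peaks when every actuator sits at 0.5.
target = [0.5] * 37
merit = lambda s: -sum((a - b) ** 2 for a, b in zip(s, target))
shape, best = random_search(merit)
```

The accept-or-reject step is what distinguishes this from a pure random walk: only perturbations that do not decrease the merit factor survive, so the mirror shape drifts toward the optimum.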




Fig. 6 An example search space showing solution versus merit factor for a particular problem, highlighting the difference between a local and a global optimization: B represents the global maximum (i.e., the solution giving the highest merit factor), and A and C represent local solutions. The final outcome for an optimization algorithm will depend on whether a local or global algorithm is implemented as well as on the initial starting point (1, 2, 3, or 4). For example, if left to run long enough, a global algorithm such as the genetic algorithm, with the potential to search the entire search space, will return point B as the solution, whereas a local algorithm like a hill-climbing algorithm would return A if starting at point 1 or B if starting at point 3

A recent paper [27] has addressed the issues of photo-bleaching and optimization time when using the optimization algorithm approach. One approach presented was to create look-up tables of DMM shapes which could be determined using a reference sample and then applied as required to the sample of interest. Müllenbroich et al. demonstrated that the look-up tables could be determined by optimizing on a second-harmonic signal, which does not suffer from photo-bleaching, and the shape then applied to multiphoton imaging to further reduce photodamage. All these approaches have their merits and, depending on the sample and the information required, can help to considerably reduce sample exposure time. Increased sample exposure time, and therefore increased risk of photodamage, is the downside of all AO systems and something the user needs to be aware of when implementing such approaches.

Pupil Segmentation
In 2010 a group based at Janelia Farm introduced a new zonal indirect wavefront sensing approach in which they split the rear pupil of the microscope objective into individual beamlets using an SLM [28, 29]. In an ideal situation, without aberration and sample inhomogeneities, a diffraction-limited focus arises from all light rays entering the rear pupil and intersecting at a common point with a common phase inside the sample. Local changes in refractive index in the sample act to redirect these rays as well as shift their phase relative to each other. The SLM is divided into N subregions (typically N < 100), leading to N beamlets at the rear focal plane. Each beamlet can then be considered individually, with a continuous phase ramp applied to the separate subregions of the SLM in order to alter the angle of the beam and correct for position. The authors proposed and demonstrated two separate methods for correcting for phase: the direct measurement method and the phase reconstruction method.
The direct measurement approach turns "on" one of the beamlets and uses this as a reference from which to correct the phase of the other beamlets, taking a series of images for each beamlet with a variety of phase offsets in order to maximize the signal at the focal point. For the phase reconstruction method, information gained about the beam deflection required for each beamlet provides a map of phase gradients across the pupil, and from here the phase itself is extracted using an iterative algorithm. The main hurdle to overcome with this approach is power limitations arising from splitting the beam into multiple beamlets. This is particularly true when working with a nonlinear signal, e.g., a multiphoton signal, where the signal intensity is proportional to the square of the incident power. Reducing



the number of beamlets, increasing laser beam power, and increasing pixel integration time can all help here, but ultimately there will be a limit placed on the number of beamlets possible, and this limit will decrease with imaging depth. Sample photon exposure can be reduced by using the phase reconstruction method and by limiting the number of beamlets. Compared to the optimization algorithm method described above, there is a reduced risk of photodamage since fewer steps are needed and images are taken rather than the beam being parked on a specific point for a prolonged period of time. Like the previous indirect methods described above, intensity from the focal point is used as a measure of the correction required; this was read initially from a fluorescent bead present in the sample and later from a "bright" micron-sized feature in the sample.
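The phase reconstruction step can be illustrated with a toy zonal example: given phase gradients measured between neighbouring pupil subregions, the phase map is recovered (up to an arbitrary piston term) by least squares. The grid size and smooth test aberration below are arbitrary, and a simple least-squares solve stands in for whatever iterative algorithm the authors actually used.

```python
import numpy as np

# Toy zonal phase reconstruction: recover a phase map from x/y phase
# gradients measured over an N x N grid of pupil subregions.
N = 8
yy, xx = np.mgrid[0:N, 0:N]
true_phase = 0.3 * (xx - N / 2) ** 2 / N   # hypothetical smooth aberration

gx = np.diff(true_phase, axis=1)           # "measured" gradients between
gy = np.diff(true_phase, axis=0)           # neighbouring subregions

# Build the finite-difference operator D so that D @ phi = g.
def idx(r, c):
    return r * N + c

rows, g = [], []
for r in range(N):
    for c in range(N - 1):
        row = np.zeros(N * N); row[idx(r, c + 1)] = 1; row[idx(r, c)] = -1
        rows.append(row); g.append(gx[r, c])
for r in range(N - 1):
    for c in range(N):
        row = np.zeros(N * N); row[idx(r + 1, c)] = 1; row[idx(r, c)] = -1
        rows.append(row); g.append(gy[r, c])
D, g = np.array(rows), np.array(g)

# Least squares fixes phi up to a constant (piston), which is irrelevant.
phi, *_ = np.linalg.lstsq(D, g, rcond=None)
phi = phi.reshape(N, N) - phi.mean()
centred = true_phase - true_phase.mean()
print(np.allclose(phi, centred, atol=1e-8))   # True: phase recovered up to piston
```

The piston ambiguity is fundamental: gradients carry no information about the absolute phase offset, which is why the comparison is done after removing the mean.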

Predetermined Aberrations
It is generally accepted that the bulk aberration created by biological tissue is spherical aberration, which is generated by the average refractive index mismatch. Spherical aberration can be described by a Zernike polynomial that is perfectly symmetric with respect to rotation around the optical axis. This type of aberration is therefore only accurately generated by layers of differing refractive index without any lateral inhomogeneities. This model, termed the "stratified medium," has been well documented in the published literature as the origin of spherical aberration [30]. Several groups have used this approach as a route for predetermining the aberration correction required as the user images deeper into the sample [31, 32]. Although an enhancement in signal and image quality is observed, the result often falls short of the predicted outcome, mainly because the model does not perfectly reproduce the optical properties of biological tissue. Biological tissue shows a complex axial and lateral variation of refractive index and only converges toward the stratified medium model when the local inhomogeneities of refractive index are averaged over large lateral areas. Since the stratified medium is only an approximation of biological tissue, and because the Zernike polynomials are orthogonal, any aberration that is not made up entirely of spherical aberration must contain components of other Zernike modes.
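The rotationally symmetric Zernike term referred to here is primary spherical aberration, which in the standard normalization is Z(4,0)(ρ) = √5 (6ρ⁴ − 6ρ² + 1), depending on the pupil radius ρ only. A minimal evaluation over the unit pupil:

```python
import numpy as np

def zernike_spherical(rho):
    """Primary spherical aberration Z(4,0), normalized so its RMS over the
    unit pupil is 1; it depends only on radius rho (no azimuthal term)."""
    return np.sqrt(5.0) * (6.0 * rho**4 - 6.0 * rho**2 + 1.0)

rho = np.linspace(0.0, 1.0, 5)
print(zernike_spherical(rho))
# At the pupil centre (rho = 0) and edge (rho = 1) the polynomial equals sqrt(5);
# it dips negative at intermediate radii, giving the familiar W-shaped profile.
```

Because the polynomial has no azimuthal dependence, a stack of flat refractive-index layers, which is rotationally symmetric about the optical axis, can generate this mode but none of the azimuthally varying ones.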

Applications and Implementation
Due to the development of inexpensive wavefront modulating systems, AO has been incorporated successfully into a wide variety of microscopy modalities, using different types of implementation and achieving a range of improvements. A number of these applications will be discussed in more detail, concentrating first on fluorescence imaging systems, namely, confocal and multiphoton microscopy; then on forms of nonlinear microscopy that use an intrinsic property of the sample as the contrast mechanism, e.g., second-harmonic imaging microscopy; and finally on super-resolution techniques such as stimulated emission depletion microscopy (STED).

Confocal and Multiphoton Microscopy: Fluorescence Imaging
In confocal microscopy, the use of a pinhole in front of the detector rejects out-of-focus light, increasing the ratio of the desired signal to unwanted background and producing images with greatly improved contrast and, in theory, diffraction-limited resolution. A 2D image can be formed by either scanning the beam across the sample or scanning the sample itself, and a 3D image stack is formed by then stepping the sample or the objective in the axial direction. Confocal microscopy is more often than not used in fluorescence mode, with the pinhole rejecting the out-of-focus fluorescent light


from the sample. It has become the workhorse of many life science laboratories, providing valuable insight into the inner workings of cell design and function. Multiphoton microscopy functions through the nonlinear excitation of fluorophores, resulting in fluorescence within a small volume of the sample. Typically, two low-energy, long-wavelength photons are absorbed simultaneously to produce a single higher-energy, shorter-wavelength fluorescence photon. The probability of this occurring is very low, so in order for the two excitation photons to be absorbed simultaneously, a high photon density is needed. Hence fluorescence is only generated at the focus of the laser beam, removing the need for a confocal pinhole and making the technique inherently optically sectioning. While the fluorescence is collected in the visible, the use of longer near-infrared excitation wavelengths allows for deeper imaging since the effects associated with scattering are greatly reduced. Fluorescence, confocal, and multiphoton microscopies are common in many life science laboratories and are used routinely in in vitro and in vivo experiments on cells and tissue. All of the early work on AO in microscopy concentrated on these imaging systems, and indeed many of the more recent publications are based on improving these techniques and showing further enhancements in image quality and resolution. One of the first experimental feasibility studies of AO in microscopy used a multiphoton microscope, measured the aberrations using a wavefront sensor, and employed a spatial light modulator as the correction device [33]. The first practical implementation of adaptive optics in multiphoton microscopy implemented a hill-climbing algorithm and optimized the shape of a deformable membrane mirror using intensity as the merit factor [11]. Using a 1.3 NA, 40× oil immersion objective and imaging in water (to induce a refractive index mismatch), the maximum scanning depth was increased from 3.4 to 46 μm. In 2006 Rueckel et al.
[34] incorporated a coherence-gated approach for wavefront sensing with an AO mirror into a closed-loop mechanism. Only light that had been scattered close to the focus was measured to determine the wavefront. They showed that this approach could correct for aberrations up to a depth of 200 μm in zebrafish larvae. Applying the pupil segmentation approach detailed in section "Pupil Segmentation" to multiphoton microscopy in 2011, Ji et al. recovered diffraction-limited resolution at depths of 450 μm in brain tissue and improved image quality over fields of view of hundreds of microns [29].
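Because two-photon fluorescence scales with the square of the focal intensity, even a modest aberration-induced loss of peak intensity is amplified in the detected signal. A minimal illustration, with arbitrary Strehl-ratio values:

```python
def two_photon_signal(strehl):
    """Relative two-photon signal when aberrations reduce the peak focal
    intensity to a fraction `strehl` of its diffraction-limited value.
    The signal scales roughly as intensity squared."""
    return strehl ** 2

# Halving the peak focal intensity quarters the two-photon signal:
for strehl in (1.0, 0.8, 0.5, 0.3):
    print(f"Strehl {strehl:.1f} -> relative signal {two_photon_signal(strehl):.2f}")
```

This quadratic penalty is one reason AO correction yields such visible gains in multiphoton imaging: restoring the focal spot recovers the signal quadratically rather than linearly.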

Intrinsic Nonlinear Techniques (SHG, THG, and CARS)
Second-harmonic generation (SHG), third-harmonic generation (THG), and coherent anti-Stokes Raman scattering (CARS) microscopy are all types of nonlinear imaging modalities that are used frequently in biological imaging. SHG and THG are special cases of sum-frequency generation in which photons interact with a particular nonlinear material, for example collagen, to form signal photons with two or three times the energy, respectively. CARS is a four-wave mixing process in which a Stokes beam at frequency ωS and two pump photons at frequency ωp interact at the sample to produce an anti-Stokes beam at frequency ωas = 2ωp − ωS. By tuning the beat frequency of the Stokes and pump beams to coincide with the frequency of a specific molecular vibration, the oscillation is driven coherently, and a particular chemical vibration can be targeted. Like multiphoton microscopy described above, all these techniques require a high peak power and high photon density, which is only available at the focal volume, giving optical sectioning capabilities and improved axial and lateral resolution. The reliance on a high photon density and a "good quality" focus makes such techniques highly susceptible to aberrations and therefore makes the implementation of AO beneficial. With coherent nonlinear microscopies such as SHG, THG, and CARS, signal levels also depend on the phase distribution near the focal volume [35]. Since these techniques all arise from the intrinsic properties of the material, there is no need to fluorescently label the sample and therefore no risk of photo-



bleaching; in addition, fluorophores can introduce unwanted sample perturbations, and labeling can be hard to achieve in some samples. Much of the work on AO in SHG and THG microscopy has looked at improving the quality of images showing embryo development and in particular highlighting the dynamic nature of this process. The THG signal typically shows the cellular and subcellular structures of the embryo, whereas the SHG signal highlights the zona pellucida (the membrane surrounding the oocyte) [36]. In 2009 Olivier et al. proposed a modal correction scheme for determining the DMM shape required using the 11 most influential eigenmodes of the DMM excluding tip, tilt, and defocus, thus allowing the correction phase to be accurately produced by the DMM. They used the sharpness of a small region of interest as their merit factor and showed a 2.5× increase in sample illumination [37]. Also in 2009, Jesacher et al. imaged a mammalian embryo using SHG and THG, incorporating AO into their system. They used a modal correction scheme, akin to that detailed in section "Modal Sensing," but using the Zernike modes and intensity as their figure of merit, and reported signal increases of 21 % and 9 % for SHG and THG, respectively. Interestingly, they were able to confirm that the correction achieved when looking at an SHG signal was similar to that achieved when looking at a THG signal, suggesting that both modalities are affected by the same aberrations [36]. AO has been incorporated into a CARS microscope using a search algorithm to optimize the intensity of a point in a sample, with a look-up table providing a good starting wavefront in order to improve the rate of optimization. The look-up table had been previously determined using a similar tissue sliced to a variety of thicknesses and also included a correction shape for the system-induced aberrations, determined by focusing on a coverslip.
This paper demonstrates the significance of look-up tables as a possible practical means of incorporating adaptive optics in an in vivo biological imaging situation. Figure 7 shows sample CARS images with and without AO, and with only the system-induced aberrations corrected, at a depth of 260 μm in chicken muscle.
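The anti-Stokes frequency relation ωas = 2ωp − ωS translates into wavelengths as 1/λas = 2/λp − 1/λS. A quick check with illustrative pump and Stokes wavelengths (the 800 nm / 1064 nm pair is an example, not taken from the paper):

```python
def anti_stokes_wavelength(pump_nm, stokes_nm):
    """Anti-Stokes wavelength from 1/lambda_as = 2/lambda_p - 1/lambda_s,
    the wavelength form of omega_as = 2*omega_p - omega_s."""
    return 1.0 / (2.0 / pump_nm - 1.0 / stokes_nm)

# e.g. an 800 nm pump combined with a 1064 nm Stokes beam:
print(round(anti_stokes_wavelength(800.0, 1064.0), 1))  # ~641.0 nm
```

The anti-Stokes signal emerges blue-shifted from both input beams, which is what allows it to be separated spectrally from the excitation light.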

Single-Molecule Imaging
Single-molecule super-resolution techniques such as STORM [39] (stochastic optical reconstruction microscopy) and PALM [40] (photoactivated localization microscopy) obtain super-resolution through the localization of individual fluorescent emitters, using a centroiding algorithm (e.g., 2D Gaussian fitting) to accurately determine each emitter's position. Both techniques require each fluorophore to switch on and off individually in order to ensure that the set of molecules fluorescing at any particular time is sparse enough to determine each position with sufficient accuracy. Since the precision with which the location of an isolated emitter can be determined is limited by the number of detected photons (Eq. 7), any improvements made to reduce systematic aberrations will improve the resolution and the speed at which a data set can be collected:

Localization precision ≈ FWHM_PSF / √N

(7)

where the localization precision depends on the full width at half maximum (FWHM) of the point spread function (PSF) and the number of photons N used in the fit. By applying a known amount of astigmatism to the beam, one can infer the position of a single point emitter along the optical axis from the shape of the point spread function. Initially this was performed using a cylindrical lens [7]. Since then, further improvements in axial resolution have been achieved through PSF engineering, using a spatial light modulator with a diffractive phase pattern to generate a rotating double-helix intensity pattern along the axial direction [41].
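Equation 7 can be sketched numerically; the PSF width and photon counts below are illustrative values, not measurements from any of the cited work.

```python
import math

def localization_precision(fwhm_psf_nm, n_photons):
    """Approximate localization precision (Eq. 7): the FWHM of the PSF
    divided by the square root of the number of detected photons."""
    return fwhm_psf_nm / math.sqrt(n_photons)

# A ~250 nm diffraction-limited PSF localized with 1000 photons:
print(round(localization_precision(250.0, 1000), 1))  # ~7.9 nm
```

The square-root dependence explains why aberration correction helps twice over: a tighter PSF shrinks the numerator, and a brighter spot raises N in the denominator.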


Fig. 7 Adipose globular deposit imaged with CARS microscopy at 260 μm depth in white chicken muscle. Top: comparison of images (a) without correction, (b) with the system-induced aberrations corrected, and (c) with the system- and sample-induced aberrations corrected. The arrow marks the point on the sample where the beam was parked and the intensity optimized during the correction process. Bottom: line intensity plots of the three images for comparison [38]




Stimulated Emission Depletion Microscopy (STED)
Stimulated emission depletion microscopy (STED) [42] combines a depletion beam and an excitation beam which overlap spatially and temporally. The excitation beam normally has a Gaussian intensity profile, but the depletion beam is doughnut shaped with a region of zero intensity at its center. The depletion beam de-excites all but a small region within the center of the excitation beam which is smaller than the diffraction limit. By then collecting only light from within this region, one effectively surpasses the diffraction limit imposed by Abbe (Eq. 1), and super-resolution is achieved. Here super-resolution typically only occurs in the lateral direction, and although there are methods to attain 3D super-resolution, these are limited to thin samples due either to the complexity of the system or to aberrations. In an attempt to overcome this issue, Klar et al. used an SLM to create a phase mask that forms a beam with a ring-shaped focus and additional high-intensity lobes both above and below the central zero-intensity region, thus providing super-resolution in the axial as well as the lateral directions [43]. The approach by Klar et al. proved successful on thin samples, but the novel beam was highly susceptible to aberrations and therefore not suitable for imaging thick samples [44]. AO was first implemented in STED by Gould et al. in 2012 in a 3D STED system similar to that described above, with an intensity maximum above and below the central doughnut beam. Their approach was based on the modal approach described in section "Modal Sensing," using Zernike modes (minus tip, tilt, and defocus) to express the wavefront, but with two SLMs: one in the excitation beam and the other already present in the depletion beam but now used to correct for aberrations as well as to form the 3D zero-intensity region.
Image brightness proved to be insensitive as a figure of merit since, when the two beams no longer overlap, a traditional confocal image with a higher average intensity is formed as opposed to a STED image. A new metric was therefore defined that incorporated both the image intensity and the image sharpness. First the system was used in standard confocal mode and the aberration correction in the excitation path determined using intensity as the merit factor; then the imaging system was converted to STED mode and the new combined metric applied. Figure 8 shows the success of this technique when applied to imaging fluorescent beads through zebrafish retina ~14 μm thick. In a second, more recent publication, the same team demonstrated that by applying a similar method but correcting for only tip, tilt, and defocus, they can improve the spatial alignment and overlap of the excitation and depletion beams [45].
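A combined brightness-and-sharpness metric of this kind can be sketched as follows; the exact functional form used by Gould et al. is not reproduced here, so this normalized product is only an illustrative stand-in.

```python
import numpy as np

def combined_metric(image):
    """Illustrative STED figure of merit: total brightness weighted by a
    sharpness term (mean squared intensity gradient), so that a bright but
    blurred confocal-like image does not score higher than a sharp STED one."""
    image = image / image.max()
    brightness = image.sum()
    gy, gx = np.gradient(image)
    sharpness = np.mean(gx**2 + gy**2)
    return brightness * sharpness

# A sharp spot should beat a uniformly bright (featureless) frame:
sharp = np.zeros((32, 32)); sharp[16, 16] = 1.0
blurred = np.full((32, 32), 0.5)
print(combined_metric(sharp) > combined_metric(blurred))  # True
```

The key design point is that brightness alone rewards the misaligned (confocal-like) state, whereas multiplying in a gradient term makes the metric fall to zero for a featureless frame.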

Selective Plane Illumination Microscopy (SPIM)
Selective plane illumination microscopy (SPIM), otherwise known as light-sheet microscopy, is an imaging modality in which the sample is illuminated from the side with a thin plane of illumination and the fluorescence is collected perpendicularly to this plane using a normal microscope objective lens with a wide-field camera [47]. The thin illuminating plane is usually formed using a cylindrical lens, but more recently scanning laser techniques utilizing Bessel beams have been used to reduce the effects of scattering and substantially improve axial resolution [48]. Bourgenot et al. utilized the indirect modal sensing approach employing Zernike modes (section "Modal Sensing") to correct for aberrations when imaging a zebrafish within a borosilicate glass pipette [49]. They altered the improvement metric used depending on the level of aberrations present: for highly aberrated regions of their sample, image contrast was used as the measure, and when relatively low levels of aberration were present, they looked to maximize the high spatial frequencies of the image. A significant level of improvement was achieved, as can be seen in Fig. 9a–c, although it is clear from Fig. 9b, e that in this particular case they are correcting mostly for aberrations introduced by the optical system rather than by the sample itself.
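The two image metrics used by Bourgenot et al., contrast for strongly aberrated regions and high-spatial-frequency content otherwise, can be sketched as follows; the definitions below are common choices rather than necessarily the exact ones used in [49], and the cutoff frequency is arbitrary.

```python
import numpy as np

def contrast_metric(image):
    """Simple image contrast: standard deviation over mean intensity."""
    return image.std() / image.mean()

def high_frequency_metric(image, cutoff=0.2):
    """Fraction of spectral energy above a radial spatial-frequency cutoff
    (in cycles per pixel)."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    fy = np.fft.fftshift(np.fft.fftfreq(image.shape[0]))
    fx = np.fft.fftshift(np.fft.fftfreq(image.shape[1]))
    fr = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    return spectrum[fr > cutoff].sum() / spectrum.sum()

# A structured image scores higher on both metrics than a flat one:
yy, xx = np.mgrid[0:64, 0:64]
structured = 1.0 + 0.5 * np.sin(2 * np.pi * xx / 4.0)
flat = np.ones((64, 64))
print(high_frequency_metric(structured) > high_frequency_metric(flat))  # True
```

Switching between the two reflects a practical trade-off: contrast stays informative even when fine detail is washed out, while the high-frequency measure is the more sensitive indicator once the image is close to corrected.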



Fig. 8 AO STED images of fluorescent beads through zebrafish retina sections. (a–f) Results for beads imaged through ~14 μm of retina. Lateral and axial sections of a single fluorescent bead imaged in (a, d) confocal, (b, e) STED, and (c, f) AO STED show improvement in signal and resolution when adaptive aberration correction is applied to the depletion beam path. (g–l) Similar image sequences for beads imaged through ~25 μm of retina. Axial profiles of beads in AO STED images were ~208 and ~249 nm for (f) and (l), respectively. Color bar in (e) also applies to (b), (c), and (f). Color bar in (k) also applies to (h), (i), and (l). (m–o) Volume renderings of the data shown in (a–f) for (m) confocal, (n) STED, and (o) AO STED data. (n) and (o) are plotted on the same color scale for comparison of signal [46]

Optical Trapping
Optical trapping is a microscopy technique whereby micron-sized objects and cells can be manipulated in three dimensions at the focus of a laser beam. Optical trapping systems are often built around commercial microscopes and require a laser beam focused by a high numerical aperture objective lens, typically an oil immersion lens with a numerical aperture of 1.3. The optical trapping force is a direct result of the tightly focused light and is proportional to the intensity gradient of the beam; therefore any deterioration in the focal spot greatly impacts the trapping force available. This often places limits on the depth at which it is possible to trap a particle and also on the minimum size of particle that can be trapped [50]. In the case of optically trapped cells, users are often forced to increase the power of their laser beam to increase the optical trapping force, leading to high levels of sample photon exposure. For small displacements x from equilibrium, an optical trap can be likened to a mass on a spring, with a restoring force F acting to bring the particle back to its equilibrium position (F = −kx). Here k is the constant of proportionality, referred to either as the spring constant or as the trap strength, giving a measure of how strongly or weakly an object is trapped. As an object is trapped deeper into a sample, k noticeably decreases due to the presence of aberrations and a broader focal spot. Many optical trapping systems are holographic systems incorporating an SLM into the microscope optics. The ability to have complete control over the phase of the input beam has allowed



Fig. 9 AO in SPIM with a zebrafish in a borosilicate glass pipette. (a, b, c) Images taken with a flat mirror shape, a mirror shape optimized for the system aberrations, and a mirror shape optimized directly on the fish. The white square corresponds to the region of interest on which the optimization is performed and is 24 μm wide. (d) The metric, normalized to the uncorrected values during the z-stack, as a function of imaging depth, when the mirror is flat (blue) and optimized (black). The green vertical lines correspond to where the mirror has been optimized. The purple vertical line shows where the region of interest has been moved. (e) The Zernike mode amplitudes at different depths. Mode 4 and mode 6 are focus and astigmatism, respectively [49]

people to trap multiple objects in three dimensions with independent control over the objects' positions, to trap large arrays of objects, and also to spin and rotate objects by using beams carrying angular momentum. In the early models, the SLM itself was often the cause of considerable aberration, and therefore an initial correction would be applied to the system to allow for this [51]. SLM technology has improved considerably over the last decade, and the devices already present in optical trapping systems are now being used to correct for system- and sample-induced aberrations as well as to control the position, orientation, rotation, etc., of the trapped object. For example, in 2010 Čižmár et al. used an SLM-based approach that worked on the principle that an optical field propagating through any system can be described as a composition of arbitrary orthogonal modes, where, regardless of the system or mode set, an optimal focus would be achieved



when all the modes meet at a selected point in space with the same phase [52]. They split their SLM into a rectangular lattice, with each rectangle projecting a different mode onto the sample. A reference beam was created in the first diffracted order of the SLM and was interfered with each mode in turn. The phase of each mode was corrected by maximizing the interference signal. This approach is in many ways similar to that described above in section "Pupil Segmentation." Several groups have also taken a DMM approach to correcting for the aberrations present in an optical trapping system. Ota et al. predetermined the amount of spherical aberration required at a certain depth and applied this amount using a DMM [53]. Several groups have used optimization algorithm approaches to determine the wavefront correction required by the DMM. Theofanidou et al. maximized a multiphoton fluorescence signal from an optically trapped fluorescent bead [54], and Müllenbroich et al. minimized the displacement from the equilibrium position when a trapped bead experienced an external force [55].
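The per-mode phase correction can be illustrated with a standard phase-stepping estimate: interfere the mode with the reference at several phase offsets and recover the offset that maximizes the interference signal. The four-step arctangent formula below is a textbook approach, not necessarily the exact procedure of [52], and `measure_intensity` is a hypothetical detector callback.

```python
import math

def optimal_mode_phase(measure_intensity):
    """Estimate the phase offset maximizing interference with the reference.

    measure_intensity: callable returning the detected intensity when the
    test mode is displayed with a given phase offset. Four measurements at
    0, pi/2, pi, and 3*pi/2 give the answer via the four-step formula.
    """
    i0 = measure_intensity(0.0)
    i1 = measure_intensity(math.pi / 2)
    i2 = measure_intensity(math.pi)
    i3 = measure_intensity(3 * math.pi / 2)
    # Interference I(phi) = a + b*cos(phi - phi0) peaks at phi = phi0.
    return math.atan2(i1 - i3, i0 - i2)

# Simulated mode whose interference peaks at a hidden offset of 1.2 rad:
hidden = 1.2
intensity = lambda phi: 2.0 + math.cos(phi - hidden)
print(abs(optimal_mode_phase(intensity) - hidden) < 1e-9)  # True
```

Four measurements per mode suffice because the interference fringe is a pure cosine in the applied phase; stepping every mode through a fine phase scan would work too, but far more slowly.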

Summary
AO aims to shape the wavefront of the incoming beam with equal but opposite distortion to that introduced by the sample and imaging optics in order to restore image quality and resolution. It is a concept that originated in optical astronomy and is now commonplace in most ground-based optical telescopes. It has also been used extensively in ophthalmology, enabling scientists to produce high-quality images of the cones and rods on the retina at the back of the eye [56, 57], and also as a method for improving vision by making bespoke contact lenses and glasses [58]. In this chapter we discuss the application of AO to optical microscopy. Like all optical systems when used away from their specific design criteria, microscopes suffer from aberrations which deteriorate the image quality, reduce resolution, and significantly limit the amount of useful information that can be gained from a particular image. The application of AO to microscopy has come in many forms, from DMMs to SLMs, and from direct wavefront measurements to stochastic algorithm-based approaches. As yet there is no one clear leader or accepted routine for correcting for aberrations in an optical microscope. This is due in part to the complex nature of the problem and the truly inhomogeneous properties of tissue samples, making sample-to-sample generalization extremely difficult. In the last few years, several commercial AO systems have become available to be used as add-ons to existing microscope systems. For example, Thorlabs provides an AO kit involving a DMM, a Shack-Hartmann wavefront sensor, the appropriate optomechanics, and stand-alone closed-loop control software for determining the correction required. Imagine Optic sells a similar kit containing a DMM, a Shack-Hartmann wavefront sensor, and an appropriate software package. In addition, Imagine Optic also sells a bespoke software package called "GENAO" for use where direct wavefront sensor measurements are not possible.
GENAO uses an evolutionary algorithm with the Zernike modes as the basis set and improves on a chosen property of the image. Hopefully this chapter has highlighted the wide variety of microscope modalities that stand to benefit from some form of AO, from confocal microscopy (the workhorse of many life science laboratories) to super-resolution approaches such as STED microscopy. What all the techniques discussed have in common is their reliance on a tightly formed "good quality" focal spot, without which high resolution and decent image quality are not possible. Many applications of AO in microscopy have concentrated on imaging biological tissues both in vivo and in vitro; here, without AO, the user is often forced to increase the laser beam power as they image deeper into the sample in order to achieve a measurable level of signal. With AO in place, the signal levels deep into the



sample can be comparable to the levels possible at the surface; therefore, the user no longer needs to increase the power of their laser beam greatly reducing the risk of sample photodamage. It is common to split the corrected aberrations into two categories, system and sample aberrations referring to those resulting from the optical system and those arising due to the sample. System aberrations are often thought of as remaining fixed with depth, whereas sample aberrations vary the deeper a user images into their sample. From a practical point of view, this can be very useful; by first determining the system correction required, the user has a starting point to work from and enough signal available to go on and correct for the sample aberrations. In many cases a significant image improvement is achieved purely by correcting for system-induced aberrations resulting from very subtle misalignments in the optics or optics being used away from their specific design criteria. Indirect wavefront sensing approaches are often favored over direct wavefront sensing techniques due to the lack of a suitable laser guide star, or point source, within a sample at the required depth. Indirect wavefront sensing approaches rely on selecting a property of an image which can be measured and used as an indicator of the level of improvement achieved. It is extremely important that the appropriate indicator or fitness metric is chosen as if not the overall outcome of the AO system will be limited regardless of the method used to determine the final correction to be applied. A metric is needed that relates directly to image quality and is sensitive to the level of aberrations present in a particular system, and, in many cases, a metric that can be read out easily and at speed is also desirable. For this reason intensity is often used, whether it is intensity of a point in the image or the average intensity of a region of interest. 
Intensity can be recorded almost instantaneously and more often than not is a clear indicator of the level of image quality. Section “Applications and Implementation” includes examples where intensity was found to be inappropriate due to its lack of sensitivity, and here metrics that incorporate image sharpness or contrast were employed.
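The choice of fitness metric can be made concrete with a small sketch: two candidate metrics for indirect (sensorless) wavefront sensing, mean intensity and a gradient-based sharpness measure. The function names are illustrative, not from any AO package.

```python
import numpy as np

# Two candidate fitness metrics for indirect wavefront sensing.
# Function names are illustrative, not from any AO package.
def intensity_metric(img):
    # Mean intensity over a region of interest: fast to read out and,
    # more often than not, rises as aberrations are corrected.
    return float(np.mean(img))

def sharpness_metric(img):
    # High-spatial-frequency content (squared gradient magnitude):
    # useful when plain intensity lacks sensitivity to the aberrations.
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    return float(np.sum(gx**2 + gy**2))
```

A blurred copy of an image scores lower on the sharpness metric while leaving the mean intensity essentially unchanged, which is exactly the situation where a sharpness or contrast metric outperforms intensity.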


Handbook of Photonics for Biomedical Engineering DOI 10.1007/978-94-007-6174-2_38-2 # Springer Science+Business Media Dordrecht 2014

SPR Biosensors
Aaron Ho-Pui Ho (a,*), Shu-Yuen Wu (a), Siu-Kai Kong (b), Shuwen Zeng (c) and Ken-Tye Yong (c)
(a) Department of Electronic Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong
(b) School of Life Sciences, The Chinese University of Hong Kong, Shatin, Hong Kong
(c) School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore

Introduction

Surface plasmon resonance (SPR) refers to excited charge density oscillations that exist along the boundary between a metal and a dielectric with permittivities of opposite signs. When the orientation of the electric field vector of the incident light matches the movement of free electrons in the metal, as restricted by the boundary conditions associated with the material and structural parameters, surface plasma waves (SPWs) can be excited, and consequently efficient coupling of energy into a guided electromagnetic wave along the interface may occur. This phenomenon of unexpected attenuation was first discovered by Wood [1] in 1902, when he measured the reflection of metallic gratings and found that some optical power was absorbed by the metal because of the excitation of SPWs. The focus on developing SPR sensing was inspired by the introduction of attenuated total internal reflection (ATR) by Otto [2] and Kretschmann [3] in 1968. It was not until 1983 that Liedberg and Nylander [4] reported the first practical sensing application of SPR for biomolecular detection. Since then, SPR biosensors have experienced rapid development over the last two decades and have become a valuable platform for qualitative and quantitative measurements of biomolecular interactions, with the advantages of high sensitivity, versatile target molecule selection, and real-time detection. For this reason, SPR sensors are now widely adopted to meet the needs of biology, food quality and safety analysis, and medical diagnostics. Currently, a number of commercial SPR sensor instruments are available from companies including Biacore, IBIS Technologies B.V., AutoLab, GWC Technologies, Genoptics Bio Interactions, Biosensing Instrument, and SPR Navi. Table 1 provides a summary of instrument specifications of several commercial SPR sensing systems for readers' reference.
The major detection method has always been the angular interrogation scheme, which measures reflectivity variations with respect to incident angle. A less common approach is the wavelength scheme, in which one monitors the wavelength dependence of the sensor reflectivity. Typical sensitivity resolution of the angular approach is in the higher end of 10⁻⁷ RIU (refractive index unit), while for the wavelength interrogation approach this is somewhere on the order of 10⁻⁶ RIU because of the relatively less sharp SPR absorption dip. Further improvement may be achievable through the use of phase interrogation because of the sharp phase change across the SPR resonance dip. Since SPR only affects the p-polarization, because the metal sensor film only supports field-induced charge oscillation in a direction perpendicular to the sensor surface, polarization retardation variations of the optical beam, i.e., the phase difference between the s- and p-polarizations, are considered an effective approach for detecting the desired SPR signal. Considerable sensitivity improvement is expected from detecting the SPR phase/retardation change. Figure 1 shows the typical phase response of the Kretschmann (inverted prism) configuration. A comparison of theoretical sensitivity resolution between different detection schemes is shown in Table 2. Moreover, readers should note that

*Email: [email protected]


Table 1 Examples of commercial SPR sensors

| Manufacturer        | Instrument      | Sensitivity (RIU) | Number of detection points         | Operation temperature (°C) |
|---------------------|-----------------|-------------------|------------------------------------|----------------------------|
| Biacore             | BIAcore 4000    | 1 × 10⁻⁷          | 16 (with 4 independent flow cells) | 4–40                       |
| Biacore             | BIAcore T200    | 1 × 10⁻⁷          | 4                                  | 4–45                       |
| IBIS Technologies   | IBIS-MX96       | 3 × 10⁻⁶          | 96 spots                           | 15–40                      |
| Bio-Rad             | ProteOn XPR36   | 1 × 10⁻⁶          | 36 spots                           | 15–37                      |
| Horiba              | SPRi-plex II    | 5 × 10⁻⁶          | Up to 400 spots                    | –                          |
| BiOptix Diagnostics | BiOptix 404pi   | 2 × 10⁻⁶          | 4 detection channels               | –                          |
| SensiQ Technologies | SensiQ Pioneer  | 1 × 10⁻⁷          | 3 channels                         | 4–40                       |
| SPR NAVI            | SPR NAVI™ 220A  | 1 × 10⁻⁶          | 2 channels                         | 4–40                       |

[Figure: panel (a) plots phase (degree, left axis) and SPR reflectance (right axis) versus incident angle (65–90°); panel (b) plots the same quantities versus sample refractive index (1.330–1.375 RIU) at a fixed incident angle of 76°. Conditions: He-Ne laser (632.8 nm), BK 7 prism, 46 nm gold layer, water sample medium (n = 1.3330).]

Fig. 1 Theoretical angular and phase response of the Kretschmann configuration for different angle of incidence (a) and sample refractive index (b)



Table 2 Sensitivity comparison of various SPR signal detection schemes

| SPR detection scheme  | Response versus refractive index change | Typical refractive index detection resolution |
|-----------------------|-----------------------------------------|-----------------------------------------------|
| Angular interrogation | ~2 × 10² deg/RIU                        | ~5 × 10⁻⁷ RIU                                 |
| Spectral interrogation| ~10⁴ nm/RIU                             | ~10⁻⁶ RIU                                     |
| Phase interrogation   | ~10⁵ deg/RIU                            | ~10⁻⁸ RIU                                     |
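As a back-of-the-envelope check, each resolution figure in Table 2 is simply the measurement noise floor divided by the response slope. The noise floors below are illustrative assumptions, not quoted instrument specifications.

```python
# Refractive index resolution = measurable noise floor / response slope.
# The noise floors are illustrative assumptions, not instrument specs.
schemes = {
    # name: (response slope per RIU, assumed noise floor, measured unit)
    "angular":  (2e2, 1e-4, "deg"),   # ~2 x 10^2 deg/RIU, 10^-4 deg noise
    "spectral": (1e4, 1e-2, "nm"),    # ~10^4 nm/RIU, 0.01 nm noise
    "phase":    (1e5, 1e-3, "deg"),   # ~10^5 deg/RIU, 10^-3 deg noise
}
for name, (slope, noise, unit) in schemes.items():
    # 1e-4/2e2 = 5e-7, 1e-2/1e4 = 1e-6, 1e-3/1e5 = 1e-8: the Table 2 values
    print(f"{name}: ~{noise / slope:.0e} RIU")
```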

while signal-to-noise ratio is the primary factor that ultimately dictates the instrument's sensitivity resolution, the actual performance of practical SPR sensor systems is essentially limited by additional measurement uncertainties arising from temperature drifts and mechanical microphonics. This chapter focuses on the development of phase-detecting SPR sensors because of their potential to offer very high measurement performance. Biosensing applications are only briefly summarized, as similar reviews are readily available in the literature. This review also aims to stimulate new ideas that may lead to further advancement of SPR instrumentation.

Basic Principle of SPR Phenomenon

An evanescent wave is typically generated at the boundary between two media when the propagation of a wave is perturbed under total internal reflection (TIR). When TIR occurs at a glass–water interface, for example, the energy of the wave is conserved and the process is essentially lossless. In the case of a glass–metal film–dielectric (e.g., water) system, however, TIR may still take place when the metal film is of appropriate thickness. In this case, SPWs, which are associated with oscillations of charge density in the metal layer, are excited. The SPW in turn enhances the intensity of the evanescent wave in the lower-index medium. In addition, the oscillatory motion of the electrons entails resistive losses, leading to strong optical absorption. The SPR absorption phenomenon is therefore also called attenuated total reflection (ATR). At resonance, SPWs are efficiently generated under the condition that the momenta of the incident wave and the excited SPW are the same. Thus an approximation of the SPW wave vector (k_spw) is given by

    k_spw = k_f √[ε_m ε_s / (ε_m + ε_s)]

where k_f is the free-space wave vector of the optical wave, while ε_m and ε_s are the dielectric constants of the metal and the sample medium, respectively. At excitation, k_spw can be rewritten as

    k_spw = (2π/λ) √[n_m² n_s² / (n_m² + n_s²)]

where n_m = (ε_m)^(1/2) is the refractive index of the metal and n_s = (ε_s)^(1/2) is the refractive index of the sample. This equation indeed suggests that SPR is a powerful tool for monitoring refractive index changes caused by chemical or biochemical interactions.
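The SPW wave-vector relation above can be evaluated numerically to estimate the resonance angle seen in Fig. 1. The gold permittivity and BK 7 prism index below are assumed, literature-style values, not numbers taken from this chapter.

```python
import numpy as np

# Evaluating the SPW wave-vector expression for a gold-water interface.
# The gold permittivity at 632.8 nm is an assumed literature-style value.
wavelength = 632.8e-9          # He-Ne laser (m)
eps_m = -11.7 + 1.2j           # gold at 632.8 nm (assumed)
eps_s = 1.3330**2              # water
k_f = 2 * np.pi / wavelength   # free-space wave vector

k_spw = k_f * np.sqrt(eps_m * eps_s / (eps_m + eps_s))

# Resonance: match Re(k_spw) to the in-plane wave vector in the prism,
# k_f * n_prism * sin(theta), giving the SPR dip angle for BK 7.
n_prism = 1.515                # BK 7 (assumed)
theta_spr = np.degrees(np.arcsin(k_spw.real / (k_f * n_prism)))
print(f"SPR angle ~ {theta_spr:.1f} degrees")
```

The resulting angle falls in the low-70s-degree range, consistent with the reflectance dip plotted in Fig. 1a.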


While the SPR phenomenon is fully described by Fresnel theory, one can accurately model how the resonance shifts according to changes of refractive index in the dielectric (i.e., sample) layer. The signal comes from a shift in either reflectivity or optical phase. From the Fresnel equations, the reflection coefficient r_xyz of a three-layer prism–metal–dielectric (water) structure in the xyz coordinate system, with the z-direction normal to the plane of the sensing surface, is given by

    r_xyz = [r_xy + r_yz exp(2i k_z t_m)] / [1 + r_xy r_yz exp(2i k_z t_m)]

and

    r_xyz = |r_xyz| e^(iφ)

where t_m is the thickness of the metal film, φ is the phase, k_z is the normal component of the wave vector in the metal layer, and r_xy and r_yz are the Fresnel coefficients of the prism–metal and metal–dielectric interfaces, respectively, in the prism–metal–dielectric configuration.
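The three-layer reflection coefficient lends itself to a direct numerical sketch. The following uses the material parameters of Fig. 1 (BK 7 prism, 46 nm gold, water), with an assumed gold permittivity, and interprets the propagation term as the normal wave-vector component inside the metal.

```python
import numpy as np

# Three-layer (prism-gold-water) p-polarization reflection, following the
# two-interface Fresnel formula in the text. BK 7 / 46 nm gold / water
# match Fig. 1; the gold permittivity is an assumed value.
def spr_reflectance(theta_deg, wavelength=632.8e-9, t_m=46e-9,
                    n_p=1.515, eps_m=-11.7 + 1.2j, n_s=1.3330):
    k0 = 2 * np.pi / wavelength
    kx = k0 * n_p * np.sin(np.radians(theta_deg))    # in-plane, conserved
    eps = [n_p**2, eps_m, n_s**2]
    kz = [np.sqrt(e * k0**2 - kx**2 + 0j) for e in eps]
    # p-polarization Fresnel coefficients at the two interfaces
    r = [(eps[i + 1] * kz[i] - eps[i] * kz[i + 1]) /
         (eps[i + 1] * kz[i] + eps[i] * kz[i + 1]) for i in range(2)]
    phase = np.exp(2j * kz[1] * t_m)                 # round trip in the metal
    r_total = (r[0] + r[1] * phase) / (1 + r[0] * r[1] * phase)
    return abs(r_total)**2, np.angle(r_total)        # reflectance, phase

R_res, phi_res = spr_reflectance(72.6)   # near the SPR dip
R_off, phi_off = spr_reflectance(65.0)   # off resonance
```

Sweeping the angle (or the sample index) with this function reproduces the reflectance dip and the accompanying sharp phase transition shown in Fig. 1.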

SPR Phase Detection Schemes

Since it is not practical to detect the phase of an optical wave directly because of its high frequency, one typically first shifts it to a lower frequency through a mixing step, followed by a final interference process that converts the phase into amplitude oscillations. Generally, practical phase detection techniques are based on optical heterodyning, polarimetry, and interferometry. These schemes are discussed in this section.

Optical Heterodyne

Optical heterodyning refers to the frequency mixing technique through which interference between two input waves produces new waves with frequencies equal to the sum or difference of the input frequencies. The difference frequency, also called the beat frequency, bears the same phase information as the input waves and is typically in the range between kHz and GHz. This means that the optical phase difference between two coherent laser beams can be readily detected with common electronic phase meters. A common heterodyne optical interferometer involves the use of frequency shifters, also called acousto-optic modulators (AOMs) or Bragg cells, in which radio-frequency acoustic waves at 10–500 MHz propagating in a transparent solid such as quartz, lead molybdate (PbMoO4), or tellurium dioxide (TeO2) introduce a Doppler shift to the input laser beam. The resultant frequency-shifted and unshifted optical beams generate the required oscillating intensity variation for the final measurement of the phase. A practical phase-detecting SPR sensor with easy beam alignment and heterodyne-shifted p- and s-components was first proposed in 2003 by Wu et al. [5]. With two AOMs operating in the two separate arms of a Mach–Zehnder interferometer at driving frequencies of 40 and 40.06 MHz, respectively, the authors were able to obtain a low-frequency beat signal at 60 kHz. It was then possible to monitor the SPR phase change with a simple commercial phase meter. The authors reported a refractive index resolution as high as 2 × 10⁻⁷ RIU by measuring the phase response at different incident angles for methanol, water, and ethanol. Starting from 2005, Chiu and coworkers produced a series of research papers on the development of heterodyne sensors based on the use of single-mode optical fiber [6–9]. In their work, a D-type fiber was fabricated by removing the cladding and polishing off half of the fiber


core, followed by depositing a layer of gold on the exposed core to generate the required SPR effect. The authors also reported that multiple reflections inside an optical fiber would enhance the sensing dynamic range significantly. On the other hand, a heterodyned light source enabled phase detection instead of amplitude-only detection, with improved sensing resolution. The simulated sensing response was confirmed with experimental results obtained from alcohol samples at different concentrations [6]. The authors also numerically investigated three-layer (glass core–gold layer–sample) [7] and four-layer (glass core–silicon dioxide–gold layer–sample) [8] configurations based on this D-type optical fiber design, and optimal parameters were suggested. Wang et al. later reported a U-type multimode optical fiber configuration in which multiple reflections increased the net phase change, as the probing beam hit the fiber–sample boundary a number of times [9]. In this design, the fiber core was immersed in the sample medium in the absence of any metal coating. The optical beam was able to go through 70 cycles of TIR within the U-type region. Despite the multiplied signal outcome, this sensor only gave a moderate resolution of 1.6 × 10⁻⁶ RIU. In 2008, Li and coworkers [10] demonstrated a system capable of reaching a high refractive index resolution of 2.8 × 10⁻⁹ RIU through active amplitude equalization of the reference and signal beams. With this active adjustment, noise due to laser intensity fluctuations was suppressed, resulting in ultrahigh resolution. It was found that the phase change caused by adding 0.00001 % sucrose to pure water was still observable with this sensor. The corresponding refractive index change resolution was 1.4 × 10⁻⁸ RIU. The projected measurement resolution was around 2.8 × 10⁻⁹ RIU. A similar resolution of 7 × 10⁻⁹ RIU was achievable from glycerin–water solution.
For real biosensing of mouse-IgG/anti-mouse-IgG interactions, the scheme offered a sensitivity of 10 fg/mL (67 aM), which is among the best ever achieved with the SPR technique. Other reported optical heterodyne sensing schemes were based on monitoring the amplitude instead of involving any phase detection [11, 12]. The input beam came from a Zeeman laser that contains a built-in heterodyning frequency difference between the s- and p-polarizations. With the use of a quarter-wave plate (QWP), two SPWs separated by the Zeeman-split frequency (2.5 MHz in this case) were launched at the sensor surface. Cross-polarization interference conveniently generated a reference modulation signal at the beam splitter placed just before the sensor head, while the optical beam reflected from the SPR sensor surface provided the signal. The final modulation signal in fact came from the interference between the two SPWs launched by the two laser components separated by the Zeeman-split frequency. Since ATR had almost identical effects on both SPWs because of their near-identical frequencies, and the output was an interference product between the two SPWs, the amplitude variations caused by ATR (i.e., the sensor's response signal) were effectively amplified twice, thus increasing the detection resolution considerably. The scheme achieved a refractive index resolution of 3.5 × 10⁻⁷ RIU based on an experiment using 0.001 % sucrose–water solution. Compared to the amplitude-based scheme, phase detection with heterodyning [10] was shown to offer an improvement of two orders of magnitude (2.8 × 10⁻⁹ RIU versus 3.5 × 10⁻⁷ RIU) in detection sensitivity. However, the dynamic range was three orders of magnitude narrower (10⁻² versus 10⁻⁵ RIU). In 2013, Ju-Yi Lee and coworkers [13] demonstrated an improved wavelength-modulated heterodyne interferometer for sensitivity enhancement. The system utilized the dependence of emission wavelength on drive current in common laser diodes.
The laser beam was first analyzed by a Michelson interferometer so that sweeping the laser drive current led to a series of amplitude peaks in the output intensity because of the continuous phase variation. By incorporating a QWP at an appropriate azimuthal angle in front of the sensor, it was possible to measure the SPR phase shift with a lock-in amplifier. The system demonstrated a resolution limit of 3 × 10⁻⁷ RIU. Since there was no need for any electro-optic modulator, this approach has the benefit of a relatively simple instrument design.
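The job of the phase meter in these heterodyne schemes can be illustrated numerically: mix a simulated 60 kHz beat note, the difference of the 40.00 and 40.06 MHz AOM drives, against quadrature references and recover the phase it carries. The sampling parameters and the test phase are assumed values.

```python
import numpy as np

# Recovering the SPR phase from a 60 kHz beat note by I/Q demodulation.
# Sample rate, record length, and the test phase are assumed values.
fs, f_beat = 1.0e6, 60.0e3             # sample rate, beat frequency (Hz)
t = np.arange(0, 5e-3, 1 / fs)         # 5 ms record (300 full beat cycles)
phi_spr = np.radians(37.0)             # "unknown" SPR phase to recover
signal = np.cos(2 * np.pi * f_beat * t + phi_spr)

i_ref = np.cos(2 * np.pi * f_beat * t) # in-phase reference
q_ref = np.sin(2 * np.pi * f_beat * t) # quadrature reference
# mean(signal*i_ref) = 0.5 cos(phi), mean(signal*q_ref) = -0.5 sin(phi)
phi_est = np.arctan2(-np.mean(signal * q_ref), np.mean(signal * i_ref))
print(np.degrees(phi_est))             # ~37 degrees
```

Averaging over an integer number of beat cycles makes the demodulation exact; in hardware the same operation is performed by an electronic phase meter or lock-in amplifier.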


Polarimetry

Polarimetry here refers to interference between the two orthogonally polarized p- and s-components of a light wave, brought about with a polarizer; the measurement of phase difference relies on the analysis of a series of interference patterns. Apart from the fundamental noise intrinsic to the system, noise can be reduced through the collection of a large amount of data and the use of appropriate signal processing. Phase-sensitive polarimetry-based SPR sensors incorporating photoelastic modulation in various configurations have been reported by Ho and coworkers since 2003 [14–16]. The photoelastic modulator was included to provide a high-frequency sinusoidal modulation of the phase retardation between the p- and s-components. The phase modulation itself generated a series of harmonics which all carried the SPR phase information. Under a certain modulation depth condition, the SPR phase term could be extracted directly from the amplitude ratio of the first and second harmonics. Indeed, this approach was previously proposed for the measurement of low-level birefringence [15]. Through detecting the phase change induced by changing the concentration of glycerin in water, a sensitivity level of 1.19 × 10⁻⁶ RIU was obtained. Real-time characterization of the binding reaction between bovine serum albumin (BSA) and anti-BSA biomolecules was demonstrated with this technique, and the detection limit was found to be 11.1 ng/mL. Soon afterward, a modified setup that made use of the full polarization parameters, including the azimuth angle, the intensity ratio of the p- and s-components, and the phase retardation induced by the SPR sensor cell, was also demonstrated by the same group [16]. In particular, the tangent of the double azimuth angle, i.e., the ratio between the first and second harmonics, was shown to be a sensitive indicator of refractive index variations without any compromise in dynamic range performance.
Moreover, a better signal-to-noise ratio should be possible due to reduced susceptibility to stray light and laser intensity fluctuations. As a result, a detection limit of 6 × 10⁻⁷ RIU, or 15 ng/mL, with a dynamic range on the order of 10⁻² RIU was obtained. In 2008, Stewart et al. reported an analytical investigation on optimization of the photoelastic SPR polarimetry system and its subsequent application to biochemical sensing [17]. A refractive index resolution better than 5 × 10⁻⁷ RIU was demonstrated. Later, the group further developed multiplexing capability for their SPR polarimetry system with better signal-to-noise ratio; hence the detection limit was further improved as compared to the commercially available SensiQ Discovery system [18]. Kabashin and coworkers investigated polarimetric SPR sensing extensively [19–22]. In 2007, they introduced a photoelastic modulator for temporal modulation of the phase retardation, this time monitoring the second and third harmonic components; a detection limit of 2.89 × 10⁻⁷ RIU together with a dynamic range larger than 10⁻² RIU was obtained [19]. This photoelastic modulator sensor system was then applied to the detection of streptavidin–BSA complex, and a sensitivity limit of 1.3 nM was achieved [20], which was comparable to the best sensitivities reported in the literature. Shortly afterward, they reported a polarimetric SPR scheme in which a wedge-shaped birefringent quartz crystal provided the required linear phase retardation change over a rectangular region [21]. In their experiment, a CCD camera captured a series of two-dimensional fringe patterns from the linear retardation region, and the phase distribution was then analyzed by Fourier transform. In this way the SPR phase response was detected. The authors showed a detection limit better than 10⁻⁶ RIU. Interestingly, they showed that the phase-detecting approach offered one order of magnitude improvement over the amplitude-detecting approach in the same setup.
In 2008, SPR polarimetry with mechanical phase dithering was also employed [22]. In the experiment, a Wollaston prism was used to separate the s- and p-components of the light beam, and polarization modulation was performed with an optical chopper. A refractive index detection limit of 3.2 × 10⁻⁷ RIU was demonstrated through real-time monitoring of bio-reactions inside the sample flow cell.
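The first/second-harmonic ratio extraction used in the photoelastic-modulator schemes above can be sketched numerically. The modulation depth, PEM frequency, and test phase below are assumed values; the Bessel-function weighting follows from the standard Jacobi–Anger expansion of the modulated signal.

```python
import numpy as np

# Harmonic-ratio phase extraction with a photoelastic modulator (PEM):
# the detector signal contains harmonics of the PEM frequency whose 1f
# and 2f amplitudes encode sin and cos of the SPR phase. Modulation
# depth, PEM frequency, and the test phase are assumed values.
def bessel_j(n, x, m=4000):
    # Integer-order Bessel J_n via its integral representation,
    # evaluated with a midpoint rule on [0, pi].
    th = (np.arange(m) + 0.5) * np.pi / m
    return np.mean(np.cos(n * th - x * np.sin(th)))

delta0 = 2.405                         # PEM modulation depth (rad), assumed
phi = np.radians(25.0)                 # SPR phase to recover, assumed
f_pem, fs = 50e3, 5e6                  # PEM frequency, sample rate (Hz)
t = np.arange(0, 2e-3, 1 / fs)         # integer number of PEM cycles
# Interference of p and s components with SPR phase and PEM retardation:
I = 1 + np.cos(phi + delta0 * np.sin(2 * np.pi * f_pem * t))

h1 = 2 * np.mean(I * np.sin(2 * np.pi * f_pem * t))      # = -2 J1 sin(phi)
h2 = 2 * np.mean(I * np.cos(2 * 2 * np.pi * f_pem * t))  # =  2 J2 cos(phi)
phi_est = np.arctan2(-h1 * bessel_j(2, delta0), h2 * bessel_j(1, delta0))
print(np.degrees(phi_est))             # recovers ~25 degrees
```

Because the Bessel factors cancel in the arctangent, the recovered phase is independent of the overall detected intensity, which is why the harmonic-ratio readout is insensitive to laser power fluctuations.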

Page 6 of 19


Chiang and coworkers studied the dependence of detection sensitivity on the wavelength of the incident optical wave in SPR polarimetry. The authors showed that it was possible to optimize sensing performance by matching the thickness of the sensor film to specific optical wavelengths. Resolution levels of 3.7 × 10⁻⁸ RIU for refractive index sensing [23] and 1.9 × 10⁻⁶ degrees for angle measurement [24] were possible. Spectral phase detection with a polarimetry setup was reported by Zheng et al. in 2009 [25]. A superluminescent LED (1,525–1,565 nm) was used, with the spectral interference signal produced by inserting a birefringent wave plate in the exit path of the SPR sensor. The output spectrum contained a series of peaks because of cross-interference between the p- and s-polarizations. Any retardation change due to the sensor's SPR phase was monitored by analyzing the shift of the spectral peaks. A detection limit better than 10⁻⁶ RIU was achieved in this system. In 2013, Ho and coworkers proposed a novel technique to improve this system's performance by incorporating a temporal carrier so that the phase computation and unwrapping could be performed in the time domain. Inserted in front of the birefringent crystal, a liquid crystal device was employed to modulate the optical phase retardation between the p- and s-polarizations. This enabled the extraction of the desired differential spectral SPR phase. The integration time, limited only by the temporal carrier frequency, could be made longer, making it possible to enhance the signal-to-noise ratio. The experimental detection limit was between 2 × 10⁻⁸ and 8 × 10⁻⁸ RIU, with a measurement range of 3 × 10⁻² RIU [26]. SPR imaging based on polarimetry is a powerful tool for high-throughput sensing. Two schemes have been proposed [27, 28], and they are found to offer better performance than the amplitude-only SPR imaging approach [29]. The first one reported by Nikitin et al.
[27] extracted the SPR phase signal simply by performing direct interference on the p-polarization component using a Mach–Zehnder interferometer, with the SPR sensor element placed in one of the optical arms. The second scheme, reported by Homola and Yee [28], took advantage of the fact that appropriate adjustment of the retardation using additional polarizers and wave plates would reverse the contrast of the SPR spectral absorption dip because of the contribution of interference from the SPR phase change. The SPR image consequently had bright spots in regions with strong SPR absorption. Homola and coworkers further combined the polarization contrast scheme with specially designed multilayer sensing arrays for even better sensitivity with parallel sensing capability [30], and a resolution limit as low as 2 × 10⁻⁶ RIU has been achieved [31]. Since wave plates are generally designed to have a fixed phase shift, the polarization contrast scheme described above offers only a limited operational spectral window. To resolve this issue, one can incorporate a variable phase-shifting device to achieve more versatile phase control. In 2005, Su et al. applied phase-shift interferometry to SPR polarimetry and further improved the sensitivity of the SPR imaging system [32]. In their design, a two-dimensional CCD camera was used to record the interference intensities of the p- and s-components. The SPR-induced phase difference was subsequently processed by a simple five-step phase-shifting algorithm. A resolution of 2 × 10⁻⁷ RIU was achieved, and a lateral resolution of better than 100 μm was demonstrated in their experiment based on a DNA microarray. In 2008, Yu et al. reported a similar setup together with the Stoilov algorithm for phase retrieval. They achieved a resolution of 10⁻⁶ RIU based on SPR imaging experiments conducted with NaCl–water mixtures of different concentration levels and biomolecular samples [33].
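The chapter does not spell out which five-step phase-shifting variant Su et al. used; the standard Hariharan five-frame estimator, sketched below, is one plausible choice and is labeled here as an assumption.

```python
import numpy as np

# Hariharan five-step phase-shifting: record interferograms at phase
# shifts of -2a, -a, 0, a, 2a with a = pi/2, then recover the phase.
# (The chapter cites "a simple five-step algorithm"; this standard
# variant is an assumption, not necessarily the one used in [32].)
def five_step_phase(I1, I2, I3, I4, I5):
    # tan(phi) = 2 (I2 - I4) / (2 I3 - I1 - I5)
    return np.arctan2(2.0 * (I2 - I4), 2.0 * I3 - I1 - I5)

# Synthetic check on one pixel with an arbitrary test phase:
phi = 0.7                        # rad, assumed test value
frames = [1 + np.cos(phi + k * np.pi / 2) for k in (-2, -1, 0, 1, 2)]
print(five_step_phase(*frames))  # ~0.7
```

Applied pixel-by-pixel to five CCD frames, this yields the two-dimensional SPR phase map; the arctangent form makes the result insensitive to a uniform background intensity.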

Interferometry
Interferometry generally refers to the optical instrumentation technique that makes use of the interference between two light beams, which typically come from a light-emitting diode or a laser.

Handbook of Photonics for Biomedical Engineering DOI 10.1007/978-94-007-6174-2_38-2 # Springer Science+Business Media Dordrecht 2014

An interferometer is the optical system necessary for converting fast oscillating waves into those with much lower frequencies for signal detection by electronic devices. An interferometer's output signal is directly related to the amplitudes of the interfering waves and their phase difference, as described by the well-known two-beam interference equation:

I = A² = A₁² + A₂² + 2A₁A₂ cos[ωt + (φ₁ − φ₂)]

where I is the intensity of the final wave, A₁ and A₂ are the amplitudes of the interfering waves, φ₁ and φ₂ are their phases, and ω represents their frequency difference. As compared to polarimetry and optical heterodyning, SPR interferometry offers the advantage of direct visualization of activities taking place on the sensor surface without any compromise in measurement accuracy and lateral resolution. This is an important feature relevant to the sensor's capability to perform real-time monitoring of multiple binding reactions on a single sensing surface. There is one drawback, however: the phase extraction process requires intensive computation involving fast Fourier transform algorithms. This has resulted in a rather slow data production rate, on the order of one sample point every several seconds.

The interferometry approach for SPR phase measurement was first demonstrated by Kabashin and Nikitin in 1997. Their phase-sensitive SPR sensor was implemented with a Mach–Zehnder interferometer [34, 35]. A laser beam was divided into a signal arm and a reference arm by a beam splitter. With the SPR sensing cell included in the signal arm, interference between the signal and reference arms converted the phase shift in the p-component, due to refractive index variations in the analyte, into an optical intensity signal at the output beam splitter. Experiments performed on Ar–N₂ gas mixtures achieved a very respectable detection limit of 4 × 10⁻⁸ RIU if a phase resolution of 0.01° was assumed [34]. In 2000, Notcovich et al.
[36] reported a similar SPR phase imaging system in which a Mach–Zehnder interferometer was employed to analyze the p-component. Fourier transform was employed for fringe analysis and a refractive index resolution of 10⁻⁶ RIU was obtained. In 2002, Ho and coworkers proposed a differential phase measurement scheme [37, 38]. In previous interferometry designs, the system only used the signal arm to detect refractive index changes or biomolecular interactions, and the reference arm was not involved in the sensing action at all. In the differential scheme, a half-wave plate was included to rotate the p-polarization by 90° in order to have interference with the s-polarization. Phase retrieval was achieved by phase stepping with a piezoelectric transducer, typically made from lead zirconate titanate (PZT), while monitoring the shift in the interference fringes. The setup was found to have better noise immunity as compared to the Mach–Zehnder setup reported by Kabashin and Nikitin [35]. In 2004, Ho et al. reported another differential interferometer design for SPR phase measurement [39]. As shown in Fig. 2, the input polarization is oriented at 45° from the s- and p-polarizations. This results in two interferometers, one from the s-polarization and the other from the p-polarization, operating in parallel. At the output beam splitter of this differential phase Mach–Zehnder interferometer, a Wollaston prism separated the two polarization signals. Any environmental disturbances in the system should have identical effects on both interferometers, while the SPR phase change from the sensor surface affects the p-component only. The differential phase measurement, i.e., the phase difference between the s- and p-polarizations, should therefore contain only the SPR phase, with all other spurious noise canceled out. The authors reported ultrasensitive performance with a resolution of 5.5 × 10⁻⁸ RIU based on an assumed phase measurement resolution of 0.01°.
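The common-mode rejection underlying the differential scheme can be sketched with synthetic data. This is our toy model, not the authors' processing: both channels share an environmental drift term, only the p-channel carries the SPR phase, and the subtraction isolates it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
spr_phase = 0.3                              # SPR phase on p-polarization only (rad)
drift = np.cumsum(rng.normal(0, 1e-3, n))    # environmental drift common to both arms

phase_p = spr_phase + drift + rng.normal(0, 1e-4, n)  # p: SPR signal + drift + noise
phase_s = drift + rng.normal(0, 1e-4, n)              # s: drift + noise only

differential = phase_p - phase_s  # shared drift cancels, SPR phase remains
print(differential.mean())        # ≈ 0.3
```

Because the drift term is identical in both channels, it cancels exactly in the subtraction; only the small uncorrelated detection noise survives, which is the mechanism behind the scheme's noise immunity.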
In 2007, the team reported other variants of the differential design: Michelson interferometer-based double-pass [40] and Fabry–Perot interferometer-based multiple-pass configurations [41]



[Fig. 2 schematic: He–Ne laser, beam splitters, SPR sensing cell with sample inlet/outlet, PZT phase stepper, Wollaston prism, detectors, digital storage oscilloscope, and computer]

Fig. 2 Differential phase-sensitive surface plasmon resonance biosensor based on the Mach–Zehnder configuration [39]

[Fig. 3 schematic: He–Ne laser, beam splitters, SPR prism with sample inlet/outlet, PZT, Wollaston prisms, photodetectors, and A/D converter]

Fig. 3 Surface plasmon resonance biosensor with Michelson interferometer [40]

exhibiting further enhancement in sensitivity. In the Michelson interferometer double-pass system (shown in Fig. 3), additional beam splitters were used to separate off 50 % of the light, which passed through to another Mach–Zehnder interferometer, while the remaining 50 % was reflected back for use by the Michelson interferometer, in which the signal beam was allowed to pass through the SPR cell twice. In this way the signal beam experienced an additional phase change and the expected phase response should be doubled. Calibration experiments with salt–water mixtures and biomolecular samples involving BSA and anti-BSA binding were carried out to demonstrate a resolution limit of 7.7 × 10⁻⁷ RIU, almost twice as sensitive as the single-pass version. In the Fabry–Perot cavity multiple-pass configuration, the SPR phase change was accumulated as the laser light was reflected back and forth inside the Fabry–Perot cavity. The further improvement, however, was only 12 % beyond the doubling achieved by the Michelson interferometer, owing to the gradual reduction in light intensity as the beam went through multiple reflections at the SPR active sensing surface.
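The diminishing return of the multiple-pass configuration can be captured by an idealized model (our assumption, not the authors' analysis): the accumulated phase grows linearly with the number of reflections N, while the beam amplitude decays geometrically, so their product has an optimum at a small N. The per-pass retention r below is a hypothetical value:

```python
import numpy as np

r = 0.7                     # hypothetical per-pass amplitude retention at the SPR surface
N = np.arange(1, 21)        # number of passes through the sensing cell
figure_of_merit = N * r**N  # accumulated phase (∝ N) times remaining amplitude (∝ r^N)

best = int(N[np.argmax(figure_of_merit)])
print(best)                 # continuous optimum is -1/ln(r) ≈ 2.8, so best = 3
```

Under this toy model the optimum number of passes is only a few, which is consistent with the observation that the Fabry–Perot cavity added little beyond the doubling already achieved by the double-pass Michelson design.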



In 2008, the team proposed a Michelson interferometer-based double-pass design for the white-light SPR scheme, thus offering the possibility of resolving the issues of high sensitivity and wide dynamic range simultaneously [42]. However, strong spectral dispersion in the sensor prism, due to the broadband nature of the beam, made interference between the signal and reference arms very difficult. To solve this problem, the authors employed a dispersion compensation prism in the reference arm of the optical system. The exit angles of all spectral components in both the signal and reference arms were corrected, thus enabling a strong interference signal to be obtained across the entire spectral range. Differential phase detection was applied in order to improve system stability. The experimental detection limit was reported to be 2.6 × 10⁻⁷ RIU, while the measurable refractive index range was 10⁻² RIU [43].
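The Fourier-transform fringe analysis used throughout these interferometric schemes can be reduced to a one-dimensional toy example: a carrier-modulated interferogram is synthesized and the embedded phase is read off the carrier bin of its spectrum. The record length, carrier frequency, and phase below are synthetic values, not taken from [42, 43]:

```python
import numpy as np

n = 4096
t = np.arange(n)
f_carrier = 200 / n                 # carrier: 200 full fringes across the record
phi = 0.5                           # phase offset to recover (e.g. SPR-induced)
interferogram = 1.0 + np.cos(2 * np.pi * f_carrier * t + phi)

spectrum = np.fft.rfft(interferogram)
k = int(round(f_carrier * n))       # index of the carrier bin
print(np.angle(spectrum[k]))        # ≈ 0.5
```

The carrier separates the phase-bearing fringe term from the slowly varying background in frequency space, which is why a temporal or spatial carrier makes the phase computation (and unwrapping) straightforward.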

Surface Plasmon Resonance Imaging (Amplitude and Phase)
SPR imaging offers important attributes for practical SPR sensing applications. Through imaging one can visualize the entire biosensing surface, which makes it possible to monitor many sensing sites simultaneously. One can then contemplate the realization of an SPR-based biochip platform with which the user may study a range of molecular interactions in parallel. Corn et al. [44] developed oligonucleotide arrays on gold surfaces for hybridization detection by the SPR imaging technique. In their experiment, four oligonucleotide probe spots were printed on the gold substrate through selected-area exposure to unlabeled DNA. By monitoring the intensity change of the spotted areas within the SPR images captured throughout the hybridization process, the researchers were able to distinguish areas responsive to single-stranded DNA from those responsive to double-stranded DNA. The same team further applied SPR imaging to the analysis of protein–peptide interactions [45]. With white-light SPR imaging, the researchers were able to perform label-free study of the kinetics of protein adsorption and desorption on peptide arrays. In this case, narrowband-filtered p-polarized light was used as the light source, and the reflectivity change was measured as a function of time by a CCD camera. The system offered the possibility of varying the SPR excitation wavelength, which was needed for achieving a wide measurement range. In 2005, long-range surface plasmon resonance (LRSPR), often associated with symmetric dielectric–gold–dielectric structures, was further implemented in the SPR imaging scheme for biomolecular detection by Corn et al. [46]. In this scheme, the multilayered sensing surface took the structure of SF10 prism/Cytop/gold/water, where Cytop is an inert and optically transparent material. Data analysis was performed by subtraction of images taken before and after exposure of DNA samples in the SPR system.
The authors reported a 20 % improvement as compared to conventional SPR imaging. Research efforts in phase interrogation further improved the sensitivity limit of SPR imaging. Wong et al. [47] in 2008 demonstrated protein array sensing with phase-stepping differential SPR imaging (Fig. 4). In the experiment, 25 different protein samples were prepared with a 5 × 5 polydimethylsiloxane (PDMS) microwell array. The microwells were then used for pretreating a gold surface to form an array of antigen spots. Experiments performed with BSA antibody samples at various concentrations demonstrated the scheme's capability to perform real-time monitoring of binding reactions. The reported detection limit was 0.77 μg/mL. Another example of phase SPR imaging was reported by Corn and Kim [48] in 2011. The authors produced a one-dimensional phase-sensitive SPR biosensor array. With the use of a linear variable retardation wedge, phase information was converted into a linear interference fringe pattern. The SPR imaging system captured and analyzed the pattern to provide information on the binding


[Fig. 4 schematic: halogen light source, polarizers, wave plate, SPR sensing cell with sample inlet/outlet, and color CCD]

Fig. 4 Protein array sensing with phase-stepping differential SPR imaging [47]

reaction that took place in the one-dimensional sensor array. The setup was used to monitor the bioaffinities of biomolecules and a detection limit of 50 pM was achieved, which is almost 100 times better than that from traditional intensity-based SPR imaging.

Developments of Gold Nanoparticles (Au NPs) for Sensitivity Enhancement
In view of the rising need to develop high-throughput biosensing techniques for detecting small molecules and meeting the requirements of nanoscale biotechnology, sensitivity improvement is an important direction in SPR technology. While a number of approaches have been explored, including reengineering of the light source and the detection scheme, the incorporation of nanoparticles (NPs) offers real promise. Selected examples of Au NP-based SPR sensing are presented in Table 3. Here, we briefly discuss some basic features of Au NPs and representative examples of using Au NPs to improve SPR sensing signals, thereby allowing one to achieve a detection range that is not possible with commercially available SPR sensors. Au NPs have been intensively investigated over the past decade due to their unique tunable optical properties, which can be applied in applications ranging from sensing to imaging. The preparation protocols for Au NPs have been continuously expanding, thereby providing new means of controlling particle size and shape. The special optical properties of Au NPs originate from the particle size confinement effect, which generates new electronic and optical characteristics at the nanoscale. For example, Au NPs display strong, vibrant colors in colloidal solution, generated by SPR absorption. This SPR absorption by Au NPs is associated with the coherent oscillation of the conduction band electrons induced by interaction with the electromagnetic field. In general, the SPR absorption peak shifts to shorter wavelengths (blueshifts) with decreasing nanoparticle size, as predicted by theory. To date, the SPR frequency of Au NPs has been shown to depend on particle size, shape, dielectric properties, aggregate


Table 3 Au NP-enhanced SPR sensing applications

Nanostructure | Parameters | Analyte | Detection limit | References
Au nanorod array | 380 nm length, 25 nm diameter, 60 nm separation | Glycerin | Below 300 nM | [49]
Au nanorod | ~47 nm long, ~22 nm wide | TNF-α antigen | 0.03 pM | [50]
Single dots | Diameter 118 nm, lattice constant 316 nm | Propanol with air | 520 per 10⁻³ RIU | [51]
Double dots | Diameter 100 nm, separation 140 nm, lattice constant 317 nm | Propanol with air | 465 per 10⁻³ RIU | [51]
Square array of Au double dots | Diameter 132 nm, separation 140 nm, lattice constant 318 nm | Water with glycerol | 93 per 10⁻³ RIU | [51]
Spherical Au NP | 40 nm diameter (best signal amplification); 60/70/80 nm also tested | – | 5.4 × 10⁻⁷ RIU | [52]
Au NPs | 15 nm | DNA hybridization | 10 fg/mm² | [53]
Aptamer/thrombin Au NPs | 10 nm | Thrombin | 0.1 nM | [54]
Au NPs | 6–8 nm | E. coli | 10³ cfu/mL | [55]

morphology, surface modification, and the refractive index of the surrounding medium. For instance, the SPR peak of 13 nm Au NPs is around 520 nm, while that of 50 nm NPs is around 570 nm. The SPR peak of Au NPs can thus be manipulated by tailoring the size and shape of the particles. Owing to the rich surface chemistry of Au NPs, many research groups have started to incorporate Au NPs into SPR systems for selective, enhanced sensing of desired biomolecules. In general, Au NPs are used to enhance the SPR sensing response. Some groups have demonstrated the use of functionalized Au NPs in immunoassays to amplify the sensing signals. For instance, an Au NP-enhanced SPR sensing system was used to generate assays for detecting cholera cells [56]. Other groups have shown that the Au NP-amplified SPR sandwich immunoassay is capable of detecting picomolar levels of human immunoglobulin [57]. These studies suggest that the detection limit of Au NP-amplified SPR sensing systems can be further improved by optimizing the shape, size, and architecture of the Au NPs used. We envision that the use of functionalized Au NPs for enhancing biosensor response will create new opportunities for single-molecule detection biosensors and cancer cell diagnostics. For example, Kabashin et al. [49] in 2009 investigated a plasmonic metamaterial for SPR sensitivity enhancement. They fabricated Au nanorod (Au NR) arrays on a thin-film porous aluminum oxide template, with rods 380 nm in length, 25 nm in diameter, and spaced 60 nm apart; the areal density was approximately 10¹⁰–10¹¹ cm⁻². These nanorods provided a substantial overlap between the probing field and active biological target molecules with strong plasmon-mediated energy confinement. The change in refractive index of glycerin at different concentrations was examined with the Au NR arrays in SPR phase detection, and an impressive sensitivity of more than 30,000 nm per refractive index unit was achieved.
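To put such a spectral sensitivity in perspective, the refractive index detection limit follows directly from the wavelength resolution of the readout; the 0.01 nm spectrometer resolution below is a hypothetical figure for illustration, not a number from [49]:

```python
sensitivity_nm_per_riu = 30_000      # reported spectral sensitivity of the Au NR array [49]
resolution_nm = 0.01                 # hypothetical spectrometer wavelength resolution

detection_limit = resolution_nm / sensitivity_nm_per_riu
print(f"{detection_limit:.1e} RIU")  # 3.3e-07 RIU under these assumptions
```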
A diffraction-coupled collective resonant LSPR biosensor was proposed by Kabashin et al. [51] in 2010. Regular arrays of Au nanodots were used to support collective plasmon resonances caused by diffractive coupling of localized plasmons. The response was studied with controlled gaseous mixtures of propanol and air and with controlled mixtures of water and glycerol, and the phase sensitivity was better by one order of magnitude compared with SPR-based sensors.



Different shapes and sizes of Au NPs can be fabricated by tailoring the reaction parameters of the growth solution [58]. In 2013, a spherical Au NP-enhanced SPR sensor based on an interferometric interrogation scheme was reported by Shuwen et al. [52]. In their system, the authors made use of the SPR phase-shift enhancement due to plasmonic coupling between the Au NPs and the sensing film. Experimental results suggested that the optimal Au NP diameter for best signal amplification was 40 nm. Their Au NP-enhanced scheme achieved a detection limit of 5.4 × 10⁻⁷ RIU. For further sensitivity enhancement of SPR biosensing, Kim et al. [53] proposed colocalization of Au NP-conjugated nanoantennas. In this study, the nanograting antenna structure had a periodicity of 400 nm and a depth of 20 nm. Experiments on the detection of DNA hybridization confirmed that the colocalized nanograting antennas produced a 60–80 % enhancement of sensitivity in terms of angular shift. Au NPs combined with sandwich assays have also been developed for SPR sensing in recent years. One example is the aptamer/thrombin/aptamer-Au NP sandwich scheme reported by Bai et al. [54]. In the proposed assay, the thiol-modified thrombin aptamer (TBA29) was immobilized on Au NPs, while another biotinylated thrombin aptamer (TBA15) was bound to the SPR sensor surface through biotin–streptavidin recognition. A detection limit as low as 0.1 nM was reported. Another application of Au NRs in SPR sensing was proposed by Law et al. [50]. Au NRs were used for signal enhancement in their sandwich immunoassay, and the authors adopted a phase interrogation scheme. For the detection of tumor necrosis factor alpha (TNF-α), the detection limit was found to be 0.5 ng/mL, which was 40-fold better than traditional SPR biosensing techniques. Au NP enhancement for bacteria detection was reported by Baccar et al. [55]. In this sensor, Au NPs were first immobilized on the sensor surface before the final step of capturing the target E.
coli cells. A detection limit of 10³ cfu/mL E. coli was found, while the control experiment with no Au NP enhancement only achieved 10⁴ cfu/mL, i.e., a ten times higher concentration. All these findings indicate that Au NPs are important for enhancing the sensitivity of SPR sensors. There is clearly room for improvement in designing new architectures of bioconjugated Au NPs to further increase biosensor sensitivity. In the next few years, many more researchers will certainly devote their efforts to investigating the use of various sizes and shapes of Au NPs for SPR biosensing applications such as single cancer marker detection.

SPR Biosensor Applications
Apart from the implementation of phase detection, sensitivity optimization and enhancement, together with techniques for immobilizing target biomolecules, also require improvement. Typically, biomolecules or ligands are directly coupled to the SPR sensing surface by covalent immobilization. However, unwanted orientations of the ligands may degrade the subsequent binding reactions. Capture coupling schemes were developed to ensure homogeneous orientation of the captured molecules on the surface. Recent developments are presented in the following paragraphs, and some examples are summarized in Table 4. In 2009, Kyprianou et al. [59] proposed an SPR sensor surface based on the reaction between primary amines and the thioacetal groups of thiol compounds. Thioacetal self-assembled monolayers (SAMs) were fabricated from several thiol-containing molecules and tested using a Biacore 3000. Experimentally, SAMs of pentaerythritol tetrakis(3-mercaptopropionate) (PETMP) offered the highest stability and surface capacity for protein immobilization, and Salmonella typhimurium cells were detected down to 5 × 10⁶ cells/mL.



Table 4 Examples of SPR biosensing applications

Application | Biomolecule coupled with transducer | Analyte | Detection system | Limit of detection | References
Microarray immunoassay | Thiol compounds | Protein | Biacore 3000 | 5 × 10⁶ cells/mL | [59]
Microarray immunoassay | Mercaptoundecanoic acid (MUDA) | Protein | Biacore 3000 | – | [60]
Microarray immunoassay | Polymer of MES/DEAEM | 2,4,6-Trinitrotoluene (TNT) | SIA Kit Au (GE Healthcare) | 5.7 pg/mL | [61]
Nucleic acid detection | Thiolated DNA oligonucleotides | Microribonucleic acids | Surface plasmon coupler and disperser | 1,124.4 ng/mL | [62]
Nucleic acid detection in SPRi | Oligonucleotides | Complementary strands | SPR imaging apparatus | – | [67]
Antibody detection | 11-Mercaptoundecanoic acid | Anti-Leishmania infantum antibodies | SPR system in Kretschmann configuration | Specificity up to 1:6,400 | [63]
Antibody detection | Anti-IBDV monoclonal antibody | Infectious bursal disease virus | Spreeta biosensor (TSPR1k23) | 2.5 ng/mL | [64]
Antibody detection | Antibody | Chloramphenicol and chloramphenicol glucuronide | Biacore Q | 0.005–0.04 mg/kg | [65]
Antibody detection | Anti-E. coli polyclonal antibodies | E. coli | Spreeta sensors (Texas Instruments) | (1.9 ± 0.9) × 10⁴ cfu/mL | [66]

Another experiment performed on an SPR sensor surface involved modification with mercaptoundecanoic acid (MUDA) coatings [60]. Biacore 3000 sensor chips were modified with 1 %, 10 %, and 100 % MUDA and with cysteamine surfaces. Experimental results suggested that protein assays were sensitive to the dendrimer-modified 1 % MUDA surface, while the 1 % MUDA coating with standard EDC/NHS chemistry offered the best signal output for DNA assays. In 2013, Yatabe et al. [61] reported a polymer-modified SPR immunosensor chip fabricated by surface-initiated atom transfer radical polymerization (SI-ATRP). The process involved the mixing of a monomer, mono-2-(methacryloyloxy)ethyl succinate (MES), with diethylaminoethyl methacrylate (DEAEM). Experimental detection of 2,4,6-trinitrotoluene (TNT) concluded that the monolayer polymer-covered surface offered a limit of detection of 5.7 pg/mL (~ppt level). Detection of microribonucleic acids (miRNAs) using SPR biosensing was reported by Homola et al. [62]. The biosensing chip was modified with thiolated DNA oligonucleotides, which were effective agents for capturing miRNAs. The sensor surface was also treated with a special antibody, which further enhanced the signal. Samples extracted from liver tissue of acetaminophen-treated mice were examined with a special diffraction grating called a surface plasmon coupler and disperser (SPRCD). A detection limit of 1,124.4 ng/mL in total RNA concentration was obtained. SPR offers a favorable platform for quantitative analysis of therapeutic antibody concentrations and antigen–antibody binding. For example, in an investigation of the zoonotic disease leishmaniasis, Damos et al. [63] explored an SPR immunosensor in which the sensor surface was modified with 11-mercaptoundecanoic acid (11-MUA) for the detection of anti-Leishmania infantum antibodies. A SAM was fabricated by immersing the chip in an ethanol solution containing 11-MUA.
After drying, the surface was activated through exposure to an NHS–EDC reagent just prior to use. The efficiency of this SPR-based immunosensor was evaluated with positive and negative canine sera with



specificity against anti-Leishmania infantum antibodies up to 1:6,400. Infectious bursal disease virus (IBDV) detection based on a similar SPR immunosensor was also reported by Hu et al. [64]. In their work, the IBDV concentration was determined by self-assembly of the anti-IBDV monoclonal antibody (IBDmAb). The recorded detection limit was as low as 2.5 ng/mL. Ferguson et al. [65] reported a chloramphenicol derivative-antibody-coated SPR sensor chip for the screening of the antimicrobial agent chloramphenicol and chloramphenicol glucuronide residues in poultry muscle, honey, prawn, and milk. A drug-specific antibody of known concentration was first mixed with the sample before being injected into the SPR sensing cell. The corresponding detection limits for poultry, honey, prawn, and milk are 0.005, 0.02, 0.04, and 0.04 mg/kg, respectively. Another SPR immunosensor was developed by Dudak and Boyacı [66] for the detection of generic E. coli in water samples. By immobilization of biotin-conjugated polyclonal antibodies against E. coli, the sensing surface modification scheme was evaluated for specific detection of E. coli in water samples. The detection limit was (1.9 ± 0.9) × 10⁴ cfu/mL, which is comparable with results obtained by the plate counting method. Modification by oligonucleotide immobilization on photolithographically patterned gold substrates was reported by Spadavecchia et al. [67]. The SPR chip contained a two-dimensional array of 50 μm wide gold dots. Probe oligonucleotides were immobilized on the sensing dots. The authors used a homemade intensity-based SPR imaging system for monitoring the hybridization process.

Conclusion
In this chapter, we have summarized various SPR phase detection schemes in three categories (optical heterodyne, polarimetry, and interferometry). It is clear that the global healthcare market will continue to expand, creating strong market potential for biodetection technologies. SPR biosensing offers the unique advantage of real-time, label-free detection. This means a high degree of flexibility in sample preparation and possibly much reduced reagent costs. Since phase-detecting SPR biosensors have the merits of enhanced sensitivity and ease of achieving high-throughput multi-analyte detection with simple two-dimensional imaging techniques, there are promising prospects for such sensors to gain market share from conventional angle-detecting SPR instruments, and even to capture a wider scope of applications in the healthcare industry. We also foresee an urgent need in the near future to develop ultrasensitive, compact SPR sensor devices for rapid, real-time, on-site sensing applications. For instance, SPR sensors can play an important role in environmental pollution monitoring. Although pollutants in the environment (e.g., smog microparticles, lead, sulfur dioxide) cause harmful effects to humans, currently available sensors may not be capable of detecting low concentrations of these life-threatening substances. Therefore, the development of enhanced SPR sensors integrated with molecular markers that selectively capture desired molecules would be ideal for early detection of harmful substances in the environment, allowing the community enough time to prepare strategic plans to support public healthcare.



Acknowledgments
This work has been supported by the Hong Kong Research Grants Council under a group research project (Ref. # CUHK1/CRF/12G).

References
1. Wood RW (1902) XLII. On a remarkable case of uneven distribution of light in a diffraction grating spectrum. Philos Mag Ser 6 4(21):396
2. Otto A (1968) Excitation of nonradiative surface plasma waves in silver by the method of frustrated total reflection. Zeitschrift für Physik 216(4):398–410
3. Kretschmann E, Raether H (1968) Radiative decay of non-radiative surface plasmons excited by light. Zeitschrift für Naturforschung 23:2135–2136
4. Liedberg B, Nylander C, Lundström I (1983) Surface plasmon resonance for gas detection and biosensing. Sensor Actuator B 4:299–304
5. Wu CM, Jian ZC, Joe SF, Chang LB (2003) High sensitivity sensor based on surface plasmon resonance and heterodyne interferometry. Sensor Actuator B 92(1–2):133–136
6. Chiu MH, Wang SF, Chang RS (2005) D-type fiber biosensor based on surface plasmon resonance technology and heterodyne interferometry. Opt Lett 30(3):233–235
7. Wang SF, Chiu MH, Chang RS (2006) Numerical simulation of a D-type optical fiber sensor based on the Kretchmann's configuration and heterodyne interferometry. Sensor Actuator B 114(1):120–126
8. Chiu MH, Shih CH (2008) Searching for optimal sensitivity of single-mode D-type optical fiber sensor in the phase measurement. Sensor Actuator B 131(2):596–601
9. Wang SF (2009) U-shaped optical fiber sensor based on multiple total internal reflections in heterodyne interferometry. Opt Lasers Eng 47(10):1039–1043
10. Li YC, Chang YF, Su LC, Chou C (2008) Differential-phase surface plasmon resonance biosensor. Anal Chem 80(14):5590–5595
11. Kuo WC, Chou C, Wu HT (2003) Optical heterodyne surface-plasmon resonance biosensor. Opt Lett 28(15):1329–1331
12. Chou C, Wu HT, Huang YC, Chen YL, Kuo WC (2006) Characteristics of a paired surface plasma waves biosensor. Opt Express 14(10):4307–4315
13. Lee J-Y, Mai L-W, Hsu C-C, Sung Y-Y (2013) Enhanced sensitivity to surface plasmon resonance phase in wavelength-modulated heterodyne interferometry. Opt Commun 289:28–32
14. Ho HP, Law WC, Wu SY, Liu XH, Wong SP, Lin C, Kong SK (2006) Phase-sensitive surface plasmon resonance biosensor using the photoelastic modulation technique. Sensor Actuator B 114(1):80–84
15. Peng HJ, Wong SP, Lai YW, Liu XH, Ho HP, Zhao S (2003) Simplified system based on photoelastic modulation technique for low-level birefringence measurement. Rev Sci Instrum 74(11):4745–4749
16. Yuan W, Ho HP, Wu SY, Suen YK, Kong SK (2009) Polarization-sensitive surface plasmon enhanced ellipsometry biosensor using the photoelastic modulation technique. Sensor Actuator A 151(1):23–28
17. Stewart CE, Hooper IR, Sambles JR (2008) Surface plasmon differential ellipsometry of aqueous solutions for bio-chemical sensing. J Phys D 41(10):105408–105415



18. Hooper IR, Rooth M, Sambles JR (2009) Dual-channel differential surface plasmon ellipsometry for bio-chemical sensing. Biosens Bioelectron 25(2):411–417
19. Markowicz PP, Law WC, Baev A, Prasad PN, Patskovsky S, Kabashin AV (2007) Phase-sensitive time-modulated surface plasmon resonance polarimetry for wide dynamic range biosensing. Opt Express 15(4):1745–1754
20. Law WC, Markowicz P, Yong KT, Roy I, Baev A, Patskovsky S, Kabashin AV, Ho HP, Prasad PN (2007) Wide dynamic range phase-sensitive surface plasmon resonance biosensor based on measuring the modulation harmonics. Biosens Bioelectron 23(5):627–632
21. Patskovsky S, Jacquemart R, Meunier M, de Crescenzo G, Kabashin AV (2008) Phase-sensitive spatially-modulated surface plasmon resonance polarimetry for detection of biomolecular interactions. Sensor Actuator B 133(2):628–631
22. Patskovsky S, Maisonneuve M, Meunier M, Kabashin AV (2008) Mechanical modulation method for ultra-sensitive phase measurements in photonics biosensing. Opt Express 16(26):21305–21314
23. Chiang HP, Lin JL, Chen ZW (2006) High sensitivity surface plasmon resonance sensor based on phase interrogation at optimal incident wavelengths. Appl Phys Lett 88(14):141105
24. Chiang HP, Lin JL, Chang R, Su SY, Leung PT (2005) High-resolution angular measurement using surface-plasmon-resonance via phase interrogation at optimal incident wavelengths. Opt Lett 30(20):2727–2729
25. Zheng Z, Wan Y, Zhao X, Zhu J (2009) Spectral interferometric measurement of wavelength-dependent phase response for surface plasmon resonance sensors. Appl Optics 48(13):2491–2495
26. Ng SP, Loo FC, Wu SY, Kong SK, Wu CML, Ho HP (2013) Common-path spectral interferometry with temporal carrier for highly sensitive surface plasmon resonance sensing. Opt Express 21(17):20268–20273
27. Nikitin PI, Grigorenko AN, Beloglazov AA, Valeiko MV, Savchuk AI, Savchuk OA, Steiner G, Kuhne C, Huebner A, Salzer R (2000) Surface plasmon resonance interferometry for micro-array biosensing. Sensor Actuator A 85(1):189–193
28. Homola J, Yee SS (1998) Novel polarization control scheme for spectral surface plasmon resonance sensors. Sensor Actuator B 51(1–3):331–339
29. Steiner G, Sablinskas V, Hübner A, Kuhne C, Salzer R (1999) Surface plasmon resonance imaging of microstructured monolayers. J Mol Struct 509(1–3):265–273
30. Piliarik M, Vaisocherová H, Homola J (2005) A new surface plasmon resonance sensor for high-throughput screening applications. Biosens Bioelectron 20(10):2104–2110
31. Piliarik M, Vaisocherová H, Homola J (2007) Towards parallelized surface plasmon resonance sensor platform for sensitive detection of oligonucleotides. Sensor Actuator B 121(1):187–193
32. Su YD, Chen SJ, Yeh TL (2005) Common-path phase-shift interferometry surface plasmon resonance imaging system. Opt Lett 30(12):1488–1490
33. Yu X, Ding X, Liu F, Deng Y (2008) A novel surface plasmon resonance imaging interferometry for protein array detection. Sensor Actuator B 130(1):52–58
34. Kabashin AV, Nikitin PI (1997) Interferometer based on a surface plasmon resonance for sensor applications. Quantum Electron 27(7):653–654
35. Kabashin AV, Nikitin PI (1998) Surface plasmon resonance interferometer for bio- and chemical-sensors. Opt Commun 150(1–6):5–8
36. Notcovich AG, Zhuk V, Lipson SG (2000) Surface plasmon resonance phase imaging. Appl Phys Lett 76(13):1665–1667

Page 17 of 19

Handbook of Photonics for Biomedical Engineering DOI 10.1007/978-94-007-6174-2_38-2 # Springer Science+Business Media Dordrecht 2014

37. Ho HP, Lam WW, Wu SY (2002) Surface plasmon resonance sensor based on the measurement of differential phase. Rev Sci Instrum 73(10):3534–3539 38. Ho HP, Lam WW (2003) Application of differential phase measurement technique to surface plasmon resonance sensors. Sensor Actuator B 96(3):554–559 39. Wu SY, Ho HP, Law WC, Lin C (2004) Highly sensitive differential phase-sensitive surface plasmon resonance biosensor based on the Mach-Zehnder configuration. Opt Lett 29(20):2378–2380 40. Yuan W, Ho HP, Wong CL, Kong SK, Lin C (2007) Surface plasmon resonance biosensor incorporated in a Michelson interferometer with enhanced sensitivity. IEEE Sensor J 7(1):70–73 41. Ho HP, Yuan W, Wong CL, Wu SY, Suen YK, Kong SK, Lin C (2007) Sensitivity enhancement based on application of multi-pass interferometry in phase-sensitive surface plasmon resonance biosensor. Opt Commun 275(2):491–496 42. Ng SP, Wu SY, Ho HP, Wu CML (2008) A white-light interferometric surface plasmon resonance sensor with wide dynamic range and phase-sensitive response. In: IEEE international conference on electron devices and solid-state circuits, HKSAR, Dec 2008 43. Ng SP, Wu CML, Wu SY, Ho HP (2011) White-light spectral interferometry for surface plasmon resonance sensing applications. Opt Express 19(5):4521–4527 44. Thiel AJ, Frutos AG, Jordan CE, Corn RM, Smith LM (1997) In situ surface plasmon resonance imaging detection of DNA hybridization to oligonucleotide arrays on gold surfaces. Anal Chem 69(24):4948–4956 45. Wegner GJ, Wark AW, Lee HJ, Codner E, Saeki T, Fang S, Corn RM (2004) Real-time surface plasmon resonance imaging measurements for the multiplexed determination of protein adsorption/desorption kinetics and surface enzymatic reactions on peptide microarrays. Anal Chem 76(19):5677–5684 46. Wark AW, Lee HJ, Corn RM (2005) Long-range surface plasmon resonance imaging for bioaffinity sensors. Anal Chem 77(13):3904–3907 47. 
Wong CL, Ho HP, Suen YK, Chen QL, Yuan W, Wu SY (2008) Real-time protein biosensor arrays based on surface plasmon resonance differential phase imaging. Biosensor Bioelectron 24(4):606–612 48. Halpern AR, Chen Y, Corn RM, Kim D (2011) Surface plasmon resonance phase imaging measurements of patterned monolayers and DNA adsorption onto microarrays. Anal Chem 83(7):2801–2806 49. Kabashin AV, Evans P, Pastkovsky S, Hendren W, Wurtz GA, Atkinson R, Pollard R, Podolskiy VA, Zayats AV (2009) Plasmonic nanorod metamaterials for biosensing. Nat Mater 8(11):867–871 50. Law WC, Yong KT, Baev A, Prasad PN (2011) Sensitivity improved surface plasmon resonance biosensor for cancer biomarker detection based on plasmonic enhancement. ACS Nano 5(6):4858–4864 51. Kravets VG, Schedin F, Kabashin AV, Grigorenko AN (2010) Sensitivity of collective plasmon modes of gold nanoresonators to local environment. Opt Lett 35(7):956–958 52. Zeng S, Yu X, Law WC, Zhang Y, Hu R, Dinh XQ, Ho HP, Yong KT (2013) Size dependence of Au NP-enhanced surface plasmon resonance based on differential phase measurement. Sensor Actuator B 176:1128–1133 53. Oh Y, Lee W, Kim D (2011) Colocalization of gold nanoparticle-conjugated DNA hybridization for enhanced surface plasmon detection using nanograting antennas. Opt Lett 36(8):1353–1355

Page 18 of 19

Handbook of Photonics for Biomedical Engineering DOI 10.1007/978-94-007-6174-2_38-2 # Springer Science+Business Media Dordrecht 2014

54. Bai Y, Feng F, Wang C, Wang H, Tian M, Qin J, Duan Y, He X (2013) Aptamer/thrombin/ aptamer-AuNPs sandwich enhanced surface plasmon resonance sensor for the detection of subnanomolar thrombin. Biosensor Bioelectron 47:265–270 55. Baccar H, Mejri MB, Hafaiedh I, Ktari T, Aouni M, Abdelghani A (2010) Surface plasmon resonance immunosensor for bacteria detection. Talanta 82(2):810–814 56. Liu Y, Cheng Q (2012) Detection of membrane-binding proteins by surface plasmon resonance with an all-aqueous amplification scheme. Anal Chem 84(7):3179–3186 57. Lyon LA, Musick MD, Natan MJ (1998) Colloidal Au-enhanced surface plasmon resonance immunosensing. Anal Chem 70(24):5177–5183 58. Sau TK, Murphy CJ (2004) Room temperature, high-yield synthesis of multiple shapes of gold nanoparticles in aqueous solution. J Am Chem Soc 126(28):8648–8649 59. Kyprianou D, Guerreiro AR, Nirschl M, Chianella I, Subrahmanyam S, Turner PF, Piletsky S (2010) The application of polythiol molecules for protein immobilisation on sensor surfaces. Biosens Bioelectron 25(5):1049–1055 60. Altintas Z, Uludag Y, Gurbuz Y, Tothill I (2012) Development of surface chemistry for surface plasmon resonance based sensors for the detection of proteins and DNA molecules. Anal Chim Acta 712:138–144 61. Yatabe R, Onodera T, Toko K (2013) Fabrication of an SPR sensor surface with antifouling properties for highly sensitive detection of 2,4,6-Trinitrotoluene using surface-initiated atom transfer polymerization. Sensors 13(7):9294–9304 62. Sipova H, Zhang S, Dudley AM, Galas D, Wang K, Homola J (2010) Surface plasmon resonance biosensor for rapid label-free detection of microribonucleic acid at subfemtomole level. Anal Chem 82(24):10110–10115 63. Souto EP, Silva V, Martins R, Reis B, Luz CS, Kubota T, Damos S (2013) Development of a label-free immunosensor based on surface plasmon resonance technique for the detection of anti-Leishmania infantum antibodies in canine serum. Biosens Bioelectron 46:22–29 64. 
Hu J, Li W, Wang T, Lin Z, Jiang M, Hu F (2012) Development of a label-free and innovative approach based on surface plasmon resonance biosensor for on-site detection of infectious bursal disease virus (IBDV). Biosens Bioelectron 31(1):475–479 65. Ferguson J, Baxter A, Young P, Kennedy G, Elliott C, Weigel S, Gatermann R, Ashwin H, Stead S, Sharman M (2005) Detection of chloramphenicol and chloramphenicol glucuronide residues in poultry muscle, honey, prawn and milk using a surface plasmon resonance biosensor and Qflex® kit chloramphenicol. Anal Chim Acta 529(1–2):109–113 66. Dudak FC, Boyac IH (2007) Development of an immunosensor based on surface plasmon resonance for enumeration of Escherichia coli in water samples. Food Res Int 40(7):803–807 67. Spadavecchia J, Manera MG, Quaranta F, Siciliano P, Rella R (2005) Surface plasmon resonance imaging of DNA based biosensors for potential applications in food analysis. Biosens Bioelectron 21(6):894–900

Page 19 of 19

Miniaturized Fluidic Devices and Their Biophotonic Applications Alana Mauluidy Soehartono, Liying Hong, Guang Yang, Peiyi Song, Hui Kit Stephanie Yap, Kok Ken Chan, Peter Han Joo Chong, and Ken-Tye Yong

Contents Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Miniaturized Fluidic Regimes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Fluidic Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Fabrication of a Miniaturized Fluidic Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Biophotonic Applications with Miniaturized Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Nucleic Acid Optical Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bioanalysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Flow Cytometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Plasmonic Biosensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Nanoparticle Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Summary and Future Perspectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


Abstract

Miniaturized fluidic devices provide a platform for reaction processes to be scaled down into the milli-, micro-, and nanoscale level. The advantages of using miniaturized devices include the reduction of sample volumes, faster processing rates, automation, portability, low cost, and enhanced detection limit. Bioanalysis, biosensing, bioimaging, and nanoparticle synthesis are some of the important research areas in the biophotonic field which are often burdened by A.M. Soehartono (*) • L. Hong • G. Yang • P. Song • H.K.S. Yap • K.K. Chan • K.-T. Yong School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected] P.H.J. Chong Department of Electrical and Electronic Engineering, Auckland University of Technology, Auckland, New Zealand e-mail: [email protected]; [email protected] # Springer Science+Business Media Dordrecht 2016 A. H.-P. Ho et al. (eds.), Handbook of Photonics for Biomedical Engineering, DOI 10.1007/978-94-007-6174-2_39-1
time-consuming reaction processes, the requirement of large sample quantities, and cumbersome equipment with a large footprint. As such, scaling down these reaction processes using miniaturized devices is a promising approach to greatly improve the overall sensitivity of bioanalysis and biosensing and to shorten the reaction time for producing high-quality nanoparticles for biophotonic applications. However, scaling down reaction processes in the fluidic domains poses different technical challenges, since the underlying physical phenomena differ from those at the macroscale. In this chapter, we aim to highlight the advancements and challenges in the fabrication of miniaturized devices for biophotonic applications.

Keywords

Miniaturization • Biophotonics • Millifluidic • Microfluidic • Nanofluidic • Lab-on-a-chip

Introduction

From using conventional upright microscopes to study cellular phenomena to the excitation of nanoparticles in cells toward tumor therapy, the interaction of light with biological matter is ubiquitous in modern medicine and bioengineering research. Biophotonics is a field that pertains to (i) studying the interactions of light with biological systems and (ii) developing sensitive and high-resolution light-based technologies for these biological studies. Biophotonic optical technologies permit the observation of morphological changes at the cellular and tissue level through characteristics such as reflectivity, scattering, absorption, and chemical changes [1]. To date, the biophotonic community has been continuously developing novel bio-optical approaches for improving the performance of healthcare tools such as diagnostics, therapeutics, drug screening, and genome profiling. For example, exploring molecular events and kinetics at the cellular level will certainly shed light on the overall reaction mechanisms and thereby speed up the development of new drug therapies for treating different human diseases. Also, the ability to monitor disease progression will allow clinicians to optimize therapeutic plans for patients [2]. However, many of the biophotonic-related reaction processes mentioned above still require bulky and expensive equipment, time-intensive steps, complex processing protocols, and low throughput with little-to-no automation. Miniaturized systems, particularly fluidic devices, aim at scaling down macroscale processes to the milli-, micro-, and nanoscale and can also provide solutions to the shortcomings experienced in conventional tabletop processes. Miniaturized fluidic devices are generally used to speed up the processing and analysis time with their ability to multiplex and automate operations.
Owing to their relatively small dimensions, smaller amounts of reagents and analyte are needed; this is especially useful in situations where samples are very limited or in low-resource settings. Being a low-cost alternative to conventional methods, an
inexpensive miniaturized device also offers convenience to the users and the ease of disposability. The origin of miniaturized fluidic devices is a culmination of progress in the fields of molecular analysis, biodefense, molecular biology, and microelectronics [3] into what would become known as the field of microfluidics. Currently, it is defined as the study of the precision manipulation of low-volume fluids for various reaction processes. The diversity of its beginnings is a true reflection of the interdisciplinary nature of the microfluidic field, combining physics, chemistry, biochemistry, engineering, and nanotechnology. Furthermore, the introduction of soft lithography in the 1990s by Whitesides' group has substantially accelerated the fabrication of various miniaturized systems by offering a cheaper microfabrication alternative to standard photolithography. Thus, it is no surprise that we have witnessed a burst of research developments in the field of microfluidic devices. In this chapter, we aim to provide a broad discussion of the fabrication and applications of miniaturized devices at the milli-, micro-, and nanoscale. We review underlying principles that affect fluidic manipulation and briefly describe the fabrication techniques. Then, we highlight selected biophotonic applications using miniaturized devices: nucleic acid mapping, bioanalysis, flow cytometry, plasmonic biosensors, and nanoparticle synthesis. We take an application-based approach to our review because many biophotonic applications overlap across the miniaturized fluidic regimes. For each of these applications, miniaturization provides a platform that can overcome shortcomings in the current process. For example, optical mapping has benefitted from the development of nanofluidic devices, as nanochannels permit interrogation of longer DNA sequence structures.
Benchtop bioanalytical methods and nanoparticle synthesis tend to be time-consuming and require bulky equipment and specialized operators, also making them suitable candidates for miniaturization. Due to the scope of our review, we regret that not all related publications could be included, but we direct the reader to the literature for more details on the miniaturization of each regime. Finally, we conclude the chapter with a summary and present our perspective on the future of healthcare with a vision of miniaturized devices.

Miniaturized Fluidic Regimes

Miniaturized fluidic devices permit the manipulation of small fluid volumes down to the order of attoliters (10^-18 L) [4], and they can be classified as those in the millifluidic, microfluidic, and nanofluidic regimes. The prefixes of each fluidic regime refer to the critical operational lengths. Millifluidic systems operate with channel dimensions in the millimeter (mm) range, microfluidics in the micrometer (μm) range, and nanofluidics in the nanometer (nm) range. With all miniaturized devices, an increase in the surface-area-to-volume ratio (SA/V) can be achieved, unparalleled to that on the macroscale. Nevertheless, miniaturization is more than simply reducing the length scales of a macroscale process. At smaller length scales, a fluidic channel exhibits properties
which differ from those at the macroscale, bringing about a new set of challenges in device implementation. Fluids confined in small channels tend to have laminar flow, unlike large-scale channels, which would exhibit turbulent flow at increased fluid velocities. Laminar flow, while beneficial for its controllability, presents challenges in mixing applications. Additional channel designs or micromixers have to be incorporated, as mixing can only be achieved through diffusion in the laminar flow regime. Furthermore, differences in fluidic properties exist between the three regimes that can determine their application suitability. For example, millifluidic channels can achieve a relatively high surface-area-to-volume ratio (SA/V) with a lower pressure drop compared with microfluidics [6]. When used as reactionware for nanoparticle synthesis, this property can alleviate problems such as blockages that would otherwise occur in microfluidic channels. With increasingly smaller dimensions, fundamental physical phenomena are increasingly influenced by the fluid's interactions with its channel boundaries [7], and this is especially true of nanochannels. Fluids can typically be treated as a continuum, an assumption that holds at the milli- and microscale [8]. However, at the nanoscale, the liquid is considered as an ensemble of molecules [9]. Nanochannels are typically fabricated with high aspect ratios (between 5 and 200) [10]. As a result of higher fluidic resistance in nanochannels, pressure-driven flow is difficult, prompting the need for other mechanisms to drive fluid through the channel. The ultrahigh SA/V that can be achieved in nanochannels means that bulk properties of the fluid are subjected to more interfacial effects, such as electrostatic forces seen in the formation of the electric double layer (EDL).
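The SA/V scaling that drives these interfacial effects can be illustrated with a quick calculation. The sketch below considers a square channel of side d; the channel dimensions are arbitrary illustrative values of our own choosing, not taken from the chapter:

```python
# Surface-area-to-volume ratio of a square channel of side d and length L:
# SA = 4*d*L (four walls), V = d^2 * L, so SA/V = 4/d, independent of L.

def sa_to_volume_ratio(d_m: float) -> float:
    """SA/V ratio (in 1/m) of a square channel with side length d_m (meters)."""
    return 4.0 / d_m

for label, d in [("millifluidic, d = 1 mm  ", 1e-3),
                 ("microfluidic, d = 10 um ", 10e-6),
                 ("nanofluidic,  d = 100 nm", 100e-9)]:
    print(f"{label}: SA/V = {sa_to_volume_ratio(d):.1e} m^-1")
```

Shrinking the channel side from 1 mm to 100 nm raises SA/V by four orders of magnitude, which is why interfacial effects such as EDL formation become dominant only in nanochannels.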
Other forces from nanoscaling, such as solute entropy, van der Waals forces, and hydrophilic repulsion, have similarly been exploited in nanochannels to drive DNA separation, chromatographic separation, and adsorption prevention, respectively [11]. The full potential of nanofluidic devices may not have been realized yet, as the theoretical framework of nanofluidics is not fully developed, unlike its microfluidic counterpart, and remains an active area of investigation. The practical implementation and the fabrication tools available also need to be carefully considered. Achieving small-scale features, doing so consistently, and characterizing the features become increasingly difficult as the dimensions approach the nanoscale. In this respect, nanofluidics lacks an easy fabrication method. Millifluidics benefits from its suitability for non-lithographic rapid prototyping using 3D printing. This approach is a cheap and convenient alternative to microfluidic soft lithography, which often uses lithographic patterning to create a mold. However, 3D printing is limited by the resolution of the printer, where submillimeter resolution is typically not consistently achievable. Table 1 samples the differences in fluidic properties and summarizes the advantages and limitations of the three regimes. Miniaturized fluidic devices have been applied in a variety of research areas corresponding to the scale of detectable structures (Fig. 1). On the larger end of the spectrum, millifluidics has been reported in reactionware [6, 12, 13] and bioanalysis [14, 15]. The larger channel dimensions are suitable for studies on the cell population scale, with dynamic cell studies involving more realistic tumor spheroid models,

Table 1 Overview of the characteristics of miniaturized fluidic devices, their advantages, and limitations

Millifluidic (critical operating length ~10^-3 m)
- Fluidic channel characteristics: lower surface tension; lower SA/V ratio; higher fluid volumes; liquid considered as a continuum
- Advantages: cell population scale; non-lithographic rapid prototyping using 3D printing; able to process viscous fluids; suitable for nanoparticle synthesis and large-scale in vitro bioanalysis
- Limitations: low-resolution features; limited range of working flow rates; nonstandard components; nonstandard characterization

Microfluidic (critical operating length ~10^-6 m)
- Fluidic channel characteristics: higher surface tension; high SA/V ratio; low fluid volumes; liquid considered as a continuum
- Advantages: developed theoretical framework; single-cell scale; rapid prototyping using soft lithography; wide range of working flow rates; suitable for flow cytometry, sensing, and bioanalysis
- Limitations: prone to clogging and air bubbles; nonstandard components; nonstandard characterization

Nanofluidic (critical operating length ~10^-9 m)
- Fluidic channel characteristics: ultrahigh SA/V ratio; greater interfacial effects, such as strong electrostatic interaction between the charged surface and ions in the channel (EDL formation); low fluid volumes; liquid considered as an ensemble of molecules
- Advantages: single-macromolecule confinement; valence-based macromolecule separation; suitable for DNA sequencing
- Limitations: incomplete theoretical framework; lacks a disruptive fabrication technology; difficult repeatability; nonstandard components; nonstandard characterization

small animal models, and embryos [16]. However, larger channels may lead to the use of more reagents and samples. Microfluidic channels, on the other hand, can achieve high SA/V ratios, but at the expense of higher surface tension. Nevertheless, laminar flow can be maintained over a wide working range [17], and thus a host of applications have been reported, such as flow cytometry [18–20], biosensing [21–24], nanoparticle synthesis [25–27], and single-cell studies [28–31]. Since average human cell diameters range from 10 to 30 μm, this regime provides a platform for detecting structures at the single-cell level. Finally, nanofluidic devices are suitable for probing molecular-scale structures such as small bacteria [32, 33], viruses [34–36], and DNA [37–39], with diameters on the order of 200 nm, 75 nm, and 2 nm, respectively. Furthermore, surface effects can be leveraged to perform

Fig. 1 Selected biophotonic applications using different miniaturized devices working at milli-, micro-, and nanoscale level, along with the corresponding detectable biological systems at the length scale

functions such as valence-based ion separation. Compared to the other regimes, microfluidics is the most mature. However, research dedicated to millifluidic and nanofluidic studies is proliferating, as these regimes can overcome some of the application-based limitations seen in microfluidics.
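The regime boundaries sketched in Fig. 1 can be summarized in a small helper. This is a toy classifier of our own devising; the cutoffs are the order-of-magnitude operating lengths from Table 1, not hard physical boundaries:

```python
# Map a channel's critical dimension (in meters) to its fluidic regime,
# using the order-of-magnitude operating lengths from Table 1.

def fluidic_regime(critical_length_m: float) -> str:
    if critical_length_m >= 1e-3:
        return "millifluidic"   # mm-range channels: cell populations, spheroids
    if critical_length_m >= 1e-6:
        return "microfluidic"   # um-range channels: single cells (~10-30 um)
    return "nanofluidic"        # sub-um channels: bacteria, viruses, DNA

print(fluidic_regime(2e-3))     # a 2 mm channel  -> millifluidic
print(fluidic_regime(20e-6))    # a 20 um channel -> microfluidic
print(fluidic_regime(100e-9))   # a 100 nm channel -> nanofluidic
```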

Fluidic Manipulation

One of the main assets of miniaturized fluidic devices is the ability to flexibly manipulate fluidic motion. The basis of operations within fluidic devices is built upon a set of fundamental operations for fluid manipulation and transport, such as flow actuation, valving, mixing, and separating. Application of one or more of these components results in a functional device that performs multiple tasks. Miniaturization permits large-scale integration; for example, thousands of operations can be combined onto a single device, enabling high-throughput testing and analysis in a single device. Naturally, the range of applications that these operations have spawned is diverse. For instance, to enhance the performance of a surface plasmon resonance (SPR)-based biosensor, the device simultaneously regulates temperature through integrated micro-heaters and temperature sensors to maintain a uniform temperature distribution and ensure the integrity of the biosensing

Fig. 2 Schematic of a multifunctional SPR sensor, comprising microvalves, micropumps, flow sensors, heaters, and temperature sensors (Reprinted from Ref. [40]. Copyright (2007), with permission from Elsevier)

measurements [40] (Fig. 2). This multifunctional capability has further been exploited in on-chip devices, such as lab-on-a-chip (LoC) devices, which aim to translate conventional benchtop laboratory analysis into a compact device for sample analysis. This idea has spawned alternate iterations in biomedical studies such as tumor-on-a-chip [41] and organs-on-chips [42], such as brain-on-a-chip [43], to name a few. In each of these devices, the purpose is to act as an in vitro model that mimics the true nature of the biological system on a single chip for predicting the outcome of medical diagnosis and therapeutic treatment without the use of any animal models. Here, we briefly look at some of the physical phenomena encountered at the milli-, micro-, and nanoscales before reviewing the unit operators. The theoretical framework of fluidic flow is built on the Navier–Stokes formalism, which describes the effects of fluidic confinement. As an example of this, we look at the Reynolds number, Re. In order to ensure predictable device performance, a well-defined flow is desirable. As small-scale systems offer a marked increase in the SA/V ratio, surface forces dominate over gravitational forces [44, 45]. Re is a dimensionless parameter that dictates the flow regime of a channel, whether laminar or turbulent, and is given by the ratio of inertial forces to viscous forces, expressed as:

Re = ρVlc / η

where ρ is the fluid density, η is the fluid viscosity, lc is the characteristic length of the flow, and V is the fluid velocity. The effects of scaling are reflected in fluidic flow
through the Re. Inertial forces decrease proportionally with the reduction of lc, resulting in a smaller Re; equivalently, as lc shrinks, viscous forces become more dominant, again signified by a smaller Re. A low Re corresponds to Stokes flow, translating to laminar flow. For channel flows with Re values approaching 2300, transition flow sets in, and Re numbers much greater than 2300 denote turbulent flow [46]. Predictably, with this characteristic, the flow regime can be manipulated by geometry design. In miniaturized fluidic devices, small channel dimensions, with the critical dimension usually being the channel height, result in Re much less than 100. For example, the predictability of laminar flow has been used to pattern different cell types in a single stream, flowing in parallel in one conduit [47]. Applications where turbulent behavior is desirable, such as mixing, are facilitated by modes such as chaotic advection. Although several transport phenomena differ greatly at the nanoscale, the Re criterion generally holds true in channels with one dimension smaller than 100 nm [48]. Fluid flow in channels is most commonly actuated by pressure or electrokinetics. Pressure-driven flow is widely adopted for its simplicity and can be accomplished off-chip, such as by using syringe pumps. However, fluidic resistance may dampen these flows, as a large hydrodynamic pressure may be necessary to actuate the fluids. This is evident when observing the governing equations of an incompressible, Newtonian fluid. The fluidic resistance of a circular tube is expressed as:

R = 8μL / (πr^4)

where μ is the fluid viscosity, L the channel length, and r the channel radius. As evidenced by the equation, resistance is inversely proportional to the fourth power of the channel radius, so as the channel diameter decreases, resistance rapidly increases. Alternatively, electrokinetics, which applies external electric fields to move the fluid through the channel, with mechanisms including electroosmosis and electrophoresis, can be used to drive the fluid flow. Thus, another important concept is the occurrence of the electric double layer (EDL), bringing about surface-charge-governed ion transport [45], which can be found in nanochannels. The changing electrical potential near the surface results in the spontaneous formation of an EDL, comprising the Stern layer and the diffuse layer, as shown in Fig. 3a. Although a bulk solution may have a neutral charge, surface charges are observed at the solid–liquid interface. In this instance, surfaces interacting with an aqueous solution gain a net surface charge. Due to electrostatic attraction, ions opposite in charge to the surface ions (i.e., counter-ions) accumulate along the charged surface to form a shielding layer called the Stern layer. Co-ions are repelled away from this layer, and free ions in the fluid form a diffuse layer. The zeta potential ζ is defined at the slip plane between the two layers (Fig. 3a). The thickness of the EDL is characterized by the Debye length, the distance over which the surface potential decays to 1/e of its original value toward the bulk. In aqueous solutions, the Debye length is found to be between 1 and 100 nm [49]. As can be observed from Fig. 3d, in microchannels the EDL is negligible, as the electrical potential decays to the bulk value and the surface charges

Fig. 3 (a) Schematic of the electric double layer, showing the accumulation of counter-ions on the charged surface, along with the diffuse layer and Debye length, LD. The bold line indicates the electrical potential profile, ψ, which becomes more significant with shrinking dimensions (Reproduced from Ref. [11] with permission of The Royal Society of Chemistry). A comparison of the surface charge effects on (b) a microchannel versus (c) a nanochannel. In a microchannel, the electrical potential is decayed to bulk value (d), unlike the nanochannel center which has potential at its center (e). Furthermore, (g) the nanochannel shows a higher counter-ion concentration compared to co-ions, unlike (f) the microchannel (Reprinted with permission from Ref. [49]. Copyright (2005) American Chemical Society)

are not significant enough to electrostatically manipulate ions. However, the effects of the EDL become more pronounced as the channel dimensions diminish closer to the Debye length, making this unique to nanochannels [9, 50]. In the case of the nanochannel, an electrical potential is present in the center (Fig. 3e). Mainly, the EDL introduces nonuniform motion and electric fields transverse to the flow. Thus, the resultant axial and transversal fluxes of electrostatic fields can be used to separate and disperse the analyte ions [51]. For example, the speed of small molecules moving through the EDL depends on their valence or molecular weight [10],


A.M. Soehartono et al.

which can be used to separate molecules electrophoretically [52]. Perhaps the most notable application of nanochannel electrophoresis is in DNA sequencing, where application of an electric field can stretch, relax, or recoil DNA molecules [53]. Let us now discuss the flow-control components within miniaturized devices. Control can be achieved through the implementation of components such as valves [54], pumps [55], separators and concentrators [56], and mixers to integrate functionality on a single device [57]. On-chip pumping is necessary for self-contained bioanalytical devices such as genomic analysis and immunoassay platforms, and complements other components within a micro total analysis system (μTAS) such as microfilters and microreservoirs [58]. Many micropumping schemes exist in the literature, and they can be classified as mechanical, like check-valve pumps and peristaltic pumps, or nonmechanical [59], such as electrochemical pumps and phase-transfer pumps. Performance is characterized by metrics such as flow rate, stability, and efficiency. More recently, an optical pump, driven by optical tweezing, was reported to transport flows of up to 200 fL/s across a microchannel [60]. The pump utilizes the rotation induced in birefringent particles by the transfer of spin angular momentum to particles trapped with circularly polarized light. To facilitate fluid movement, two vaterite particles were counter-rotated, as shown in Fig. 4c, and the study observed the movement of a 1 μm silica bead. Surface tension is another phenomenon that has been exploited: a nanofluidic bubble pump drove picoliter volumes through a channel via surface tension-directed gas injection [61]. The mechanisms for valve control include mechanical, pneumatic, and electrokinetic forces. In one pneumatically controlled valve, a dual-layer poly(dimethylsiloxane) (PDMS) substrate was used, with one layer as the control layer and the other as the flow layer.
Pressure forces the control layer down, obstructing the fluid flow path in the flow layer. By varying the pressure gradients, valve switching states can be achieved. Using microvalves, fluid flow into separate chambers can be controlled, creating a serial dilution network [62] to rapidly detect varying concentrations of analyte. While microvalves have been widely reported, nanovalves pose a technical challenge, as existing valves are of micrometer dimensions. Mawatari et al. reported a nonmechanical Laplace valve integrated into a nanochannel, operating on a wettability boundary to generate fluid droplets. The valve is a nanopillar structure formed on the bottom of a nanochannel (Fig. 4a, b-i). By modifying the surface within the channel to be hydrophobic, the nanopillars become more hydrophobic than the flat surface, enabling the valve to withstand pressures up to the Laplace pressure at the liquid surface. When the breakthrough pressure is exceeded, the valve opens and actuates the droplet (Fig. 4b-iv) [54]. In another example, polymer brushes grafted onto a nanochannel were modulated by an external electric field to gate the fluid flow within the channel [63]. Several strategies can also be used for separating and combining fluids, such as mixers. Mixers aim to mix multiple samples in a device thoroughly and rapidly [64], which can be accomplished actively or passively. Owing to the typically small Re values in microfluidic devices, mixing is effectively performed through diffusion. When two laminar streams come into contact with

Fig. 4 (a) Schematic showing the operation of a Laplace nanovalve. (b) Droplet generation and actuation through the valve, with (i) channel design, (ii) filling the channel with water while the valve is closed, (iii) application of pressure, creating a femtoliter droplet, and (iv) movement of the droplet through the opened valve at breakthrough pressure (Reprinted with permission from Ref. [54]. Copyright (2012) American Chemical Society). (c) Flow field of the fluid while the birefringent pumps are rotating; (d) the fluid speed is measured at the center of the pump (Reprinted from Ref. [60] with permission of The Royal Society of Chemistry)

each other, mixing will only occur through diffusion, resulting in a slow mixing time. The challenge for mixers is thus to attain efficient mixing rapidly. In milli- and microchannels, passive mixing can be achieved by engineering the channel geometry to increase fluid folding. For example, a serpentine mixer split and recombined flows in successive F-shaped units [65], accumulating an overall chaotic advection. Other reported mixers include T-type mixers [66, 67] and vortex mixers [68, 69]. Although nanofluidic devices are still in their infancy, several nanofluidic mixers have been reported. Accounting for the greater influence of the channel walls, mixing was realized with hybrid surfaces, alternating between hydrophilic and hydrophobic patterns on the channel walls of a Y-shaped mixer [70]. Active mixers, on the other hand, rely on external sources to increase the interfacial area, such as electrokinetic mixers [71, 72] and magnetic mixers [73].
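Several of the scaling arguments in this section (the 1/r^4 channel resistance, the nanometric Debye length, and slow diffusive mixing) can be checked numerically. The sketch below uses textbook constants; the channel dimensions and the 1 mM ionic strength are illustrative assumptions, not values from the text.

```python
import math

def hydraulic_resistance(mu, length, r):
    """Hagen-Poiseuille resistance of a cylindrical channel: R = 8*mu*L/(pi*r^4)."""
    return 8 * mu * length / (math.pi * r ** 4)

def debye_length(ionic_strength_molar, temp_k=298.15, eps_r=78.5):
    """Debye length (m) of a symmetric 1:1 aqueous electrolyte."""
    eps0 = 8.854e-12   # vacuum permittivity, F/m
    k_b = 1.381e-23    # Boltzmann constant, J/K
    e = 1.602e-19      # elementary charge, C
    n_a = 6.022e23     # Avogadro's number, 1/mol
    n = ionic_strength_molar * 1e3 * n_a   # ion number density, 1/m^3
    return math.sqrt(eps0 * eps_r * k_b * temp_k / (2 * n * e ** 2))

mu_water = 1e-3   # Pa*s
# Halving the radius of a 1 cm channel raises its resistance ~16-fold (1/r^4)
ratio = hydraulic_resistance(mu_water, 1e-2, 25e-6) / hydraulic_resistance(mu_water, 1e-2, 50e-6)
print(ratio)   # ~16

# Debye length for a 1 mM monovalent salt: ~9.6 nm, inside the 1-100 nm range
print(debye_length(1e-3) * 1e9)

# Diffusive mixing time across a channel of width w is roughly w^2 / (2*D)
w, diff = 100e-6, 1e-9   # 100 um channel; small-molecule diffusivity, m^2/s
print(w ** 2 / (2 * diff))   # seconds
```

For the 100 μm channel the estimate is several seconds of purely diffusive mixing, which is why the passive mixers described above engineer chaotic advection to fold the streams.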


Fabrication of a Miniaturized Fluidic Network

Microchannels typically consist of a substrate layer and a channel layer. Some of the most common substrates include glass, silicon, and polymers [74]. Early adopters of microfluidic chips opted for hard materials such as glass or silicon, inorganic materials typically used in photolithography [3, 75]. Glass and silicon microfluidic chips can be patterned using surface micromachining; buried-channel techniques, including deep reactive-ion etching (DRIE) and chemical vapor deposition (CVD); or bulk micromachining [76]. These materials are particularly useful for high-temperature processing, high-aspect-ratio devices (up to 20:1), or electrode integration. However, the fabrication of glass- and silicon-based microfluidic chips requires specialized equipment, hazardous chemicals such as hydrofluoric acid (HF), and clean room conditions, making it expensive and inaccessible for many researchers working on miniaturized devices. Furthermore, since glass and silicon are not gas permeable, long-term cell culturing cannot be maintained inside such chips. More recently, polymers have replaced silicon as the dominant material for microchannel fabrication due to their low cost and ease of rapid prototyping, with the most common replication methods being casting or hot embossing [77]. In particular, soft lithography is a non-lithographic microfabrication technique to replicate structures. Briefly, an elastomeric polymer is poured over a master mold, after which it is cured and peeled off [78]. Elastomeric polymers are flexible cross-linked polymer chains which can stretch or compress under force. PDMS is a notable elastomer widely used in soft lithography because of its low cost, biocompatibility, optical transparency, and gas permeability [79].
Furthermore, due to its permeability to oxygen, nitrogen, and carbon dioxide, viable cell cultures can be maintained, enabling PDMS chips to be used for long-term cell imaging applications. The master is typically created using standard lithographic techniques such as photolithography, electron beam lithography, and micromachining. Alternatively, millifluidic channel molds can be fabricated using three-dimensional (3D) printers as a low-cost option, attractive compared to clean room lithography, which is more expensive and time-consuming. However, variable results have been reported with 3D printing – as much as 40 % variation on 3D printed mold designs with submillimeter resolution. Such technology may therefore only be suitable for milli-scale or macroscale fluidics [80]. Nanofabrication, however, may not be as amenable to soft lithography, as the resolution limits of conventional photolithography may not reach the critical dimensions required for nanofluidics [81]. Whereas soft lithography has been transformative for microfluidics, nanofluidics has yet to see a comparably revolutionary fabrication technique [82], essentially limiting the complexity that can be achieved with nanometric channels. Perhaps the most important development in nanofabrication has been the introduction of nanoimprint lithography (NIL) [83], which works by mechanically deforming the resist material to achieve feature sizes down to 10 nm [84]. Chou et al. devised nanochannels fabricated by NIL [85]. Briefly, a mold with the designed pattern is produced by electron beam


lithography (EBL) followed by reactive-ion etching (RIE). The mold is then pressed against a substrate coated with a resist layer, such as poly(methyl methacrylate) (PMMA). During the imprint step, the resist is heated above its glass transition temperature. The mold is then removed, and the pattern in the resist layer is fully developed by RIE with oxygen. Other reported nanolithography techniques include focused ion-beam lithography, interferometric lithography, and sphere lithography [81].

Biophotonic Applications with Miniaturized Devices

Nucleic Acid Optical Mapping

Fluidic devices with nanochannels have been used to stretch single DNA molecules for length measurements and optical mapping. Advances in the NIL technique enable the fabrication of nanochannels with cross sections smaller than the persistence length of DNA. As a result, labeled single DNA molecules can be stretched and imaged in nanochannels [86]. This is a powerful tool for studying DNA because it isolates a single molecule from the bulk by limiting the molecule's degrees of freedom to 1D; imaging and analysis of DNA that is not immobilized to a surface thus become possible [87]. Generally, DNA stretching is achieved in nanochannels by two methods. In the first, DNA molecules flowing through a funnel-like nanochannel are spontaneously stretched by the gradually increasing velocity, an effect known as elongational flow [88] (Fig. 5a). In the second, the motion of DNA molecules is controlled by entropic confinement [89, 90]. For example, in one report, DNA stretching was achieved by electrophoretically driving long DNA molecules into 45 × 45 nm nanochannels [91] (Fig. 5b). The spontaneous stretching of DNA molecules in the confinement of a nanochannel results from the self-repulsion between negatively charged phosphate groups on the DNA backbone [87]. Three main techniques have been utilized to perform optical mapping on stretched DNA: restriction mapping, denaturation mapping, and sequence-specific tagging [92]. In conventional restriction mapping, dsDNA is first cut into small fragments by restriction enzymes, and the fragments are separated by electrophoresis. The length of each DNA fragment is determined by the distance it travels during electrophoresis, after which a restriction map of the DNA can be generated. However, this method does not work well with long DNA molecules and, furthermore, cannot perform single-cell mapping.
These problems can be conveniently solved when restriction cleavage is performed on a stretched single DNA molecule in a nanochannel. Due to the confinement of the nanochannel, the DNA fragments formed after restriction cleavage are retained at their original positions. They can be fluorescently labeled and imaged by microscopy [93] (Fig. 6a). Moreover, the electrophoretic separation of DNA fragments is no longer necessary, as fragment lengths can be determined either by direct length measurement from microscopy or by measuring the fluorescence intensity signal of each fragment [92]. This method is thus promising for high-throughput processing. Additionally, the capability of


Fig. 5 Stretching of single DNA molecules in nanochannels. (a) SEM image of a funnel-like nanochannel for generation of elongational flow to drive DNA molecules into the nanochannel for DNA stretching. Scale bar is 1 μm (Reprinted with permission from Ref. [88]. Copyright (2010) American Chemical Society). (b) Schematic and fluorescence image of DNA stretching by electrophoretically driving DNA molecules into the nanochannel (Reprinted by permission from Macmillan Publishers Ltd: Ref. [91])

performing single-cell mapping is important for studying genomic heterogeneity within the same cell type; consequently, differences in disease progression and drug response can be monitored. For example, Gupta et al. [94] applied this optical mapping technique to study structural variation in the multiple myeloma genome. Denaturation mapping is a technique utilizing the differences in local melting temperatures along the length of a long dsDNA molecule to generate a unique barcode for that DNA. On a DNA molecule, AT-rich regions tend to have lower double-helix stability than GC-rich regions, meaning that AT-rich regions begin melting at lower temperatures. Reisner et al. [95] used this technique to generate a grayscale barcode with brighter and darker regions along the length of stretched DNA molecules. Briefly, dsDNA molecules are stained with an intercalating dye and stretched in a nanochannel. When heated, the DNA partially melts according to its base pair sequence, and the intercalating dye at the melted regions diffuses away to leave darker regions, while the unmelted regions remain brighter (Fig. 6b). One advantage of this method is the relatively simple procedure (staining and heating); no enzymatic pretreatments of the DNA are required. Nyberg et al. [96] used an antibiotic with high affinity for AT-rich regions to prevent the binding of intercalating dye there, thus creating a fluorescence map of AT- versus GC-rich regions on the DNA. For sequence-specific tagging, DNA molecules first undergo enzymatic pretreatment before being stretched in a nanochannel. Briefly, nicking enzymes are applied to create single-stranded nicks on dsDNA molecules in a sequence-specific manner. Then fluorescent nucleotides are incorporated into the nicking site by DNA


Fig. 6 Three optical mapping techniques for single DNA analysis. (a) Restriction mapping: Stretched DNA molecules are cut into fragments by restriction enzymes, and the locations of the restriction sites appear as holes (indicated by white arrows in the diagram) along the length of the DNA molecules (Reprinted from Ref. [98], under CC BY 2.0). (b) Denaturation mapping: Stretched DNA molecules partially melt when treated with heat. The melted regions appear darker, while the unmelted regions remain brighter, generating a unique barcode for that DNA molecule. Scale bar is 10 μm (Reprinted by permission from Macmillan Publishers Ltd: Ref. [99]). (c) Sequence-specific tagging: Sequence-specific nicks are created on DNA samples, after which the nicked sites are refilled with fluorescent nucleotides. These labeled DNA molecules are then stretched in the nanochannel and imaged under the microscope (Reprinted by permission from Macmillan Publishers Ltd: Ref. [91])

polymerase. Finally, the treated DNA molecules are stretched in a nanochannel and the fluorescent image can be observed [90, 91] (Fig. 6c). The distance between occurrences of the specific sequence can be measured directly from fluorescence imaging to construct an optical map. The position of the nicking site along the DNA can also be imaged by other mechanisms, such as detecting the fluorescence resonance energy transfer (FRET) between the intercalating dye labeling the DNA and the acceptor dye labeling the nicking sites [97].
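As a minimal illustration of how such an optical map is read out, the sketch below converts hypothetical label positions measured along one stretched molecule into inter-label distances in kilobase pairs. The 0.34 nm/bp rise is the standard B-DNA value; the 85 % fractional extension and the positions themselves are assumptions for illustration, not data from the cited works.

```python
# Hypothetical fluorescent label positions (um) along one stretched molecule
positions_um = [2.1, 7.8, 15.3, 24.9, 31.0]

CONTOUR_NM_PER_BP = 0.34   # rise per base pair of B-form DNA
STRETCH = 0.85             # assumed fractional extension in the nanochannel

def to_kbp(length_um):
    """Convert an observed length to kilobase pairs under the assumed stretch."""
    return length_um * 1e3 / (CONTOUR_NM_PER_BP * STRETCH) / 1e3

# The optical map (barcode) is the list of consecutive inter-label distances
barcode = [round(to_kbp(b - a), 1) for a, b in zip(positions_um, positions_um[1:])]
print(barcode)
```

The resulting list of distances is the barcode that can be aligned against a reference map, in the same way the restriction and nick maps described above are compared.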

Bioanalysis

The development of millifluidic-based bioanalytical devices, often categorized as LoC, offers a highly efficient platform for cell manipulation, drug

Fig. 7 Design of the fluidic device: (a) schematic diagram of the fluidic device and (b) actual top-view image of the fabricated device with micrographs of the embryo and larval fish in the chamber (Reprinted from Ref. [101], under CC BY 4.0)

screening, in vivo tumor mimicking, and others [100–103]. Miniaturization efforts in biological analysis encourage rapid analysis and the simultaneous operation of multiple assays, requiring low volumes of sample and reagent, which in turn leads to less waste. Regardless of the fabrication method, demand for milli-scale fluidic devices remains exceptionally high, particularly in bioanalytical research. Li et al. developed a microfluidic device to assess the toxicity and teratogenic effects of the antiasthmatic agent aminophylline (Apl) on zebrafish embryos and larvae [101]. The work also demonstrated in situ analysis of the model organism, assessing embryo survival rate, body length, and hatching rate. The fluidic device comprises two layers: a PDMS top layer bonded onto a glass-slide bottom layer. Fabrication was done by scribing the culturing chambers and fluidic channel patterns on a copper-based mold for molding a PDMS replica. A single fabricated chip composed of two units allowed simultaneous analyses, where one unit was used for toxicity and teratogenicity experiments and the other for larvae experiments (Fig. 7). Each unit consists of a concentration gradient generator (CGG) with a sigmoidal distribution pattern to split and mix the drug and media, producing a range of drug concentrations that flow into the culturing chambers (C1–C7). With this feature, rapid drug-screening analysis at the single-organism level can easily be performed. While many different methods exist to fabricate a fluidic-based device, it is of interest to find out how device performance is influenced by the choice of fabrication techniques and materials. A study by Zhu et al.
used two different fabrication techniques, namely Multi-Jet Modeling (MJM) and stereolithography (SLA), with several types of UV-curable resin to investigate the quality and performance of 3D printed millifluidic devices for in situ analysis of zebrafish embryos. In this work, the fluidic device features a serpentine channel for loading living embryos and medium into an array of traps, using hydrodynamic force via small suction channels, as seen in Fig. 8.
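The concentration gradient generator in the Li et al. device uses a sigmoidal profile; as a simplified illustration of the splitting-and-mixing idea behind any CGG, the sketch below computes outlet concentrations for an idealized linear generator with seven chambers. The linear profile and the stock concentration are assumptions, not values from Ref. [101].

```python
def cgg_outlet_fractions(n_outlets):
    """Drug fraction at each outlet of an idealized linear gradient generator.

    Assumes perfect 1:1 flow splitting and complete diffusive mixing at every
    junction, which yields a linear profile: outlet i carries i/(n-1) drug.
    """
    return [i / (n_outlets - 1) for i in range(n_outlets)]

C_STOCK = 50.0   # hypothetical stock drug concentration, uM
for chamber, frac in enumerate(cgg_outlet_fractions(7), start=1):
    print(f"C{chamber}: {frac * C_STOCK:.1f} uM")
```

A single loading step thus exposes each culturing chamber to a different, known concentration, which is what makes on-chip dose-response screening at the single-organism level possible.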

Fig. 8 Schematic of fluidic chip (a) 2D CAD drawing showing miniaturized traps (T ) positioned in array of rows (R) (b) 3D CAD drawing (c) Actual fabricated device using SLA technique (Reproduced from Ref. [100] with permission of AIP Publishing)

Several tests were conducted in the study, including the biocompatibility of the 3D printed microenvironment for long-term monitoring of bioassays on living embryos and the efficiency of trapping embryos in the traps. Based on the results, signs of toxicity were not observed for most of the tested resins at 72 h of incubation. As with the other tested chips, embryos placed within the chip fabricated using MJM showed no sign of mortality after 48 h of incubation. However, sudden embryo mortality of more than 75 % was observed at 72 h of incubation. This observation suggests that the toxicity responses of embryos toward water-soluble photo-curable polymer leachates are cumulative, with signs of toxicity appearing only after a certain period of time. On top of this, the fabricated devices demonstrated their capability for fluorescence imaging of zebrafish embryos, although the quality of the captured images is strongly influenced by the choice of resin. For instance, PDMS-fabricated chips are capable of producing high-resolution stacked confocal images for a high-definition view of the embryo specimens. In contrast, some resins were not able to give a clear fluorescence image, as seen in Fig. 9. Further details of the performance of chips fabricated using other resin types are given in the paper [101]. Overall, this study showed that millifluidic devices are suitable for the analysis of biological specimens.

Fig. 9 Optical transparency of the printed substrate for fluorescence imaging. (a) Fluorescence imaging of zebrafish embryo that was immobilized inside the 3D printed fluidic-based device. (b) High-resolution stacked confocal imaging of zebrafish embryo on devices fabricated using PDMS and soft lithography (left) and stereolithography in VisiJet SL clear resin (right) (Reproduced from Ref. [100] with permission of AIP Publishing)

Apart from this, investigations focusing on millifluidic-based droplet analyzer devices have been actively carried out in recent years. Millifluidic technologies are particularly appealing for droplet-based analyzer applications such as microbial studies [102, 103]. Baraban et al. developed a millifluidic droplet analyzer (MDA) to monitor the activities of Escherichia coli, including its growth rate and its resistance to the antibiotic cefotaxime. Referring to Fig. 10, the MDA successfully encapsulated bacteria into droplets with a predefined culture medium (e.g., nutrients and antibiotics) while relaying this droplet train to an attached detector block. The detector block, which comprises a light source (i.e., a mercury lamp) and a photomultiplier tube (PMT), means the MDA is not limited to generating large numbers of isolated droplets containing bacterial strains in a predefined microenvironment but also enables analysis of the minimal inhibitory concentration (MIC) of cefotaxime. The galK locus in the bacterial chromosome was pre-inserted with yellow fluorescent protein (YFP) to correlate the fluorescence signal with the number of cells within the droplets; thus, the MIC of cefotaxime for the growth of Escherichia coli could be monitored over a time span of 90 h.
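The MIC readout of such a droplet analyzer can be caricatured as a threshold decision on per-droplet fluorescence. The sketch below is illustrative only: the concentrations, readings, and background level are invented, and the simple twofold-above-blank criterion stands in for the actual analysis in Ref. [102].

```python
# Invented end-point fluorescence readings (a.u.) for droplet groups incubated
# at different antibiotic concentrations (ug/mL); three droplets per group
readings = {
    0.0:  [9.1, 8.7, 9.4],
    0.05: [8.2, 7.9, 8.5],
    0.1:  [3.1, 2.8, 3.3],
    0.2:  [0.4, 0.5, 0.3],
    0.4:  [0.3, 0.4, 0.2],
}
BLANK = 0.5   # assumed signal of sterile droplets

def mic(readings, blank, factor=2.0):
    """Lowest concentration whose mean droplet signal stays below factor*blank."""
    for conc in sorted(readings):
        mean = sum(readings[conc]) / len(readings[conc])
        if mean <= factor * blank:
            return conc
    return None   # growth observed at every tested concentration

print(mic(readings, BLANK))   # -> 0.2
```

Because each droplet is an isolated microculture, thousands of such dose-response points can be collected from a single droplet train.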

Fig. 10 (a) Schematic of the millifluidic-based droplet analyzer, where antibiotics, nutrients, and bacteria are injected and mixed at Cross A; water-in-oil droplets are formed at Cross B using HFE oil. Mineral oil droplets are used as spacers to separate individual droplets. (b) The detector block measures the growth of bacteria over time (Reproduced from Ref. [102] with permission of The Royal Society of Chemistry)

Fig. 11 Millifluidic devices for the analysis of microalgae. (a) Generation of uniform droplets containing microalgae and spatially separated by air spacers. (b) Detection block to measure chlorophyll fluorescence emission intensities inside each droplet. (c) Transparent FEP tube wrapped in coil (Reprinted from Ref. [14], under CC BY 4.0)

On the other hand, Damodaran et al. of the same research group demonstrated an improved version of the MDA capable of monitoring the growth kinetics of microalgae for up to 140 h while conserving their viability and metabolism [14]. The working principle does not deviate much from the previous design; however, newly developed key features include the injection of air spacers (replacing the mineral oil spacers) to separate adjacent algal droplets (Fig. 11). The air spacer not only reduces the chance of algal cross-contamination but also extends the stability and life span of the droplets. Moreover, complete isolation of the millidroplets from each other also enables precise monitoring of growth kinetics and the size of


individual droplets. Similar to the work presented by Baraban et al., a fluorescence readout was used to measure the chlorophyll fluorescence intensity of each drop to estimate changes in the number of algal cells. Finally, an additional droplet-sorter module was included in the new design to enable the collection of single healthy algal droplets for further experimentation, such as MTT and MIC assay screening.
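Extracting growth kinetics from such a fluorescence readout amounts to fitting an exponential to a per-droplet time series. The sketch below estimates the specific growth rate by ordinary least squares on the log-transformed signal; the sampled values and the assumption that fluorescence is proportional to cell number are illustrative, not data from Ref. [14].

```python
import math

# Hypothetical chlorophyll fluorescence (a.u.) of one algal droplet, sampled
# every 12 h; assumes the signal is proportional to the number of cells
t_h = [0, 12, 24, 36, 48, 60]
f_au = [1.0, 1.9, 3.6, 7.1, 13.8, 27.0]

# Least-squares slope of ln(f) against t gives the specific growth rate mu (1/h)
y = [math.log(v) for v in f_au]
t_bar = sum(t_h) / len(t_h)
y_bar = sum(y) / len(y)
mu = sum((t - t_bar) * (yi - y_bar) for t, yi in zip(t_h, y)) \
     / sum((t - t_bar) ** 2 for t in t_h)
doubling_h = math.log(2) / mu
print(mu, doubling_h)   # growth rate in 1/h and the implied doubling time
```

Repeating this fit for every droplet in the train yields the per-droplet growth curves that the sorter module can then act on.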

Flow Cytometry

Flow cytometry is a powerful cell analysis tool for cytology, molecular biology, and clinical applications [19, 104–107]. It works by passing a series of single cells through a focused laser beam at a rate of thousands of cells per second. Optical signals from the system, such as fluorescence, forward scatter, and side scatter, can be detected and translated into morphological information such as cell size, count, and grouping. In clinical diagnosis, flow cytometers are the gold standard in HIV diagnostics, used to count CD4+ T lymphocytes [106, 108, 109]. However, conventional flow cytometers are extremely expensive, complex, difficult to operate, and bulky, impeding widespread usage. Microfluidics provides a promising route to low-cost, portable, and easy-to-use flow cytometers [4, 107, 110]. State-of-the-art microfluidic systems enable testing with single cells, which naturally fits the requirements of flow cytometry. A number of microfluidic flow cytometers have been reported [106–113]. In comparison to conventional instruments, microfluidic flow cytometers are much smaller, with cells flowing through on-chip microchannels of micrometer dimensions. Illumination and optical signal detection are conducted through off-chip optics or on-chip optofluidic setups (Figs. 12a, b, and 13c). Microfluidic flow cytometry has several unique advantages, such as the capability of processing small-volume (100 nL–100 μL) and low-density samples [104]. Furthermore, the ability to integrate more sophisticated sample-manipulating functions in a microfluidic device enables the screening, capturing, and testing of rare cells in a sample. For example, circulating tumor cells (CTCs) carry information on cancer metastasis, such as their viability, concentration, and phenotype, but are exceedingly rare in a cancer patient's blood (1–100 CTCs per 10⁹ blood cells) [115].
Researchers have designed microfluidic interfaces for the enrichment of CTCs from blood by removing other blood cells to allow subsequent flow cytometry processing. Reported CTC enrichment methods include affinity-based capture using antibodies [116], deformability-based capture using microstructures [117, 118], and hydrodynamic separation, such as by inertial lift forces in a spiral channel [119, 120]. Besides CTCs, on-chip capturing and imaging of lymphomas from cerebrospinal fluid has been reported by Turetsky et al. as a new method to diagnose and characterize central nervous system (CNS) lymphoma [121]. In conventional flow cytometry, a surrounding sheath flow has to be added to the cell solution in order to achieve focused single-file passage of cells, so that cells pass through the focused laser beam individually [107]. The surrounding sheath flow is achieved with a coaxial injection flow chamber, which is complex and expensive. It is


Fig. 12 Microfluidic flow cytometry. (a) Photograph of a microfluidic flow cytometer with fluidic networks and an integrated optofluidic setup. (b) Microscope image showing the hydrodynamic focusing of fluorescent polystyrene beads (green) and the alignment of optical fibers (125 μm in outer diameter). The FL fiber detects fluorescent light, the SSC fiber detects side scatter light, and the FSC fiber detects forward scatter light. (c) Schematic of a microfluidic flow cytometer with hydrodynamic cell focusing. A, B, C, and D are inputs for sheath flow channels. Cell distribution in cross-sectional planes is shown in insets 1, 2, 3, and 4 (Reprinted from Ref. [114] with permission of AIP Publishing)

important to note that adding a sheath flow significantly reduces the sample concentration, which is undesirable when sample/reagent volumes are limited [104]. In microfluidic flow cytometry, novel approaches to cell focusing have been created. By defining the microfluidic network, horizontal focusing sheath flows can be added for cell focusing without the need for an external flow chamber (Fig. 12b, c) [107, 122]. For applications that require focusing cells in the vertical direction, microfluidic devices provide a simple yet useful method utilizing the Dean vortex in curved channels [107, 123–125], in which cells are focused into a specific cross-sectional plane (Fig. 12c). Combining the two techniques achieves 3D hydrodynamic cell focusing equivalent to a coaxial sheath flow. All sheath flow channels, along with the other microfluidic structures, can be easily fabricated by soft lithography. Sheathless microfluidic focusing techniques have also been reported, such as bulk acoustic standing waves (BAW) [126], standing surface acoustic waves (SSAW) [127], dielectrophoresis



Fig. 13 CD4+ cells were imaged via lens and CMOS on a smartphone. (a–c) Diagrams of optofluidic designs on the cell phone and cell imaging principle. (d) Photograph of the actual platform (Reprinted with permission from Ref. [106]. Copyright (2011) American Chemical Society)

(DEP) [128], and lateral flow displacement, which uses microstructures to induce flow path changes [129]. However, these techniques are difficult to incorporate into flow cytometry, as they would increase system complexity. Conversely, combining several focusing techniques can compensate for the limitations of a single method. For example, hydrodynamic focusing can be incorporated with DEP and SSAW devices to reduce the orthogonal dispersion effect [107]. So far, many microfluidic flow cytometers are based on hydrodynamic cell focusing. Optical detection is another key function in flow cytometry. Most microfluidic flow cytometers have integrated components such as lasers, optical fibers, filters, PMTs, and oscilloscopes to mimic the illumination, waveguiding, photodetection, and signal processing of conventional systems [107, 122]. In this respect, however, the miniaturization and integration of the optical system still have room for improvement. Recent demonstrations of on-chip CD4+ T-cell counting using microfluidic flow cytometers provide other routes to optical system integration. Moon et al. [110] and Ozcan et al. [130] presented a lensless shadow imaging technique to observe cells and acquire images with CCD/CMOS detectors integrated with the microfluidics. High-resolution cell imaging, labeled microscope-on-a-chip, was demonstrated by Cui et al. [131]. Indeed, with the development of built-in cameras on smartphones and tablets, there is potential in using such compact imaging

Miniaturized Fluidic Devices and Their Biophotonic Applications

devices in flow cytometry. Tseng et al. presented a lens-free microscopy technique installed on a cell phone [132]. Zhu et al. also designed a smartphone imaging system for microfluidic chips (Fig. 13) [106]. Furthermore, image processing and result analysis were accomplished through software running on the smartphone operating system (Google's Android). It is also worth noting that, since cell phones are connected to the Internet, the results can conveniently be reviewed by doctors or technicians anywhere in the world. By integrating on smartphones the functions of common equipment such as the optical microscope and CCD/CMOS sensors, reliable, highly accurate cell counting results can be acquired without the need for specialized hospital or laboratory equipment.
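The on-phone analysis step amounts to little more than thresholding a fluorescence frame and counting connected bright regions. The sketch below illustrates that idea on a synthetic frame; the function name, threshold, and image values are illustrative assumptions, not taken from Refs. [106] or [132]:

```python
import numpy as np
from scipy import ndimage

def count_cells(image, threshold):
    """Count bright fluorescent spots: threshold the frame, then count
    connected components in the binary mask."""
    mask = image > threshold
    _labels, num_spots = ndimage.label(mask)
    return num_spots

# Synthetic stand-in for a CMOS frame: Gaussian background noise plus
# three bright "cells" (all values are illustrative).
rng = np.random.default_rng(0)
frame = rng.normal(10.0, 2.0, size=(100, 100))
yy, xx = np.ogrid[:100, :100]
for cy, cx in [(20, 20), (50, 70), (80, 40)]:
    frame[(yy - cy) ** 2 + (xx - cx) ** 2 <= 16] += 100.0  # disk of radius 4 px

print(count_cells(frame, threshold=60.0))  # prints 3
```

Real pipelines add flat-field correction, size gating, and splitting of touching cells, but the counting core is this simple.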

Plasmonic Biosensors

Immunoassays are routinely used in the clinical setting for disease diagnosis and therapeutic drug monitoring. However, these tests are labor-intensive, slow (with long incubation times due to inefficient mass transport), and dependent on expensive reagents and immunoagents [133]. Miniaturized biosensors are a platform that can eliminate these problems by providing portability, high throughput, reduced reagent usage, automation, and low sample volumes. This is achieved through the integration of a microfluidic architecture with biosensing components. Biosensing is the analytical interrogation of biological samples by recording the response of a biological receptor upon analyte binding. A biosensor's main elements comprise a transducer (mechanical, electrical, or optical) and a biological recognition element to capture an analyte. Analyte binding to the recognition element induces a measurable change in the signal, through variations in refractive index, intensity, or interference patterns; these correspond to optical properties of the sample such as absorption, scattering, and reflectance. An ideal biosensor has high specificity, sensitivity, and selectivity, in addition to multiplexing compatibility, real-time detection, low-cost fabrication, and portability. Detection sensitivity is particularly important in the miniaturization process, as the detection volume scales with the device, and many works are devoted to improving this parameter. SPR has been widely used in the development of label-free biosensors. The excitation of surface plasmon polaritons (SPPs) forms an exponentially decaying evanescent wave of nanometer range that is highly sensitive to the surrounding medium: a change in the refractive index of the medium changes the characteristics of the incident light [134].
This is the basis of SPR biosensing, in which analyte binding to a recognition element changes the resonance coupling between the incident light and the surface plasmon wave. SPR measurements can reveal the specificity, affinity, and kinetics of a biomolecular interaction, as well as the analyte concentration [135]. Transduction is actuated by a thin metal film, usually gold for its chemical stability and free-electron behavior, functionalized with a biological recognition element [136]. The typical interrogation setup implements the prism-coupled Kretschmann configuration, in which the metal thin film is excited by an incident

A.M. Soehartono et al.

source, usually a laser, through a prism, and the reflected light is collected at a detector. The evanescent field is limited to approximately 300 nm from the dielectric interface, making SPR sensing well suited to fluidic miniaturization. Additionally, the complete confinement of the liquid creates a perfectly conformal dielectric environment [137]. SPR imaging permits multiplexed, real-time detection by monitoring intensity or phase changes over a large surface area. Parallel channel or waveguide arrangements allow the interrogation of multiple analytes, reducing analysis time; this is useful in molecular analysis and diagnostic applications such as immunoassays. Ouellet et al. reported an array of 264 individually addressable patterned gold film elements with a serial dilution network for binding kinetics measurements. Compared with a conventional multiwell-plate enzyme-linked immunosorbent assay (ELISA), detection and quantitative analysis could be completed in 10 min, as opposed to at least 60 min with the ELISA method [138] (Fig. 14a). Luo et al. performed a direct immunoassay for the detection of biotin–bovine serum albumin (BSA), analyzed on an array of gold spots under a network of microfluidic circuitry with a control layer and a flow layer, and found a limit of detection (LOD) as low as 0.21 nM. By implementing a sandwich assay using a gold nanoparticle-labeled antibody, the LOD was improved to 38 pM. In another study, a microfluidic device with built-in temperature regulation tracked the two-dimensional spatial phase variation of rabbit immunoglobulin G (IgG) adsorbing onto an anti-rabbit IgG-functionalized film [40]. The analyte was delivered to the detection area through a series of microvalves and pumps. Instead of thin films, localized surface plasmon resonance (LSPR) biosensors are based on the excitation of metallic nanostructures, whose resonance properties depend on shape, size, interparticle distance, and the refractive index of the dielectric medium [139].
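The prism-coupled resonance condition described above can be made concrete with a short numerical sketch: the coupling angle is where the in-plane wavevector in the prism matches the SPP wavevector. The permittivity and index values below are illustrative round numbers (gold near 633 nm, water, a BK7-like prism), not figures from the cited works:

```python
import numpy as np

def kretschmann_angle_deg(eps_metal, n_medium, n_prism):
    """Incidence angle at which the in-plane wavevector in the prism
    matches the surface plasmon polariton (SPP) wavevector."""
    eps_d = n_medium ** 2
    n_spp = np.sqrt(eps_metal * eps_d / (eps_metal + eps_d))  # effective SPP index
    return float(np.degrees(np.arcsin(np.real(n_spp) / n_prism)))

# Illustrative round numbers: gold permittivity near 633 nm, water medium,
# BK7-like prism (n = 1.515). Not taken from the cited references.
theta_water = kretschmann_angle_deg(-11.6 + 1.2j, 1.330, 1.515)
theta_shifted = kretschmann_angle_deg(-11.6 + 1.2j, 1.335, 1.515)  # medium index +0.005
print(f"resonance angle: {theta_water:.2f} deg; "
      f"shift for dn = 0.005: {theta_shifted - theta_water:.2f} deg")
```

Angular interrogation tracks this resonance minimum; the angular shift per refractive index unit sets the sensor's bulk sensitivity.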
In LSPR, the shorter evanescent decay length (~100 nm) compared with SPR means the sensing volume is reduced. Several structures have been reported, ranging from nanospheres [140] and nanorods [141] to nanorings [142] and nanoholes, with patterning achievable through self-assembly, electron-beam lithography, and, more recently, NIL. In one example, arrayed gold nanodisks were fabricated on glass by nanoimprint lithography to detect prostate-specific antigen (PSA) in a sandwich immunoassay; wavelength-shift amplification by the gold nanodisks resulted in femtomolar detection [143]. In another example, a multiplex immunoassay was developed to detect six different cytokines simultaneously using barcode channel arrays comprising immobilized gold nanorods [144]. The arrays were patterned using a simple PDMS channel; gold nanorods have been shown to have superior sensitivity over their nanosphere counterparts. Six cytokine concentrations, quantified via the scattering intensity, were measured over 480 sensing spots (Fig. 14b) with sensitivities between 5 and 20 pg/mL from a limited sample volume (1 μL). This assay was used to monitor the inflammatory response of infants after cardiopulmonary bypass surgery. More recently, nanowells and nanoholes have been shown to be attractive sensing platforms that can host LSPR modes, with in-hole sensing of proteins in the attomolar range while confining fluids within their volumes [145–147]. Their relative ease of fabrication through sacrificial array layers or imprinting makes them an attractive


Fig. 14 (a) Microfluidic architecture of an SPR-microfluidic chip (Reproduced from Ref. [138] with permission of The Royal Society of Chemistry). (b) Fabrication of an LSPR multiplex cytokine immunoassay, accompanied by a histogram showing interparticle distance of the nanorods, and the principle of LSPR detection (Reprinted with permission from Ref. [144]. Copyright (2015) American Chemical Society)

Fig. 15 Schematic of (a) a flow-over nanohole array and (b) a flow-through nanohole array. Flow-through operation is initiated by blocking one inlet/outlet each at the top and bottom layers (Reprinted from Ref. [150] with permission of AIP Publishing). (c) Schematic configuration and (d) photograph of the planar optical waveguide (Reprinted from Ref. [151]. Copyright (2005), with permission from Elsevier)

platform for high-sensitivity detection. The earliest reported nanohole SPR sensors comprised holes with dead-end bottoms, over which analytes were flowed, yielding sensitivities of 400 nm per refractive index unit (RIU) [148]. However, it was later found that the innermost parts of the holes had the highest levels of transduction, with sensitivities of 650 nm/RIU [146], prompting research into efficiently transporting analytes into the holes. Flow-through nanohole array sensors permit the targeted delivery of analytes by using extraordinary optical transmission (EOT) as the basis of detection. EOT is a phenomenon in which SPP excitation results in transmission of the optical wave through the holes [149], which can be captured with a collinear detector. Yanik et al. demonstrated a lift-off-free fabrication process using EBL and RIE to pattern a nanohole array suspended between two fluidic chambers. The fluidic flow-through ensures analytes are bound to the holes and is achieved by blocking one of the two inlets/outlets at the top and bottom layers, steering the flow perpendicularly. Both the flow-over (diffusive flow) and flow-through (targeted delivery) methods are shown in Fig. 15. In a comparative study, a 14-fold increase in mass transport rate constants was seen, jumping from 0.0158 min⁻¹ in a flow-over scheme to 0.2193 min⁻¹ in a flow-through scheme [150], with refractive index sensitivities of 630 nm/RIU reported.

As mentioned before, decreasing dimensions also allow multifunctional operation, including the combination of multiple fluidic regimes. To improve sensitivity at ultralow concentrations, Wang et al. electrokinetically preconcentrated the target molecules prior to analyte binding in a bead-based immunoassay, using a nanofluidic preconcentrator integrated within a microfluidic channel [152]. Varying the preconcentration time enhanced the immunoassay sensitivity by more than 500-fold, from 50 pM to the sub-100-fM range. Furthermore, coupled with advances in optical technology, monolithic optoelectronic integration could remove some of the bulky instrumentation needed. Optical fiber delivery of the excitation illumination may present challenges in integration with planar fluidic devices; multimode optical waveguides can be used to this end. As mentioned earlier, SPR sensing systems comprise several components: the source, prism, transducer, and detector. In an example of a fully integrated miniaturized device, an SPR sensor using a planar waveguide with two light-emitting diodes (LEDs) and a photodetector was reported. The dual-LED configuration detects the differential intensity shift at two wavelengths to increase the detection sensitivity [151].
An aperture focuses the LED illumination, with alternating illumination patterns controlled through a microcomputer. The light passes through a prism sheet and polarizer before being confined in the Pyrex waveguide encasing a gold thin-film transducer, and a photodetector records the output light. The schematic and actual device are shown in Fig. 15c, d. Without needing a laser or spectrograph, this device presents a low-cost, portable alternative to conventional SPR systems. Waveguides also permit interferometric interrogation. In a Mach–Zehnder interferometric waveguide [153], the light path is split into two arms: a reference arm and a sensing arm. The sensing arm contains a sensing area with recognition elements and is where the evanescent-field interactions with the medium occur. The paths are subsequently recombined and the optical signal is detected as an interference pattern. The principle of operation relies on a refractive index change in the medium, which changes the effective propagation index of the guided modes and results in a phase shift observed in the interference pattern. Integrated with SU-8 microfluidics, the device had a detection limit of 6 × 10⁻⁴ RIU.
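The phase shift behind a Mach–Zehnder readout follows directly from the effective-index change and the sensing-arm length, Δφ = 2πΔN·L/λ. A minimal sketch with illustrative numbers (the wavelength and arm length are assumptions, not values from Ref. [153]):

```python
import math

def mzi_phase_shift(delta_n_eff, arm_length_m, wavelength_m):
    """Phase accumulated in the sensing arm of a Mach-Zehnder
    interferometer for an effective-index change delta_n_eff."""
    return 2.0 * math.pi * delta_n_eff * arm_length_m / wavelength_m

# Illustrative numbers (assumed, not from Ref. [153]): 633 nm source,
# 5 mm sensing arm, and an index change equal to the quoted 6e-4 RIU limit.
dphi = mzi_phase_shift(6e-4, 5e-3, 633e-9)
print(f"phase shift: {dphi:.1f} rad ({dphi / (2 * math.pi):.2f} fringes)")
```

Because Δφ grows linearly with arm length, longer sensing arms trade device footprint for sensitivity.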

Nanoparticle Synthesis

Nanoparticles are materials on the order of 1–100 nm in size and have been shown to have various biomedical applications. For example, nanoparticles have been used as drug delivery vehicles for the treatment of chronic diseases such as cancer, and as imaging probes for the visualization of tumors and the marking of various cells. Increasing miniaturization of laboratory systems and devices has led to the integration of functional components in miniaturized devices known as lab-on-a-chip (LoC) devices. For the nanosynthesis of materials for biomedical applications in particular, LoC systems possess many characteristics favorable for nanoparticle synthesis. For instance, LoC devices require only small sample amounts, have quick reaction times, and, moreover, provide unique, controlled microenvironments that are crucial for nanosynthesis. Such microenvironments are paramount for controlling key nanoparticle characteristics such as morphology, size, and surface chemistry. Traditionally, in conventional benchtop (BT) nanoparticle synthesis, reactants are added to a three-neck flask. For synthesis protocols in which an inert environment is necessary to prevent oxidation of the reactant species, nitrogen (N2) or argon (Ar) gas is fed into the flask. A magnetic stirrer bar mixes the reactants by physical agitation. The reaction mixture can be heated directly on a hot plate or heating mantle, or by immersion in a water or oil bath, with a temperature probe to regulate the temperature of the reaction mixture. For nanoparticle synthesis in miniature channels, in contrast, the reaction mixtures are fed into the inlets of the chip through tubes connected to syringe pumps, and the channels serve as the reaction vessel as the fluid traverses the chip. There are two main categories of fluid flow in the microchannels: continuous (single-phase) flow and segmented flow. The continuous flow scheme involves a single uninterrupted fluid stream consisting of the reactant solutions (Fig. 16a).
Mixing in the continuous flow scheme occurs via passive diffusion between the laminar reaction streams. The segmented flow scheme can be further subdivided into gas–liquid and liquid–liquid segmented flow. In contrast to a single fluid stream, gas–liquid segmented flow has the reaction mixtures trapped in liquid slugs separated by gas bubbles (Fig. 16c), while in liquid–liquid segmented flow the reaction mixture forms droplets carried by an immiscible carrier fluid (Fig. 16b). The segmented flow scheme promotes rapid mixing within the discrete droplets in the otherwise laminar flow of the microchannels and also prevents channel fouling, as contact between the reactant droplets and the channel walls is minimized compared with the continuous flow regime. The process of nanoparticle formation is best explained using the LaMer plot [156], illustrated in Fig. 17, which divides the process into three phases. In the first phase (phase I), the monomer concentration of the precursors increases and exceeds the supersaturation level (S). To form highly monodispersed nanoparticles, it is essential that no seeds are present; otherwise, heterogeneous nucleation will occur, resulting in particles with a range of sizes. At this point, due to the high activation energy of the homogeneous nucleation process, no particle formation occurs yet, and the monomer concentration continues to build up. Then, in

Fig. 16 (a) Continuous flow, (b) liquid–liquid segmented flow, and (c) liquid–gas fluid slug segmented flow (Reprinted with permission from Ref. [154]. Copyright 2006 John Wiley & Sons, Inc.)

Fig. 17 LaMer plot of nanoparticle formation, showing the nucleation (phase II) and growth (phase III) of the nanoparticles (Reproduced from Ref. [155] with permission of The Royal Society of Chemistry)

phase II, the nucleation stage, as the monomer concentration reaches the critical supersaturation level (SC), the energy of the system becomes sufficient to overcome the energy barrier. This results in a rapid formation of nuclei (i.e., burst nucleation). As a result of the burst nucleation, the monomer concentration falls rapidly until it drops below the critical SC value, and the system enters phase III, the growth stage. In this phase, further nucleation cannot be sustained; instead, the existing particles increase in size, constituting the growth process. Burst nucleation is therefore highly desirable for producing monodispersed nanoparticles, as all the nuclei are formed simultaneously and grow under identical conditions. Although there is much interest in the synthesis of nanoparticles and polymeric materials using microfluidic chips and flow chemistry [154, 157, 158], subsequent applications of the as-synthesized nanoparticles remain rather limited. We have therefore selected a few studies that we hope will generate more interest
in the biophotonic applications of these materials and spur further development of nanoparticles using fluidic devices. In the following subsections, three different biophotonic applications of nanoparticles synthesized using miniature fluidic chip devices are presented. First, we discuss the synthesis of cadmium telluride (CdTe) quantum dots (QDs) in a microfluidic chip and their application to cell imaging. Next, the microfluidic fabrication of chitosan nanoparticles loaded with the drug paclitaxel for cancer therapy is examined. The last application involves the millifluidic chip synthesis of copper sulfide (Cu2−xS) nanocrystals for laser photothermal therapy.
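A practical advantage of chip synthesis invoked throughout this section is the distance-to-time mapping along the channel: at a steady flow rate, a position x along the channel corresponds to a reaction time t = x·A/Q (cross-sectional area A, volumetric flow rate Q). A minimal sketch with hypothetical channel dimensions (not those of any specific chip in the cited works):

```python
def residence_time_s(width_m, height_m, length_m, flow_ul_per_min):
    """Mean residence time: channel volume divided by volumetric flow rate."""
    volume_m3 = width_m * height_m * length_m
    q_m3_per_s = flow_ul_per_min * 1e-9 / 60.0  # 1 uL = 1e-9 m^3
    return volume_m3 / q_m3_per_s

# Hypothetical channel: 200 um x 200 um cross-section, 80 mm long, 10 uL/min.
t = residence_time_s(200e-6, 200e-6, 80e-3, 10.0)
print(f"residence time along the full channel: {t:.1f} s")
```

Quenching or sampling at a fixed distance down the channel therefore freezes the reaction at a well-defined time, which is how fine temporal control over nucleation and growth is achieved.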

Bioimaging Using QDs

Quantum dots (QDs) are semiconductor nanoparticles smaller than the exciton Bohr radius of the bulk material. As a consequence of the quantum confinement effect, the emission wavelength of QDs can be tuned by controlling their physical size. This photoluminescence is the reason these nanoparticles are widely used in bioimaging applications. Compared with traditional fluorescent organic dyes, which have narrow excitation peaks and broad emission spectra [159], QDs have

Fig. 18 (a) Simulated and actual microfluidic chips with different channel dimensions, (b) the corresponding experimental results of the microfluidic chip synthesized BSA–CdTe quantum dots (QDs), (c) photoluminescence, and (d) photostability characterizations of the microfluidic (MF) and benchtop (BT) BSA-QDs. (Reproduced from Ref. [165] with permission of The Royal Society of Chemistry)


broad excitation spectra and narrow emission peaks. This unique property enables multiple fluorescent QD markers (with different emission wavelengths) labeling different cellular regions of interest to be observed using a single excitation wavelength [160, 161]. Besides that, QDs are more resistant to photobleaching than organic dyes [162], retaining their ability to fluoresce under continuous exposure at the excitation wavelength and thereby allowing studies that require uninterrupted monitoring of fluorescence emission, such as the elucidation of signaling pathways [163] and cancer metastasis [164]. Hu et al. from our group presented a detailed study of the microfluidic chip synthesis of cadmium telluride (CdTe) QDs conjugated with bovine serum albumin (BSA) and folic acid (FA) biomolecules for imaging macrophage and pancreatic cancer cells [165]. In this work, the effects of parameters such as the reaction temperature and channel dimensions (800 mm in length, 200 μm in height, and widths of 200 μm, 400 μm, and 600 μm) on the quality of the QDs produced (Fig. 18a, b) were investigated. In addition, the photoluminescence and photostability of the microfluidic (MF) chip-synthesized CdTe–BSA QDs and the

Fig. 19 Bioimaging of RAW264.7 mice macrophage cells and Panc-1 human pancreatic cells labeled with the microfluidic (MF) (a) BSA-QDs, (b) MPA-QDs, and (c) FA-QDs. (Reproduced from Ref. [165] with permission of The Royal Society of Chemistry)

32

A.M. Soehartono et al.

conventional benchtop (BT) method were compared. As presented in Fig. 18c, d, it was found that the microfluidic chip could produce QDs of comparable quality and stability in a drastically shorter time. RAW264.7 mouse macrophages and Panc-1 human pancreatic cancer cells were seeded onto cover glasses with Dulbecco's Modified Eagle's Medium (DMEM) in a six-well plate prior to treatment with the QDs. The cells were then treated with the BSA-QD, 3-mercaptopropionic acid QD (MPA-QD), and FA-QD formulations and incubated for 4 h. After incubation, the treated cells were rinsed thrice with phosphate-buffered saline and observed under a microscope. The microscope images in Fig. 19a, c show the RAW264.7 and Panc-1 cells labeled with the as-synthesized BSA-QDs and FA-QDs. The bright green fluorescence signals are a clear indication that the microfluidic BSA-QDs and FA-QDs provide excellent contrast and can therefore be used as optical contrast agents for cell imaging. In addition, folic acid functions as a targeting ligand, as folate receptors are overexpressed in many types of cancer cells. Lastly, the MPA-QDs in the middle row (Fig. 19b) are included, despite MPA not being a biomolecule, to indicate that uptake of the QDs into the cells occurred via a specific bio-mediated pathway and not by passive diffusion.
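The size-tunable emission invoked above follows, to first order, from the particle-in-a-sphere (Brus) confinement term added to the bulk band gap. The sketch below uses rough, illustrative CdTe-like parameters and omits the Coulomb correction, so it overestimates the blue shift; it is meant only to show the trend of emission versus size:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M0 = 9.1093837015e-31    # free-electron mass, kg
EV = 1.602176634e-19     # J per eV

def confined_gap_ev(radius_nm, bulk_gap_ev=1.5, me_rel=0.1, mh_rel=0.4):
    """First-order Brus estimate of a QD optical gap: bulk band gap plus the
    particle-in-a-sphere confinement energy (Coulomb term omitted)."""
    r = radius_nm * 1e-9
    confinement = (HBAR**2 * math.pi**2 / (2 * r**2)) * (1 / (me_rel * M0) + 1 / (mh_rel * M0))
    return bulk_gap_ev + confinement / EV

# Illustrative CdTe-like parameters: bulk gap ~1.5 eV, light electron,
# heavier hole. Smaller dots -> larger gap -> bluer emission.
gaps = {r: confined_gap_ev(r) for r in (2.0, 3.0, 4.0)}  # radius in nm
for r, e in gaps.items():
    print(f"R = {r} nm -> gap ~ {e:.2f} eV, emission ~ {1239.84 / e:.0f} nm")
```

The monotonic red shift with increasing radius is the physical basis for the multi-color labeling described above.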

Drug Delivery

Most anticancer drugs, such as camptothecin and paclitaxel, are hydrophobic in nature, making it difficult to deliver them directly to diseased sites; nanoparticles can therefore be used as drug delivery agents to encapsulate these hydrophobic drugs and transport them to tumor cells. Majedi et al. [166] made use of a T-shaped microfluidic chip, in which the mixing channel measured 150 μm (width) by 60 μm (height) by 1 cm (length), to assemble chitosan nanoparticles loaded with the cancer chemotherapy drug paclitaxel. Chitosan is a polysaccharide popularly used as a drug delivery carrier because it is biodegradable, easily chemically functionalized, and sensitive to pH changes [167]. By injecting the hydrophobically modified chitosan solution (HMCS) along with the paclitaxel chemotherapy drug (i.e., the polymeric stream, the dark-green region in Fig. 20) and infusing the other two inlets (i.e., the carrier streams, light-green regions) with water, a tightly focused fluid stream was formed. By varying the relative flow rates of the polymeric and carrier streams, different mixing regimes were attained, resulting in nanoparticles with different sizes, compactness, and surface charges. The chitosan nanoparticles were labeled with fluorescein isothiocyanate (FITC) dye so that their cellular uptake could be quantified by measuring the mean fluorescence intensity at different nanoparticle concentrations using flow cytometry. Human breast adenocarcinoma (MCF-7) cells were employed as the cell model in this case study. Figure 21 shows that the microfluidic HMCS nanoparticles exhibited enhanced cellular internalization compared with the bulk-synthesized nanoparticles, which was attributed to the reduced particle size, increased compactness, and higher surface charge conferred by the microfluidic synthesis [168, 169].

Miniaturized Fluidic Devices and Their Biophotonic Applications

33

Fig. 20 T-shaped microfluidic chip for the assembly of hydrophobically modified chitosan (HMCS) nanoparticles via hydrodynamic flow focusing (Reprinted with permission from Ref. [166]. Copyright 2014 John Wiley & Sons, Inc.)

Fig. 21 Fluorescence intensity of fluorescein isothiocyanate (FITC)-labeled chitosan nanoparticles (FITC-HMCS or f-HMCS) indicating the cellular uptake with respect to f-HMCS concentration (Reprinted with permission from Ref. [166]. Copyright 2014 John Wiley & Sons, Inc.)

34

A.M. Soehartono et al.

Fig. 22 Paclitaxel (PTX) drug release profile of the chip synthesized chitosan nanoparticles as a function of the environmental pH to mimic circulation around tumor cells (Reprinted with permission from Ref. [166]. Copyright 2014 John Wiley & Sons, Inc.)

The microfluidic chip-synthesized chitosan nanoparticles enabled controlled release of the PTX drug over time, with release rates that vary according to the environmental pH. Figure 22 depicts an in vitro simulation of the cellular environment: the nanoparticles circulate in the body (pH = 7.4) maintaining their drug load, encounter tumor cells (pH = 6.5), which increases the drug release rate, and are engulfed by the lysosome after entering the tumor cell (pH = 5.5), where they rapidly off-load the drug. The microfluidic chip-synthesized chitosan nanoparticles are therefore capable of carrying and delivering the anticancer drug to targeted tumor cells owing to their pH sensitivity, and the process can be monitored by measuring the fluorescence emission, as the nanoparticles are labeled with FITC dye.
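The pH-dependent release behavior can be caricatured with a first-order release model in which the rate constant k grows as the pH drops. The rate constants below are hypothetical placeholders chosen only to reproduce the qualitative trend of Fig. 22, not fitted values from Ref. [166]:

```python
import math

def cumulative_release(t_h, k_per_h):
    """First-order release: fraction of the loaded drug released after
    t_h hours with rate constant k_per_h (per hour)."""
    return 1.0 - math.exp(-k_per_h * t_h)

# Hypothetical rate constants mimicking the pH trend of the release curves:
# slow at physiological pH, faster near tumors, fastest in the lysosome.
K_BY_PH = {7.4: 0.01, 6.5: 0.05, 5.5: 0.20}

for ph in sorted(K_BY_PH, reverse=True):
    frac = cumulative_release(24, K_BY_PH[ph])
    print(f"pH {ph}: ~{100 * frac:.0f}% released after 24 h")
```

A stimulus-responsive carrier of this kind keeps most of its payload in circulation while dumping it where the local pH signals a tumor-cell environment.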

Photothermal Therapy

Photothermal therapy (PTT) makes use of photothermal agents that convert light energy (laser irradiation) into heat, causing localized temperature elevations. Nanoparticles such as gold nanostructures [170, 171] and carbon-based nanomaterials such as graphene [172] and carbon nanotubes [173] are commonly studied photothermal agents, while copper sulfide nanoparticles have only recently gained popularity [62, 174, 175]. The idea underlying photothermal therapy is to induce only malignant cells to take up the nanoparticles, so that upon near-infrared (NIR) irradiation only the cancerous cells are annihilated, leaving the neighboring healthy tissue unscathed and thereby achieving targeted cancer therapy. Our group recently published work on the synthesis of copper sulfide (Cu2−xS) nanoparticles using a millifluidic chip with millimeter-scale channels (1.5 mm (width) by 1.5 mm (height) by 545 mm (total length)) [176], as presented in Fig. 23. This chip produced Cu2−xS nanoparticles of different sizes, morphologies,

Fig. 23 (a) Annotated schematic diagram, (b) actual chip, (c) top view, and (d) cross-sectional view of the millifluidic chip device used for the Cu2−xS nanocrystal synthesis (Reproduced from Ref. [176] with permission of The Royal Society of Chemistry)

and crystal structures by varying factors such as the injection flow rate of the precursors and the molar ratio of the copper and sulfur precursors. The millifluidic chip is simple and cost-effective to fabricate, provides relatively high throughput, and at the same time retains the advantages of microfluidic synthesis, such as laminar flow in the channels [157], distance-to-time spatial resolution of the reaction process [158], and a large surface-area-to-volume ratio for uniform heating. For biological applications, the Cu2−xS nanoparticles had to undergo ligand exchange with L-glutathione (GSH) to transfer them from the initial organic phase to the aqueous phase, ensuring biocompatibility and facilitating their subsequent uptake into cells. RAW264.7 mouse macrophage cells were cultured with DMEM in a six-well plate before 13.5 μM and 27 μM of Cu2−xS–GSH nanoparticles were added. The treated cells were washed thrice with PBS buffer following 4 h of incubation at 37 °C and 5% CO2, and were then exposed to a 915 nm near-infrared (NIR) fiber laser at power densities of 36.7 W/cm2 and 52.1 W/cm2 for 15 min each. The cells were then stained with propidium iodide (PI), a fluorescent dye that is impermeable to the cell membrane of a healthy, viable cell. Figure 24 illustrates the results after PI staining, where regions in red indicate that the macrophage cells have lost their plasma membrane integrity, thereby allowing

Fig. 24 Microscope pictures of the effect of varying copper sulfide (Cu2−xS) nanoparticle concentration on RAW264.7 mouse macrophage cells after illumination by 915 nm near-infrared (NIR) laser light for 15 min at power densities of 36.7 W/cm2 and 52.1 W/cm2. Cell death and damage show up in red after staining with propidium iodide (PI). The inset at the far left depicts the photothermal therapy setup viewed through a NIR viewer (Reproduced from Ref. [176] with permission of The Royal Society of Chemistry)

the dye to diffuse through the cell membranes and stain the cells. At power densities of 36.7 W/cm2 and 52.1 W/cm2, conspicuous regions of cell death were observed, while the controls, untreated macrophage cells (i.e., no Cu2−xS) and treated cells without NIR irradiation, remained viable. Close examination of the stained regions revealed the roughly circular areas annotated in Fig. 25: at a fixed power density of 52.1 W/cm2, the diameter of the stained region was 571 μm for the lower Cu2−xS concentration of 13.5 μM, compared with 1472 μm for the higher concentration of 27 μM. Despite the limited photothermal conversion efficiency of the as-synthesized spherical Cu2−xS, we believe that higher conversion efficiencies can be achieved by further optimizing the laser irradiation wavelength and employing specially engineered Cu2−xS superstructures, as demonstrated by Tian et al. [174].
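The dose dependence seen above, with larger kill zones at higher particle concentration, is consistent with a simple lumped heating picture: absorbed optical power warms the illuminated volume against Newtonian cooling. Every parameter below is a hypothetical placeholder chosen to show the scaling, not a value measured in Ref. [176]:

```python
import math

def lumped_temperature_rise(intensity_w_cm2, absorbed_fraction, spot_area_cm2,
                            h_w_per_k, heat_capacity_j_per_k, t_s):
    """Lumped-element heating: absorbed laser power warms a small volume,
    while Newtonian cooling (coefficient h) carries heat away."""
    p_abs = absorbed_fraction * intensity_w_cm2 * spot_area_cm2  # absorbed power, W
    dt_steady = p_abs / h_w_per_k                                # steady-state rise, K
    tau = heat_capacity_j_per_k / h_w_per_k                      # thermal time constant, s
    return dt_steady * (1.0 - math.exp(-t_s / tau))

# Hypothetical parameters (placeholders only): 36.7 W/cm2 over a 0.01 cm2 spot,
# 15 min exposure; doubling nanoparticle uptake doubles the absorbed fraction.
low = lumped_temperature_rise(36.7, 0.02, 0.01, 1e-3, 0.05, 900)
high = lumped_temperature_rise(36.7, 0.04, 0.01, 1e-3, 0.05, 900)
print(f"temperature rise: ~{low:.1f} K (low uptake) vs ~{high:.1f} K (high uptake)")
```

In this linear model the temperature rise scales directly with the absorbed fraction, which is one plausible reason the higher nanoparticle concentration produced a much larger damaged region.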

Summary and Future Perspectives

Miniaturized devices have emerged as a promising platform for biophotonics, providing advantages such as tunability, portability, and the potential integration of different components. We believe that combining milli- and nanofluidic regimes with existing microfluidic systems will provide a comprehensive platform for personalized medicine. In the near future, we predict steady growth in research areas such as diagnostics and therapy, where miniaturized devices have great potential but need further refinement to overcome the challenges in medical research, diagnostics, nanomedicine, and fabrication technology.

Fig. 25 Microscope images with annotated circular regions indicating the extent of cell damage as a result of both the copper sulfide (Cu2−xS) nanoparticle concentration and the near-infrared (NIR) laser power density (Reproduced from Ref. [176] with permission of The Royal Society of Chemistry)

In medical research, we see two important applications: first, in the area of genomics, facilitated by nanofluidics, and, second, microfluidics and millifluidics in cellomics. Nanofluidics enables the imaging of stretched single DNA molecule, which can be used to generate optical maps of single DNA with high sensitivity and resolution. Current optical gene mapping is limited by the DNA length and is not capable of handling small sample volumes. These challenges can be addressed by using nanofluidic devices which can handle large DNA molecules (10s to 100s of kilobase pairs). When integrated with existing microfluidic devices for sample preparation, such as cell sorting and lysing, DNA extraction from a single cell is made possible. Such a tool could broaden the current knowledge in the influence of genetics on disease predisposition and predict individualized drug responses. However, current nanofabrication techniques are still laborious and expensive. Thus, nanofluidics urgently needs simpler and highly reproducible fabrication technologies that can achieve nanometric resolutions before it can reach critical mass. In diagnostics and therapeutic monitoring, miniaturized fluidic devices are attractive as a platform through flow cytometry and plasmonic biosensing. To this end, plasmonic biosensors need to reliably detect trace quantities of molecules of interest with high sensitivity and specificity. Although many biomarkers exist, their


A.M. Soehartono et al.

effectiveness is contingent on the intrinsic ability of the marker to correctly associate a detectable shift with a diagnosis or outcome. Further, biomarkers should have long-term shelf-life stability. As such, more work needs to be done in understanding the fundamentals of biomarkers. In addition to improving the limit of detection, multiplexed biosensing will allow for high-throughput screening. While some works have explored this, it would be of interest to extend multiplexed sensing further. Moreover, by employing multiple biomarkers, it is hoped that detection accuracy can be increased and that false positives will be greatly reduced. In flow cytometry, the development of microfluidic technology promises low-cost, portable, and easy-to-use flow cytometers. Nevertheless, it is still too early to confidently substitute existing flow cytometers with microfluidic devices in many cell studies or clinical analyses, as throughput, accuracy, system integration, and compatibility with multiple cell types still need to be improved. In nanomedicine, on-chip reactionware provides a means to develop and customize medications with higher reproducibility and tight size distributions. While there have been tremendous developments using conventional bulk macroscale synthesis methods, translating such synthesis protocols to the microscale is not a direct process, as the reaction conditions are intrinsically different. Therefore, we believe that in the coming few years there will be more exploratory work on the synthesis of different nanoparticles using microfluidic and millifluidic technologies. By employing a miniaturized on-chip synthesis scheme, the reaction can be controlled with millisecond resolution.
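The millisecond control mentioned above follows from simple plug-flow arithmetic: the mean residence time is the channel volume divided by the volumetric flow rate, so the position of a reagent-addition point along the channel sets the reaction time. A minimal sketch, using assumed channel dimensions and flow rate (illustrative values, not taken from the chapter):

```python
def residence_time_s(channel_length_m: float, cross_section_m2: float,
                     flow_rate_m3_per_s: float) -> float:
    """Mean residence time = channel volume / volumetric flow rate."""
    mean_velocity = flow_rate_m3_per_s / cross_section_m2
    return channel_length_m / mean_velocity

# Assumed example values: 100 um x 100 um channel cross-section, 10 uL/min flow.
cross_section = 100e-6 * 100e-6        # m^2
flow_rate = 10e-9 / 60.0               # 10 uL/min expressed in m^3/s
velocity = flow_rate / cross_section   # mean plug velocity, m/s

# Distance the reagent plug advances in one millisecond: this is the spatial
# pitch at which injection points must be placed for 1-ms reaction-time steps.
pitch_per_ms_um = velocity * 1e-3 * 1e6

print(f"mean velocity: {velocity * 1e3:.1f} mm/s")
print(f"distance per millisecond: {pitch_per_ms_um:.1f} um")
print(f"residence time over 1 cm: {residence_time_s(0.01, cross_section, flow_rate):.2f} s")
```

At these assumed values the plug advances only about 17 μm per millisecond, so micrometre-scale placement of inlets and junctions along the channel translates directly into millisecond-scale control over reaction time.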
There is therefore hope of uncovering novel advantageous characteristics of nanoparticles, such as narrower absorption peaks and higher densities, which might otherwise be challenging to achieve with traditional synthesis methods. While many reported devices perform one part of a process on a chip with supporting off-chip components, it is desirable to combine all operations on-chip, as is the goal with LoC-type devices. New developments in complementary areas such as optoelectronics (for more compact and efficient instruments) and fabrication will enable fully independent, self-contained devices. Furthermore, we project that as these platforms move toward commercialization and clinical translation, fabrication methods and materials will move into the research spotlight. Despite the ease of rapid prototyping of microfluidic devices with PDMS and soft lithography, which predominate in academia, translation to mass fabrication with this material remains to be seen. Other rapid prototyping methods, such as 3D printing, are attractive options; however, the variety of available materials, their structural integrity, and their biocompatibility are important factors that require further investigation. Additionally, despite a plethora of publications dedicated to miniaturized fluidic devices, there is a lack of standardization in fluidic topology and characterization, which could hamper substantial research work. Without unifying guidelines on fluidic device operation, effort that could have gone to research would instead shift to troubleshooting the experimental setup. Finally, we envision that personalized medicine can be revolutionized by the integration of modular components playing different roles as shown in Fig. 26. In


Fig. 26 Our goal of individualized medicine achieved by the integration of various fluidic components, where the nanofluidic, microfluidic, and millifluidic regimes are abbreviated as nF, μF, and mF, respectively

this way, after a tissue or blood sample is obtained from the patient, diagnostics and treatment can be carried out within the fluidic system, taking into account his or her unique genetic makeup and thereby elucidating an ideal course of action to treat any disease. We see great potential for miniature fluidic chips in the abovementioned areas, and continued development to address current limitations is thus essential to realize these goals.

References

1. Jürgens M, Mayerhöfer T, Popp J, Lee G, Matthews DL, Wilson BC (2013) Introduction to biophotonics. In: Handbook of biophotonics. Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim
2. Dougherty TJ, Gomer CJ, Henderson BW, Jori G, Kessel D, Korbelik M et al (1998) Photodynamic therapy. J Natl Cancer Inst 90:889–905
3. Whitesides GM (2006) The origins and the future of microfluidics. Nature 442:368–373


4. Song P, Hu R, Tng DJH, Yong K-T (2014) Moving towards individualized medicine with microfluidics technology. RSC Adv 4:11499–11511
5. Prakash S, Yeom J (2014) Introduction, Chapter 1. In: Nanofluidics and microfluidics. William Andrew Publishing, Waltham, pp 1–8
6. Kitson PJ, Rosnes MH, Sans V, Dragone V, Cronin L (2012) Configurable 3D-printed millifluidic and microfluidic ‘lab on a chip’ reactionware devices. Lab Chip 12:3267–3271
7. Prakash S, Karacor MB, Banerjee S (2009) Surface modification in microsystems and nanosystems. Surf Sci Rep 64:233–254
8. Nguyen NT, Wereley ST (2006) Fundamentals and applications of microfluidics, 2nd edn. Available: http://NTUSG.eblib.com.au/patron/FullRecord.aspx?p=286927
9. Hu G, Li D (2007) Multiscale phenomena in microfluidics and nanofluidics. Chem Eng Sci 62:3443–3454
10. Baldessari F, Santiago JG (2006) Electrophoresis in nanochannels: brief review and speculation. J Nanobiotechnol 4:1–6
11. Napoli M, Eijkel JCT, Pennathur S (2010) Nanofluidic technology for biomolecule applications: a critical review. Lab Chip 10:957–985
12. Biswas S, Miller JT, Li Y, Nandakumar K, Kumar CSSR (2012) Developing a millifluidic platform for the synthesis of ultrasmall nanoclusters: ultrasmall copper nanoclusters as a case study. Small 8:688–698
13. Navin CV, Krishna KS, Bovenkamp-Langlois GL, Miller JT, Chattopadhyay S, Shibata T et al (2015) Investigation of the synthesis and characterization of platinum-DMSA nanoparticles using millifluidic chip reactor. Chem Eng J 281:81–86
14. Damodaran SP, Eberhard S, Boitard L, Rodriguez JG, Wang Y, Bremond N et al (2015) A millifluidic study of cell-to-cell heterogeneity in growth-rate and cell-division capability in populations of isogenic cells of Chlamydomonas reinhardtii. PLoS One 10:e0118987
15. Cooper JA Jr, Li W-J, Bailey LO, Hudson SD, Lin-Gibson S, Anseth KS et al (2007) Encapsulated chondrocyte response in a pulsatile flow bioreactor. Acta Biomater 3:13–21
16. Wang WS, Vanapalli SA (2014) Millifluidics as a simple tool to optimize droplet networks: case study on drop traffic in a bifurcated loop. Biomicrofluidics 8:064111
17. Sackmann EK, Fulton AL, Beebe DJ (2014) The present and future role of microfluidics in biomedical research. Nature 507:181–189
18. Wang L, Flanagan LA, Jeon NL, Monuki E, Lee AP (2007) Dielectrophoresis switching with vertical sidewall electrodes for microfluidic flow cytometry. Lab Chip 7:1114–1120
19. Chung TD, Kim HC (2007) Recent advances in miniaturized microfluidic flow cytometry for clinical use. Electrophoresis 28:4511–4520
20. Dongeun H, Wei G, Yoko K, James BG, Shuichi T (2005) Microfluidics for flow cytometric analysis of cells and particles. Physiol Meas 26:R73
21. Prakash S, Pinti M, Bhushan B (2012) Theory, fabrication and applications of microfluidic and nanofluidic biosensors. Philos Trans R Soc Lond A Math Phys Eng Sci 370:2269–2303
22. Srinivasan V, Pamula V, Pollack M, Fair R (2003) A digital microfluidic biosensor for multianalyte detection. In: IEEE the sixteenth annual international conference on micro electro mechanical systems (MEMS-03), Kyoto, 2003, pp 327–330
23. Maeng J-H, Lee B-C, Ko Y-J, Cho W, Ahn Y, Cho N-G et al (2008) A novel microfluidic biosensor based on an electrical detection system for alpha-fetoprotein. Biosens Bioelectron 23:1319–1325
24. Adams AA, Okagbare PI, Feng J, Hupert ML, Patterson D, Göttert J et al (2008) Highly efficient circulating tumor cell isolation from whole blood and label-free enumeration using polymer-based microfluidics with an integrated conductivity sensor. J Am Chem Soc 130:8633–8641
25. Hung L-H, Choi KM, Tseng W-Y, Tan Y-C, Shea KJ, Lee AP (2006) Alternating droplet generation and controlled dynamic droplet fusion in microfluidic device for CdS nanoparticle synthesis. Lab Chip 6:174–178


26. Zhao C-X, He L, Qiao SZ, Middelberg APJ (2011) Nanoparticle synthesis in microreactors. Chem Eng Sci 66:1463–1479
27. Shestopalov I, Tice JD, Ismagilov RF (2004) Multi-step synthesis of nanoparticles performed on millisecond time scale in a microfluidic droplet-based system. Lab Chip 4:316–321
28. Holmes D, Pettigrew D, Reccius CH, Gwyer JD, van Berkel C, Holloway J et al (2009) Leukocyte analysis and differentiation using high speed microfluidic single cell impedance cytometry. Lab Chip 9:2881–2889
29. Wheeler AR, Throndset WR, Whelan RJ, Leach AM, Zare RN, Liao YH et al (2003) Microfluidic device for single-cell analysis. Anal Chem 75:3581–3586
30. Brouzes E, Medkova M, Savenelli N, Marran D, Twardowski M, Hutchison JB et al (2009) Droplet microfluidic technology for single-cell high-throughput screening. Proc Natl Acad Sci 106:14195–14200
31. Carlo DD, Lee LP (2006) Dynamic single-cell analysis for quantitative biology. Anal Chem 78:7918–7925
32. Wang Z, Han T, Jeon T-J, Park S, Kim SM (2013) Rapid detection and quantification of bacteria using an integrated micro/nanofluidic device. Sens Actuators B 178:683–688
33. Jacobson SC, Baker JD, Kysela DT, Brun YV (2015) Integrated microfluidic devices for studying aging and adhesion of individual bacteria. Biophys J 108:371a
34. Harms ZD, Mogensen KB, Nunes PS, Zhou K, Hildenbrand BW, Mitra I et al (2011) Nanofluidic devices with two pores in series for resistive-pulse sensing of single virus capsids. Anal Chem 83:9573–9578
35. Mitra A, Deutsch B, Ignatovich F, Dykes C, Novotny L (2010) Nano-optofluidic detection of single viruses and nanoparticles. ACS Nano 4:1305–1312
36. Hamblin MN, Xuan J, Maynes D, Tolley HD, Belnap DM, Woolley AT et al (2010) Selective trapping and concentration of nanoparticles and viruses in dual-height nanofluidic channels. Lab Chip 10:173–178
37. Balducci A, Mao P, Han J, Doyle PS (2006) Double-stranded DNA diffusion in slitlike nanochannels. Macromolecules 39:6273–6281
38. Reisner W, Morton KJ, Riehn R, Wang YM, Yu Z, Rosen M et al (2005) Statics and dynamics of single DNA molecules confined in nanochannels. Phys Rev Lett 94:196101
39. Walter R, Jonas NP, Robert HA (2012) DNA confinement in nanochannels: physics and biological applications. Rep Prog Phys 75:106601
40. Lee K-H, Su Y-D, Chen S-J, Tseng F-G, Lee G-B (2007) Microfluidic systems integrated with two-dimensional surface plasmon resonance phase imaging systems for microarray immunoassay. Biosens Bioelectron 23:466–472
41. Albanese A, Lam AK, Sykes EA, Rocheleau JV, Chan WC (2013) Tumour-on-a-chip provides an optical window into nanoparticle tissue transport. Nat Commun 4:2718
42. Bhatia SN, Ingber DE (2014) Microfluidic organs-on-chips. Nat Biotechnol 32:760–772
43. Park J, Lee BK, Jeong GS, Hyun JK, Lee CJ, Lee S-H (2015) Three-dimensional brain-on-a-chip with an interstitial level of flow and its application as an in vitro model of Alzheimer’s disease. Lab Chip 15:141–150
44. van den Berg A, Craighead HG, Yang P (2010) From microfluidic applications to nanofluidic phenomena. Chem Soc Rev 39:899–900
45. Schoch RB, Han J, Renaud P (2008) Transport phenomena in nanofluidics. Rev Mod Phys 80:839–883
46. Beebe DJ, Mensing GA, Walker GM (2002) Physics and applications of microfluidics in biology. Annu Rev Biomed Eng 4:261–286
47. Takayama S, McDonald JC, Ostuni E, Liang MN, Kenis PJA, Ismagilov RF et al (1999) Patterning cells and their environments using multiple laminar fluid flows in capillary networks. Proc Natl Acad Sci 96:5545–5548
48. Sparreboom W, van den Berg A, Eijkel JCT (2010) Transport in nanofluidic systems: a review of theory and applications. New J Phys 12:015004


49. Karnik R, Fan R, Yue M, Li D, Yang P, Majumdar A (2005) Electrostatic control of ions and molecules in nanofluidic transistors. Nano Lett 5:943–948
50. Abgrall P, Nguyen NT (2008) Nanofluidic devices and their applications. Anal Chem 80:2326–2341
51. Pennathur S, Santiago JG (2005) Electrokinetic transport in nanochannels. 1. Theory. Anal Chem 77:6772–6781
52. Pennathur S, Santiago JG (2005) Electrokinetic transport in nanochannels. 2. Experiments. Anal Chem 77:6782–6789
53. Mannion JT, Reccius CH, Cross JD, Craighead HG (2006) Conformational analysis of single DNA molecules undergoing entropically induced motion in nanochannels. Biophys J 90:4538–4545
54. Mawatari K, Kubota S, Xu Y, Priest C, Sedev R, Ralston J et al (2012) Femtoliter droplet handling in nanofluidic channels: a Laplace nanovalve. Anal Chem 84:10812–10816
55. Song P, Tng DJH, Hu R, Lin G, Meng E, Yong K-T (2013) An electrochemically actuated MEMS device for individualized drug delivery: an in vitro study. Adv Healthcare Mater 2:1170–1178
56. Bhagat AAS, Hou HW, Li LD, Lim CT, Han J (2011) Pinched flow coupled shear-modulated inertial microfluidics for high-throughput rare blood cell separation. Lab Chip 11:1870–1878
57. Morteza A, John TWY, Mehdi S (2011) System integration in microfluidics. In: Microfluidics and nanofluidics handbook. CRC Press, Boca Raton, pp 269–286
58. Nisar A, Afzulpurkar N, Mahaisavariya B, Tuantranont A (2008) MEMS-based micropumps in drug delivery and biomedical applications. Sens Actuators B 130:917–942
59. Nguyen N-T, Huang X, Chuan TK (2002) MEMS-micropumps: a review. J Fluids Eng 124:384–392
60. Leach J, Mushfique H, di Leonardo R, Padgett M, Cooper J (2006) An optically driven pump for microfluidics. Lab Chip 6:735–739
61. Tas NR, Berenschot JW, Lammerink TSJ, Elwenspoek M, van den Berg A (2002) Nanofluidic bubble pump using surface tension directed gas injection. Anal Chem 74:2224–2227
62. Ouellet E, Lausted C, Lin T, Yang CWT, Hood L, Lagally ET (2010) Parallel microfluidic surface plasmon resonance imaging arrays. Lab Chip 10:581–588
63. Ouyang H, Xia Z, Zhe J (2010) Voltage-controlled flow regulating in nanofluidic channels with charged polymer brushes. Microfluid Nanofluid 9:915–922
64. Lee C-Y, Chang C-L, Wang Y-N, Fu L-M (2011) Microfluidic mixing: a review. Int J Mol Sci 12:3263
65. Kim DS, Lee SH, Kwon TH, Ahn CH (2005) A serpentine laminating micromixer combining splitting/recombination and advection. Lab Chip 5:739–747
66. Bothe D, Stemich C, Warnecke H-J (2006) Fluid mixing in a T-shaped micro-mixer. Chem Eng Sci 61:2950–2958
67. Wong SH, Ward MCL, Wharton CW (2004) Micro T-mixer as a rapid mixing micromixer. Sens Actuators B 100:359–379
68. Che-Hsin L, Chien-Hsiung T, Lung-Ming F (2005) A rapid three-dimensional vortex micromixer utilizing self-rotation effects under low Reynolds number conditions. J Micromech Microeng 15:935
69. Long M, Sprague MA, Grimes AA, Rich BD, Khine M (2009) A simple three-dimensional vortex micromixer. Appl Phys Lett 94:133501
70. Ye Z, Li S, Zhou B, Hui YS, Shen R, Wen W (2014) Nanofluidic mixing via hybrid surface. Appl Phys Lett 105:163501
71. Yu S, Jeon T-J, Kim SM (2012) Active micromixer using electrokinetic effects in the micro/nanochannel junction. Chem Eng J 197:289–294
72. Kim D, Raj A, Zhu L, Masel RI, Shannon MA (2008) Non-equilibrium electrokinetic micro/nano fluidic mixer. Lab Chip 8:625–628
73. Liang-Hsuan L, Kee Suk R, Chang L (2002) A magnetic microstirrer and array for microfluidic mixing. J Microelectromech Syst 11:462–469


74. Lei KF (2015) Materials and fabrication techniques for nano- and microfluidic devices, Chapter 1. In: Microfluidics in detection science: lab-on-a-chip technologies. The Royal Society of Chemistry, Cambridge, pp 1–28
75. Ren K, Zhou J, Wu H (2013) Materials for microfluidic chip fabrication. Acc Chem Res 46:2396–2406
76. Iliescu C, Taylor H, Avram M, Miao J, Franssila S (2012) A practical guide for the fabrication of microfluidic devices using glass and silicon. Biomicrofluidics 6:016505
77. Becker H, Gärtner C (2008) Polymer microfabrication technologies for microfluidic systems. Anal Bioanal Chem 390:89–111
78. Xia Y, Whitesides GM (1998) Soft lithography. Annu Rev Mater Sci 28:153–184
79. Whitesides GM, Ostuni E, Takayama S, Jiang X, Ingber DE (2001) Soft lithography in biology and biochemistry. Annu Rev Biomed Eng 3:335–373
80. Tsuda S, Jaffery H, Doran D, Hezwani M, Robbins PJ, Yoshida M et al (2015) Customizable 3D printed ‘plug and play’ millifluidic devices for programmable fluidics. PLoS One 10:e0141640
81. Duan C, Wang W, Xie Q (2013) Review article: fabrication of nanofluidic devices. Biomicrofluidics 7:026501
82. Bocquet L, Tabeling P (2014) Physics and technological aspects of nanofluidics. Lab Chip 14:3143–3158
83. Guo LJ (2007) Nanoimprint lithography: methods and material requirements. Adv Mater 19:495–513
84. Li D (2008) Nanochannel fabrication. In: Li D (ed) Encyclopedia of microfluidics and nanofluidics. Springer US, Boston, pp 1409–1414
85. Chou SY, Krauss PR, Renstrom PJ (1996) Nanoimprint lithography. J Vac Sci Technol B 14:4129–4133
86. Kim Y, Kim KS, Kounovsky KL, Chang R, Jung GY, dePablo JJ et al (2011) Nanochannel confinement: DNA stretch approaching full contour length. Lab Chip 11:1721–1729
87. Marie R, Kristensen A (2012) Nanofluidic devices towards single DNA molecule sequence mapping. J Biophotonics 5:673–686
88. Cipriany BR, Zhao R, Murphy PJ, Levy SL, Tan CP, Craighead HG et al (2010) Single molecule epigenetic analysis in a nanofluidic channel. Anal Chem 82:2480–2487
89. Tegenfeldt JO, Prinz C, Cao H, Chou S, Reisner WW, Riehn R et al (2004) From the cover: the dynamics of genomic-length DNA molecules in 100-nm channels. Proc Natl Acad Sci U S A 101:10979–10983
90. Das SK, Austin MD, Akana MC, Deshpande P, Cao H, Xiao M (2010) Single molecule linear analysis of DNA in nano-channel labeled with sequence specific fluorescent probes. Nucleic Acids Res 38:e177
91. Lam ET, Hastie A, Lin C, Ehrlich D, Das SK, Austin MD et al (2012) Genome mapping on nanochannel arrays for structural variation analysis and sequence assembly. Nat Biotechnol 30:771–776
92. Friedrich SM, Zec HC, Wang TH (2016) Analysis of single nucleic acid molecules in micro- and nano-fluidics. Lab Chip 16:790–811
93. Miller JM (2013) Whole-genome mapping: a new paradigm in strain-typing technology. J Clin Microbiol 51:1066–1070
94. Gupta A, Place M, Goldstein S, Sarkar D, Zhou S, Potamousis K et al (2015) Single-molecule analysis reveals widespread structural variation in multiple myeloma. Proc Natl Acad Sci U S A 112:7689–7694
95. Reisner W, Larsen NB, Silahtaroglu A, Kristensen A, Tommerup N, Tegenfeldt JO et al (2010) Single-molecule denaturation mapping of DNA in nanofluidic channels. Proc Natl Acad Sci U S A 107:13294–13299
96. Nyberg LK, Persson F, Berg J, Bergstrom J, Fransson E, Olsson L et al (2012) A single-step competitive binding assay for mapping of single DNA molecules. Biochem Biophys Res Commun 417:404–408


97. Jo K, Dhingra DM, Odijk T, de Pablo JJ, Graham MD, Runnheim R et al (2007) A single-molecule barcoding system using nanoslits for DNA analysis. Proc Natl Acad Sci U S A 104:2673–2678
98. Riley MC, Kirkup BC, Johnson JD, Lesho EP, Ockenhouse CF (2011) Rapid whole genome optical mapping of Plasmodium falciparum. Malar J 10:1–8
99. Welch RL, Sladek R, Dewar K, Reisner WW (2012) Denaturation mapping of Saccharomyces cerevisiae. Lab Chip 12:3314–3321
100. Zhu F, Skommer J, Macdonald NP, Friedrich T, Kaslin J, Wlodkowic D (2015) Three-dimensional printed millifluidic devices for zebrafish embryo tests. Biomicrofluidics 9:046502
101. Li Y, Yang F, Chen Z, Shi L, Zhang B, Pan J et al (2014) Zebrafish on a chip: a novel platform for real-time monitoring of drug-induced developmental toxicity. PLoS One 9:e94792
102. Baraban L, Bertholle F, Salverda MLM, Bremond N, Panizza P, Baudry J et al (2011) Millifluidic droplet analyser for microbiology. Lab Chip 11:4057–4062
103. Boitard L, Cottinet D, Bremond N, Baudry J, Bibette J (2015) Growing microbes in millifluidic droplets. Eng Life Sci 15:318–326
104. Piyasena ME, Graves SW (2014) The intersection of flow cytometry with microfluidics and microfabrication. Lab Chip 14:1044–1059
105. Yao B, Luo G-A, Feng X, Wang W, Chen L-X, Wang Y-M (2004) A microfluidic device based on gravity and electric force driving for flow cytometry and fluorescence activated cell sorting. Lab Chip 4:603–607
106. Zhu HY, Mavandadi S, Coskun AF, Yaglidere O, Ozcan A (2011) Optofluidic fluorescent imaging cytometry on a cell phone. Anal Chem 83:6641–6647
107. Mao X, Lin S-CS, Dong C, Huang TJ (2009) Single-layer planar on-chip flow cytometer using microfluidic drifting based three-dimensional (3D) hydrodynamic focusing. Lab Chip 9:1583–1589
108. Cheng XH, Irimia D, Dixon M, Sekine K, Demirci U, Zamir L et al (2007) A microfluidic device for practical label-free CD4+ T cell counting of HIV-infected subjects. Lab Chip 7:170–178
109. Rodriguez WR, Christodoulides N, Floriano PN, Graham S, Mohanty S, Dixon M et al (2005) A microchip CD4 counting method for HIV monitoring in resource-poor settings. PLoS Med 2:663–672
110. Moon S, Keles HO, Ozcan A, Khademhosseini A, Haeggstrom E, Kuritzkes D et al (2009) Integrating microfluidics and lensless imaging for point-of-care testing. Biosens Bioelectron 24:3208–3214
111. Patra B, Peng C-C, Liao W-H, Lee C-H, Tung Y-C (2016) Drug testing and flow cytometry analysis on a large number of uniform sized tumor spheroids using a microfluidic device. Sci Rep 6:21061
112. Simonnet C, Groisman A (2006) High-throughput and high-resolution flow cytometry in molded microfluidic devices. Anal Chem 78:5653–5663
113. Strohm EM, Gnyawali V, Van De Vondervoort M, Daghighi Y, Tsai SS, Kolios MC (2016) Classification of biological cells using a sound wave based flow cytometer. In: SPIE BiOS, 9708, 2016, pp 97081A–97081A-6
114. Mao X, Nawaz AA, Lin S-CS, Lapsley MI, Zhao Y, McCoy JP et al (2012) An integrated, multiparametric flow cytometry chip using “microfluidic drifting” based three-dimensional hydrodynamic focusing. Biomicrofluidics 6:024113–024113-9
115. Sarioglu AF, Aceto N, Kojic N, Donaldson MC, Zeinali M, Hamza B et al (2015) A microfluidic device for label-free, physical capture of circulating tumor cell clusters. Nat Methods 12:685–691
116. Gleghorn JP, Pratt ED, Denning D, Liu H, Bander NH, Tagawa ST et al (2010) Capture of circulating tumor cells from whole blood of prostate cancer patients using geometrically enhanced differential immunocapture (GEDI) and a prostate-specific antibody. Lab Chip 10:27–29


117. Tan S, Yobas L, Lee G, Ong C, Lim C (2009) Microdevice for trapping circulating tumor cells for cancer diagnostics. In: 13th international conference on biomedical engineering, 2009, pp 774–777
118. Tan SJ, Lakshmi RL, Chen P, Lim W-T, Yobas L, Lim CT (2010) Versatile label free biochip for the detection of circulating tumor cells from peripheral blood in cancer patients. Biosens Bioelectron 26:1701–1705
119. Hou HW, Warkiani ME, Khoo BL, Li ZR, Soo RA, Tan DS-W et al (2013) Isolation and retrieval of circulating tumor cells using centrifugal forces. Sci Rep 3:1259
120. Warkiani ME, Guan G, Luan KB, Lee WC, Bhagat AAS, Chaudhuri PK et al (2014) Slanted spiral microfluidics for the ultra-fast, label-free isolation of circulating tumor cells. Lab Chip 14:128–137
121. Turetsky A, Lee K, Song J, Giedt RJ, Kim E, Kovach AE et al (2015) On chip analysis of CNS lymphoma in cerebrospinal fluid. Theranostics 5:796
122. Guo J, Ma X, Menon NV, Li CM, Zhao Y, Kang Y (2015) Dual fluorescence-activated study of tumor cell apoptosis by an optofluidic system. IEEE J Sel Top Quantum Electron 21:392–398
123. Bhagat AAS, Kuntaegowdanahalli SS, Kaval N, Seliskar CJ, Papautsky I (2010) Inertial microfluidics for sheath-less high-throughput flow cytometry. Biomed Microdevices 12:187–195
124. Di Carlo D, Irimia D, Tompkins RG, Toner M (2007) Continuous inertial focusing, ordering, and separation of particles in microchannels. Proc Natl Acad Sci 104:18892–18897
125. Hur SC, Tse HTK, Di Carlo D (2010) Sheathless inertial cell ordering for extreme throughput flow cytometry. Lab Chip 10:274–280
126. Lenshof A, Magnusson C, Laurell T (2012) Acoustofluidics 8: applications of acoustophoresis in continuous flow microsystems. Lab Chip 12:1210–1223
127. Ding X, Li P, Lin S-CS, Stratton ZS, Nama N, Guo F et al (2013) Surface acoustic wave microfluidics. Lab Chip 13:3626–3649
128. Li M, Li S, Cao W, Li W, Wen W, Alici G (2012) Continuous particle focusing in a waved microchannel using negative dc dielectrophoresis. J Micromech Microeng 22:095001
129. Golden JP, Kim JS, Erickson JS, Hilliard LR, Howell PB, Anderson GP et al (2009) Multiwavelength microflow cytometer using groove-generated sheath flow. Lab Chip 9:1942–1950
130. Ozcan A, Demirci U (2008) Ultra wide-field lens-free monitoring of cells on-chip. Lab Chip 8:98–106
131. Cui X, Lee LM, Heng X, Zhong W, Sternberg PW, Psaltis D et al (2008) Lensless high-resolution on-chip optofluidic microscopes for Caenorhabditis elegans and cell imaging. Proc Natl Acad Sci 105:10670–10675
132. Tseng D, Mudanyali O, Oztoprak C, Isikman SO, Sencan I, Yaglidere O et al (2010) Lensfree microscopy on a cellphone. Lab Chip 10:1787–1792
133. Lin C-C, Wang J-H, Wu H-W, Lee G-B (2010) Microfluidic immunoassays. J Assoc Lab Autom 15:253–274
134. Zeng S, Baillargeat D, Ho H-P, Yong K-T (2014) Nanomaterials enhanced surface plasmon resonance for biological and chemical sensing applications. Chem Soc Rev 43:3426–3452
135. Hoa XD, Kirk AG, Tabrizian M (2007) Towards integrated and sensitive surface plasmon resonance biosensors: a review of recent progress. Biosens Bioelectron 23:151–160
136. Shankaran DR, Gobi KV, Miura N (2007) Recent advancements in surface plasmon resonance immunosensors for detection of small molecules of biomedical, food and environmental interest. Sens Actuators B 121:158–177
137. Kim J (2012) Joining plasmonics with microfluidics: from convenience to inevitability. Lab Chip 12:3611–3623
138. Luo Y, Yu F, Zare RN (2008) Microfluidic device for immunoassays based on surface plasmon resonance imaging. Lab Chip 8:694–700
139. Sepúlveda B, Angelomé PC, Lechuga LM, Liz-Marzán LM (2009) LSPR-based nanobiosensors. Nano Today 4:244–251


140. Huang C, Bonroy K, Reekmans G, Laureyn W, Verhaegen K, Vlaminck I et al (2009) Localized surface plasmon resonance biosensor integrated with microfluidic chip. Biomed Microdevices 11:893–901
141. Aćimović SS, Ortega MA, Sanz V, Berthelot J, Garcia-Cordero JL, Renger J et al (2014) LSPR chip for parallel, rapid, and sensitive detection of cancer markers in serum. Nano Lett 14:2636–2641
142. Huang C, Ye J, Wang S, Stakenborg T, Lagae L (2012) Gold nanoring as a sensitive plasmonic biosensor for on-chip DNA detection. Appl Phys Lett 100:173114
143. Lee S-W, Lee K-S, Ahn J, Lee J-J, Kim M-G, Shin Y-B (2011) Highly sensitive biosensing using arrays of plasmonic Au nanodisks realized by nanoimprint lithography. ACS Nano 5:897–904
144. Chen P, Chung MT, McHugh W, Nidetz R, Li Y, Fu J et al (2015) Multiplex serum cytokine immunoassay using nanoplasmonic biosensor microarrays. ACS Nano 9:4173–4181
145. De Leebeeck A, Kumar LKS, de Lange V, Sinton D, Gordon R, Brolo AG (2007) On-chip surface-based detection with nanohole arrays. Anal Chem 79:4094–4100
146. Ferreira J, Santos MJL, Rahman MM, Brolo AG, Gordon R, Sinton D et al (2009) Attomolar protein detection using in-hole surface plasmon resonance. J Am Chem Soc 131:436–437
147. Eftekhari F, Escobedo C, Ferreira J, Duan X, Girotto EM, Brolo AG et al (2009) Nanoholes as nanochannels: flow-through plasmonic sensing. Anal Chem 81:4308–4311
148. Brolo AG, Gordon R, Leathem B, Kavanagh KL (2004) Surface plasmon sensor based on the enhanced light transmission through arrays of nanoholes in gold films. Langmuir 20:4813–4815
149. Martín-Moreno L, García-Vidal FJ (2004) Optical transmission through circular hole arrays in optically thick metal films. Opt Express 12:3619–3628
150. Yanik AA, Huang M, Artar A, Chang T-Y, Altug H (2010) Integrated nanoplasmonic-nanofluidic biosensors with targeted delivery of analytes. Appl Phys Lett 96:021101
151. Suzuki A, Kondoh J, Matsui Y, Shiokawa S, Suzuki K (2005) Development of novel optical waveguide surface plasmon resonance (SPR) sensor with dual light emitting diodes. Sens Actuators B 106:383–387
152. Wang Y-C, Han J (2008) Pre-binding dynamic range and sensitivity enhancement for immunosensors using nanofluidic preconcentrator. Lab Chip 8:392–394
153. Sepúlveda B, del Río JS, Moreno M, Blanco FJ, Mayora K, Domínguez C et al (2006) Optical biosensor microsystems based on the integration of highly sensitive Mach–Zehnder interferometer devices. J Opt A Pure Appl Opt 8:S561
154. Song H, Chen DL, Ismagilov RF (2006) Reactions in droplets in microfluidic channels. Angew Chem Int Ed 45:7336–7356
155. Schladt TD, Schneider K, Schild H, Tremel W (2011) Synthesis and bio-functionalization of magnetic nanoparticles for medical diagnosis and treatment. Dalton Trans 40:6315–6343
156. LaMer VK, Dinegar RH (1950) Theory, production and mechanism of formation of monodispersed hydrosols. J Am Chem Soc 72:4847–4854
157. Lohse SE, Eller JR, Sivapalan ST, Plews MR, Murphy CJ (2013) A simple millifluidic benchtop reactor system for the high-throughput synthesis and functionalization of gold nanoparticles with different sizes and shapes. ACS Nano 7:4135–4150
158. Sai Krishna K, Navin CV, Biswas S, Singh V, Ham K, Bovenkamp GL et al (2013) Millifluidics for time-resolved mapping of the growth of gold nanostructures. J Am Chem Soc 135:5450–5456
159. Sharma P, Brown S, Walter G, Santra S, Moudgil B (2006) Nanoparticles for bioimaging. Adv Colloid Interface Sci 123–126:471–485
160. Voura EB, Jaiswal JK, Mattoussi H, Simon SM (2004) Tracking metastatic tumor cell extravasation with quantum dot nanocrystals and fluorescence emission-scanning microscopy. Nat Med 10:993–998
161. Han M, Gao X, Su JZ, Nie S (2001) Quantum-dot-tagged microbeads for multiplexed optical coding of biomolecules. Nat Biotechnol 19:631–635


162. Medintz IL, Uyeda HT, Goldman ER, Mattoussi H (2005) Quantum dot bioconjugates for imaging, labelling and sensing. Nat Mater 4:435–446
163. Rosenthal SJ, Tomlinson I, Adkins EM, Schroeter S, Adams S, Swafford L et al (2002) Targeting cell surface receptors with ligand-conjugated nanocrystals. J Am Chem Soc 124:4586–4594
164. Gonda K, Watanabe TM, Ohuchi N, Higuchi H (2010) In vivo nano-imaging of membrane dynamics in metastatic tumor cells using quantum dots. J Biol Chem 285:2750–2757
165. Hu S, Zeng S, Zhang B, Yang C, Song P, Hang Danny TJ et al (2014) Preparation of biofunctionalized quantum dots using microfluidic chips for bioimaging. Analyst 139:4681–4690
166. Majedi FS, Hasani-Sadrabadi MM, VanDersarl JJ, Mokarram N, Hojjati-Emami S, Dashtimoghadam E et al (2014) On-chip fabrication of paclitaxel-loaded chitosan nanoparticles for cancer therapeutics. Adv Funct Mater 24:432–441
167. Prabaharan M (2012) Chitosan and its derivatives as promising drug delivery carriers. Momentum Press, New York
168. Chiu Y-L, Ho Y-C, Chen Y-M, Peng S-F, Ke C-J, Chen K-J et al (2010) The characteristics, cellular uptake and intracellular trafficking of nanoparticles made of hydrophobically-modified chitosan. J Control Release 146:152–159
169. Majedi FS, Hasani-Sadrabadi MM, Hojjati Emami S, Shokrgozar MA, VanDersarl JJ, Dashtimoghadam E et al (2013) Microfluidic assisted self-assembly of chitosan based nanoparticles as drug delivery agents. Lab Chip 13:204–207
170. Chen J, Glaus C, Laforest R, Zhang Q, Yang M, Gidding M et al (2010) Gold nanocages as photothermal transducers for cancer treatment. Small 6:811–817
171. Huang X, El-Sayed IH, El-Sayed MA (2010) Applications of gold nanorods for cancer imaging and photothermal therapy. In: Grobmyer RS, Moudgil MB (eds) Cancer nanotechnology: methods and protocols. Humana Press, Totowa, pp 343–357
172. Yang K, Zhang S, Zhang G, Sun X, Lee S-T, Liu Z (2010) Graphene in mice: ultrahigh in vivo tumor uptake and efficient photothermal therapy. Nano Lett 10:3318–3323
173. Robinson JT, Welsher K, Tabakman SM, Sherlock SP, Wang H, Luong R et al (2010) High performance in vivo near-IR (>1 μm) imaging and photothermal cancer therapy with carbon nanotubes. Nano Res 3:779–793
174. Tian Q, Tang M, Sun Y, Zou R, Chen Z, Zhu M et al (2011) Hydrophilic flower-like CuS superstructures as an efficient 980 nm laser-driven photothermal agent for ablation of cancer cells. Adv Mater 23:3542–3547
175. Mou J, Li P, Liu C, Xu H, Song L, Wang J et al (2015) Ultrasmall Cu2−xS nanodots for highly efficient photoacoustic imaging-guided photothermal therapy. Small 11:2275–2283
176. Cheung T-L, Hong L, Rao N, Yang C, Wang L, Lai WJ et al (2016) The non-aqueous synthesis of shape controllable Cu2−xS plasmonic nanostructures in a continuous-flow millifluidic chip for the generation of photo-induced heating. Nanoscale 8:6609–6622

Resonant Waveguide Imaging of Living Systems: From Evanescent to Propagative Light F. Argoul, L. Berguiga, J. Elezgaray, and A. Arneodo

Contents
Introduction
Waveguide-Based Sensors
Enhancement of the Guiding Wave Mechanism by Surface Plasmon Resonance
Waveguides and Total Internal Reflection (TIR)
Total Internal Reflection
Waveguide Resonance
From TIR Microscopy to Dielectric Waveguide Microscopy
Interference Reflection Microscopy
Dielectric Waveguide Microscopy
Improving Waveguide Microscopy with Structured Illumination
Surface Plasmon Resonance: From Evanescent to Guided Waves
Principles of Surface Plasmon Resonance
Sensitivity of SPR to Dielectric Layers: From Evanescent to Guided Waves
SPR-Based Microscopy for Biological Applications
Principles of Scanning Surface Plasmon Microscopy
V(Z) Response to a Discrete Phase Jump
V(Z) Responses from Glass//Water and Glass//Gold//Water Interfaces

F. Argoul (*) • A. Arneodo
LOMA (Laboratoire Ondes et Matière d'Aquitaine), CNRS, UMR 5798, Université de Bordeaux, Talence, France
CNRS UMR5672, LP ENS Lyon, Université de Lyon, Lyon, France
e-mail: [email protected]; [email protected]
L. Berguiga
CNRS, UMR 5270, INL, INSA Lyon, Bâtiment Blaise Pascal, Villeurbanne, France
e-mail: [email protected]
J. Elezgaray
CBMN, CNRS UMR5248, Université de Bordeaux, Pessac, France
e-mail: [email protected]
© Springer Science+Business Media B.V. 2016
A. H.-P. Ho et al. (eds.), Handbook of Photonics for Biomedical Engineering, DOI 10.1007/978-94-007-6174-2_40-1


SPRWG Microscopy on Thick Dielectric Layers
SPR and SPRWG Microscopy on Living Cells
Conclusion
References

Abstract

For more than 50 years, resonant waveguides (RWGs) have offered highly sensitive label-free sensing platforms to monitor surface processes such as protein adsorption, affinity binding, monolayer-to-multilayer build-up, bacteria and, more generally, adherent or confined living mammalian cells and tissues. The sensitivity of symmetrical planar dielectric RWGs was improved by coating at least one of their surfaces with a metal film for surface plasmon resonance waveguiding (SPRWG). However, RWG sensitivity was often obtained at the expense of spatial resolution and could not compete with other high-resolution fluorescence microscopies. For years, RWGs were only rarely combined with high-resolution microscopy. Only recently have improvements in intensity and phase light-modulation techniques and the availability of low-cost high numerical aperture lenses drastically changed the devices and methodologies based on RWGs. We illustrate in this chapter how these technical and methodological evolutions have offered new, versatile, and powerful imaging tools to the biological community.

Keywords

Resonant waveguides • Surface plasmon resonance • Goos-Hänchen effect • Evanescent field microscopy • Guided-wave microscopy • High resolution imaging of living cells

Introduction

Our ability to examine living cells in their native context is crucial to understanding their dynamics, structural functions, and transformations in health and disease. Cell-based optical assays have gained popularity in drug discovery and diagnosis because they allow extraction of functional and local information that would otherwise be lost with biochemical assays. Cell-based assays mostly focus on specific cellular events traced out by identified tagged (fluorescent) biomolecular targets. However, despite their high specificity, these "target"-based assays require more manipulation (e.g., over-expression of targets with and without a readout tag) than biochemical assays, and they often alter the very native cell event they are meant to report. Cell-based assays that could provide noninvasive and real-time recording of native cellular activity with high sensitivity have been the subject of intensive research for more than 50 years. A partial response to this quest was provided by optical biosensors based on total internal reflection (TIR) and evanescent waves. These optical biosensors share a common high surface-specific sensitivity. They have been developed in a variety of configurations, including spectroscopy (IR, Raman),


fluorescence (TIRF), guided-wave sensors based on surface plasmon resonance (SPR), and phase contrast interferometry [1–3]. Biosensors based on SPR and resonant waveguide gratings were initially constructed to sense ligand affinities and kinetics of receptors immobilized on their surface. More recently, the same methods were adapted to probe the activity of living cells, such as cell adhesion and motility, proliferation and death [4–6]. The principle and advantage of waveguiding is to increase the sensitivity by multiple reflections and refractions of the light through or at the interface of the sample. However, increasing this sensitivity systematically degrades the spatial resolution and hence limits the resolution of a microscopic device that would benefit from the waveguide amplification. In this chapter, our aim is to show that a compromise can be found that optimizes both sensitivity and spatial resolution. It can be reached by combining the principles of high resolution microscopy methods [7, 8] with waveguide techniques. Interference reflection microscopy (IRM) [9, 10], also known as interference contrast or surface contrast microscopy, has been used since the 1970s to study a wide range of cellular behaviors including cell adhesion, motility, exocytosis, and endocytosis. This technique relies on reflections of an incident beam of light as it passes through materials of different refractive indices. These reflected beams interfere, producing either constructive or destructive interference depending on the thickness and index of the layer of aqueous medium between the cellular object and the glass surface. More sophisticated and sensitive phase microscopy methods were developed recently to visualize nonintrusively optical index changes in living cells.
In particular, Fourier phase microscopy (FPM) [11], digital holographic microscopy (DHM) [12], and quantitative phase microscopy (QPM) [13–21] were implemented to provide quantitative phase images of biological samples with remarkable sensitivity, reproducibility, and stability over extended periods of time. In contrast to waveguide methods, quantitative phase imaging methods are derived from diffractive optics principles and can reach a good spatial resolution, but they remain less sensitive to local optical index variations.

Waveguide-Based Sensors

Planar optical waveguides utilize thin optically transparent films with a greater refractive index than the media in contact with their surfaces. Thanks to total internal reflection (TIR), for an appropriate incidence angle, the light injected into a thin waveguide remains confined inside the film and may propagate over distances that depend on both the optical absorption inside the film and the reflection efficiency at its upper and lower interfaces (Fig. 1). Actually the light is reflected on both sides of the film, and it is this intermittent evanescent wave coupling which guides the light. Planar optical waveguides offer highly sensitive label-free platforms to monitor surface processes in aqueous solutions, from protein adsorption, affinity binding, and monolayer-to-multilayer build-up to bacteria or even living mammalian cells [22]. RWG biosensors exploit evanescent waves generated by the resonant coupling of light into a waveguide via a diffraction grating [23, 24]. RWG biosensors typically


Fig. 1 Sketch of a planar waveguide. Planar waveguide made of a planar film of thickness W and optical index nwg, embedded between a lower (n1) and an upper (n2) medium, both with smaller optical indices. When the thin film is homogeneous, waveguiding forces linear zigzag optical rays between the two surface boundaries

consist of a substrate (glass coverslip for instance, with or without an added layer), a waveguide film wherein a grating structure is embedded, a medium, and an adlayer (the sample and its medium to be characterized). Because the waveguide has a higher refractive index than its surrounding media, the guided light propagates within the waveguide due to the confinement by total internal reflection at the substrate-film and film-medium interfaces. However, RWG biosensors have a relatively poor lateral resolution due to the propagation distance of the guided light.

Enhancement of the Guiding Wave Mechanism by Surface Plasmon Resonance

Surface plasmon resonance (SPR) has been extensively used since the 1970s to probe minute refractive index variations at the interface between a gold film and a dielectric medium [25–27]. Many geometries, including metal films, stripes, nanoparticles, nanorods, holes, and slits, may support SPR and offer a range of original properties, such as field enhancement and localization, high surface and bulk sensitivity, and subwavelength localization. This explains why SPR has found applications in a wide variety of fields such as spectroscopy, nanophotonics, biosensing, plasmonic circuitry, nanolasers, and subwavelength imaging. SPR is very popular for the detection of molecular adsorption of small molecules, thin polymer films, DNA or proteins, self-assembled layers, and so on [28, 29]. The principle of SPR detection takes advantage of the high sensitivity of the surface plasmon polariton (SPP) to refractive index gradients generated by the adsorption of molecules on gold. The plasmon wave undergoes a modification of both its amplitude and phase when it meets an obstacle of different index. The SPP is a transverse magnetic (TM) or p-polarized surface wave that propagates along a metal-dielectric interface, typically at visible or infrared wavelengths [27]. The SPP is a surface mode, since the electric and magnetic field amplitudes decay exponentially in the z direction normal to the interface into both the metal and the dielectric material. Because of dissipative losses in the metal, SPPs are also damped in their propagation


along the x-direction (normal to the magnetic field), with the complex propagation constant $k_x = k'_x + i k''_x$. The lateral propagation length $L_x = 1/(2 k''_x)$ is responsible for the lateral resolution limitation of surface plasmon microscopy and determines to what extent the optical properties of a laterally heterogeneous, thin dielectric film deposited on top of an SPP-carrying metal surface can be analyzed by reflectivity versus angle scans. More elaborate SPR microscopy techniques with enhanced sensitivity do exist and are based on Mach-Zehnder interferometry [30, 31] or dark-field microscopy [32]. However, prism-coupled SPR devices (Kretschmann configuration) cannot reach a subwavelength resolution due to the SPP guided wave propagation inside the thin gold film [33]. Devices that replace the prism by oil-immersion [34–36] or solid-immersion [37] lenses for coupling light into SPPs circumvent this difficulty by shaping and confining the SPP laterally to the gold surface. The combination of high numerical aperture lenses and interferometric devices pushed the resolution further down to subwavelength distances [38–44]. Mainly two methods have been proposed: a wide-field SPR microscope (WSPRM) and a scanning SPR microscope (SSPRM). We have more specifically focused on the second method during the past 10 years [39, 41–44]. Improvement of RWG sensors was achieved by replacing the substrate adlayer by a metal film in so-called surface plasmon resonance waveguide (SPRWG) microscopy [45, 46]. SPRWG couples surface plasmon resonance with waveguide excitation modes and makes it possible to determine the index, thickness, and anisotropy of thin dielectric films [47, 48]. Actually, SPRWG is applicable to films thicker than the length of the evanescent field and is therefore very well suited for cellular imaging [49–52]. It benefits from the strong field enhancement by the SPP and the guided mode configuration.
In the simplest and most common case, the sensor consists of a noble metal film evaporated on a glass support. More sophisticated multi-layered structures combining metal and dielectric layers, conferring better sensitivity, have also been proposed to better match experimental requirements such as hydrophilicity or hydrophobicity, and smaller or larger k vectors [53]. Again, similarly to high resolution SPR microscopy, subwavelength resolution SPRWG microscopy requires confining the plasmon laterally with a high numerical aperture lens [43, 44].

Waveguides and Total Internal Reflection (TIR)

The standard ray model (Fig. 1) considers that the reflected beams at the film interfaces are not shifted upon reflection and does not rigorously account for the penetration (evanescent field) of the light inside the upper and lower media. The transformation of propagative light into evanescent light in a TIR configuration is at the heart of light-guiding principles. An evanescent field settles at the interface of two media with optical indices n1 and n2 such that n2 < n1; its amplitude decays exponentially in medium 2, since the z component (normal to the interface) of its wave-vector k has a non-null imaginary part. This field is also called the near-field.


Fig. 2 Reflectivity of a beam at the interface between two media with different optical indices such that n1 > n2. (a) Traditional representation of the incident and reflected beams as rays, without taking into account the Goos-Hänchen effect (the angles are noted as ϕ's, differently from our notation θ's). (b) Representation of the incident and reflected beams when considering the Goos-Hänchen effect. The reflected beam lateral shift D is illustrated as positive in this example (Reprinted with permission from Zeitschrift für Naturforschung [54])

Characteristic features of the near field are: (i) its residual wave vector component $k_x > \lVert \mathbf{k} \rVert = 2\pi n_2/\lambda_0 = n_2 |k_0|$, and (ii) its energy density E is greater than would be expected from the time-averaged flow of radiation through a volume element determined by the Poynting vector S: E > (8π/c)|S|. Purely propagating waves (without evanescent components) are the infinite plane waves, an idealization that does not exist in nature. As soon as the light beam has a finite cross-section, evanescent (non-propagating along at least one direction) field components exist; their amplitude is inversely proportional to the diameter of the light beam. They henceforth play a non-negligible role in highly focused beams, in light confined in nano-structures, and in waveguides.

Total Internal Reflection

As far as waveguide efficiency computation is concerned, it is important to realize that at each TIR at the waveguide boundary, the electric field bleeds into the surrounding media through the evanescent wave. When the incident beam is collimated, a lateral shift of the reflected beam may occur, known as the Goos-Hänchen (GH) effect [55]. It appears as if the light beam went beyond the position of the interface in the z direction (Fig. 2b). This phenomenon may become highly significant for waveguides since, at each reflection, it accumulates along the propagating direction of the waveguide. We revisit here the Fresnel model equations for reflection at a dielectric interface (Fig. 2a). For simplicity, we consider the system invariant along the y direction and therefore do not consider this third direction. In each medium the wave vector reads: $\mathbf{k}_i = [k_{ix} = k_i \sin\theta_i,\; k_{iz} = k_i \cos\theta_i]$, $\lVert \mathbf{k}_i \rVert = k_i = 2\pi n_i/\lambda_0$, where $n_i$ is the optical index of medium i and $\lambda_0$ the wavelength of light in vacuum. At the interface


between the two media, the conservation of the momentum ($\mathbf{k} \cdot \mathbf{e}_{xy}$) yields $n_1 \sin\theta_1 = n_2 \sin\theta_2$. From the continuity equations for the tangential components of the electric and magnetic fields at the interface 1//2, a 2 × 2 transmission matrix can be written as:

$$T_{M(1//2)} = \frac{1}{t_{12}} \begin{pmatrix} 1 & r_{12} \\ r_{12} & 1 \end{pmatrix}, \qquad (1)$$

where $r_{12}$ and $t_{12}$ are the complex reflection and transmission coefficients at the (1//2) interface. E (resp. E′) represents the electric field along the z > 0 (resp. z < 0) direction. These electric fields are related via a matrix product at each interface:

$$\begin{pmatrix} E_1 \\ E'_1 \end{pmatrix} = T_{M(1//2)} \begin{pmatrix} E_2 \\ E'_2 \end{pmatrix}, \qquad (2)$$

where the optical index $n_1$ (resp. $n_2$) is used for medium 1 (resp. 2). The coefficients $r_{12}$ and $t_{12}$ are obtained from the z-components of the complex wave vectors $\mathbf{k}_1$ and $\mathbf{k}_2$, respectively. For the two polarizations TM (p) and TE (s), we have:

$$r_{TM} = \frac{n_2^2 k_{1z} - n_1^2 k_{2z}}{n_2^2 k_{1z} + n_1^2 k_{2z}}, \qquad t_{TM} = \frac{2 n_1 n_2 k_{1z}}{n_2^2 k_{1z} + n_1^2 k_{2z}}; \qquad (3)$$

and

$$r_{TE} = \frac{k_{1z} - k_{2z}}{k_{1z} + k_{2z}}, \qquad t_{TE} = \frac{2 k_{1z}}{k_{1z} + k_{2z}}. \qquad (4)$$

The propagation of light from the interface (1//2) (z = 0) over a distance $d_2$ through medium (2) is described by a propagation matrix $P_2$:

$$P_2[d_2] = \begin{pmatrix} e^{-i k_{2z} d_2} & 0 \\ 0 & e^{i k_{2z} d_2} \end{pmatrix}. \qquad (5)$$

For the single interface shown in Fig. 2a, we have $n_1 \sin\theta_1 = n_2 \sin\theta_2$, $k_{1z} = 2\pi n_1 \cos(\theta_1)/\lambda_0$, and $k_{2z} = 2\pi \left(n_2^2 - n_1^2 \sin^2\theta_1\right)^{1/2}/\lambda_0$. Total internal reflection occurs when $n_1 \sin(\theta_{1TIR}) = n_2$. For $\theta_1 > \theta_{1TIR}$, $k_{2z}$ becomes imaginary and its modulus defines the inverse of the extinction length $\delta_{ev} = \lambda_0 / \left[2\pi \left(n_1^2 \sin^2\theta_1 - n_2^2\right)^{1/2}\right]$ of the evanescent field into medium 2. With the notation $e = n_2^2/n_1^2$, we get the following simplified expressions for $r_{TM}$ and $r_{TE}$:

$$r_{TM} = \frac{e \cos\theta_1 - \left(e - \sin^2\theta_1\right)^{1/2}}{e \cos\theta_1 + \left(e - \sin^2\theta_1\right)^{1/2}}, \qquad (6)$$

and

$$r_{TE} = \frac{\cos\theta_1 - \left(e - \sin^2\theta_1\right)^{1/2}}{\cos\theta_1 + \left(e - \sin^2\theta_1\right)^{1/2}}. \qquad (7)$$
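These closed-form coefficients are easy to check numerically. The short Python/NumPy sketch below (an illustrative script of ours, not code from the chapter; the function name `fresnel_r` is our own) evaluates Eqs. 6 and 7 with a complex square root, so that beyond the TIR angle the reflectivity modulus stays at unity and only the phase varies:

```python
import numpy as np

def fresnel_r(theta1, n1, n2):
    """Complex Fresnel reflection coefficients of Eqs. 6-7, with e = n2**2/n1**2."""
    e = (n2 / n1) ** 2
    # complex square root keeps the formulas valid past the TIR angle
    s = np.sqrt(complex(e - np.sin(theta1) ** 2))
    r_tm = (e * np.cos(theta1) - s) / (e * np.cos(theta1) + s)
    r_te = (np.cos(theta1) - s) / (np.cos(theta1) + s)
    return r_tm, r_te

n1, n2 = 1.5151, 1.335          # glass // water
theta_tir = np.arcsin(n2 / n1)  # ~1.078 rad
r_tm, r_te = fresnel_r(theta_tir + 0.1, n1, n2)
# beyond TIR the modulus is 1: only the phase carries information
print(abs(r_tm), abs(r_te))        # both ~1.0
print(np.angle(r_tm), np.angle(r_te))
```

Below the TIR angle the same code returns the usual real-valued Fresnel coefficients, so a single expression covers both regimes.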

The condition for TIR becomes $\sin(\theta_{1TIR}) = e^{1/2}$. In the TIR regime ($\theta_1 > \theta_{1TIR}$), the phase change upon reflection can be computed from the phase of the complex-valued $r_{TM}$ and $r_{TE}$:

$$\phi_{12TM} = \Im\{\ln(r_{TM})\}, \qquad (8)$$

and

$$\phi_{12TE} = \Im\{\ln(r_{TE})\}, \qquad (9)$$

where $\Im$ (resp. $\Re$) represents the imaginary (resp. real) part of a complex quantity. The existence of a lateral shift of the reflected beam was observed in the first half of the twentieth century by Goos and Hänchen [55] and formalized by a simple relation linking the reflected beam lateral shift $D_{pol}$ to the derivative of its phase with respect to the angle of incidence [56–59]:

$$D_{pol} = -\frac{\lambda_0}{2\pi n_1} \frac{\partial \phi_{12pol}}{\partial \theta_1}, \qquad (10)$$

where the subscript pol is either TM or TE. Another formulation was also proposed, following straightforwardly from Eqs. 8 and 9 [54, 59]:

$$D_{pol} = -\frac{\lambda_0}{2\pi n_1} \Im\left\{\frac{\partial \ln r_{pol}}{\partial \theta_1}\right\} = -\frac{\lambda_0}{2\pi n_1} \Im\left\{\frac{\partial r_{pol}/\partial \theta_1}{r_{pol}}\right\}. \qquad (11)$$

From Eqs. 6 and 7, we get:

$$D_{TM} = -\frac{\lambda_0}{\pi n_1} \Im\left\{\frac{e \sin\theta_1}{\left(e - \sin^2\theta_1\right)^{1/2} \left(\sin^2\theta_1 - e \cos^2\theta_1\right)}\right\}, \qquad (12)$$

and

$$D_{TE} = -\frac{\lambda_0}{\pi n_1} \Im\left\{\frac{\sin\theta_1}{\left(e - \sin^2\theta_1\right)^{1/2}}\right\}. \qquad (13)$$
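Equations 12 and 13 can be evaluated directly with a complex square root, reproducing the behavior plotted in Fig. 3d. The Python/NumPy sketch below (our illustrative script, with the hypothetical helper name `gh_shifts`) checks the finite grazing-incidence limit of about 0.44 λ0 discussed in the text:

```python
import numpy as np

def gh_shifts(theta1, n1, n2, lam0):
    """Goos-Hänchen shifts D_TM and D_TE from Eqs. 12 and 13."""
    e = (n2 / n1) ** 2
    s = np.sqrt(complex(e - np.sin(theta1) ** 2))   # imaginary in the TIR regime
    pre = -lam0 / (np.pi * n1)
    d_tm = pre * (e * np.sin(theta1)
                  / (s * (np.sin(theta1) ** 2 - e * np.cos(theta1) ** 2))).imag
    d_te = pre * (np.sin(theta1) / s).imag
    return d_tm, d_te

n1, n2, lam0 = 1.5151, 1.335, 632.8e-9   # glass // water, HeNe line
theta_tir = np.arcsin(n2 / n1)
# near grazing incidence the TE shift relaxes to a finite value ~0.44*lam0
d_tm, d_te = gh_shifts(np.pi / 2 - 1e-6, n1, n2, lam0)
print(d_te / lam0)   # ~0.44
```

Evaluating the same functions just above `theta_tir` reproduces the divergence of both shifts at the critical angle.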


Fig. 3 Computation of the GH shift from a dielectric interface with real positive optical indices. (a) Real (dotted-dashed line) and imaginary (solid line) part of the reflectivities for TM (red) (resp. TE (blue)) polarized light versus θ1. (b) Modulus of the reflectivity. (c) Phase of the reflectivity. (d) GH shifts computed from Eqs. 12 and 13. n1 = 1.5151, n2 = 1.335, λ = 632.8 nm

These two shifts are shown for a glass//water interface in Fig. 3d. At the TIR angle $\theta_{1TIR}$, for both polarizations, the GH shift diverges, as expected. It is also important to notice that beyond the TIR angle, these shifts do not relax to zero but to a finite value ≈ 0.44 λ0, which is not negligible for highly focused beams. Actually, as the angle of incidence $\theta_1$ approaches the critical angle $\theta_{1TIR}$ from above, the beam shift does not diverge to infinity, as predicted by Eqs. 12 and 13, but tends to a finite value bounded by the width of the beam [60]. Let us note that when the evanescent field is launched via a high numerical aperture objective lens, the reflected wave is phase shifted with respect to the incident wave, making the reflection point appear beyond the sample interface (inside the medium) [61, 62].

Generalization to Multilayer Systems

It is straightforward to generalize the matrix equations (2) and (5) to a multilayer system:

$$\begin{pmatrix} E_1 \\ E'_1 \end{pmatrix} = T_{(1//2)}\, P_2[d_2]\, T_{(2//3)} \ldots P_i[d_i]\, T_{(i//(i+1))} \ldots \begin{pmatrix} E_n \\ E'_n \end{pmatrix}. \qquad (14)$$

From this equation, we can compute the reflectivity $r = E'_1/E_1$ and the transmission $t = E_n/E_1$ ($E'_n = 0$, since there is no injection of light into medium n in the backward direction). This matrix formalism will be used in the sequel to compute the reflected field of SPR-based waveguides. It can also be used to approximate a waveguide with a non-constant index profile by a stepped index function.
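As a concrete illustration of Eq. 14, the following Python/NumPy sketch (a minimal implementation of ours, not the authors' code; function names are our own) chains the interface and propagation matrices of Eqs. 1, 3–5 and extracts r = E'_1/E_1. For a bare glass//water interface beyond the TIR angle it recovers |r| = 1:

```python
import numpy as np

def interface_T(n_i, n_j, kiz, kjz, pol):
    """2x2 transmission matrix of Eq. 1 built from the Fresnel coefficients (Eqs. 3-4)."""
    if pol == "TM":
        r = (n_j**2 * kiz - n_i**2 * kjz) / (n_j**2 * kiz + n_i**2 * kjz)
        t = 2 * n_i * n_j * kiz / (n_j**2 * kiz + n_i**2 * kjz)
    else:  # TE
        r = (kiz - kjz) / (kiz + kjz)
        t = 2 * kiz / (kiz + kjz)
    return np.array([[1, r], [r, 1]], dtype=complex) / t

def propagation_P(kz, d):
    """Propagation matrix of Eq. 5 across a layer of thickness d."""
    return np.array([[np.exp(-1j * kz * d), 0], [0, np.exp(1j * kz * d)]])

def reflectivity(ns, ds, theta1, lam0, pol="TM"):
    """Stack reflectivity r = E'_1/E_1 from the matrix product of Eq. 14.
    ns: indices of the N media; ds: thicknesses of the N-2 inner layers."""
    k0 = 2 * np.pi / lam0
    kx = k0 * ns[0] * np.sin(theta1)             # conserved tangential component
    kz = np.sqrt((k0 * ns) ** 2 - kx ** 2 + 0j)  # complex beyond TIR
    M = np.eye(2, dtype=complex)
    for i in range(len(ns) - 1):
        M = M @ interface_T(ns[i], ns[i + 1], kz[i], kz[i + 1], pol)
        if i + 1 < len(ns) - 1:                  # propagate through inner layers only
            M = M @ propagation_P(kz[i + 1], ds[i])
    return M[1, 0] / M[0, 0]                     # r = E'_1/E_1 with E'_n = 0

# sanity check: no inner layer, glass // water, beyond TIR -> |r| = 1
ns = np.array([1.5151, 1.335])
r = reflectivity(ns, [], 1.2, 632.8e-9, "TE")
print(abs(r))   # ~1.0
```

Adding entries to `ns` and `ds` (e.g., a thin gold film with a complex index) gives the reflected field of the SPR-based waveguides used later in the chapter.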


Waveguide Resonance

Between the two media (n1, n2), let us insert a planar waveguide of optical index nwg (Fig. 1). For a beam angle θwg inside the waveguide larger than both of its planar interface TIR angles, the light bounces successively off its lower (wg//1) and upper (wg//2) interfaces. Each internal reflection at these interfaces adds a phase shift (ϕwg//1 between the waveguide and medium 1 and ϕwg//2 between the waveguide and medium 2) to the electric field. The light propagating inside the waveguide of thickness W travels back and forth and acquires an extra phase shift of 2kwgW cos θwg (z component of the wave vector inside the waveguide) (Fig. 1). The transverse resonance condition for constructive interference in a round-trip passage is:

$$2 k_{wg} W \cos\theta_{wg} + \phi_{wg//1}(\theta_{wg}) + \phi_{wg//2}(\theta_{wg}) = m \cdot 2\pi. \qquad (15)$$

The TIR phase shifts at the two interfaces vary with the polarization of the electric field; we therefore have different sets of resonance angles θwg (discrete values) for each polarization. A guided mode can exist only when a transverse resonance condition is satisfied, i.e., when the repeatedly reflected wave at each interface interferes constructively with itself, so that no light gets out of the waveguide. Using relations (6) and (7) for $r_{TM}$ and $r_{TE}$, respectively, and assuming a symmetric waveguide ($n_2 = n_1$), we get the following phase sum $F(W, \theta_{wg})$ for the light propagation and reflection inside the waveguide. For TM polarization:

$$2\pi F(W, \theta_{wg}) = 4\pi n_{wg} W \cos\theta_{wg}/\lambda + 2\,\Im\left\{\ln \frac{e \cos\theta_{wg} - \left(e - \sin^2\theta_{wg}\right)^{1/2}}{e \cos\theta_{wg} + \left(e - \sin^2\theta_{wg}\right)^{1/2}}\right\} = 2\pi m, \qquad (16)$$

and for TE polarization:

$$2\pi F(W, \theta_{wg}) = 4\pi n_{wg} W \cos\theta_{wg}/\lambda + 2\,\Im\left\{\ln \frac{\cos\theta_{wg} - \left(e - \sin^2\theta_{wg}\right)^{1/2}}{\cos\theta_{wg} + \left(e - \sin^2\theta_{wg}\right)^{1/2}}\right\} = 2\pi m, \qquad (17)$$

where $e = (n_1/n_{wg})^2$. For the numerical computation of the resonant guided modes, we choose a waveguide of optical index nwg = 1.5151 (glass), immersed in water (n1 = n2 = 1.335). We solve Eqs. 16 and 17 numerically by computing F(W, θwg) (versus the variable θ1, where n1 sin θ1 = nwg sin θwg) and searching for its intersections with horizontal lines corresponding to integer values. Figure 4 shows that the resonant modes of a planar waveguide shrouded inside a medium of lower optical index can be excited without the need for sophisticated prism coupling, by a simple injection of a light beam through the liquid medium. In this illustration, we consider a rather thick waveguide, W = 4λ0 ≈ 2.5 μm, to show that the guided modes (marked


Fig. 4 Resonant guided modes in a dielectric waveguide. (a) F (W, θwg) versus θ1. (b) |r|(θ1). In (b) the reflectivity modulus for the waveguide (solid line) is compared to the TIR reflectivity of a BK7//water interface (dotted line). The colors correspond to the two polarizations TM (red) and TE (blue). nwg = 1.5151, n1 = n2 = 1.335, λ = 632.8 nm, W = 4λ0. θ1 is the incidence angle in medium 1 such that nwg sin(θwg) = n1 sin(θ1), θ1TIR = 1.078. The guided modes are marked by circles

by circles) can occur for much smaller incidence angles than θ1TIR = 1.078. In the reflectivity curves, the waveguide modes correspond to zero reflectivity, since in that case the light is trapped inside the waveguide. What is important for the waveguide is that the light be reflected by its interfaces with the immersion media, even if this reflection is not 100 % efficient. In this latter case, there may still be a fraction of the light that propagates inside the waveguide, although it is more rapidly damped at each reflection.
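The graphical mode search described above is easy to reproduce numerically. The Python/NumPy sketch below (our illustrative reimplementation with the parameters of Fig. 4, not the authors' code) evaluates the phase sum F(W, θwg) of Eq. 17 as a function of θ1 and locates the angles where it crosses integer values:

```python
import numpy as np

def F(W, theta1, n1, nwg, lam0, pol="TE"):
    """Round-trip phase sum F(W, theta_wg) of Eqs. 16-17, parameterized by theta1."""
    e = (n1 / nwg) ** 2
    sin_wg = n1 * np.sin(theta1) / nwg        # Snell: n1 sin(theta1) = nwg sin(theta_wg)
    cos_wg = np.sqrt(1 - sin_wg ** 2)
    s = np.sqrt(e - sin_wg ** 2 + 0j)         # complex branch for the TIR regime
    a = e * cos_wg if pol == "TM" else cos_wg
    phase = 4 * np.pi * nwg * W * cos_wg / lam0 + 2 * np.log((a - s) / (a + s)).imag
    return phase / (2 * np.pi)

# parameters of Fig. 4: glass film in water, W = 4*lam0
n1, nwg, lam0 = 1.335, 1.5151, 632.8e-9
W = 4 * lam0
theta1 = np.linspace(0.2, 1.07, 20000)
f = F(W, theta1, n1, nwg, lam0, "TE")
# resonances: angles where F crosses an integer value
crossings = theta1[np.nonzero(np.diff(np.floor(f)))[0]]
print(crossings.size, "TE mode candidates below theta1_TIR = 1.078 rad")
```

Sweeping `pol="TM"` instead gives the TM set of resonance angles, which differs from the TE set through the polarization-dependent phase term, as in Fig. 4a.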

From TIR Microscopy to Dielectric Waveguide Microscopy

Interference Reflection Microscopy

Ambrose [64] was the first to propose total internal reflection microscopy (TIRM) for imaging the contact of cells with a solid substrate. The evanescent field that illuminated the sample was generated by a prism, and an objective lens was placed on the opposite side of the prism to image the scattered light. As shown in Fig. 5a, this optical configuration implies that the objective lens is a water-immersion lens. An alternative configuration for TIRM consists in using the same objective lens for both the evanescent field production and the sample imaging (Fig. 5b). This configuration is better suited for high numerical aperture objective lenses. An index-matching oil couples the objective lens to the glass coverslip used as a substrate for cell adhesion [65, 66]. Another strategy consists in using Reflection Interference Contrast Microscopy (RICM) [10, 67], which markedly improves the sensitivity of TIRM [10, 68]. RICM is globally inspired by phase contrast


Fig. 5 Optical configurations for total internal reflection microscopy (TIRM). (a) The evanescent field excitation by the prism and the image collection through the objective lens are separated in space. (b) The evanescent field excitation and the sample image capture are both performed through a high aperture objective lens (Reprinted with permission from Nature Reviews Molecular Cell Biology [63])

Fig. 6 Comparison of phase contrast and reflection interference contrast (evanescent waves) microscopies. Imaging of human glioma cells with (a) phase contrast microscopy and (b) RICM (Reprinted with permission from the Journal of Cell Biology [9])

microscopy because it recombines a reference beam (undiffracted) with the beam diffracted by the sample. Figure 6 illustrates the imaging of human glioma cells with phase contrast microscopy (Fig. 6a) and RICM (Fig. 6b). This comparative analysis confirms that RICM is able to sense the parts of the cells which are very close (a few hundred nanometers) to the substrate.


Dielectric Waveguide Microscopy

Planar resonant waveguide (RWG) sensors have been widely used for biological applications for their ability to probe with high sensitivity the refractive index of lipid or protein layers, living cells, or tissues [2, 3, 69]. Different sensing films, such as phospholipid bilayers, functionalized alkyl silanes, polyethylene glycol chains, polyamino acids, and adsorbed proteins, have been added to waveguide-based biosensor platforms, enlarging their domains of application. RWG biosensors consist of four layers: a glass substrate with a diffractive grating, a waveguide thin film (with higher index), a cell layer, and the cell culture medium. Living cells can be directly cultured and coupled onto the waveguide surface to form a cell layer. When control cells are illuminated with a broadband light source, the RWG biosensor resonates at a specific wavelength (or angle) that depends on the density and distribution of biomass (e.g., proteins, molecular complexes) inside the cell. Upon biochemical or osmotic perturbation, cell morphological changes modify the local optical index and therefore the RWG resonance condition. The shift of the reflected wavelength after stimulation is characteristic of the intracellular mass redistribution that contributes to the cell optical index. The main advantage of RWG sensors, as compared to transmission microscopy techniques, is the short spatial range probed by the evanescent wave propagating at the RWG sensor surface (≈150 nm). Therefore, only partial density changes in the bottom portion of the cells in close contact with the biosensor surface are captured. However, in most reported applications, the RWG sensors did not allow high resolution imaging of the cell mass redistribution. Optical waveguide microscopy was introduced by Hickel and Knoll [70], concomitantly with surface plasmon microscopy [47].
These two methods use the same concept, namely, the coupling of incoming light to the sample, which serves as an inhomogeneous resonant waveguide cladding. The back-reflected field from the sample (which may also be diffracted by the sample) is Fourier back-converted by a lens to produce a wide-field image of the sample. The sensitivity of this method comes from the resonant coupling conditions offered by the cladding of the waveguide. The lateral resolution of these microscopes depends on the radiation losses of the waveguide; the larger the radiation losses, the higher the lateral resolution. More recently, waveguide evanescent field scattering (WEFS) microscopy was used to detect the field scattered by single bacteria attached or located very close to the waveguide [71–73]. The coupling of the light to the propagating waveguide was achieved by a grating. The light scattered by the bacteria on the waveguide surface was captured by microscope objective lenses with different magnifications. WEFS microscopy is very efficient at detecting single micron-size particles which come close to or in contact with the waveguide surface and at distinguishing those which are in close contact from those which are still out of contact. Like SPR microscopy, this method is label-free and is suited for long-term or time-lapse studies, since there is no photobleaching. Waveguide evanescent field fluorescence (WEFF) microscopy was also developed [72, 74–77] for imaging ultrathin films and cell-to-substrate interactions. This method requires staining the cell membrane or adhesion foci by


F. Argoul et al.

fluorescent dyes. Since the waveguide excitation is separated from the imaging path, the lateral resolution of the system is limited by the microscope numerical aperture, and the evanescent field propagating inside the waveguide has to be coupled back to the microscope lens for imaging.

Improving Waveguide Microscopy with Structured Illumination
A major limitation of imaging methods in living systems is their inability to resolve objects separated by a few hundred nanometers or less (the diffraction limit). Although electron microscopy offers very high resolution, it is restricted to fixed specimens and therefore not suited for surveying the spatiotemporal dynamics of living cells. Scanning-probe methods have been developed over the past 50 years to circumvent this limitation [78, 79]. However, their resolution depends on the sharpness of the mechanical, chemical, electrical, or optical probe, and they are slower than wide-field microscopy methods. Moreover, because they intrude into the culture medium, the probe must be changed often, introducing a supplementary source of experimental irreproducibility. One of the most cost-effective and promising methods developed so far for improving microscopy resolution is based on standing-wave illumination [80–83]. The use of standing-wave imaging in the TIR geometry has improved the resolution by a factor 2.6 [83], as compared to the diffraction-limited resolution δr = λ/(2NA) = λ/(2n sin α), where α is the half-angle of the objective lens aperture, NA its numerical aperture, and n its optical index. This resolution is comparable to, or even better than, that offered by other scanning-probe microscopies in many biological systems, and it can be achieved much faster, avoiding the need to fix the living sample and therefore lowering the imaging fuzziness coming from the ceaseless activity (restlessness) of living systems at submicron scales. If we consider a microscope lens as a linear and time-invariant system, we can use linear response theory to describe the principles of image formation. Once we know the impulse response function of the microscope, i.e., its point spread function (PSF), the output of the system can be calculated by convolving the input signal (or image) with the impulse response function.
Thus, the image I(x), corresponding to a given object taken as a source for the microscope, is obtained by the convolution of the intensity Ie(x0) emitted by the object with the PSF, where x = (x, y). In the Fourier domain, this convolution reads as a simple product:

$$I(\mathbf{x}) = I_e(\mathbf{x}) \otimes \mathrm{PSF}(\mathbf{x}), \qquad \hat{I}(\mathbf{k}) = \hat{I}_e(\mathbf{k})\,\mathrm{OTF}(\mathbf{k}), \tag{18}$$

where OTF is the optical transfer function, defined as the Fourier transform of the PSF, and k is the corresponding spatial frequency vector in the Fourier domain. If the object is thin, it can be described by a complex amplitude transmittance function $V_{obj}(\mathbf{x}_0) = |T(\mathbf{x}_0)|^{1/2}\exp\left(ik\phi(\mathbf{x}_0)\right)$, which gives the change in magnitude and phase of the light passing through it [84]. The emitted intensity can be approximated by the product $I_e(\mathbf{x}_0) = |V_{obj}|^2(\mathbf{x}_0)\,I_0(\mathbf{x}_0)$. This

Resonant Waveguide Imaging of Living Systems: From Evanescent to Propagative. . .


intensity is further modified by the imaging system PSF. In the case of a fluorescent image, $S_{obj}(\mathbf{x}_0) = |V_{obj}|^2(\mathbf{x}_0)$ is replaced by the fluorescence intensity yield of the dye-labeled sample. In the frequency domain, the emitted intensity writes:

$$\hat{I}_e(\mathbf{k}) = \hat{I}_0(\mathbf{k}) \otimes \hat{S}_{obj}(\mathbf{k}). \tag{19}$$

Substituting Eq. 18 into Eq. 19 leads to the following frequency spectrum of the final digitized image:

$$\hat{I}(\mathbf{k}) = \left\{\hat{I}_0(\mathbf{k}) \otimes \hat{S}_{obj}(\mathbf{k})\right\}\mathrm{OTF}(\mathbf{k}). \tag{20}$$

When the illumination I0(x) = Ic is uniform, its Fourier transform is a delta function δ(k) and we have:

$$\hat{I}(\mathbf{k}) = I_c\,\hat{S}_{obj}(\mathbf{k})\,\mathrm{OTF}(\mathbf{k}). \tag{21}$$

This means that all the spatial frequencies of the sample are filtered in the Fourier domain by the OTF of the objective lens. The objective lens can be viewed as a low-pass filter, in the sense that it cuts the highest frequencies corresponding to the finer details of the object. Let us now consider a nonuniform illumination I0(x), for example a periodic modulation in one dimension (produced by interference fringes):

$$I_0(x) = I_c\left[1 + m\cos(2\pi\kappa_0 x + \varphi)\right], \tag{22}$$

where κ0 and φ denote the spatial frequency and the initial phase of the periodic illumination pattern, respectively, and Ic and m are constants corresponding to the mean intensity and the fringe intensity modulation depth. The Fourier transform of this periodic function reads:

$$\hat{I}_0(k_x) = I_c\left[\delta(k_x) + \frac{m}{2}\,e^{i\varphi}\,\delta(k_x-\kappa_0) + \frac{m}{2}\,e^{-i\varphi}\,\delta(k_x+\kappa_0)\right]. \tag{23}$$

Substituting Eq. 23 into Eq. 20 yields:

$$\hat{I}(k_x) = I_c\left\{\hat{S}(k_x) + \frac{m}{2}\,e^{i\varphi}\,\hat{S}(k_x-\kappa_0) + \frac{m}{2}\,e^{-i\varphi}\,\hat{S}(k_x+\kappa_0)\right\}\mathrm{OTF}(k_x). \tag{24}$$

The first term in Eq. 24 represents the normal frequency spectrum observed by a conventional microscope, where the cutoff frequency fc for kx is given by the numerical aperture of the objective lens, fc = 1/δr = 2NA/λ. The second and third terms in Eq. 24 provide additional information on the object, since their central frequencies are shifted by +κ0 and −κ0, respectively; kx now satisfies |kx − κ0| ≤ 2π fc and |kx + κ0| ≤ 2π fc, respectively. By choosing the spatial frequency κ0 of the sinusoidal fringe illumination pattern larger than the cutoff frequency of the OTF, we


Fig. 7 Dual-color super-resolution TIRF microscopy based on structured illumination microscopy (SIM) of intracellular structures. (a) TIRF image. (b) TIRF-SIM image. (c) Zoom on the regions of interest shown in (a) and (b). (d) Intensity profiles, along the white, oblique section lines in (a) and (b). Scale bar: 1 μm. The astrocytes were overnight transfected with ER-EGFP (endoplasmic reticulum) and labeled with Mito-Tracker-Deep-Red, a marker of mitochondria. Images were taken sequentially upon 488- and 561-nm excitation, respectively, with no emission filter. Exposure time was 200 ms per frame (total time: 3.8 s for the acquisition of the two colors). Structured illumination period was 180 nm (206 nm) for the green (red) channel, respectively (Reprinted with permission from Optics Express [98])

extend the frequency spectrum domain and improve the image resolution. To obtain an isotropic resolution enhancement, an additional fringe intensity modulation along the y direction is necessary. This structured illumination microscopy (SIM) has been implemented for resolution enhancement in wide-field and spot-scanning fluorescence microscopy [85–91], in differential interference contrast [92] and spatial light interference microscopy [11], as well as in standing-wave and harmonic excitation light microscopy (HELM) and with nanostructured glass slides coupled to total internal reflection microscopy [81–83, 93–97]. As an illustration of the significant image improvement that can be achieved with structured illumination coupled to total internal reflection fluorescence microscopy, we present in Fig. 7 a dual-color imaging of mitochondrial dynamics in cultured cortical astrocytes. This figure reveals for the first time the existence of interaction sites between near-membrane mitochondria and the endoplasmic reticulum. These wide-field fluorescence images reach an isotropic


100-nm resolution based on a subdiffraction fringe pattern generated by the interference of two counter-propagating evanescent waves. In the context of structured illumination TIRF microscopy, the interference of evanescent waves is particularly interesting. The standing waves that can be launched through a high-aperture objective lens (TIR) indeed provide extended resolution, because they have an effective wavelength λ0/(2n1 sin θ1), where n1 is the optical index of the highest-index medium and θ1 the incident angle. The physical resolution in the evanescent region inside the low-index medium (n2) is defined by the wavelength of light in the higher-index medium, which scales it down by a factor n2/n1 compared to propagative waves in medium 2. High-NA objective-launched standing-wave total internal reflection fluorescence microscopy was shown experimentally to reach a lateral resolution of approximately 100 nm for a 60× objective lens with NA = 1.45, beyond the classical diffraction limit [99].
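The frequency-shifting mechanism of Eq. 24 can be checked numerically in one dimension: multiplying the sample by a fringe pattern of frequency κ0 creates spectral copies shifted by ±κ0 (moiré terms), which is what folds otherwise inaccessible object frequencies into the OTF passband. The grid size, frequencies, and modulation depth below are illustrative choices, not values from the text; this is a minimal sketch of the spectral bookkeeping, not a full SIM reconstruction.

```python
import numpy as np

N = 1024
x = np.arange(N)
f_s, kappa0 = 50 / N, 40 / N           # sample and fringe frequencies (cycles/pixel)
m, phi = 1.0, 0.0                      # modulation depth and fringe phase (Eq. 22)

sample = np.cos(2 * np.pi * f_s * x)                  # object line at FFT bin 50
illum = 1 + m * np.cos(2 * np.pi * kappa0 * x + phi)  # fringe pattern at bin 40
spec = np.abs(np.fft.fft(sample * illum))

# Eq. 24: the spectrum carries the object line (bin 50) plus two copies
# shifted by -/+ kappa0 (bins 10 and 90) -- the moire terms.
peaks = sorted(np.argsort(spec[:N // 2])[-3:].tolist())
```

A SIM reconstruction would record several such images at different fringe phases φ, separate the three terms of Eq. 24, and shift the side copies back to their true frequencies before recombining them.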

Surface Plasmon Resonance: From Evanescent to Guided Waves

Principles of Surface Plasmon Resonance
Surface plasmons are surface charge density oscillations which are coupled to evanescent electromagnetic fields at the interface of a metal ($\varepsilon_g = \varepsilon_g' + i\varepsilon_g''$) and a dielectric (ε2). The decay length of the field into the metal is of the order of 10 nm for visible wavelengths, and about half of the wavelength in the dielectric. For charges to accumulate at this interface, the electric field needs to be TM (p) polarized, such that a driving force for the charges occurs normal to it. SPPs are characterized by a propagation wave number kx, parallel to the interface, that is larger than the modulus of k in free space. Therefore, SPPs cannot be excited by a free-propagating field. However, they can be excited by an evanescent field, created either by TIR or by light scattering from nanoscale structures. The in-plane wave vector of SPPs reads:

$$k_{xSP}(\lambda_0) = k'_{xSP}(\lambda_0) + i\,k''_{xSP}(\lambda_0) = \frac{2\pi}{\lambda_0}\sqrt{\frac{\varepsilon_2\,\varepsilon_g(\lambda_0)}{\varepsilon_2 + \varepsilon_g(\lambda_0)}}\,, \tag{25}$$

where εg(λ0) is the dielectric function of gold. The electric field propagating along the x direction, coupled to the surface plasmon, can be written as:

$$E(x) = E_0\,e^{ik_{xSP}x} = E_0\,e^{ik'_{xSP}x}\,e^{-k''_{xSP}x}. \tag{26}$$

This electric field is the product of two terms: the first one propagates, whereas the second one decays exponentially along the x direction. Generally, for TM (p) polarized incident light, the resonant electromagnetic field reaches its maximum at the interface, with a typical enhancement factor of $10^2$ compared to the incident field amplitude. Conversely, there is no enhanced field for TE (s) polarized incident light.
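Equations 25–28 can be evaluated directly. The sketch below uses the gold dielectric function quoted in the figure captions of this chapter (εg = −11.8134 + 1.2144i at λ0 = 632.8 nm) and water (n2 = 1.335); the resulting numbers are indicative only, since tabulated optical constants vary between sources.

```python
import numpy as np

lam0 = 632.8e-9                        # He-Ne wavelength (m)
eps_g = -11.8134 + 1.2144j             # gold dielectric function used in this chapter
eps2 = 1.335**2                        # water
k0 = 2 * np.pi / lam0

# Eq. 25: complex SPP in-plane wave vector
k_sp = k0 * np.sqrt(eps2 * eps_g / (eps2 + eps_g))
n_eff = k_sp.real / k0                 # effective SPP index, larger than n2

# Lateral propagation length, the left-hand definition of Eq. 27
L_x = 1 / (2 * k_sp.imag)              # a few micrometers at 632.8 nm in water

# Eq. 28: decay length of the evanescent tail into the dielectric
L_z = (lam0 / (2 * np.pi)) * np.sqrt(abs(eps_g.real + eps2) / eps2**2)
```

With these constants, L_x comes out at a few micrometers and L_z at roughly 180 nm; both grow quickly with λ0, consistent with the sensitivity/resolution trade-off discussed below.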


Fig. 8 Kretschmann configuration for SPR excitation. (a) The SPR is coupled by a triangular prism. (b) Modulus of the reflectivity |rTM|(θi) versus θi for a TM polarized beam. (c) Modulus of the reflectivity |rTE|(θi) versus θi for a TE polarized beam. A model layer of 50 nm is chosen for this computation. Four wavelength values are represented: 500 nm (blue), 633 nm (red), 750 nm (brown) and 1500 nm (black). The optical index dispersion data for water and gold were taken from Refs. [101, 102], the ones for glass (BK7) were retrieved from Schott Optical Glass data sheets

The most famous device for SPR excitation was proposed by Kretschmann [26, 100] (Fig. 8a): a metal (usually silver or gold) film is placed at the interface of two dielectric media (1 and 2) such that medium 2 has a lower refractive index than medium 1. Beyond the TIR angle θ1TIR, for a specific incident angle θ1SPR the intensity of the reflected light decreases sharply. The SPR resonance angle θ1SPR is related to the dielectric function ε2 of the medium 2 in contact with gold when ε1 and εg are fixed. The fact that kSPR is parallel to the gold-dielectric interface confers on the surface plasmon wave the property of a guided wave. Importantly, SPR cannot be excited by TE (s) polarized light (Fig. 8c) and requires instead a TM (p) polarization (Fig. 8b). The SPR resonance angle θ1SPR and the resonance width both decrease with the wavelength λ0 of the exciting beam (Fig. 8b). Increasing the wavelength therefore improves both the angular sensitivity and the discrimination power of SPR for minute refractive index variations. However, this gain of sensitivity comes at the expense of spatial resolution, since a larger wavelength implies not only an increase of the evanescent field depth (spatial resolution along the z direction) but also an increase of the lateral propagation length Lx of the guided plasmon wave (spatial resolution along the x direction). Neglecting, in a first approximation, the correction to kSP due to the prism coupling (Δk), the lateral propagation length behaves as:

$$L_x(\lambda_0) = \frac{1}{2k''_{xSP}(\lambda_0)} = \frac{\lambda_0}{2\pi}\left\{\frac{\varepsilon_g'(\lambda_0)+\varepsilon_2}{\varepsilon_g'(\lambda_0)\,\varepsilon_2}\right\}^{3/2}\frac{\varepsilon_g'^{\,2}(\lambda_0)}{\varepsilon_g''(\lambda_0)}. \tag{27}$$


Fig. 9 SPR reflectivity curves computed in the Kretschmann configuration for a bare gold film (50 nm) separating a glass prism (BK7) and a dielectric medium (water), for TM (p) (solid lines) and TE (s) (dashed-dotted lines) polarizations. (a) Real part ℜ(r) (thick line) and imaginary part ℑ(r) (thin line) of r versus θ1. (b) Modulus |r|(θ1) of the reflectivity. (c) Phase ϕ(θ1) of the reflected field. (d) 3D representation of the reflectivity r in the space (ℑ(r), ℜ(r), θ1)

Typical values for the lateral decay length are Lx ≈ 22 μm for λ0 = 515 nm and Lx ≈ 500 μm for λ0 = 1060 nm. The surface plasmon z decay length inside the dielectric is much shorter:

$$L_{zSP}(\lambda_0) = \frac{\lambda_0}{2\pi}\left|\frac{\varepsilon_g'(\lambda_0)+\varepsilon_2}{\varepsilon_2^2}\right|^{1/2}. \tag{28}$$

LzSP increases nonlinearly with λ0, from about half the wavelength at λ0 = 633 nm (300 nm) to about twice the wavelength at λ0 = 1550 nm (3 μm). Figure 9 illustrates the complex reflectivity curves computed for a thin gold film (50 nm) in contact with water at λ0 = 632.8 nm. |rTM| and ϕTM (solid lines) differ drastically from |rTE| and ϕTE (dashed-dotted lines), confirming that the SPP-related steep variations of the reflectivity modulus and phase at θ1SPR = 1.2517 rad occur in TM polarization only. The real and imaginary parts of rTM have a remarkable behavior at plasmon resonance: ℜ(rTM) first increases beyond the TIR angle θ1TIR, decreases at θ1SPR, approaches zero, and increases again; ℑ(rTM) reaches a plateau beyond the TIR angle θ1TIR and drops to negative values at θ1SPR. Figure 9d illustrates in three dimensions (ℑ(r), ℜ(r), θ1) the rotation of rTM in the complex plane at θ1SPR. The width of the plasmon resonance (Fig. 9b) increases with the surface plasmon radiative losses. When λ0 increases, the SPP resonance gets narrower (Fig. 8b) and the


radiative losses decrease. The angular width of the surface plasmon resonance is determined by the imaginary part of the surface plasmon wave vector [103]:

$$\Delta\theta_{1SPR}(\lambda_0) = \frac{2\,k''_{xSP}(\lambda_0)}{\sqrt{\left(\dfrac{2\pi}{\lambda_0}\right)^{2} - k'^{\,2}_{xSP}(\lambda_0)}}\,. \tag{29}$$

We conclude from Fig. 9 that plasmon resonance is recognizable not only by a sharp decrease of the reflectivity modulus in TM polarization but also by a sharp drop of the phase of the reflected field. This phase drop at a very precise value of the incident angle should make the reflected beam very sensitive to the GH effect, since at this angle the variation of the reflected phase with the wave number k is quite steep [59, 104, 105]. Figures 10 and 11 show the lateral shifts predicted by Eq. 10 and Refs. [56–59] for a thin gold layer (εg = −11.8134 + 1.2144i at 632.8 nm) inserted between glass (BK7) and water, for two different gold thicknesses (50 and 30 nm, respectively). These figures provide deep insight into how the plasmon radiative losses enhance or attenuate the amplitude of the GH shift. For a 50 nm thick gold film, as commonly used for prism-coupled SPR excitation, we note that a first GH shift occurs around the TIR angle for TM polarization (Fig. 10c, d) and seems more localized than for a bare glass-water interface. A second GH shift of greater amplitude and opposite sign occurs at plasmon resonance. Surprisingly, before dropping to very negative shift values at θ1SPR, the GH shift increases before and after resonance, in a sort of lateral rebound. Negative GH shifts are related to metal absorption (intrinsic damping) and depend on the gold thickness, whereas positive GH shifts are due to radiative damping [106–108]. When the thickness of the gold film increases, so that the intrinsic damping becomes larger than the radiative damping, a negative GH shift is observed; in the reverse case, a positive GH shift occurs. The shape of D/λ0 versus θ1 tells us that if we use a high numerical aperture objective lens to launch the surface plasmons in a wide-field configuration, the different reflected beams should cross each other around plasmon resonance, leading to some aberrations in the reflected image of the focused spot [61].
With a 30 nm thick gold film, the GH shift in TM polarization changes sign and becomes very small at plasmon resonance (Fig. 11c, d). The angular width of the plasmon resonance increases, because the plasmon wave inside the metal also leaks back into the glass (radiative loss). While this fading of the SPR sharpness could be considered a disadvantage for SPR sensitivity, this simple computation shows that, in contrast, it could become an important criterion for improving the resolution of SPR microscopy.
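The reflectivity behavior underlying Figs. 10 and 11 can be reproduced with a standard three-layer (glass//gold//water) Fresnel calculation in TM polarization, and a GH-type lateral shift can then be estimated from the angular derivative of the reflected phase (an Artmann-type relation, D = −(dϕ/dθ)/(k0 n1 cos θ), used here as a hedged stand-in for the chapter's Eq. 10, which is not reproduced in this section). The optical constants are those quoted in the figure captions; this is a sketch, not the authors' exact computation.

```python
import numpy as np

lam0 = 632.8e-9
k0 = 2 * np.pi / lam0
eps1, eps_g, eps2 = 1.5151**2, -11.8134 + 1.2144j, 1.335**2  # BK7, gold, water

def r_tm(theta1, d_g):
    # TM (p) reflection coefficient of a glass // gold(d_g) // water stack
    kx = k0 * np.sqrt(eps1) * np.sin(theta1)
    kz1 = np.sqrt(eps1 * k0**2 - kx**2 + 0j)
    kzg = np.sqrt(eps_g * k0**2 - kx**2 + 0j)
    kz2 = np.sqrt(eps2 * k0**2 - kx**2 + 0j)
    r12 = (eps_g * kz1 - eps1 * kzg) / (eps_g * kz1 + eps1 * kzg)
    r23 = (eps2 * kzg - eps_g * kz2) / (eps2 * kzg + eps_g * kz2)
    ph = np.exp(2j * kzg * d_g)
    return (r12 + r23 * ph) / (1 + r12 * r23 * ph)

theta = np.linspace(1.10, 1.35, 5000)          # beyond the TIR angle (~1.078 rad)
r50, r30 = r_tm(theta, 50e-9), r_tm(theta, 30e-9)
theta_spr = theta[np.argmin(np.abs(r50))]      # SPR dip, near 1.25 rad

# GH-type shift from the reflected-phase derivative (Artmann-type estimate)
phi50 = np.unwrap(np.angle(r50))
D = -np.gradient(phi50, theta) / (k0 * np.sqrt(eps1) * np.cos(theta))
```

The 50 nm film yields a deep, narrow dip and the steepest phase variation (hence the largest |D|) at resonance, while the 30 nm film gives the broader, shallower dip described above.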


Fig. 10 Computation of the GH shift from a 50 nm thick gold film inserted between glass n1 and water n2. (a) Modulus of the reflectivity versus θ1. (b) Phase of the reflectivity. (c) GH shift computed numerically from Eq. 10. (d) Zoom on (c). TM polarization (red), TE polarization (blue). n1 = 1.5151, n2 = 1.335, εg = −11.8134 + 1.2144i, λ0 = 632.8 nm, dg = 50 nm


Fig. 11 Computation of the GH shift from a 30 nm thick gold film inserted between glass n1 and water n2. (a) Modulus of the reflectivity versus θ1. (b) Phase of the reflectivity. (c) GH shift computed numerically from Eq. 10. (d) Zoom on (c). TM polarization (red), TE polarization (blue). n1 = 1.5151, n2 = 1.335, εg = −11.8134 + 1.2144i, λ0 = 632.8 nm, dg = 30 nm


Sensitivity of SPR to Dielectric Layers: From Evanescent to Guided Waves
The sensitivity of SPPs made this technique very attractive for biosensing applications involving very thin layers adsorbed on gold [109]. SPP changes with thicker dielectric layers are also interesting to investigate. As an illustration, we consider two gold films, of thickness 50 and 25 nm respectively, inserted as before between glass and water, and we add on the gold an extra isotropic dielectric layer (d) with optical index nd = 1.4. We use the transfer matrix equation (Eq. 14) to compute rTM and rTE. The shift of the plasmon resonance to larger θ1 values with the thickening of the dielectric layer is more important for a gold film of thickness 50 nm (Fig. 12) than of thickness 25 nm (Fig. 13). This modification of the SPR resonance is also more visible on the GH shift curves D(θ1) for a 50 nm thick gold film (Fig. 12). For the 25 nm gold film (Fig. 13), the SPR resonance, which is already much attenuated, shifts only mildly and does not fade in depth, as compared with the 50 nm thick gold film (Fig. 12). Consequently, tiny thickness variations of an adsorbed dielectric layer are much better detected with a 50 nm gold film. However, for Wd values larger than 200 nm, the SPR reflectivity curve reaches a limit angle θ1LIM ≈ 1.49 rad (resp. 1.40 rad) for a gold film thickness of 50 nm (resp. 25 nm). In experimental situations, objective lenses made of BK7 glass (n1 = 1.5151) have numerical apertures limited to about 1.45, corresponding to an angle θmax = arcsin(NA/n1) ≈ 1.27 rad, which makes it impossible to use this SPR excitation mode for microscopic imaging; higher-index objective lenses are therefore mandatory. Nevertheless, the evolution of the reflectivity curve with the thickness of the dielectric layer is interesting, since it reveals the emergence, for much thicker dielectric layers, of other resonance peaks which are not due to SPR.
With dielectric layer thicknesses reaching the wavelength λ0 and beyond, the system behaves as an asymmetric metal//dielectric//water waveguide, where resonance modes can be excited [23, 45, 53] in the same TIR configuration as for SPR. It can be modeled by a four-layer system: a glass substrate n1, a thin gold film (ng, dg), a dielectric layer (nwg, Wwg) (the waveguide), and a final water medium with a smaller optical index (n2 < nwg). Using the same matrix formulation as in Eq. 14, we compute the reflectivity phase changes at each interface of the waveguide and the resonance conditions:

$$2\pi F_a(W_{wg},\theta_{wg}) = 2k_{wg}W_{wg}\cos\theta_{wg} + \phi_{wg//g}(\theta_{wg}) + \phi_{wg//2}(\theta_{wg}) = (2m+\zeta)\,\pi, \tag{30}$$

where ζ = 0 (resp. 1) depending on the TM (resp. TE) polarization, m is an integer. The complex reflection coefficient r at each interface of the waveguide film (wg) with its bounding media (J) (g for the gold film and 2 for the bulk water medium) is


Fig. 12 SPR curves and GH shifts from a gold film (dg = 50 nm) with an adsorbed dielectric layer of width Wd, in contact with a water medium. (a, d, g, j) Modulus of the reflectivity versus θ1. (b, e, h, k) Phase of the reflectivity (shifted by its value at θ1 = 0). (c, f, i, l) GH shift computed numerically from Eq. 10. TM polarization: red lines, TE polarization: blue lines. n1 = 1.5151, n2 = 1.335, εg = −11.8134 + 1.2144i, λ0 = 632.8 nm. Four values of the thickness of the adsorbed layer (nd = 1.4) are reported: Wd = 10, 50, 500, and 1800 nm, from top to bottom

$$r_{wg//J} = \frac{k_{z,wg}/n_{wg}^{2\rho} - k_{z,J}/n_{J}^{2\rho}}{k_{z,wg}/n_{wg}^{2\rho} + k_{z,J}/n_{J}^{2\rho}}\,, \tag{31}$$

where ρ = 0 for TE (s-polarization) and ρ = 1 for TM (p-polarization), and $k_{z,i} = (2\pi/\lambda)\left(n_i^2 - n_1^2\sin^2\theta_1\right)^{1/2}$. For nwg > n2, TIR occurs at the (d//2) interface and the reflectivity of gold rules the reflection at the (g//d) interface; the electromagnetic field in medium 2 is evanescent and $r_{wg//2} = |r_{wg//2}|\,e^{i\varphi_{wg//2}}$. Using the remarkable equality $\tan\phi = \dfrac{e^{i\phi}-e^{-i\phi}}{i\left(e^{i\phi}+e^{-i\phi}\right)}$, we get the following expression for the phase ϕwg//J:



Fig. 13 SPR curves and GH shifts from a gold film (dg = 25 nm) with an adsorbed dielectric layer of width Wd, in contact with a water medium. (a, d, g, j) Modulus of the reflectivity versus θ1. (b, e, h, k) Phase of the reflectivity (shifted by its value at θ1 = 0). (c, f, i, l) GH shift computed numerically from Eq. 10. TM polarization: red lines, TE polarization: blue lines. n1 = 1.5151, n2 = 1.335, εg = −11.8134 + 1.2144i, λ0 = 632.8 nm. Four values of the thickness of the adsorbed layer (nd = 1.4) are reported: Wd = 10, 50, 500, and 1800 nm, from top to bottom

$$\phi_{wg//J} = 2\arctan\!\left[\frac{i\left(1 - r_{wg//J}\right)}{1 + r_{wg//J}}\right]. \tag{32}$$

The SPRWG resonance m-modes are recognizable on the |r(θ1)| curves as very narrow dips (Figs. 12g, j and 13g, j). In contrast to SPP waves, SPRWG waves can be excited by both TM and TE polarized illumination. These resonant guided waves can be excited at the gold-dielectric interface by prism, grating, or objective-lens coupling. However, because they propagate inside the waveguide over much longer distances than SPR waves, they may integrate refractive index variations from farther away and be scattered similarly to plane waves in bulk materials. As reported in Refs. [110, 111], they can give rise to scattering, refraction, and diffraction processes


which can be analyzed in Fourier space (k space). The comparison of the SPRWG modes for two gold thicknesses (Figs. 12 and 13) is very instructive. For a rather inefficient SPR gold sensor (25 nm) (Fig. 13), the SPRWG resonance modes are as strong as for a very efficient SPR gold sensor film (50 nm) (Fig. 12). More interestingly, because the SPR resonance is smeared out, thinner gold films separate SPR from SPRWG more efficiently. The SPRWG peaks emerge at a finite set of Wwg values, separated by intervals δWwg ≈ λ0/(2nwg). These peaks accumulate at the angular value θ1//wg,TIR = arcsin(nwg/n1) ≈ 1.178 rad, corresponding to TIR at the glass/dielectric-layer interface. For each SPRWG resonance mode, the phase drops abruptly by about 2π, and the corresponding GH shift is quite large and sharply confined at the corresponding resonance angle. Note that the strong light confinement inside the SPRWG waveguides may produce GH shifts as large as tens of micrometers. These large GH shift effects look very promising for optical biosensors with improved sensitivity.

SPR-Based Microscopy for Biological Applications

Principles of Scanning Surface Plasmon Microscopy
The lateral resolution of SPR microscopy [112] was markedly improved by replacing the prisms with high numerical aperture objective lenses [34, 113, 114]. Photon scanning tunneling techniques were also used to capture evanescent fields and characterize SPP propagation on the scale of a few tens of micrometers [115–117], and to demonstrate that prism-coupled SPR imaging cannot achieve subwavelength resolution, due to the lateral propagation of the SPP waves [118, 119]. Subwavelength-resolution SPR microscopy images were obtained experimentally and theoretically with a far-field microscope [40, 113, 114, 120]. Subsequent high-resolution SPR microscopy studies with high numerical aperture objective lenses focused exclusively on amplitude images and did not discuss the possibility of far-field high-resolution SPR phase microscopy [36, 39, 121–125]. SPR phase microscopy was performed for the first time with a scanning SPR microscope (SSPRM) [41, 44] that combines (i) a heterodyne interferometer, (ii) a high numerical aperture objective lens, and (iii) a three-dimensional piezo-scanning device (Fig. 14a). The SSPRM performs, at a given position (X, Y), the reconstruction of a V(Z) integral of the backward-reflected electric field r(X, Y, θ, φ) over the radial θ and azimuthal φ angles (Fig. 14b). This method is directly inspired by scanning acoustic microscopy [38, 126–128]. For a linearly polarized light beam (TM or TE) before the objective lens, a mixture of both TM and TE polarizations is obtained after crossing the objective lens:


Fig. 14 SSPRM setup and experimental V(Z) curves from a glass//45 nm gold//water interface. (a) Microscope setup with the fully fibered interferometer. OI: optical isolator, WP: half-wavelength waveplates, L: lens, PMOF: polarization-maintaining optical fiber, OFC: optical fiber coupler, AOM: acousto-optic modulator, D: detector, CL: collimating lens. (b) Experimental |V(Z)| curves for TM (solid line) and TE (dashed line) polarizations recorded on a 45 nm gold film in contact with water. (c) |P²(θ1)r(θ1)| versus θ1, computed by inverse FFT of V(Z). (d) Phase ϕ(θ1) of r(θ1) (Reprinted with permission from Applied Optics [44])


$$V(X,Y,Z) = \int_0^{2\pi}\!\!\int_{\theta_{min}}^{\theta_{max}} P^2(\theta)\left[r_{TM}(\theta)\cos^2\varphi + r_{TE}(\theta)\sin^2\varphi\right] e^{\frac{4i\pi n_1 Z\cos\theta}{\lambda_0}}\,\sin\theta\;d\theta\,d\varphi, \tag{33}$$

where Z is the focus position of the light beam with respect to the gold//dielectric interface, P(θ) is the pupil function of the objective lens, n1 is the index of the coupling medium (lens and index-matching oil), and 2π/λ0 the wave number. Ideally, the pupil function P should be as constant as possible; in experimental situations, it rather looks like a slowly varying Gaussian. The three properties (i–iii) mentioned above confer on the SSPRM a unique ability to measure locally the phase of the reflected field, Φ = Φ(V(Z)), and its spatial derivative dΦ/dZ, which none of the other systems afforded previously. Actually dΦ/dZ, similarly to the SPR phase ϕ and its angular derivative dϕ/dθ1, crosses singular points at precise values of Z [129]. These Z positions correspond to highly contrasted phase images and hence increase the sensitivity of SPR imaging systems. Given a pure TM polarization, the coupling of the incoming light through a high numerical aperture objective lens simultaneously provides a structured illumination and focuses the SPPs [130]. With the change of variables [131] $v_z = (2n_1/\lambda_0)\cos\theta$, $dv_z = -(2n_1/\lambda_0)\sin\theta\,d\theta$, the V(Z) curve takes the form of a Fourier transform of the reflected field multiplied by the square of the objective lens pupil function P:

$$V_{TM,TE}(Z) = \frac{\lambda_0\pi}{2n_1}\int_{v_{zmin}}^{v_{zmax}} P^2(v_z)\,r_{TM,TE}(v_z)\,e^{2\pi i v_z Z}\,dv_z, \tag{34}$$

where $v_{zmin} = (2n_1/\lambda_0)\cos\theta_{max}$ and $v_{zmax} = (2n_1/\lambda_0)\cos\theta_{min}$. If the incoming beam is not diaphragmed at low θ values, we have θmin = 0 and $v_{zmax} = 2n_1/\lambda_0$. For an objective lens of numerical aperture NA, θmax = arcsin(NA/n1) and $v_{zmin} = (2n_1/\lambda_0)\sqrt{1 - NA^2/n_1^2}$.

With a high NA objective lens, pure TM (resp. TE) polarization requires special optical or opto-electronic devices to convert the light polarization from linear to radial for VTM(Z) (resp. azimuthal for VTE(Z)) [122, 132–134]. From experimental V(Z) curves, although they are inevitably limited in their range of Z values, we have shown that the modulus and the phase of r(θ1)P²(θ1) can be recovered by inverse Fourier transforming the complex V(Z) signals [43, 44] (Fig. 14c, d). As compared to prism-coupled SPR reflectivity devices, the advantage of the SSPRM is primarily a very strong confinement of light and a drastic gain in resolution [44]. Cancelling the pupil function P outside the angular interval [θmin = 0, θmax] and treating it as constant inside this interval, we can rewrite the integral in Eq. 34 as:


$$V_{TM,TE}(Z) = \frac{\lambda_0\pi}{2n_1}\int_{-\infty}^{+\infty} r_{TM,TE}(v_z)\,e^{2\pi i v_z Z}\,dv_z. \tag{35}$$

V(Z) corresponds to the Fourier transform of the complex reflectivity r(vz). Putting r = cst = 1, V(Z) can be computed analytically:

$$V(Z) = \frac{\lambda_0}{2n_1 Z}\,\sin\!\left(\frac{2\pi n_1(1-\cos\theta_{max})Z}{\lambda_0}\right)\,e^{\frac{2\pi i n_1(1+\cos\theta_{max})Z}{\lambda_0}}. \tag{36}$$

The complex function V(Z) written in Eq. 36 has two modulation periods:

$$\Delta Z_1 = \frac{\lambda_0}{n_1(1-\cos\theta_{max})} \quad\text{and}\quad \Delta Z_2 = \frac{\lambda_0}{n_1(1+\cos\theta_{max})}, \tag{37}$$

respectively. With NA = 1.45, n1 = 1.5151, θmax ≈ 1.28 rad, and λ0 = 632.8 nm, we get ΔZ1 ≈ 588 nm and ΔZ2 ≈ 324 nm. Thus |V(Z)| behaves as the modulus of a sinc function (|sin(t)/t|) of period ΔZ1/2 = 294 nm, centered around zero.
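The sinc behavior of Eqs. 35–37 can be checked numerically: for r = 1 restricted to the objective's angular window, V(Z) is the Fourier transform of a rectangular window. The sketch below recovers the ΔZ1 ≈ 588 nm and ΔZ2 ≈ 324 nm values quoted above and verifies that |V(Z)| vanishes at Z = ΔZ1/2; a simple Riemann-sum quadrature stands in for the exact integral.

```python
import numpy as np

lam0, n1, NA = 632.8e-9, 1.5151, 1.45
theta_max = np.arcsin(NA / n1)

# Eq. 37: the two modulation periods of V(Z)
dZ1 = lam0 / (n1 * (1 - np.cos(theta_max)))   # ~588 nm
dZ2 = lam0 / (n1 * (1 + np.cos(theta_max)))   # ~324 nm

# Eq. 35 with r = 1 on [v_zmin, v_zmax]
v = np.linspace(2 * n1 * np.cos(theta_max) / lam0, 2 * n1 / lam0, 4000)
dv = v[1] - v[0]

def V(Z):
    # Riemann-sum quadrature of Eq. 35 (constant pupil, r = 1)
    return np.sum(np.exp(2j * np.pi * v * Z)) * dv

ratio = abs(V(dZ1 / 2)) / abs(V(0.0))          # first zero of the sinc envelope
```

The ratio is essentially the quadrature noise floor, confirming the first zero of |V(Z)| at ΔZ1/2 ≈ 294 nm predicted by Eq. 36.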

V(Z) Response to a Discrete Phase Jump
Both the SPR and SPRWG reflectivity phases ϕ(θ1) display very sharp jumps at resonance (Figs. 12 and 13), with a strong enhancement of the GH shifts. The V(Z) function is also very strongly impacted by these phase jumps since, when uncoupling the variations of ϕ(θ1) from those of |r(θ1)|, one can show that the V(Z) shape is predominantly determined by the phase drop [129]. We revisit this demonstration here, modeling ϕ(θ1) by a discrete phase jump at resonance and keeping |r| = cst = 1. We simplify the phase profile by a step at θ1r (the phase derivative is infinite at SPR and/or SPRWG resonance):

$$(I_1)\qquad \phi = 0 \quad\text{for}\quad \theta_1 < \theta_{1r} \quad (v > v_r), \tag{38}$$

and

$$(I_2)\qquad \phi = \Delta\phi \quad\text{for}\quad \theta_1 > \theta_{1r} \quad (v < v_r), \tag{39}$$

where $v_r = (2n_1/\lambda_0)\cos\theta_{1r}$. Then,

$$V(Z) = V_{I_1}(Z) + V_{I_2}(Z) = \frac{\lambda_0\pi}{2n_1}\int_{v_{min}}^{v_r} e^{2\pi i v Z}\,e^{i\Delta\phi}\,dv + \frac{\lambda_0\pi}{2n_1}\int_{v_r}^{v_{max}} e^{2\pi i v Z}\,dv, \tag{40}$$


$$V(Z) = \frac{\lambda_0}{2n_1 Z}\left\{\sin\!\left(\frac{2\pi n_1 Z}{\lambda_0}\left[\cos\theta_{1r}-\cos\theta_{max}\right]\right) e^{\frac{2i\pi n_1 Z}{\lambda_0}\left[\cos\theta_{1r}+\cos\theta_{max}\right]}\,e^{i\Delta\phi} + \sin\!\left(\frac{2\pi n_1 Z}{\lambda_0}\left[\cos\theta_{min}-\cos\theta_{1r}\right]\right) e^{\frac{2i\pi n_1 Z}{\lambda_0}\left[\cos\theta_{min}+\cos\theta_{1r}\right]}\right\}. \tag{41}$$

If Δϕ is strictly an integer multiple of 2π, the phase drop has no impact on the V(Z) curve and we recover the sinc function of Eq. 36, characteristic of the objective lens pupil function. When Δϕ ≠ 2πm, VI2(Z) is a modulated sinc function with two characteristic periods:

$$\Delta Z_{I_2,1} = \frac{\lambda_0}{n_1\left(\cos\theta_{min}-\cos\theta_{1r}\right)}, \tag{42}$$

and

$$\Delta Z_{I_2,2} = \frac{\lambda_0}{n_1\left(1+\cos\theta_{1r}\right)}. \tag{43}$$

Putting θmin = 0 in Eq. 42 gives the formula proposed by Somekh et al. in 2000 [38]. For θ1r = θ1SPR = 1.265 rad (corresponding to the plasmon resonance for a 50 nm gold film in contact with water, Fig. 9), we get ΔZI2,1/2 ≈ 299 nm and ΔZI2,2 ≈ 321 nm. The first term VI1(Z) in Eq. 41 is also a modulated sinc function, characterized by two periods:

$$\Delta Z_{I_1,1} = \frac{\lambda_0}{n_1\left(\cos\theta_{1r}-\cos\theta_{max}\right)} \quad\text{and}\quad \Delta Z_{I_1,2} = \frac{\lambda_0}{n_1\left(\cos\theta_{1r}+\cos\theta_{max}\right)}. \tag{44}$$

If θmax = π/2, ΔZI1 , 1 ¼ ΔZI1 , 2 = λ0/(n1 cos θ1r)  1390 nm. The period of |V (Z )| estimated from Eq. 42 (ΔZI2,1/2 = 299 nm) is very close to the period of the modulus of the sinc function produced by an objective lens NA = 145 alone (294 nm). It may therefore be quite hard to distinguish a resonant mode from the windowing of the objective lens, since θmax ’ θSPR. It will therefore be necessary to select an objective lens with greater numerical aperture and optical index for measurements in water media. However, we will keep θ1  [0, π/2] for our computations, as an absolute limit. In that case ΔZ1/2 ’ 209 nm and ΔZ2 ’ 417 nm. Importantly, the modulation frequency ΔZr given by a resonance phase jump decreases with θ1r, the larger the θ1r, the faster the modulations of |V (Z )| coming from the SPRWG or SPR excitation. We note that these sinc functions are all centered around Z = 0, but their summation in complex form may lead to shifted and asymmetric V (Z ) curves. In simple cases, we have shown that a wavelet-based space-frequency (Z, ν) decomposition of V (Z) functions can efficiently separate the different sources of periodic modulations of these V (Z) curves and select those which correspond to SPR reflectivity resonances [42].
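The step-phase model of Eqs. 38–44 can be cross-checked numerically. A sketch in plain Python (the constant prefactor λ0π/2n1 of Eq. 40 is dropped, since only the modulation periods and the 2π-invariance matter):

```python
import cmath, math

# Numerical cross-check of Eqs. 38-44 (constant prefactor of Eq. 40 dropped)
lam0, n1 = 632.8, 1.5151                 # wavelength (nm), coupling-medium index
theta_min, theta_r = 0.0, 1.265          # theta_r: SPR angle, 50 nm gold in water
theta_max = math.asin(1.45 / n1)         # NA = 1.45 objective half-aperture

# Characteristic modulation periods of Eqs. 42-44
dZ_I2_1 = lam0 / (n1 * (math.cos(theta_min) - math.cos(theta_r)))
dZ_I2_2 = lam0 / (n1 * (1 + math.cos(theta_r)))
dZ_I1 = lam0 / (n1 * math.cos(theta_r))  # Eq. 44 with theta_max = pi/2

nu = lambda th: 2 * n1 * math.cos(th) / lam0     # spatial frequency of Eq. 40
nu_min, nu_max, nu_r = nu(theta_max), nu(theta_min), nu(theta_r)

def V(Z, dphi, steps=4000):
    """Midpoint Riemann sum of Eq. 40: unit |r| with a phase step at nu_r."""
    h = (nu_max - nu_min) / steps
    s = 0j
    for k in range(steps):
        v = nu_min + (k + 0.5) * h
        r = cmath.exp(1j * dphi) if v < nu_r else 1.0    # Eqs. 38-39
        s += r * cmath.exp(-2j * math.pi * v * Z) * h
    return s

# A phase jump that is an integer multiple of 2*pi leaves V(Z) unchanged
assert all(abs(V(Z, 2 * math.pi) - V(Z, 0.0)) < 1e-9 for Z in (100.0, 400.0, 900.0))
```

The same sum with Δϕ ≠ 2πm produces the shifted, modulated |V(Z)| profiles discussed above.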


F. Argoul et al.

V(Z) Responses from Glass//Water and Glass//Gold//Water Interfaces

In real situations, both the modulus and the phase of the reflectivity r contribute to V(Z), which can be modeled as the superposition of a set of sinc-type functions with prefactors that depend on the reflectivity amplitude. Figure 15 compares the V(Z) responses of different model interfaces: (i) a glass//water interface (black), (ii) a glass//gold film (25 nm)//water interface (blue), (iii) a glass//gold film (50 nm)//water interface (red), and (iv) a hypothetical interface with a constant r (green). The constant reflectivity gives a fully symmetric (green) curve (Fig. 15c), as predicted by Eq. 36, and the unwrapped phase Φ(Z) (Fig. 15d) follows a striking sawtooth profile with π jumps that correspond precisely to the Z positions where |V(Z)| = 0, namely, where Re(V(Z)) = 0 and Im(V(Z)) = 0. These π jumps are quite regular for the constant-reflectivity situation (green curve). The moduli of V(Z) and unwrapped phases Φ(Z) for the glass//water and glass//gold//water (25 and 50 nm thick gold films) interfaces are much less regular. The |V(Z)| curves are no longer symmetric, and their maxima shift noticeably. These |V(Z)| shifts are positive (towards positive Z values) for the glass//water interface (≈200 nm) and negative for the glass//gold (50 nm)//water interface (≈80 nm). Surprisingly, the thinner gold film gives two maxima, at 100 and 190 nm. These shifts of the V(Z) curves are the consequence of the GH shifts, and their signs also depend on the changes of sign of ∂ϕ/∂θ1 at the TIR angle or at the SPR angle (or both) [43, 120, 129].
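The glass//gold//water reflectivities underlying Fig. 15 can be reproduced with the standard three-medium TM Fresnel coefficient. A sketch assuming a tabulated gold permittivity εAu ≈ −11.74 + 1.26i at 632.8 nm (Johnson–Christy-type value, an assumption not stated in the text):

```python
import cmath, math

lam0 = 632.8                                   # wavelength, nm
k0 = 2 * math.pi / lam0
# permittivities; the gold value at 633 nm is an assumed tabulated figure
eps = {"glass": 1.5151**2, "gold": -11.74 + 1.26j, "water": 1.335**2}
d_gold = 50.0                                  # gold film thickness, nm

def kz(eps_i, kx):
    # out-of-plane wavevector; principal sqrt gives decaying evanescent waves
    return cmath.sqrt(eps_i * k0**2 - kx**2)

def r_tm(theta1):
    # three-medium (glass/gold/water) TM reflection coefficient
    kx = math.sqrt(eps["glass"]) * k0 * math.sin(theta1)
    k1, k2, k3 = (kz(eps[m], kx) for m in ("glass", "gold", "water"))
    r12 = (eps["gold"] * k1 - eps["glass"] * k2) / (eps["gold"] * k1 + eps["glass"] * k2)
    r23 = (eps["water"] * k2 - eps["gold"] * k3) / (eps["water"] * k2 + eps["gold"] * k3)
    ph = cmath.exp(2j * k2 * d_gold)
    return (r12 + r23 * ph) / (1 + r12 * r23 * ph)

# locate the SPR dip: |r| minimum just above the TIR angle asin(n2/n1) ~ 1.08 rad
thetas = [1.10 + 0.001 * i for i in range(400)]
th_spr = min(thetas, key=lambda t: abs(r_tm(t)))
```

With these values the dip lands near θ1 ≈ 1.27 rad, consistent with the θ1SPR ≈ 1.265 rad quoted above; the phase of `r_tm(θ1)` then exhibits the sharp jump responsible for the V(Z) modulations.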

Fig. 15 Reflectivity r(θ1) and V(Z) functions for a TM-polarized beam at a glass//water interface, without (black lines) or with an intermediate gold film of thickness 50 nm (red lines) or 25 nm (blue lines). (a) |r|(θ1) versus θ1. (b) ϕ(θ1)/π versus θ1. (c) |V(Z)| versus Z. (d) Φ(Z)/π versus Z. The green curves correspond to a real and constant reflectivity r


SPRWG Microscopy on Thick Dielectric Layers

Thick dielectric layers (≳ λ0) deposited on thin gold films can behave as asymmetric waveguides and favor the emergence of resonance modes, as predicted by Eq. 30. This phenomenon is illustrated in Figs. 16 and 17 by the reflectivity and V(Z) curves computed for a 1 μm thick dielectric film (nwg = 1.4) sandwiched between a thin gold film (50 and 25 nm, respectively) and water. For these gold thicknesses, the SPR resonance (in TM polarization) is pushed to the limit of the incident-angle θ1 domain and is no longer useful for biosensing tasks. Nevertheless, a very sharp dip appears at incident angles close to θ1TIR, which brings extra biosensing possibilities in another angular domain. Importantly, both TM and TE polarizations give SPRWG resonance modes, and it appears that the TE polarization gives sharper and greater GWR phase jumps for thinner gold films. The discrimination of the TM and TE polarizations from the |V(Z)| curves is more marked with a 25 nm (Fig. 17c) than with a 50 nm thick gold film (Fig. 16c), in part because for the thin gold film the reflectivity is significantly smaller for incident angles θ1 below θ1TIR. Another very interesting feature, which occurs for TM polarization and thinner gold films with a 1 μm dielectric layer, is the quasi-linear variation of the phase of V(Z). This phenomenon occurs because the |V(Z)| curve never gets very close to zero in that case; the complexity of the |r|(θ1) (Fig. 17a) and ϕ(θ1) (Fig. 17b) curves produces a superposition of at least three spatial frequency modulations (|r|(θ1) has three minima, and ∂ϕ(θ1)/∂θ1 has three extrema). From our previous discussion
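The four-medium [glass//gold//dielectric//water] reflectivities behind Figs. 16 and 17 follow from the same Fresnel building blocks, chained recursively. A sketch handling both TE and TM (the gold permittivity ε ≈ −11.74 + 1.26i at 632.8 nm is again an assumed tabulated value):

```python
import cmath, math

lam0 = 632.8
k0 = 2 * math.pi / lam0
# layer stack: [glass | gold (25 nm) | dielectric waveguide (1 um) | water]
eps_stack = [1.5151**2, -11.74 + 1.26j, 1.4**2, 1.335**2]  # gold eps: assumption
d_stack = [25.0, 1000.0]                                   # inner thicknesses, nm

def kz(e, kx):
    return cmath.sqrt(e * k0**2 - kx**2)

def fresnel(ea, eb, kza, kzb, pol):
    if pol == "TE":
        return (kza - kzb) / (kza + kzb)
    return (eb * kza - ea * kzb) / (eb * kza + ea * kzb)   # TM

def r_stack(theta1, pol, eps_l=None, d_l=None):
    eps_l = eps_stack if eps_l is None else eps_l
    d_l = d_stack if d_l is None else d_l
    kx = math.sqrt(eps_l[0]) * k0 * math.sin(theta1)
    kzs = [kz(e, kx) for e in eps_l]
    # fold the stack from the substrate side (standard recursive Fresnel scheme)
    r = fresnel(eps_l[-2], eps_l[-1], kzs[-2], kzs[-1], pol)
    for i in range(len(eps_l) - 3, -1, -1):
        ph = cmath.exp(2j * kzs[i + 1] * d_l[i])
        r_i = fresnel(eps_l[i], eps_l[i + 1], kzs[i], kzs[i + 1], pol)
        r = (r_i + r * ph) / (1 + r_i * r * ph)
    return r

# consistency: a zero-thickness waveguide layer must reduce to glass/gold/water
for pol in ("TE", "TM"):
    r4 = r_stack(1.2, pol, eps_l=eps_stack, d_l=[25.0, 0.0])
    r3 = r_stack(1.2, pol, eps_l=[eps_stack[0], eps_stack[1], eps_stack[3]], d_l=[25.0])
    assert abs(r4 - r3) < 1e-12
```

Scanning θ1 between asin(n2/n1) ≈ 1.08 rad and asin(nwg/n1) ≈ 1.18 rad with this function reveals the guided-mode reflectivity dips of Figs. 16 and 17 for both polarizations.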

Fig. 16 Reflectivity r(θ1) and V(Z) functions for a 1 μm thick dielectric layer (nwg = 1.4) on a 50 nm gold film, with TM- (red) and TE- (blue) polarized light. n1 = 1.5151, n2 = 1.335, λ0 = 632.8 nm


Fig. 17 Reflectivity r(θ1) and V(Z) functions for a 1 μm thick dielectric layer (nwg = 1.4) on a 25 nm gold film, with TM- (red) and TE- (blue) polarized light. n1 = 1.5151, n2 = 1.335, λ0 = 632.8 nm

of GH shifts (sections "Total Internal Reflection" and "Sensitivity of SPR to Dielectric Layers: From Evanescent to Guided Waves"), it is important to realize that each local extremum of the angular phase derivative (giving a GH shift) is likely to produce a different modulation frequency in the |V(Z)| curve.

SPR and SPRWG Microscopy on Living Cells

Direct and nonintrusive observation of adherent cells on solid substrates with objective-coupled surface plasmon resonance microscopy has become very competitive with other unstained microscopy methods. The first trials were performed on fixed cells in air [41, 42, 135, 136] and then in liquid medium [137–139]. However, very few experiments have been carried out on living cells [140–143], and exclusively on short-term recordings. Better resolution was achieved by combining high-resolution SSPRM (Fig. 14a) with radially polarized light (pure TM) [124, 128, 137, 143], the cell-substrate gap distance being estimated with bare gold surfaces as a control. A further improvement of this radial-SSPRM microscopy was obtained by introducing a fibered interferometer [43, 44, 144], which markedly improved the stability of the device and hence its temporal sensitivity over periods of several days. Time-lapse recordings are particularly interesting for the characterization of the dynamics of living cells: given that a full-resolution (500 × 500 pixels) image capture takes about 1 min, the range of temporal scales afforded by this microscopy is more


Fig. 18 V(X, Y) amplitude and phase images reconstructed from an IMR90 fibroblast for different Z values, in TM (radial) polarization. (a) |V(Z)| curves selected at the positions marked by colored symbols in panel (b). (b and d) Modulus images |V(X, Y)| scanned at positions Z = 0.8 and 1.2 μm. (c and e) Phase images Φ(X, Y) scanned at the same positions as (b, d). dgold = 45 nm. The SSPRM images were recorded in air (fixed cells) with a 60× objective lens with NA = 1.45 (Olympus) (Reprinted with permission of Optics Express [41])

than three decades, without detectable drift of the optical setup. The V(X, Y) images shown in Fig. 18b–d were recorded with an unfibered version of this radially polarized microscope. Three V(Z) curves were recorded: two on the cell body (red


and green) and one outside the cell (blue). These V(Z) curves are shifted artificially, putting their maxima at Z = 0. They all present a single maximum, similar to the previous computations shown in Fig. 15c for a 50 nm thick gold film. We compare in Fig. 18 the amplitude (Fig. 18b, d) and phase (Fig. 18c, e) images of the same zone, centered at an IMR90 cell nucleus. An optimum contrast of the images is obtained when choosing the Z focus at a local maximum of ∂|V(Z)|/∂Z [44]. The two Z foci selected in Fig. 18b, d are marked by two black vertical lines (b and d) in Fig. 18a. The first focus position, Z = 0.8 μm, concentrates on cellular structures close to the gold surface, such as those surrounding the nucleus (Golgi apparatus and endoplasmic reticulum), which were immobilized upon cell dehydration by ethanol. The second Z focus position is chosen after the two main side lobes of the |V(Z)| curves, where the quasi-complete damping of V(Z) means that the microscope becomes sensitive to cell body structures beyond the evanescent field. In other words, at this defocus, the cell itself plays the role of a dielectric waveguide placed in air. For instance, in Fig. 18d, we note the emergence of bright disks inside the nucleus domain, with a size of about 1–2 μm. What is also very interesting is the information that we can retrieve from the phase images (Fig. 18c, e). These phase images were coded with a continuous gray scale; surprisingly, they are not smooth gray images but present different domains, each with a specific gray level characteristic of a phase plateau. On the boundaries of these domains, very sharp phase jumps appear, which delimit the nuclear membrane and the extracellular membranes (bottom of the images). Comparing the two focus positions in Fig. 18c, e, we note that the dynamic range of the phase has increased by a factor of 4/3 from Z = 0.8 to 1.2 μm.
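The focus-selection rule just mentioned (place the Z focus at a local maximum of ∂|V(Z)|/∂Z [44]) is easy to emulate on a model response. A sketch, assuming for illustration the pure |sinc| modulus with the 294 nm zero spacing found earlier for the NA = 1.45 objective:

```python
import math

P = 294.0  # nm, zero spacing of the model |V(Z)| for the NA = 1.45 objective

def Vmod(Z):
    """|sinc|-type modulus of the pupil-limited V(Z) (illustrative model)."""
    x = math.pi * Z / P
    return abs(math.sin(x) / x) if x else 1.0

def slope(Z, h=0.5):
    return (Vmod(Z + h) - Vmod(Z - h)) / (2 * h)   # centered finite difference

# scan 0.5-600 nm and pick the defocus with the steepest |V(Z)| edge
Zs = [0.5 * k for k in range(1, 1200)]
Z_focus = max(Zs, key=lambda Z: abs(slope(Z)))
assert 0 < Z_focus < P    # the steepest contrast sits before the first zero
```

In practice, the same search would be run on the experimentally sampled |V(Z)| curve rather than on this idealized model.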
This type of observation was made recently on living myoblasts [44], and finely sampled V(Z) curves were recorded on 2D grids to allow the estimation of the local refractive index of adherent cells. Some intracellular structures were observed in liquid and, because they behave as intrinsic intracellular waveguides, they lead to highly contrasted phase images. Red blood cells are very interesting objects for quantitative phase microscopy, since their interior, and therefore their optical index, can be assumed to be homogeneous [17]. They also give very nice height images with SPRWG microscopy (Fig. 19a). Actually, since their height variation remains limited to a few microns (in the example shown in Fig. 19a, the red blood cell was partly dried before imaging in air), the amplitude |V(Z)| or the phase at a given Z focus position may suffice to recover the internal optical index of the cell. In Fig. 19, we compare a |V(Z)| image (Fig. 19a) with a topographic image recorded in contact mode with an atomic force microscope (Fig. 19b). We have also recorded full |V(Z)| curves, from which we get an estimated index of 1.45 for a partly dried erythrocyte, slightly larger than previously estimated with digital holographic microscopy (1.4 ± 0.2) [145–147].

Conclusion

We have surveyed in this chapter the development of nonintrusive microscopy methods based on evanescent waves, providing formal support to the experimental observations that will help improve these tools further. SPR microscopy appears as


Fig. 19 Comparison of (a) an SSPRM image and (b) a topographic AFM image of an erythrocyte. Scale bar: 10 μm

a powerful means of further enhancing the electromagnetic field. Actually, it is more than that, because recent experimental studies have also shown that SPR can be monitored by an electric polarization of the gold film interface [148] and that it can be modulated in time during a biosensing experiment. Materials other than metals, such as negative-index materials (NIM, or left-handed materials) [149], can be used to significantly enhance the field depth for SPR imaging and to increase the sensitivity to changes in refractive index in the bulk solution. Another interesting outcome is the possible excitation of SPPs by TE-polarized light at the interface between a dielectric and a metamaterial [150], which is impossible with positive-refractive-index materials according to Maxwell's equations. Importantly, recent efforts have been invested in pushing SPR microscopy beyond the optical or visible spectral range to the infrared [49–52, 151] and terahertz domains [152, 153]. When Otto [25] and Kretschmann and Raether [26] designed the optical system for exciting surface plasmon resonance, they could hardly have imagined how far this small system would lead. High-resolution surface plasmon microscopy has not yet fully demonstrated its whole application potential. In particular, its application to the diagnosis of living systems should be developed further, in a similar way as quantitative phase microscopy is now becoming a reference for nonintrusive imaging of living cells [17, 21, 154–157]. As compared to holographic or interferometric phase imaging methods, SPR and SPRWG have a decoupled sensitivity. They also bring the possibility of uncoupling the thickness and optical index variations of submicron-scale structures [158] when a guided wave mode is involved [43]. The improvement of waveguiding methods should also open new challenging perspectives for SPR-based sensor systems, since they could be used for intravital diagnosis without the need to stain the region of interest.


Acknowledgements We are indebted to Centre National de la Recherche Scientifique, Ecole Normale Supérieure de Lyon, Lyon Science Transfert (projet L659), Région Rhône Alpes (CIBLE Program 2011), INSERM (AAP Physique Cancer 2012), and the French Agency for Research (ANR-AA-PPPP-005, EMMA 2011) for their financial support.

References 1. Fan X, White IM, Shopova SI, Zhu H, Suter JD, Sun Y (2008) Sensitive optical biosensors for unlabeled targets: a review. Anal Chim Acta 620(1–2):8 2. Zourob M, Lakhatakia A (2010) Optical guided-wave chemical and biosensors I. Springer, Berlin/Heidelberg 3. Zourob M, Lakhatakia A (2010) Optical guided-wave chemical and biosensors II. Springer, Berlin/Heidelberg 4. Horvath R, Pedersen HC, Skivesen N, Svanberg C, Larsen NB (2005) Fabrication of reverse symmetry polymer waveguide sensor chips on nanoporous substrates using dip-floating. J Micromech Microeng 15(6):1260 5. Fang Y, Ferrie AM, Fontaine NH, Mauro J, Balakrishnan J (2006) Resonant waveguide grating biosensor for living cell sensing. Biophys J 91(5):1925 6. Velasco-Garcia MN (2009) Optical biosensors for probing at the cellular level: A review of recent progress and future prospects. Semin Cell Dev Biol 20(1):27 7. Zernike F (1955). How I discovered phase contrast. Science 121(3141):345 8. Stephens DJ, Allan VJ (2003) Light microscopy techniques for live cell imaging. Science (New York, NY) 300(5616):82 9. Bereiter-Hahn J, Fox CH, Thorell B (1979) Quantitative reflection contrast microscopy of living cells. J Cell Biol 82:767 10. Verschueren H (1985) Interference reflection microscopy in cell biology: methodology and applications. J Cell Sci 75:279 11. Popescu G, Deflores LP, Vaughan JC, Badizadegan K, Iwai H, Dasari RR, Feld MS (2004) Fourier phase microscopy for investigation of biological structures and dynamics. Opt Lett 29 (21):2503 12. Rappaz B, Marquet P, Cuche E, Emery Y, Depeursinge C, Magistretti P (2005) Measurement of the integral refractive index and dynamic cell morphometry of living cells with digital holographic microscopy. Opt Express 13(23):9361 13. Tychinskii VP (2001) Coherent phase microscopy of intracellular processes. Physics-Uspekhi 44(6):617 14. Popescu G, Ikeda T, Dasari RR, Feld MS (2006) Diffraction phase microscopy for quantifying cell structure and dynamics. 
Opt Lett 31(6):775 15. Tychinskii VP (2007) Dynamic phase microscopy : is a ‘dialogue’ with the cell possible ? Physics-Uspekhi 50(5):513 16. Bon P, Maucort G, Wattellier B (2009) Quadriwave lateral shearing interferometry for quantitative phase microscopy of living cells. Opt Express 17(15):468 17. Popescu G (2011) Quantitative phase imaging of cells and tissues. McGraw Hill, New York 18. Bon P, Savatier J, Merlin M, Wattellier B, Monneret S (2012) Optical detection and measurement of living cell morphometric features with singleshot quantitative phase microscopy. J Biomed Opt 17(7):076004 19. Martinez-Torres C, Berguiga L, Streppa L, Boyer-Provera E, Schaeffer L, Elezgaray J, Arneodo A, Argoul F (2014) Diffraction phase microscopy: retrieving phase contours on living cells with a wavelet-based space-scale analysis. J Biomed Opt 19(3):036007 20. Martinez-Torres C, Laperrousaz B, Berguiga L, Boyer-Provera E, Elezgaray J, Nicolini FE, Maguer-Satta V, Arneodo A, Argoul F (2015) Deciphering the internal complexity of living cells with quantitative phase microscopy: a multiscale approach. J Biomed Opt 20(9):096005


21. Martinez Torres C, Laperrousaz B, Berguiga L, Boyer Provera E, Elezgaray J, Nicolini FE, Maguer-Satta V, Arneodo A, Argoul F (2016) In: Popescu G, Park Y (eds) Quantitative phase imaging II. SPIE proceedings, SPIE, Bellingham, WA vol 9718. p 97182C 22. Li SY, Ramsden JJ, Prenosil JE, Heinzle E (1994) Measurement of adhesion and spreading kinetics of baby hamster kidney and hybridoma cells using an integrated optical method. Biotechnol Prog 10(5):520 23. Tiefenthaler K, Lukosz W (1989) Sensitivity of grating couplers as integrated-optical chemical sensors. J Opt Soc Am B 6(2):209 24. Kunz RE, Cottier K (2006). Optimizing integrated optical chips for labelfree (bio-)chemical sensing. Anal Bioanal Chem 384(1):180 25. Otto A (1968) Excitation of nonradiative surface plasmon waves in silver by the method of frustrated total reflection. Z Phys 410:398 26. Kretschmann E, Raether H (1968). Radiative decay of non-radiative surface plasmons excitated by light. Z Naturforsch A 23:2135 27. Raether H (1988) Surface plasmons on smooth and rough surfaces and on gratings. Springer, Berlin/Heidelberg 28. Nelson BP, Grimsrud TE, Liles MR, Goodman RM, Corn RM (2001) Surface plasmon resonance imaging measurements of DNA and RNA hybridization adsorption onto DNA microarrays. Anal Chem 73(1):1 29. Homola J (2003) Present and future of surface plasmon resonance biosensors. Anal Bioanal Chem 377(3):528 30. Nikitin PI, Grigorenko AN, Beloglazov AA, Valeiko MV, Savchuk AI, Savchuk OA, Steiner G, Kuhne C, Huebner A, Salzer R (2000) Surface plasmon resonance interferometry for micro-array biosensing. Sensors Actuators A Phys 85(1–3):189 31. Notcovich AG, Zhuk V, Lipson SG (2000) Surface plasmon resonance phase imaging. Appl Phys Lett 76(13):1665 32. Grigorenko AN, Beloglazov AA, Nikitin PI (2000) Dark-field surface plasmon resonance microscopy. Opt Commun 174(1):151 33. Burke JJ, Stegeman GI, Tamir T (1986) Surface-polariton-like waves guided by thin, lossy metal films. 
Phys Rev B 33(8):5186 34. Kano H, Mizuguchi S, Kawata S (1998) Excitation of surface-plasmon polaritons by a focused laser beam. J Opt Soc Am A 15(4):1381 35. Kano H, Knoll W (2000) A scanning microscope employing localized surface-plasmonpolaritons as a sensing probe. Opt Commun 182:11 36. Stabler G, Somekh MG, See CW (2004) High-resolution wide-field surface plasmon microscopy. J Microsc 214(3):328 37. Zhang J, See CW, Somekh MG, Pitter MC, Liu SG (2004) Widefield surface plasmon microscopy with solid immersion excitation. Appl Phys Lett 85(22):5451 38. Somekh MG, Liu SG, Velinov TS, See CW (2000) Optical V(z) for high-resolution 2 pi surface plasmon microscopy. Opt Lett 25(11):823 39. Berguiga L, Zhang S, Argoul F, Elezgaray J (2007) High-resolution surface-plasmon imaging in air and in water: V(z) curve and operating conditions. Opt Lett 32(5):509 40. Somekh MG, Stabler G, Liu S, Zhang J, See CW (2009) Wide field high resolution surface plasmon interference microscopy. Opt Lett 34(20):3110 41. Berguiga L, Roland T, Monier K, Elezgaray J, Argoul F (2011) Amplitude and phase images of cellular structures with a scanning surface plasmon microscope. Opt Express 19(7):6571 42. Boyer-Provera E, Rossi A, Oriol L, Dumontet C, Plesa A, Berguiga L, Elezgaray J, Arneodo A, Argoul F (2013) Wavelet-based decomposition of high resolution surface plasmon microscopy V (Z) curves at visible and near infrared wavelengths. Opt Express 21(6):7456 43. Berguiga L, Boyer-Provera E, Martinez-Torres C, Elezgaray J, Arneodo A, Argoul F (2013) Guided wave microscopy: mastering the inverse problem. Opt Lett 38(21):4269


44. Berguiga L, Streppa L, Boyer-Provera E, Martinez-Torres C, Schaeffer L, Elezgaray J, Arneodo A, Argoul F (2016) Time-lapse scanning surface plasmon microscopy of living adherent cells with a radially polarized beam. Appl Optics 55(6):1216 45. Tien PK (1977) Integrated optics and new wave phenomena wave guides. Rev Mod Phys 49(2):361 46. Salamon Z, Macleod HA, Tollin G (1997) Surface plasmon resonance spectroscopy as a tool for investigating the biochemical and biophysical properties of membrane protein systems. II: Applications to biological systems. Biochim Biophys Acta 1331(2):131 47. Hickel W, Knoll W (1990) Surface plasmon optical characterization of lipid monolayers at 5 μm lateral resolution. J Appl Phys 67(8):3572 48. Aust EF, Knoll W (1993) Electrooptical waveguide microscopy. J Appl Phys 73(6):2705 49. Bivolarska M, Velinov T, Stoitsova S (2006) Guided-wave and ellipsometric imaging of supported cells. J Microsc 224:242 50. Golosovsky M, Lirtsman V, Yashunsky V, Davidov D, Aroeti B (2009) Midinfrared surfaceplasmon resonance: A novel biophysical tool for studying living cells. J Appl Phys 105 (10):102036 51. Yashunsky V, Marciano T, Lirtsman V, Golosovsky M, Davidov D, Aroeti B (2012) Real-time sensing of cell morphology by infrared waveguide spectroscopy. PLoS One 7(10):e48454 52. Yashunsky V, Kharilker L, Zlotkin-Rivkin E, Rund D, Melamed-Book N, Zahavi EE, Perlson E, Mercone S, Golosovsky M, Davidov D, Aroeti B (2013) Real-time sensing of enteropathogenic E. coli-induced effects on epithelial host cell height, cell-substrate interactions, and endocytic processes by infrared surface plasmon spectroscopy. PLoS One 8(10): e78431 53. Knoll W (1998) Interfaces and thin films as seen by bound electromagnetic waves. Annu Rev Phys Chem 49:569 54. Wolter VH (1950) Untersuchungen zur Strahlversetzung bei Totalreflexion des Lichtes mit der Methode der Minimumstrahlkennzeichnung. Z Naturforsch A J Phys Sci 5(3):143 55. 
Goos F, Hanchen H (1943) Ein neuer und fundamentaler versuch zur totalreflexion. Ann Phys 6(1):333 56. Artmann VK (1948). Berechnung der seitenverstzung des totalreflectierten strahles. Ann Phys 6(2):87 57. McGuirk M, Carniglia CK (1977) An angular spectrum representation approach to the GoosHanchen shift. J Opt Soc Am 67(1):103 58. Puri A, Birman JL (1986) Goos-Hänchen beam shift at total internal reflection with application to spatially dispersive media. J Opt Soc Am A 3(4):543 59. Götte JB, Aiello A, Wördman JP (2008) Loss-induced transition of the Goos-Hänchen effect for metals and dielectrics. Opt Express 16(6):3961 60. Horowitz BR, Tamir T (1973) Unified theory of total reflection phenomena at a dielectric interface. Appl Phys 1(1):31 61. Novotny L, Grober RD, Karrai K (2001) Reflected image of a strongly focused spot. Opt Lett 26(11):789 62. Novotny L, Hecht B (2006) Principles of nano-optics. Cambridge University Press, Cambridge 63. Steyer JA, Almers W (2001) A real-time view of life within 100 nm of the plasma membrane. Nat Rev Mol Cell Biol 2(4):268 64. Ambrose EJ (1956) A surface contact microscopy for the study of cell movements. Nature 178:1194 65. Byrne GD, Pitter MC, Zhang J, Falcone FH, Stolnik S, Somekh MG (2008) Total internal reflection microscopy for live imaging of cellular uptake of sub-micron non-fluorescent particles. J Microsc 231(Pt 1):168 66. Choi R (2015) Design and characterisation of a label free evanescent waveguide microscope. MPhil Thesis, University of Nottingham


67. Radler J, Sackmann E (1993) Imaging optical thicknesses and separation distances of phospholipid vesicles at solid surfaces. J Phys II 3:727 68. Limozin L, Sengupta K (2009) Quantitative reflection interference contrast microscopy (RICM) in soft matter and cell adhesion. Eur J Chem Phys Phys Chem 10:2752 69. Herold KE, Rasooly A (2012) Biosensors and molecular technologies for cancer diagnostics. CRC Press, Boca Raton 70. Hickel W, Knoll W (1990) Optical waveguide microscopy. Appl Phys Lett 57(13):1286 71. Thoma F, Langbein U, Mittler-Neher S (1997) Waveguide scattering microscopy. Opt Commun 134:16 72. Hassanzadeh A, Nitsche M, Mittler S, Armstrong S, Dixon J, Langbein U (2008) Waveguide evanescent field fluorescence microscopy: thin film fluorescence intensities and its application in cell biology. Appl Phys Lett 92(23):233503 73. Nahar Q (2014) Oriented collagen and applications of waveguide evanescent field scattering (WEFS) microscopy. PhD thesis, University of Western Ontario 74. Grandin HM, Städler B, Textor M, Vörös J (2006) Waveguide excitation fluorescence microscopy: A new tool for sensing and imaging the biointerface. Biosens Bioelectron 21(8):1476 75. Agnarsson B, Ingthorsson S, Gudjonsson T, Leosson K (2009) Evanescent-wave fluorescence microscopy using symmetric planar waveguides. Opt Express 17(7):5075 76. Horvath R, Pedersen HC, Skivesen N, Selmeczi D, Larsen NB (2005) Monitoring of living cell attachment and spreading using reverse symmetry waveguide sensing. Appl Phys Lett 86 (7):071101 77. Agnarsson B, Lundgren A, Gunnarsson A, Rabe M, Kunze A, Mapar M, Simonsson L, Bally M, Zhdanov VP, Hook F (2015). Evanescent lightscattering microscopy for label-free interfacial imaging: from single sub-100 nm vesicles to live cells. ACS Nano 9(12):11849 78. Binnig G, Quate CF (1986) Atomic force microscope. Phys Rev Lett 56(9):930 79. 
Betzig E, Trautman JK, Harris TD, Weiner JS (1991) Breaking the diffraction barrier: optical microscopy on the nanometer scale. Science 251:1468 80. Bailey B, Farkas DL, Lansing Taylor D, Lanni F (1993) Enhancement of axial resolution in fluorescence microscopy by standing-wave excitation. Nature 366:44 81. Cragg GE, So PT (2000) Lateral resolution enhancement with standing evanescent waves. Opt Lett 25(1):46 82. Frohn JT, Knapp HF, Stemmer A (2000) True optical resolution beyond the Rayleigh limit achieved by standing wave illumination. Proc Natl Acad Sci U S A 97(13):7232 83. Beck M, Aschwanden M, Stemmer A (2008) Sub-100-nanometre resolution in total internal reflection fluorescence microscopy. J Microsc 232(1):99 84. Streibl N (1984). Phase imaging by the transport equation of intensity. Opt Commun 49(1):6 85. Gustafsson MGL (2000) Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy. J Microsc 198(2):82 86. Gustafsson MGL (2005) Nonlinear structured-illumination microscopy: wide-field fluorescence imaging with theoretically unlimited resolution. Proc Natl Acad Sci U S A 102 (37):13081 87. Gustafsson MGL, Shao L, Carlton PM, Wang CJR, Golubovskaya IN, Cande WZ, Agard DA, Sedat JW (2008) Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination. Biophys J 94(12):4957 88. Somekh MG, Hsu K, Pitter MC (2008) Resolution in structured illumination microscopy: a probabilistic approach. J Opt Soc Am A 25(6):1319 89. Saxena M, Eluru G, Gorthi SS (2015) Structured illumination microscopy. Adv Opt Photon 7:241 90. Strohl F, Kaminski CF (2016). New frontiers in structured illumination microscopy. Optica 3(6):667 91. Muller M, Monkemoller V, Hennig S, Hubner W, Huser T (2016) Opensource image reconstruction of super-resolution structured illumination microscopy data in ImageJ. Nat Commun 7:10980


92. Chen J, Xu Y, Lv X, Lai X, Zeng S (2013) Super-resolution differential interference contrast microscopy by structured illumination. Opt Express 21(1):112 93. So PT, Kwon HS, Dong CY (2001) Resolution enhancement in standingwave total internal reflection microscopy: a point-spread-function engineering approach. J Opt Soc Am A 18(11):2833 94. Gliko O, Reddy GD, Anvari B, Brownell WE, Saggau P (2006) Standing wave total internal reflection fluorescence microscopy to measure the size of nanostructures in living cells. J Biomed Opt 11(6):064013 95. Chung E, Kim D, Cui Y, Kim YH, So PTC (2007) Two-dimensional standing wave total internal reflection fluorescence microscopy: superresolution imaging of single molecular and biological specimens. Biophys J 93(5):1747 96. Sentenac A, Belkebir K, Giovannini H, Chaumet PC (2009) High resolution total-internal fluorescence microscopy using periodically nanostructured glass slides. J Opt Soc Am A 26 (12):2550 97. Shen H, Huang E, Das T, Xu H, Ellisman M, Liu Z (2014) TIRF microscopy with ultra-short penetration depth. Opt Express 22(9):10728 98. Brunstein M, Wicker K, Hérault K, Heintzmann R, Oheim M (2013) Full-field dual-color 100nm super-resolution imaging reveals organization and dynamics of mitochondrial and ER networks. Opt Express 21(22):26162 99. Chung E, Kim D, So PTC (2006) Extended resolution wide-field optical imaging: objectivelaunched standing-wave total internal reflection fluorescence microscopy. Opt Lett 31(7):945 100. Kretschmann E (1978) The ATR method with focused light - application to guided waves on a grating. Opt Commun 26(1):41 101. Hale GM, Querry MR (1973) Optical constants of water in the 200-nm to 200-microm wavelength region. Appl Optics 12(3):555 102. Olmon RL, Slovick B, Johnson TW, Shelton D, Oh SH, Boreman GD, Raschke MB (2012) Optical dielectric function of gold. Phys Rev B 86(23):235147 103. 
Lirtsman V, Ziblat R, Golosovsky M, Davidov D, Pogreb R, SacksGranek V, Rishpon J (2005) Surface-plasmon resonance with infrared excitation: Studies of phospholipid membrane growth. J Appl Phys 98(9):093506 104. Yin X, Hesselink L, Liu Z, Fang N, Zhang X (2004) Large positive and negative lateral optical beam displacements due to surface plasmon resonance. Appl Phys Lett 85(3):372 105. Oh GY, Kim DG, Kim HS, Choi YW (2009) Analysis of surface plasmon resonance with Goos-Hanchen shift using FDTD method. Proc SPIE 7218:72180J 106. Wang LG, Chen H, Zhu SY (2005) Large negative GoosHanchen shift from a weakly absorbing dielectric slab. Opt Lett 30(21):2936 107. Liu X, Cao Z, Zhu P, Shen Q, Liu X (2006) Large positive and negative lateral optical beam shift in prism-waveguide coupling system. Phys Rev E 73(5):056617 108. Chen B, Basaran C (2011) Statistical phase-shifting step estimation algorithm based on the continuous wavelet transform for high-resolution interferometry metrology. Appl Optics 50(4):586 109. Homola J (2006) Springer series on chemical sensors and biosensors. Springer, Berlin/ Heidelberg 110. Rothenhausler B, Knoll W (1987) Total internal diffraction of plasmon surface polaritons. Appl Phys Lett 51(11):783 111. Rothenhausler B, Knoll W (1987) Plasmon surface polariton fields versus TIR evanescent waves for scattering experimentas at surfaces. Opt Commun 63(5):301 112. Fu E, Foley J, Yager P (2003) Wavelength-tunable surface plasmon resonance microscope. Rev Sci Instrum 74(6):3182 113. Somekh MG, Liu S, Velinov TS, See CW (2000) High-resolution scanning surface-plasmon microscopy. Appl Optics 39(34):6279 114. Somekh MG, See CW, Goh J (2000) Wide field amplitude and phase confocal microscope with speckle illumination. Opt Commun 174:75


115. Hecht B, Bielefeldt H, Novotny L, Inouye Y, Pohl D (1996) Local excitation, scattering and interference of surface plasmons. Phys Rev Lett 77(9):1889
116. Velinov T, Somekh MG, Liu S (1999) Direct far-field observation of surface-plasmon propagation by photoinduced scattering. Appl Phys Lett 75(25):3908
117. Bouhelier A, Ignatovich F, Bruyant A, Huang C, Colas des Francs G, Weeber JC, Dereux A, Wiederrecht GP, Novotny L (2007) Surface plasmon interference excited by tightly focused laser beams. Opt Lett 32(17):2535
118. Dawson P, de Fornel F, Goudonnet JP (1994) Imaging of surface plasmon propagation and edge interaction using a photon scanning tunneling microscope. Phys Rev Lett 72(18):2927
119. Dawson P, Puygranier BAF (2001) Surface plasmon polariton propagation length: a direct comparison using photon scanning tunneling microscopy and attenuated total reflection. Phys Rev 63:1
120. Somekh MG (2002) Surface plasmon fluorescence microscopy: an analysis. J Microsc 206(2):120
121. Huang B, Wang W, Bates M, Zhuang X (2008) Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy. Science 319:810
122. Watanabe K, Horiguchi N, Kano H (2007) Optimized measurement probe of the localized surface plasmon microscope by using radially polarized illumination. Appl Optics 46(22):4985
123. Watanabe K, Terakado G, Kano H (2009) Localized surface plasmon microscope with an illumination system employing a radially polarized zeroth-order Bessel beam. Opt Lett 34(8):1180
124. Vander R, Lipson SG (2009) High-resolution surface-plasmon resonance real-time imaging. Opt Lett 34(1):37
125. Roland T, Berguiga L, Elezgaray J, Argoul F (2010) Scanning surface plasmon imaging of nanoparticles. Phys Rev B 81(23):235419
126. Atalar A (1978) An angular-spectrum approach to contrast in reflection acoustic microscopy. J Appl Phys 49:5130
127. Atalar A (1979) A physical model for the acoustic signatures. J Appl Phys 50:8237
128. Pechprasarn S, Somekh MG (2012) Surface plasmon microscopy: resolution, sensitivity and crosstalk. J Microsc 246(3):287
129. Argoul F, Roland T, Fahys A, Berguiga L, Elezgaray J (2012) Uncovering phase maps from surface plasmon resonance images: towards a subwavelength resolution. C R Phys 13(8):800
130. Hu ZJ, Tan PS, Zhu SW, Yuan XC (2010) Structured light for focusing surface plasmon polaritons. Opt Express 18(10):10864
131. Ilett C, Somekh MG, Briggs GAD (1984) Acoustic microscopy of elastic discontinuities. Proc R Soc Lond A 393:171
132. Quabis S, Dorn R, Eberler M, Glöckl O, Leuchs G (2000) Focusing light to a tighter spot. Opt Commun 179:1
133. Dorn R, Quabis S, Leuchs G (2003) Sharper focus for a radially polarized light beam. Phys Rev Lett 91(23):233901
134. Shoham A, Vander R, Lipson SG (2006) Production of radially and azimuthally polarized polychromatic beams. Opt Lett 31(23):3405
135. Sefat F, Denyer MCT, Youseffi M (2011) Imaging via widefield surface plasmon resonance microscope for studying bone cell interactions with micropatterned ECM proteins. J Microsc 241(3):282
136. Watanabe K, Matsuura K, Kawata F, Nagata K, Ning J, Kano H (2012) Scanning and non-scanning surface plasmon microscopy to observe cell adhesion sites. Biomed Opt Express 3(2):354
137. Moh KJ, Yuan XC, Bu J, Zhu SW, Gao BZ (2008) Surface plasmon resonance imaging of cell-substrate contacts with radially polarized beams. Opt Express 16(25):20734


F. Argoul et al.

138. Soon CF, Khaghani SA, Youseffi M, Nayan N, Saim H, Britland S, Blagden N, Denyer MCT (2013) Interfacial study of cell adhesion to liquid crystals using widefield surface plasmon resonance microscopy. Colloids Surf B 110:156
139. Peterson AW, Halter M, Tona A, Plant AL (2014) High resolution surface plasmon resonance imaging for single cells. BMC Cell Biol 15:35
140. Mahadi Abdul Jamil M, Denyer MCT, Youseffi M, Britland ST, Liu S, See CW, Somekh MG, Zhang J (2008) Imaging of the cell surface interface using objective coupled widefield surface plasmon microscopy. J Struct Biol 164:75
141. Wang Z, Ding H, Popescu G (2011) Scattering-phase theorem. Opt Lett 36(7):1215
142. Wang S, Xue L, Lai J, Li Z (2012) Three-dimensional refractive index reconstruction of red blood cells with one-dimensional moving based on local plane wave approximation. J Opt 14(6):065301
143. Toma K, Kano H, Offenhäusser A (2014) Label-free measurement of cell-electrode cleft gap distance with high spatial resolution surface plasmon microscopy. ACS Nano 8(12):12612
144. Streppa L, Berguiga L, Boyer Provera E, Ratti F, Goillot E, Martinez Torres C, Schaeffer L, Elezgaray J, Arneodo A, Argoul F (2016) In: Vo-Dinh T, Lakowicz JR, Ho HPAH, Ray K (eds) Plasmonics in biology and medicine XIII. SPIE proceedings, vol 9724. SPIE, Bellingham, WA, p 97240G
145. Popescu G, Park YK, Choi W, Dasari RR, Feld MS, Badizadegan K (2008) Imaging red blood cell dynamics by quantitative phase microscopy. Blood Cells Mol Dis 41(1):10
146. Park Y, Diez-Silva M, Popescu G, Lykotrafitis G, Choi W, Feld MS, Suresh S (2008) Refractive index maps and membrane dynamics of human red blood cells parasitized by Plasmodium falciparum. Proc Natl Acad Sci U S A 105(37):13730
147. Rappaz B, Barbul A, Hoffmann A, Boss D, Korenstein R, Depeursinge C, Magistretti PJ, Marquet P (2009) Spatial analysis of erythrocyte membrane fluctuations by digital holographic microscopy. Blood Cells Mol Dis 42(3):228
148. Lioubimov V, Kolomenskii A, Mershin A, Nanopoulos DV, Schuessler HA (2004) Effect of varying electric potential on surface-plasmon resonance sensing. Appl Optics 43(17):3426
149. Pendry J (2000) Negative refraction makes a perfect lens. Phys Rev Lett 85(18):3966
150. Ruppin R (2000) Surface polaritons of a left-handed medium. Phys Lett A 277(1):61
151. Shalaev VM, Cai W, Chettiar UK, Yuan HK, Sarychev AK, Drachev VP, Kildishev AV (2005) Negative index of refraction in optical metamaterials. Opt Lett 30(24):3356
152. Withayachumnankul W, Abbott D (2009) Metamaterials in the terahertz regime. IEEE Photonics J 1(2):99
153. Yao H, Zhong S (2014) High-mode spoof SPP of periodic metal grooves for ultra-sensitive terahertz sensing. Opt Express 22(21):25149
154. Tychinsky V (2009) The metabolic component of cellular refractivity and its importance for optical cytometry. J Biophotonics 2(8–9):494
155. Yu L, Mohanty S, Zhang J, Genc S, Kim MK, Berns MW, Chen Z (2009) Digital holographic microscopy for quantitative cell dynamic evaluation during laser microsurgery. Opt Express 17(14):12031
156. Bon P, Wattellier B, Monneret S (2012) Modeling quantitative phase image formation under tilted illuminations. Opt Lett 37(10):1718
157. Park Y, Best CA, Badizadegan K, Dasari RR, Feld MS, Kuriabova T, Henle ML, Levine AJ, Popescu G (2010) Measurement of red blood cell mechanics during morphological changes. Proc Natl Acad Sci U S A 107(15):6731
158. Elezgaray J, Berguiga L, Argoul F (2014) Plasmon-based tomographic microscopy. J Opt Soc Am A 31(1):155