Handbook of Speckle Interferometry 9781510645387, 9781510645394, 1510645381

This handbook introduces speckle techniques to nonspecialists to help them understand the basic principles of speckle interferometry.


English, 118 pages [120], 2022


Table of contents:
Related Titles
Copyright
Introduction to the Series
Contents
Preface
Acknowledgements
Motivation
Objectives
1 Fundamentals of Interference
2 Speckle Interference and Displacement
3 Electronic Speckle Pattern Interferometers
4 Illumination and Displacement Detection
5 Transient Displacement Analysis
6 Phase Detection
7 Overview of Applications
Appendix: Speckle Statistics
References
Index
About the Author


Speckle Interferometry Abundio Dávila

This handbook is an introduction to speckle techniques to help nonspecialists understand the basic principles of speckle interferometry. The book mainly focuses on the use of speckle patterns with direct phase-measuring methods that produce an instantaneous phase. The major electronic speckle pattern interferometry (ESPI) techniques are presented using simplified mathematical notation that includes rigid-body and standard-body displacements to estimate object pose changes with six degrees of freedom. Additionally, the adoption of temporal phase unwrapping instead of spatial phase unwrapping is promoted. This handbook also includes a summary of recent industrial applications, with an update on current research in the ESPI field.

Handbook of Speckle Interferometry
Abundio Dávila

SPIE Press
P.O. Box 10, Bellingham, WA 98227-0010

ISBN: 9781510645387
SPIE Vol. No.: TT122

Tutorial Texts Series Related Titles
• Interferometry for Precision Measurement, Peter Langenbeck, Vol. TT94
• Modulation Transfer Function in Optical and Electro-Optical Systems, Second Edition, Glenn D. Boreman, Vol. TT121
• Practical Optical Dimensional Metrology, Kevin G. Harding, Vol. TT119

(For a complete list of Tutorial Texts, see http://spie.org/publications/books/tutorial-texts.)

Other Related SPIE Press Titles

SPIE Field Guides:
• Interferometric Optical Testing, Eric P. Goodwin and James C. Wyant, Vol. FG10
• Displacement Measuring Interferometry, Jonathan D. Ellis, Vol. FG30

SPIE Press Monographs:
• Analog and Digital Holography with MATLAB®, Georges T. Nehmetallah, Rola Aylo, and Logan Williams, Vol. PM256
• Digital Shearography: New Developments and Applications, Lianxiang Yang and Xin Xie, Vol. PM267
• Digital Shearography: Theory and Application of Digital Speckle Pattern Shearing Interferometry, Wolfgang Steinchen and Lianxiang Yang, Vol. PM100
• Robust Speckle Metrology Techniques for Stress Analysis and NDT, Matias R. Viotti and Armando Albertazzi, Jr., Vol. PM251
• Selected Papers on Electronic Speckle Pattern Interferometry: Principles and Practice, Peter Meinlschmidt, Klaus D. Hinsch, and Rajpal S. Sirohi, Eds., Vol. MS132
• Speckle Phenomena in Optics: Theory and Applications, Second Edition, Joseph W. Goodman, Vol. PM312
• Tissue Optics: Light Scattering Methods and Instruments for Medical Diagnostics, Third Edition, Valery V. Tuchin, Vol. PM254

Library of Congress Cataloging-in-Publication Data

Names: Dávila, Abundio, author.
Title: Handbook of speckle interferometry / Abundio Dávila.
Description: Bellingham, Washington : SPIE, [2022] | Includes bibliographical references and index.
Identifiers: LCCN 2021021942 (print) | LCCN 2021021943 (ebook) | ISBN 9781510645387 (paperback) | ISBN 9781510645394 (pdf)
Subjects: LCSH: Holographic interferometry. | Speckle metrology.
Classification: LCC TA1555 .D38 2021 (print) | LCC TA1555 (ebook) | DDC 621.36/75–dc23
LC record available at https://lccn.loc.gov/2021021942
LC ebook record available at https://lccn.loc.gov/2021021943

Published by
SPIE
P.O. Box 10
Bellingham, Washington 98227-0010 USA
Phone: +1 360.676.3290
Fax: +1 360.647.1445
Email: [email protected]
Web: http://spie.org

Copyright © 2022 Society of Photo-Optical Instrumentation Engineers (SPIE)

All rights reserved. No part of this publication may be reproduced or distributed in any form or by any means without written permission of the publisher.

The content of this book reflects the work and thought of the author. Every effort has been made to publish reliable and accurate information herein, but the publisher is not responsible for the validity of the information or for any outcomes resulting from reliance thereon.

Printed in the United States of America.
First printing.

For updates to this book, visit http://spie.org and type “TT122” in the search field.

Introduction to the Series

The Tutorial Text series provides readers with an introductory reference text to a particular field or technology. The books in the series are different from other technical monographs and textbooks in the manner in which the material is presented. True to their name, they are tutorial in nature, and graphical and illustrative material is used whenever possible to better explain basic and more-advanced topics. Heavy use of tabular reference data and numerous examples further explain the presented concepts. A grasp of the material can be deepened and clarified by taking corresponding SPIE short courses.

The initial concept for the series came from Jim Harrington (1942–2018) in 1989. Jim served as Series Editor from its inception to 2018. The Tutorial Texts have grown in popularity and scope of material covered since 1989. They are popular because they provide a ready reference for those wishing to learn about emerging technologies or the latest information within a new field. The topics in the series have grown from geometrical optics, optical detectors, and image processing to include the emerging fields of nanotechnology, biomedical optics, engineered materials, data processing, and laser technologies. Authors contributing to the series are instructed to provide introductory material so that those new to the field may use the book as a starting point to get a basic grasp of the material. The publishing time for Tutorial Texts is kept to a minimum so that the books can be as timely and up-to-date as possible.

When a proposal for a text is received, it is evaluated to determine the relevance of the proposed topic. This initial reviewing process helps authors identify additional material or changes in approach early in the writing process, which results in a stronger book. Once a manuscript is completed, it is peer reviewed by multiple experts in the field to ensure that it accurately communicates the key components of the science and technologies in a tutorial style.

It is my goal to continue to maintain the style and quality of books in the series and to further expand the topic areas to include new emerging fields as they become of interest to our readers.

Jessica DeGroote Nelson
Optimax Systems, Inc.


Contents

Preface
  Acknowledgements
  Motivation
  Objectives
1 Fundamentals of Interference
  1.1 Interference
    1.1.1 Interference by two wavefronts
  1.2 Coherence Length
  1.3 Laser Properties
  1.4 Speckle Fundamentals
    1.4.1 Subjective speckle
    1.4.2 Objective speckle
    1.4.3 Speckle with partially coherent light
2 Speckle Interference and Displacement
  2.1 Rigid Body Movement
  2.2 Intensity Correlation for Open or Closed Fringes
  2.3 Carrier Fringe Patterns in Speckle Interferometry for Open-Fringe Generation
  2.4 Practical Removal of Environmental Instabilities
3 Electronic Speckle Pattern Interferometers
  3.1 Out-of-Plane Electronic Speckle Pattern Interferometer
  3.2 In-plane Electronic Speckle Pattern Interferometer
    3.2.1 Generation of carrier fringes by rotation only (γ)
    3.2.2 Contouring
  3.3 Shearography
    3.3.1 Carriers in shearography
    3.3.2 Contouring in shearography
4 Illumination and Displacement Detection
  4.1 Illumination Using Two Simultaneous Light Sources
  4.2 Three Sequential Illuminations Using a Single Light Source
    4.2.1 L-shaped sequential illumination
    4.2.2 Triangular sequential illuminations: Inverted T and equal angles
5 Transient Displacement Analysis
  5.1 Recording of Transient Events
    5.1.1 Recording a simple fast event
  5.2 Recording Interferometric Events
  5.3 Camera Acquisition and Synchrony Methods Using Pulsed Lasers
    5.3.1 Method 1: Single pulse per camera frame
    5.3.2 Method 2: Single-pulse time delay per camera frame
    5.3.3 Method 3: Twin pulses per two camera frames
    5.3.4 Method 4: Twin pulses per frame
6 Phase Detection
  6.1 Carriers in Single-Shot Phase Detection
    6.1.1 Spatial synchronous detection
    6.1.2 Fourier transform method for difference of phase
    6.1.3 Carrier phase limits
      6.1.3.1 Carrier frequency precision
      6.1.3.2 Bandwidth
      6.1.3.3 Signal-to-noise ratio and speckle size
      6.1.3.4 Filtering and sampling
  6.2 Spatial Phase Shifting Using Carriers
  6.3 Phase Shifting Using Four Quadrants in the Image Plane
  6.4 Phase Shifting in Adjacent Pixels Using Polarization
  6.5 Heterodyne Interferometry
  6.6 Phase Shifting and Vortex Singularity Location
  6.7 Temporal Phase Unwrapping
7 Overview of Applications
Appendix: Speckle Statistics
References
Index

Preface

Optics laboratory practices are the best way to understand the laws of physics that control the properties of electromagnetic waves and the way they interact with matter. As students start their journey in this science and perform optics experiments to harness the features of light, it is always exciting to understand the mechanisms that allow us to measure physical properties using light. Light is everywhere and is part of our meaningful life. For the author, interacting with the universe through the use of light has been a joy and a journey of discovery. One of the most disconcerting aspects of electromagnetic waves is their interaction with rough surfaces, which produces so-called speckle. This book is written for students who enjoy lab practices and look forward to dealing with, and understanding, speckle properties to find new ways of using them for measurement purposes.

Acknowledgements

Scientific societies depend on many people interacting and building up knowledge in a particular area. However, it is only possible to acknowledge a few of those who contribute or help to complete a book like this. In my case, I shall name the few who contributed to my optics education. When I started studying optics for my bachelor's degree, J. C. Ruiz was the first teacher who demonstrated optics experiments at the F.C.F.M.; thank you, J. C., for your initial guidance. For my MSc studies, the professors of the Physical Optics department at CICESE were responsible for introducing me to the world of optics; thanks to all of you, but especially to J. E. A. Landgrave and H. Escamilla for allowing me to learn interferometry and physical optics theory. During the early years of my research, the influence of D. Malacara at CIO helped me start my professional development; many thanks to Daniel. Still, it was during my PhD studies that the staff of the Mechanical Engineering department at Loughborough University taught me the speckle interferometry techniques that I will be describing and updating in this book. From my PhD days, I shall always be grateful to D. Kerr, J. M. Huntley, P. D. Ruiz, J. Tyrer, and J. Coupland for their help with comments or insights from previous research.

Their comments and discussions were fundamental to building the accumulated knowledge that I used as a basis for my research in this optics area. I also thank my first colleague in this area, and now a dear friend, G. Kaufmann; thank you, Guillermo, for participating in our early research. I also acknowledge R. Mendoza for creating the graphics of several figures. I thank Dara Burrows for her talent and skill in editing the final version of this book. Finally, I would like to express my most sincere acknowledgements to my wife Lucy and my daughter, who help me by being supportive and encouraging me to persist until my dreams become real.

Motivation

Speckle interferometry techniques have been evolving since they were first developed, producing a range of tools for several applications in industry and research. Nowadays, there is a set of well-established methods, but the most recent ones are not yet comprehensively documented. A review of speckle techniques in terms of simple concepts such as carrier fringe patterns allows easy comprehension of the speckle phenomena, including digital holography techniques. Carrier fringe patterns can always be replaced by phase-shifting methods when the experiments are limited to closed-fringe patterns. Following the main emerging industrial and biomedical applications, the chapters included in this work also present the most recent research in this area.

The initial growth in speckle phenomena applications began after the invention of the laser, the first source of coherent light, which was produced and tested in initial experiments by illuminating rough surfaces [1]. At the beginning of laser light utilization, experimental observations of speckle phenomena showed a granular structure that could be visually associated with noise. Even though the same noise characteristic had been observed long before the laser's invention [2], the cameras and TVs needed to expand speckle applications were not readily available until the 1960s. One similar noise known at the time was the noise that appears when a TV is tuned without a signal, showing an image with white and dark pixels randomly distributed that change in time without any apparent cause. Later discoveries of the cosmic background radiation caused by the Big Bang explained the radio signals detected by the TV antenna; the random interference of this radiation produced the speckled patterns on the TV screens. Nonetheless, the idea of speckle as noise stagnated, and it took a few decades after the laser's invention for speckle to be considered a non-noisy signal that could be used routinely as a measurement tool.

To remove the noise characteristics of speckle patterns, the first research on speckle techniques required dark rooms, specialized anti-vibration tables, and isolation from thermal air currents.


These requirements caused the techniques to be branded as difficult to implement and limited the number of applications. Even now, interference techniques still present the same constraints, although standard interferometry equipment that avoids measurement inaccuracies can now be assembled from off-the-shelf optics components. Nonetheless, as speckle techniques evolved, some noise remained challenging to remove, such as speckle decorrelation, which, to this date, is still difficult to control for displacement measurement and persists as one of the main drawbacks to the attainable precision of these techniques. Health hazards, on the other hand, remain when optical setups include high-power lasers for speckle-generating techniques, and the reader should be aware of the training required and the safety measures (see, for example, Ref. [3]) needed while implementing the techniques described in this handbook.

Objectives

This handbook presents a review of speckle interferometry techniques ranging from works from the early 1990s [4–7] to more recent works [8–17]. However, in this new compilation of speckle interferometry techniques, the main aims are as follows:

• To explain the speckle techniques. The reader is first introduced to simple direct phase-measuring techniques based on carrier fringe patterns, also known as open fringes. These techniques give an instantaneous phase from a single fringe pattern, removing the bias and modulation problems usually found when phase-shifting procedures process non-carrier interference fringe patterns. As the procedures needed for phase extraction from closed fringes rely on spatial or temporal phase-shifting procedures, their implementation depends on the kind of technology implemented in the experimental setups.
• To explain the evolution of speckle interferometry techniques based on similar digital holography principles.
• To summarize the main electronic speckle pattern interferometry (ESPI) techniques under a simplified mathematical notation that includes rigid-body displacements, together with standard-body displacements, through the adoption of an object's pose with six degrees of freedom.
• To promote the adoption of temporal phase unwrapping instead of spatial phase unwrapping.
• To present a summary of recent industrial applications.
• To include a short review or update of the recent research in ESPI.

The material in this handbook is a helpful introduction to speckle techniques for non-specialists, and readers will benefit from the insight and understanding of the basic speckle interferometry principles presented here.

Chapter 1

Fundamentals of Interference

1.1 Interference

The electric amplitude of the simplest plane wavefront propagating along the z direction is given at the position (x, y, z) and time t by the following equation:

$$E(x, y, z, t) = a \cos(\omega t - kz), \tag{1.1}$$

where the electric oscillations are the same for each coordinate (x, y) and are therefore omitted; a is the wave amplitude; $k = 2\pi/\lambda$ is the wavenumber; $\omega$ is the angular frequency, which can be expressed in terms of the light frequency $\nu$ as $\omega = 2\pi\nu$; and $\lambda$ is the wavelength. The previous equation can be represented using complex numbers:

$$E(x, y, z, t) = \mathrm{Re}\{a\, e^{i(\omega t - kz)}\}, \tag{1.2}$$

or by

$$E(x, y, z, t) = \mathrm{Re}\{A\, e^{i\omega t}\}, \tag{1.3}$$

where $A(x, y, z) = a\, e^{-ikz}$ is the complex amplitude. The spatial dependence on z is frequently expressed in terms of the spatial phase $\phi = kz$, in which a displacement $z = \lambda$ is equivalent to a phase change of $2\pi$. Usually, the spatial phase is the only variable of interest in interferometry, as conventional detectors such as cameras or photodiodes are not able to register the temporal fluctuations, which are usually integrated in time to give [18]

$$\langle E^2(x, y, z, t) \rangle = \frac{A(x, y, z)\, A^*(x, y, z)}{2} = \frac{a^2}{2}. \tag{1.4}$$

Therefore, the detection from the spatial complex wavefront is expressed by the integral in time, which is also $\langle E^2(x, y, z, t) \rangle = |A(x, y, z)|^2/2$ and is known as the intensity (denoted by I in this book). Note that the intensity $I = AA^*/2 = a^2/2$ is a real value resulting from the spatial phase and the time integration on the detector. Still, as the scaling factor of 2 is unimportant, the intensity is usually expressed as $I = AA^*$.

1.1.1 Interference by two wavefronts

If two complex spatial wavefronts $E_1(x, y, z, t)$ and $E_2(x, y, z, t)$ interfere, we have the addition of their complex expressions as $E = E_1 + E_2$, where we have omitted the spatial and temporal dependencies. To obtain the detector intensity I, we express it in terms of the two wavefronts as $I = \langle EE^* \rangle$ to obtain

$$I = I_1 + I_2 + 2\sqrt{I_1 I_2}\, \cos(\phi_1 - \phi_2), \tag{1.5}$$

where $I_1 = \langle E_1 E_1^* \rangle$ and $I_2 = \langle E_2 E_2^* \rangle$, and the phase difference $\phi_1 - \phi_2 = \Delta\phi$ is proportional to the optical path difference (OPD), defined here as L, through the wavenumber $k = 2\pi/\lambda$:

$$\phi_1 - \phi_2 = \frac{2\pi}{\lambda}\, L. \tag{1.6}$$

It is easy to show that if the optical path difference L equals the wavelength $\lambda$, a phase change of $2\pi$ is obtained. In most interferometric applications, it is necessary to find the OPD denoted by L. The presented equations of interferometry assume infinite light wave trains, because finite light wave trains can limit the coherence length. A more general equation that includes a reduced coherence length is given by

$$I = I_1 + I_2 + 2\sqrt{I_1 I_2}\, \cos(kL)\, |\gamma(kL)|, \tag{1.7}$$

where $\gamma$ is the complex degree of coherence [18], which modulates the cosinusoidal fringes and reduces the visibility of the interference fringes.
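As a quick numerical illustration of Eqs. (1.5)–(1.7), the following sketch evaluates the two-beam interference intensity and the visibility loss caused by a reduced coherence length. The wavelength, intensity ratio, and the Gaussian model chosen for $|\gamma|$ are illustrative assumptions, not values taken from the book.

```python
# Minimal numerical check of two-beam interference, Eqs. (1.5)-(1.7).
# All variable names (I1, I2, opd, gamma, ...) are illustrative choices, not from the book.
import numpy as np

wavelength = 633e-9                # HeNe wavelength (m)
k = 2 * np.pi / wavelength         # wavenumber

I1, I2 = 1.0, 0.25                 # intensities of the two wavefronts (arbitrary units)
opd = np.linspace(0, 5 * wavelength, 1000)   # optical path difference L

# Ideal interference with infinite wave trains, Eqs. (1.5)/(1.6):
I_ideal = I1 + I2 + 2 * np.sqrt(I1 * I2) * np.cos(k * opd)

# Reduced coherence: model |gamma| as a Gaussian decay with a short coherence length,
# only to illustrate the visibility loss described by Eq. (1.7).
coherence_length = 2 * wavelength
gamma = np.exp(-(opd / coherence_length) ** 2)
I_partial = I1 + I2 + 2 * np.sqrt(I1 * I2) * np.cos(k * opd) * gamma

# Fringe visibility (Imax - Imin)/(Imax + Imin) for the ideal case:
visibility = (I_ideal.max() - I_ideal.min()) / (I_ideal.max() + I_ideal.min())
print(f"ideal visibility = {visibility:.3f}")          # 2*sqrt(I1*I2)/(I1+I2) = 0.8
print(f"modulated fringe range: {I_partial.min():.3f} .. {I_partial.max():.3f}")
```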

1.2 Coherence Length

When a light wave train is finite in length, it is due to a superposition of infinite wave trains that span a wavelength range $\Delta\lambda$ centered at $\lambda$; therefore, the coherence length of a finite wave train is

$$\delta l = \lambda^2 / \Delta\lambda. \tag{1.8}$$

Any OPD greater than this coherence length will nullify the interference effect between two finite wave trains. The round-trip coherence length is defined as half of the coherence length in air and is used as a measure of the depth resolution in interferometers that use wavelength changes to introduce phase changes. In general terms, the depth resolution is

$$\delta z = \frac{\delta l}{2n}, \tag{1.9}$$

where n is the refractive index of the medium.
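The following minimal sketch evaluates Eqs. (1.8) and (1.9) for an assumed broadband source; the bandwidth value is an arbitrary example, not a number from the text.

```python
# Quick evaluation of Eqs. (1.8) and (1.9): coherence length and depth resolution.
# The spectral width below is an assumed example value, not a quantity from the book.
wavelength = 800e-9        # central wavelength (m)
delta_lambda = 40e-9       # assumed source bandwidth (m), e.g., a broadband diode
n = 1.0                    # refractive index of the medium (air)

coherence_length = wavelength ** 2 / delta_lambda     # Eq. (1.8)
depth_resolution = coherence_length / (2 * n)         # Eq. (1.9)

print(f"coherence length = {coherence_length * 1e6:.2f} um")
print(f"depth resolution = {depth_resolution * 1e6:.2f} um")
```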


1.3 Laser Properties

In the simplest case of a laser with a linear cavity and two mirrors, the laser bandwidth will depend on the bandwidth of the gain medium. As a typical example, the HeNe laser has a central wavelength $\lambda_0$ with a 1.5 GHz gain bandwidth, which causes a spread of wavelengths given by the following equation:

$$\Delta\lambda = \frac{\lambda_0^2\, \Delta\nu}{c}, \tag{1.10}$$

c being the speed of light; therefore, the HeNe emission at $\lambda_0 = 633\ \mathrm{nm}$ has a width of $\Delta\lambda = 0.002\ \mathrm{nm}$. Another example is the Ti:sapphire laser, which has a $\Delta\nu = 128\ \mathrm{THz}$ gain bandwidth, causing a wavelength spread of 273 nm at 800 nm, which is much broader than that of HeNe. As interference can be sampled using a constant-wavenumber ratio, a convenient sampling ratio is given by

$$\delta k = 2\pi\, \frac{\delta\lambda}{\lambda_0^2}. \tag{1.11}$$

Within the large wavelength spread of Ti:sapphire exist emission modes that depend only on the cavity length of the laser. In the case of a linear cavity, the modes appear at the frequency spacing

$$\Delta\nu = \frac{c}{2L}, \tag{1.12}$$

where L is the cavity length. For example, a cavity length of 30 cm will produce modes at a frequency spacing of $\Delta\nu = 0.5\ \mathrm{GHz}$, each mode, using Eq. (1.10), having a spread of 1 pm. When many modes of the cavity interfere due to the phase changes induced by thermal changes, the laser is called continuous wave (CW).
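A short script can reproduce the HeNe and Ti:sapphire numbers quoted above from Eqs. (1.10) and (1.12); the rounded speed of light is the only additional assumption.

```python
# Reproducing the chapter's HeNe and Ti:sapphire numbers with Eqs. (1.10) and (1.12).
c = 3e8  # speed of light (m/s), rounded for a back-of-the-envelope estimate

def wavelength_spread(lambda0, delta_nu):
    """Eq. (1.10): wavelength spread produced by a gain (or mode) bandwidth delta_nu."""
    return lambda0 ** 2 * delta_nu / c

def mode_spacing(cavity_length):
    """Eq. (1.12): longitudinal-mode frequency spacing of a linear cavity."""
    return c / (2 * cavity_length)

# HeNe: 1.5 GHz gain bandwidth at 633 nm  ->  ~0.002 nm
print(f"HeNe spread        = {wavelength_spread(633e-9, 1.5e9) * 1e9:.4f} nm")
# Ti:sapphire: 128 THz gain bandwidth at 800 nm  ->  ~273 nm
print(f"Ti:sapphire spread = {wavelength_spread(800e-9, 128e12) * 1e9:.0f} nm")
# 30 cm linear cavity -> 0.5 GHz mode spacing, i.e., ~1 pm per mode at 800 nm
dnu = mode_spacing(0.30)
print(f"mode spacing       = {dnu / 1e9:.1f} GHz")
print(f"mode width         = {wavelength_spread(800e-9, dnu) * 1e12:.2f} pm")
```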

1.4 Speckle Fundamentals

There are two kinds of speckle: objective and subjective (see Fig. 1.1).

Figure 1.1 Formation of two kinds of speckle for two different experiments: (a) objective speckle, when light propagation only in random directions is present in the optical setup, or the setup has no image-forming lens, and (b) subjective speckle, when a lens collects the stray light interference. The laser light illuminates a diffuser or a ground glass to produce the random light.

The autocorrelation function of speckle intensity [19,20] is given by

$$C(\zeta, \eta) = \left| \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} S(x, y)\, e^{ik(\zeta x + \eta y)/z}\, dx\, dy \right|^2, \tag{1.13}$$

where S(x, y) is a scattering area, which, for a uniformly illuminated circular area, can be expressed by

$$C(\zeta, \eta) = I_0 \left[ \frac{2 J_1(2\pi a r_i/\lambda f)}{2\pi a r_i/\lambda f} \right]^2, \tag{1.14}$$

where $r_i = \sqrt{\zeta^2 + \eta^2}$, f is the propagation distance, $I_0$ is the squared averaged intensity, and a is the radius of the circular aperture.
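The autocorrelation of Eq. (1.13) can be checked numerically by simulating a uniformly illuminated circular scattering area with a random phase, propagating it to the far field with an FFT, and autocorrelating the resulting intensity. The sketch below is only an illustration of that procedure; the grid size, aperture radius, and random seed are arbitrary choices.

```python
# Numerical illustration of Eq. (1.13): far-field (objective) speckle from a
# circular scattering area S(x, y) with a random phase, and its intensity
# autocorrelation. Grid size and aperture radius are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
N = 512
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)

aperture = (X**2 + Y**2) <= 40**2                                 # circular scattering area
field = aperture * np.exp(1j * 2 * np.pi * rng.random((N, N)))    # rough-surface phase

# Far-field propagation ~ Fourier transform of the scattered field
speckle = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2

# Intensity autocorrelation C(zeta, eta) via the Wiener-Khinchin theorem
I = speckle - speckle.mean()
C = np.fft.fftshift(np.abs(np.fft.ifft2(np.abs(np.fft.fft2(I)) ** 2)))
C /= C.max()

# The half-width of the central peak gives an estimate of the mean speckle size
profile = C[N // 2, N // 2:]
speckle_radius = np.argmax(profile < 0.5)
print(f"estimated mean speckle radius: {speckle_radius} pixels")
```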


1.4.1 Subjective speckle

The autocorrelation of the speckle intensity for a circular aperture reduces to the well-known formula of the Airy disk. According to the theory of diffraction [18], a converging wavefront from a circular aperture produces a 3D intensity pattern at the focus, in which the Airy disk results from the intensity found at the focal plane of the point spread function presented in Fig. 1.2. Therefore, from the point spread function in this figure, the first zero along the vertical axis defines the mean transversal speckle length, which is located at $r_0 = 1.22\lambda f_\#$, where $f_\#$ is the F-number, and the first zero along the optical axis is at a distance $z_0 = 8\lambda f_\#^2$, defining the longitudinal length of the speckle size. The zeros along the vertical axis and the optical axis represent the 3D speckle size through the mean transversal speckle length and the mean longitudinal speckle length, respectively, and are shown for reference in the diagram of Fig. 1.3. However, in practice, most imaging systems operate outside of the focal distance, and it is necessary to re-introduce changes to the definitions of speckle size. For an optical system with magnification M, the mean transversal speckle length is defined by

$$r_0 = 1.22\, \lambda f_\# (1 + M), \tag{1.15}$$

and the mean longitudinal speckle length is calculated by

$$z_0 = 8\, \lambda f_\#^2 (1 + M)^2. \tag{1.16}$$

Figure 1.2 3D point spread function at the focus of a converging wavefront from a circular aperture. The vertical axis contains the Airy disk, where the first intensity zero (see the arrow) is at a distance $r_0 = 1.22\lambda f_\#$. The first zero along the horizontal axis is located (see the arrow) at $z_0 = 8\lambda f_\#^2$. The zeros on the vertical axis and on the horizontal axis define the 3D speckle length through the mean transversal speckle length and the mean longitudinal speckle length, respectively.


Figure 1.3 Schematic diagram showing the limits of the central first-minimum of the 3D point spread function at the focus of a converging wavefront from the circular aperture shown in Fig. 1.2. The mean transversal speckle length and the mean longitudinal speckle length become half the maximum size of the encircled area in the vertical and horizontal directions. Speckle decorrelation occurs when two speckle patterns are displaced more than their mean longitudinal or transversal lengths.

1.4.2 Objective speckle

Replacing $f_\#$ by $L/2a$, where now 2a is the diameter of the illuminated circular area and L is the distance from the illuminated diffuser or rough surface to the detection plane, we obtain the mean transversal speckle size, given by

$$\tilde{r}_0 = 1.22\, \frac{\lambda L}{2a}. \tag{1.17}$$
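For quick estimates, Eqs. (1.15)–(1.17) can be evaluated directly, as in the following sketch; all numerical inputs (wavelength, F-number, magnification, and the objective-speckle geometry) are example values only.

```python
# Mean speckle lengths from Eqs. (1.15)-(1.17). The numerical values below
# (wavelength, F-number, magnification, geometry) are example inputs only.
wavelength = 633e-9   # m
f_number   = 8.0      # lens F-number
M          = 0.5      # imaging magnification

# Subjective speckle (image plane), Eqs. (1.15) and (1.16):
r0 = 1.22 * wavelength * f_number * (1 + M)          # mean transversal length
z0 = 8 * wavelength * f_number**2 * (1 + M)**2       # mean longitudinal length
print(f"subjective: transversal {r0*1e6:.1f} um, longitudinal {z0*1e6:.1f} um")

# Objective speckle (free-space propagation), Eq. (1.17):
L = 0.5               # diffuser-to-detector distance (m)
a = 2.5e-3            # radius of the illuminated area (m), i.e., 2a = 5 mm
r0_obj = 1.22 * wavelength * L / (2 * a)
print(f"objective:  transversal {r0_obj*1e6:.1f} um")
```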

1.4.3 Speckle with partially coherent light

So far, we have assumed coherent laser light; however, with partially coherent light, the speckle phenomena still arise if enough spatial and temporal coherence is present in the detection plane. Temporal coherence is obtained from a finite wave train that spans a wavelength range $\Delta\lambda$ and is centered at $\lambda$, with the coherence length given by Eq. (1.8). Any OPD greater than this coherence length will nullify the interference effects between two finite wave trains, limiting the depth resolution, as represented in Eq. (1.9). On the other hand, to obtain spatial coherence in the detector plane, the van Cittert–Zernike theorem allows one to calculate the partial coherence obtained when an incoherently illuminated pinhole and two points in the detector plane are separated by a given distance in the far field [18]; the spatial coherence is given by the Fourier transform of the source intensity of the incoherently illuminated aperture. When temporal or spatial coherence is obtained, by the use of either a super-luminescent diode (SLD) or white light illuminating a pinhole, speckle phenomena can still be produced.


However, as white-light speckle or other kinds of features such as dots, texture, lines, or grids can be imprinted onto the surfaces under analysis, digital image correlation (DIC) techniques were developed as an extension of the coherent speckle techniques [21]. Additionally, because only imaging is involved, white-light speckle has been shown to be more stable than speckle generated by interference and has achieved submicron resolution in many applications [22], including with recently improved techniques; see, for example, Refs. [23,24].

Chapter 2

Speckle Interference and Displacement

Assuming the use of long-coherence-length light ($\gamma = 1$) and the introduction of phase shifting, the interference obtained from ESPI interferometers can be reduced to the addition of two beams R and S that differ in their randomness characteristics according to

$$I(x, y) = I_R(x, y) + I_S(x, y) + 2\sqrt{I_R(x, y)\, I_S(x, y)}\, \cos\!\big(kL(x, y) + \phi_r(x, y) + S_q\big), \tag{2.1}$$

where $I_R$ and $I_S$ are the intensities of the two beams: a reference beam R and a static beam S; k is the wavenumber; L is the OPD produced by a laser-illuminated object; $\phi_r(x, y)$ is a random phase generated from the light scattered by the object or a reference beam; and $S_q = (q - 1)\pi/2$ when four phase shifts ($q = 1, 2, 3, 4$) are used. As Sections 3.2 and 3.3 will show for two types of interferometers, if the phase term $kL(x, y)$ includes the phase terms introduced by object displacements and rigid-body movements due to three rotations, it can be described by

$$kL(x, y) = k\, n\, (\mathbf{r}_R - \mathbf{r}_S) \cdot \mathbf{U}, \tag{2.2}$$

where $kL(x, y)$ is the phase, k is the wavenumber, n is the refractive index, $\mathbf{r}_R$ and $\mathbf{r}_S$ are the observation and illumination vectors, and $\mathbf{U}$ is the 6D pose, given by

$$\mathbf{U}^T = \mathbf{R}\, \mathbf{d}^T, \tag{2.3}$$

where T denotes the transpose, $\mathbf{R}$ is a rotation matrix that includes three rotations at angles $\alpha, \beta, \gamma$ around three axes, as shown in Fig. 2.1, and $\mathbf{d} = (x + x_0,\; y + y_0,\; z + z_0)$ is the displacement and translation vector that includes the three displacements and the three translation movements $x_0, y_0, z_0$, for a total of six degrees of freedom per pose. This phase can have different representations, depending on the kind of ESPI interferometer used.


Figure 2.1 The xyz coordinate system.

Table 2.1 Vectors of illumination and observation in four different types of speckle interferometers.

Out-of-plane: $\mathbf{r}_R - \mathbf{r}_S = (\sin\theta_S - \sin\theta_R,\, 0,\, \cos\theta_R + \cos\theta_S)$, from $\mathbf{r}_R = (-\sin\theta_R, 0, \cos\theta_R)$ and $\mathbf{r}_S = (-\sin\theta_S, 0, -\cos\theta_S)$.

Out-of-plane with reduced sensitivity ($\theta_R = 0$; $\theta_{S'} \neq \theta_S$; $\theta_S > \theta_{S'}$): $(\mathbf{r}_R - \mathbf{r}_{S'}) - (\mathbf{r}_R - \mathbf{r}_S) = (\sin\theta_S - \sin\theta_{S'},\, 0,\, \cos\theta_{S'} - \cos\theta_S)$, from $[(0, 0, 1) - (-\sin\theta_{S'}, 0, -\cos\theta_{S'})]$ and $[(0, 0, 1) - (-\sin\theta_S, 0, -\cos\theta_S)]$.

In-plane, sources along x ($\theta_R = 0$; $\theta_{S'} = \theta_S$): $(\mathbf{r}_R - \mathbf{r}_{S'}) - (\mathbf{r}_R - \mathbf{r}_S) = (\sin\theta_{S'} + \sin\theta_S,\, 0,\, \cos\theta_{S'} - \cos\theta_S)$, from $[(0, 0, 1) - (-\sin\theta_{S'}, 0, -\cos\theta_{S'})]$ and $[(0, 0, 1) - (\sin\theta_S, 0, -\cos\theta_S)]$.

Shearing ($\theta_R \approx 0$): $\mathbf{r}_R - \mathbf{r}_S = (\sin\theta_S - \sin\theta_R,\, 0,\, \cos\theta_R + \cos\theta_S)$, from $\mathbf{r}_R = (-\sin\theta_R, 0, \cos\theta_R)$ and $\mathbf{r}_S = (-\sin\theta_S, 0, -\cos\theta_S)$.

The main differences are due to the illumination and observation directions (implemented in the optical setups) that appear in the vectors $\mathbf{r}_R$ and $\mathbf{r}_S$. Table 2.1 shows how these vectors change according to the kind of interferometer implemented and to the light source position in the coordinate system, as shown in Fig. 2.2. The source position affects the illumination angle $\theta_S$, which produces a component in the x direction, as shown in the diagram. Still, if the source is represented in the lower right quadrant, the resulting contribution becomes positive. Figure 2.3 shows the typical source or twin-source locations for the interferometers presented from top to bottom in Table 2.1.
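The sensitivity vectors of Table 2.1 can be combined with Eq. (2.2) in a few lines of code. The sketch below does this for the out-of-plane geometry; the angles and the 100 nm test displacement are illustrative assumptions.

```python
# Sketch of Eq. (2.2): phase kL = k n (rR - rS) . U for the out-of-plane geometry
# of Table 2.1. The angles and the pose below are illustrative values only.
import numpy as np

wavelength = 633e-9
k = 2 * np.pi / wavelength
n = 1.0                                   # air

theta_R = np.deg2rad(0.0)                 # observation angle
theta_S = np.deg2rad(10.0)                # illumination angle

r_R = np.array([-np.sin(theta_R), 0.0,  np.cos(theta_R)])   # observation vector
r_S = np.array([-np.sin(theta_S), 0.0, -np.cos(theta_S)])   # illumination vector
sensitivity = r_R - r_S                   # (sin tS - sin tR, 0, cos tR + cos tS)

U = np.array([0.0, 0.0, 100e-9])          # pose reduced to a 100 nm out-of-plane displacement
kL = k * n * sensitivity @ U              # Eq. (2.2)

print("sensitivity vector:", np.round(sensitivity, 4))
print(f"phase change kL = {kL:.3f} rad  ({kL / (2*np.pi):.3f} fringes)")
```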

2.1 Rigid Body Movement

Most of the time, ESPI applications use light beams propagating through free space; therefore, the refractive index $n \approx 1$ is omitted.


Figure 2.2 A source S is placed over the positive x axis, with the unitary vector of illumination in the negative direction of our coordinate system. The observation angle $\theta_R$ can have the x component as negative or positive.

Figure 2.3 ESPI schematic illumination setups for $\theta_R = 0$ (normal to the surface in x): (a) out-of-plane, (b) reduced-sensitivity out-of-plane, (c) in-plane, and (d) shearing.

Most of the interferometers in the literature assume that objects under study present only displacements, without rigid-body changes, when stressed; however, real experiments cannot guarantee this condition, and rigid-body movements have been investigated in Ref. [25]. In the introduced matrix formalism, when an absence of rigid-body rotation is considered, the rotation angles $\alpha, \beta,$ and $\gamma$ are zero, which reduces $\mathbf{R}$ to the identity matrix $\mathbf{I}$. Therefore, if the same rigid-body translations $(x_0, y_0, z_0) = (x_0', y_0', z_0')$ are preserved for two consecutive poses $\mathbf{U}$ and $\mathbf{U}'$, a displacement corresponding to an arbitrary point $(\xi, \eta)$ on an interferogram is calculated as the difference of poses and is expressed in this case simply by

$$(x', y', z')^T - (x, y, z)^T = \mathbf{U}' - \mathbf{U} = \mathbf{I}\,\mathbf{d}'^T - \mathbf{I}\,\mathbf{d}^T, \tag{2.4}$$

where the superscript T denotes the transpose. However, the reader should be aware that $\mathbf{R}$ can be different from $\mathbf{I}$ if rigid-body rotations are introduced; therefore, in this more complex scenario, the displacements $x' - x,\; y' - y,\; z' - z$ should be replaced by those produced by $\mathbf{R}$ and, in general terms, are represented instead by $[u(\xi, \eta), v(\xi, \eta), w(\xi, \eta)]$. On the other hand, the shearing interferometer, which measures the approximate derivatives of displacement, e.g., $\left(\frac{\partial u(\xi,\eta)}{\partial \xi}, \frac{\partial v(\xi,\eta)}{\partial \xi}, \frac{\partial w(\xi,\eta)}{\partial \xi}\right)^T \Delta\xi$, can also be influenced by the introduction of rigid-body movements. The main aim of the theory presented in this handbook is to show the effects of several kinds of interferometers and illuminations that produce a particular type of displacement measurement and to introduce rigid-body movement equations. The following section shows how a rigid-body displacement changes the normal behavior of correlation fringes.

2.2 Intensity Correlation for Open or Closed Fringes

Most experiments with ESPI require an object loading/stressing mechanism that allows us to determine the micro-displacements between two stressed states of an object. For each object state, the back-reflected light from its rough surface forms a speckle pattern. Therefore, two speckle patterns are usually obtained for each fringe pattern: one before the object deformation/strain is applied, denoted by I with an associated phase $\phi_1$, and one after the deformation/strain occurs, denoted by $I'$ with phase $\phi_2$. Thus, after correlating the two intensities using subtraction, and indicating the phase change by $\phi_1 - \phi_2 = kL$, we find that the squared correlation for a phase shift $S_q$ of the fringe pattern is

$$(I - I'_q)^2 = 4 I_R I_S\, \sin^2\!\left(\phi_r + \frac{kL + S_q}{2}\right) \sin^2\!\left(\frac{kL - S_q}{2}\right). \tag{2.5}$$

This correlation produces a speckled fringe pattern and is obtained by subtracting, point by point, the qth specklegram of the deformed object from the first specklegram of the object at rest, and squaring this difference afterwards. Therefore, more than two speckle patterns are needed if a phase extraction procedure is used, as in the case of the four-shift phase extraction method, which uses four intensities $I'_q$ with $q = 1, 2, 3, 4$ for each phase shift $S_q = (q - 1)\pi/2$. However, for real-time display of correlation fringes, the squared correlation is frequently replaced by the absolute value of the intensity difference, $|I - I'_q|$. This last correlation is usually implemented experimentally in real time by first acquiring I and then displaying the absolute value of its difference with $I'_0$, scaled to fit the gray levels used in the display.
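The subtraction correlation described above is easy to simulate: generate two specklegrams from Eq. (2.1) that share the same random phase $\phi_r$ but differ by a smooth phase change kL, and display $|I - I'|$. In the sketch below, the tilt-like phase used for kL and the noise-free intensities are arbitrary modeling choices.

```python
# Simulation of subtraction correlation fringes, following Eqs. (2.1) and (2.5):
# two specklegrams with the same random phase, before and after a smooth phase change kL.
import numpy as np

rng = np.random.default_rng(1)
N = 256
IR, IS = 1.0, 1.0                                    # reference and object intensities
phi_r = 2 * np.pi * rng.random((N, N))               # random speckle phase

y, x = np.mgrid[0:N, 0:N]
kL = 2 * np.pi * 4 * x / N                           # deformation phase: 4 fringes across the field

I_before = IR + IS + 2 * np.sqrt(IR * IS) * np.cos(phi_r)
I_after  = IR + IS + 2 * np.sqrt(IR * IS) * np.cos(phi_r + kL)

fringes = np.abs(I_before - I_after)                 # real-time display form |I - I'|
fringes = (255 * fringes / fringes.max()).astype(np.uint8)   # scaled to gray levels

# Dark fringes appear where kL is a multiple of 2*pi (no change between the two states)
print("column-averaged fringe profile (first 16 columns):")
print(np.round(fringes.mean(axis=0)[:16], 1))
```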


Error sources in the acquisition process introduce signal modulations such as gray-level transformations of the camera gain or uneven illumination of the objects. Additionally, decorrelation usually produces distortions in the theoretical sinusoidal signals in Eq. (2.5), so it is common practice to find tuning errors on the phase shifts. These errors cause phase ramps to become nonlinear after the phase extraction process. Many algorithms have been designed [26–34] to deal with these kinds of errors; in particular, phase-shifting algorithms with incrementally fixed phase steps provide improved performance when the number of phase steps is increased [35–37]. However, the algorithms based on constant, and adjustable, phase shifts, such as the Carré algorithm [32,38], have proved to be useful for speckle interferometry. Thanks to its adaptable design, the Carré algorithm allows for the removal of local linear phase changes that are usually introduced by air turbulence or by a linear phase-step miscalibration [39,40]. Moreover, new adaptable designs [41,42] deal with similar phase-shift miscalibrations. Most phase miscalibration errors produce nonlinear phase ramps after the phase extraction procedure. These phase errors can be strongly reduced by calculating the instantaneous phase, which does not require any phase shifts, and by using the method known as the “difference of phase,” which will be discussed in more detail in Section 6.1.2.

The speckled fringes obtained from the correlation method in a real-time experiment provide a quick way to approximate displacement measurements. However, if better precision is required, phase-stepping techniques can be used after removing the high-frequency speckle noise (in the correlation fringes) that is produced by the random phase $\phi_r$. In this case, the resulting fringe pattern presents smooth profiles described mainly by the second sine term of Eq. (2.5). On the other hand, a similar reduction of speckle noise can also be achieved by filtering the argument of the $\arctan(N/D)$ function before performing the phase-map extraction procedure, where N is the function's numerator and D is its denominator. The smoothing is therefore applied to the N and D of any phase-shifting algorithm to remove the speckle noise. However, with these smoothing methods, the linear phase ramps that appear in the phase maps from $-\pi$ to $\pi$ can become nonlinear, depending on the smoothing procedure. Recently, liquid crystals have been used to remove speckle noise [43,44] from correlation fringes, and deep convolutional neural networks have provided efficient noise removal [45]. However, these filtering processes either are imperfect, introduce undesirable uncertainties, are not fast enough, or require optical processing to reach higher speeds. In contrast, the difference-of-phase method proposed in Ref. [46], combined with low-pass filtering, is often preferred for measurement purposes [6,13], as it is fast and straightforward. As a final measurement step, the phase maps must be unwrapped to remove any $2\pi$ phase jumps and converted to displacement units.
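A minimal sketch of the smoothing strategy described above is given below: the wrapped phases of the two object states are obtained with the four-step algorithm, and the speckle noise is reduced by low-pass filtering the sine and cosine (the N and D of the arctangent) of their difference before recombining them. The synthetic data, the box filter, and the added intensity noise are assumptions made only for this demonstration.

```python
# Sketch: difference of phase with speckle-noise reduction by smoothing the
# numerator N and denominator D of arctan(N/D). Synthetic data; box filter assumed.
import numpy as np

def boxcar(img, w=7):
    """Simple separable moving-average low-pass filter (an assumed choice of filter)."""
    kernel = np.ones(w) / w
    img = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode='same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, kernel, mode='same'), 1, img)

def four_step_phase(frames):
    """Four-step algorithm: phase = arctan[(I4 - I2) / (I1 - I3)], with S_q = (q-1)*pi/2."""
    I1, I2, I3, I4 = frames
    return np.arctan2(I4 - I2, I1 - I3)

rng = np.random.default_rng(2)
N = 256
phi_r = 2 * np.pi * rng.random((N, N))                 # random speckle phase
y, x = np.mgrid[0:N, 0:N]
kL = 2 * np.pi * 3 * x / N                             # object phase change to recover

def specklegrams(extra):
    """Four phase-shifted specklegrams with a little additive intensity noise."""
    return [2 + 2 * np.cos(phi_r + extra + q * np.pi / 2) + 0.2 * rng.normal(size=(N, N))
            for q in range(4)]

phase1 = four_step_phase(specklegrams(0.0))            # object at rest
phase2 = four_step_phase(specklegrams(kL))             # object deformed

# Difference of phase: filter sin and cos of (phase2 - phase1) rather than the
# wrapped difference itself, then recombine with the arctangent.
d = phase2 - phase1
num, den = boxcar(np.sin(d)), boxcar(np.cos(d))
delta_phi = np.arctan2(num, den)                       # smoothed wrapped phase map

err = np.angle(np.exp(1j * (delta_phi - kL)))
print(f"rms phase error after smoothing: {np.std(err):.3f} rad")
```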


There are two kinds of fringe patterns; the most commonly produced are closed fringes, such as those shown in Fig. 2.4. On the other hand, if a carrier fringe pattern is introduced in any optical experiment, the open fringes obtained with this carrier can be readily used to obtain the instantaneous phase from a single-shot image.

Figure 2.4 (a) Typical speckle-correlation fringe pattern produced by an out-of-plane interferometer using Eq. (2.5) with $S_1$, and (b) the corresponding phase map transformed to gray levels for display, in which the brightest correlation fringes correspond to the $2\pi$ phase-map discontinuities. Comparing the images, it is easy to see that the phase between two consecutive bright, or black, fringes is $2\pi$, or from $-\pi$ to $+\pi$. On the other hand, the zero phase values coincide with the black fringes, as no phase change occurs between the two speckle patterns and the subtraction becomes zero.

2.3 Carrier Fringe Patterns in Speckle Interferometry for Open-Fringe Generation

Spatial carriers provide a direct phase-extraction alternative to phase shifting, granting single-shot phase extraction. The simplicity of processing the least amount of data to obtain phase values is incomparable to the more time-demanding extraction processes that need more data and computing power. However, there is no clear-cut way to choose between open and closed fringes, except based on the experiment itself, the computing speed, and the data quality, which influence the selected option. Apart from the simplicity of using carrier fringes, their use determines the phase map without sign ambiguity from a single fringe pattern. It must be noted, however, that a different sign in the carrier frequency would produce an inversion of the phase values. There are experiments in which the phase sign can be known a priori: using phase-shifting techniques, closed fringes can then be used instead of the open fringes produced by carriers, but doing so usually requires more computing power [47].

Two main types of carriers are possible in speckle interferometry: spatial and temporal.


The spatial carrier introduces a linear phase change in the spatial domain; e.g., as the speckle pattern is a 2D image in coordinates (x, y), the linear phase change is usually introduced along the x (horizontal) direction, but it can be presented as any combination of both directions. This linear phase change can be adjusted to have a few cycles over the whole speckle pattern, or it can be adjusted to have a few cycles inside the speckle size of the speckle pattern. In any case, the power spectrum from a 2D Fourier transform of the speckle pattern will have two distinct symmetric peaks corresponding to the carrier frequency, plus a central peak due to the intensity bias. Temporal carriers, on the other hand, introduce a linear phase change simultaneously over the whole speckle pattern in time (or phase shifting). Contrary to spatial carriers, several 2D speckle patterns are needed, with at least two speckle patterns with a linear phase change required to show an intensity correlation with fringe patterns corresponding to one temporal phase shift.

Figure 2.5 shows the two primary options for choosing the fringe density of the carriers when a camera registers a particular speckle size. The figure represents the pixels as gray squares, the speckle size as lighter-gray disks, and the carrier fringes as vertical lines for the two main cases of carrier density: (a) a few correlation fringes over the field of view and (b) digital holography (DH). In the first case, the carriers have a frequency with a period greater than both the speckle size and the interpixel spacing. The linear spatial carrier phase, shown as black lines, starts with a few cycles over the whole speckle pattern and ends well below half of the maximum frequency. In the second case, the carrier density is high, having a more significant rate of change, with the period being smaller than the speckle size, and with a speckle size typically sampled over an area comprising several pixels. An example of carrier peaks generated by a speckle pattern with a low-density carrier is shown in Fig. 2.6(a).

Figure 2.5 Carrier fringes generated with (a) a frequency lower than the speckle size and (b) a frequency higher than the speckle size. The schematic diagram shows the camera pixels as gray-level squares, the speckle size as disks in a lighter-gray hue, and the carrier fringes as vertical lines representing constant $\pi$ phase changes of a plane reference beam tilted toward the camera detector.


Figure 2.6 (a) Carrier fringes generated with a period larger than the speckle size, and (b) the corresponding 2D Fourier transform.

The corresponding 2D Fourier transform, shown in Fig. 2.6(b), reveals the main carrier peaks along the center in the horizontal direction, with the carrier frequency at one-quarter of the whole bandwidth. The Fourier transform method can be used in this case for phase extraction, bearing in mind that this method uses only half of the Fourier spectrum, which also reduces the spatial resolution by half.

If a high-density linear spatial phase change is introduced in a speckle pattern, the density can be high enough to have several $2\pi$ cycles inside the speckle size. Digital Fresnel and digital image-plane holography in off-axis configurations commonly use this approach [48–51]. In these techniques, the carrier phase is introduced in the optical setup by a plane reference wavefront that is tilted to produce a carrier; therefore, in a single shot, the phase of a speckle pattern can be extracted and subsequently used by the difference-of-phase method. Figure 2.7 shows a typical example of carrier fringes in digital holography, where the fringes are not easily seen in the interferogram [Fig. 2.7(a)]; however, in its 2D Fourier transform, shown in Fig. 2.7(b), the carrier peaks appear as vertical bright fringes placed approximately at half of the horizontal bandwidth.

Figure 2.7 (a) Carrier fringes generated with a period smaller than the speckle size [see diagram in Fig. 2.5(b)]; as the carrier fringes are much denser, they are lost inside the speckle size such that on close inspection of the speckle pattern, only speckles can be seen. (b) Corresponding 2D Fourier transform, where the carrier fringes can now be seen distributed over a set of frequencies, in which three separated spectral lobes corresponding to the speckle are now visible. The lobe separation depends on the frequency of the introduced carrier, as shown in Fig. 2.5(b).

Even if the digital holography technique gives an instantaneous phase, four significant drawbacks are always neglected by advocates of this technique:

• A minimum of three-pixel sampling inside the speckle size is needed. Therefore, if the speckle size is large, higher intensities of the laser beam are needed.
• As a consequence of the carrier, a reduction of the spatial resolution is also obtained.
• The advantage of calculating an instantaneous phase by digital holography is often irrelevant when atmospheric disturbances and speckle decorrelation effects simultaneously distort the experimental data, introducing phase distortions that are faster than the camera integration time.
• As a consequence of the carrier, a reduction of the spatial resolution also results.

To summarize the effects of the carrier fringes, two other phenomena should be taken into account: the role of speckle size and the light integration that occurs in the camera’s pixelated detectors. If these phenomena are considered, the speckle size can be either larger or smaller than the pixel size. And the spatial linear carrier can be either high enough to oscillate inside the speckle size or of low frequency such that it oscillates outside of the pixel size. On the other hand, the light integration over the pixel area allows two scenarios: the integration process contains several speckle sizes inside a pixel area, or several pixel areas are used to sample the speckle size. But in any case, if carrier peaks are detected in the 2D Fourier transform, the phase can be extracted by the Fourier transform method [52,53]. As a consequence of the several situations in which the speckle is experimentally detected, the probability densities for interference, integration, and polarization in six different cases are presented as a reference in the Appendix. These densities are usually approximated by calculating the histogram of the speckle patterns detected in actual experiments.
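The following sketch outlines the Fourier transform method [52,53] mentioned above for a speckle pattern carrying a linear spatial carrier: select one carrier lobe in the 2D spectrum, translate it to the origin, and take the argument of the inverse transform. The carrier frequency, filter half-width, and test phase are arbitrary example values.

```python
# Sketch of the Fourier transform method applied to a speckle pattern with a linear
# spatial carrier: isolate one carrier lobe, shift it to the origin, and take the
# argument of the inverse transform. All numerical choices are illustrative.
import numpy as np

rng = np.random.default_rng(3)
N = 256
y, x = np.mgrid[0:N, 0:N]

phi_r = 2 * np.pi * rng.random((N, N))           # random speckle phase
signal_phase = 2 * np.pi * ((x - N/2)**2 + (y - N/2)**2) / N**2   # smooth test phase
fc = 64                                          # carrier: 64 cycles across the field
carrier = 2 * np.pi * fc * x / N

I = 2 + np.cos(carrier + signal_phase + phi_r)   # carrier fringes buried in speckle

# 1) Fourier transform and band-pass selection of the +fc lobe
S = np.fft.fftshift(np.fft.fft2(I))
u = np.arange(N) - N // 2
U, V = np.meshgrid(u, u)
window = (np.abs(U - fc) < 20) & (np.abs(V) < 20)

# 2) Shift the selected lobe to the origin and invert
S_shifted = np.roll(S * window, -fc, axis=1)
analytic = np.fft.ifft2(np.fft.ifftshift(S_shifted))
wrapped_phase = np.angle(analytic)               # filtered (signal_phase + phi_r), wrapped

# The speckle phase phi_r cancels when the same operation is repeated on a second
# (deformed-state) pattern and the two wrapped phases are subtracted (difference of phase).
print("wrapped phase range:", wrapped_phase.min(), wrapped_phase.max())
```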


2.4 Practical Removal of Environmental Instabilities

Environmental fluctuations in practical ESPI can be divided into two primary instabilities: mechanical drifts and thermal drifts. Unless perfect isolation from mechanical vibrations and thermal air currents is achieved in laboratory experiments, most ESPI or holographic experiments will present real-time phase changes that reduce the precision of the final measurements. Under these conditions, real-time phase extraction benefits from including linear phase-removal techniques or phase extraction algorithms, such as the Carré algorithm, that have been shown to be appropriate for removing the linearly induced phase.

However, the speckle decorrelation phenomenon removes and introduces speckle in the optics aperture due to object tilt, and the Fourier shift theorem explains the speckle shift in the aperture corresponding to the object tilt. Its noise effects are mainly reduced by low-pass filtering, by re-referencing when the object tilt displaces the speckle in the ESPI aperture by up to 8% of its diameter, or by changing to a five-step phase-extraction method, depending on the noise characteristics [54]. An alternative method could take into account and re-introduce the speckle phase after the tilting of the object under investigation in the aperture of the optical system. But first, experimental tilt detection in real time should be addressed to implement stable-phase re-correlation.

Further phase distortions can be generated by the speckle pattern capture and processing algorithms. During speckle pattern acquisition, the nonlinear response of the cameras and electronic noise, including analog-to-digital conversion, can introduce uncertainties in the light intensity. And once the digital intensity is saved in the computer, algorithms or computer precision can add additional phase uncertainties. Therefore, besides the camera aberration corrections, a camera gray-level calibration is needed to obtain a corrected linear intensity. As is well known, an exponential speckle intensity distribution is obtained when laser light back-reflected from a rough surface is acquired by a camera detector. This distribution will produce a large number of pixels with zero intensity and a few with high-intensity values that always saturate the gray levels obtained after the analog-to-digital conversion. Some optical setups produce this exponential type of speckle intensity distribution, which can be changed by speckle integration in the camera pixels.

A nonlinear phase that oscillates around an average phase value due to thermal drifts is, therefore, an everyday experience when using ESPI interferometers. If the instantaneous phase is then obtained by digital holography, its phase values represent samples from an unknown, constantly changing, nonlinear phase. Still, there are phase-stepping algorithms that can remove local linear phase changes, such as the temporal Carré method, which more effectively reduces phase oscillations and improves the overall phase tracking.

Chapter 3

Electronic Speckle Pattern Interferometers

There are three basic optical setups for electronic speckle pattern interferometers (ESPIs) that measure object displacements but are restricted to measuring a few of the possible six degrees of freedom. Three of these interferometers will be described in the next three sections. Their optical setups have been successfully combined to achieve the measurement of additional degrees of freedom [55]. For simplicity, all of the illuminations in the interferometers are assumed to be collimated. Still, as was pointed out in Ref. [56], divergent light introduces additional constraints that need to be addressed in these kinds of interferometers [57–60].

3.1 Out-of-Plane Electronic Speckle Pattern Interferometer

As shown in Fig. 3.1, the basic out-of-plane ESPI setup makes use of the intensity registered by a CCD camera (image plane) from the interference of a speckle field (light reflected from the object) and a reference beam that is formed by focusing the light, using L2 and L3, over the aperture of lens L1. Several phenomena are involved in the interference patterns produced by this kind of optical setup: depolarization, decorrelation, and spatial or temporal coherence. Some of these phenomena were reduced by introducing appropriate optics in the initial commercial designs [61]. First, the speckle field is generated by the light reflected from a test object (O) and collected by a lens (L1) with a small aperture. This light and the size of the aperture determine the speckle size. Second, the reference beam can be produced in several ways, with different geometrical arrangements for interference with the object speckle field (for simplified ESPI setups, see Refs. [62–66]), but the setups presented here have been shown experimentally to give high-contrast ESPI correlation fringes. Furthermore, the reference beam can have different intensities or wavefront shapes. Third, maximum interference visibility is possible only if the light arriving from the object has followed the same optical path as the light from the reference beam. Finally, the polarization effects and the OPDs introduced by turbulence are also critical for increased interferometer performance.

Figure 3.1 Basic out-of-plane ESPI setup with: BS, 10:90 beamsplitters (R:T); M1 and M2, mirrors; SF1 and SF2, spatial filters; L1, camera lens; L2 and L3, collimating and focusing lenses; CCD, video camera; O, object; M, micrometer head; and PZT, piezoelectric transducer. The travel distances of the light beams between the two beamsplitters are adjusted to cover the same length.

The out-of-plane ESPI setup of Fig. 3.1 shows an object illuminated at an angle $\theta$ from the z axis, with observation along the z axis aligned with the optical axis of the camera lens, the path lengths of the illuminating and reference beams being equal. The corresponding vectors $\mathbf{r}_S$ and $\mathbf{r}_R$ are $(-\sin\theta_S, 0, -\cos\theta_S)$ and $(-\sin\theta_R, 0, \cos\theta_R)$, respectively. Therefore, in the absence of rigid-body translations and rotations, the pose vector is reduced to $\mathbf{U} = [x, y, z]$, and using small angles $\theta_R \approx 0$ and $\theta_S \approx 0$ for the optical setup presented in Fig. 3.1, from Eqs. (2.2) and (2.3), the detected optical phase becomes $kL = 2knz$. If a larger angle $\theta_S$ is chosen, the optical phase becomes $kL = (1 + \cos\theta_S)knz$, and we notice that the sensitivity of the interferometer can be adjusted by the geometry of the illuminating beams, with $(\mathbf{r}_R - \mathbf{r}_S) = (\sin\theta_S - \sin\theta_R,\, 0,\, \cos\theta_R + \cos\theta_S)$ giving the sensitivity corresponding to the illumination geometry. Figure 3.1 shows a typical optical setup designed for out-of-plane detection, where the angle $\theta_R$ is chosen as zero, such that $(\mathbf{r}_R - \mathbf{r}_S) = (\sin\theta_S,\, 0,\, 1 + \cos\theta_S)$. Now if the angle $\theta_S$ is chosen to be very small, e.g., less than 15°, the remaining sensitivity can be approximated as being sensitive only to out-of-plane displacements, where $1 + \cos\theta_S \approx 2$; if the refractive index of air is used, the OPD is given again by

$$kL \approx k(2z). \tag{3.1}$$

The sensitivity expressed by this equation indicates that a correlation fringe pattern generated by this interferometer is as shown in Fig. 2.4, where the displacement z for a $2\pi$ phase change is $\lambda/2$. However, it is not only the final displacements that can be affected by the sensitivity of a particular interferometer. This sensitivity can also affect rigid-body displacements if the pose $\mathbf{U}$ changes due to rotation or translation. In this case, the pose changes become $\Delta\mathbf{U} = [u(\xi, \eta), v(\xi, \eta), w(\xi, \eta)]$, with the rigid-body translations and rotations added to the object displacements, as the following section shows for another kind of interferometer.

An example of carrier fringes correlated with an out-of-plane interferometer, with a rectangular plate as the object, is presented in Fig. 3.2; the plate was firmly held on its edges, and a tilt of the reference beam was introduced to form the carrier fringes presented in part (a). The plate under observation is then deformed by applying a force at its center to obtain the carrier fringes with displacements in part (b).

Figure 3.2 Correlation carrier fringes generated using an out-of-plane speckle interferometer with a tilted reference beam: (a) without stress, in which a tilt of the reference beam introduces a carrier, and contrary to digital holography, the correlated carrier fringes are easily seen; (b) with a load at the center of the plate, the correlated carrier fringes in part (a) change shape when an out-of-plane movement is introduced on the back of the metallic plate by applying a central force.

The sensitivity of the out-of-plane fringes can also be decreased using two simultaneous laser wavelengths that form a so-called synthetic wavelength. Contouring measurements can be obtained using synthetic wavelengths in out-of-plane optical setups from speckle pattern techniques [67] derived from two-wavelength interferograms [68,69]. Applications of this technique strongly depend on synthetic-wavelength generation, which can come from a multiwavelength source [70] or can use a variable synthetic wavelength from a single diode laser [71]. Another similar option is the use of DH with or without multiplexing holograms [72–75].
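The λ/2-per-fringe sensitivity of Eq. (3.1), and the way a two-wavelength arrangement relaxes it, can be evaluated directly; the wavelengths below are illustrative, and the standard synthetic-wavelength formula Λ = λ₁λ₂/|λ₁ − λ₂| is quoted here as an assumption rather than taken from this chapter.

```python
# Out-of-plane sensitivity from Eq. (3.1) and the synthetic-wavelength idea mentioned
# above. Wavelength values are illustrative only.
import numpy as np

lam = 633e-9
delta_phase = 3.5 * 2 * np.pi            # example measured out-of-plane phase change
z = delta_phase * lam / (4 * np.pi)      # kL = 2kz  ->  z = (lambda/2) * (phase / 2*pi)
print(f"out-of-plane displacement: {z*1e9:.0f} nm  (lambda/2 per fringe)")

# Two simultaneous wavelengths produce a much longer synthetic wavelength,
# which relaxes the sensitivity for contouring (standard formula, assumed here):
lam1, lam2 = 633.0e-9, 632.8e-9
lam_synth = lam1 * lam2 / abs(lam1 - lam2)
print(f"synthetic wavelength: {lam_synth*1e3:.1f} mm")
```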

3.2 In-plane Electronic Speckle Pattern Interferometer

The typical in-plane ESPI interferometer, with dual illumination at the same angle over the object plane, is shown in Fig. 3.3. A beamsplitter (BS) divides the initial laser beam into two beams that illuminate the object after reflection at mirrors M1 and M2. Advanced setups perform measurements using dual illuminations along the horizontal and vertical directions [76].

Figure 3.3 Basic in-plane ESPI setup for carrier fringe generation with: LB, laser beam; DL, diverging lens; L1 and L2, collimating and camera lenses; BS, beamsplitter; M1 and M2, mirrors; CCD, video camera; B1 and B2, pair of illuminating beams; O, object; A, aperture of L2; and PZT, piezoelectric transducer.

Most experiments with ESPI in any configuration require an object loading mechanism, which usually introduces rigid-body object displacements simultaneously with the object deformation. Therefore, it is convenient to express the pose rotations $\mathbf{R}$ [Eq. (2.3)] as a decomposition of three independent rotations $\mathbf{R}_\alpha$, $\mathbf{R}_\beta$, and $\mathbf{R}_\gamma$. In the general case of object rotations, we can use the Euler decomposition of a general rotation using the xyz convention with Tait–Bryan angles [77], expressed as

$$\mathbf{R} = \mathbf{R}_\alpha \mathbf{R}_\beta \mathbf{R}_\gamma, \tag{3.2}$$

where

$$\mathbf{R}_\gamma = \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix} \tag{3.3}$$

is the rotation matrix of the x-y plane around the z axis in a counterclockwise rotation (yaw),

$$\mathbf{R}_\beta = \begin{pmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{pmatrix} \tag{3.4}$$

is the rotation matrix of the x-z plane around the y axis in a counterclockwise rotation (pitch), and

$$\mathbf{R}_\alpha = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & \sin\alpha \\ 0 & -\sin\alpha & \cos\alpha \end{pmatrix} \tag{3.5}$$

is the rotation matrix of the plane y-z around the x axis in a counterclockwise rotation (roll), and where the three matrices are used to represent phase changes generated by rotations and are shown for reference in Fig. 2.1. The same procedure for fringe formation presented in Section 2.2 can be followed to obtain the corresponding phase from any object before and after deformation. But previous to any deformation, carrier fringes can be introduced by an object rotation of an angle g. Furthermore, the in-plane correlation fringes of Fig. 3.4 show that rigid-body rotations can be introduced inadvertently, as shown in an in-plane experiment in which the back of a rectangular metal bar is pushed by a pointed screw that produced a rigid-body rotation shown as parallel fringes, plus a phase of deformation from a crack. Any object rotation can be determined with high precision using carrier fringes, as the following section will show. 3.2.1 Generation of carrier fringes by rotation only (g) The phase increment kL introduced after a general rotation is obtained using Eqs. (2.2) and (2.3) for each pose UT ¼ RdT . Therefore, the corresponding points in two specklegrams are determined by the matrix R and are given by [5,78–80]

24

Chapter 3

Figure 3.4 In-plane correlation fringes (with absolute correlation) of a rectangular metal bar with a crack on the top middle section. Object rotation produces the carrier fringes, and the crack deformation at the top center contributes to this phase.

kL ¼

2p ð½rR  rS0   ½rR  rS Þ · U; l

(3.6)

where l is the wavelength of the illuminating beams B1 and B2, which we assume are collimated, and ½rR  rS 0   ½rR  rS  ¼ ðsin uS0 þ sin uS ; 0; cos uS 0  cos uS Þ, with uS ¼ us0 ¼ u being the symmetrical angle of illumination with respect to z or the optical axis of the camera lens (Fig. 3.3). Under zero rotations, the pose vector is transformed into a multiplication of an identity matrix with object displacements d ¼ ½uðx; yÞ; vðx; yÞ; wðx; yÞ and is reduced to the object displacements: U ¼ ½uðx; yÞ; vðx; yÞ; wðx; yÞ; from Eq.(3.6) and the expressions for rS, rS, and rR, and with R being unitary, we have kL ¼

4p uðx; yÞ sin u; l

(3.7)

which is the typical phase due only to in-plane displacement. If correlation fringes are obtained with this interferometer, the sensitivity expressed by Eq. (3.7) indicates that the measurement of in-plane distance depends now on the illumination angle u, but more importantly, its phase change from two consecutive speckle-fields should not surpass [81] the Nyquist criterion kL,p/2, from which the maximum in-plane displacement is given by umax ¼

l : 8 sin u

(3.8)

Electronic Speckle Pattern Interferometers

25

Therefore, in-plane displacements must be carefully limited to avoid wrappedphase values. Now the pose vector U can take predictable values of rotation if rigidbody rotary movements are taken into account. To generate carrier fringes, a single rigid-body rotation R ¼ Rg at angle g of the object around the z axis is required to generate a pose Ug. To prove this, we note that after only the object rotation, the position vector r ¼ (x,y,z) becomes r0 ¼ ðx0 ; y0 ; z0 Þ, so the increment in the pose vector can be written as UTg ¼ r0T  r0T ;

(3.9)

r0T ¼ Rg rT ;

(3.10)

UTg ¼ ðRg  IÞrT ;

(3.11)

where

such that

where I is the identity matrix, and Rg is the rotation matrix of Eq. (3.3):  cos g Rg ¼  sin g 0

sin g cos g 0

0 0 : 1

For small values of g, cos g  1 and sin g  g, so ! 0 g 0 Rg  I  g 0 0 : 0 0 0

(3.12)

(3.13)

From Eqs. (3.11) and (3.13), and recalling that r ¼ (x, y, z), we obtain Ug  ðgy; gx; 0Þ:

(3.14)

Since ½rR  rS0   ½rR  rS  ¼ ðsin uS 0 þ sin uS ; 0; cos uS0  cos uS Þ, we obtain for equal angles u ¼ uR ¼ uS ½rR  rS0   ½rR  rS  ¼ ð2 sin u; 0; 0Þ:

(3.15)

Now from Eqs. (3.6), (3.14), and (3.15), we finally obtain the phase increment due to the small object rotation alone: kLg ¼ k2gy sin u:

(3.16)

This means that the correlation fringes obtained after a small rotation of the object through an angle g are parallel to the x axis, with a period along the y axis of

26

Chapter 3



l : 2g sin u

(3.17)

A similar effect of horizontal fringes can be seen if moiré fringes are formed using two superimposed vertical fringe patterns with a slight rotation of one of the vertical fringe patterns. In practice, distances are measured at the image plane. Since we know the distance between the centers of two adjacent pixels of the CCD in Eqs. (3.16) and (3.17), it is convenient to replace x and y for xc ∕M and yc ∕M, respectively, where xc and yc are the coordinates at the camera CCD, and M is the camera magnification. In the case of spherical wavefront illumination, the expression for rS  rR is more complex, as it depends on the object coordinates (x, y) [59]. 3.2.2 Contouring Considering only an object tilt given by inclination of the object around the y axis, the b object rotation can be described by the transformation Rb. For small values of b, cos b  1 and sin b  b, such that Rb  I 

0 0 0 0 b 0

! b 0 ; 0

(3.18)

and Ub  ðbz; 0; bxÞ:

(3.19)

Since for a simultaneous illumination with angle u ¼ uR ¼ uS along the x axis we obtain ½rR  rS0   ½rR  rS  ¼ ð2 sin u; 0; 0Þ;

(3.20)

from Eqs. (2.2), (3.19), and (3.20), we finally obtain the phase due to the small object tilt in b: kLb ¼ 

4p bz sin u: l

(3.21)

This phase produces contour fringes proportional to z corresponding to the object height h ¼ z. A contour fringe appears when kLb ¼ 2p and l ¼ 2bh sin u; or for each point of height

(3.22)

Electronic Speckle Pattern Interferometers

hðx; yÞ ¼

27

l : 2b sin u

(3.23)

Therefore, the b rotation can be used for contouring when fixed illumination angles are used, but ensuring that no other displacements are introduced in the process. When a tilt is introduced by object rotation, an important drawback is seen with this technique: the speckles gathered by the aperture of the interferometer imaging lens are shifted along the tilt direction, introducing and removing new speckle patterns in the aperture. The shift causes a sharp decrease in the contrast of the fringe patterns, known as speckle decorrelation. An alternative to object rotation is to rotate the illumination beams with respect to the static object [61], which produces similar results. It is also worth mentioning that moiré shadow techniques [82] bear a strong resemblance to the interferometric technique and can be used to understand the contouring effect. Although this technique introduces moderate decorrelation, when temporal Fourier transform methods are used in the signal collected in each pixel, height measurement of objects ranging from a few hundreds of micrometers to a few tens of millimeters can be determined [83]. The phase obtained by rotating the object or tilting it for contouring can be gathered regardless of the kind of in-plane interferometer used, including those that have been recently developed [84]. Popular alternatives to the standard inplane measurement along a single direction are radial in-plane setups that simultaneously detect multiple directions [85]. Another alternative is to use two wavelengths in this kind of interferometer for rotation measurement with in-plane sensitivity [86].

3.3 Shearography A typical Michelson-type shearography setup [6,59,87–90] is presented in Fig. 3.5. As the illumination and observation configurations are the same as in the out-of-plane optical setup, the phase terms are similar, except that the observation direction uR  0 and depends on the tilt of the Michelson mirrors, which introduces shear in the image of the speckle patterns. The amount of shear is often assumed to be constant over the field of view, but it can vary over the object’s surface if the object has a nonflat shape. As shear changes introduce evaluation errors that decrease the accuracy of the measurements [91], a quantitative evaluation of shear over the field of view [92] and the depth of field [93] is required for increased accuracy. The phase term is calculated again using Eq. (2.2), but now the approximate derivative of phase is expressed in the pose vector, as a collimated horizontal illumination is used:

28

Chapter 3

Figure 3.5 Basic shearography setup with: DI divergent illumination; Z camera zoom lenses; BS beamsplitter; M1 and M2 mirrors; CCD video camera; O object; A aperture of Z; S mechanical support; and PZT piezoelectric transducer.

kLx ðx; yÞ ¼ knðrR  rS Þ · Ux ;

(3.24)

­vðx;yÞ ­wðx;yÞ where rigid-body displacements are zero, Ux ¼ ½­uðx;yÞ ­x ; ­x ; ­x Dx, rR  rS ¼ ðsin uS ; 0; 1 þ cos uS Þ, and Dx is the shear distance. This produces the usual shearing phase term given by   ­uðx; yÞ ­wðx; yÞ kLx ðx; yÞ ¼ kn sinðuS Þ þ ½1 þ cosðuS Þ Dx; (3.25) ­x ­x

when uR  0 and uS ≠ 0. This property gives the advantage of measuring inplane and out-of-plane displacement derivatives simultaneously using sequential illuminations. Even though both kinds of displacements are mixed for each illumination, they can be independently separated, as will be shown in the next chapter. The visual interpretation of correlation fringes is more difficult if simultaneous contributions of in-plane and out-of-plane derivatives are allowed. However, two advantages can be seen in detecting the derivatives in shearography: (1) rigid-body movements usually produce linear displacements (in which case the derivatives are constant) and add constant phase terms to the phase derivatives of intrinsic object deformations, and (2) the sensitivity depends on the shear distance, which can be adjusted to suit a particular desired sensitivity. If more-accurate measurements are sought with the shearography or speckle techniques discussed here, it is necessary to include the wavefront distortions of the illuminating beams, the aperture influence, and the telecentric lens utilization [10,59,94]. The use of gratings for shearing the light beams was proposed in 1997 [95–99], and recent developments have achieved improved compact shearography systems that are vibration resilient and are based on diffractive optical elements (DOEs)

Electronic Speckle Pattern Interferometers

29

[100,101]. As phase derivatives are obtained from fringe patterns, there is always the need to compare the integrated phase. This phase can be obtained with an out-of-plane interferometer, and simultaneous setups have been devised for this purpose [102]. All of the speckle images obtained from a shearing interferometer are easily identified as double images, where two superimposed images are displaced, e.g., along the x axis by the shear distance Dx if the object under observation is plane. In the case of 3D objects, the magnification factor of a standard nontelecentric lens changes within the depth of field, and the shear is no longer constant, requiring the computation of 3D shear distribution [93]. Even though phase maps are obtained from these speckle images, they are only produced over the superposition area of the double image, and integration procedures are not straightforward, even in the 2D case, and require special integration procedures; see, for example, Refs. [103–111]. These kinds of interferometers have also been used with synthetic wavelengths [112]. Recent developments in shearography combine wavelength-scanning interferometry using tunable lasers with standard shearing interferometers to measure aspheric surfaces [113]. Finally, comprehensive reviews of the shearography techniques can be found Refs. [59,88,114]. 3.3.1 Carriers in shearography Carrier fringes in shearography have been introduced by displacing the illumination light source along the illumination direction to measure the second derivatives of plate deflections [115], to measure the fringe order difference for the first and second derivatives [116], and in transient phase detection [117]. Other approaches also measure phase differences by introducing carriers and shear simultaneously, either by a modified 4f system or by using two apertures in front of a lens [118,119]. If the object illumination does not change, we have the illumination vectors rR  rS ¼ ðsin uS ; 0; 1 þ cos uS Þ. However, a displacement of the diverging illumination DI (as shown in Fig. 3.5) along the propagation direction introduces a change in the optical path beam that can be approximated as a difference of quadratic approximations to the out-of-plane spherical wavefront: 

  x2 þ y2 1 1 wðx; yÞ  ;  2 R2 R1

(3.26)

where R1 and R2 are the first and second radii of curvature of the light wavefront before and after lens translation, respectively. Therefore, the approximated phase derivative obtained after a shearing interferometer with shear in x is given by

30

Chapter 3

kLx ðx; yÞ ¼ knðrR  rS Þ · Ux :

(3.27)

When Ux ¼ ð0; 0; ­wðx;yÞ ­x DxÞ, this phase is obtained if an object presents only out-of-plane displacements (e.g., a plate firmly supported at the edges and deformed at the center). In this case, we have 1 1 ­kn½x þy 2 ðR2  R1 Þð1 þ cos uS Þ 2

kLx 

2

­x

Dx;

(3.28)

or 

 R1  R2 kLx  2knð1 þ cos uS Þ xDx: R1 R2

(3.29)

Equations (3.28) and (3.29) represent the phase of a set of parallel vertical fringes with a period that depends on the illumination translation in the outof-plane direction. After the carrier introduction, when the object under analysis is static, it is deformed, and the carrier fringes consequently change their density, depending on the displacements. Therefore, the phase difference of undeformed carriers and distorted carriers is the phase introduced only by object displacements. Figure 3.6 is an example of carrier fringes in the y direction, modulated by a torsion of a metal bar in a shearing interferometer. An alternative approach to carrier generation in shearography has been used to study transient phenomena by using a Mach–Zender interferometer

Figure 3.6 Shearing correlation fringes from a solid cylindrical bar clamped on one end and subjected to torsion on the other end. The fringe density is modulated by the torsion outof-plane displacement.

Electronic Speckle Pattern Interferometers

31

[120,121] and nontransient phenomena by using a large angle of view [122] or with simplified optical setups that use prisms in the lens aperture [123]. When controlled object rotation is introduced, the phase can also be used to extract the surface contour [124,125]. 3.3.2 Contouring in shearography The observation and illumination directions can be exchanged [126] to obtain contouring in shearography. Alternatively, the illuminating source can be rotated by angle b, or, equivalently, the object can be rotated instead [127]. In the latter case, we have previously shown that the b object rotation can be described by the transformation matrix Rb, which for small values of b, sin b  b, such that Rb  I 

0 0 0 0 b 0

! b 0 : 0

(3.30)

For a single illumination with respect to the x axis, we obtain for the shearing interferometer rR  rS ¼ ðsin uS  sin uR ; 0; cos uR þ cos uS Þ, which, for uR ¼ 0, means that   ­z ­x  b Dx; 0; b Dx ; ­x ­x

(3.31)

rR  rS ¼ ðsin uS ; 0; 1 þ cos uS Þ:

(3.32)

Ux;b and

From Eqs. (2.2), (3.31) and (3.32) we finally obtain the derivative of the phase increment due to the object rotation: kLx;b

  2p ­z ¼ b sin uS Dx þ bð1 þ cos uS ÞDx : l ­x

(3.33)

Now the last term in the square brackets of the previous equation produces a constant phase shift for a given rotation. However, the first term in the square brackets produces a phase with fringes proportional to the derivative of the object height, or slope contouring. A slope contour fringe appears when kLx;b ¼ 2p, or, for a single positive phase jump, we obtain l ¼ b sin uS therefore,

­z Dx; ­x

(3.34)

32

Chapter 3

­z l ¼ ­x Dxb · sin uS

(3.35)

are slopes that represent the contours of the object under analysis. The double-image effect obtained in shearing interferometers can be corrected by integration if the shear is over one pixel. However, the shear can be fractional or more substantial than one pixel; therefore, several approaches have been devised to deal with the effects of double images and for the case of multiple simultaneous shears [105,128,129]. After integration or double-image removal, height z can be retrieved. Height in microscopy can also be measured using shearography with a Savart prism that is rotated [130] such that the derivative is proportional to the prism rotational angle. Similar research in shearography has been previously performed for radial and rotational directions of detection [131].

Chapter 4

Illumination and Displacement Detection 4.1 Illumination Using Two Simultaneous Light Sources To introduce the matrix notation for illumination changes, here we review and extend the notation used for the in-plane interferometer, where twin light sources were placed at the same distance from the origin, and along the negative and positive y axis. The optical path difference of Eq. (3.6) was expressed for the two light sources with subscripts (S), (R) and a pose after a matrix transformation such that the components u, v of displacement are detected by the in-plane interferometer. In our previous in-plane example of object rotation, our illumination geometry was determined by the vector difference ½rR  rS0   ½rR  rS  ¼ ðsin uS0 þ sin uS ; 0; cos uS 0  cos uS Þ, which corresponds to our illumination-observation sensitivity, so Eq. (3.16) was obtained by the dot product of our illumination-observation sensitivity vector with the displacement components obtained after pose-rotation in g, which in matrix notation, can be re-expressed as ! ! ! sin uS0 þ sin uS 0 cos uS0  cos uS gy ðrR  rS Þ · Ug 0 : 0 ¼k 0 0 0 k gx 0 0 0 0 (4.1) Using uS ¼ uR ¼ u, we obtained the optical path difference induced by rotation in Eq. (3.16) as kLg ¼ 2kgy sin u. In general terms, to consider three sequential illuminations, we can re-express Eq. (4.1) as 1 0 ð1Þ 1 0 ð1Þ ð1Þ ð1Þ L Lv Lw Lu @ Lð2Þ A ¼ @ Lð2Þ Lð2Þ Lð2Þ AUT ; (4.2) u v w ð3Þ ð3Þ ð3Þ ð3Þ L Lu Lv Lw where the terms in the round brackets after the equal sign denote the illumination sensitivity effects, and the transpose of the pose vector UT carries 33

34

Chapter 4

the result of rigid-body displacements and object displacements after a pose change. The illumination sensitivities are determined by the chosen illumination geometry in which (S,R) are placed, and have been labeled with the superindex (1) for the first sequential illumination, so Lg of Eq. (3.16) is now represented by L(1) for that illumination. The sequential illumination labeled as (2) is replaced by zero if it is nonexistent, as is the second element of the matrix of row (1) if the twin sources are placed along the x axis and have zero contributions along the y axis. Notice that uðj; qÞ; vðj; qÞ; wðj; qÞ depend on the rigid-body motions determined by R, the translation, and the object displacements described by Eq. (2.3). However, for rotation alone, the pose change 1 1 0 uðj; qÞ x þ x0 DUT ¼ @ vðj; qÞ A ¼ ðR  IÞ@ y þ y0 A wðj; qÞ z þ z0 0

(4.3)

should be used instead, as shown in the previous chapter. In the following sections, we will omit the (j,q) dependency for clarity. The matrix of Eq. (4.2) is known as the sensitivity matrix and depends on the kind of illuminationobservation geometry and the interferometer chosen for the experiment. In the next section, the three subsections show how the sensitivity matrix is modified when a single light source is chosen for illuminating at three sequential geometrical positions in a plane x0 -y0 placed a distance z0 from the origin of the x, y, z coordinate system. Although more illuminations can be used for a larger sensitivity matrix, three consecutive illuminations are usually enough for a complete analysis of displacement measurements.

4.2 Three Sequential Illuminations Using a Single Light Source The phase of a single light source along the x0 axis at position 1 in Fig. 4.1 can be represented by 1 0 sin uS  sin uR ðrR  rS · U A @ @ ¼k 0 0 k 0 0 0

10 1 u 0 cos uR þ cos uS Þ A @ v A (4.4) 0 0 w 0 0

if a light source is placed at three sequential positions in the x0 -y0 plane, with each position producing a different displacement detection [132,133]. In the matrix of Eq. (4.2), where the light sources with superscripts (1),(2),(3) are placed to indicate different illumination positions, each element of the 3  3 matrix will depend on the chosen positions for the three different arrangements described in the next subsections.

Illumination and Displacement Detection

35

4.2.1 L-shaped sequential illumination For the sources placed in an L configuration as shown in Fig. 4.1, several components become zero, as shown in the following matrix of optical path differences: 0 ð1Þ 1 0 ð1Þ 1 ! ð1Þ u L Lu 0 Lw ð2Þ @ Lð2Þ A ¼ @ 0 0 Lw A v : ð3Þ ð3Þ ð3Þ w L 0 Lv Lw The zero in the first row arises from the fact that source 1 does not have components of displacement in the y direction. The second row of the sensitivity matrix only has components along the z direction; therefore, the x and y sensitivities are zero; finally, source 3 corresponding to the last row is similar to source 1 but now without detection in the x direction. For an L configuration with combined in-plane and out-of-plane detection using the first row of Table 2.1, we obtain the following matrix of optical path differences: 1 0 ð1Þ 1 0 ð1Þ ð1Þ ð1Þ ð1Þ ! 0 cos uR þ cos uS sin uS  sin uR u L B C ð2Þ ð2Þ @ Lð2Þ A ¼ @ 0 0 cos uR þ cos uS A v ; ð3Þ ð3Þ ð3Þ ð3Þ ð3Þ w L 0 sin u  sin u cos u þ cos u S

R

R

S

(4.5) where for the second row of the 3  3 matrix, the observation and illumination

Figure 4.1 Light source positions for L-shaped sequential illumination 1,2,3: (a) schematic diagram showing the array of sequential illuminations shown in plane x0 -y0 , and (b) diagram of the array showing the 3D light propagation (dashed lines) toward a converging point, where the object under analysis is placed at the origin of the (x,y,z) coordinate system.

36

Chapter 4 ð2Þ

ð2Þ

are parallel to the z axis and therefore uR ¼ uS ¼ 0, and ð2Þ ð2Þ cos uR þ cos uS ¼ 2. Since each illumination is sequentially switched, the ð1Þ ð2Þ ð3Þ observation direction is on the z axis, and we have uR ¼ uR ¼ uR ¼ 0 and ð1Þ ð2Þ ð3Þ cos uR ¼ cos uR ¼ cos uR ¼ 1, from which the previous equation can be reexpressed as 1 0 ð1Þ 1 0 ! ð1Þ ð1Þ u L 0 1 þ cos uS sin uS A v : @ Lð2Þ A ¼ @ 0 (4.6) 0 2 ð3Þ ð3Þ ð3Þ w L 0 sin uS 1 þ cos uS Therefore, we are able to detect mixed in-plane and out-of-plane components from each illumination 1 and 3, and only out-of-plane component for illumination 2. Now we can remove the out-ofplane component by selecting equal angles of illumination for each source ð1Þ ð1Þ position, and subtracting Lð1Þ  Lð2Þ ¼ u sin uS þ wðcos uS  1Þ such that the in-plane component along u is isolated if the second term is extracted. The in-plane component along the v direction is given by ð3Þ ð3Þ Lð3Þ  Lð2Þ ¼ v sin uS þ wðcos uS  1Þ, and the in-plane component along v is isolated if the second term is extracted. The extracted terms for the u and v directions are calculated using the out-of-plane component obtained ð1Þ ð3Þ directly by Lð2Þ ¼ 2w, to obtain either wðcos uS  1Þ or wðcos uS  1Þ. Alternatively, a matrix inversion can be used to obtain the u,v,w components. Using the same sequential L illuminations, derivatives of displacement can be obtained but with a shearing interferometer: 0 ð1Þ 1 0 10 ­u 1 ð1Þ ð1Þ 0 1 þ cos uS sin uS Lx ­x @ Lð2Þ A ¼ @ 0 A@ ­v ADx: (4.7) 0 2 x ­x ð3Þ ð3Þ ­w ð3Þ 0 sin uS 1 þ cos uS Lx ­x ­u ­v ­w The components ­x ; ­x ; ­x can also be obtained by matrix inversion. However, the matrix condition number for each configuration should be checked first using the MATLAB® function cond, where the sensitivity matrix is the 3  3 matrix in the previous equation. In a similar manner but for a shearing interferometer with detection in the y direction, it is possible to obtain ­u ­v ­w ­y ; ­y ; ­y . With all of these derivatives, it is possible to obtain almost all of the components of the strain tensor given by 1 1 0 0 ­u 1 ­u ­v 1 ­u ­w εx g xy g xz ­x 2 ½­y þ ­x 2 ½ ­z þ ­x  ­v ­v 1 ­v ­w C @ g yx εy g yz A ¼ B þ ­u (4.8) @ 12 ½­y ­y ­y 2 ½­z þ ­y  A; 1 ­w ­u 1 ­w ­v ­w g zx g zy εz 2 ½ ­x þ ­z  2 ½ ­y þ ­z ­z

except for the derivatives shown in red, which correspond to z derivatives. The remaining four strains correspond to the 2D imaging of the object’s

Illumination and Displacement Detection

37

surface properties. The derivatives in red require knowledge of the intrinsic properties of the material, and the light does not penetrate most of the materials used with these techniques. Strain is proportional to stress for elastic materials that obey Hooke’s law, so stress can be obtained from strain [134–137] knowing Poisson’s ratio and Young’s modulus of the material under analysis. The lack of knowledge of the derivatives shown in red in Eq. (4.8) limits the number of stress components that can be obtained. 4.2.2 Triangular sequential illuminations: Inverted T and equal angles If a triangular configuration of the illuminating beams is chosen, two alternative configurations can be devised: the first is presented in Fig. 4.2, where two symmetrical illuminations are presented along the x0 axis, and the second is presented in Fig. 4.3, where the sources are equally distributed along a circular path with the same angle among them. In the first case we have 0 ð1Þ 1 0 ð1Þ 10 ­u 1 ð1Þ Lx 0 Lw Lu ­x ­v ð2Þ @ Lð2Þ A ¼ @ Lð2Þ 0 Lw A@ ­x ADx; x u ­w ð3Þ ð3Þ ð3Þ Lx 0 Lv Lw ­x or 1 1 0 ð1Þ ð1Þ ð1Þ ð1Þ 0 ­u 1 ð1Þ 0 cos uR þ cos uS sin uS  sinuR Lx ­x ð2Þ ð2Þ ð2Þ C ­v @ Lð2Þ A ¼ B @  sinuð2Þ 0 cos uR þ cos uS A@ ­x ADx: x S þ sin uR ­w ð3Þ ð3Þ ð3Þ ð3Þ ð3Þ Lx ­x 0 sinuS  sin uR cos uR þ cos uS 0

(4.9)

Figure 4.2 Light source positions for inverted T sequential illumination 1,2,3: (a) schematic diagram showing the array of sequential illuminations shown in plane x0 -y0 with illuminations 1 and 2 at symmetrical positions, and (b) diagram of the array showing the 3D light propagation (dashed lines) toward a converging point where the object under analysis is located and illuminated.

38

Chapter 4

Figure 4.3 Light source positions for equal-angle sequential illumination 1,2,3: (a) schematic diagram showing equal angles for each sequential illumination shown only in the plane x0 -y0 distribution, and (b) diagram with equal angles for each illumination, showing 3D light propagation (dashed lines) toward a converging point where the object under analysis is placed and illuminated.

ð1Þ

ð2Þ

ð3Þ

ð1Þ

ð2Þ

ð3Þ

When uR ¼ uR ¼ uR ¼ 0 and cos uR ¼ cos uR ¼ cos uR ¼ 1, Eq. (4.9) is reduced to 1 0 ð1Þ ð1Þ sin uS Lx @ Lð2Þ A ¼ B @  sin uð2Þ x S ð3Þ Lx 0 0

10 1 ð1Þ ­u 1 þ cos uS ­x ð2Þ C@ ­v A 1 þ cos uS A ­x Dx: ­w ð3Þ ­x 1 þ cos uS

0 0 ð3Þ sin uS ð1Þ

ð2Þ

(4.10)

Therefore, using the same angles, ðLx þ Lx Þ∕2 can be used to extract the ð1Þ ð2Þ out-of-plane component, and ðLx  Lx Þ∕2 can be used for the in-plane component along x. The in-plane component along y can be extracted using ð3Þ ð1Þ ð2Þ Lx  ðLx þ Lx Þ∕2. In the case of a source configuration with equal angles of 2p∕3 along a circle, we have 0

1 0 ð1Þ ð1Þ Lx Lu @ Lð2Þ A ¼ @ Lð2Þ x u ð3Þ Lx 0

ð1Þ

Lv ð2Þ Lv ð3Þ Lv

Or, replacing the sensitivities, we have

10 ­u 1 ð1Þ Lw ­x ­v ð2Þ Lw A@ ­x ADx: ­w ð3Þ Lw ­x

Illumination and Displacement Detection

0

1

0 qffiffi

ð1Þ sin uS

39

ð1Þ sin uR



ð1Þ

ð1Þ

ðsin uS sin uR Þ  2

 ð1Þ Lx B B ð2Þ C B qffiffi  ð2Þ ð2Þ B L C ¼B 3 ðsin uS sin uR Þ ð2Þ @ x A B 4  sin uð2Þ  þ sin u R S 2 @ ð3Þ Lx ð3Þ ð3Þ 0 sin uS  sin uR 0 1 3 4

ð1Þ cos uR

þ

ð1Þ cos uS

1

C C ð2Þ ð2Þ C cos uR þ cos uS C A ð3Þ

ð3Þ

cos uR þ cos uS

­u

B ­x C B ­v C B ­x CDx: @ A

(4.11)

­w ­x

ð1Þ

ð2Þ

ð3Þ

Assuming that uS ¼ uS ¼ uS ¼ u, and uR ¼ 0, by subtraction, we obtain the in-plane derivative component along x as Lð1Þ  Lð2Þ ¼ pffiffiffi 3 sin uDx­u∕­x. The corresponding out-of-plane derivative component is obtained by the combination ðLð1Þ þ Lð2Þ þ Lð3Þ Þ∕3. If inverse matrix calculations are used, this equal-angle configuration (see Fig. 4.4) has proved [59] to give the best conditioning number for uS  30° . Time delays are introduced by using several sequential illuminations; each delay arises in the time it takes to switch from one beam to the next, making this process slow, as it is initiated by an illumination switch that mainly uses mechanical movements of optical components. However, this switching speed limitation can be reduced, and some researches have recurred to pulsed lasers that illuminate the sample at small intervals of time that are synchronized with the camera acquisition speed [138]. Multiplexed carriers have been obtained using pulsed lasers with the addition of speckle patterns [139], or by simultaneous illumination with different CW lasers and the use of different

Figure 4.4 Shearing interferometer with equal angles of illumination simultaneously displaying the three ray paths (in green) in a single photograph. The object under analysis is placed in the intersecting area of the three beams.

40

Chapter 4

carriers in a single speckle pattern [140–147]. The spatial combination of sensitivities in a single camera frame has been recently explored [148]. Recent strategies have also allowed the combination of out-of-plane and shearing interferometers with multiple or single cameras [149,150]. Moreover, the multiplexing of several shears has been proposed for real-time wavefront sensors [151].

Chapter 5

Transient Displacement Analysis Whole-field displacement-detection techniques using electronic speckle pattern interferometry (ESPI) are reviewed in this chapter using pulsed and CW lasers for displacement measurement of fast, transient phenomena.* The capabilities of the leading optical setups used for speckle interferometry to study transient displacement events are discussed in terms of camera synchrony for the acquisition of the speckle patterns; these setups are compared with recently developed techniques. Since 1978, considerable interest has been shown in the measurement of high-speed mechanical displacements using ESPI or TV holography [152–156]. Following the implementation of the first techniques, major improvements were brought by the development of pulsed lasers, and the applications of pulsed ESPI started to emerge [154,157,158]. Nowadays, the acquisition speeds achieved by electronic cameras are so high that they overcome the time sampling interval achieved with the old technology of pulsed lasers. As a consequence, CW lasers can be used with high-speed cameras [159]; if the light is not sufficient, micro-spheres can be used to enhance the reflectivity [160]. On the other hand, laser technology continues to evolve, and new types of pulsed lasers have been developed; in particular, ultrasound technologies have evolved to detect transient in-plane and out-ofplane displacements on the order of sub-nanometers [161]. The following sections first review the leading synchronization schemes [162] used to date when pulsed and CW laser illuminations are used for recording speckle patterns, and secondly they discuss the main setups that use this kind of synchrony.

*For practical purposes in this book, a transient event is defined as a displacement that can be non-repeatable or repeatable, and which has a duration of less than 1/30 of a second, i.e., one TV frame time.

41

42

Chapter 5

5.1 Recording of Transient Events The first technology that was available to record transient events was the film camera used in cinematography. The film for these cameras ran at 30 frames per second (fps), and further improvements produced rotating-drum-type film cameras with capabilities of hundreds of nano-second exposures among consecutive frames [163]. The speed of electronic cameras has been evolving. Still, the most common frame speed is 30 fps (due to the human eye’s time response), which was preserved until the time of the development of actual digital cameras. Even so, specialized high-speed digital cameras can track a bullet traveling at 120 to 1200 m/s or a balloon explosion at 4500 fps. However, freezing a movement implies that the image obtained by the camera does not change during the recording. In interferometry, this last requisite might prove challenging. 5.1.1 Recording a simple fast event To start with a simple analysis of a fast event, we can define the time that elapses between two consecutive frames as t and Dt, the camera integration time of each frame, where we assume that an object moving in the image has been registered, as shown in Fig. 5.1. As Dt ≪ t, we usually assume that the image is frozen in the interval period Dt due to the short integration time. Then, if the event has a movement with a maximum velocity of v, it can be analyzed if nt , x, where x can be either the maximum spatial distance of the field of view in a frame or the distance for which a transient event can be captured twice inside the field of view. Therefore, events faster than v will occur when the object is out of the field of view and will not be registered.

Figure 5.1 Two consecutive camera frames with a running man as the object being registered after a time lapse t.

Transient Displacement Analysis

43

Under these assumptions, the goal of an inspection apparatus is to determine t, Dt, and v to successfully inspect the event. It is also easy to recognize that, if t and Dt are small, or if v is too high, realization of the inspecting apparatus can become a real challenge.

5.2 Recording Interferometric Events If the camera is now used to register interference patterns, the transient event can be perceived using the interference patterns in two main scenarios in which the wrapping periodicity and the possibility of local phase shifts and transformations over the observed field of view are inherent. The simplest is an interference pattern that can be observed over the whole field of view. Alternatively, the observation of a local-traveling interference pattern that is much smaller than the whole field of view can also be obtained. Let us assume that we would like to analyze the first scenario: a wrapped interference pattern that can change quickly over a whole field of view. In this case, we can be sure that a freezing of the fringe pattern can be useful if we avoid the wrapping. Otherwise, the unknown wrapping would produce uncertain measurements. Therefore, as interference is based on periodic signals, the maximum allowable phase change kL in any part of the whole field of view should be ≪2p before wrapping occurs: kLðx; y; tÞ , 2p;

(5.1)

where k ¼ 2p/l, and t ¼ Dt, with t denoting time and l denoting the wavelength of the light. The velocity of the interference pattern depends on the evolution of the optical path difference L(x, y, t) and is dictated by the phase change kL(x, y, t) in the interference pattern. In order to sample the interference pattern without introducing phase wrapping between two consecutive fringe patterns, and in terms of the OPD, we need to have Lðx; y; tÞ , l:

(5.2)

Additionally, the spatial resolution and sampling of the fringe pattern become crucial parameters as the cameras allow a maximum spatial frequency to be registered. Figure 5.2 shows a l/4 optical path difference over the whole field of view for two consecutive fringe patterns. The fringe patterns are formed by two plane wavefronts inclined by a small angle along the horizontal direction. The top fringe pattern is at time zero, and the bottom one is at time t. As explained previously, if any transient displacement produces shifts of the fringe pattern in the field of view by introducing an OPD more significant than the limitation imposed by Eq. (5.2), the fringe order will produce wrapped-phase values. However, there are ways to overcome this limitation by decreasing the sensitivity of detection using synthetic-wavelength methods

44

Chapter 5

Figure 5.2 From top to bottom: two consecutive camera frames of a whole-field interference fringe pattern with an optical path difference of L(t) ¼ l/4, showing how the fringes travel from one whole region to another whole region after a time lapse t.

such that more significant OPDs can be analyzed without wrapping of the fringe patterns [73]. A second scenario in the acquisition of fringe patterns is the appearance of local fringe patterns in different positions of the whole field of view. Figure 5.3 shows an example of a local wave pattern arising at different positions over the field of view. The local fringe pattern is associated with a simulation of a Rayleigh wave that deforms a solid material locally as it travels through it. This kind of wave is slower than primary waves and travels at approximately half the speed of primary longitudinal waves. Even though this is slower than primary waves, fringe pattern acquisition has proved to be difficult, especially in a whole field of view. Nonetheless, using interferometric setups, Rayleigh waves have been detected [164] traveling at speeds of 2888 m/s in aluminium. Similar optical methods have been developed to detect ultrasonic waves [165], and techniques for whole field-of-view detection using pulsed-laser technology has been a subject of research, as these techniques provide additional information of the whole field of view. Pulsed lasers [166,167] historically allowed for the capture of Lamb waves in two dimensions traveling along plates (see the background theory in Ref. [168]). In this handbook, only whole-field optical techniques are of interest, so performing measurements requires freezing the fringe patterns. The camera integration time or pulsed illumination can be used to freeze the traveling waves—either Rayleigh or Lamb—that can be transformed into fringe patterns by the optical setups. Table 5.1 shows the typical speeds of longitudinal wave propagation in four different materials.

Transient Displacement Analysis

45

Figure 5.3 Simulation of a local fringe pattern associated with a traveling Rayleigh wave at three consecutive times: (a), (b), and (c), where the local displacement of the Rayleigh wave is also displaced in time along the horizontal direction. Therefore, a pixel detecting interference in the transient displacement path will sense a modulated periodic signal.

Table 5.1 Approximated longitudinalwave speeds for mechanical propagation in diverse materials. Air

300 m/s

Perspex®

 2700 m/s

Steel

 5900 m/s

Aluminium

 6400 m/s

46

Chapter 5

As can be appreciated from this table, if the sound waves locally deform the material in which they travel, the propagation velocities can be very high, even with Rayleigh waves. However, using lens magnification over a field of view of the chosen interferometer, a framed area of a size on the order of meters can be reduced by the magnification to an order of millimeters in the camera detector. An analogy can be seen when recording racing cars with a video camera: if the field of view is a few hundred meters, we can capture the car traveling at video speeds; however, if we zoom the field of view, the speed of video acquisition should be increased to capture the car in the field of view. Therefore, acquisition using a magnification factor of the camera lens helps to gather the images of the traveling phenomenon, sampling it at reduced resolution but for different local areas of the whole field of view. Additionally, the excitation of the mechanical waves in materials has been challenging, requiring specialized transducers, and was recently improved by laser-induced plasma shock-wave generation [169]. The following section shows the most common methods that have been developed for camera acquisition using pulsed lasers or short camera integration times with CW lasers. The fringe patterns can be replaced by speckle patterns, whose phases can be measured in several ways, also taking into account the same wrapping and sampling issues of fringe patterns.

5.3 Camera Acquisition and Synchrony Methods Using Pulsed Lasers A variety of procedures have been developed to gather speckle patterns, depending on the laser technology and the available camera hardware. When pulsed lasers are used, the camera synchrony should be tailored to capture the light pulses [158]. Four basic synchrony methods depend on the laser and camera capabilities as well as the repeatability of the transient event that is sought to capture; these methods are described in the next sections. 5.3.1 Method 1: Single pulse per camera frame The most straightforward acquisition process is to sample the transient event with enough frequency that the event can be faithfully reproduced at any time. However, there are three alternatives for speckle pattern acquisition. In the first case, the phase of the speckle pattern can be extracted from two speckle patterns that might be consecutive or separated in time. In the second case, the phase of the speckle pattern can be extracted in each speckle pattern to obtain the phase difference; this is also known as single-shot phase retrieval. The third option is to record a single speckle pattern or several phase-shifted speckle patterns while the object is static, and one (or several phase-shifted) speckle pattern when the transient event is sampled, so phase shifting can be used to extract the phase difference; see, for example, Refs. [170,171]. A

Transient Displacement Analysis

47

Figure 5.4 Method 1: The top plot shows a camera signal with arrows representing laser pulses in synchrony with the camera frames. The time lapse T is the same between camera frames and pulses. The bottom plot shows an out-of-plane transient displacement in arbitrary units of displacement along the y axis, detected by measuring phase changes in a camera pixel, with the sampled displacements shown as dots, and the dashed lines showing the laser illumination in synchrony with the displacement under study. The transient displacement evolves slower than the sampling ratio of the laser pulses. Table 5.2 Velocities obtained for a displacement l/2 (at l ¼ 632 nm ) using method 1 (fps is frames per second, and T is the time between consecutive frames). fps

30

500

1000

10,000

200E6

T

33 ms

2 ms

1 ms

100 ms

5 ns

Velocity

10 mm/s

158 mm/s

316 mm/s

3.16 mm/s

63.2 m/s

slightly different option is to use a series of consecutive speckle patterns that are processed to extract a temporal phase difference, which after accumulation can be used to obtain the final phase; see, for example, Ref. [172]. The first alternative is designed to achieve a fringe pattern obtained from the subtraction correlation of two speckle fields whose fringes correspond to a carrier phase plus a change in the state of deformation among the sampled patterns. In the second method, each sample provides the instantaneous phase (carrier plus displacements caused by deformation). In this case, each camera frame is synchronized with a periodic laser pulse or with an integration time interval of a camera; the period T is the same for the camera and the pulses or integration interval (Fig. 5.4). If a transient event is analyzed using this method, the sampling intervals of period T must be short enough to sample the transient displacement below the Nyquist limit (Table 5.2). 5.3.2 Method 2: Single-pulse time delay per camera frame In an alternative method to analyze repeatable transient events using lowrepetition-rate pulsed lasers (frequency less than 1/T), a pulse is used to sample

48

Chapter 5

the transient displacement at different intervals. In this method, sampling of transient displacement is possible, but only for short-term, repeatable displacements [173,174]. Each pulse delay has been previously incremented to each frame repetition such that an appropriate sampling of the transient event is obtained. Using this kind of repeatable transient event allows the use of slower pulsed lasers but restricts the kinds of transient events that can be analyzed. On the other hand, high-repetition-rate lasers allow pulse repetitions at frequencies of 1/T that can be used to capture a transient event in synchrony with a high-speed camera. In this case, the previous technique can still be used for analyzing a repetitive transient event, as shown in comparison with method 1 in Figs. 5.5(b) and (c). Method 2 can be summarized as follows: • •

Acquisition of a single speckle pattern or several phase-shifted speckle patterns is achieved when the object under analysis is static. Single-shot acquisition of a single speckle pattern or several phase-shifted subsequent speckle patterns is achieved when the object under analysis is subjected to a repeatable transient displacement in each camera frame.

Figure 5.5 Comparison of methods 1 and 2: (a) In method 1, each pulse is in synchrony with each camera frame. (b) In method 2, the pulse synchrony is adapted to sample a repetitive transient displacement using pulse delays, and to sample a transient displacement in synchrony with the camera frames, as shown in part (c). The transient displacement must be shorter in time than the frame time and is repeated in synchrony with each camera frame but sampled at different points by the pulses in part (b) that are shifted by the additional time delay DT.

Transient Displacement Analysis •

49

The pulse phase delays control the synchronization of pulses with repeatable transient displacements: each pulse delay has been previously incremented to each frame repetition such that an appropriate sampling of the transient event is achieved.

As the phase can be extracted from a single speckle interferogram (or several) and a repetitive transient displacement can be obtained from mechanical strain experiments, a phase-delay synchrony can be produced in the laser pulses of each frame. The repetitive transient displacement represented schematically in Fig. 5.5(c) can be analyzed by sampling it with a pulse represented in part (b) that shifts the signal sampling at each frame due to a difference among the delayed pulses and the repetitive transient displacements. The laser pulse duration sets the main limitation of this method; for example, to obtain ten samples of a transient mechanical event with a laser pulse width of 20 ns spaced in time by 100 ns, the event would need to last 1 ms, achieving a maximum speed of 316 cm/s, which is one order of magnitude faster than the speed obtained by trying to capture a transient event with light pulses in each frame using a high-speed camera. Either method 1 or 2 can make use of the the temporal-phase-unwrapping technique using single-shot phase maps with high-speed cameras in terms of the sampling intervals discussed in the two methods. However, sampling of a repetitive transient displacement using method 2 with the temporal-phaseunwrapping method would give similar time resolutions, also achieving the highest velocities of those two methods, especially if the laser pulse duration is short and the electronics allows the synchronization of short pulses with the repetitive transient displacement. Although some lasers emit high-frequency pulses that are shorter than the camera interframe times, new laser technologies are necessary to get enough power from shorter pulses and single-frequency operation for ESPI transient event analysis. An alternative to using pulsed lasers is to use high-power CW lasers with short acquisition times in the camera. An example of temporal phase unwrapping without pulsed lasers allowed for the capture of events at 1 kHz [175]. Even though the use of a repetitive transient event with pulsed lasers allowed for an increase in the speed detection limit, the camera technology was still lagging at the onset of the implementation of these methods, and the technique of twin-pulse ESPI was suggested as an alternative. 5.3.3 Method 3: Twin pulses per two camera frames The use of twin-pulse lasers provided some additional advantages for the application of ESPI in industrial environments [153]. The first double-pulse lasers used in ESPI were of the ruby type, which were Q-switched twice within one flash tube cycle to produce two laser pulses in a time interval between

50

Chapter 5

10 ms and 1 ms with a pulse width of about 20 to 50 ns. Unfortunately, if new interferograms were required, the laser could give only one set of pulses every 10 s. This low rate greatly limited the supply of continuous images for TV rates. A preliminary solution to this problem was the Q-switched Nd:YAG laser [176], which was able to sustain double-pulse repetition at TV rates. However, when operating in the Q-switched mode, fluctuations of intensity (spatial and temporal) introduce a deficient quality to the visibility of the fringe pattern. In order to overcome the fluctuations of intensity, the use of two identical cavities and a seeding diode laser has been shown to reduce the temporal instability in intensity. This increased stability is seen when both oscillators are seeded by the same diode laser so as to produce two mutually coherent pulses with a variable separation [177]. This laser was capable of producing an unlimited supply of twin pulses and reducing the pulse separation even further. It also gave unprecedented advantages for the analysis of transient events. Although lasers evolved to produce shorter times between pulses, the camera technology did not evolve following this shorter time capability. Therefore, standard cameras were modified to cope with the small times required to analyze transient events [176]. Acquisition times of 200 ms for a single-cavity Nd:YAG double-pulse laser have been achieved with an interlinetransfer CCD. However, because a single pattern was possible, phase-shifting techniques were impossible to apply. To overcome this problem, the use of larger speckle sizes with tilted reference beams allowed for the encoding of several phase steps before correlation [155,178]. However, this approach has been tested only using single-pulse ruby lasers. Nonetheless, transient events with time resolutions higher than 30 ms have been analyzed using this method, with the consequent disadvantage of the low repetition rate. Figure 5.6(a) is a schematic diagram of method 3 for synchrony of twin pulses in which a reduction of the sampling interval is achieved by using the minimum interframe time (known as the blanking interval) in the acquisition process, and using slow cameras and correlation by subtraction. This technique can be summarized as follows: 1. 2.

Acquisition of two speckle patterns is achieved when the object under analysis is deformed or displaced by a repetitive transient event. The synchronization with the repetitive transient event is adjusted by using adaptable delays among the twin pulses to sample the event.

A study of the propagation of Lamb waves on thin plates [166] shows how this synchrony method has been successfully applied. In this case, the interline camera spacing was 1 ms, allowing for the use of a twin pulse spaced by 1.5 ms. Although Lamb and Rayleigh waves can propagate on aluminium at  3000 m/s, traveling over 0.1 m in 33.3 ms, the reported twin-pulse spacing can successfully register the movement of Lamb waves. The amplitude of

Transient Displacement Analysis

51

Figure 5.6 (a) Method 3: Synchrony among twin pulses and frames is designed to achieve correlation by subtraction of two consecutive speckle fields, or for phase acquisition with each pulse: T1 and T2 are adjusted to modify the twin pulses on either side of the blanking interval, between two consecutive frames. (b) Method 4: The twin-pulse acquisition process is faster if the twin pulses are acquired within the same frame, but this produces a correlation addition of speckle patterns: T1 and T2 are adjusted to fit the twin pulses within the frame.

Lamb waves, however, should be carefully restricted to proven nanometric amplitudes to avoid wrapping of the interference patterns [167]. Another exciting application of this method is the study of blade rotation, where the rotation of the blades replaces the Lamb waves. Several components of displacement can be analyzed in blades using unique illumination arrays and synchronization schemes in digital holography, including software de-rotation [179,180]. 5.3.4 Method 4: Twin pulses per frame When twin pulses are adjusted to fall on either side of the blanking interval (the time elapsed from frame to frame or on either side of the interline transfer gate in old cameras), the maximum OPD velocity of the object under analysis can be defined by l/(2t), where t is the time elapsed between pulses before wrapping occurs in the interference patterns. With camera technology improving to frames of 30 ms, a single subtraction fringe pattern obtained from two speckle frames can be obtained, achieving a maximum OPD speed of 1 cm/s for l ¼ 632 nm. This speed and even higher speeds can be obtained using high-speed cameras, with the added bonus of a better sampling rate for the displacement. In the twin-pulse mode, two pulses are fired during one exposure. As a consequence of the small-time separations, typically from 10 ns to 500 ms, both interferograms are added in the camera to produce a single video image. Even such an increased speed can be obtained with this method. The main drawback is that it generates additive-correlation fringe patterns, which

52

Chapter 5

Figure 5.7 Method 4: (a) Twin-pulse additive-correlation fringe pattern obtained in one camera frame. (b) Enhanced fringe pattern after subtraction of local means.

require a different process for imaging. Therefore, specialized image processing techniques were developed to obtain good contrast in these correlated patterns [181], as well as techniques capable of dealing with the fast optical switching capabilities of the pulses [182–184]. Figure 5.6(b) is a synchronization diagram for this technique, which can be summarized like the previous method but with the enhancement of additive-correlation fringes. The interframe time of high-speed cameras is fixed by design; however, the time between twin pulses can be shortened, surpassing the acquisition speed of high-speed cameras. A typical enhancement of additive-correlation carrier fringes is presented in Fig. 5.7: the correlation by addition of two speckle patterns is presented in part (a), and the enhanced fringe pattern using the technique of local subtraction of means [181] is shown in part (b). Rotation phenomena have been studied using this method of twin pulses in the same frame, and controlled synchronization schemes have separated the combined effects of rotation, displacements, and vibration [185].

Chapter 6

Phase Detection Phase can be detected instantaneously using a single speckle pattern or using the time to gather a series of speckle patterns. The instantaneous approach, also called single-shot phase detection, gives a single speckle pattern in which a spatial carrier is embedded, but phase extraction from this kind of pattern is limited in bandwidth. More than one speckle pattern is needed to process the whole bandwidth and to extract phase values using phase-shifting procedures described in the literature [31,186]. The acquisition of more than one speckle pattern requires time, and time is related to the properties of the object under analysis, such as deformation or translation, and ambient thermal drifts. In theory, the object’s properties must remain unchanged for each measurement, but in practice, they keep evolving for each measurement. Due to thermal and other noise effects in the acquisition process, the phase values are usually noisy and distorted, and smoothing procedures must be introduced in the numerator and denominator of the arctangent function that is always used in the final steps of phase extraction.

6.1 Carriers in Single-Shot Phase Detection The carrier-fringe methods rely on a fringe pattern of known frequency that is later modified by a phase change, as was first suggested by Ichioka and Inuiya [187] in interferometric setups, and was later used in fringe projection systems, as shown in Fig. 6.1, where a typical fringe carrier is processed. This example of the technique known as digital fringe projection [188] shows how the spatial carrier fringes created by projecting a consinusoidally varying intensity with a light projector over a plane surface shown in Fig. 6.1(a) are modulated by the height of a pyramidal object in Fig. 6.1(b). Only a single image of the fringe pattern is needed to obtain the phase if the frequency of the fringes is known as a priori information. Therefore, the departure from straight, vertical fringes conveys the phase information. A sign-corrected phase map can be obtained if a priori knowledge is introduced in the fringe pattern. This knowledge can be expressed as a


Figure 6.1 Carrier fringes from a fringe projection setup. (a) Fringe pattern projected over a plane surface showing only the spatial carrier fringes. (b) Fringe pattern projected when a pyramid-shaped object is placed on top of the plane surface, showing the modulated carrier fringes.

The carrier phase method has been extensively used since the 1950s in electronic communications systems [47]. Several names have been associated with this technique, such as quadrature demodulation (in electronics), spatial synchronous detection (SSD) [189], and space heterodyne demodulation of fringe patterns or direct-measuring interferometry [190].

6.1.1 Spatial synchronous detection

In order to introduce a constant phase change along a single direction, the optical wavefront must first be interfered with a plane reference wavefront expressed by

U(x) = \exp(ikx),   (6.1)

where now k = 2π/λ_c, with λ_c denoting the spatial period of the carrier. Assuming that the original wavefront has an optical path difference of L(x, y), the interference pattern will produce an intensity fringe pattern of the form

I(x, y) = \left[\exp(ikL(x, y)) + \exp(ikx)\right]\left[\exp(-ikL(x, y)) + \exp(-ikx)\right]   (6.2)
        = 2 + \exp(ikL(x, y) - ikx) + \exp(-ikL(x, y) + ikx).   (6.3)

Although this equation is based on the interference of CW light, the same kind of fringe patterns can be found in Fig. 3.2 and in a fringe projection setup (see Fig. 6.3). For the fringe projection, this equation gives a set of equispaced fringes, or carrier fringes, along the x direction, as in Fig. 6.1(a), for a plane object with L(x, y) = 0. When L(x, y) is the optical path difference from a pyramidal object placed over a plane, it gives fringes as shown in Fig. 6.1(b). When this interference (or projected) pattern is analyzed in frequency space, it can be seen that two symmetrical phase terms are formed with respect to the zero order. A direct analogy can be seen in hologram image formation, where the process of reconstruction by illuminating with the same reference beam can reproduce the phase of the object and the other associated terms. Similarly, multiplication of Eq. (6.3) by Eq. (6.1) will give

I(x, y)\,U(x) = 2\exp(ikx) + \exp(-ikL(x, y) + i2kx) + \exp(ikL(x, y)),   (6.4)

where the last term contains the isolated phase term Δφ = kL(x, y); therefore, this multiplication is equivalent to a shift of one of the carrier lobes to the center of the frequency spectrum. Next, if a low-pass Fourier filter is applied to this equation, the first two terms can be eliminated, and the remaining term will thus contain only the optical phase difference. Then, the phase can be extracted by using the real and imaginary components of the filtered result:

\Delta\phi = \arctan\!\left[\frac{\mathrm{Im}\,\bar{I}(x, y)}{\mathrm{Re}\,\bar{I}(x, y)}\right],   (6.5)

where Δφ is the difference of phase, and Ī is the low-pass-filtered version of I. This phase matches the difference of phase between the curved fringes [Fig. 6.1(b)] and the carrier fringes [Fig. 6.1(a)], as shown in the example of wrapped phase presented in Fig. 6.2. The main advantage of the phase extraction technique that leads to Eq. (6.5) is that it can determine the phase map without sign ambiguity from a single fringe pattern. Although a single fringe pattern is needed (contrary to phase-shifting techniques), it has been shown that phase-shifted fringe patterns can be obtained from the open fringes produced by the carrier [191]. Eventually, any phase-shifting algorithm can also be applied. It must be noticed, however, that a different sign in the carrier frequency would produce an inversion of the phase values. Also, in practice, the carrier frequency should be carefully calculated to deal with sub-pixel resolution from the spectral peak [192] and noise reduction by tolerant methods [193], or by estimation of the local orientation, direction, and magnitude of a frequency field using regularization techniques [194,195], or by using the windowed Fourier transform [196] to deal with nonlinear carriers.
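The complete SSD chain just described (multiplication by the carrier, low-pass Fourier filtering, and the arctangent of Eq. (6.5)) can be sketched in Python as follows; the carrier period, the cutoff choice, and the function interface are assumptions of this illustration.

import numpy as np

def ssd_phase(I, lambda_c, cutoff=0.5):
    """Spatial synchronous detection along x (Eqs. 6.1-6.5, sketch).

    I        : 2D fringe (or interference) pattern
    lambda_c : spatial period of the carrier, in pixels
    cutoff   : low-pass cutoff as a fraction of the carrier frequency
    """
    ny, nx = I.shape
    k = 2 * np.pi / lambda_c
    x = np.arange(nx)
    # Multiply by the carrier: shifts one lobe of the spectrum to zero frequency
    demod = I * np.exp(-1j * k * x)[np.newaxis, :]
    # Low-pass Fourier filter around zero frequency
    F = np.fft.fft2(demod)
    fx = np.fft.fftfreq(nx)
    fy = np.fft.fftfreq(ny)
    FX, FY = np.meshgrid(fx, fy)
    fc = cutoff / lambda_c                 # cutoff in cycles per pixel
    F[np.sqrt(FX**2 + FY**2) > fc] = 0.0
    I_bar = np.fft.ifft2(F)
    # Eq. (6.5): phase from the real and imaginary parts of the filtered result
    return np.arctan2(I_bar.imag, I_bar.real)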

Figure 6.2 Wrapped phase map obtained from subtracting the phase maps of the two fringe patterns of Figs. 6.1(a) and (b), using phase extraction methods based on phase shifts or, equivalently, using Eq. (6.5).

Figure 6.3 Fringe projection setup for 3D measurement. (a) Optical setup for 3D measurement using half-field-of-view projection. The projector optical axis is parallel to the camera optical axis, and P is a projector with half-field-of-view image projection due to a lateral shift of the light source. C is a standard camera with a detector centered on its optical axis. (b) 3D height measurement of a pyramidal object obtained from the unwrapped phase [Eqs. (6.7), (6.6), (6.8), and (6.9)], using Δφ from the temporal phase unwrapping method described in Section 6.7 (units in millimeters for all axes).

An optical configuration for fringe projection designed to compensate for magnification from the projection and camera lens systems is presented in Fig. 6.3(a) (see, for example, Ref. [197]). This configuration can be used to extract the height of the objects using the unwrapped phase of Eq. (6.5), represented by Δφ_u, using

h(x, y) = \frac{\Delta\phi_u}{k_{h(x,y)}\tan\theta},   (6.6)

where the depth wavenumber is defined as k_{h(x,y)} = 2π/λ_{h(x,y)}, with λ_{h(x,y)} being the distance between maxima (or minima) along the x axis of the projected fringes for a given height h(x, y) with respect to a reference plane, and θ is the angle defined by l_0 and d_0 [see Fig. 6.3(a)]. Similar or approximate relations between height and phase can be found in the literature for other optical arrangements [198,199]. One of the most common problems introduced by projectors is the fringe divergence that changes λ in the field of view for different positions in the depth of field (DOF); corrections for this divergence have been studied using linear approximations [200] and nonlinear interpolation [201]. Therefore, after the unwrapped-phase calculation, a final procedure should be used to correct the effects produced by the magnification and divergence of the fringe patterns over the depth of field. The linear approach calculates the changes of λ and magnification over the DOF by simultaneously iterating Eq. (6.6). For the linear change of fringe density, it is assumed that λ = λ_0 is known at the plane h(x, y) = 0, and that the period of the fringes, extrapolated from its linear change under the diverging fringe pattern, becomes zero at a distance h(x, y) = l_0. This period can be approximated by

\lambda_{h(x,y)} \approx \lambda_0\left(1 - \frac{h(x, y)}{l_0}\right),   (6.7)

which corrects the initial fringe density depending on the initial height given by Eq. (6.6). The same approach can be used to calculate the effect of linear magnification, leading to a change of the coordinates from (x_0, y_0) at h(x, y) = 0 to the coordinates

x_{h(x,y)} = x_0\left(1 - \frac{h(x, y)}{l_0}\right),   (6.8)

y_{h(x,y)} = y_0\left(1 - \frac{h(x, y)}{l_0}\right),   (6.9)

when h(x, y) ≠ 0. Assuming that other kinds of distortions can be neglected, a few iterations of Eqs. (6.6), (6.7), (6.8), and (6.9) correct the initial phase measurements to produce the final 3D coordinates with increased precision.
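A minimal sketch of these few iterations is given below, assuming a fixed number of iterations and the parameter names shown (dphi_u, lam0, theta, l0); it is only an illustration of Eqs. (6.6)–(6.9), not the exact procedure of Refs. [200,201].

import numpy as np

def fringe_projection_height(dphi_u, x0, y0, lam0, theta, l0, n_iter=3):
    """Iterate Eqs. (6.6)-(6.9): height plus linear divergence/magnification correction.

    dphi_u : unwrapped phase map from Eq. (6.5)
    x0, y0 : pixel coordinate grids at the reference plane h = 0
    lam0   : fringe period at the reference plane (same units as x0, y0)
    theta  : projection angle defined by l0 and d0 in Fig. 6.3(a)
    l0     : distance at which the extrapolated fringe period vanishes
    """
    lam_h = np.full_like(dphi_u, lam0, dtype=float)
    h = np.zeros_like(dphi_u, dtype=float)
    for _ in range(n_iter):
        k_h = 2 * np.pi / lam_h               # depth wavenumber used in Eq. (6.6)
        h = dphi_u / (k_h * np.tan(theta))    # height estimate, Eq. (6.6)
        lam_h = lam0 * (1 - h / l0)           # Eq. (6.7): fringe-period correction
    x_h = x0 * (1 - h / l0)                   # Eq. (6.8): magnification correction
    y_h = y0 * (1 - h / l0)                   # Eq. (6.9)
    return x_h, y_h, h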

6.1.2 Fourier transform method for difference of phase

Nonetheless, in many applications, the phase of interest is the difference of phase Δφ, or the phase change from an object that evolves in time from a motionless state at time t to a motionless state at time t + dt. In this case, the multiplication of Eq. (6.3) by Eq. (6.1) is no longer necessary. The Fourier transform method proposed by Takeda et al. [52,202] for high spatial-carrier frequencies, or by Kreis [53] including low spatial-carrier frequencies, is enough for phase-difference purposes; even so, the carrier frequency is still needed because the Fourier transform method can introduce undesirable phase sign inversions if closed fringe patterns are processed [203]. If a cosinusoidal fringe pattern with carriers in two dimensions is described as

I(x, y) = a(x, y) + b(x, y)\cos(\phi(x, y) + u_0 x + v_0 y),   (6.10)

where u_0 and v_0 are the carrier frequencies, then the pattern can be re-expressed as

I(x, y) = a(x, y) + c(x, y) + c^{*}(x, y),   (6.11)

where c(x, y) = \tfrac{1}{2} b(x, y)\exp[\,j(\phi(x, y) + u_0 x + v_0 y)\,]. It can then be shown that the Fourier transform of Eq. (6.11) has three components in the spatial frequency domain: the zero-frequency peak and two components that carry the phase information of the fringes:

I(u, v) = A(u, v) + C(u + u_0, v + v_0) + C^{*}(u - u_0, v - v_0).   (6.12)

Then, by bandpass filtering the amplitude spectrum in the +u and +v half-planes, the zero-frequency peak A(u, v) and the negative spatial frequency component C^{*}(u - u_0, v - v_0) are filtered out. As the remaining spectrum C(u + u_0, v + v_0) is no longer symmetrical, its inverse Fourier transform yields a real part Re{c(x, y)} and an imaginary part Im{c(x, y)}. Then, the wrapped phase φ(x, y) + u_0 x + v_0 y between −π and +π can be calculated pointwise by

\phi(x, y) + u_0 x + v_0 y = \arctan\!\left[\frac{\mathrm{Im}\{c(x, y)\}}{\mathrm{Re}\{c(x, y)\}}\right].   (6.13)

Figure 6.4 Difference of phase: (a) phase φ_1 before object displacement, (b) phase φ_2 after object displacement, and (c) difference of phase Δφ_w wrapped to 2π by Eq. (6.14) and smoothed with a median filter (images provided by C. I. Caloca-Méndez).

This phase can have sign inversions if the phase does not include carriers and results from closed fringe patterns; however, with the open fringes produced by the carrier phase, the sign problem is avoided, and the difference of phase can be used without introducing phase discontinuities. This last procedure is commonly used in digital holography, as shown in Fig. 6.4, where two holograms, in which a reference beam is used to introduce the carrier phase, produce the two independent phase maps and their difference of phase. The absolute value of the phase difference from two independent phase measurements (1, 2) is usually greater than 2π and can be considered as an unwrapped phase Δφ_u = φ_1 − φ_2; therefore, to obtain phase values from −π to π (the wrapped phase map, denoted by the subscript w), it is necessary to wrap the phase differences that fall outside these limits. This is easily achieved by the following re-wrapping equation [204]:

\Delta\phi_w = \Delta\phi_u - 2\pi\,\mathrm{round}\!\left(\frac{\Delta\phi_u}{2\pi}\right),   (6.14)

where round is a MATLAB® command that rounds the quotient Δφ_u/2π to the nearest integer. Therefore, once the phase map φ_1 [Fig. 6.4(a)] is obtained by any method, including phase-shifting methods, the difference of phase can be obtained if another phase map φ_2 [Fig. 6.4(b)] is subtracted to produce Δφ_u = φ_1 − φ_2; using Eq. (6.14), the wrapped phase map can then be obtained, as shown in Fig. 6.4(c). This same phase map of the difference of phase can be obtained if, before calculating the phase, the numerators and denominators of the arctan function are used, as will be shown in Section 6.7.
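The following Python sketch combines the Fourier transform method of Eqs. (6.10)–(6.13) with the re-wrapping of Eq. (6.14); how the spectral mask is built, and the function names, are assumptions of this example.

import numpy as np

def ft_wrapped_phase(I, carrier_mask):
    """Fourier-transform method (Eqs. 6.10-6.13, sketch): keep one spectral lobe.

    I            : 2D fringe pattern with a spatial carrier
    carrier_mask : boolean 2D array selecting the lobe C around the carrier
                   in the fftshifted spectrum; building it is left to the user
    """
    F = np.fft.fftshift(np.fft.fft2(I))
    c = np.fft.ifft2(np.fft.ifftshift(F * carrier_mask))
    return np.arctan2(c.imag, c.real)   # wrapped phase plus carrier term

def rewrap(dphi_u):
    """Eq. (6.14): wrap a phase difference back into (-pi, pi]."""
    return dphi_u - 2 * np.pi * np.round(dphi_u / (2 * np.pi))

# Difference of phase between two states (the carrier term cancels in the subtraction):
# phi1 = ft_wrapped_phase(I1, mask); phi2 = ft_wrapped_phase(I2, mask)
# dphi_w = rewrap(phi1 - phi2)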

6.1.3 Carrier phase limits

In evaluating its overall performance, it is useful to estimate how this method compares to conventional techniques. Vlad and Malacara [190] provide a typical analysis of carrier fringe methods and their limitations, and Green et al. [205] specifically investigate the limitations of the Fourier-domain phase extraction technique. The reader is referred to these publications for a full theoretical analysis; here, a brief description of the practical consequences is presented [203]. There are five sources for the main limitations to measurement accuracy and range: carrier frequency precision, bandwidth, signal-to-noise ratio, speckle size, and filtering and sampling, as explained next.

6.1.3.1 Carrier frequency precision

Most often, the carrier frequency is assumed as a priori information that is well known and constant over the field of view. However, in practice, this assumption no longer holds; a slight departure from the real frequency value introduces phase changes that affect the accuracy of the phase detection. To minimize the effects of imprecise carriers, sub-pixel methods are usually needed to locate the carrier frequency with high precision [206], or even to locate it in the presence of noise [193]. However, when the carrier frequency is not constant over the field of view, local carrier frequencies can be calculated and used to overcome this limitation [194–196,207]. To overcome the problems of high-precision frequency detection for the phase-map calculation, phase-shifting techniques offer an alternative solution to phase extraction that is independent of the method of carrier frequency detection being used. Among the methods reported in the literature, the five-step technique has been shown to improve on the Fourier transform methods [208] and has been used extensively, even for monitoring temporal phase changes [172]. However, with fewer phase shifts, the Carré technique has also proved to improve the performance for ESPI applications [81]. In particular, the Carré technique is used when the phase shifts are constant but unknown, and it allows for phase calculation even for phase shifts higher than π/2.

6.1.3.2 Bandwidth

It is generally agreed that the phase term kL(x, y) in Eq. (6.3) must vary more slowly than the carrier phase term kx. It is thus the carrier frequency magnitude that limits the displacement measuring range. The carrier frequency is modulated by the slope of the OPD function ∂L(x, y)/∂x, so the condition [190] that prevents over-modulation is

\left|\frac{\partial L(x, y)}{\partial x}\right|_{\max} \ll \lambda.   (6.15)

If k is not large enough for a given target displacement, the lobes in the Fourier spectrum will not be well separated (i.e., there is insufficient bandwidth) and the method will fail. This lack of bandwidth can be seen experimentally when initially vertical carrier fringes become so curved that they cross a line parallel to the x axis more than once.

6.1.3.3 Signal-to-noise ratio and speckle size

The signal-to-noise ratio and the speckle size essentially limit the upper modulation frequency of the carrier correlation fringes. When the fringes are formed by speckle correlation, the maximum carrier frequency is itself limited by the speckle size. This frequency will, in general, be well below the Nyquist limit for the camera system employed (around four speckles per cycle of the carrier are required in practice). Green et al. [205] found that an acceptable noise limit was reached at a signal-to-noise ratio of around 2.0 using a resolution of four pixels per fringe. The aim is to keep the speckle size as small as possible, within the optical resolution limit of the camera, for correlation fringes. This small speckle size has the associated bonus that a large camera aperture is implied, making the technique more light efficient. The disadvantage is that noise-reduction algorithms are needed to process the correlated fringe patterns. The fringe visibility (modulation depth) must also be high to achieve a sufficient signal-to-noise ratio everywhere in the image. On the other hand, in the case of digital holography, the speckle size is chosen to be large enough to fit the carrier wave within the speckle. The difference-of-phase method is preferred, and the signal-to-noise ratio is limited by the kind of spectral filter used in the phase extraction procedure and by the camera noise.

6.1.3.4 Filtering and sampling

The image edges constitute a spatial sampling window. The discrete Fourier transform assumes a periodic repetition of the data outside this window, so abrupt changes in the carrier magnitude cause spurious frequencies in the Fourier domain that limit the lobe separability [205]. Application of a simple Hanning window, a Gaussian filter, or a Butterworth filter to the modulated carrier fringes can significantly increase the separability and hence the performance of the method.

6.2 Spatial Phase Shifting Using Carriers

Carrier fringe processing is usually performed by removing the known carrier phase and leaving the remaining phase. This is also the basis of the phase extraction procedure of digital holography and of spatial phase-shifting methods. The first publications on spatial phase shifting [209] were designed to produce a single fringe pattern in which phase changes were encoded in the adjacent pixels of the pattern. The same idea offered additional consequences when speckle patterns with adjacent phase changes were produced by introducing the carrier fringe inside the speckle. This technique is known today as digital holography [210] and, contrary to the fringe correlation techniques, requires a larger speckle size, big enough to allow the carrier fringe to fit within the speckle. However, if this condition is not met, the intensity hologram cannot be formed, and the phase cannot be extracted. The out-of-plane system presented in Fig. 3.1 uses a collimated reference beam, and the inclination of the plane wavefront is arbitrary. Here we show how a tilted reference beam can be used for digital holography purposes. Before the plane wavefront and the object wavefront reach the camera detector, the phase of the reference beam compared to the object beam suffers a delay in time proportional to the inclination of the reference beam. As this delay is proportional to the projected distance in the image plane, a phase change is recorded spatially in the speckle field.


Figure 6.5 Schematic diagram of an out-of-plane ESPI configuration to produce three phase-shifted speckle fields by the spatial phase-shifting method. The rotation mount (RM) controls the tilt u of the collimated beam reflected by the beamsplitter (BS): 10:90, as shown schematically in Fig. 6.6. Lens L1 images plane O over the CCD detector, and usually the reference beams require further attenuation by an attenuator (AT) and collimation by a beam expander (BE). Mirrors M1 and M2 are used to reflect both beams, and a spatial filter (SF1) is used to expand the object beam.

Figure 6.5 shows a schematic diagram of the system in which a rotation stage introduces the inclination of the reference beam. In this figure, the optical setup is similar to the setup presented previously in Fig. 3.1. However, now the reference beam interferes with the object beam by placing a beamsplitter between the camera lens and its detector. This setup allows a controlled tilt in the reference beam; the collimated reference beam reaches the beamsplitter, and, using a high-precision rotary mount, the reference tilt can be adjusted with high precision. As this method relies on the local phase change introduced in the direction of the tilt, decorrelation effects can appear between adjacent pixels if a small speckle size is used. Thus, to avoid this decorrelation, a speckle size larger than a pixel is used in order to sample several adjacent pixels. As this enlargement is needed in a single direction only, a rectangular aperture can also be used to preserve a smaller speckle size in the direction perpendicular to the introduced tilt. A representation of the speckle size is schematically shown in Fig. 6.6 for circular speckle. The speckle size of a single speckle field using a wavelength λ is given by Eq. (1.15). In practice, it is often necessary to use speckle sizes even larger than the calculated size. This requisite arises because a region with nearly constant correlation is needed by this method.


Figure 6.6 Schematic diagram of a camera detector showing circular speckle and a tilted reference beam (dashed lines). The shaded speckle area represents the preservation of a nearly constant correlation over the transversal speckle size. The vertical lines represent spatial phase shifts of 0 and ±90° (S = 0, ±π/2 rad) that are introduced by a selected angle θ between the plane reference beam that is parallel to the camera detector plane (shown in solid lines) and the tilted reference beam. In practice, only the tilted reference beam is used in the optical setup.

As is very well known, the autocorrelation function of the intensity fluctuations of two points on a speckle pattern reduces to the Airy disc formula (Section 1.4, [211]). Then, it is easily seen that a larger speckle size is more convenient to preserve correlation. It has been shown that, if a coherent background interferes with a speckle field, a speckle size reduced by a factor of two is obtained [212]. As this kind of interference is found in out-of-plane ESPI, the speckle size needed to accommodate neighboring pixels with carrier phase values is twice that of reference-less speckle interferometers, such as those used for in-plane and shearing setups. Then, if the correlation is preserved over three adjacent pixels, a spatial shift of one speckle field with respect to the other before correlation gives the means to change the previously associated phase (introduced by the reference beam) of each pixel. Consequently, three phase shifts can be obtained with correlation fringes:

\left|\bar{I}(i, j-1, kL_1) - \bar{I}(i, j, kL_2)\right|_{S = -\pi/2}
\left|\bar{I}(i, j, kL_1) - \bar{I}(i, j, kL_2)\right|_{S = 0}
\left|\bar{I}(i, j+1, kL_1) - \bar{I}(i, j, kL_2)\right|_{S = +\pi/2},   (6.16)

where Ī is the sampled intensity obtained for the 90° spatial phase shifting introduced by the tilted reference, kL_1 is the initial speckle phase change, and kL_2 is the last speckle phase change introduced after an optical path change in the illuminated object.


Figure 6.7 Spatial phase shifting showing the absolute correlations of Eq. (6.16), corresponding to (a) a phase shift of −π/2, (b) a phase shift of 0, and (c) a phase shift of +π/2, using two correlated speckle patterns; (d) the phase map obtained after a three-step phase-shifting algorithm.

An example of correlation fringes for the three spatial phase shifts of −π/2, 0, and +π/2 in S [Eq. (6.16)] is presented in Fig. 6.7. A phase map is obtained [Fig. 6.7(d)] after application of a three-step phase extraction algorithm, where speckle-noise reduction processing previously smoothed the correlation fringes. The main limitation of this technique is the necessity of two speckle fields, one before and one after object deformation, which, after correlation, produce fringe patterns modulated by speckle noise and therefore require speckle-noise reduction methods for correlation fringes. As each speckle field must be stored for posterior processing, the time spent transferring the speckle fields also introduces a delay that might be important for registering fast events. Another significant limitation is the decrease in light intensity due to the small apertures needed to generate larger speckle sizes, although the high power of current CW lasers and recently developed CMOS high-speed cameras has allowed the application of this technique at speeds of 250 ms [213]. This technique has also been tested in previous research using single-pulse ruby lasers with speeds of 30 ms, but with the associated drawback of a non-continuous supply of images due to the slow recovery time (10 s) of this kind of laser, which limits this particular application to lower than real-time speeds. Nonetheless, if we assume that the event can be repeated many times and that the synchrony of the transient event can be adjusted to sample it, then this approach does not pose any time restriction, except for the time uncertainty of the pulse generator. The real-time performance capability obtained using a ruby double-pulsed laser with this technique for vibration analysis has been explored, and a 150-ms sampling capability has been achieved with interline camera recordings [214].
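A compact Python sketch of this spatial phase-shifting procedure is given below; it assumes that the carrier runs along the pixel columns, that the smoothed correlation fringes behave approximately as a − b cos(Δφ + S), and that a simple uniform smoothing window is adequate, none of which is specified in the text.

import numpy as np
from scipy.ndimage import uniform_filter

def spatial_phase_shift_map(I1, I2, smooth=7):
    """Three correlation fringe patterns from one-pixel shifts (Eq. 6.16, sketch),
    followed by a three-step (-pi/2, 0, +pi/2) phase extraction.

    I1, I2 : speckle fields recorded before and after the object deformation,
             with the spatial carrier introduced by the tilted reference beam
    """
    I1 = I1.astype(float)
    I2 = I2.astype(float)
    # Eq. (6.16): shift I1 by -1, 0, +1 pixels along the carrier direction
    C_minus = np.abs(np.roll(I1, 1, axis=1) - I2)    # S = -pi/2
    C_zero  = np.abs(I1 - I2)                        # S = 0
    C_plus  = np.abs(np.roll(I1, -1, axis=1) - I2)   # S = +pi/2
    # Smooth the speckle noise that modulates the correlation fringes
    C_minus, C_zero, C_plus = (uniform_filter(C, size=smooth)
                               for C in (C_minus, C_zero, C_plus))
    # Three-step algorithm, assuming fringes of the form a - b*cos(dphi + S)
    num = C_plus - C_minus                 # proportional to sin(dphi)
    den = C_minus + C_plus - 2 * C_zero    # proportional to cos(dphi)
    return np.arctan2(num, den)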

6.3 Phase Shifting Using Four Quadrants in the Image Plane

An alternative phase-shifting scheme was devised using a holographic element in the pupil of the imaging optics of an out-of-plane interferometer [215] from previous work in interferometry [216].


Figure 6.8 Quadrant phase change on the camera detector using a holographic element in the pupil of the imaging optics of an out-of-plane interferometer. The phase change is introduced in the speckled object beam by employing a holographic plate, and the reference beam is constant over the field of view.

These studies have shown that the replicated images that are projected in each of the four quadrants of the camera detector (image plane) present different spatial phase shifts (Fig. 6.8). The quadrants are usually not filled with the phase-shifted projected replicas when non-uniform illumination is used.

6.4 Phase Shifting in Adjacent Pixels Using Polarization

The principle of polarization phase shifting [217,218] relies on the combination of two circular polarizations by means of a polarizer. Both circular polarizations are generated by transforming two perpendicular polarizations with a quarter-wave plate. Then, the angle of the combining polarizer (analyzer) determines the phase between the two perpendicular polarizations. A custom-made pixelated phase sensor can be built using polarizers fitted to the camera pixels [219]. Then, using a circularly polarized beam combination from the object and reference lights (right-handed and left-handed) plus a linear polarizer at angle α, it is possible to shift the phase by an angle 2α. The α values depicted in Fig. 6.9 for four polarizations are obtained by employing a polarizer in front of the camera pixels. The phase can be extracted using four adjacent pixels and is assumed to be nearly the same over the four pixels. As this is not usually the case in most experiments, a way to smooth the phase value changes is presented in Fig. 6.10.


Figure 6.9 Pixelated polarization changes in a camera with polarizers fitted to each pixel: the linear polarizer angles α_1 = 0, α_2 = π/2, α_3 = π, and α_4 = −π/2 allow for a phase sensor based on a combination of left- and right-circular polarizations before the polarizers.

Figure 6.10 Pixelated phase sensor changes corresponding to the phase changes shown in Fig. 6.9. By selecting a set of nine adjacent pixels (shaded in gray), an averaged phase can be obtained in the central pixel [Eq. (6.17)].

From the adjacent pixels shown inside the shaded box of Fig. 6.10, the average phase from a pixelated sensor on the selected pixels is obtained using the following equation:

C = \frac{2(I_2 + I_8 - I_4 - I_6)}{-I_1 - I_3 + 4I_5 - I_7 - I_9},   (6.17)

where the selected pixels are those with zero phase change, and the averaged phase values obtained with this equation can differ from the real ones if abrupt


phase changes are averaged in an experiment. There have been similar developments in which, instead of four phase steps, only three phase steps are produced [220]; other developments use Bayer filters [221], or even two phase steps [222]. Speeds of 262,500 fps have been reported [223] for this kind of phase extraction procedure, and contouring has been achieved with similar techniques [224]. Advances in shearography have also reported spatial phase stepping using polarization arrays with CW lasers [225,226] and polarization arrays with pulsed lasers [227]. The new polarized sensors can now be found in commercial cameras; see, for example, Refs. [228,229].
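The 3×3 averaging of Eq. (6.17) can be sketched as follows; the interpretation of C as the tangent of the averaged phase and the pixel ordering are assumptions of this example.

import numpy as np

def pixelated_phase(I):
    """Average phase from a 3x3 neighborhood of a pixelated polarization sensor.

    I : sequence of the nine intensities I1..I9 of the shaded block in Fig. 6.10
        (scalars or equal-sized arrays), ordered row by row.
    Returns the averaged phase at the central pixel, following Eq. (6.17).
    """
    I1, I2, I3, I4, I5, I6, I7, I8, I9 = [np.asarray(v, dtype=float) for v in I]
    num = 2.0 * (I2 + I8 - I4 - I6)
    den = -I1 - I3 + 4.0 * I5 - I7 - I9
    # C = num / den is taken here as the tangent of the averaged phase;
    # arctan2 keeps the correct quadrant.
    return np.arctan2(num, den)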

6.5 Heterodyne Interferometry

The standard phase term of Eq. (1.6) contains a single phase term that shows an OPD that is proportional to the wavenumber k. However, in some experiments, it is possible to obtain phase terms with OPDs that are proportional to δk = (k − k_0), where δk ≪ k; this kind of interferometry is known as heterodyne. A simple interferometric setup to obtain this kind of phase term uses the standard out-of-plane interferometer setup with k_0 as a reference wavenumber. This is achieved by introducing an increment (or modulation) in wavelength by changing the current of a laser diode. Such an approach has been tested on specular and non-specular surfaces [230–232], such that k − k_0 = 2π δλ/λ² = k_0 δλ/λ, and the phase difference Δφ introduced by the wavenumber increment can be represented by

\Delta\phi = (k - k_0)L = \delta k\, L,   (6.18)

which can be approximated as

\Delta\phi \approx \frac{\partial k}{\partial\lambda}\,\delta\lambda\,L = k_0\frac{\delta\lambda}{\lambda}L.   (6.19)

Similar phase terms, but based on a reference grating wavenumber, have been obtained in spectroscopic instruments (see, for example, Ref. [233]), or for measuring a differential phase with an RMS better than 3 mrad [234]. Other options for heterodyning can be implemented using polarized heterodyne interferometry, Zeeman laser sources, or crystal components and acousto-optic modulators [235–239], or by translating a mirror, a glass wedge, or a diffraction grating [240]. Frequency multiplexing in time and space using heterodyne interferometry has also been proposed [241], as well as double heterodyne interferometry [242]. Simultaneous in-plane speckle measurements in two dimensions have been implemented by using an electro-optic modulator [243], and out-of-plane temporal measurements using LiNbO3 for frequency heterodyning have been implemented in Ref. [237] and also in shearography [238].


6.6 Phase Shifting and Vortex Singularity Location

A vortex can be interpreted as a circular carrier with a 2π wrapped-phase periodicity and a central singularity. Optical vortices were first identified in naturally occurring speckle patterns. After improving the algorithms for locating vortex singularities, it was possible to obtain nanometric displacement analysis by tracking the vortex singularity with sub-pixel precision [244,245]. Further research was dedicated to the properties of optical vortices produced by biological samples [246]. In particular, in microscopy, vortices were introduced in the Fourier domain and deliberately convolved in the spatial domain within an image. The vortex rotation allowed phase measurement [247–249] of microscopic samples, with the same principle applied for phase extraction from speckle patterns using discrete and continuous vortices [250,251].

6.7 Temporal Phase Unwrapping

The phase-shifting algorithms and Fourier transform methods are designed to generate a wrapped phase map that spatial unwrapping methods can unwrap. However, a better alternative to spatial unwrapping is temporal phase unwrapping (TPU), as was proposed by Huntley and Saldner [253], later revisited by the first author in Ref. [204], and extended to three dimensions in Chapter 2 of Kaufmann [13]. The addition of phase increments with magnitudes smaller than 2π was first proposed by Itoh [253] to track phase changes in time by using three simple operations: differentiation, wrapping, and integration. The differentiation is obtained by subtraction of two consecutive phase maps (phase changes in a time span of less than 2π), and the wrapping is used to track the correct number of 2π phase jumps in time; once this number is known, it is integrated with the phase differences. The differentiation of consecutive phase maps does not always require the previous unwrapping of each phase map (each phase map is typically obtained using a phase extraction method such as the Fourier transform, or using phase shifting). To avoid two consecutive spatial unwrapping operations of the obtained phase and the subsequent wrapping, the imaginary and real parts of the phase extraction method can be expressed as the numerator and denominator of the arctan function: N_1 and D_1 for the first state of measurement of the phase change, and N_2 and D_2 for the second state of measurement of the phase change. The phase difference δφ_{1,2} = k_1 L_1 − k_2 L_2 (which can be wrapped, or unwrapped if the phase change is less than 2π) can be expressed by [204]

\delta\phi_{1,2} = \tan^{-1}\!\left(\frac{N_2 D_1 - D_2 N_1}{D_2 D_1 + N_2 N_1}\right).   (6.20)


It is also common to use a low-pass filter on the numerator and denominator of Eq. (6.20) to reduce noise errors. On the other hand, re-referencing is also used in the TPU integration process to avoid the propagation of phase errors, depending on the kind of noise that is present in the fringe patterns or interferograms [81,175,254–256].
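A short sketch of Eq. (6.20) with the low-pass filtering just mentioned might look as follows; the filter type and window size are assumptions of the example.

import numpy as np
from scipy.ndimage import uniform_filter

def phase_difference(N1, D1, N2, D2, smooth=5):
    """Difference of phase from the numerators/denominators of two measurements,
    per Eq. (6.20), with low-pass filtering of the numerator and denominator."""
    num = N2 * D1 - D2 * N1
    den = D2 * D1 + N2 * N1
    num = uniform_filter(num, size=smooth)   # smoothing before the arctangent
    den = uniform_filter(den, size=smooth)
    return np.arctan2(num, den)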

The TPU method states that it is not necessary to integrate small phase differences in order to unwrap, in time, a phase bearing a large number of phase jumps. This kind of phase can be detected using different sensitivities in the optical setups, usually ranging from low to high. Examples of the use of different sensitivities in TPU have been suggested for fringe projection systems with six sets of fringes [257] (produced by wavenumbers k_1, k_2, …, k_6) combined with a constant OPD L and phase shifting for generating the phase maps. The same approach has been suggested with the use of multiplexed fringe patterns of different sensitivities [258]. An alternative approach, called wavelength scanning interferometry, involves building a wavemeter [259]; the difference in this case is that k remains constant for a given wavelength in a single-shot composite interferogram obtained using the interference of four glass wedges in a field of view. The phase change introduced by previously changing the wavelengths in k_1, k_2, …, k_6 can be exchanged for six phase differences obtained from the wedges' interference with OPDs L_1, L_2, …, L_5. These path differences allow the calculation of an instantaneous unwrapped phase from which the wavelength is calculated from a single-shot frame. As an example of this last case, Fig. 6.11 shows a typical phase obtained experimentally, in which different sensitivities were produced by choosing a set of wedges with different thicknesses L_i = Q_i d, where any wedge i has a thickness controlled by the constants Q_i, with phase given by

\phi_i = 2kn(Q_i d).   (6.21)

The phase difference between two consecutive wedges is chosen by the selection of a constant Q_{i+1} such that the first phase difference is unwrapped. This first phase is known as the “weak phase” and is given by

\delta\phi^{un}_{1,2} = 2kn(Q_1 - Q_2)d,   (6.22)

where the superscript “un” on the difference of phase denotes that an unwrapped phase is obtained by first choosing (Q_1 − Q_2)d such that the phase change when the laser light is tuned becomes less than 2π (the low-sensitivity or weak phase). The next phase change is assumed to have substantial phase changes that include phase jumps and, when unwrapped, is expressed by


Figure 6.11 (a) The first step of the TPU, which uses a low-sensitivity (weak) phase δφ^{un}_{1,2} that is scaled and used to unwrap a greater phase change δφ^{wr}_{1,3} containing phase jumps (at ±π); both are plotted in radians against time t (frames). (b) The final step, where the unwrapped phase δφ^{un}_{1,5} is obtained. The scaling factors used in the first and final steps were different and were determined by the experiment. Data for the plots were published in Ref. [259].

\delta\phi^{un}_{1,3} = 2kn(Q_1 - Q_3)d.   (6.23)

Dividing Eq. (6.23) by Eq. (6.22), we obtain

\delta\phi^{un}_{1,3} = \left(\frac{Q_1 - Q_3}{Q_1 - Q_2}\right)\delta\phi^{un}_{1,2}.   (6.24)

This shows that δφ^{un}_{1,3} can be obtained by scaling δφ^{un}_{1,2} by the scale factor s = (Q_1 − Q_3)/(Q_1 − Q_2). However, such scaling will also magnify any phase noise present in δφ^{un}_{1,2} and should be avoided, except to extract the number of phase jumps corresponding to δφ^{un}_{1,3}, which (being integer numbers) are free of the scaling noise. This scaling problem of the phase noise can be successfully avoided when the correct number of phase jumps of δφ^{un}_{1,3} is used to unwrap the unscaled δφ^{wr}_{1,3}, removing the wrappings introduced by the significant phase sensitivity, then using [260]

\delta\phi^{un}_{1,3} = \delta\phi^{wr}_{1,3} + 2\pi\,\mathrm{round}\!\left(\frac{\delta\phi^{un}_{1,3} - \delta\phi^{wr}_{1,3}}{2\pi}\right),   (6.25)

where the last term of this equation gives the number of 2π phase jumps detected by the scaled phase (δφ^{un}_{1,3} inside the round function is the scaled weak phase, s δφ^{un}_{1,2}) using the round MATLAB® command, which rounds its argument to the nearest integer. The same approach can be repeated to obtain δφ^{un}_{1,4} from δφ^{un}_{1,3}, and an iterative approach with more phase sensitivities can be used to unwrap even more-significant phase differences.

Figure 6.12 Flow diagram for calculating the TPU when the scaling s = 4 follows a power law h = s^n, where n = 1, …, is the unwrapping iteration. Green boxes correspond to the unwrapped phase using Eq. (6.25) (or the equivalent equation for the unwrapping iteration) and are fed by the phases in the boxes indicated by the black and red arrows. Blue boxes show the scaled phases after scaling the unwrapped phases on top by a factor s, and the gray shading shows the experimentally acquired phases. The bottom row represents the power-law scaling, which is simplified here for phase scaling, with the power of the unwrapping iteration shown in the first row.


A simplified schematic diagram is presented in Fig. 6.12, where a constant phase scaling and unwrapping is presented, starting from a weak phase (top box of step n = 1). This phase is then scaled by the factor s to obtain the phase in the blue box of step n = 1. This scaled phase and the same phase but wrapped are then used in Eq. (6.25) to obtain the top green box of step n = 2. The same process is repeated for step 2 using the unwrapped scaled and the wrapped phases, and the process iterates until the last step. It was shown in 1997 in fringe projection research that the phase error can be decreased by using several sensitivities [261] obtained by a decreasing geometrical series of projected fringes, with an exponential increase of sensitivities giving the best performance for this technique [260]. It was also later shown that an optimum frequency can be selected for maximizing the measurement range and reliability using a geometric series [262]. For a complete comparison of the TPU techniques in fringe projection, the reader is referred to Ref. [263].
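The scaling and unwrapping steps of Eq. (6.25) and Fig. 6.12 can be sketched as below, assuming a constant scale factor s between consecutive sensitivities; the data layout and function names are assumptions of this illustration.

import numpy as np

def unwrap_with_scaled_phase(dphi_wr, dphi_scaled):
    """Eq. (6.25): use the (noisier) scaled phase only to count 2*pi jumps,
    then add those jumps to the wrapped, unscaled phase."""
    jumps = np.round((dphi_scaled - dphi_wr) / (2 * np.pi))
    return dphi_wr + 2 * np.pi * jumps

def tpu_power_law(dphi_weak, wrapped_phases, s=4):
    """Iterative TPU with power-law scaling (Fig. 6.12, sketch).

    dphi_weak      : unwrapped low-sensitivity (weak) phase difference
    wrapped_phases : list of wrapped phase differences of increasing sensitivity,
                     each one s times more sensitive than the previous one
    s              : scale factor between consecutive sensitivities (assumed constant)
    """
    dphi = dphi_weak
    for dphi_wr in wrapped_phases:
        dphi = unwrap_with_scaled_phase(dphi_wr, s * dphi)
    return dphi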

The principles of the TPU scaling procedure can be applied using a set of wavenumbers to form synthetic wavelengths, and the scaling of the phase values results from a careful selection of a sequence of wavenumbers. From this sequence, the synthetic wavelengths can be found; the phase can be expressed in terms of wavelengths as

\phi_i = 2\pi n\left(\frac{1}{\lambda_i}\right) d,   (6.26)

and Eqs. (6.21) and (6.22) can be re-expressed as

\delta\phi^{un}_{1,2} = 2\pi n\left(\frac{1}{\lambda_1} - \frac{1}{\lambda_2}\right) d,   (6.27)

where the wavelengths λ_1 and λ_2 are chosen such that the first phase difference is unwrapped (weak phase). The next phase change is assumed to have more-significant phase changes > 2π and, when unwrapped, is expressed by

\delta\phi^{un}_{1,3} = 2\pi n\left(\frac{1}{\lambda_1} - \frac{1}{\lambda_3}\right) d.   (6.28)

Dividing Eq. (6.28) by Eq. (6.27), we obtain

\delta\phi^{un}_{1,3} = \left(\frac{\dfrac{1}{\lambda_1} - \dfrac{1}{\lambda_3}}{\dfrac{1}{\lambda_1} - \dfrac{1}{\lambda_2}}\right)\delta\phi^{un}_{1,2},   (6.29)

which can be expressed in terms of the synthetic wavelengths λ̂_{1,2} = λ_1λ_2/(λ_1 − λ_2), where λ_1 > λ_2, and λ̂_{1,3} = λ_1λ_3/(λ_1 − λ_3), where λ_1 > λ_3, as

\delta\phi^{un}_{1,3} = \left(\frac{\hat{\lambda}_{1,2}}{\hat{\lambda}_{1,3}}\right)\delta\phi^{un}_{1,2}.   (6.30)

This shows again that δφ^{un}_{1,3} can be obtained by scaling δφ^{un}_{1,2} by the scale factor s = λ̂_{1,2}/λ̂_{1,3}. Following the unwrapping operation described in Eq. (6.25), the TPU steps can again be implemented using synthetic wavelengths, but this time the scaling is determined by the selection of the synthetic wavelengths. A detailed example of synthetic-wavelength selection can be found in Ref. [264]. As suggested in Ref. [262], a final phase with maximum sensitivity can be included, provided that it is properly scaled with the most sensitive synthetic wavelength. However, in practice, wavelength selection is limited by the commercial availability of lasers at the chosen wavelengths. This varying availability affects the scaling constants but can be overcome by choosing a custom scaling for each pair of wavelengths, as suggested in multi-wavelength interferometry [265]. Nonetheless, as long as the scaling constants are determined by the selection of wavelengths (wavenumbers) or optical path differences (which are determined a priori from the experiment), the temporal phase unwrapping procedure can be performed without phase errors. This capability makes TPU superior to spatial phase unwrapping, which usually leaks spatial data and path-dependent errors into the unwrapped and processed phase.
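As a small worked example of the synthetic-wavelength scaling, the following lines compute λ̂_{1,2}, λ̂_{1,3}, and the scale factor s of Eq. (6.30) for three hypothetical wavelengths; the numerical values are assumptions chosen only for illustration.

def synthetic_wavelength(l_a, l_b):
    """Synthetic wavelength for a pair of wavelengths with l_a > l_b."""
    return l_a * l_b / (l_a - l_b)

# Hypothetical wavelengths satisfying lambda_1 > lambda_2 and lambda_1 > lambda_3
l1, l2, l3 = 640.0e-9, 639.0e-9, 632.0e-9   # meters
lam_12 = synthetic_wavelength(l1, l2)       # about 0.41 mm
lam_13 = synthetic_wavelength(l1, l3)       # about 51 um
s = lam_12 / lam_13                         # scale factor of Eq. (6.30), about 8 here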

Chapter 7

Overview of Applications

At the present time, ESPI techniques are popular and frequently used as nondestructive optical techniques [266], and they are also used as an aid to obtain improved mechanical designs. The mechanical designer usually removes most of the unnecessary volume of their CAD 3D prototypes to reduce the weight of the materials used in making new devices. In this process, the designer must be confident that the device will withstand the forces applied in working conditions. However, only a physical test with rigs and actual working stresses can guarantee the structural integrity of the produced device when it is analyzed with these techniques. Several factors limit this application, the most important being the cost of producing devices that use ESPI for testing structural integrity. The most affordable procedure is to learn the basics of the technique and to build prototypes of the interferometers with off-the-shelf optics. Another critical limitation arises from the level of expertise required to operate the devices; specialized knowledge is necessary to obtain good results in the application of the technique. The main applications of ESPI in engineering are static displacement measurement, stress and strain measurement, and vibration analysis [5], where damage detection and localization can be accomplished [267]. The main industries interested in the application of ESPI techniques are the automotive [268,269] and aircraft industries [270,271], and metal-based manufacturing; see, for example, a typical application for leak detection in stainless steel kegs [272]. However, other industries such as electronics manufacturing and civil engineering can make use of these techniques, too. In civil engineering, ESPI techniques are used for strain sensing and structural health monitoring, e.g., for bridges or dams [273]. Other applications have been demonstrated in geophysical rock deformation [274], ultrasonic design of cutting devices [275] for the food industry, pressure transducers [276,277], underwater sound transmission [278], inspection of power plants [279], delamination in fiber composites [280], thermal deformations in metal plates [281–284] or in carbon fiber composites [285], inspection of defects of composite materials in inner cylindrical surfaces [286],


artwork diagnostics [287,288], biomedical studies using speckle [79,289], age-related elasticity of the cornea [290], eye ametropia and accommodation response [291], biomechanical changes in the cornea induced by refractive surgery [292,293], bones and implants [277,294–297], biological studies with dynamic speckle [245,298], blood flow studies including cerebral stroke effects [289,299–304], blood flow assessment of gingival health [305], and the so-called laser speckle flowgraphy for blood flow in the eye to quantify ocular circulation in vivo (see, for example, Ref. [306]). Additional uses of ESPI techniques are for drying, aging, and stress of biscuits and paints [307,308], structural mechanics [309], refractive index measurement in glass [310], determination of the progressive power in ophthalmic lenses [311], X-ray phase contrast [312], detection of photoacoustic signals using speckle contrast [313], and recently for spectroscopy with speckled wavemeters able to resolve from femtometers to attometers [314–316]. Multiple applications are also described in Ref. [145]. Applications for 3D measurements have also been explored using contouring with in-plane ESPI and shearography, with synthetic wavelengths, and by wavelength scanning interferometry [317]. Research on speckle phenomena in infrared astronomy aided in obtaining unprecedented details of stars orbiting black holes [318,319] and was awarded a 2020 Nobel Prize. These are just a few examples of what speckle interferometry can accomplish in engineering and scientific applications; further applications can be found in the literature cited in the References.

Appendix: Speckle Statistics

Speckle patterns that are detected experimentally possess varying statistical properties that depend on the optical setup and the detectors involved in the experiment. In Section 2.3, Fig. 2.5(a) represents speckles with a low F-number in the lens aperture such that the pixel area integrates the speckles. In the opposite situation [Fig. 2.5(b)], the speckles can be sampled by the camera pixels. These two simple situations can easily be obtained by changing a lens aperture. However, interference and polarization can also affect the resulting speckle patterns. The probability densities for interference, integration, and polarization in six different cases are presented as a reference in Table A.1 [320–322]. These densities are usually approximated by calculating the histogram of the speckle patterns detected in actual experiments; polarized light is denoted by p. The variable m is the number of cells and is given by Eq. (A.1), following the table [323].

Table A.1 Probability densities for interference, integration, and polarization for six different cases.

Only resolved speckle (p):
p(I) = \frac{1}{\langle I\rangle}\exp\!\left(-\frac{I}{\langle I\rangle}\right)

Only integrated speckle (p) (Eqs. 4-165 [20], 1.30 [322]):
p(I) = \left(\frac{m}{\langle I\rangle}\right)^{m}\frac{I^{m-1}}{\Gamma(m)}\exp\!\left(-\frac{mI}{\langle I\rangle}\right)

Only integrated speckle (non-polarized) (Sec. 1.1.6 [322]):
p(I) = \left(\frac{2m}{\langle I\rangle}\right)^{2m}\frac{I^{2m-1}}{\Gamma(2m)}\exp\!\left(-\frac{2mI}{\langle I\rangle}\right)

Combination: smooth reference and resolved speckle (p) (Eq. 1.51 [322]); I_R = constant, I = resolved speckle intensity only, I_M = resolved speckle and smooth reference intensity:
p(I_M) = \frac{I_M}{2 I_R\langle I\rangle}\exp\!\left(-\frac{I_M^2}{4 I_R\langle I\rangle}\right)

Combination: smooth reference and integrated speckle (p); I_R = constant, I = resolved speckle intensity only, I_M = integrated speckle and smooth reference intensity:
p(I_M) = \frac{m I_M}{2 I_R\langle I\rangle}\exp\!\left(-\frac{m I_M^2}{4 I_R\langle I\rangle}\right)

Combination: speckle reference and integrated speckle (p); I_0 = integrated speckle and speckle reference intensity (the average modulation drops with the square root of m):
p(I_0) = \left(\frac{m}{\langle I\rangle}\right)^{2m}\frac{I_0^{2m-1}}{\Gamma(2m)}\exp\!\left(-\frac{m I_0}{\langle I\rangle}\right)

m = \frac{(lL)^2}{4\int_0^{l}\int_0^{L}(l - x)(L - y)\,|\mu_s(x, y)|^2\,dx\,dy},   (A.1)

where l and L are the pixel dimensions, and μ_s(x, y) is the autocorrelation function. The value of m is approximately given by the number of integrated speckles per pixel, and for an optical system it is given by [9]

m \approx \frac{P_A}{\pi\left[1.22\,\lambda\, f_{\#}\,(1 + M)\right]^2},   (A.2)

where P_A is the pixel area, f_# is the F-number, and M is the magnification. On the other hand, the statistics of dynamical speckle patterns have also been studied and utilized by Okamoto and Asakura [324], and by Rabal and Braga [246].
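A short numerical sketch of Eq. (A.2) and of the integrated-speckle density of Table A.1 is given below; the pixel size, wavelength, F-number, and magnification are assumed values chosen only for illustration.

import numpy as np

def cells_per_pixel(pixel_area, wavelength, f_number, magnification):
    """Approximate number of integrated speckle cells per pixel, Eq. (A.2)."""
    speckle_radius = 1.22 * wavelength * f_number * (1 + magnification)
    return pixel_area / (np.pi * speckle_radius**2)

# Assumed example: 10x10-um pixels, 632.8-nm light, f/2, unit magnification
m = cells_per_pixel(100e-12, 632.8e-9, 2, 1)    # roughly 3 cells per pixel

# Integrated polarized speckle follows the gamma density of Table A.1;
# a quick check by sampling (mean intensity normalized to 1):
rng = np.random.default_rng(0)
I = rng.gamma(m, 1.0 / m, size=100_000)
contrast = I.std() / I.mean()   # tends toward 1/sqrt(m) for integrated speckle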

References

1. J. Ridgen and E. Gordon. The granularity of scattered optical maser light. Proceedings of the IRE, 50(11):2367–2368, 1962.
2. P. Hariharan. Speckle patterns: A historical retrospect. Optica Acta: International Journal of Optics, 19(9):791–793, 1972.
3. K. Barat, editor. Laser Safety. IOP Publishing, pages 2053–2563, 2019.
4. A. E. Ennos. Speckle interferometry. In E. Wolf, editor. Progress in Optics XVI. Elsevier North-Holland Publishing Company, 1978.
5. R. Jones and C. Wykes. Holographic and Speckle Interferometry, Second edition. Cambridge U. Press, 1989.
6. R. J. Sirohi, editor. Speckle Metrology. Marcel Dekker, 1993.
7. R. K. Erf, editor. Quantum Electronics–Principles and Applications, Academic Press, 1978.
8. Á. F. Doval. A systematic approach to TV holography. Measurement Science and Technology, 11(1):R1, 2000.
9. P. K. Rastogi, editor. Digital Speckle Pattern Interferometry and Related Techniques. Wiley, Chichester, 2001.
10. P. Jacquot. Speckle interferometry: A review of the principal methods in use for experimental mechanics applications. Strain, 44(1):57–69, 2008.
11. R. Sirohi. Optical Methods of Measurement: Wholefield Techniques, Second edition. CRC Press, Taylor & Francis Group, 2009.
12. C. A. Sciammarella and F. M. Sciammarella. Experimental Mechanics of Solids. Wiley, 2012.
13. G. H. Kaufmann, editor. Advances in Speckle Metrology and Related Techniques. Wiley-VCH, 2011.
14. L. Yang, X. Xie, L. Zhu, S. Wu, and Y. Wang. Review of electronic speckle pattern interferometry (ESPI) for three dimensional displacement measurement. Chinese Journal of Mechanical Engineering, 27(1):1–13, Jan 2014.
15. C. Joenathan, R. S. Sirohi, and A. Bernal. Advances in Speckle metrology, Optics Encyclopedia, Wiley, 1–99, 2015.


16. P. Rastogi, editor. Digital Optical Measurement: Techniques and Applications. Artech House, 2015. 17. C. A. Sciammarella and L. Lamberti. Basic models supporting experimental mechanics of deformations, geometrical representations, connections among different techniques. Meccanica, 50(2):367–387, Feb 2015. 18. M. Born and E. Wolf. Principles of Optics. Cambridge University Press, 2002. 19. T. S. McKechnie. Statistics of Coherent Light Speckle Produced by a Stationary and Moving Apertures. PhD thesis, University of London Imperial College of Science and Technology, London, 1974. 20. J. W. Goodman. Speckle Phenomena in Optics Theory and Applications. Roberts and Company Publishers, 2007. 21. F. P. Chiang and A. Asundi. White light speckle method of experimental strain analysis. Appl. Opt., 18(4):409–411, Feb 1979. 22. G. Vendroux and W. G. Knauss. Submicron deformation field measurements: Part 2. Improved digital image correlation. Experimental Mechanics, 38(2):86–92, Jun 1998. 23. B. Pan. Digital image correlation for surface deformation measurement: historical developments, recent advances and future goals. Measurement Science and Technology, 29(8):082001, 2018. 24. J.-N. Perie and J.-C. Passieux, editors. Advances in Digital Image Correlation (DIC). MDPI, 2020. 25. P. Jacquot and P. K. Rastogi. Speckle motions induced by rigid-body movements in free-space geometry: an explicit investigation and extension to new cases. Appl. Opt., 18(12):2022–2032, Jun 1979. 26. J. Van Wingerden, H. J. Frankena, and C. Smorenburg. Linear approximation for measurement errors in phase shifting interferometry. Appl. Opt., 30(19):2718–2729, 1991. 27. Y. Surrel. Design of algorithms for phase measurements by the use of phase stepping. Appl. Opt., 35(1):51–60, Jan 1996. 28. D. W. Phillion. General methods for generating phase-shifting interferometry algorithms. Appl. Opt., 36(31):8098–8115, Nov 1997. 29. Y. Surrel. Fringe analysis. In P. K. Rastogi, editor. Photomechanics, Springer Verlag, pages 55–102, 1999. 30. K. Creath. Phase-measurement interferometry techniques. In E. Wolf, editor. Progress in Optics, volume 26. Elsevier, pages 349–393, 1988. 31. D. Malacara. Optical Shop Testing. Wiley, 2007. 32. P. Rastogi and E. Hack, editors. Phase Estimation in Optical Interferometry. CRC Press, 2014. 33. O. Dalmau, M. Rivera, and A. Gonzalez. Phase shift estimation in interferograms with unknown phase step. Optics Communications, 372:37–43, 2016.


34. M. Rivera, O. Dalmau, A. Gonzalez, and F. Hernandez-Lopez. Twostep fringe pattern analysis with a gabor filter bank. Optics and Lasers in Engineering, 85:29–37, 2016. 35. Y. Surrel. Phase stepping: a new self-calibrating algorithm. Appl. Opt., 32(19):3598–3600, Jul 1993. 36. M. Servin, J. C. Estrada, J. A. Quiroga, J. F. Mosiño, and M. Cywiak. Noise in phase shifting interferometry. Opt. Express, 17(11):8789–8794, May 2009. 37. E. Hack and J. Burke. Measurement uncertainty of linear phasestepping algorithms. Review of Scientific Instruments, 82:061101, 2011. 38. K. Qian, F. Shu, and X. Wu. Determination of the best phase step of the Carré algorithm in phase shifting interferometry. Measurement Science and Technology, 11(8):1220–1223, 2000. 39. E. Hack. Measurement uncertainty of Carré-type phase-stepping algorithms. Optics and Lasers in Engineering, 50(8):1023–1025, 2012. 40. E. Hack. Uncertainty in phase measurements. In P. Rastogi and E. Hack, editors. Phase Estimation in Optical Interferometry, CRC Press, pages 293–326, 2015. 41. J. C. Estrada, M. Servin, and J. A. Quiroga. A self-tuning phaseshifting algorithm for interferometry. Opt. Express, 18(3):2632–2638, Feb 2010. 42. O. Medina, J. C. Estrada, and M. Servin. Robust adaptive phaseshifting demodulation for testing moving wavefronts. Opt. Express, 21(24):29687–29694, Dec 2013. 43. E. Hack, P. N. Gundu, and P. Rastogi. Adaptive correction to the speckle correlation fringes by using a twisted-nematic liquid-crystal display. Appl. Opt., 44(14):2772–2781, May 2005. 44. Y. Shen, N. A. Ochoa, and J. M. Huntley. Real-time speckle interferometry fringe formation with an adaptive phase mask. Appl. Opt., 41(13):2454–2460, May 2002. 45. K. Yan, Y. Yu, C. Huang, L. Sui, K. Qian, and A. Asundi. Fringe pattern denoising based on deep learning. Optics Communications, 437:148–152, 2019. 46. S. Nakadate and H. Saito. Fringe scanning speckle-pattern interferometry. Appl. Opt., 24(14):2172–2180, Jul 1985. 47. M. Servin Guirado. Advanced Techniques for Fringe Analysis. PhD thesis, Centro de Investigaciones en Optica, 1993. 48. U. Schnars and W. Jüptner. Digital Holography: Digital Hologram Recording, Numerical Reconstruction, and Related Techniques. Springer Verlag, 2005. 49. M. Karray, P. Slangen, and P. Picart. Comparison between digital Fresnel holography and digital image-plane holography: The role of the imaging aperture. Experimental Mechanics, 52(9):1275–1286, Nov 2012.


50. P. Picart, M. Gross, and P. Marquet. New techniques in digital holography. In P. Pascal, editor. Basic Fundamentals of Digital Holography. John Wiley & Sons, 2015. 51. J. Li and P. Picart, editors. Digital Holography. John Wiley & Sons, 2012. 52. M. Takeda, H. Ina, and S. Kobayashi. Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry. J. Opt. Soc. Am., 72(1):156–160, Jan 1982. 53. T. Kreis. Digital holographic interference-phase measurement using the Fourier-transform method. J. Opt. Soc. Am. A, 3:847–855, 1986. 54. A. Davila, J. M. Huntley, G. H. Kaufmann, and D. Kerr. High-speed dynamic speckle interferometry: phase errors due to intensity, velocity, and speckle decorrelation. Appl. Opt., 44(19):3954–3962, Jul 2005. 55. L. Yang, Y. Wang, and R. Lu. Advanced optical methods for whole field displacement and strain measurement. In 2010 International Symposium on Optomechatronic Technologies, IEEE, pages 1–6, Oct 2010. 56. D. I. Farrant and J. N. Petzing. Sensitivity errors in interferometric deformation metrology. Appl. Opt., 42(28):5634–5641, Oct 2003. 57. A. Martínez, R. Rodríguez-Vera, J. A. Rayas, and H. J. Puga. Error in the measurement due to the divergence of the object illumination wavefront for in-plane interferometers. Optics Communications, 223(4):239–246, 2003. 58. A. Martínez and J. A. Rayas. Evaluation of error in the measurement of displacement vector components by using electronic speckle pattern interferometry. Optics Communications, 271(2):445– 450, 2007. 59. D. Francis, R. P. Tatam, and R. M. Groves. Shearography technology and applications: a review. Measurement Science and Technology, 21(10):102001, 2010. 60. J. Parra-Michel, A. Martínez, M. Anguiano-Morales, and J. A. Rayas. Measuring object shape by using in-plane electronic speckle pattern interferometry with divergent illumination. Measurement Science and Technology, 21(4):045303, Mar 2010. 61. P. C. Montgomery. Forward Looking Innovations in Electronic Speckle Pattern Interferometry (ESPI). PhD thesis, Loughborough University, 1987. 62. T. R. Moore. A simple design for an electronic speckle pattern interferometer. American Journal of Physics, 72(11):1380–1384, 2004. 63. T. R. Moore. Erratum: “A simple design for an electronic speckle pattern interferometer” [am. j. phys. 72(11), 1380–1384 (2004)]. American Journal of Physics, 73(2):189–189, 2005. 64. T. R. Moore and J. J. Skubal. Time-averaged electronic speckle pattern interferometry in the presence of ambient motion. Part I. Theory and experiments. Appl. Opt., 47(25):4640–4648, Sep 2008.


65. G. Restivo, G. A. Isaicu, and G. L. Cloud. Low-cost non-destructive inspection by simplified digital speckle interferometry. Journal of Nondestructive Evaluation, 27(4):135, Oct 2008. 66. S. R. Guntaka, V. Sainov, V. Toal, S. Martin, T. Petrova, and J. Harizanova. Compact electronic speckle pattern interferometer using a near infrared diode laser and a reflection holographic optical element. Journal of Optics A: Pure and Applied Optics, 8(2):182–188, Jan 2006. 67. E. Hack, B. Frei, R. Kästle, and U. Sennhauser. Additive–subtractive two-wavelength ESPI contouring by using a synthetic wavelength phase shift. Appl. Opt., 37(13):2591–2597, May 1998. 68. Y.-Y. Cheng and J. C. Wyant. Two-wavelength phase shifting interferometry. Appl. Opt., 23(24):4539–4543, Dec 1984. 69. K. Creath, Y.-Y. Cheng, and J. C. Wyant. Contouring aspheric surfaces using two-wavelength phase-shifting interferometry. Optica Acta: International Journal of Optics, 32(12):1455–1464, 1985. 70. E. A. Barbosa and A. C. L. Lino. Multiwavelength electronic speckle pattern interferometry for surface shape measurement. Appl. Opt., 46(14):2624–2631, May 2007. 71. D. Mariano da Silva, E. Acedo Barbosa, G. Cunha Cardoso, and N. U. Wetter. Real-time contour fringes obtained with a variable synthetic wavelength from a single diode laser. Applied Physics B, 118(1):159–166, Jan 2015. 72. M. Agour, R. Klattenhoff, C. Falldorf, and R. B. Bergmann. Speckle noise reduction in single-shot holographic two-wavelength contouring. Proc. SPIE, 10233:102330R, 2017 [doi: 10.1117/12.2264971]. 73. A. Schiller, T. Beckmann, M. Fratz, D. Belzer, A. Bertz, D. Carl, and K. Buse. Digital holography on moving objects: multiwavelength height measurements on inclined surfaces. Proc. SPIE, 10329:103290D, 2017 [doi: 10.1117/12.2270176]. 74. P. Bergström, D. Khodadad, E. Hällstig, and M. Sjödahl. Dualwavelength digital holography: single-shot shape evaluation using speckle displacements and regularization. Appl. Opt., 53(1):123–131, Jan 2014. 75. D. Khodadad, P. Bergström, E. Hällstig, and M. Sjödahl. Fast and robust automatic calibration for single-shot dual-wavelength digital holography based on speckle displacements. Appl. Opt., 54(16):5003– 5010, Jun 2015. 76. S. Arikawa, K. Ashizawa, K. Koga, and S. Yoneyama. Optimum image extraction and phase analysis for ESPI measurements under environmental disturbance. Experimental Mechanics, 56(6):987–997, Jul 2016. 77. H. Goldstein, J. Safko, and C. Poole. Classical Mechanics, Third edition, Pearson, 2001. 78. R. K. Erf, editor. Speckle Metrology. Academic Press, 1978.

84

References

79. J. C. Shelton and J. F. Orr, editors. Optical Measurement Methods in Biomechanics. Chapman and Hall, 1997. 80. A. Dávila, S. Márquez, E. Landgrave, Z. Vázquez, K. Vera, and C. Caudillo. Axial loading verification method for small bones using carrier fringes in speckle pattern interferometry. Journal of Modern Optics, 62(11):937–942, 2015. 81. A. Svanbro, J. M. Huntley, and A. Davila. Optimal re-referencing rate for in-plane dynamic speckle interferometry. Appl. Opt., 42(2):251–258, Jan 2003. 82. L. Pirodda. Shadow and projection moiré techniques for absolute or relative mapping of surface shapes. Optical Engineering, 21:4:640–649, 1982 [doi: 10.1117/12.7972959]. 83. C. Joenathan, B. Franze, P. Haible, and H. J. Tiziani. Shape measurement by use of temporal Fourier transformation in dual-beam illumination speckle interferometry. Appl. Opt., 37(16):3385–3390, Jun 1998. 84. X. Gao, L. Yang, Y. Wang, B. Zhang, X. Dan, J. Li, and S. Wu. Spatial phase-shift dual-beam speckle interferometry. Appl. Opt., 57(3):414–419, Jan 2018. 85. M. R. Viotti, A. Albertazzi Goncalves, Jr., and G. H. Kaufmann. Measurement of residual stresses using local heating and a radial inplane speckle interferometer. Optical Engineering, 44(9):093606, 2005 [doi: 10.1117/1.2050307]. 86. A.-K. Nassim, L. Joannes, and A. Cornet. In-plane rotation analysis by two-wavelength electronic speckle interferometry. Appl. Opt., 38(12):2467–2470, Apr 1999. 87. L. Yang and X. Xie. Digital Shearography: New Developments and Applications. SPIE Press, 2016 [doi: 10.1117/3/2235244]. 88. W. Steinchen and L. Yang. Digital Shearography: Theory and Application of Digital Speckle Pattern Shearing Interferometry. SPIE Press, 2003. 89. H. O. Saldner. TV Holography and Shearography in Experimental Mechanics. PhD thesis, Luleå University of Technology, Sweden, 1996. 90. N. Ida and N. Meyendorf, editors. Handbook of Advanced Nondestructive Evaluation. Springer Nature Switzerland AG, 2019. 91. F. Zastavnik, L. Pyl, J. Gu, H. Sol, M. Kersemans, and W. Van Paepegem. Calibration and correction procedure for quantitative out-ofplane shearography. Measurement Science and Technology, 26(4):045201, Feb 2015. 92. A. Andersson, N. Krishna Mohan, M. Sjödahl, and N.-E. Molin. TV shearography: quantitative measurement of shear-magnitude fields by use of digital speckle photography. Appl. Opt., 39(16):2565–2568, Jun 2000.

References

85

93. A. G. Anisimov, M. G. Serikova, and R. M. Groves. 3D shape shearography technique for surface strain measurement of free-form objects. Appl. Opt., 58(3):498–508, Jan 2019. 94. W. S. Wan Abdullah and J. N. Petzing. Development of speckle shearing interferometer error analysis as an aperture function of wavefront divergence. Journal of Modern Optics, 52(11):1495–1510, 2005. 95. A. Gundlach, J. M. Huntley, B. Manzke, and J. Schwider. Speckle shearing interferometry using a diffractive optical beam splitter. Optical Engineering, 36(5):1488–1498, 1997 [doi: 10.1117/1.601351]. 96. H. Schreiber and J. Schwider. Lateral shearing interferometer based on two Ronchi phase gratings in series. Appl. Opt., 36(22):5321–5324, Aug 1997. 97. J. Schwider. Continuous lateral shearing interferometer. Appl. Opt., 23(23):4403–4409, Dec 1984. 98. V. Nercissian, I. Harder, K. Mantel, A. Berger, G. Leuchs, N. Lindlein, and J. Schwider. Diffractive simultaneous bidirectional shearing interferometry using tailored spatially coherent light. Appl. Opt., 50(4):571–578, Feb 2011. 99. C. Joenathan, A. Bernal, and R. S. Sirohi. Spatially multiplexed x-y lateral shear interferometer with varying shears using holographic lens and spatial Fourier transform. Appl. Opt., 52(22):5570–5576, Aug 2013. 100. F. Aparecido Alves da Silva, D. P. Willemann, A. Vieira Fantin, M. E. Benedet, and A. Albertazzi Gonzalves. Evaluation of a novel compact shearography system with DOE configuration. Optics and Lasers in Engineering, 104:90–99, 2018. 101. T. Ling, D. Liu, X. Yue, Y. Yang, Y. Shen, and J. Bai. Quadriwave lateral shearing interferometer based on a randomly encoded hybrid grating. Opt. Lett., 40(10):2245–2248, May 2015. 102. N. Krishna Mohan, H. Saldner, and N.-E. Molin. Electronic speckle pattern interferometry for simultaneous measurement of out-of-plane displacement and slope. Opt. Lett., 18(21):1861–1863, Nov 1993. 103. B. Mancilla-Escobar, Z. Malacara-Hernández, and D. MalacaraHernández. Two dimensional wavefront retrieval using lateral shearing interferometry. Optics Communications, 416:100–107, 2018. 104. P. H. Phuc, N. T. Manh, H.-G. Rhee, Y.-S. Ghim, H.-S. Yang, and Y.-W. Lee. Improved wavefront reconstruction algorithm from slope measurements. Journal of the Korean Physical Society, 70(5):469–474, Mar 2017. 105. M. Servin, M. Cywiak, and A. Dávila. Lateral shearing interferometry: theoretical limits with practical consequences. Opt. Express, 15(26):17805–17818, Dec 2007.

86

References

106. P. Liang, J. Ding, Z. Jin, C.-S. Guo, and H. T. Wang. Two-dimensional wave-front reconstruction from lateral shearing interferograms. Opt. Express, 14(2):625–634, Jan 2006. 107. A. Davila, M. Servin Guirado, and M. Facchini. Fast phase-map recovery from large shears in an electronic speckle-shearing pattern interferometer using a Fourier least-squares estimation. Optical Engineering, 39:2487–2494, 2000 [doi: 10.1117/1.1287260]. 108. C. Elster and I. Weingärtner. Solution to the shearing problem. Appl. Opt., 38(23):5024–5031, Aug 1999. 109. S. Waldner. Removing the image-doubling in shearography by reconstruction of the displacement field. Optics Communications, 127(1):117–126, 1996. 110. F. Roddier and C. Roddier. Wavefront reconstruction using iterative Fourier transforms. Appl. Opt., 30(11):1325–1327, Apr 1991. 111. W. H. Southwell. Wave-front estimation from wave-front slope measurements. J. Opt. Soc. Am., 70(8):998–1006, Aug 1980. 112. D. Singh Mehta, P. Singh, M. S. Faridi, S. Mirza, and C. Shakher. Twowavelength lateral shearing interferometry. Optical Engineering, 44(8):085603, 2005 [doi:10.1117/1.2012498]. 113. Y.-S. Ghim, H.-G. Rhee, A. Davies, H.-S. Yang, and Y.-W. Lee. 3D surface mapping of freeform optics using wavelength scanning lateral shearing interferometry. Opt. Express, 22(5):5098–5105, Mar 2014. 114. F. Chen. Digital shearography: state of the art and some applications. Journal of Electronic Imaging, 10(1):240–251, 2001 [doi:10.1117/1. 1329336]. 115. J. Takesaki and Y. Y. Hung. Direct measurement of flexural strains in plates by shearography. Journal of Applied Mechanics, 53:125–129, 1986. 116. Y. Y. Hung, J. D. Hovanesian, and J. Takezaki. A fringe carrier technique for unambiguous determination of fringe orders in shearography. Optics and Lasers in Engineering, 8(2):73–81, 1988. 117. A. Dávila, G. H. Kaufmann, and C. Pérez-López. Transient deformation analysis by a carrier method of pulsed electronic speckle-shearing pattern interferometry. Appl. Opt., 37(19):4116–4122, Jul 1998. 118. X. Xie, L. Yang, N. Xu, and X. Chen. Michelson interferometer based spatial phase shift shearography. Appl. Opt., 52(17):4063–4071, Jun 2013. 119. B. Bhaduri, N. Krishna Mohan, M. P. Kothiyal, and R. S. Sirohi. Use of spatial phase shifting technique in digital speckle pattern interferometry (DSPI) and digital shearography (DS). Opt. Express, 14(24):11598– 11607, Nov 2006. 120. F. Santos, M. Vaz, and J. Monteiro. A new set-up for pulsed digital shearography applied to defect detection in composite structures. Optics and Lasers in Engineering, 42(2):131–140, 2004.

References

87

121. X. Gao, Y. Wang, X. Dan, B. Sia, and L. Yang. Double imaging MachZehnder spatial carrier digital shearography. Journal of Modern Optics, 66(2):153–160, 2019. 122. C. Cai and L. He. Improved Mach-Zehnder interferometer-based shearography. Optics and Lasers in Engineering, 50(12):1699–1705, 2012. 123. E. Sánchez Barrera, A. Vieira Fantin, D. P. Willemann, M. E. Benedet, and A. Albertazzi Goncalves, Jr. Multiple-aperture one-shot shearography for simultaneous measurements in three shearing directions. Optics and Lasers in Engineering, 111:86– 92, 2018. 124. R. M. Groves. Development of Shearography for Surface Strain Measurement of Non-Planar Objects. PhD thesis, Cranfield University, Centre for Photonics and Optical Engineering, School of Mechanical Engineering, 2001. 125. A. Fernandez, A. Davila, C. Perez-Lopez, G. Mendiola, and J. BlancoGarcia. Algorithm for surface contouring using two-source phasestepping digital shearography. Proc. SPIE, 4419:170–173, 2001 [doi:10. 1117/12.437103]. 126. T. Santhanakrishnan, N. Krishna Mohan, and R. S. Sirohi. Oblique observation speckle shear interferometers for slope change contouring. Journal of Modern Optics, 44(4):831–839, 1997. 127. P. K. Rastogi. An electronic pattern speckle shearing interferometer for the measurement of surface slope variations of three-dimensional objects. Optics and Lasers in Engineering, 26(2):93–100, 1997. 128. S. Velghe, J. Primot, N. Guérineau, M. Cohen, and B. Wattellier. Wavefront reconstruction from multidirectional phase derivatives generated by multilateral shearing interferometers. Opt. Lett., 30(3):245–247, Feb 2005. 129. R. Legarda-Sáenz, M. Rivera, R. Rodríguez-Vera, and G. TrujilloSchiaffino. Robust wave-front estimation from multiple directional derivatives. Opt. Lett., 25(15):1089–1091, Aug 2000. 130. H.-X. Trinh, S.-T. Lin, L.-C. Chen, S.-L. Yeh, C.-S. Chen, and H.-H. Hoang. Shearing interference microscope for step-height measurements. Journal of Microscopy, 266(2):178–185, 2017. 131. C. Joenathan, C. S. Narayanamurthy, and R. S. Sirohi. Radial and rotational slope contours in speckle shear interferometry. Optics Communications, 56(5):309–312, 1986. 132. S. Winther. 3D strain measurements using ESPI. Optics and Lasers in Engineering, 8(1):45–57, 1988. 133. L. Bruno, G. Bianco, and M. A. Fazio. A multi-camera speckle interferometer for dynamic full-field 3D displacement measurement: Validation and inflation testing of a human eye sclera. Optics and Lasers in Engineering, 107:91–101, 2018. 134. S. I. Krishnamachari. Applied Stress Analysis of Plastics: A Mechanical Engineering Approach. Springer, 1989.

88

References

135. A. Asundi. Introduction to engineering mechanics. In P. K. Rastogui, editor. Photomechanics. Springer Verlag, pages 33–54, Berlin, 1999. 136. A. S. Khan and X. Wang. Strain Measurements and Stress Analysis. Pearson, 2001. 137. M. R. Viotti and A. Albertazzi. Robust Speckle Metrology Techniques for Stress Analysis and NDT. SPIE Press, 2014 [doi:10.1117/3.1002651]. 138. F. Mendoza Santoyo, G. Pedrini, S. Schedin, and H. J. Tiziani. 3D displacement measurements of vibrating objects with multi-pulse digital holography. Measurement Science and Technology, 10(12):1305, 1999. 139. D. I. Farrant, G. H. Kaufmann, J. N. Petzing, J. R. Tyrer, B. F. Oreb, and D. Kerr. Measurement of transient deformations with dual-pulse addition electronic speckle-pattern interferometry. Appl. Opt., 37(31):7259–7267, Nov 1998. 140. T. Takatsuji, B. F. Oreb, D. I. Farrant, and J. R. Tyrer. Simultaneous measurement of three orthogonal components of displacement by electronic speckle-pattern interferometry and the Fourier transform method. Appl. Opt., 36(7):1438–1445, Mar 1997. 141. P. Picart, E. Moisson, and D. Mounier. Twin-sensitivity measurement by spatial multiplexing of digitally recorded holograms. Appl. Opt., 42(11):1947–1957, Apr 2003. 142. T. Saucedo-A., M. H. De la Torre-Ibarra, F. Mendoza Santoyo, and I. Moreno. Digital holographic interferometer using simultaneously three lasers and a single monochrome sensor for 3D displacement measurements. Opt. Express, 18(19):19867–19875, Sep 2010. 143. K. Liu, S. J. Wu, X. Y. Gao, and L. X. Yang. Simultaneous measurement of in-plane and out-of-plane deformations using dualbeam spatial-carrier digital speckle pattern interferometry. In Applied Mechanics and Materials, volume 782, pages 316–325, 2015. 144. Y. Fang, S. J. Wu, and L. X. Yang. Synchronous measurement of threedimensional deformations using tri-channel spatial-carrier digital speckle pattern interferometry. In Applied Mechanics and Materials, volume 868, pages 316–322, 2017. 145. L. Yang and J. Li. Shearography. In N. Ida and N. Meyendorf, editors. Handbook of Advanced Non-Destructive Evaluation, Springer International Publishing, pages 1–37, 2018. 146. S. Wang, J. Dong, F. Pöller, X. Dong, M. Lu, L. M. Bilgeri, M. Jakobi, F. Salazar-Bloise, and A. W. Koch. Dual-directional shearography based on a modified common-path configuration using spatial phase shift. Appl. Opt., 58(3):593–603, Jan 2019. 147. H. Hooshmand-Ziafi, K. Hassani, and M. Dashtdar. In-plane deformation gradient measurement using common-path spatial phase shift shearography. Proc. SPIE, 11060, 1106008, 2019 [doi:10.1117/12. 2525234].

References

89

148. J. Dong, S. Wang, M. Lu, M. Jakobi, Z. Liu, X. Dong, F. Pöller, L. M. Bilgeri, F. Salazar Bloise, A. K. Yetisen, and A. W. Koch. Real-time dual-sensitive shearography for simultaneous in-plane and out-of-plane strain measurements. Opt. Express, 27(3):3276–3283, Feb 2019. 149. G.-Q. Gu, G.-Z. Xu, and B. Xu. Synchronous measurement of out-ofplane displacement and slopes by triple-optical-path digital speckle pattern interferometry. Metrology and Measurement Systems, 25:3–14, Jan 2018. 150. Q. Zhao, W. Chen, F. Sun, P. Yan, B. Ye, and Y. Wang. Simultaneous 3D measurement of deformation and its first derivative with speckle pattern interferometry and shearography. Appl. Opt., 58(31):8665–8672, Nov 2019. 151. B. A. Horwitz. Multiplex techniques for real-time shearing interferometry. Optical Engineering, 29(10):1223–1232, 1990 [doi:10.1117/12.55719]. 152. G. Da Costa. Transient phenomena analysis. In R. K. Erf, editor. Speckle Metrology, Academic Press, pages 267–280, 1978. 153. T. J. Cookson, J. N. Butters, and H. C. Pollard. Pulsed lasers in electronic speckle pattern interferometry. Optics and Laser Technology, 10:119–124, 1978. 154. F. Mendoza Santoyo, D. Kerr, J. R. Tyrer, and T. C. West. A novel approach to whole field vibration analysis using a pulsed laser system. Proc. SPIE, 1136, 335–345, 1989 [doi:10.1117/12.9617055]. 155. G. Pedrini, B. Pfister, and H. Tiziani. Double pulse-electronic speckle interferometry. Journal of Modern Optics, 40(1):89–96, 1993. 156. N.-E. Molin. Optical methods for acoustics and vibration measurements. In T. D. Rossing, editor. Springer Handbook of Acoustics, Springer, pages 1139–1163, 2014. 157. J. R. Tyrer. Two laser speckle correlation. UK Pat. 8712933, 8613635, 1986. 158. A. J. Moore, J. D. C. Jones, and J. D. R. Valera. Dynamic measurements. In P. K. Rastogui, editor. Digital Speckle Pattern Interferometry and Related Techniques, John Wiley & Sons, Chapter 4, pages 225–287, 2001. 159. A. J. Moore, D. P. Hand, J. S. Barton, and J. D. C. Jones. Transient deformation measurement with electronic speckle pattern interferometry and a high-speed camera. Appl. Opt., 38(7):1159–1162, Mar 1999. 160. R. A. Martinez-Celorio, A. Davila, C. Perez-Lopez, and L. Marti Lopez. Visibility enhancement of carrier fringes in electronic speckle shearing pattern interferometry using microspheres for light detection in back reflection. Optik, 112:99–104, 2001. 161. T. E. Blum, K. van Wijk, B. Pouet, and A. Wartelle. Multicomponent wavefield characterization with a novel scanning laser interferometer. Review of Scientific Instruments, 81(7):073101, 2010.

90

References

162. A. J. Moore et. al. Pulsed ESPI. In P. K. Rastogui and D. Inaudi, editors. Trends in Optical Non-destructive Testing and Inspection. Elsevier Science, 2000. 163. J. M. Huntley. High-speed laser speckle photography. Part 1: repetitively Q-switched ruby laser light source. Optical Engineering, 33(5):1692–1699, 1994 [doi: 10.1117/12.168542] 164. A. D. W. McKie, J. W. Wagner, J. B. Spicer, and J. B. Deaton. Dualbeam interferometer for the accurate determination of surface-wave velocity. Appl. Opt., 30(28):4034–4039, Oct 1991. 165. J. P. Monchalin. Optical detection of ultrasound. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 33(5):485–499, 1986. 166. D. Cernadas, C. Trillo, Á. F. Doval, Ó. López, C. López, B. V. Dorro, J. L. Fernández, and M. Pérez-Amor. Non-destructive testing of plates based on the visualisation of Lamb waves by double-pulsed TV holography. Mechanical Systems and Signal Processing, 20(6):1338–1349, Aug 2006. 167. C. Trillo, D. Cernadas, A. F. Doval, C. López, B. V. Dorrío, and J. L. Fernández. Detection of transient surface acoustic waves of nanometric amplitude with double-pulsed TV holography. Appl. Optics, 42(7):1228– 35, Mar 2003. 168. K. F. Graff. Wave Motion in Elastic Solids (Dover Books on Physics). Dover Publications, Inc., 1975. 169. N. Hosoya, A. Yoshinaga, A. Kanda, and I. Kajiwara. Non-contact and non-destructive Lamb wave generation using laser-induced plasma shock wave. International Journal of Mechanical Sciences, 140:486–492, 2018. 170. L. Bruno and A. Poggialini. Phase shifting speckle interferometry for dynamic phenomena. Opt. Express, 16(7):4665–4670, Mar 2008. 171. I. T. Nistea and D. N. Borza. High speed speckle interferometry for experimental analysis of dynamic phenomena. Optics and Lasers in Engineering, 51(4):453–459, 2013. 172. T. Wang, L. Kai, and Q. Kemao. Real-time reference-based dynamic phase retrieval algorithm for optical measurement. Appl. Opt., 56(27):7726–7733, Sep 2017. 173. J. N. Petzing, A. Dávila, D. Kerr, and J. R. Tyrer. Pulsed carrier out-ofplane speckle interferometry for transient vibration analysis. Journal of Modern Optics, 45(4):825–836, 1998. 174. P. D. Ruiz, A. Davila, G. Mendiola, and G. H. Kaufmann. Measurement of the temporal evolution of periodic induced displacement derivatives using stroboscopic electronic speckle-shearing interferometry. Optical Engineering, 40(2):318–324, 2001 [doi:10.1117/1.1339876]. 175. J. M. Huntley, G. H. Kaufmann, and D. Kerr. Phase-shifted dynamic speckle pattern interferometry at 1 khz. Appl. Opt., 38(31):6556–6563, Nov 1999.

References

91

176. R. Spooren. Double-pulse subtraction TV holgraphy. Opt. Eng., 31(5):1000–1007, 1992 [doi: 10.1117/12.56149]. 177. J. R. Tyrer. Applications of pused holography and double pulsed electronic speckle pattern interferometry to large vibrating engineering structures. Proc. SPIE, 599:181–185, 1985 [doi: 10.1117/12.952373]. 178. B. Pfister, M. Beck, and H. Tiziani. Speckleinterferometrie mit alternativen phasenshiebe-methode an beispielen aus der defektanalyse. In W. Waiderlich, editor. Laser in der Technik: Vortrage Des 10. Internationalen Kongresses Laser 91, Berlin, Springer-Verlag, pages 63– 67, 1992. 179. C. Pérez López, F. Mendoza Santoyo, and J. A. Guerrero. Decoupling the x, y and z displacement components in a rotating disc using threedimensional pulsed digital holography. Measurement Science and Technology, 14(1):97, 2003. 180. C. Pérez López, F. Mendoza Santoyo, M. Cywiak, B. Barrientos, and G. Pedrini. New method for optical object derotation. Optics Communications, 203(3):249–253, 2002. 181. N. Alcala, J. L. Marroquin, and A. Dávila. Phase recovery using a twinpulsed addition fringe pattern in ESPI. Optics Communications, 163:15– 19, 1999. 182. A. Dávila, D. Kerr, and G. H. Kaufmann. Fast electro-optical system for pulsed ESPI carrier fringe generation. Optics Communications, 123:457–464, 1996. 183. A. J. Moore, J. R. Tyrer, and F. Mendoza Santoyo. Phase extraction from electronic speckle pattern interferometry addition fringes. Appl. Opt., 33(31):7312, 1994. 184. A. Fernández, A. J. Moore, C. Pérez-López, A. F. Doval, and J. BlancoGarcía. Study of transient deformations with pulsed TV holography: application to crack detection. Appl. Opt., 36(10):2058–2065, Apr 1997. 185. C. Pérez López, F. Mendoza Santoyo, R. Rodríguez Vera, and M. Funes-Gallanzi. Separation of vibration fringe data from rotating object fringes using pulsed ESPI. Optics and Lasers in Engineering, 38(3):145– 152, 2002. 186. B. V. Dorrío and J. L. Fernández. Phase-evaluation methods in wholefield optical measurement techniques. Measurement Science and Technology, 10(3):R33, 1999. 187. Y. Ichioka and M. Inuiya. Direct phase detecting system. Appl. Opt., 11(7):1507–1514, Jul 1972. 188. S. Zhang. High-Speed 3D Imaging with Digital Fringe Projection Techniques. CRC Press, 2016. 189. K. H. Womack. Interferometric phase measurement using spatial synchronous detection. Opt. Eng., 23:391–395, 1984 [doi: 10.1117/ 7973306].

92

References

190. V. I. Vlad and D. Malacara. Direct spatial reconstruction of optical phase from phase-modulated images. In E. Wolf, editor. Progress in Optics, Elsevier Science, pages 261–317, 1994. 191. R. Yazdani, S. Petsch, H. Reza Fallah, M. Hajimahmoodzadeh, and H. Zappe. Phase shifting in the spatial frequency domain. Optical Engineering, 55(3):034104, 2016 [doi: 10.1117/1.OE.55.3.034104]. 192. M. Wang, G. Du, C. Zhou, S. Si, Z. Lei, X. Li, and Y. Li. Precise and fast phase wraps reduction in fringe projection profilometry. Journal of Modern Optics, 64(18):1862–1869, 2017. 193. Z. Dong and H. Cheng. Highly noise-tolerant hybrid algorithm for phase retrieval from a single-shot spatial carrier fringe pattern. Optics and Lasers in Engineering, 100:176–185, 2018. 194. J. L. Marroquin, R. Rodriguez-Vera, and M. Servin. Local phase from local orientation by solution of a sequence of linear systems. J. Opt. Soc. Am. A, 15(6):1536–1544, Jun 1998. 195. J. L. Marroquin, M. Rivera, S. Botello, R. Rodriguez-Vera, and M. Servin. Regularization methods for processing fringe-pattern images. Appl. Opt., 38(5):788–794, Feb 1999. 196. N. Agarwal, C. Wang, and Q. Kemao. Windowed Fourier ridges for demodulation of carrier fringe patterns with nonlinearity: a theoretical analysis. Appl. Opt., 57(21):6198–6206, Jul 2018. 197. J. A. Rayas and A. Dávila. Optimization of a DIY parallel-optical-axes profilometer for compensation of fringe divergence. Appl. Opt., 60(31):9790–9798, Nov 2021. 198. P. Jia, J. Kofman, and C. E. English. Comparison of linear and nonlinear calibration methods for phase-measuring profilometry. Optical Engineering, 46:(4), 043601, 2007 [doi: 10.1117/1.2721025]. 199. K. J. Gasvik. Optical Metrology, Third edition. John Wiley & Sons, 2002. 200. A. Martínez, J. A. Rayas, H. J. Puga, and K. Genovese. Iterative estimation of the topography measurement by fringe-projection method with divergent illumination by considering the pitch variation along the x and z directions. Optics and Lasers in Engineering, 48(9):877–881, 2010. 201. F. J. Cuevas, M. Servin, and R. Rodriguez-Vera. Depth object recovery using radial basis functions. Optics Communications, 163(4):270–277, 1999. 202. M. Takeda. Fourier fringe demodulation. In P. Rastogi and E. Hack, editors. Phase Estimation in Optical Interferometry, CRC Press, pages 1–29, 2015. 203. A. Davila. Transient Displacement Analysis Using Double Pulsed ESPI and Fringe Processing Methods. PhD thesis, Loughborough University, 1996.

References

93

204. J. Huntley. Automated analysis of speckle interferograms. In P. K. Rastogi, editor. Digital Speckle Pattern Interferometry and Related Techniques, Wiley, Chapter 2, pages 95–97, 2001. 205. R. J. Green, J. G. Walker, and D. W. Robinson. Investigation of the Fourier transform method of fringe pattern analysis. Opt. Laser Eng., 8:29, 1988. 206. J. M. Huntley. An image processing system for the analysis of speckle photographs. Journal of Physics E: Scientific Instruments, 19(1):43–49, Jan 1986. 207. J. C. Estrada, J. L. Marroquin, and O. M. Medina. Reconstruction of local frequencies for recovering the unwrapped phase in optical interferometry. Scientific Reports, 7(1):6727, 2017. 208. G. H. Kaufmann and G. E. Galizzi. Phase measurement in temporal speckle pattern interferometry: comparison between the phase-shifting and the Fourier transform methods. Appl. Opt., 41(34):7254–7263, Dec 2002. 209. D. C. Williams, N. S. Nassar, J. E. Banyard, and M. S. Virdee. Digital phase-step interferometry: a simplified approach. Optics And Laser Technology, 23(3):147–150, 1991. 210. U. Schnars and W. Juptner. Digital Holography: Digital Hologram Recording, Numerical Reconstruction, and Related Techniques. Springer, 2005. 211. J. C. Dainty. Speckle Statistics and the Detection of Images in Hologram Reconstructions of Bubble Chamber Tracks and Other Objects. PhD thesis, Imperial College of Science, Technology and Medicine Blackett Laboratory, Optics Section, 1972. 212. A. E. Ennos. Speckle interferometry. In J. C. Dainty, editor. Laser Speckle and Related Phenomena in Topics in Applied Physics Vol. 9. Springer Verlag, 1984. 213. C. Pérez-López, M. H. De la Torre-Ibarra, and F. Mendoza Santoyo. Very high speed cw digital holographic interferometry. Opt. Express, 14(21):9709–9715, Oct 2006. 214. G. Pedrini, H. J. Tiziani, and Y. Zou. Digital double pulse-TVholography. Optics and Lasers in Engineering, 26(2-3):199–219, 1997. 215. B. B. Garcia, A. J. Moore, C. Perez-Lopez, L. Wang, and T. Tschudi. Transient deformation measurement with electronic speckle pattern interferometry by use of a holographic optical element for spatial phase steeping. Appl. Opt., 38(28):5944–5947, 1999. 216. O. Y. Kwon and D. M. Shough. Multichannel grating phase-shift interferometers. Proc. SPIE, 0599, 273–279, 1986 [doi: 10.1117/12.952387]. 217. G. C. Jin, N.-K. Bao, and P. S. Chung. Applications of a novel phase-shift method using a computer-controlled polarization mechanism. Optical Engineering, 33(8):2733–2737, 1994 [doi: 10.1117/12.173565].

94

References

218. G. C. Jin and S. Tang. Electronic speckle pattern interferometer with polarization phase-shift technique. Optical Engineering, 31(4):857–860, 1992 [doi: 10.1117/12/155373]. 219. J. C. Wyant. Computerized interferometric surface measurements. Appl. Opt., 52(1):1–8, 2013. 220. A. Hettwer, J. Kranz, and J. Schwider. Three channel phase-shifting interferometer using polarization-optics and a diffraction grating. Optical Engineering, 39(4):857–860, 2000 [doi: 10.1117/1.602453]. 221. X. Tian, X. Tu, J. Zhang, O. Spires, N. Brock, S. Pau, and R. Liang. Snapshot multi-wavelength interference microscope. Opt. Express, 26(14):18279–18291, Jul 2018. 222. Y. Awatsuji, T. Tahara, A. Kaneko, T. Koyama, K. Nishio, S. Ura, T. Kubota, and O. Matoba. Parallel two-step phase-shifting digital holography. Appl. Opt., 47(19):D183–D189, Jul 2008. 223. Y. Awatsuji. Parallel Phase-Shifting Digital Holography, John Wiley & Sons, pages 1–23, 2014. 224. T. Tahara, A. Maeda, Y. Awatsuji, K. Nishio, S. Ura, T. Kubota, and O. Matoba. Parallel phase-shifting dual-illumination phase unwrapping. Optical Review, 19(6):366–370, Nov 2012. 225. X. Xie, C. P. Lee, J. Li, B. Zhang, and L. Yang. Polarized digital shearography for simultaneous dual shearing directions measurements. Review of Scientific Instruments, 87(8):083110, 2016. 226. Y. B. Seo, H. B. Jeong, H.-G. Rhee, Y.-S. Ghim, and K.-N. Joo. Singleshot freeform surface profiler. Opt. Express, 28(3):3401–3409, Feb 2020. 227. V. Aranchuk, A. K. Lal, C. F. Hess, J. Davis Trolinger, and E. Scott. Pulsed spatial phase-shifting digital shearography based on a micropolarizer camera. Optical Engineering, 57(2):024109, 2018. [doi: 10.1117/ 1.OE.52.2.024109] 228. https://www.svs-vistek.com and https://thinklucid.com/. 229. https://www.theimagingsource.com/products/industrial-cameras/usb-3. 0-polarsens/. 230. K. Tatsuno and Y. Tsunoda. Diode laser direct modulation heterodyne interferometer. Appl. Opt., 26(1):37–40, Jan 1987. 231. J. Chen, Y. Ishii, and K. Murata. Heterodyne interferometry with a frequency-modulated laser diode. Appl. Opt., 27(1):124–128, Jan 1988. 232. H. Atcha and R. P. Tatam. Heterodyning of fibre optic electronic speckle pattern interferometers using laser diode wavelength modulation. Measurement Science and Technology, 5(6):704–709, Jun 1994. 233. N. R. Gomer, C. M. Gordon, P. Lucey, S. K. Sharma, J. C. Carter, and S. M. Angel. Raman spectroscopy using a spatial heterodyne spectrometer: Proof of concept. Applied Spectroscopy, 65(8):849–857, 2011.

References

95

234. F. Guzmán Cervantes, G. Heinzel, A. F. García Marín, V. Wand, F. Steier, O. Jennrich, and K. Danzmann. Real-time phase-front detector for heterodyne interferometers. Appl. Opt., 46(21):4541–4548, Jul 2007. 235. P. J. de Groot. A review of selected topics in interferometric optical metrology. Reports on Progress in Physics, 82(5):056101, Apr 2019. 236. V. V. Protopopov. Laser Heterodyning, Springer Series in Optical Sciences, vol. 149, Springer, Chapter 5, pages 243–305, 2009. 237. S. Wang, Z. Gao, Z. Feng, X. Zhang, D. Yang, and H. Yuan. Heterodyne imaging speckle interferometer. Optics Communications, 338:253–256, 2015. 238. X. Wang, Z. Gao, J. Qin, X. Zhang, and S. Yang. Temporal heterodyne shearing speckle pattern interferometry. Optics and Lasers in Engineering, 93:76–82, 2017. 239. P. Haible, M. P. Kothiyal, and H. J. Tiziani. Heterodyne temporal speckle-pattern interferometry. Appl. Opt., 39(1):114–117, Jan 2000. 240. G. E. Sommargren. Up/down frequency shifter for optical heterodyne interferometry. J. Opt. Soc. Am., 65(8):960–961, Aug 1975. 241. M. Takeda and M. Kitoh. Spatiotemporal frequency multiplex heterodyne interferometry. J. Opt. Soc. Am. A, 9(9):1607–1614, Sep 1992. 242. H. J. Tiziani, N. Kerwien, and G. Pedrini. 9.1 Iinterferometry: Datasheet from Landolt-Börnstein Group VIII. Advanced Materials and Technologies vol. 1a2: Laser Fundamentals. Part 2, Springer-Verlag, 2006. 243. H.-L. Hsieh and P.-C. Kuo. Heterodyne speckle interferometry for measurement of two-dimensional displacement. Opt. Express, 28(1):724–736, Jan 2020. 244. W. Wang, T. Yokozeki, R. Ishijima, A. Wada, Y. Miyamoto, M. Takeda, and S. G. Hanson. Optical vortex metrology for nanometric speckle displacement measurement. Opt. Express, 14(1):120–127, Jan 2006. 245. W. Wang, S. G. Hanson, and M. Takeda. Optical vortex metrology. In G. H. Kaufmann, editor. Advances in Speckle Metrology and Related Techniques, Wiley-VCH, pages 207–238, 2011. 246. H. J. Rabal and R. A. Braga, Jr., editors. Dynamic Laser Speckle and Applications. CRC Press, 2008. 247. C. Maurer, S. Bernet, and M. Ritsch-Marte. Refining common path interferometry with a spiral phase Fourier filter. Journal of Optics A: Pure and Applied Optics, 11(9):094023, 2009. 248. C. Maurer, A. Jesacher, S. Bernet, and M. Ritsch-Marte. What spatial light modulators can do for optical microscopy. Laser & Photonics Reviews, 5(1):81–101, 2011.

96

References

249. A. Jesacher and M. Ritsch-Marte. Synthetic holography in microscopy: opportunities arising from advanced wavefront shaping. Contemporary Physics, 57(1):46–59, 2016. 250. A. Aguilar, A. Dávila, and J. García-Márquez. Multi-step vortex filtering for phase extraction. Opt. Express, 22(7):8503–8514, Apr 2014. 251. A. Aguilar, A. Dávila, and J. E. A. Landgrave. Displacement measurement with multi-level spiral phase filtering in speckle interferometry. Optics and Lasers in Engineering, 52:19–26, 2014. 252. J. M. Huntley and H. Saldner. Temporal phase unwrapping algorithm for automated interferogram analysis. Applied Optics, 32(17):3047–3052, 1993. 253. K. Itoh. Analysis of the phase unwrapping algorithm. Appl. Opt., 21(14):2470–2470, Jul 1982. 254. G. H. Kaufmann and G. E. Galizzi. Phase measurement in temporal speckle pattern interferometry: comparison between the phase-shifting and the Fourier transform methods. Appl. Opt., 41(34):7254–7263, Dec 2002. 255. A. Davila, P. D. Ruiz, G. H. Kaufmann, and J. M. Huntley. Measurement of sub-surface delaminations in carbon fibre composites using high-speed phase-shifted speckle interferometry and temporal phase unwrapping. Optics and Lasers in Engineering, 40(5):447–458, 2003. 256. A. Davila, J. M. Huntley, G. H. Kaufmann, and D. Kerr. High-speed dynamic speckle interferometry: phase errors due to intensity, velocity, and speckle decorrelation. Appl. Opt., 44(19):3954–3962, Jul 2005. 257. C. R. Coggrave. Temporal Phase Unwrapping: Development and Application of Real-time Systems for Surface Profile and Surface Displacement Measurement. PhD thesis, Loughborough University, 2001. 258. M. Takeda, Q. Gu, M. Kinoshita, H. Takai, and Y. Takahashi. Frequency-multiplex fourier-transform profilometry: a single-shot threedimensional shape measurement of objects with large height discontinuities and/or surface isolations. Appl. Opt., 36(22):5347–5354, Aug 1997. 259. A. Davila, J. M. Huntley, C. Pallikarakis, P. D. Ruiz, and J. M. Coupland. Simultaneous wavenumber measurement and coherence detection using temporal phase unwrapping. Appl. Opt., 51(5):558– 567, Feb 2012. 260. J. M. Huntley and H. O. Saldner. Shape measurement by temporal phase unwrapping: comparison of unwrapping algorithms. Measurement Science and Technology, 8(9):986–992, Sep 1997. 261. H. O. Saldner and J. M. Huntley. Temporal phase unwrapping: application to surface profiling of discontinuous objects. Appl. Opt., 36(13):2770–2775, May 1997.

References

97

262. C. E. Towers, D. P. Towers, and J. D. C. Jones. Optimum frequency selection in multifrequency interferometry. Opt. Lett., 28(11):887–889, Jun 2003. 263. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi. Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review. Optics and Lasers in Engineering, 85:84–103, 2016. 264. A. Dávila and J. Rayas. Scale-insensitive property of the temporal phase unwrapping in fringe profilometry applications. Optik, 253:168451, 2022. 265. K. Falaggis, D. P. Towers, and C. E. Towers. Algebraic solution for phase unwrapping problems in multiwavelength interferometry. Appl. Opt., 53(17):3737–3747, Jun 2014. 266. D. O. Thompson and D. E. Chimenti, editors. Review of Progress in Quantitative Nondestructive Evaluation. Springer, 1997. 267. J. V. Araújo dos Santos and H. Lopes. Damage Localization Based on Modal Response Measured with Shearography, World Scientific, Chapter 5, pages 141–172, 2018. 268. F. Chen, W. D. Luo, M. Dale, A. Petniunas, P. Harwood, and G. M. Brown. High-speed ESPI and related techniques: overview and its application in the automotive industry. Optics and Lasers in Engineering, 40(5):459– 485, 2003. 269. V. Pagliarulo, F. Farroni, P. Ferraro, A. Lanzotti, M. Martorelli, P. Memmolo, D. Speranza, and F. Timpone. Combining ESPI with laser scanning for 3D characterization of racing tyres sections. Optics and Lasers in Engineering, 104:71–77, 2018. 270. J. Robillard and E. Conley, editors. Industrial Applications for Optical Data Processing and Holography. CRC Press, 1992. 271. ASTM E2581-14. Standard practice for shearography of polymer matrix composites and sandwich core materials in aerospace applications. Technical report, ASTM, West Conshohocken, Pennsylvania, 2014. 272. V. Pagliarulo, V. Bianco, P. Memmolo, C. Distante, B. Ruggiero, and P. Ferraro. Leaks detection in stainless steel kegs via ESPI. Optics and Lasers in Engineering, 110:220–227, 2018. 273. Y. Pang, B. K. Chen, S. F. Yu, and S. N. Lingamanaik. Enhanced laser speckle optical sensor for in situ strain sensing and structural health monitoring. Opt. Lett., 45(8):2331–2334, Apr 2020. 274. S. Takemoto. Applications of holography and ESPI in geophysical siences. In J. D. Trolinger, editor. Optical Inspection and Testing, volume CR46, SPIE Press, pages 175–196, 1992. 275. J. N. Petzing and J. R. Tyrer. Shearing interferometry for power ultrasonic vibration analysis. In J. F. Silva Gomes, editor. Recent Advances in Experimental Mechanics: Proc. of the 10th International Conference on Experimental Mechanics, pages 631–636, 1994.

98

References

276. J. M. Huntley and H. O. Saldner. Multi-channel pressure sensor using speckle interferometry. Optics and Lasers in Engineering, 23(5):263–275, 1995. 277. P. K. Upputuri, S. Umapathy, M. P. Kothiyal, and N. Krishna Mohan. Microscopic TV holography and interferometry for surface profiling and vibration amplitude measurement in microsystems. Defence Science Journal, 61(5):491–498, 2011. 278. J. N. Petzing, J. R. Tyrer, and J. R. Oswin. Measuring flextensional transducer mode shapes underwater using laser speckle interferometry. In Sonar Transducers ‘95, volume 17(3), pages 220–229. Institute of Acoustics, 1995. 279. S. N. Bobo. Shearographic strain assessment for inspection of fossil-fuel power plants. Materials Evaluation, 49(10): 1308–1311, 1991. 280. A. Maranon, P. D. Ruiz, A. D. Nurse, J. M. Huntley, L. Rivera, and G. Zhou. Identification of subsurface delaminations in composite laminates. Composites Science and Technology, 67(13):2817–2826, 2007. 281. G. H. Kaufmann. Nondestructive testing with thermal waves using phase-shifted temporal speckle pattern interferometry. Optical Engineering, 42(7):2010–2014, 2003 [doi: 10.1117/1.1579702]. 282. G. H. Kaufmann, M. R. Viotti, and G. E. Galizzi. Flaw detection using temporal speckle pattern interferometry and thermal waves. J. Holography and Speckle 1(2):80–84, 2004. 283. A. E. Dolinko and G. H. Kaufmann. Enhancement in flaw detectability by means of lockin temporal speckle pattern interferometry and thermal waves. Optics and Lasers in Engineering, 45(6):690–694, 2007. 284. A. E. Dolinko. Non-destructive visualization of defect borders in flawed plates inspected by thermal load. Journal of Physics D: Applied Physics, 41(20):205503, 2008. 285. C. Dong, K. Li, Y. Jiang, D. Arola, and D. Zhang. Evaluation of thermal expansion coefficient of carbon fiber reinforced composites using electronic speckle interferometry. Opt. Express, 26(1):531–543, Jan 2018. 286. F. J. Macedo, M. E. Benedet, A. Vieira Fantin, D. P. Willemann, F. Aparecido Alves da Silva, and A. Albertazzi. Inspection of defects of composite materials in inner cylindrical surfaces using endoscopic shearography. Optics and Lasers in Engineering, 104:100–108, 2018. 287. D. Paoletti, G. S. Spagnolo, M. Facchini, and P. Zanetta. Artwork diagnostics with fiber-optic digital speckle pattern interferometry. Applied Optics, 32(31):6236–6241, 1993. 288. D. Paoletti and G. Schirripa Spagnolo. IV Interferometric methods for artwork diagnostics. In E. Wolf, editor. Progress in Optics, volume 35 (Supplement C), Elsevier, pages 197–255, 1996.

References

99

289. A. Ponticorvo and A. K. Dunn. How to build a laser speckle contrast imaging (LSCI) system to monitor blood flow. J. Vis. Exp., 2010(45): 2004, 2010. 290. N. E. Knox Cartwright, J. R Tyrer, and J. Marshall. Age-related differences in the elasticity of the human cornea. Investigative Ophthalmology & Visual Science, 52(7):4324, 2011. 291. N. Mohon and A. Rodemann. Laser speckle for determining ametropia and accommodation response of the eye. Appl. Opt., 12(4):783–787, Apr 1973. 292. P. D. Jaycock, L. Lobo, J. Ibrahim, J. Tyrer, and J. Marshall. Interferometric technique to measure biomechanical changes in the cornea induced by refractive surgery. Journal of Cataract & Refractive Surgery, 31(1):175–184, 2005. 293. A. Wilson, J. Marshall, and J. Tyrer. The role of light in measuring ocular biomechanics. Eye, 30(2):234–240, 2016. 294. J. N. Petzing, C. Heras-Palou, J. King, and J. R. Tyrer. The analysis of human femurs and prostheses using electronic speckle pattern interferometry. Engineering Science and Education Journal, 7(1):35–40, 1998. 295. R. A. Martínez-Celorio, R. González-Peña, R. Cibrián, R. Salvador, M. F. Mánguez, and L. Martí-López. Young’s modulus measurement of the radius bone using a shearing interferometer with carrier fringes. Optics and Lasers in Engineering, 48(7):727–731, 2010. 296. V. Moreno, C. Vázquez-Vázquez, M. Gallas, and J. Crespo. Speckle shearing pattern interferometry to assess mechanical strain in the human mandible jaw bone under physiological stress. Proc. SPIE 8001, 800131 (2011) [doi: 10.1117/12.892165] 297. A. Dávila, S. Márquez, E. Landgrave, Z. Vázquez, K. Vera, and C. Caudillo. Axial loading verification method for small bones using carrier fringes in speckle pattern interferometry. Journal of Modern Optics, 62(11):937–942, 2015. 298. M. Pajuelo, G. Baldwin, H. Rabal, N. Cap, R. Arizaga, and M. Trivi. Bio-speckle assessment of bruising in fruits. Optics and Lasers in Engineering, 40(1):13–24, 2003. 299. M. Tziraki, L. Song, and D. S. Elson. An endoscopic multi-exposure laser speckle contrast analysis system for blood flow and microcirculation measurements. Proc. SPIE 9327, 93270B, 2015 [doi: 10.1117/12. 2078225] 300. M. Z. Ansari and A. K. Nirala. Monitoring capillary blood flow using laser speckle contrast analysis with spatial and temporal statistics. Optik, 126(24):5224–5229, 2015. 301. M. Zaheer Ansari, H. Cabrera, and E. E. Ramírez-Miquet. Imaging functional blood vessels by the laser speckle imaging (LSI) technique

100

302. 303.

304.

305.

306.

307.

308.

309.

310.

311.

312.

References

using Q-statistics of the generalized differences algorithm. Microvascular Research, 107:46–50, 2016. P. Kong, Y. L. Zhou, Y. Y. Xie, and M. Jiang. A novel highly efficient algorithm for laser speckle imaging, Optik, 127(15):5852–5859, 2016. R. Zhang, L. Song, J. Xu, X. An, W. Sun, X. Zhao, Z. Zhou, and L. Chen. Laser speckle imaging for blood flow based on pixel resolved zero-padding auto-correlation coefficient distribution. Optics Communications, 439:38–46, 2019. A. H. Aminfar, N. Davoodzadeh, G. Aguilar, and M. Princevac. Application of optical flow algorithms to laser speckle imaging. Microvascular Research, 122:52–59, 2019. C. Regan, S. M. White, B. Y. Yang, T. Takesh, J. Ho, C. Wink, P. Wilder-Smith, and B. Choi. Design and evaluation of a miniature laser speckle imaging device to assess gingival health. Journal of Biomedical Optics, 21(10), 104002, 2016 [doi: 10.1117/1.JBO.21.10.104002]. N. Kiyota, H. Kunikata, Y. Shiga, K. Omodaka, and T. Nakazawa. Relationship between laser speckle flowgraphy and optical coherence tomography angiography measurements of ocular microcirculation. Graefe’s Archive for Clinical and Experimental Ophthalmology, 255(8):1633–1642, Aug 2017. Q. Saleem, R. D. Wildman, J. M. Huntley, and M. B. Whitworth. Improved understanding of biscuit checking using speckle interferometry and finite-element modelling techniques. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 461(2059):2135–2154, 2005. A. E. Elmahdy, P. D. Ruiz, R. D. Wildman, J. M. Huntley, and S. Rivers. Stress measurement in east asian lacquer thin films owing to changes in relative humidity using phase-shifting interferometry. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 467(2129):1329–1347, 2011. I. M. De la Torre, M. del Socorro Hernandez Montes, J. M. FloresMoreno, and F. Mendoza Santoyo. Laser speckle based digital optical methods in structural mechanics: A review. Optics and Lasers in Engineering, 87 (Supplement C):32–58, 2016. C. Guo, D. Li, D. P. Kelly, H. Li, J. P. Ryle, and J. T. Sheridan. Measuring refractive index of glass by using speckle. Appl. Opt., 57(22): E205–E217, Aug 2018. E. A. Barbosa, D. M. Silva, C. E. Nascimento, F. L. Galváo, and J. C. R. Mittani. Progressive power lens measurement by low coherence speckle interferometry. Optics and Lasers in Engineering, 51(7):898–906, 2013. S. Zabler, editor. Special Issue: Phase-Contrast and Dark-Field Imaging. Journal of Imaging, 2018.

References

101

313. M. Benyamin, H. Genish, R. Califa, A. Schwartz, Z. Zalevsky, and N. Ozana. Non-contact photoacoustic imaging using laser speckle contrast analysis. Opt. Lett., 44(12):3110–3113, Jun 2019. 314. G. D. Bruce, L. O’Donnell, M. Chen, and K. Dholakia. Overcoming the speckle correlation limit to achieve a fiber wavemeter with attometer resolution. Opt. Lett., 44(6):1367–1370, Mar 2019. 315. N. K. Metzger, R. Spesyvtsev, G. D. Bruce, B. Miller, G. T. Maker, G. Malcolm, M. Mazilu, and K. Dholakia. Harnessing speckle for a subfemtometre resolved broadband wavemeter and laser stabilization. Nature Communications, 8:1–8, 2017. 316. H. Cao. Perspective on speckle spectrometers. Journal of Optics, 19(6):060402, 2017. 317. M. F. Shirazi, P. Kim, M. Jeon, C.-S. Kim, and J. Kim. Free space broad-bandwidth tunable laser diode based on littman configuration for 3D profile measurement. Optics & Laser Technology, 101:462–467, 2018. 318. A. M. Ghez, P. W. Gorham, C. A. Haniff, S. R. Kulkarni, K. Matthews, G. X. Neugebauer, and W. N. Weir. Infrared speckle imaging at Palomar. Proc. SPIE 1237, 249–258 (1990) [doi: 10.1117/12.19298] 319. A. Finkbeiner. Astronomy: Star tracker. Nature, 495:296–298, 2013. 320. M. Lehmann. Statistical Theory of Two-Wave Speckle Interferometry and Its Application to the Optimization of Deformation Measurements. PhD thesis, École Polytechnique Fédérale de Lausanne, 1998. 321. J. M. Chicharro Higuera. Estudio de la Magnetostricción por Interferometría de Speckle. PhD thesis, Universidad Politécnica de Madrid: Escuela Técnica Superior de Ingenieros de Minas, 2000. 322. M. Lehmann. Speckle statistics in the context of digital speckle interferometry. In P. K. Rastogi, editor. Digital Speckle Pattern Interferometry and Related Techniques, Wiley, Chapter 1, pages 1–58, 2001. 323. J. W. Goodman. Statistical properties of laser speckle patterns. In J. C. Dainty, editor. Laser Speckle and Related Phenomena in Topics in Applied Physics Vol. 9, Springer Verlag, pages 21–26, 1984. 324. T. Okamoto and T. Asakura. III: The statistics of dynamic speckles. In E. Wolf, editor. Progress in Optics, volume 34 (Supplement C), Elsevier, pages 183–248, 1995.

Index

A
Absolute value correlation, 12
Adaptable phase-shifting algorithms, 13
Airy disc formula, 63
Applications, 75
Autocorrelation function of the intensity fluctuation of two points on a speckle pattern, 63

B
Blood flow, 76
Bone studies with speckle, 76

C
Camera acquisition and synchrony methods using pulsed lasers, 46
Carré phase shifting, 13, 60
Carrier fringe patterns for open-fringe generation, 14
Carrier phase limits, 59
Carrier sub-pixel precision, 59
Carriers in digital holography, 15
Carriers in shearography, 31
Carriers in single-shot phase detection, 53
Coherence length, 2
Contouring in shearing interferometry, 31
Contouring in shearography, 31
Contouring with in-plane interferometer, 27

D
Deep convolutional neural network for noise removal, 13
Depth resolution from coherence length, 2
Derotation, 51
Difference of phase, 13, 69
Diffractive optical elements in shearography, 29
Discrete vortices for phase shifting, 68
Divergent illumination, 19
Double-image effect in shearing interferometry, 29
Double-image removal from multiple shears, 32
Drawbacks for digital holography, 16

E
Enhancement of addition fringes, 52
Equal-angle illuminations, 37
Euler decomposition for general rotation, 22
Eye cornea elasticity, accommodation, 76

F
Fourier transform method for difference of phase, 57

G
Generation of carrier fringes by rotation, 23

H
Heterodyne interferometry, 67

I
Illumination and observation angles, 10
Illumination beam induced errors, 28
Illumination setups, 10
Illumination using two simultaneous light sources, 33
In-plane and out-of-plane ESPI interferometer, 19
In-plane ESPI interferometer, 22
Integration of phase obtained in shearing interferometry, 29
Intensity correlation, 12
Interference, 1
Interference by two wavefronts, 2
Inverted T illumination, 37

L
L shape sequential illumination, 35
Lamb waves, 50
Local carrier frequencies, 60
Local carrier magnitude and direction, 55

M
Mean longitudinal speckle size, 5
Mean transversal speckle size, 5
Multiplexed carriers, 40
Multiplexing of phase sensitivity, 69
Multiwavelength interferometry, 73

N
Nonlinear carriers, 55
Number of integrated speckles, 78

O
Object displacement, 9
Out-of-plane ESPI interferometer, 19
Out-of-plane interferometer, 10

P
Phase detection, 53
Phase map, 14
Phase-shifting algorithms, 13
Phase shifting and vortex singularities location, 68
Phase shifting from carriers, 55
Phase shifting vs. Fourier transform method, 60
Phase shifting in adjacent pixels using polarization, 65
Phase shifting using quadrant phase shifts in the image plane, 64
Point spread function, 5
Polarized sensors, 67
Progressive power of ophthalmic lenses, 76

Q
Quadrature demodulation, 54

R
Radial in-plane interferometer, 27
Radial shearing, 32
Re-referencing, 69
Re-wrapping, 58
Recording a simple fast event, 42
Recording interferometric events, 43
Recording of transient events, 42
Removal of environmental instabilities, 18
Rigid body movement, 11
Rigid body movements (6D pose), 9
Rigid body rotation: Euler decomposition, 22
Rotational shearing, 32

S
Sensitivity matrix, 34
Shear miscalibration in 3D, 29
Shear quantitative evaluation, 27
Shearing using DOEs and gratings, 29
Shearography, 27
Shearography for height measurement in microscopy, 32
Simplified out-of-plane ESPI optical setups, 19
Single pulse per camera frame, 46
Single-pulse time delay per camera frame, 47
Space heterodyne demodulation, 54
Spatial carrier, 15
Spatial phase shifting using carriers, 61
Spatial synchronous detection, 54
Speckle interference, 9
Speckle noise removal, 13
Speckle size, 3
Statistics of integrated speckle, 77
Statistics of resolved speckle, 77
Statistics of smooth reference & resolved speckle, 77
Statistics of speckle reference & integrated speckle, 77
Strain tensor, 36
Stress and strain, 37
Sub-nanometer in-plane and out-of-plane detection, 41
Sub-pixel position of singularities using vortices, 68
Super-high-resolution speckled wavemeter, 76
Synthetic wavelength with out-of-plane sensitivity, 22
Synthetic wavelengths for temporal phase unwrapping, 72
Synthetic wavelengths with shearography, 29

T
Temporal phase unwrapping, 49
Three sequential illuminations using a single light source, 34
Transient displacement analysis, 41
Triangular sequential illuminations, 37
Twin pulses per frame, 51
Twin pulses per two camera frames, 49
Two wavelengths with in-plane sensitivity for rotation analysis, 27

V
Vibration-resilient shearography, 29
Vortices in biological samples, 68
Vortices in microscopy, 68

W
Wavelength-scanning lateral-shearing interferometry, 29

X
X-ray phase contrast, 76

About the Author

Abundio Dávila has been a researcher at the Centro de Investigaciones en Óptica (CIO) since 1986. He received a Bachelor of Science in Physics from the Physics Department of the Universidad Autónoma de Nuevo León (UANL) and a Master of Science from the Applied Physics Department of the Centro de Investigación Científica y de Educación Superior de Ensenada (CICESE), both in México, and in 1992 a PhD from Loughborough University, UK, specializing in speckle interferometry techniques. Dr. Dávila has contributed to nondestructive optical testing techniques using pulsed and CW lasers and wavelength-scanning interferometry, and to spectrometry with tunable light sources. He was twice invited to collaborate as a Visiting Fellow and Visiting Research Associate at the Mechanical and Manufacturing Engineering Department of Loughborough University. He has given more than 42 conference presentations, holds 1 patent, and has written 40 journal articles with more than 570 citations.
