
PHOTONICS RULES OF THUMB Third Edition

John Lester Miller, Ed Friedman, Jack Sanders-Reed, Katie Schwertz, and Brian McComas

Library of Congress Cataloguing-in-Publication Data

Names: Miller, John Lester, 1959- author. | Friedman, Edward, author.
Title: Photonics rules of thumb : optics, electro-optics, fiber optics, and lasers / John Lester Miller, Ed Friedman, Jack Sanders-Reed, Katie Schwertz, Brian McComas.
Description: Third edition. | Bellingham, Washington : SPIE Press, [2020] | Revised edition of: Photonics rules of thumb : optics, electro-optics, fiber optics, and lasers / Ed Friedman, John Lester Miller. 2nd ed. c2004. | Includes bibliographical references and index.
Identifiers: LCCN 2019059788 (print) | LCCN 2019059789 (ebook) | ISBN 9781510631755 (paperback) | ISBN 9781510631762 (pdf)
Subjects: LCSH: Photonics.
Classification: LCC TA1520 .F75 2020 (print) | LCC TA1520 (ebook) | DDC 621.36/5--dc23
LC record available at https://lccn.loc.gov/2019059788
LC ebook record available at https://lccn.loc.gov/2019059789

Published by
SPIE
P.O. Box 10
Bellingham, Washington 98227-0010 USA
Phone: +1 360.676.3290
Fax: +1 360.647.1445
Email: [email protected]
Web: http://spie.org

Copyright © 2020 Society of Photo-Optical Instrumentation Engineers (SPIE)

All rights reserved. No part of this publication may be reproduced or distributed in any form or by any means without written permission of the publisher.

The content of this book reflects the work and thought of the authors. Every effort has been made to publish reliable and accurate information herein, but the publisher is not responsible for the validity of the information or for any outcomes resulting from reliance thereon.

Printed in the United States of America.
Last updated 22 May 2020

For updates to this book, visit http://spie.org and type “PM314” in the search field.

PHOTONICS RULES OF THUMB Third Edition

John Lester Miller, Ed Friedman, Jack Sanders-Reed, Katie Schwertz, and Brian McComas

SPIE PRESS Bellingham, Washington USA

Contents

Preface
Acknowledgements

1 Astronomy
  Introduction
  Blackbody Temperature of the Sun
  Number of Stars as a Function of Wavelength
  A Simple Model of Stellar Populations
  Number of Infrared Sources per Square Degree
  Number of Infrared Stars above a Given Radiance
  Direct Lunar Radiance
  Atmospheric Seeing
  Comparison of Resonant Fluorescence and Rayleigh Guide Stars
  Number of Actuators in an Adaptive Optic
  Bandwidth Requirement for Adaptive Optics
  Photon Rate at a Focal Plane
  Reduction of Magnitude by Airmass
  Night-Sky Exposure Time with a Fixed Camera

2 Atmospherics
  Introduction
  Vertical Profiles of Atmospheric Parameters
  Visibility Distance for Rayleigh and Mie Scattering
  Atmospheric Effects at 10.6 Microns
  Cn2 Estimates
  Cn2 as a Function of Weather
  Impact of Weather on Visibility
  Bufton Vertical Profile of Wind Speed
  Index of Refraction of Air
  Fried Parameter
  Horizontal-Path Fried Parameter
  Phase Error Estimation
  Day vs. Night Scintillation Models for Laser Beams
  Resolution Looking Down
  Isoplanatic Angle
  Strehl Ratio of the Atmosphere
  Aperture Averaging
  Adaptive Optics Influence Function
  Shack–Hartmann Noise
  Laser Beam Wander Variance is Approximately Proportional to the Cube of the Pathlength
  Pulse Stretching in Scattering Environments
  Optimal Truncation of a Gaussian Beam Propagating in the Atmosphere
  Increased Requirement for Rangefinder SNR to Overcome Atmospheric Effects
  Free-Space Link Margins
  Summary of Phase Modulators for Adaptive Optics
  Telescope Seeing Created by a Dome Floor
  Telescope Seeing Due to Still or Ventilated Air

3 Acquisition, Tracking, and Pointing
  Introduction
  Correct Measure of Detection Performance
  Tracker vs. Detection
  Detection Criteria
  Signal-to-Noise Ratio Requirements
  Psychometric Function
  Optical Blur Should Be Oversampled by FPA Pixels (Don’t Overdo It!)
  Dwell in Cell
  Probability of Detection Estimation
  Limits of Position Estimation
  Multisensor Tracking
  Johnson Criteria
  Extension of the Johnson Criteria to Other than 50 Percent
  Identification and Recognition Improvement for Interpolation
  Resolution Requirement
  Resolution Required to Read a Letter
  Detection Nomograph
  Correcting for Probability of Chance
  National Image Interpretability Rating Scale

4 Backgrounds
  Introduction
  Clutter and Signal-to-Clutter Ratio
  Clutter Power Spectral Density
  Infrared Clutter Behavior
  Frame Differencing Gain
  Earth’s Emission and Reflection
  Illuminance at Earth’s Surface from Various Sources
  Illuminance Changes during Twilight
  Emissivity Approximations
  Reflectivity of a Wet Surface
  Effective Sky Temperature
  Sky Irradiance
  Zodiacal Light
  Backgrounds from Asteroids

5 Cost and Economics
  Introduction
  Moore’s Law
  Metcalfe’s Law
  Englebart’s Law
  The Value of Early Investment
  Cost Reduction Techniques
  Learning Curves
  Learning Curves for Optics
  Optics Cost
  Cost Function of a Lens
  Tolerance Cost Examples
  Price of a Custom vs. Off-the-Shelf Optic
  Telescope Component Costs
  Impact of Tolerances on the Cost of Optics
  Stahl Segmented Cost Rule
  Tolerancing Guidelines for Glass Spherical Optics
  Tolerancing Guidelines for Plastic Optics
  Cost of Digital Image vs. Film
  Small Pixels Reduce Cost
  System Percentage Costs
  Length of a Job Search
  Photolithography Yield

6 Degraded Visual Environments
  Introduction
  Basic Attenuation and Visibility: Beer’s Law
  Atmospheric Attenuation Curves
  Atmospheric Visibility Curves
  Attenuation vs. Particle Size
  Visibility in Smoke
  Equations for Empirical Visibility
  Penetration vs. Resolution
  Mandatory Mitigation for Sensor Blindness
  Deep Wells Are Good

7 Focal Plane Arrays
  Introduction
  Infrared Detector Characterization
  Responsivity and Quantum Efficiency
  ROIC Charge Capacity
  Low Quantum Efficiency Detectors Are Useful
  Silicon Quantum Efficiency
  HgCdTe x Concentration
  Quantum Dot Fundamentals Diverge from Conventional Detectors
  Avalanche Photodiode Performance
  Responsivity of Avalanche Photodiodes
  Peak vs. Cut-off
  CMOS Depletion Scaling
  Focal Plane Array Noise Sources
  Rule 07
  Law 19
  Radiative Estimate of Dark Current
  Defining Background-Limited Performance for Focal Planes
  The Concepts of D and D*
  Ideal D* and View Angle
  Dependence on R0A
  Shot Noise Rule
  Infrared Detector DC Pedestal
  Digitizer Sizing
  Noise as a Function of Temperature
  Noise Bandwidth of Detectors
  Noise Equations for CMOS
  Specifying 1/f Noise
  Nonuniformity Effects on SNR
  Correlated Double Sampling

8 Human Vision
  Introduction
  Retinal Illumination
  Diffraction Effects in the Human Eye
  Energy Flow into the Eye
  Pupil Size
  Quantum Efficiency of Cones
  Rod and Cone Response
  Cone Density
  Rod Density
  Peak Eye Resolution
  Optical Fields of View
  Contrast Performance
  Simplified Optical Transfer Functions for Eye Components
  Eye Motion during the Formation of an Image
  Visual Performance as a Function of Age
  Old-Age Rules
  Superposition of Colors
  Dyschromatopic Vision
  Eye Adaptation Time
  Eat Your Vegetables
  Stereograph Distance
  Assorted Eye Facts
  Head-Mounted-Display Latency

9 Lasers
  Introduction
  Lidar Basic Equations
  Laser Brightness
  Laser Beam Quality
  Gaussian Beam Radius
  On-Axis Intensity of a Beam
  Aperture Size for Laser Beams
  Laser Beam Divergence
  Laser Beam Spread vs. Diffraction
  Types of Lidars
  Laser Radar Range Equation
  Lidar Bidirectional Reflectance Distribution Function
  Thermal Focusing in Laser Rods
  Cross-Section of a Retroreflector
  Air Breakdown

10 Materials and Structures
  Introduction
  Diameter-to-Thickness (Aspect) Ratio
  The Influence of the Mounting Method on Plate Deflection
  Self-Weight Deflection of Mounted Mirrors
  Mirror Support Criteria
  Fundamental Frequency of a Vibrating Plate
  Fundamental Frequency of a Deformable Mirror
  Design Guidelines for Pressure Windows
  Dome Collapse Pressure
  Glass Does Not Flow
  Allowable Stress in an Optic
  Relationship between Tensile and Compressive Stress
  Estimation of Preload Torque
  Stress Birefringence Induced by an Applied Load
  Maximum Stress on an Optic Due to a Metal Retainer
  A Bonded Mirror Is Three Times More Stable in Tension or Compression than Shear
  Mechanical Stability Rules
  Mass Is Proportional to the Element Size Cubed
  Deflection of a Mirror at the End of a Beam
  Scan Mirror Deflection
  Figure Change of Metal Mirrors
  Foam Core Mirrors
  Spin-Cast Mirrors
  Serrurier Truss
  Spacecraft Issues Related to Space Optics
  Damage Mechanisms Associated with Micrometeoroids
  Black Coatings
  Index of Refraction Resources
  Carbon-Silicon Carbide Coefficient of Thermal Expansion
  Properties of Aluminum as a Function of Temperature
  Permeability of Gases through Thin Films of Aluminum
  Time to Uniform Temperature
  Temperature Dependence of the Verdet Constant
  Modeling Cryo Multilayer Insulation

11 Miscellaneous
  Introduction
  Position of the Sun
  Distance to Horizon
  Contrast
  Digital Pixel Equivalent of Chemical Film
  Common Image and Video Compression Formats
  The Power of Dimensional Analysis
  Scissor Integration
  Number of Seconds in a Year
  Solid Angles
  Speed of Light
  Water Weighs a Ton
  Avoid Galling Metal
  Failure of Cylinders and Spheres under External Pressure
  Defining Screw Threads
  Friction-Induced Pointing Error after Rate Reversal
  Shipping Environments
  Clean Room Classifications
  Converting Resolution
  Safety Factors for Optics
  Use Speckle to Focus
  90 Percent of Anything Is Plumbing
  Arrhenius Equation
  Miller’s Rule of Test Failure
  Cooling with Liquid or Solid Cryogen
  Joule–Thomson Cool-Down Time
  Low-Earth-Orbit Thermal Changes
  Crickets as Thermometers
  Image Intensifier Resolution
  Photomultiplier Tube Power Supply Noise
  Quantization Error

12 Ocean Optics
  Introduction
  Index of Refraction of Seawater
  Absorption Coefficient
  Absorption of Ice at 532 nm
  Absorption Caused by Chlorophyll a
  Bathymetry
  f-Stop under Water
  Underwater Detection
  Underwater Glow
  Ocean Reflectance
  Wave Slope

13 Optical Design and Analysis
  Introduction
  Small-Angle Approximation
  Effects from Light Passing Through a Plane Parallel Plate
  Beam Deviation Due to a Thin Wedge Prism
  Impacts of Optical Element Motion
  Defocus for a Telescope Focused at Infinity
  Hyperfocal Distance
  Focal Length and Field of View
  Limit on FOV for Reflective Telescopes
  Maximum Useful Pupil Diameter
  Minimum f-Number
  f-Number for Circular Obscured Apertures
  Light Refraction in a Cube
  Aberration Scaling
  Spherical Aberration and f-Number
  Blur vs. Field-Dependent Aberrations
  Reducing Optical Design Sensitivities
  Separate the Centers of Curvature
  Reduce the Ray Angles of Incidence
  Efficient Reflective Triplet Layout for Feasibility Checks
  Efficient Three-Mirror Anastigmat Layout for Feasibility Checks
  Properties of Visible Glass
  Per Pixel Resolution of a Spectrometer
  Smith’s Modern Optical Engineering Design Rules of Thumb
  Diffraction Graph
  Diffraction Is Proportional to Perimeter
  Diffraction Principles Derived from the Uncertainty Principle
  Diffraction Spikes
  Estimating Surface Scatter
  Power Spectral Density of Surface Roughness
  In-Field Source Contribution to Stray Light in the Focal Plane
  Fest’s Stray Light Rules of Thumb
  Performance Budgeting Using the Hopkins Ratio
  Linear Approximation for the Optical Modulation Transfer Function
  Strehl Ratio and Telescope Aberrations
  Strehl for Obscured Apertures
  Total Error Using the Root-Sum-Squared Approach
  Optical Performance of a Telescope
  Visible Imaging System Resolution
  Optimal Telescope Resolution for the Human Eye
  Peak-to-Valley Approximates Four Times the Root Mean Square
  Ritchey–Chretien Telescope Aberrations
  Spectral Bandwidth and Resolution of Acousto-optical Tunable Filters
  Circular Variable Filters
  Blazed Grating Performance
  Fabry–Pérot Etalons
  Pulse Broadening in a Fabry–Pérot Etalon
  Hollow Waveguides
  Inflated Mirrors
  Handheld Binocular Efficiency
  Stop Down Two Stops
  Anti-reflection Coating Index
  Coating Shift with Angle of Incidence
  Coating Shift with Temperature
  Grating Blockers
  Far-Field Model of a Light Source, or the “Rule of 5”
  Lambertian Source Illumination of a Detector
  Detecting Linear Polarization
  Modeling an Optical System Using the Fast Fourier Transform
  A Collection of Optical Engineering Rules of Thumb
  Use a “Pencil Bounce” to Determine Image Orientation
  Thermal Gradients in a Primary Mirror
  Thermal Lensing

14 Optical Manufacture and Test
  Introduction
  Progress in the Fabrication of High-Quality Mirrors
  Caution while Cleaning Optics
  Thickness of a Doublet Bond
  Sag of an Optic
  Scratch-Dig Specifications
  Oversizing an Optical Element for Producibility
  Mind Your Karow Factor
  Cyanoacrylate Usage
  Surface Tilt Is Typically the Worst Error
  Diamond-Turned Mirror Substrate Design
  Diamond-Turned Mirror Figure Error
  Surface Figure and Wavefront Error Requirements
  Determining if a Surface Is Convex or Concave Using Fringe Movement
  Approximations for the Foucault Knife-Edge Test
  Effect of Surface Irregularity on a Wavefront
  Hartmann Test for Pinhole Size
  Maximum Size of the Opening for an Integrating Sphere
  Choosing an Eyepiece for Star Tests
  Collimator Design
  Temperature Control Is Critical for Accurate Inhomogeneity Testing
  Optical Table Displacement Requirements
  Detection of Flatness by the Human Eye
  Lesser-Known Lab Tools
  Yoder’s Rules of Thumb

15 Photogrammetry
  Introduction
  Basic Optical Equations
  Triangulation vs. Model-Based Pose Estimation
  Stereo vs. Triangulation
  Basic Triangulation Equations
  Two-Camera Triangulation Accuracy
  Triangulation Error Tree
  Sensor Placement for Triangulation
  Maximum Triangulation Range
  Triangulation Equations for Cameras
  Azimuth Corrections for Euclidean Coordinate Systems
  Model-Based Pose Estimation: Number of Points and Spatial Distribution
  From What Point in an Optical System Is Range Measured?

16 Radiometry
  Introduction
  The Electromagnetic Spectrum
  Photons-to-Watts Conversion
  Brightness of Common Sources
  The Blackbody Equation
  Logarithmic Blackbody Functions
  Narrow-band Approximation to Planck’s Law
  Peak Wavelength of Wien’s Displacement Law
  Choice of Waveband
  Lambert’s Law
  Etendue
  In-Band Solar Irradiance at the Top of the Atmosphere
  Rule of 4(f/#)2
  Relationship between Minimum Resolvable Temperature and Noise-Equivalent Temperature Difference
  Ideal NET Simplification
  Cavity Emissivity
  Incorrectly Sizing Radiometric Areas
  No Ice Cream Cones
  Calibrate under User’s Conditions for Best Results
  Radiometry of a Spherical Cow
  Quick Test of NET

17 Systems
  Introduction
  Pick Any Two
  Divide by the Number of Visits
  Dawes Limit
  BLIP Limiting Rule
  Rayleigh Criterion
  Focal Length and Resolution
  Diffraction Limit in LWIR
  Procedures to Reduce Narcissus in Infrared Systems
  System Off-Axis Optical Rejection
  Signal-to-Noise Ratio for Different Targets
  Simplified Range Equation
  General Image Quality Equation
  Mechanical Shock Response
  Estimating rms Acceleration Response due to Random Vibrations
  Typical Values of Electro-optical System Parameters
  Vibration Isolation
  Wind Loading

18 Target Phenomenology
  Introduction
  Emissivity Approximations
  Solar Reflection Always Adds to the Signature
  Lambertian vs. Specular
  Bidirectional Reflectance Distribution Function
  Hagen–Rubens Relationship for the Infrared Reflectivity of Metals
  Causes of White Pigment Color
  Human Body Signature
  Infrared Skin Characteristics
  Jet Plume Phenomenology
  Plume Thrust Scaling
  Rocket Plume Rules
  Temperature as a Function of Aerodynamic Heating
  Laser Cross-Section
  Chlorophyll Absorptance
  Normalized Difference Vegetation Index

Appendix
  Glossary
  Tables of Numerical, Physical, and Material Properties (or Other Information)
  Properties of Infrared Materials
  Thermal and Structural Properties of Materials
  CIE Chromaticity Diagram
  Basic Equations
  Blackbody Nomograph
  Nomograph of Telescope Performance
  Azimuth and Elevation Conventions
  Photonic Noise Sources
  Guidelines for Writing SI Units
  Derivation of the Third Equation in the Rule “The Relation of Ideal D* to View Angle”

Index
Author Biographies

With the stroke of an eraser, Fraunhofer made diffraction calculations tractable (reprinted with permission from Brigham Young University).

 

Preface

The evolution of the photonic sciences parallels, and feeds from, developments in a number of somewhat unrelated fields, including astronomy, satellite and remote sensing technology, materials science, electronics, biomedical sciences, optical communications, military developments, and many others. The common thread of all of this effort, which was defined in the 1950s, is that scientists and engineers have been able to combine highly successful electronic technologies with the more ancient concepts and methods of optics and electromagnetic wave propagation. The merging of these fields has provided an unprecedented capability for instruments to “see” targets and communicate with them in a wide range of wavelengths for the benefit of security systems, science, defense, and (more recently) consumers. In the future, we see the rise of autonomous systems as a sea change that will drive a significant increase in the need for sensing systems to allow the autonomous system to sense and understand its environment. Major departments at universities are now devoted to producing new graduates with specialties in this field. There is no end in sight for the advancement of these technologies, especially with the continued development of electronics and computing as increasingly integral parts of photonic instrumentation.

One of the disturbing trends in this technology is the constant narrowing of the role of engineers. As the technology matures, it becomes more difficult for anyone working in an area of photonics to understand all that is being done in the related sciences and engineering. This book has been assembled to make a first, small step to expose anyone working in the optics and photonics community to a wide range of critical topics through simple calculations and explanations. There is no intent to compete with classic texts or the many journals or conferences devoted to the photonics field, all of which provide considerable detail in every area. Rather, this book is intended to allow any engineer or scientist, regardless of specialty, to make rapid and accurate estimations on a wide range of topics that might be encountered in system design, modeling, or fabrication, as well as to provide a guide for choosing which details to consider more diligently.

This book will help any electro-optics (EO) team to make quick assessments, generally requiring no more than a calculator, so that they quickly find the right solution for a design problem. The book is also useful for managers, marketeers, and other semi-technical folks who are new to the optics industry (or are on its periphery) to develop a feel for the difference between the chimerical and the real. Students may find the same type of quick-calculation approach valuable, particularly in the case of oral exams in which the professor is pressuring the student to do a complex problem quickly. Using these assembled rules, you can keep your wits about you and provide an immediate and nearly correct answer,

 


which usually will save the day. But after the day is saved, you should go back to the question and perform a rigorous analysis.

These rules are useful for quick sanity checks and basic relationships. Being familiar with the rules allows one to rapidly pinpoint trouble areas or ask probing questions in meetings. They aid in thinking on your feet and in developing a sense of what will work and what won’t. Another potential application of the contents is to provide a checklist for reviewers asked to assess the completeness of a design or resolve trade studies early in the development of a system. But they are not, and never will be, the last word.

Dear reader, it is fully recognized that errors may still be present, and for that we apologize in advance to you and to those from whom the material was derived. We try to the best of our abilities to remove errors we inherit from the references we used and in new material we created. To assist us in this endeavor, we have solicited the cooperation of as many experts as would agree to help. Their input gives us a wide variety of experience from many different technical points of view. Alas, technology advances, and all of us wonder how we can possibly keep up. Hopefully, this book will not only provide some specific ideas related to photonics technology, it will also suggest some ways of thinking about things that will lead to a whole new generation of such rules and ideas.

As we discovered with the previous editions of this book, not everyone has the same thing in mind when considering “a rule of thumb.” To qualify for our definition of a rule of thumb, a rule should be useful to a practitioner and possess at least most of the following attributes:

• It should be easy to implement.
• It should provide roughly the correct answer.
• The main points should be easy to remember.
• It should be simple to express.
• It should highlight the important variables and diminish the role of generally unimportant variables.
• It should provide useful insight to the workings of the subject matter.

As in earlier editions of books in this series, we found it valuable to create a detailed standard form and stick to it as closely as possible. We did in this edition as well. References are provided whenever possible. In addition, reference material is mentioned that can be considered as recommended reading for the reader with a desire for more detail than could be presented in the “rule” and “discussion.” The reader should note that each rule “stands on its own,” so the abbreviations and terminology may not be entirely consistent throughout. This is intentional; we use the notation of the reference whenever we can so that if you read the original material you will recognize what that author defined. Some rules are derived from the laws of physics, and some represent existing technology trends. Many derive from observations made by researchers in the field, augmented by curve fitting that results in polynomial approximations.

The authors of the previous versions of this book arrived at the same place by very different paths. John (now retired, but busy consulting) spent some of his career in infrared astronomy before joining the aerospace industry to work on infrared sensors for space surveillance. He later worked on tactical sensors for search-and-rescue, self-driving cars, active imaging and enhanced vision systems. Ed (now retired, but very busy teaching) spent


most of his career working on remote sensing technologies applied to Earth, its atmosphere and oceans, and, more recently, astronomical instruments and advanced pointing systems.

John and Ed met in Denver in 1985, both working for a major government contractor on exotic electro-optical systems. Those were halcyon days, with money flowing like water, and contractors winning billions of dollars for some concepts that were overly optimistic or barely possible at best. In the center of the whole fray were bureaucrats, politicians, and managers who were demanding that we design systems that would be capable of the impossible. We saw many requirements and goals being levied on our systems that were far from realistic, often resulting from confusing (and poorly understood) interpretations of the capabilities of optical and electro-optical systems and the properties of targets or backgrounds. We found a common ground when managers discovered that many co-workers, in an attempt to outdo the competition, were promising to perform sensor demonstrations that violated many rules of engineering, if not physics.

On one multibillion-dollar program, after some consistent exposure to neophytes proposing all sorts of undoable things, we decided to try to educate everyone by creating a half-serious, half-humorous posting for the local bulletin board (this was before websites were ubiquitous) called “Dr. Photon’s Rules of Thumb.” It was a list of basic rules that apply when optics or electro-optics are used. Figure 1 shows the first version that we found all across the company, and even among competitors.

Figure 1 The original Rules of Thumb that started this book.


For the current version, John and Ed invited (maybe some would say tricked) three other younger authors to modernize, enhance, and yield a new perspective. These luminaries are Jack Sanders-Reed, Brian McComas, and Katie Schwertz, whose bios appear in the back matter of this book. Katie is a hybrid optical and optomechanical designer, drawing from her time at both the University of Rochester and University of Arizona optics programs. She primarily works on commercial and industrial optical subsystems. Her time spent under the tutelage of Jim Burge in Arizona taught her the value of a good estimation, which she didn’t fully appreciate until her time spent in industry. Her Master’s work involved optomechanics “rules of thumb.” Jack has 40 years of experience ranging from medical imaging, to surface science, to pilot vision systems and atmospheric phenomenology, to target detection, tracking, and photogrammetry, covering the electromagnetic spectrum from hard x-ray through visible and infrared to millimeter-wave with both passive and active imaging. Brian has over 30 years of experience working on EO systems for military, astronomical, remote sensing, and industrial use. His Ph.D. was developed under the direction of Ed Friedman.

To summarize, this collection of rules and concepts represents an evolving, idiosyncratic, and eclectic toolbox. The rules, like tools, are neither good nor bad; they can be used to facilitate the transition of whimsical concepts to mature hardware or to immediately identify a technological path worth pursuing. Conversely, misused, they can obfuscate the truth and, if improperly applied, derive incorrect answers. Our job was to refine complex ideas to simplified concepts and present these rules to you with appropriate cautions. However, it is your responsibility to use them correctly. Remember, it is a poor worker who blames their tools, and we hope you will find these tools useful.

John Lester Miller
Ed Friedman
Jack Sanders-Reed
Katie Schwertz
Brian McComas

2020

Acknowledgments

The authors would like to acknowledge the help given by the following people:

• James Contreras
• Eric Fest
• Paris and Sophie Kiehl
• Scott McNeill
• Peter Scott
• Charles Silver
• Paul Spyak
• Dave Strong
• William Sundermeier
• William Tennant
• Dan Vukobratovich
• Lucy Zhang

We are also thankful to Dave Mudrick, who contributed the cartoons (except for one). His website is tomduckandharry.com/home.


 

         

 

Chapter 1

Astronomy

Introduction

This chapter contains a selection of rules specifically involving the intersection of astronomy and electro-optics (EO). Sensors frequently look upward, so astronomical objects often define the background for many systems. Moreover, many sensors are specifically designed to detect heavenly bodies, so astronomical relationships define the targets for many sensors.

Over the past few hundred years, astronomy has driven advances in optics, and then in photonics and optics in merged systems for astronomy. These disciplines have been as interwoven as DNA strands. Frequently, key discoveries in astronomy are impossible until photonic technology develops to a new level. Conversely, photonic development has often been funded and refined by the astronomical sciences as well as the military. Military interests have been an important source of new technology that has furthered the application of EO in astronomy. The authors contend that one of the most important contributions of the Strategic Defense Initiative (SDI) and its subsequent versions was the advancement of certain photonic technologies that currently benefit astronomers, e.g., space-based telescopes, adaptive optics, synthetic guide stars, large and sensitive focal planes, advanced materials for astronomical telescopes, new methods of image stabilization, and advanced computers and algorithms for interpreting images distorted by atmospheric effects.

Recent developments in the size and technology of ground-based telescopes allow them to compete with space-based observation capabilities (except in spectral regions where the atmosphere strongly absorbs or scatters). The dual Keck 10-m telescopes represent an amazing electro-optical engineering achievement that is being surpassed in size by several telescopes. Some initial compromises in the field of view (FOV) over which atmospheric correction can be obtained are expected, but work continues to overcome these issues.1 By employing segmented lightweight mirrors and lightweight structures, and by adjusting the mirrors in real time, many of the past notions and operating paradigms of both ground-based and space-based telescopes have been discarded. This new emphasis on novel technology applied to Earth- and space-based telescopes represents a major addition to the astronomical community’s toolbox and a shift in the electro-optical and astronomical communities’ perceptions.

In the near future, these high-technology telescopes, coupled with advanced precision instruments, will provide astronomers with new tools to make new and wondrous discoveries. There is no inherent reason why the technologies used in ground telescopes

 


cannot be used in space; in fact, the next generation of science telescopes will feature these advances. For example, the James Webb Space Telescope will have a segmented, actuated primary mirror. In honor of the important role that adaptive optics now plays in ground-based astronomy and will soon play in space-based astronomy, we have included a number of rules on that topic.

Some of the other chapters overlap with this one. In particular, the chapter on atmospheric effects is particularly important to the astronomy community; the reader would benefit from reviewing its contents and possible applications to astronomical issues.

Reference
1. The E-ELT Construction Proposal, www.eso.org/public/archives/books/pdf/book_0046.pdf (2020).

Blackbody Temperature of the Sun

Consider the Sun to be a 6000-K blackbody.

Discussion

The Sun is a complex system. It is a main-sequence star (G2) of middle age. Almost all of the light we see is emitted from a thin layer at the surface. The nuclear fusion that created the energy that powered the emitted photon occurred 300,000 to a million years earlier! Temperatures can reach over 10 million degrees in the center. However, at the surface, the temperature is usually under 6000 K, with the best fit to a blackbody curve usually at 5770 K. The Sun appears as a blackbody with a temperature from about 5750–6100 K, depending on the wavelength of observation chosen for matching the various candidate blackbody curves. At shorter-than-visible wavelengths, the Sun appears somewhat brighter than the temperatures shown above.

The Sun (and all other stars) has absorption and emission lines, so using a blackbody approximation is valid only for broad band passes. The absorption lines are well documented and can be found in collections of Fraunhofer line shapes and depths.1–3 The lines result from various metals in the outer atmosphere of the Sun, with iron, calcium, and sodium causing some of the most prominent lines. Earth’s atmosphere strongly absorbs some wavelengths, so solar radiation that reaches the surface may not resemble a blackbody spectrum for those wavelength bands. The above is from a general curve fit for wide band passes, disregarding atmospheric absorption.

A star’s blackbody temperature based on spectral type is (to a first order) approximately

B: 27000 K
A: 9900 K
F: 7000 K
G: 5900 K
K: 5200 K
M: 3800 K
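These temperatures map directly to peak emission wavelengths through Wien’s displacement law (λpeak ≈ 2898 μm·K / T). The short script below is our illustrative addition, not from the original text:

```python
# Peak blackbody emission wavelength for each spectral class (Wien's law).
WIEN_CONSTANT_UM_K = 2898.0  # micron-kelvin

star_temps_k = {"B": 27000, "A": 9900, "F": 7000, "G": 5900, "K": 5200, "M": 3800}

for spectral_class, temp_k in star_temps_k.items():
    peak_um = WIEN_CONSTANT_UM_K / temp_k
    print(f"{spectral_class}: T = {temp_k} K, peak emission ~ {peak_um:.2f} micron")

# A 5770-K Sun peaks near 0.50 micron, consistent with the visible-band
# atmospheric window discussed in this rule.
```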


It is likely an accident that the peak of the Sun’s radiation is well matched to a major transmission window of the atmosphere. On the other hand, it is no accident that the peak performance of the human vision system is well matched to the solar radiation that reaches the ground. Evolution of visual systems has ensured that performance is best around 555 nm. Due to the absorption properties of the atmosphere, the Sun deviates significantly from blackbody properties when seen from the ground.

References
1. E. A. Gurtovenko and V. A. Sheminova, “Formation depths of Fraunhofer lines,” arxiv.org/pdf/1505.00975.pdf (5 May 2015).
2. E. Hecht, Optics, 2nd Edition, p. 234, Addison Wesley (1987).
3. Wikipedia, “Fraunhofer lines,” en.wikipedia.org/wiki/Fraunhofer_lines (2020).

Number of Stars as a Function of Wavelength

For a given sensitivity at visible wavelengths and beyond, the longer the wavelength is, the fewer stars can be sensed. The falloff in the number of stars is approximated by the following:

#S2 ≈ #S1 × 10^(–0.4R),

where #S2 is the number of stars at wavelength λ2 (λ2 larger than λ1) at a given irradiance, R is the ratio of one wavelength to another (λ2/λ1), and #S1 is the number of stars at wavelength λ1.

Discussion

This rule is based on curve-fitting empirical data by the authors. It is useful for separate narrow bands, from about 0.7–15.0 microns. Generally, this provides accuracy to within a factor of 2. Most stars are like our Sun and radiate most of their energy in the visible part of the spectrum. As the wavelength increases, there are fewer stars, because the Planck function is dropping for the stars that peak in the visible, and fewer stars have peak radiant output at longer wavelengths.

A Simple Model of Stellar Populations

The total number of stars above a given visual magnitude MV can be estimated from a numerical model derived from measured data:1

#S ≈ 11.84 × 10^(0.4204 MV),

where #S is the approximate number over the whole sky.

Discussion

This simple rule is accurate to within a factor of 3 between magnitudes 0 and 18.5. It provides a good match—no worse than a factor of 5 for most magnitudes. A reminder,


however, is appropriate about the difference between visual and absolute magnitudes. The definition of the relationship between the two is quite simple. The absolute magnitude is the magnitude that the star would exhibit at a distance of 10 parsecs (about 33 light years). Two stars can be compared in apparent magnitude by

m1 – m2 = 2.5 log(d1²/d2²),

where d1 and d2 are the distances of stars 1 and 2, respectively. Therefore, using 10 parsecs for d2, the formula becomes

m1 – m2 = 2.5 log(d1²/d2²) → m – M = 5 log d1 – 5,

where M indicates a measure of absolute magnitude. d1 and d2 are measured in parsecs. The one value to remember is that in the astronomical V band,2 a magnitude-zero star has a photon flux very close to 10⁷ photons/cm²/sec/micron. This can be easily derived from the number in Fig. 1.1 (Refs. 1, 3) for the V band because the energy in a photon is hc/λ. The properties of other bands can be found in the Appendix. Tables 1.1 and 1.2 provide additional details about stellar populations and their radiometric properties.

 

Figure 1.1 Data is shown for three band passes (Bp, G, and Rp) of the GAIA astrometric spacecraft.1 The heavy line depicts the equation in the rule.


Table 1.1 Estimated total number of stars brighter than a given magnitude.1

Visual magnitude   Bp (330–680 nm)   G (330–1050 nm)   Rp (630–1050 nm)
2.0–2.5            0.00E+00          3.80E+01          9.40E+01
2.5–3.0            0.00E+00          1.15E+02          3.08E+02
3.0–3.5            4.40E+01          2.99E+02          6.53E+02
3.5–4.0            2.52E+02          6.19E+02          1.25E+03
4.0–4.5            6.16E+02          1.18E+03          2.34E+03
4.5–5.0            1.21E+03          2.17E+03          4.32E+03
5.0–5.5            2.31E+03          3.83E+03          7.90E+03
5.5–6.0            4.10E+03          6.75E+03          1.43E+04
6.0–6.5            7.28E+03          1.22E+04          2.49E+04
6.5–7.0            1.29E+04          2.15E+04          4.31E+04
7.0–7.5            2.22E+04          3.70E+04          7.37E+04
7.5–8.0            3.81E+04          6.34E+04          1.24E+05
8.0–8.5            6.48E+04          1.07E+05          2.07E+05
8.5–9.0            1.09E+05          1.79E+05          3.42E+05
9.0–9.5            1.81E+05          2.96E+05          5.60E+05
9.5–10.0           2.97E+05          4.85E+05          9.08E+05
10.0–10.5          4.84E+05          7.81E+05          1.45E+06
10.5–11.0          7.77E+05          1.25E+06          2.29E+06
11.0–11.5          1.23E+06          1.96E+06          3.60E+06
11.5–12.0          1.93E+06          3.06E+06          5.61E+06
12.0–12.5          3.00E+06          4.74E+06          8.67E+06
12.5–13.0          4.61E+06          7.28E+06          1.33E+07
13.0–13.5          7.03E+06          1.11E+07          2.01E+07
13.5–14.0          1.06E+07          1.67E+07          3.00E+07
14.0–14.5          1.58E+07          2.48E+07          4.46E+07
14.5–15.0          2.33E+07          3.66E+07          6.59E+07
15.0–15.5          3.39E+07          5.33E+07          9.65E+07
15.5–16.0          4.91E+07          7.73E+07          1.39E+08
16.0–16.5          7.05E+07          1.11E+08          1.99E+08
16.5–17.0          1.00E+08          1.56E+08          2.81E+08
17.0–17.5          1.41E+08          2.18E+08          3.96E+08
17.5–18.0          1.97E+08          3.01E+08          5.49E+08
18.0–18.5          2.76E+08          4.14E+08          7.45E+08
18.5–19.0          3.84E+08          5.68E+08          9.72E+08
19.0–19.5          5.32E+08          7.48E+08          1.20E+09
19.5–20.0          7.22E+08          1.00E+09          1.35E+09
20.0–20.5          9.59E+08          1.30E+09          1.38E+09

Reference 3 provides this equation as a third-order fit to the measured data:

log(N) = 0.754 + 0.4896V + 0.001159V² – 0.000235V³,  N = 10^log(N).
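Both expressions are easy to script. The sketch below is our addition; the signs in the cubic were reconstructed from a damaged scan and checked against Table 1.1, so treat them as approximate:

```python
import math

def stars_simple(mv: float) -> float:
    """Whole-sky count of stars brighter than visual magnitude mv (simple rule)."""
    return 11.84 * 10 ** (0.4204 * mv)

def stars_cubic_fit(v: float) -> float:
    """Third-order fit to the measured counts (coefficients as given above)."""
    log_n = 0.754 + 0.4896 * v + 0.001159 * v**2 - 0.000235 * v**3
    return 10 ** log_n

def absolute_magnitude(apparent_m: float, distance_pc: float) -> float:
    """Absolute magnitude from m - M = 5 log10(d) - 5, with d in parsecs."""
    return apparent_m - 5.0 * math.log10(distance_pc) + 5.0

# The two models agree within the stated factor-of-3 accuracy:
for mv in (5, 10, 15, 18.5):
    print(mv, f"{stars_simple(mv):.2e}", f"{stars_cubic_fit(mv):.2e}")
```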


Table 1.2 Radiometric properties of magnitude-zero stars in several bands. This data is widely available in astronomy and telescope texts, e.g., p. 166 of Zissis.1

References
1. gea.esac.esa.int/archive/visualization (2020).
2. G. Zissis, Sources of Radiation, p. 169, Vol. 1 of The Infrared and Electro-Optical Systems Handbook, J. S. Accetta and D. L. Shumaker, Eds., Infrared Information Analysis Center and SPIE Press (1993).
3. www.aa.quae.nl/en/antwoorden/magnituden.html (2019).
4. “Photometric system,” en.wikipedia.org/wiki/Photometric_system (2020).

Number of Infrared Sources per Square Degree

The number of infrared sources Ns brighter than the irradiance at wavelength λ per square degree is1

log Ns[λ,s(b)] ≈ log[A(b,l)] + B(b,l) log[E12{λ,s(b)}],

where E12{λ,s(b)} is the equivalent spectral irradiance at 12 microns producing N{λ,s(b)} sources per square degree, in janskys; s(b) is the spectral index, defined as the ratio of the 12-micron spectral irradiance to the 25-micron spectral radiance that produced the same source count N [as a function of galactic latitude, the spectral index is s(b) = –0.22 – 1.38(1.0 – exp(–b/15))]; b is the galactic latitude in degrees, 0° ≤ b ≤ 90°; l is the galactic longitude in degrees, 0° ≤ l ≤ 180° (0° is in the direction of Sagittarius); log[A(b,l)] is

0.000488l + 0.78 + (0.000061l² + 0.02082l + 3.0214)/[1 + (b/12)^1.4];


and B(b,l) is

(0.00978l + 0.88){1.0 – exp[–b/(8.0 + 0.05l)]} – (0.00978l + 1.8)

for 0° ≤ l ≤ 90°. For l > 90°, B is –0.92.

Discussion

The IR sky is rich with astronomical sources. This rule provides an excellent match to the distribution of sources found in the archive developed by the Infrared Astronomical Satellite (IRAS) spacecraft. The first equation in this rule works well from wavelengths of about 2–40 microns. The largest uncertainty exists in the approximate longitude range of 0–90° and 270–360° for galactic latitudes within ±3° of the galactic equator.

The jansky unit deserves some attention.3 This term has its genesis in radio astronomy but is finding wide use in IR astronomy. A jansky is defined as 10⁻²⁶ watts per square meter of receiving area per hertz of frequency band (W/m²/Hz) and is named for the pioneer radio astronomer Karl Jansky. The following discussion shows how the conversion from typical radiant intensity to jansky units is performed. There is equivalence between the energy E expressed in either frequency or wavelength as follows:

Eν dν = Eλ dλ,  Eν = Eλ (dλ/dν).

Also, ν = c/λ, so that dλ/dν = –c/ν². This leads to

Eλ = Eν (c/λ²) × 10⁻⁶,

where the numerical factor converts from W/m²/Hz on the right side of the equation to W/m²/μm on the left side. Both c and λ are expressed in meters. From this equation, we find that a jansky at 20 microns equals about 7.5 × 10⁻¹⁵ W/m²/micron. At 0.56 micron, a jansky equals 9.6 × 10⁻¹² W/m²/micron. Equivalently, with E expressed in W/cm²/micron and λ in microns,

E [watts/(cm² micron)] = (3 × 10⁻¹⁶/λ²) × (flux in jansky).

From this equation, we find that 1.5 jansky at 12 microns equals about 3.125 × 10⁻¹⁸ W/cm²/micron. Visible-wavelength stars can also be a source of confusion and a source of wavefront control signals.
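The pieces above combine into a short script. This is our illustrative sketch; the A and B coefficients were reconstructed from a damaged scan that lost some operators, so treat the signs as approximate and spot-check against the reference before relying on them:

```python
import math

def spectral_index(b_deg: float) -> float:
    """Spectral index s(b) versus galactic latitude."""
    return -0.22 - 1.38 * (1.0 - math.exp(-b_deg / 15.0))

def log_a(b_deg: float, l_deg: float) -> float:
    """log10 A(b,l), with the coefficients as reconstructed above."""
    return (0.000488 * l_deg + 0.78
            + (0.000061 * l_deg**2 + 0.02082 * l_deg + 3.0214)
            / (1.0 + (b_deg / 12.0) ** 1.4))

def b_slope(b_deg: float, l_deg: float) -> float:
    """Source-count slope B(b,l); constant -0.92 for l > 90 deg."""
    if l_deg > 90.0:
        return -0.92
    return ((0.00978 * l_deg + 0.88)
            * (1.0 - math.exp(-b_deg / (8.0 + 0.05 * l_deg)))
            - (0.00978 * l_deg + 1.8))

def ir_sources_per_sq_deg(e12_jy: float, b_deg: float, l_deg: float) -> float:
    """Sources per square degree brighter than E12 (equivalent 12-um flux, Jy)."""
    return 10 ** (log_a(b_deg, l_deg) + b_slope(b_deg, l_deg) * math.log10(e12_jy))

def jansky_to_w_cm2_um(flux_jy: float, wavelength_um: float) -> float:
    """Convert janskys to W/cm^2/micron: E = (3e-16 / lambda^2) * Jy."""
    return 3e-16 / wavelength_um**2 * flux_jy

# Check against the text: 1.5 Jy at 12 microns ~ 3.1e-18 W/cm^2/micron.
print(jansky_to_w_cm2_um(1.5, 12.0))
```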

Figure 1.2 The number of stars per square degree is approximately exponential up to magnitudes of about 18. The legend shows galactic latitude.4 (Plot: number of stars per square degree vs. visual magnitude, for galactic latitudes from 90° to –90°.)

Figure 1.3 The presence of the high density of stars near the galactic plane is easy to see in this graph. The legend is visual magnitude from the Earth with Mv = 5 as the lowest line, incrementing by 1 photographic magnitude up to Mv = 18.4 (Plot: number of stars per square degree vs. galactic latitude in degrees.)


Figures 1.2 and 1.3 illustrate the variation in the density of stars of a particular magnitude as a function of galactic latitude. The data can be matched to about a factor of 2–5 with the following expression:

N(Mv, latitude) = 6.55 × 10⁻⁴ e^(–latitude/30) e^(Mv),

where N is the number of stars per square degree with magnitude greater than Mv, and the latitude is expressed in degrees.

References
1. D. D. Kryskowski and G. Suits, “Natural Sources,” p. 179 in G. Zissis, Sources of Radiation, Vol. 1 of The Infrared and Electro-Optical Systems Handbook, J. S. Accetta and D. L. Shumaker, Eds., Infrared Information Analysis Center and SPIE Press (1993).
2. Data derived from W. Wolfe and G. Zissis, The Infrared Handbook, Environmental Research Institute of Michigan, Ann Arbor, MI (1978).
3. Reference 1, p. 177.
4. Reference 2, Table 3-6.

Number of Infrared Stars above a Given Radiance

A simple model provides a quick estimate of the number of infrared sources per square degree as a function of galactic latitude, radiance level, and wavelength. This rule complements the prior rule:

Number of infrared sources per square degree that exceed a particular value of radiance = Weighting factor × (Wavelength)⁻³.

Discussion

The Milky Way galaxy is extremely complex and includes roughly 250 billion stars. Many of these stars produce significant infrared (IR) signatures. Having a general idea of the density of IR stars can be useful for telescopes that will detect the relevant wavelengths. The prior rule (“Number of Infrared Sources per Square Degree”) provides one approach for estimating the number of infrared sources per square degree as a function of radiance, galactic latitude, and longitude. This rule averages data over galactic longitude and provides a very simple equation that involves looking up a single number (the weighting factor) and using it in the equation in the rule. Table 1.3 provides the weighting factors. The simplification of this rule is that the weighting factors apply to all wavelengths, and so the rule is simple to use. The performance of the rule has been explored and is found to match data in Zissis1 to a factor of two over a wide range of wavelengths (2–40 microns), latitudes (10–90), and radiance levels (1E-14 to 1E-20). Blank entries in the table indicate that the rule does not apply in those conditions.


Table 1.3 Weighting factors to use in the equation of this rule. Empty cells (–) indicate the limit of the range of latitude and radiance values for which the rule applies.

Radiance Level            Galactic Latitude (deg)
(W/cm²/micron)   10         20         30         45         60         90
1.00E-14         6.36E-01   4.77E-01   3.19E-01   2.23E-01   1.82E-01   1.40E-01
1.00E-15         7.56E+00   4.25E+00   2.69E+00   1.88E+00   1.53E+00   1.15E+00
1.00E-16         1.02E+02   3.95E+01   2.28E+01   1.56E+01   1.28E+01   9.57E+00
1.00E-17         –          3.64E+02   2.00E+02   1.32E+02   1.04E+02   7.93E+01
1.00E-18         –          4.75E+03   2.23E+03   1.08E+03   9.74E+02   6.76E+02
1.00E-19         –          3.02E+04   1.39E+04   9.24E+03   7.14E+03   5.52E+03
1.00E-20         –          3.01E+05   1.25E+05   7.63E+04   7.48E+04   4.61E+04

Table 1.4 Numerical experiments illustrate the performance of this rule. (*Longitudinal average of data in Zissis for a particular latitude and radiance.)

Latitude   Wavelength (microns)   Radiance    Weighting   Result    Actual Value*   Error
10         40                     1.00E-14    6.36E-01    9.9E-06   1.7E-05         42%
20         12                     1.00E-17    3.64E+02    2.1E-01   1.6E-01         28%
30         4                      1.00E-16    2.28E+01    3.6E-01   4.1E-01         13%
45         40                     1.00E-20    7.63E+04    1.2E+00   1.3E+00         5%
60         30                     1.00E-18    9.74E+02    3.6E-02   3.5E-02         2%
90         12                     1.00E-15    1.15E+00    6.6E-04   5.0E-04         33%

This rule derives from numerical analysis and curve fitting of the data in the reference. An example will help explain how to use the rule. If we want to estimate the number of sources with a radiance above 1E-16 W/cm²/micron at a galactic latitude of 45 degrees and for any wavelength, we look up the appropriate weighting factor (15.6) and do the following computation:

Number of infrared sources per square degree = 15.6 × (Wavelength)⁻³.

Table 1.4 provides example results from application of the equation in the rule.

Reference
1. G. Zissis, Sources of Radiation, p. 182, Vol. 1 of The Infrared and Electro-Optical Systems Handbook, J. S. Accetta and D. L. Shumaker, Eds., Infrared Information Analysis Center and SPIE Press (1993).
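The lookup-and-scale procedure is simple to automate. The sketch below (our addition) encodes Table 1.3 and reproduces the worked example above:

```python
# Weighting factors from Table 1.3, keyed by radiance level (W/cm^2/micron);
# columns are galactic latitudes 10, 20, 30, 45, 60, 90 degrees (None = no value).
LATITUDES = (10, 20, 30, 45, 60, 90)
WEIGHTS = {
    1e-14: (6.36e-1, 4.77e-1, 3.19e-1, 2.23e-1, 1.82e-1, 1.40e-1),
    1e-15: (7.56e0, 4.25e0, 2.69e0, 1.88e0, 1.53e0, 1.15e0),
    1e-16: (1.02e2, 3.95e1, 2.28e1, 1.56e1, 1.28e1, 9.57e0),
    1e-17: (None, 3.64e2, 2.00e2, 1.32e2, 1.04e2, 7.93e1),
    1e-18: (None, 4.75e3, 2.23e3, 1.08e3, 9.74e2, 6.76e2),
    1e-19: (None, 3.02e4, 1.39e4, 9.24e3, 7.14e3, 5.52e3),
    1e-20: (None, 3.01e5, 1.25e5, 7.63e4, 7.48e4, 4.61e4),
}

def ir_sources_per_sq_deg(radiance: float, latitude_deg: int,
                          wavelength_um: float) -> float:
    """Sources per square degree brighter than `radiance` at `wavelength_um`."""
    weight = WEIGHTS[radiance][LATITUDES.index(latitude_deg)]
    if weight is None:
        raise ValueError("Rule does not apply for this latitude/radiance.")
    return weight * wavelength_um ** -3

# Worked example from the text: latitude 45 deg, radiance 1e-16.
# At 4 microns this gives 15.6 / 64 ~ 0.24 sources per square degree.
print(ir_sources_per_sq_deg(1e-16, 45, 4.0))
```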

Direct Lunar Radiance

In the visible wavelength range, the dominant signal from the Moon is the reflectance of sunlight. This is expressed in the form of its radiance L:

Lreflected(λ) = Lbb(λ, 5900 K) Rm(λ) Ω / π,


where the equation specifically notes the approximate blackbody temperature of the Sun (5900 K). The notation Lbb(λ, 5900 K) describes the blackbody radiance of the Moon as a function of wavelength λ for a temperature of 5900 K. Rm is the reflectivity of the Moon, which has typical values of 0.1 in the visible wavelengths, 0.3 for 3–6 microns, and 0.02 for 7–15 microns.1 Ω is the solid angle formed by the Moon when viewed from Earth. In the infrared (IR), the Moon’s thermal emission must also be considered, as it can be the dominant source of photons in some bands.

Discussion

The Moon is an important (and sometimes dominant) source of radiation in the night sky. Its signature includes radiation ranging from the visible to the IR, so all types of sensors must be designed to tolerate its presence. Many sensors (such as image intensifiers, low-light-level cameras, and modern consumer cameras) can exploit this light. The light seen when viewing the Moon is the superposition of emitted radiation, reflection of solar radiation, and emission from the atmosphere:

Lmoon(λ) = τ(λ)[Lreflected(λ) + Lemitted(λ)] + Latm(λ),

where τ is the transmission of the atmosphere, Lemitted is the radiance of the Moon, and Latm is the radiance of Earth’s atmosphere in the lunar bands in question. The IR signature from the full Moon is defined by its apparent blackbody temperature of 390 K. Anyone using the following equation should note that the actual temperature of the Moon depends on the time elapsed since the location being imaged was last illuminated by the Sun. This can result in a substantial difference from the following equation, but it is good enough if you don’t know the details of the lunar ephemeris:

Lemitted(λ) = ε(λ) Lbb(λ, 390 K).

Nighttime temperatures are around 95 K.2 The spectral emissivity ε(λ) in the equation above can be estimated by using the reflectivity numbers quoted previously, remembering that 1 – R = ε. As a result of changes in the distance from Earth to the Moon, the solid angle of the Moon seen from Earth is Ω = 6.8 × 10⁻⁵ sr (with variation from 5.7 × 10⁻⁵ to 7.5 × 10⁻⁵).

It is an accident of nature that the angular size of the Moon is close to the angular size of the Sun. Under rare conditions, the Moon’s position in its orbit results in a lunar angular size that almost exactly matches the angular size of the Sun. If an eclipse occurs under these conditions, the entire disk of the Sun is blocked, and it is possible to view the solar corona. If the Moon is slightly farther away from Earth during an eclipse, then the entire disk of the Sun is not covered, and an annular eclipse occurs.

References
1. J. Shaw, “Modeling infrared lunar radiance,” Opt. Eng. 38(10), 1763–1764 (1999).


2. J. P. Williams et al., “The global surface temperatures of the Moon as measured by the Diviner Lunar Radiometer Experiment,” Icarus 283, 300–325 (Feb. 2017).
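A rough numerical sketch of these relations is below (our addition). It assumes a full Moon, sets the atmospheric terms aside (τ = 1, Latm = 0), and uses the blackbody temperatures and reflectivities quoted above:

```python
import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K
OMEGA_MOON = 6.8e-5  # solid angle of the Moon seen from Earth, sr

def planck_radiance(wavelength_um: float, temp_k: float) -> float:
    """Blackbody spectral radiance, W/m^2/sr/micron."""
    lam = wavelength_um * 1e-6
    per_meter = (2 * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * temp_k))
    return per_meter * 1e-6  # per meter of wavelength -> per micron

def lunar_radiance(wavelength_um: float, reflectivity: float) -> float:
    """Reflected-plus-emitted lunar radiance, ignoring the atmosphere (tau = 1)."""
    reflected = (planck_radiance(wavelength_um, 5900.0)
                 * reflectivity * OMEGA_MOON / math.pi)
    emitted = (1.0 - reflectivity) * planck_radiance(wavelength_um, 390.0)
    return reflected + emitted

# In the visible (R ~ 0.1), reflected sunlight dominates; at 10 microns
# (R ~ 0.02), the 390-K thermal emission dominates.
print(lunar_radiance(0.55, 0.1), lunar_radiance(10.0, 0.02))
```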

Atmospheric Seeing

Good “seeing” from the ground is seldom much better than about 1 arc second (arcsec), or about 5 microradians.

Discussion

The inhomogeneous and time-varying refractive index of the atmosphere degrades images of distant objects. The varying atmosphere induces wavefront tilt (apparent displacement of the target), scintillation (fluctuating apparent brightness of the target), and wavefront aberrations (blurring that includes coma, astigmatism, and all the other defects in the image). The combination of these effects is called “seeing.” Typical seeing obtainable on good nights at high-altitude observatories is approximately 1 arcsec (about 5 microradians). The atmosphere imposes this empirical limit; it is not strongly related to the aperture of the optics.

Common amateur telescope apertures of 10 cm or less are well matched to the atmosphere in the sense that larger apertures do not permit finer resolution unless wavefront correction is used. A small-aperture telescope is sensitive to wavefront tilts, which manifest as images that seem to move around over time intervals of one-tenth of a second or so. Large-aperture telescopes such as those used by professional astronomers are sensitive to additional aberrations caused by the atmosphere, which manifest as fuzzy images that appear to boil. Over the long exposures typically employed in astronomy, the boiling is averaged out, and the fuzzy images have typical angular extents of 1 arcsec. Large apertures do, of course, collect more light, some of which can be used to control active optical elements that can undo much of the effect of bad atmospheric seeing. Large telescopes are located where the seeing has been measured to be good, e.g., mountaintops. Two rules in the Atmospherics chapter deal directly with telescope influences on the overall seeing of an observatory: “Telescope Seeing Created by a Dome Floor” and “Telescope Seeing Due to Still or Ventilated Air.”

Seeing is better at high altitudes and at longer wavelengths. Bad sites and bad weather (lots of convection) make seeing worse. Seeing tends to improve with wavelength (to something like the 6/5th power); that is, the seeing angle gets smaller (better) as the wavelength increases. See the various rules in the Atmospherics chapter, particularly the rule “Fried Parameter.” Although very rare, seeing may approach 0.1–0.2 arcsec (or around 0.5–1.0 microradian) with ideal conditions.

References
Any book on amateur (or professional) astronomy will provide details on the issue of seeing, e.g., D. Schroeder, Astronomical Optics, Academic Press (2000).
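To make the wavelength scaling concrete: with the Fried parameter r0 growing as λ^(6/5) (see the Atmospherics chapter), the seeing angle ~λ/r0 shrinks as λ^(−1/5). The sketch below is our addition, with the 500-nm seeing value an assumed input:

```python
def seeing_arcsec(wavelength_um: float, seeing_500nm_arcsec: float = 1.0) -> float:
    """Scale seeing from its 500-nm value; r0 ~ lambda^(6/5) => seeing ~ lambda^(-1/5)."""
    return seeing_500nm_arcsec * (wavelength_um / 0.5) ** (-1.0 / 5.0)

# 1-arcsec visible seeing improves only modestly at 2.2 microns (K band).
print(f"{seeing_arcsec(2.2):.2f} arcsec")  # ~0.74 arcsec
```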


Comparison of Resonant Fluorescence and Rayleigh Guide Stars

Astronomers have successfully used both resonant and Rayleigh laser guide stars to correct the performance of telescopes equipped with adaptive optics. In both cases, the photon count N impinging on each pixel (or group of pixels) of an array of guide star sensors is given by

N = F πD²/(4 ntot ptot),

where F is the photon flux (photons/m²) arriving at the telescope, D is the diameter of the telescope, ntot is the total number of cells in the guide star sensor, and ptot is the total number of detectors in each cell. The photon flux for both the resonant scattering and Rayleigh scattering laser guide star is derived in the following sections.

Discussion

This rule provides methods for computing the signal available from each type of guide star. Another computation (the Strehl ratio of the atmosphere, defined in the Atmospherics chapter) shows how the signal is used to correct tilt in the incoming starlight field. Using a sodium guide star as an example, the photon flux from a resonant scattering laser guide star is as follows:1

Cs eff  Asub , 2 hc 4hs 1   n / s  

where Ep is the energy per pulse, Tt is the laser transmitter optical transmission, Tr is the optical transmission of the wavefront sensor, Tatm is the one-way transmission of the atmosphere, Asub is the area of a subaperture in the wavefront sensor, QE is the quantum efficiency of the wavefront sensor detection process,  is the wavelength, c is the speed of light, h is the Planck’s constant, hs is the height of the sodium layer above the site, Cs is the sodium column density, eff is the effective cross-section of the sodium atom's interaction with the laser beam [the factor (1 + n/s) is due to saturation], n is the natural radiative lifetime of sodium (16 ns), and s is the saturation time. The last of the defined parameters is the saturation time of sodium fluorescence. This is important since saturation of the stimulation is a bounding limit on the signal that can be produced. An expression for the saturation time is

s 

2t 2rms hc , E p Tatm eff

where rms is the radius of the laser beam profile (assumed to be Gaussian), which can be expressed as 1.22(hs/r0), and t is the laser pulse width. The equation for the photon flux for a Rayleigh guide star is similar to that of a resonant guide star. The photon flux from a Rayleigh guide star is given by1

14

Chapter 1

2 N pde  QE TtTrTatm

Asub BS l E p  , R2 hc

where BS is the fraction of incident laser photons scattered in the backward propagation direction per meter of scattering volume (sr–1/m–1), l is the length of the scattering volume (the range gate), and R is the range to the midpoint of the scattering volume. The length of the guide star is defined by the absorption rate of the stimulating light in the atmosphere. Low absorption leads to a long “streak,” whereas high absorption leads to a highly localized light source. The product of the backscatter cross-section and the atmospheric density is

2  n02  1 T0 P  h  ,  BS  h   N0 4 P0 T  h  2

where n0 is the index of refraction of air at sea level, N0 is the number density of air molecules at sea level, and P0, T0, P(h), and T(h) are the atmospheric pressure and temperature at sea level and altitude h, respectively.1 For typical atmospheric values,

 BS  2.8  1032

P h

T  h

sr 1m 1 ,

which implies a result of 5.1 × 10–7 sr–1/m–1 at 10 km. This means that a 1-km-long streak will return only about 0.05 percent of the incident light per steradian. The signal strength achieved in the wavefront sensor determines how effectively atmospheric aberrations are corrected. For example, tilt error is2 0 

0.61 L , SNRave d

where is the rms tilt measurement error, L is the laser wavelength, SNRave is the average signal-to-noise ratio of the detector, and d is the subaperture size. The signal-to-noise ratio of the guide star is given by SNR 

k s2 , k s  kb  det

where ks is the number of electrons due to the beacon signal, kb is the number of electrons due to the background, and σdet² is the detector noise variance (including read noise, dark current, etc.). The signal level ks in electrons is given by

$$k_s = N \cdot QE,$$


where N is the number of photons incident on a single detector, and QE is the detector’s quantum efficiency. Of course, tilt is not the only aberration to be corrected. Rules in the Atmospherics chapter describe the success at reducing higher-order aberrations using guide stars and adaptive optics. Isterling3 provides additional detail on the factors (including the size a of particles in the atmosphere) that determine σeff:

$$\sigma_{eff} = \frac{8\pi}{3}\left(\frac{2\pi n_{med}}{\lambda_0}\right)^4 a^6 \left(\frac{m^2 - 1}{m^2 + 2}\right)^2,$$

where 0 is the vacuum wavelength, a is the particle radius, and m = nsph/nmed is the ratio of the refractive index of the particle nsph to that of the surrounding medium nmed. Two additional restrictions on guide star design must be considered.1 They have to do with the length of the streak of light created by each pulse. The first is related to the range gate in the receiver; it should match the 1/e2 positions of the ends of the streak to the angular width of the guide star. This leads to

l  4.88

H 2 , Dt r0

where r0 is the Fried parameter, H is the average altitude of the guide star, and Dt is the transmitter optic diameter. A second requirement deals with ensuring that no subaperture can resolve the length of the streak. This leads to

$$\Delta l \le 2\,\frac{H^2 \lambda}{D\,r_0},$$
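The first two expressions of this rule chain together naturally. The following minimal Python sketch is our own illustration (the photon flux, telescope size, sensor layout, and SNR below are hypothetical values, not from the references):

import math

def pixel_photon_count(F, D, n_tot, p_tot):
    # N = F * pi * D^2 / (4 * n_tot * p_tot): photons per detector
    return F * math.pi * D**2 / (4 * n_tot * p_tot)

def tilt_error(wavelength_m, snr_ave, d_subap_m):
    # sigma_theta = 0.61 * lambda_L / (SNR_ave * d), radians rms
    return 0.61 * wavelength_m / (snr_ave * d_subap_m)

# Hypothetical case: 1000 photons/m^2 per pulse, 3.5-m telescope,
# 100 sensor cells of 4 detectors each; 0.35-m subapertures at SNR = 10
print(pixel_photon_count(1000.0, 3.5, 100, 4))   # ~24 photons per detector
print(tilt_error(0.589e-6, 10.0, 0.35))          # ~1e-7 rad rms tilt error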

where D is the receiver aperture (and will usually be the aperture of the astronomical telescope). Parenti and Sasiela4 and Parenti5 provide a convenient metric for the performance of guide star systems. The calculation provides insight into the resolution that can be achieved using a guide star system. Of course, two paths through the atmosphere are involved: one for the guide star light, and one for the light from the astronomical object being observed. The angle between these two paths (and the brightness of the guide star) determines the track jitter associated with a long-exposure image:

$$\theta_{ResLE} = \left[\underbrace{\left(1.22\frac{\lambda}{D}\right)^2\left(\frac{\vartheta}{\theta_{0,tilt}}\right)^2}_{\text{anisoplanatism}} + \underbrace{\left(1.22\frac{\lambda}{D}\right)^2\left(\frac{dt}{\tau_{tilt}}\right)^{5/3}}_{\text{servo error}} + \underbrace{\frac{20\left(hc/\lambda\eta\right)D^2}{3\,dt\,r_0^2\,I_t}}_{\text{sensor noise}}\right]^{1/2}.$$

The first jitter term is the anisoplanatic error that results from the difference in overall tilt between two beam paths with an angular separation ϑ; θ0,tilt (the isoplanatic angle, discussed in the Atmospherics chapter) is about 8 arcsec but can vary with location, season, daytime versus nighttime conditions, etc. The servo error term derives from the ratio of the sensor dwell time dt to the tilt time constant of the atmosphere τtilt (about 12 milliseconds).


The last term under the radical is related to photon noise in the detected signal It. As usual, r0 is the Fried parameter, and η is the quantum efficiency of the detectors.

References

1. S. R. Robinson, Ed., Emerging Systems and Technologies, Sec. 1.7.3.1, Volume 8 of The Infrared and Electro-Optical Systems Handbook, J. S. Accetta and D. L. Shumaker, Eds., Infrared Information Analysis Center and SPIE Press (1993).
2. B. K. McComas and E. J. Friedman, “Wavefront sensing for deformable space-based optics exploiting natural and synthetic guide stars,” Opt. Eng. 41(8), 2039–2049 (2002).
3. J. Isterling, “Electro-Optical Propagation through Highly Aberrant Media,” p. 35, Ph.D. Dissertation, University of Adelaide, digital.library.adelaide.edu.au/dspace/bitstream/2440/64292/8/02whole.pdf (2020).
4. R. R. Parenti and R. J. Sasiela, “Laser Guide-Star Systems for Astronomy,” J. OSA 11(1), 288–309 (1994).
5. R. R. Parenti, “Adaptive Optics for Astronomy,” Lincoln Laboratory Journal 5(1), 93–114 (1992).

Additional References





M. Sarazin and A. Tokovinin, “The Statistics of Isoplanatic Angle and Adaptive Optics Time Constant derived from DIMM Data,” Venice 2001: Beyond Conventional Adaptive Optics, eso.org/gen-fac/pubs/astclim/papers/venice2001/venice2001-msarazin.pdf (2020).
J. W. Hardy, Adaptive Optics for Astronomical Telescopes, Oxford, p. 121 (1998).

Number of Actuators in an Adaptive Optic

To correct for atmospheric turbulence effects, an adaptive optic system needs a minimum number of actuators. The number of actuators that is required, if evenly spaced over the adaptive optic, is1

$$N \approx \left(\frac{\text{telescope aperture}}{r_0}\right)^2,$$

where r0 is the form of the Fried parameter used for spherical waves, which applies to nearby sources:

$$r_0 = 3.024\left(k^2 C_n^2 L\right)^{-3/5}.$$

For plane waves, such as those received from starlight,

$$r_0 = 1.68\left(k^2 C_n^2 L\right)^{-3/5}.$$


In addition, L is the distance the light propagates through the disturbing atmosphere; k = 2π/λ; Cn² is the atmospheric structure coefficient, which is equal to ~10⁻¹⁴ m⁻²/³ (the value depends on environmental conditions, as explained in other rules in this chapter); and λ is the wavelength.

Discussion

These results derive directly from the turbulence theory of the atmosphere, described elsewhere in this book (mostly in the Atmospherics chapter). Considerable effort has gone into confirming the accuracy of the theory, as described in the introduction to this chapter and several of the rules. The results shown here are for the simplifying situation in which the properties of the atmosphere are accurately measured or modeled by a Cn² profile. The rule also assumes that r0 is smaller than the aperture that is equipped with the adaptive optics technology; otherwise, an adaptive optic is neither necessary nor helpful. For example, adaptive optics provide no improvement for the relatively small apertures used by most amateur astronomers because the aperture is about the size of the Fried parameter, meaning that only tilt occurs in such systems. Tilt can be removed with a steering mirror with an appropriate tilt sensor controlling its orientation.

The typical astronomical case involves correcting for the turbulence in a nearly vertical path through the atmosphere. The descriptions above for the Fried parameter apply only for constant atmospheric conditions. The complexity of computing r0 for the nearly vertical case can be avoided by assuming that r0 is about 15 cm. The rule also shows the wavelength dependence of the performance of an adaptive optics system: some algebra reveals that the number of actuators scales as λ⁻¹²/⁵, so longer wavelengths require fewer adaptive elements, as expected.

To properly compensate for atmospheric turbulence, the number of actuators depends on the Fried parameter and the size of the optic. The optical surface must be divided into more moveable pieces than the maximum number of turbulent cells that can fit in the same area. If fewer actuators are used, then the atmosphere will cause a wavefront error that cannot be compensated. Tyson2 shows that a more accurate representation of the number of actuators is

$$N = \left[\frac{0.05\,k^2 L C_n^2 D^{5/3}}{\ln(1/SR)}\right]^{6/5},$$

where SR is the desired Strehl ratio. The Strehl ratio is a commonly used performance measure for telescope optics and essentially defines how closely an optical system comes to performing in a diffraction-limited way. Diffraction-limited imaging is usually assumed to require a Strehl ratio of 0.8. Another way to look at this issue is to investigate the fitting error for a continuous facesheet. The following equation shows the variance in the fitting error in radians squared as a function of the Fried parameter r0 and d, the actuator spacing:3

$$\text{wavefront variance} = 0.28\left(\frac{d}{r_0}\right)^{5/3}\ (\mathrm{rad}^2).$$


Thus, we see that, using the simple form of the rule, a 1-m aperture operating at a location that typically experiences a Fried parameter of 5 cm will need 400 actuators. This is typical of nearly vertical viewing associated with astronomical applications. There are at least two ways to implement the corrections in a wavefront. The first approach changes the shape of the primary mirror, which is rarely done at high bandwidth. The more common approach corrects a smaller optic located at a pupil. Because of the magnification of the telescope, the pupil will necessarily be smaller than the primary mirror, meaning that the number of actuators computed above must fit into this smaller area.
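The two actuator-count expressions are easy to compare numerically. The minimal Python sketch below is our own illustration; in it, Tyson's form is rewritten in terms of D/r0 by assuming a constant Cn² so that 0.423 k² Cn² L = r0⁻⁵/³:

import math

def actuator_count_simple(D, r0):
    # Simple rule: evenly spaced actuators ~ (aperture / r0)**2
    return (D / r0) ** 2

def actuator_count_tyson(D, r0, strehl):
    # Tyson's form, with 0.05*k**2*L*Cn2*D**(5/3) rewritten as
    # (0.05/0.423)*(D/r0)**(5/3) for constant Cn2
    return ((0.05 / 0.423) * (D / r0) ** (5 / 3) / math.log(1 / strehl)) ** (6 / 5)

print(actuator_count_simple(1.0, 0.05))        # 1-m aperture, r0 = 5 cm -> 400
print(actuator_count_tyson(1.0, 0.05, 0.8))    # same case for a 0.8 Strehl goal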

References

1. H. Weichel, Laser System Design, SPIE Course Notes, p. 144, SPIE Press (1988).
2. R. Tyson, Principles of Adaptive Optics, p. 259, Academic Press (1991).
3. R. Dekany et al., “1600 Actuator Tweeter Mirror Upgrade for the Palomar Adaptive Optics System (PALAO),” Proc. SPIE 4007 (2000).

Bandwidth Requirement for Adaptive Optics

To correct phase fluctuations induced by the atmosphere, an adaptive optics servo system should have a bandwidth of

$$f_g = \frac{0.4\,v_w}{\sqrt{\lambda L}},$$

where vw is the wind velocity, λ is the wavelength, and L is the path length inducing the phase fluctuations.

Discussion

This handy relationship indicates that the shorter the wavelength and the higher the wind velocity, the faster the servo system must respond. The bandwidth is lowered as the path length increases because of the effect of averaging over the path. The bandwidth defined by this formula is often referred to as the Greenwood frequency fg. A more complete expression for the Greenwood frequency is

$$f_g = \left[0.102\,k^2 \sec\beta \int_0^L C_n^2(h)\,v(h)^{5/3}\,dh\right]^{3/5},$$

where β is the angle from the line of sight to the zenith, v(h) is the vertical profile of the wind, and Cn²(h) is the atmospheric structure function. As usual, k is 2π/λ. With a little work, it can be shown that the Greenwood frequency is proportional to λ⁻⁶/⁵. An even simpler form of the rule is4

$$f_g = \frac{0.43\,v_w}{r_0},$$

where r0 is the Fried parameter.
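For a feel for the numbers, here is a minimal sketch of the two simple forms (our own illustration; the wind speed, path length, and r0 below are assumed values):

import math

def greenwood_simple(v_wind, wavelength, L):
    # f_g = 0.4 * v / sqrt(lambda * L); SI units (m/s, m, m) -> Hz
    return 0.4 * v_wind / math.sqrt(wavelength * L)

def greenwood_fried(v_wind, r0):
    # Even simpler form: f_g = 0.43 * v / r0
    return 0.43 * v_wind / r0

# 10-m/s wind, 0.5-um wavelength, 5-km path; then r0 = 15 cm
print(greenwood_simple(10.0, 0.5e-6, 5e3))   # ~80 Hz
print(greenwood_fried(10.0, 0.15))           # ~29 Hz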


Finally, it can be shown that there is a relationship between the Greenwood frequency and Strehl ratio SR. Several rules about the Strehl ratio, an important metric for the performance of laser and optical systems, are found in the Lasers chapter. It is shown5 that

$$SR = \exp\left[-\left(\frac{f_G}{f_s}\right)^2\right],$$

where fG and fs are the Greenwood frequency and system bandwidth, respectively. A related expression defines the Strehl ratio if the effects of the isoplanatic angle θ0 and slew rate Θ̇ of a system tracking a signal through the atmosphere are included:5

$$SR = \exp\left\{-\left[\left(\frac{f_G}{f_s}\right)^2 + \left(\frac{2.5\,\dot{\Theta}}{4 f_s \theta_0}\right)^{5/3}\right]\right\}.$$

References

1. R. Tyson, Principles of Adaptive Optics, p. 36, Academic Press (1991).
2. J. Mansell, “Micromachined Deformable Mirrors for Laser Wavefront Control,” Ch. 2, p. 3, Ph.D. Dissertation, Stanford University (2002).
3. P. Berger et al., “AEOS adaptive-optics system and visible imager,” Proc. 1999 AEOS Technical Conference, U.S. Air Force, Maui, HI.
4. www-astro.physics.ox.ac.uk/~pfr/ObsTech/ObsTech3.pdf (2019).
5. S. R. Robinson, Ed., Emerging Systems and Technologies, p. 228, Volume 8 of The Infrared and Electro-Optical Systems Handbook, J. S. Accetta and D. L. Shumaker, Eds., Infrared Information Analysis Center and SPIE Press (1993).

Additional Reference



aeos.ulg.ac.be/upload/ao.pdf (2020).

Photon Rate at a Focal Plane

The photon rate at a focal plane from a star of magnitude m is1

$$S = N\,T\,\frac{\pi}{4}\left(1 - \varepsilon^2\right)\Delta\lambda\,D^2\,10^{-0.4m},$$

where S is the photon flux in photons/second, N is the irradiance of a magnitude-zero star [≈10⁷ photons/(cm² sec micron)] in the band centered on 0.55 microns, D is the diameter of the telescope (cm), T is the unitless transmittance of the atmosphere and optics, Δλ is the band pass of interest (microns), m is the visual magnitude of the star, and ε is the obscuration ratio, which represents the ratio of the diameter of the secondary mirror to the diameter of the primary mirror. The additional obscuration of the struts that hold the secondary mirror is included as well. The latter effect will not occur if the telescope is an off-axis design.


Discussion

This rule allows the approximate calculation of the number of photons per second at the instrument focal plane. Additionally, Kailey2 gives us the following handy approximations:

- A difference of 1 magnitude results in a difference of about 2.5 (actually 2.512) in spectral irradiance.
- A difference of 5 magnitudes is a factor of 100 in spectral irradiance.
- A small-magnitude difference is equivalent to an equal percentage difference in brightness (e.g., 10.01 magnitude is ~1 percent dimmer than 10.00 magnitude).

This rule was developed for A-class stars (the hottest subclass of white stars with a surface temperature of about 9000 K and prominent hydrogen lines). It is valid for narrow visible bands (use with caution otherwise). Most on-axis reflecting telescopes have circular obscurations in the center of the aperture. Therefore, if ε is 0.3, then

$$\frac{\pi}{4}\left(1 - \varepsilon^2\right) = 0.71,$$

so the equation simplifies to $S = 0.71\,N\,T\,\Delta\lambda\,D^2\,10^{-0.4m}$.

References

1. D. Schroeder, Astronomical Optics, p. 435, Academic Press (1987).
2. Private communications with Walt Kailey (1995).

Additional Reference



D. Dayton, M. Duncan, and J. Gonglewski, “Performance Simulations of a Daylight Low-Order Adaptive Optics System with Speckle Postprocessing for Observation of Low-Earth Orbit Satellites,” Opt. Eng. 36(7), 1910–1917 (1997).

Reduction of Magnitude by Airmass

The atmosphere absorbs about 0.2 magnitude per airmass.

Discussion

Ground-based astronomers frequently represent atmospheric path length as airmass normalized to the zenith. Looking straight up, there is an airmass of 1. As the telescope’s line of sight is reduced in elevation (i.e., the zenith angle gets larger), the amount of air through which it must view is increased and reaches a maximum at the horizon. The total airmass for viewing an object at the horizon from sea level is about 10 times the vertical view because the densest part of the atmosphere is near the ground. In this rule, we will show the general calculation of the path length as a function of the elevation angle of the telescope. To start the calculation, let us first make the “flat-Earth” assumption. That is, let us take the case where the angle measured from the zenith is small (less than about 45°).


This allows a simple computation of the total concentration of airmass between the ground telescope and the vacuum of space. In performing this calculation, we assume that the density of the atmosphere decreases as an exponential of the altitude,

$$\rho(h) = \rho_0\,e^{-Lh},$$

where L is the reciprocal of the scale height of the atmosphere, h is the altitude, and ρ0 is the density of air molecules at the surface. A typical value for the scale height is 7 km, meaning that at an altitude of 7 km, the pressure and density are 1/e (37 percent) of their surface values. This ideal model atmosphere is easy to derive from the common model of an exponential pressure profile from the ground to space. The scale height H is the factor used in such a model to determine the rate at which pressure decreases with altitude according to

$$P = P_0 \exp\left(-\frac{z}{H}\right),$$

where P0 is the surface pressure, and z is the altitude. L is 1/H. The total column of air along a path from the ground to space is found by the following integration, where ρ(s) is the density of air as a function of position along the integration path:

$$\int_0^\infty \rho(s)\,ds.$$

Written in terms of the path over which we are viewing (s), the integral is

$$\int_0^\infty \rho_0\,e^{-Ls\cos Z}\,ds = \frac{\rho_0}{L\cos Z},$$

where Z is the zenith angle. For this simple model, the elevation angle at which the airmass is ten times the vertical value is 5.7°. Now consider the more complex case of a curved Earth of radius Re. Here, we find that h and s are related by

$$h(s) = \sqrt{R_e^2 + s^2 + 2 s R_e \cos Z} - R_e,$$

where Z is again the zenith angle. The path integral is now

$$\rho_0\int_0^\infty e^{-L\,h(s)}\,ds.$$

Although this looks impossibly complex, a little numerical analysis shows that (except for angles greater than about 80°) h and s are still related by the cosine of the zenith angle. This means that for a wide range of angles, the flat-Earth result works just fine. A detailed analysis shows that when the elevation angle is about 2.8°, the total molecular path is about 10 times the case for the shortest (vertical) path through the atmosphere.
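A small numerical sketch (ours, using the 7-km scale height and the curved-Earth h(s) above) makes this easy to check:

import math

def airmass(zenith_deg, H=7.0, Re=6371.0, s_max=2000.0, n=20000):
    # Integrate exp(-h(s)/H) along the slant path (distances in km) and
    # divide by the vertical column, which is exactly H for this model.
    z = math.radians(zenith_deg)
    ds = s_max / n
    column = 0.0
    for i in range(n):
        s = (i + 0.5) * ds
        h = math.sqrt(Re**2 + s**2 + 2.0 * s * Re * math.cos(z)) - Re
        column += math.exp(-h / H) * ds
    return column / H

# Airmass near 10 appears at the low elevation angles quoted above
for zen in (0.0, 45.0, 84.3, 87.2, 90.0):
    print(zen, round(airmass(zen), 1))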


In any case, the horizontal view assures a very long atmospheric path. We are all familiar with the effects of such a path, as we have seen the intense red of the setting Sun. In this view, the path is so long that the blue component of the Sun's light is scattered away, leaving only the long-wavelength component. Although the term airmass, used to represent the pressure-weighted atmospheric path length, was developed by astronomers and previously almost exclusively used by observatories, it is finding its way into the lexicon of the general electro-optical and security sensor practitioner.

Night-Sky Exposure Time with a Fixed Camera

The maximum exposure time for night photography of the sky with a fixed, mounted camera is about 15–20 sec. Longer exposure times will result in noticeable streaking of the stars due to Earth's rotation.

Discussion

The night sky is a popular topic for photography by both amateurs and professionals. Cameras mounted on an equatorial mount track the rotation of Earth so that stars remain fixed on the focal plane, thus allowing very long duration exposures. However, night-sky photography with a simple fixed camera on a tripod is also popular. In this case, an upper limit on exposure time is set by the motion of the stars across the focal plane as Earth rotates. The angular rotation rate of Earth is easily computed as one 360° rotation every 24 hr (86,400 sec). Simple division yields an angular rate of 0.0042 deg/sec = 0.0727 millirad/sec, or about 15 deg/hr. A Canon Rebel XSi camera with a 12.40-megapixel (4313 × 2875) focal plane having a pixel pitch of 5.14 µm, coupled with a 24-70-mm f/2.8 lens zoomed out to f = 24 mm for maximum sky coverage, will have an effective aperture of 8.57 mm, resulting in an optical point spread function (OPSF) of 1.86 microns and a pixel IFOV of 0.214 millirad. Dividing the IFOV by the angular rate of Earth's rotation gives 2.95 sec for a star to cross a pixel. The amount of acceptable motion blur or streaking is subjective (and subject to artistic interpretation), but a 15-sec exposure will result in star streaks about 5 pixels long.
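The arithmetic above is easy to wrap in a helper (a minimal sketch of ours; the cosine factor for declination is our addition, since stars away from the celestial equator move more slowly across the frame):

import math

OMEGA = 2 * math.pi / 86400.0          # Earth rotation rate, rad/s

def streak_pixels(exposure_s, focal_mm, pitch_um, declination_deg=0.0):
    # Star streak length in pixels for a fixed camera
    ifov = pitch_um * 1e-6 / (focal_mm * 1e-3)   # rad per pixel
    return OMEGA * math.cos(math.radians(declination_deg)) * exposure_s / ifov

# Canon Rebel XSi example from the text: 5.14-um pixels, 24-mm lens
print(round(streak_pixels(15, 24, 5.14), 1))     # ~5 pixels in 15 s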

Reference

1. www.exposureguide.com/night-sky-photography-tips (2019).

Chapter 2
Atmospherics

Introduction

It is hard to imagine a subject more complex, and yet more useful, than the study of the propagation of light in the atmosphere. Because of its importance in a wide variety of human enterprises, considerable attention has been paid to this topic for several centuries. Initially, the effort was dedicated to learning how the apparent size and shape of distant objects depend on the properties of the atmosphere. The advent of modern science provided new tools for measuring and theorizing about the optical properties of the atmosphere. Much of the motivation for progress was related to the military, astronomy, and navigation at sea. Modern applications are still important, and the prospect of laser communication links has provided continued impetus for progress.

Maturation of the field of spectroscopy led to a formal understanding of the absorption spectra of significant atmospheric species and their variation with altitude. Computer models1–3 that include virtually all that is known about the absorption and scattering properties of atmospheric constituents have been assembled and can provide very complete descriptions of transmission as a function of wavelength with a spectral resolution of about 1 cm⁻¹. This is equivalent to a wavelength resolution of 0.1 nm at a wavelength of 1 micron. In addition to the current chapter, the reader may wish to review the Degraded Visual Environments chapter, which provides atmospheric absorption curves and discussion of the effects of various particulate obscurants.

In addition to gradually refining our understanding of atmospheric absorption by considering the combined effect of the constituents, there is also a rather complete and elaborate theory of scattering in the atmosphere. These results have been extended greatly by computer modeling, particularly in the field of multiple scattering and Monte Carlo methods. For suspended particulates of known optical properties, reliable estimates of scattering properties for both plane and spherical waves can be obtained for conditions in which the optical thickness is not too large. Gustav Mie (1868–1957) was particularly influential, as he was the first to use Maxwell's equations to compute the scattering properties of small spheres suspended in a medium of another index of refraction. A number of references suggest that Mie was not the first to solve the problem but was the first to publish the results. His work, along with that of Debye, is now generally called

“Mie theory.” Rayleigh had already shown that scattering should vary inversely as the fourth power of the wavelength using dimensional analysis arguments. Mie theory is often compared with the earlier approach of Airy. Technical and historical details can be found in Lee, Jr.4

Key technology advances range from measuring the expected optical properties at locations considered for an astronomical telescope to determining the amount of sunlight that enters the atmosphere and is subsequently scattered back into space. Laser technology has also been a key factor in new measurements of atmospheric behavior, in both scattering and absorption phenomena. Tunable lasers with very narrow linewidths have been employed to verify atmospheric models and have provided the ability to characterize not only the “clean” atmosphere but also the properties of atmospheres containing aerosols and industrial pollution. Laser radar is regularly used to characterize the vertical distribution of scattering material, cloud density, and other features of the environment, weather, and climate.

New advances in EO technologies have allowed new insight into radiation transfer into, out of, and within the atmosphere. Satellite-based remote sensors have been measuring the radiation budget of Earth to define its impact on climatic trends. There are many other examples of space-based EO sensors that concentrate on measuring properties of the atmosphere, including the concentration of trace constituents in the stratosphere, ozone concentrations over the poles, and so on. Recently, at the urging of the military, measurements and improved theory have led to the development of methods for estimating and removing clear-air turbulence effects with important improvements for astronomers, imaging, and optical communications. New advancements in measuring the wavefront errors resulting from turbulence are included in adaptive optics to remove atmospheric effects in ground telescopes. This technology can remove, up to a point, a variety of optical aberrations induced by the constant variation in the index of refraction of the atmosphere. Modern adaptive optics systems provide imaging performance in ground telescopes that rivals space telescopes. As a side note, consider that even the casual observer of the night sky has noticed that planets do not twinkle but may not know why. This is because the angular size of planets is sometimes larger than the typical tilt aberration of the atmosphere (the so-called isoplanatic angle). Similarly, for the same reason, a passenger in a high-flying jet, viewing a city at night, will see no twinkling.

We can expect continual improvement in our understanding of the atmosphere and the way that it interacts with light propagating within it. All of these improvements in theory, supported by advancements in instrumentation quality, will result in even more capable EO systems and allow them to reduce the perturbing effects of their operating environments. The interested reader can find technical articles in Optical Engineering, Applied Optics, and similar technical journals. At the same time, magazines such as Sky and Telescope occasionally include information about the way amateur astronomers are using new technologies to cope with the effects of the atmosphere. A few new books have come out that deal specifically with imaging through the atmosphere. SPIE is a good source for books, journals, and conference proceedings on these topics.

References

1. modtran.spectral.com (2020).
2. www.spiedigitallibrary.org/conference-proceedings-of-spie/0277/0000/Atmospheric-Transmittance-And-Radiance-The-LOWTRAN-5-Code/10.1117/12.931910.short (2019).
3. R. Appleby and H. B. Wallace, “Standoff Detection of Weapons and Contraband in the 100 GHz to 1 THz Region,” IEEE Trans. Antennas and Propagation 55(11), 2944–2956 (2007).
4. R. L. Lee, “Mie theory, Airy theory, and the Natural Rainbow,” Appl. Opt. 37(9), www.usna.edu/users/oceano/raylee/papers/RLee_MieAiry_paper.pdf (2020).

Vertical Profiles of Atmospheric Parameters

Atmospheric Parameter    a (km⁻¹)    b (km⁻²)
Humidity (g/m³)          0.308       0.05
Temperature (K)          0.01464     0.00081
Pressure (mB)            0.11762     0.00109

Using the following functional form and the values shown in the table, one can estimate vertical profiles of temperature, pressure, and humidity:1

$$f(h) = f(0)\exp\left(-a h - b h^2\right),$$

where f(0) represents the surface value of each parameter, and h is the height in kilometers. Note that the pressure is in millibars.

Discussion

The fact that there is a strong exponential component to these models will not be a surprise to anyone who has studied the thermodynamics of the atmosphere. Both pressure and temperature can be evaluated for an equilibrium atmosphere by investigating the impact that gravity has on the pressure profile. Further analysis of the entropy and molecular properties of the idealized atmospheric constituents leads to the conclusion that both pressure and temperature exhibit an exponential change with altitude. Modeling the vertical profile of water vapor is not so easily done, but it should be clear that the thermodynamics of water, particularly the exchange of energy that generates changes in phase from liquid to vapor to solid (ice), are entwined with the temperature profile. The typical exponential property of the atmosphere can be seen in the first term of the equation. The quadratic term (provided by parameter b) helps fit data in MODTRAN2 calculations. The approximation works well up to a 4-km altitude in a marine environment.
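The following minimal sketch (ours) evaluates the profile; note that the decreasing-with-altitude sign convention in the exponent follows the reconstruction above, and the sea-level pressure is an assumed example value:

import math

def profile(f0, a, b, h_km):
    # f(h) = f(0) * exp(-a*h - b*h^2); valid to ~4 km in a marine environment
    return f0 * math.exp(-a * h_km - b * h_km**2)

# Pressure (a = 0.11762, b = 0.00109) from sea level (1013 mB) to 2 km
print(round(profile(1013.0, 0.11762, 0.00109, 2.0), 1))   # ~797 mB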

References

1. F. Hanson, “Coherent laser radar performance in littoral environments—a statistical analysis based on weather observations,” Opt. Eng. 39(11), 3044–3052 (2000).
2. modtran.spectral.com (2020).

Visibility Distance for Rayleigh and Mie Scattering

Visibility is a somewhat subjective measure of imaging capability. Different applications (meteorology, photography, aviation, air quality, transportation) use different definitions. In all cases, visibility is sensitive to atmospheric conditions and varies with wavelength.

Discussion

When the relative humidity is greater than roughly 75 percent, aerosols (haze particles) grow into the size range where Mie scattering is important. As a rule of thumb, Mie scattering is the type of scattering that reduces visibilities below the criterion for unrestricted visibility (20,000 m).

The submarine laser communication (SLC) community developed its own models for Cn² with an emphasis on maritime environments.6 These data were derived from matching measurements in a tropical maritime environment, which may limit the applicability of these numbers when applied to other locations. Contrary to more conventional models, in this model the normal daytime h⁻⁴/³ power law is not exhibited. Table 2.5 shows a comparison of SLC daytime and nighttime models. Tyson8 and Canuet9 provide important additional details (the former describes the detailed theory, and the latter focuses on the variety of models that have been developed).

References

1. J. Accetta, “Infrared Search and Track Systems,” p. 287 in Passive Electro-Optical Systems, S. Campana, Ed., Vol. 5 of The Infrared and Electro-Optical Systems Handbook, J. S. Accetta and D. L. Shumaker, Eds., Infrared Information Analysis Center and SPIE Press (1993).
2. R. Beland, “Propagation through Atmospheric Optical Turbulence,” p. 222, in Atmospheric Propagation of Radiation, F. Smith, Ed., Vol. 2 of The Infrared and Electro-Optical Systems Handbook, J. S. Accetta and D. L. Shumaker, Eds., Infrared Information Analysis Center and SPIE Press (1993).
3. L. Biberman, “Weather, Season, Geography, and Imaging System Performance,” Ch. 29 in Electro-Optical Imaging System Performance and Modeling, L. Biberman, Ed., SPIE Press, pp. 29-33 to 29-37 (2001).
4. M. Friedman, “A Collection of Models,” Night Vision and Electronic Sensors Directorate, Ft. Belvoir, VA, www.zieg.com/links/sensors/NVThermManual5.pdf (2020).
5. M. Friedman, “A Turbulence MTF Model,” Night Vision and Electronic Sensors Directorate, Fort Belvoir, VA, available in NVTHERM at www.ontar.com (2020).
6. S. R. Robinson, Ed., Emerging Systems and Technologies, p. 177, Vol. 8 of The Infrared and Electro-Optical Systems Handbook, J. S. Accetta and D. L. Shumaker, Eds., Infrared Information Analysis Center and SPIE Press (1993).
7. H. Hemmati, Ed., Near-Earth Laser Communications, CRC Press, p. 241 (2009).
8. R. Tyson, Principles of Adaptive Optics, Academic Press, p. 29 (1991).
9. L. Canuet, “Atmospheric turbulence profile modeling for satellite-ground laser communication,” Master's Thesis, Technical University of Catalonia, tinyurl.com/ycxd88f8 (2020).

Cn2 as a Function of Weather

Cn² is the index of refraction structure constant and is widely used in the study of the optics of the atmosphere. Understanding its general properties is an important area of research. For altitudes above 15 m and up to an altitude of a few hundred meters, the following approximation1 can estimate Cn² for various wind speeds and humidities:

$$\begin{aligned}C_n^2 = {}& 3.8\times10^{-14}\,W + 2\times10^{-15}\,T - 2.8\times10^{-15}\,RH + 2.9\times10^{-17}\,RH^2 \\ & - 1.1\times10^{-19}\,RH^3 - 2.5\times10^{-15}\,WS + 1.2\times10^{-15}\,WS^2 - 8.5\times10^{-17}\,WS^3 \\ & - 5.3\times10^{-13},\end{aligned}$$

where W is the temporal hour weight (described below), T is the air temperature in K, RH is the relative humidity (percent), and WS is the wind speed (m/sec).

Discussion

A variety of Cn² models abound in the literature, but few attempts try to capture its relationship to environmental conditions. Once the value of Cn² is found, it can be scaled with altitude using an appropriate model, as shown in the rule “Cn2 Estimates” in this chapter. Yitzhaky et al.1 point out that an even more complete model is created when one includes the effects of aerosols. They do this by estimating the total cross-sectional area (TCSA), as below, and modifying the estimate of Cn². The units of TCSA are cm²/m³:

$$\begin{aligned}TCSA = {}& 9.69\times10^{-4}\,RH - 2.75\times10^{-5}\,RH^2 + 4.86\times10^{-7}\,RH^3 \\ & - 4.48\times10^{-9}\,RH^4 + 1.66\times10^{-11}\,RH^5 - 6.26\times10^{-3}\ln RH \\ & - 1.34\times10^{-5}\,SF^4 + 7.30\times10^{-3}\end{aligned}$$

and

$$\begin{aligned}C_n^2 = {}& 5.9\times10^{-15}\,W + 1.6\times10^{-15}\,T - 3.7\times10^{-15}\,RH + 6.7\times10^{-17}\,RH^2 \\ & - 3.9\times10^{-19}\,RH^3 - 3.7\times10^{-15}\,WS + 1.3\times10^{-15}\,WS^2 - 8.2\times10^{-17}\,WS^3 \\ & + 2.8\times10^{-14}\,SF - 1.8\times10^{-14}\,TCSA + 1.4\times10^{-14}\,TCSA^2 - 3.9\times10^{-13},\end{aligned}$$

where SF is the solar flux in kW/m². Note the introduction of the concept of temporal-hour in the equation and a weighting function W associated with it. The temporal-hour is defined as one-twelfth of the time between sunrise and sunset;2 in winter, a temporal-hour is less than 60 minutes. Table 2.6 shows the values of W that should be used to change the estimate during the day.


Table 2.6 Values of W (temporal hours are measured from sunrise at 0; sunset falls at 12).

Temporal Hour Interval    Relative Weight (W)
Until –4                  0.11
–4 to –3                  0.11
–3 to –2                  0.07
–2 to –1                  0.08
–1 to 0                   0.06
0 to 1                    0.05
1 to 2                    0.10
2 to 3                    0.51
3 to 4                    0.75
4 to 5                    0.95
5 to 6                    1.00
6 to 7                    0.90
7 to 8                    0.80
8 to 9                    0.59
9 to 10                   0.32
10 to 11                  0.22
11 to 12                  0.10
12 to 13                  0.08
Over 13                   0.13
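The first (aerosol-free) regression is easy to code. The sketch below is ours, with the signs following the reconstruction above and an assumed mid-afternoon example:

def cn2_macroscale(W, T_K, RH, WS):
    # Weather-based Cn2 estimate [m^(-2/3)]; W from Table 2.6,
    # T in kelvin, RH in percent, WS in m/s
    return (3.8e-14 * W + 2e-15 * T_K - 2.8e-15 * RH + 2.9e-17 * RH**2
            - 1.1e-19 * RH**3 - 2.5e-15 * WS + 1.2e-15 * WS**2
            - 8.5e-17 * WS**3 - 5.3e-13)

# Mid-afternoon (W = 1.0), 300 K, 40% RH, 3-m/s wind
print(f"{cn2_macroscale(1.0, 300.0, 40.0, 3.0):.2e}")   # ~4e-14 m^(-2/3)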


Figure 2.7 Cumulative probability of maritime Cn2. Data represent the results of experiments with a horizontal view near the ground.5,6

 

Figure 2.7 is the cumulative probability of Cn² from the U.S. Navy's R384 database. This is a database of detailed atmospheric measurements at multiple maritime locations across the globe, with a horizontal path. Each point on the curve shows the percentage of Cn² values below that level. For example, 75 percent of the Cn² values measured were below 1 × 10⁻¹⁴.

References

1. Y. Yitzhaky, I. Dror, and N. Kopeika, “Restoration of Atmospherically Blurred Images According to Weather-Predicted Atmospheric Modulation Transfer Functions,” Opt. Eng. 36(11), 3064–3072 (1997).
2. D. Sadot and N. S. Kopeika, “Forecasting Optical Turbulence Strength on the Basis of Macroscale Meteorology and Aerosols: Models and Validation,” Opt. Eng. 31(2), 200–212 (1992).
3. M. Friedman, “A Collection of Cn2 Models,” Night Vision and Electronic Sensors Directorate, Ft. Belvoir, VA, www.zieg.com/links/sensors/NVThermManual5.pdf (May 1, 2002).
4. M. Friedman, “A Turbulence MTF Model,” Night Vision and Electronic Sensors Directorate, Ft. Belvoir, VA (2020).
5. E. Friedman and J. Miller, Photonics Rules of Thumb, 2nd ed., p. 56, McGraw-Hill (2004).
6. Technical Digest of the Naval Surface Weapons Center, p. 98, apps.dtic.mil/sti/pdfs/ADA323523.pdf (1996).

Impact of Weather on Visibility

The attenuation coefficient for fog and haze is related to visibility according to1–3

$$\sigma = 3.912 / V,$$

where V is the daylight visual range (usually defined as the range at which the contrast between target and background falls to 2 percent). The units of σ are derived from the units of V.

Discussion

A similar rule is Koschmieder's,4 in which visibility = 3/σ for black targets against a bright sky. In this case, σ is the total attenuation, including both scattering and absorption. See the rule “Equations for Empirical Visibility” in the DVE chapter for more details. Allard's law4,5 applies to the visibility of lights at night (originally developed to estimate the nighttime range of lighthouses):

$$E_T = \frac{I\,e^{-\sigma V}}{V^2},$$

where ET is the illumination detection threshold of the human eye, I is the power of the light source in watts, and V is the visibility in km. When using this equation, choose units


that result in a meaningful value of ET. For example, we expect σ to be in km⁻¹, so V must be in km. Other units can be used so long as the product of the distance and attenuation coefficient is unitless. For ET to be in W/m², the units of V in the denominator must be in meters. The equation above derives from a type of beam spread model, coupled with an attenuation term. Reference 6 describes a measure for pilots viewing down a runway as the visual threshold Et (lux), which is approximated by log Et = –5.7 + 0.64 log B, where B is the background luminance (cd/m²). A lower threshold on Et is set at 6.8 × 10⁻⁶ lux, the night limit for background luminance. Hudson3 shows that the effect of rain on visual range and scattering coefficient σ can be estimated from

$$\sigma_{rain} = 1.25\times10^{-6}\,\frac{R}{r^3},$$

where R is the rainfall rate (cm/sec), and r is the radius of the drop (cm). Alternatively, Ref. 1 gives the scattering coefficient of rainfall as

$$\sigma_{rain} = 0.248\,f^{0.67},$$

where f is the rainfall rate in mm/hr. Oakley and Satherley5 provide some insight into the effect of aerosols on scattering in the atmosphere. They point out that, for a uniform distribution of particles of concentration D and radius a, the scattering coefficient is

$$\sigma = D\,\pi a^2\,Q_{sc},$$

where Qsc is the Mie scattering coefficient, which is a complicated function9,10 of the ratio (2πa)/λ. As a increases, either by considering scattering at shorter wavelengths or by increasing the aerosol size, Qsc approaches 2. The result is that, for a large particle size or short wavelength, the particles have a scattering cross-section twice their geometric size. For non-monochromatic radiation, the exponential relationship mentioned earlier in this rule may not apply. For example, we include an example using MWIR data for the band from 3–5 microns.10 The typical exponential decay in transmission with distance is not observed. For a band as large as the one shown in Fig. 2.8, this effect results in a more complicated relationship between intensity and distance. For a more extensive discussion of atmospheric effects on visibility, refer to the chapter on Degraded Visual Environments.

Figure 2.8 For a wide spectral bandwidth, non-exponential transmission is a result.
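A minimal sketch (ours) tying together the visibility relations earlier in this rule, with assumed example values of visual range and source power:

import math

def extinction_coefficient(V_km):
    # sigma = 3.912 / V; units of sigma follow from the units of V
    return 3.912 / V_km

def allard_threshold_illuminance(I_watts, V_km, sigma_per_km):
    # Allard's law: E_T = I * exp(-sigma * V) / V^2 (W/km^2 with km units)
    return I_watts * math.exp(-sigma_per_km * V_km) / V_km**2

sigma = extinction_coefficient(10.0)                     # 10-km visual range
print(round(sigma, 3))                                   # ~0.391 1/km
print(allard_threshold_illuminance(100.0, 5.0, sigma))   # 100-W source at 5 km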

References

1. Burle Electro-Optics Handbook, Burle Industries, Lancaster, PA, p. 87, available at ftp://ftp.bartol.udel.edu/jmc/pmt/rca_eo_hb.pdf (2020).
2. en.wikipedia.org/wiki/Visibility (2020).
3. R. Hudson, Infrared Systems Engineering, John Wiley & Sons, pp. 161–165 (1969).
4. A. Bennett, Introduction to Atmospheric Visibility Estimation, Bristol Industrial and Research Associates Ltd., www.biral.com/wp-content/uploads/2015/02/Introduction_to_visibility-v2-2.pdf (2020).
5. J. Oakley and B. Satherley, “Improving Image Quality in Poor Visibility Conditions Using a Physical Model for Contrast Degradation,” IEEE Trans. Image Processing 7(2), 167 (1998).
6. U.S. Department of Transportation, Federal Aviation Administration, United States Experience Using Forward Scatterometers for Runway Visual Range, March 1997, DOT-VNTSC-FAA-97-1.
7. L. Biberman, “Weather, Season, Geography, and Imaging System Performance,” Chap. 29 in Electro-Optical Imaging System Performance and Modeling, L. Biberman, Ed., SPIE Press, pp. 29-33 to 29-37 (2001).
8. Private communications with Dick Kerr (2016).
9. omlc.org/software/mie/maetzlermie/Maetzler2002.pdf (2020).
10. Military Handbook Quantitative Description of Obscuration Factors for Electro-Optical and Millimeter-Wave Systems, DOD-HDBK-178(ER), U.S. Department of Defense, everyspec.com/DoD/DoD-HDBK/DOD-HDBK-178_11056 (1986).

Bufton Vertical Profile of Wind Speed

The vertical profile of wind speed is a factor in computing the properties of atmospheric turbulence. Bufton1 provides a good approximation for the profile:


$$v(h) = 2.5 + 18.5\exp\left[-\left(\frac{h\sin z - 12}{5}\right)^2\right],$$

where the velocity v (m/sec) is a function of height h (km), and z is the zenith angle of the wind direction.

Discussion

The Bufton wind model provides an estimate of the vertical profile of wind. This model is useful in computing atmospheric properties that depend on wind speed, such as the Greenwood frequency, which is covered in the rule “Bandwidth Requirement for Adaptive Optics” in the Astronomy chapter.

Reference

1. S. R. Robinson, Ed., Emerging Systems and Technologies, p. 228, Vol. 8 of The Infrared and Electro-Optical Systems Handbook, J. S. Accetta and D. L. Shumaker, Eds., Infrared Information Analysis Center and SPIE Press (1993).

Index of Refraction of Air

The index of refraction of air can be approximated by

$$n = 1 + 77.6\left(1 + 7.52\times10^{-3}\,\lambda^{-2}\right)\frac{P}{T}\times10^{-6},$$

where P is the pressure in millibars, λ is the wavelength in microns, and T is the temperature in kelvin.1

Discussion

Algorithms for expressing the index of refraction of air can be very important for computing ray traces related to imaging through the atmosphere, particularly descriptions of color properties of the atmosphere (rainbows, glory, and so on). In addition, because the optical effect of turbulence in the atmosphere is derived from variations in the index of refraction, the above expression can be useful in Cn² scaling, as described in Brown et al.1 The reference points out that fluctuations in the index depend on wavelength and temperature according to

$$\Delta n = -77.6\left(1 + 7.52\times10^{-3}\,\lambda^{-2}\right)\frac{P}{T^2}\times10^{-6}\,\Delta T.$$

This result is obtained by simply taking the derivative of the equation in the rule with respect to temperature. This type of result can be useful when dealing with observed scintillation and pointing jitter induced by local temperature fluctuations, as described in other rules in this chapter. In addition, one can estimate the change in index as pressure or temperature changes occur. An experimenter dealing with changing weather conditions while trying to perform long-path experiments will find this simple rule useful.


Hutt2 has a simple method for estimating the density ρ of air:

$$\rho = 1.286 - 0.00405\,T.$$

Here, density is in kg/m³, and T is in degrees Celsius. Sarazin4 suggests a small correction to the first equation of this rule:

$$n = 1 + 77.6\left(1 + 7.52\times10^{-3}\,\lambda^{-2}\right)\frac{P}{T}\times10^{-6}\left(1 + \frac{4810\,e}{PT}\right),$$

where e is the water vapor pressure in millibars. Another approach3 defines the index as a function of frequency rather than wavelength:

 526.3 12 11.69  22   Pdry  N dry  ( n  1)10 6   237.2  2  2  , 1   2 2  2  T   where N is the refractivity as defined in the equation (Ndry being the index for dry air), n is the index of refraction, is the wave number in cm–1,  is 114,000 cm–1,  is 62,400 cm–1, Pdry is the dry air pressure in kPa, and T is the temperature in kelvin. Finally, Kamerman5 compares different strategies for estimating the index of refraction of dry air and water vapor. They use the concept of reduced refraction A() to simplify the equations. Reduced refraction uses the fact that the index usually has the form

$$n = 1 + \frac{P}{T}A(\lambda),$$

where A(λ) is defined below. For dry air, we can use either of the following expressions:6

$$A_{D1} = 2.84382\times10^{-9}\left(8342.13 + \frac{2406030}{130 - \lambda^{-2}} + \frac{15997}{38.9 - \lambda^{-2}}\right),$$

$$A_{D2} = 2.69578\times10^{-9}\left(28760.4 + \frac{162.88}{\lambda^2} + \frac{1.36}{\lambda^4}\right).$$

That is, for dry air, n − 1 can be expressed as either of the following forms:

$$n - 1 = \frac{P}{T}\,2.84382\times10^{-9}\left(8342.13 + \frac{2406030}{130 - \lambda^{-2}} + \frac{15997}{38.9 - \lambda^{-2}}\right),$$

$$n - 1 = \frac{P}{T}\,2.69578\times10^{-9}\left(28760.4 + \frac{162.88}{\lambda^2} + \frac{1.36}{\lambda^4}\right).$$


These two algorithms match each other and the one provided in the rule almost exactly. Because the version in the rule is the simplest of the three, it is the one that most people will want to use. The reduced refractivity for water (subscript W) is provided by either of the two following formulae:6

$$A_{W1} = 2.84382\times10^{-9}\left(24580.4 + \frac{162.88}{\lambda^2} + \frac{1.36}{\lambda^4}\right),$$

$$A_{W2} = 2.24756\times10^{-7}\left(295.235 + \frac{2.6422}{\lambda^2} - \frac{0.03238}{\lambda^4} + \frac{0.004028}{\lambda^6}\right).$$

Note that these two forms differ by about 3 percent. This difference seems not to have been resolved by the community, as both versions are in use. Using the formulas presented in the A(λ) formulation, we can compute the index of refraction as

$$n = 1 + \frac{1}{T}\left[A_D(\lambda)P_D + A_W(\lambda)P_W\right],$$

where D and W refer to the subscripts in the equations above, and PD and PW are the partial pressures of dry air and water vapor, respectively. Reference 7 provides additional relationships. The absolute humidity is

$$\rho = \frac{1285\,RH}{T}\exp\left[5412\left(\frac{1}{273} - \frac{1}{T}\right)\right]\ \mathrm{g/m^3},$$

where RH is the relative humidity. Absolute humidity can be derived from the dew point temperature Td:

$$\rho = \frac{1285}{T}\exp\left[5412\left(\frac{1}{273} - \frac{1}{T_d}\right)\right]\ \mathrm{g/m^3}.$$

Relative humidity can be derived from the absolute humidity:

$$RH = \frac{\rho\,T}{1285}\exp\left[5412\left(\frac{1}{T} - \frac{1}{273}\right)\right],$$

and RH can be derived from the dew point temperature and ambient temperature:

$$RH = \exp\left[5412\left(\frac{1}{T} - \frac{1}{T_d}\right)\right].$$

Isterling8 provides a table with an interesting historical note: the evolution of estimates of the index of refraction of air from 1918 to 1972.
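The simple form of the rule and its temperature derivative are short enough to code directly. This minimal sketch is ours, with assumed sea-level example conditions:

def n_air(P_mB, T_K, wavelength_um):
    # n = 1 + 77.6*(1 + 7.52e-3/lambda^2)*(P/T)*1e-6 (the rule above)
    return 1.0 + 77.6 * (1.0 + 7.52e-3 / wavelength_um**2) * (P_mB / T_K) * 1e-6

def dn_dT(P_mB, T_K, wavelength_um):
    # Sensitivity to temperature: derivative of the rule with respect to T
    return -77.6 * (1.0 + 7.52e-3 / wavelength_um**2) * (P_mB / T_K**2) * 1e-6

print(n_air(1013.0, 288.0, 0.55))   # ~1.00028
print(dn_dT(1013.0, 288.0, 0.55))   # ~ -1e-6 per kelvin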


References

1. W. Brown et al., “Measurement and Data-Processing Approach for Estimating the Spatial Statistics of Turbulence-Induced Index of Refraction Fluctuations in the Upper Atmosphere,” Appl. Opt. 40(12), 1863 (2001).
2. D. L. Hutt, “Modeling and Measurements of Atmospheric Optical Turbulence over Land,” Opt. Eng. 38(8), 1288–1295 (1999).
3. B. Kachoyan and C. Morgan, “Infrared Propagation in Radar Frequency Ducting Conditions,” Australian Maritime Operations Division Aeronautical and Maritime Research Laboratory, apps.dtic.mil/dtic/tr/fulltext/u2/b225421.pdf (2020).
4. M. Sarazin, Atmospheric Turbulence in Astronomy, ucolick.org/~max/AY289/Lectures%202016/Lecture%202%202016%20Turbulence/Lecture2_2016_v1.pdf (2020).
5. G. Kamerman, “Laser Radar,” p. 88, in Atmospheric Transmission, M. Thomas and D. Duncan, Eds., Vol. 2 of The Infrared and Electro-Optical Systems Handbook, J. S. Accetta and D. L. Shumaker, Eds., Infrared Information Analysis Center and SPIE Press (1993).
6. S. van der Werf, “Ray Tracing and Refraction in the Modified U.S. 1976 Atmosphere,” Appl. Opt. 42(3), 354–366 (2003).
7. “Military Handbook Quantitative Description of Obscuration Factors for Electro-Optical and Millimeter Wave Systems,” DOD-HDBK-178(ER), everyspec.com/DoD/DoD-HDBK/DOD-HDBK-178_11056 (2020).
8. J. Isterling, “Electro-Optical Propagation through Highly Aberrant Media,” Ph.D. Dissertation, University of Adelaide, digital.library.adelaide.edu.au/dspace/bitstream/2440/64292/8/02whole.pdf (2010).

Fried Parameter

The Fried parameter r0 is computed as follows:1

$$r_0 = \left[0.423\,k^2 \sec\beta \int_0^L C_n^2(z)\,dz\right]^{-3/5},$$

where k is the propagation constant of the light being collected and equals 2π/λ, β is the zenith angle of the descending light waves, L is the pathlength through which the light is collected, Cn² is the atmospheric refractive structure function, λ is the wavelength, and z is a dummy variable that represents the path over which the light propagates.

Discussion

Fried developed this parameter by determining a characteristic scale of an atmospheric path, commonly referred to as the Fried parameter, pronounced “r zero.” A diffraction-limited aperture of diameter r0 will have the same angular resolution as that imposed by atmospheric turbulence; i.e., the Fried parameter is the radius in which the incoming wavefront is approximately planar. In visible wavelengths, it ranges from ~3–30 cm. A small enough telescope will perform at its design limit even if the presence of the atmosphere is considered. For a vertical path with a typical Cn² profile, r0 = ~15 cm.
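For the special case of constant Cn², the integral collapses and the closed forms quoted elsewhere in this book apply. A minimal sketch (ours), using the plane-wave and spherical-wave coefficients given in the rule “Number of Actuators in an Adaptive Optic” in the Astronomy chapter and in the next rule:

import math

def fried_r0(Cn2, L, wavelength, spherical=False):
    # Constant-Cn2 forms: r0 = 1.68*(k^2*Cn2*L)^(-3/5) for plane waves,
    # r0 = 3.024*(k^2*Cn2*L)^(-3/5) for spherical waves
    k = 2 * math.pi / wavelength
    coeff = 3.024 if spherical else 1.68
    return coeff * (k**2 * Cn2 * L) ** (-3 / 5)

# 1000-m horizontal path, Cn2 = 1e-15 m^(-2/3), 600-nm light -> ~0.1 m
print(round(fried_r0(1e-15, 1000.0, 600e-9), 3))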


Continued development of the theory has led to new approximations and more accurate characterizations of the impact of the atmosphere on light propagation. This is particularly true for astronomical telescopes and free-space laser applications, such as optical communication. Proper characterization of Cn² is necessary to get a good estimate of the Fried parameter. Note also that there is wavelength dependence in the results, hidden in the parameter k, which equals 2π/λ. Unfortunately, characterization of Cn² is an imprecise empirical exercise. Astronomical sites measure it in a limited way, but it varies with location, season, viewing angle, and weather conditions, so it is usually only approximated. It also can be affected by such localized effects as the telescope dome and air flow within the telescope. Of course, attention must be paid to using the correct units. Because Cn² is often expressed as m⁻²/³, meters are the preferred units. The Fried parameter is often used in adaptive optics to determine the required number of active cells and the number of laser-generated guide stars necessary for some level of performance. This is discussed in the Astronomy chapter in the rule “Number of Actuators in an Adaptive Optic.” The parameter can be estimated in a simple way. For example, the resolved angle of a telescope can be expressed as approximately λ/r0, or about 3.3 microradians for an r0 of 15 cm and a wavelength of 0.5 micron. Note that this result is consistent with those in the rule “Atmospheric Seeing” in the chapter on Astronomy. Acton3 notes that r0 may be estimated by a number of practical methods, the simplest of which relies on measurement of image motion:

$$r_0 = 0.346\left(\frac{\lambda^2}{\sigma^2 D^{1/3}}\right)^{3/5},$$

where D is the telescope aperture diameter, and σ is the rms angular image motion.

References

1. D. Fried, “Optical Resolution Through a Randomly Inhomogeneous Medium for Very Long and Very Short Exposures,” J. OSA 56(10), 1372–1379 (1966).
2. C. Aleksoff et al., “Unconventional Imaging Systems,” p. 132, in S. R. Robinson, Ed., Emerging Systems and Technologies, Vol. 8 of The Infrared and Electro-Optical Systems Handbook, J. S. Accetta and D. L. Shumaker, Eds., Infrared Information Analysis Center and SPIE Press (1993).
3. D. S. Acton, “Simultaneous Daytime Measurements of the Atmospheric Coherence Diameter with Three Different Methods,” Appl. Opt. 34(21), 4526–4529 (1995).

Horizontal-Path Fried Parameter

The general computation of the Fried parameter is a method for estimating the coherence length perpendicular to the path of light in the atmosphere. Here we show nomographs for estimating the Fried parameter (often abbreviated as r0) for a horizontal path length.


Discussion

For a horizontal path with constant Cn²,

$$r_0 = 1.68\left(C_n^2 L k^2\right)^{-3/5}.$$

A convenient way to obtain estimates for this horizontal path is shown in Fig. 2.9.1 The dotted lines show that a 1000-m path in an atmosphere characterized by a refractive index structure constant Cn² of 10⁻¹⁵ m⁻²/³ results in r0 of about 0.1 m for a wavelength of 600 nm. Figure 2.10 shows the contrast between day and night values of r0 and how it depends on the zenith angle and wavelength for a typical profile of Cn².

Reference

1. S. R. Robinson, Ed., Emerging Systems and Technologies, pp. 181–182, Vol. 8 of The Infrared and Electro-Optical Systems Handbook, J. S. Accetta and D. L. Shumaker, Eds., Infrared Information Analysis Center and SPIE Press (1993).

Figure 2.9 This nomograph shows r0 as a function of wavelength, Cn2, and path length.1


Figure 2.10 Coherence length r0 as a function of wavelength at various altitudes for day and night conditions.1

Phase Error Estimation

The maximum phase error induced by the atmosphere can be written as

$$\text{maximum phase error} = 0.57\,k\sqrt{L\,C_n^2\,D^{5/3}}.$$

Here, L is the distance through which the aberrated wave propagates, Cn² is the atmospheric structure function, D is the aperture diameter, and k is the wave propagation constant 2π/λ.

Discussion

The stroke of actuators in a deformable mirror system must be able to accommodate the phase errors induced by the atmosphere and aberrations in the telescope. The rule shown here compares with a discourse by Tyson2 in which he shows the phase effects of various optical aberration terms induced by the atmosphere. Some algebra shows that what is shown above (as well as the terms described by Tyson) can be put in a form that includes the ratio of the aperture diameter and the Fried parameter r0. We can compare the various results using the following equation:

$$\sigma^2 = n\left(\frac{D}{r_0}\right)^{5/3}.$$


We will assume that r0 is expressed as (0.423 k² Cn² z)⁻³/⁵, the value it takes for a constant Cn². Tyson2 shows the following values for n:

Variance in Phase (σ²)                                        Value of n
Piston                                                        1.0299
One-dimensional tilt                                          0.582
Two-dimensional tilt                                          0.134
Focus                                                         0.111
The rule above, taking into account that it is described
as the maximum (which we might assume is 3σ)                  0.256

There is not complete agreement between the rule and Tyson, but all values fall in the same range. Consider the case where D = 1 m, Cn² = 2 × 10⁻¹⁴ m⁻²/³, L = 5000 m (approximately the distance in the atmosphere over which turbulence is a significant factor), and λ = 1 micron. From these numbers, we get about 35 radians (about 6 waves, or about 3–6 microns) for the maximum piston stroke needed to accommodate the effects of the atmosphere.

References

1. R. Mali et al., “Development of Microelectromechanical Deformable Mirrors for Phase Modulation of Light,” Opt. Eng. 36(2), 542–548 (1997).
2. R. Tyson, Principles of Adaptive Optics, p. 79, Academic Press (1991).
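As a quick numerical check of the worked example in this rule, the following minimal sketch (ours) evaluates the maximum-phase-error expression:

import math

def max_phase_error(D, Cn2, L, wavelength):
    # maximum phase error ~ 0.57 * k * sqrt(L * Cn2 * D^(5/3)), in radians
    k = 2 * math.pi / wavelength
    return 0.57 * k * math.sqrt(L * Cn2 * D ** (5 / 3))

# D = 1 m, Cn2 = 2e-14 m^(-2/3), L = 5000 m, 1-um light
phi = max_phase_error(1.0, 2e-14, 5000.0, 1e-6)
print(round(phi, 1), "rad =", round(phi / (2 * math.pi), 1), "waves")  # ~36 rad, ~6 waves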

Day vs. Night Scintillation Models for Laser Beams

The scintillation of upward-pointed laser beams is different between day and night. The two equations below show that difference. The beams are pointed up at an elevation angle φ from an altitude H along a path of length L in an atmosphere with a turbulence parameter that varies with altitude, Cn²(H):1

$$\sigma_x^2 = 0.56\,k^{7/6}\int_0^L C_n^2(H)\left(1 + \frac{z\sin\phi}{H}\right)^{4/3}\left(\frac{z}{L}\right)^{5/6}\left(L - z\right)^{5/6} dz \quad \text{for daytime operation,}$$

$$\sigma_x^2 = 0.56\,k^{7/6}\int_0^L C_n^2(H)\left(1 + \frac{z\sin\phi}{H}\right)^{2/3}\left(\frac{z}{L}\right)^{5/6}\left(L - z\right)^{5/6} dz \quad \text{for nighttime operation,}$$

where σx² is the phase variance created by the atmosphere. Note that the difference occurs in the exponent of the term (1 + z sinφ/H) in the integral.

Discussion

The analysis covers the weak perturbation regime. Two versions are usually considered. The first is the case that applies for plane waves:

$$\sigma_x^2 = 0.56\,k^{7/6}\int_0^L C_n^2(z)\left(L - z\right)^{5/6} dz.$$

In the second case, the waves are assumed to be spherical (as emanating from a point):

$$\sigma_x^2 = 0.56\,k^{7/6}\int_0^L C_n^2(z)\left(\frac{z}{L}\right)^{5/6}\left(L - z\right)^{5/6} dz.$$

L is the distance the beam travels, k is the wave number (2π/λ), and Cn²(z) is the refractive index structure constant along the path from the transmitter to the receiver. Variance in the irradiance from a beam due to turbulence can be estimated as follows for the special case of a horizontal beam path. There is a relationship between the phase variance σx² and σI² (which is the variance in the intensity of the beam):

$$\sigma_I^2 = e^{4\sigma_x^2} - 1 \approx 4\sigma_x^2 \quad \text{for small } \sigma_x^2,$$

which implies a maximum for σx². If we choose a maximum modulation of the intensity that occurs when σI is such that the intensity falls to zero, then σx has a maximum allowable value of 0.5. This is referred to as saturated scintillation. See the rule in this chapter related to aperture averaging for more discussion and references related to this topic. For the case in which Cn² is constant (as might occur for a beam path parallel to the ground),

$$\sigma_x^2 = 0.31\,C_n^2\,k^{7/6}\,L^{11/6} \quad \text{for plane waves,}$$

$$\sigma_x^2 = 0.124\,C_n^2\,k^{7/6}\,L^{11/6} \quad \text{for spherical waves.}$$

In each case, the standard deviation of the variation in irradiance is the square root of the term on the right. The range of applicability of these equations is σx² < 0.3. Note that there is a related result from Belen'kii et al.3 that shows that an estimate for the edge motion for horizontal path observation is

$$\theta = \left[\frac{3}{8}\,2.91\,k^2\,C_n^2\,L^{8/3}\right]^{-3/5}.$$

This measure (which is exactly equal to the isoplanatic angle for an atmosphere with a uniform Cn²) is not quite right since it is a measure of the effect of all aberrations induced by the atmosphere, whereas edge motion derives almost entirely from tilt. For most cases, however, the equation above is a good place to start in estimating angle-of-arrival effects. For example, use a typical Cn² value of 10⁻¹⁴, a pathlength of 375 m, and a wavelength of 0.5 micron to get an irradiance variance of about 0.12, which is equivalent to a standard deviation of about 35-percent variation in the intensity at the receiver. A word of caution should accompany the results shown above. Since the results derive from approximations developed from perturbation theory, which assumes that the


effects of turbulence are small, it can be argued that the results are only reliable for values of σx² less than about 0.3. Shapiro et al.1 provide the additional details necessary to deal with beam paths that are not horizontal. To do so requires that Cn² be known or estimated as a function of altitude.
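The constant-Cn² forms reproduce the worked example above directly. A minimal sketch (ours):

import math

def rytov_variance(Cn2, wavelength, L, spherical=False):
    # sigma_x^2 = 0.31 (plane) or 0.124 (spherical) * Cn2 * k^(7/6) * L^(11/6);
    # valid for sigma_x^2 < ~0.3
    k = 2 * math.pi / wavelength
    coeff = 0.124 if spherical else 0.31
    return coeff * Cn2 * k ** (7 / 6) * L ** (11 / 6)

# Example from the text: Cn2 = 1e-14 m^(-2/3), L = 375 m, 0.5-um wavelength
sx2 = rytov_variance(1e-14, 0.5e-6, 375.0)
sI2 = math.exp(4 * sx2) - 1          # irradiance variance, ~4*sx2 when small
print(round(sx2, 3), round(sI2, 2), round(math.sqrt(sI2), 2))   # ~0.03, ~0.13, ~0.36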

References

1. J. H. Shapiro, B. Capron, and R. Harney, “Imaging and target detection with a heterodyne-reception optical radar,” Appl. Opt. 20(19) (1981).
2. W. L. Wolfe, “Imaging Systems,” p. 6-19, Ch. 19 of The Infrared Handbook, W. L. Wolfe and G. F. Zissis, Eds., Environmental Research Institute of Michigan (1978).
3. M. Belen'kii, J. Stewart, and P. Gillespie, “Turbulence-induced edge image waviness: theory and experiment,” Appl. Opt. 40(9), 1321 (2001).
4. R. Tyson, Principles of Adaptive Optics, Academic Press, p. 131 (1998).
5. A. Ishimaru, “The Beam Wave Case and Remote Sensing,” p. 129, in Laser Beam Propagation in the Atmosphere, J. W. Strohbehn, Ed., Springer-Verlag (1978).

Resolution Looking Down

Fried shows that the down-looking resolution is

$$\text{resolution} = \frac{\lambda L}{r_0},$$

where L is the altitude of the observing platform (which can be in space), λ is the wavelength, and r0 is the Fried parameter. For typical atmospheric conditions and for a telescope placed above about a 10-km altitude, the resolution is limited to about 5 cm, regardless of the size of the observing telescope.

Discussion

David Fried was a giant in the field of the optics of the atmosphere, a notoriously complex subject. Much of this work would appear to be highly abstract, but his work led to concrete applications. In a seminal paper,1 he developed a method for computing the resolution of a space or airborne sensor as a function of atmospheric conditions and observing altitude. Central to the computation is the determination of the Fried parameter r0. This is a measure of the impact that atmospheric variations in the index of refraction have on imaging. A version applicable to the propagation of plane waves is

r0 = [0.423 k² sec(θ) ∫₀^L Cn²(z) dz]^(−3/5),

where k is the wave number 2π/λ, θ is the viewing angle that we will take to be vertical in this calculation so that sec(θ) is unity, and Cn² is a characteristic function of the turbulence in the atmosphere, discussed elsewhere in this chapter.


Fried chose to describe the performance of imaging systems looking at Earth as if the source were a point on the ground. This requires the use of a variant of r0 that describes correlation for a spherical wave propagating to the receiver. For this, the proper definition of r0 is

r0 = [0.423 k² sec(θ) ∫₀^L Cn²(z) (z/L)^(5/3) dz]^(−3/5).

The integral ranges from Earth’s surface (z = 0) to the altitude of the observer (z = L). Cn2 is estimated by any one of a number of models embraced by the community. For the purpose of this calculation, we follow the lead of Fried1 and use the following simple version of Cn2:

C n2  4.2

h 1/3 e  h /3200 . 1014

The value 3200 is the scale height that calibrates this model as a function of altitude. These calculations can be easily repeated for any of the other expressions of Cn² found elsewhere in this chapter. Figure 2.11 plots r0 as a function of altitude for λ of 0.4 micron. A larger r0 indicates a smaller adverse impact from the atmosphere (since the correlation length is longer). An interesting feature of this result is that it does not depend explicitly on the size of the aperture involved.
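The integral is easy to evaluate numerically. A minimal Python sketch (ours; the step count and midpoint integration are arbitrary choices), assuming Fried's Cn² model above and a vertical down-looking path:

import math

def cn2_fried(h):
    """Fried's simple Cn^2 model; h in meters, 3200-m scale height."""
    return 4.2e-14 * h**(-1/3) * math.exp(-h / 3200)

def r0_spherical(wavelength_m, alt_m, steps=2000):
    """Spherical-wave Fried parameter for a vertical down-looking path."""
    k = 2 * math.pi / wavelength_m
    dz = alt_m / steps
    integral = sum(cn2_fried(max(z, 1.0)) * (z / alt_m)**(5/3) * dz
                   for z in (dz * (i + 0.5) for i in range(steps)))
    return (0.423 * k**2 * integral)**(-3/5)

r0 = r0_spherical(0.4e-6, 10e3)
print(r0, 0.4e-6 * 10e3 / r0)  # r0 (m) and limiting resolution (m); a few cm at 10 km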

Figure 2.11 r0 as a function of altitude for the Cn2 described in the text.


Figure 2.12 Resolution viewing downward from various altitudes.

Figure 2.12 illustrates the performance of Earth-viewing sensors based on the Fried model and the Cn² model we have chosen, also using a wavelength of 0.4 micron. Of course, the actual resolution will depend only on the real properties of the atmosphere. The figure shows that above about 10 km, resolution is no better than about 5 cm. This result applies no matter what size aperture may be employed. Fried estimates that the overall sensitivity of the result to wavelength is small; resolution varies as λ^(−1/5) so that the change over the optical spectrum from this term alone is about 10 percent.

Reference

1. D. Fried, "Limiting Resolution Looking Down Through the Atmosphere," J. OSA 56(10) (1966).

Additional References

R. Tyson, Principles of Adaptive Optics, Academic Press, p. 33 (1998). M. Fitzmaurice, “Experimental Investigations of Optical Propagation in Atmospheric Turbulence,” NASA TR R-370, p. 41 (1971).

Isoplanatic Angle

When atmospheric turbulence (measured by Cn²) is constant, estimate the isoplanatic angle θ0 by using the Fried parameter r0 and the distance L to the layer in which the atmospheric turbulence is significant:1

θ0 ≈ 0.6 r0/L,


Table 2.7 Values for the Paranal observing site.2

Parameter            Best 5%   Best 20%   50%    Mean
Seeing (arcsec)      0.43      0.56       0.75   0.86
200-mB wind (m/sec)  12        19         29     32
τ0 (msec)            8.5       5.8        3.6    3.9
θ0 (arcsec)          4.1       3.3        2.6    2.6

where L is the distance to the layer of turbulence,

r0 = [0.423 k² Cn² L]^(−3/5),

and k is the propagation constant 2π/λ.

Discussion

The isoplanatic angle defines the range of viewing angles over which downwelling light has approximately the same phase. In other words, the isoplanatic angle is the angle between two stars for which the light from both stars passes through the same turbulent region. The detailed computation of isoplanatic angle is

θ0 = [2.91 k² ∫₀^L Cn²(z) z^(5/3) dz]^(−3/5).

The best observing sites in the world have isoplanatic angles measured in arc seconds. Table 2.7 illustrates results for the Paranal site where ESO built its massive Very Large Telescope. The 200-mB wind parameter refers to the wind velocity at an altitude where the pressure is 200 millibars, and τ0 is the mean duration over which the wind-driven wavefront distortion remains essentially fixed (the atmospheric time constant).
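For uniform turbulence, the full integral and the 0.6 r0/L shortcut can be checked against each other with a few lines of Python (our sketch; values are illustrative):

import math

def theta0_uniform(cn2, wavelength_m, layer_m):
    """Isoplanatic angle (rad) for uniform Cn^2 out to distance L.
    Uses int z^(5/3) dz = (3/8) L^(8/3)."""
    k = 2 * math.pi / wavelength_m
    return (2.91 * k**2 * cn2 * (3/8) * layer_m**(8/3))**(-3/5)

def theta0_shortcut(cn2, wavelength_m, layer_m):
    """0.6 r0 / L, with the uniform-turbulence r0 of this rule."""
    r0 = (0.423 * (2 * math.pi / wavelength_m)**2 * cn2 * layer_m)**(-3/5)
    return 0.6 * r0 / layer_m

# The two results agree to within a few percent:
print(theta0_uniform(1e-14, 0.5e-6, 5e3), theta0_shortcut(1e-14, 0.5e-6, 5e3))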

1. R. Tyson, Principles of Adaptive Optics, Academic Press, p. 83 (1991).
2. M. Sarazin and A. Tokovinin, "The Statistics of Isoplanatic Angle and Adaptive Optics Time Constant derived from DIMM Data Venice 2001," Beyond Conventional Adaptive Optics Conference, eso.org/gen-fac/pubs/astclim/papers/venice2001/venice2001-msarazin.pdf (2020).

Strehl Ratio of the Atmosphere

The performance of correction of the atmospheric aberrations can be computed as

SR  exp  2M  , where M2 is the residual wavefront error as a function of the number of modes corrected, as described below.1


Discussion

Not surprisingly, the performance of an adaptive optics system depends on the quality and number of aberration modes detected and corrected. Winocur1 has data on the correction coefficients for the first 21 Zernike modes and a formula for the higher-order modes.1,5 Table 2.8 shows how the number of corrective modes improves the resulting Strehl ratio (SR) performance. The cell entries are the values of e^(−σM²) for two different parameters: (1) the levels of aberration correction measured as the number of aberration modes M that are corrected, and (2) the relative size of the aperture to the Fried parameter D/r0. M is provided in the left column. In each cell,

σM² = (weighting coefficient) × (D/r0)^(5/3).

Table 2.8 Parameters used to determine the aberrations induced by the atmosphere. Noll5 provides the weighting coefficients.

Modes       Weighting         Zernike                    SR at D/r0 =
Corrected M Coefficient n(M)  Aberration Names           1      5      10
1           1.0299            Piston                     0.36   0.00   0.00
2           0.5820            1D Tilt x                  0.56   0.00   0.00
3           0.1340            2D Tilt y                  0.87   0.14   0.00
4           0.1110            Defocus                    0.89   0.20   0.01
5           0.0880            Astigmatism and defocus    0.92   0.28   0.02
6           0.0648            Astigmatism and defocus    0.94   0.39   0.05
7           0.0587            Coma and x-tilt            0.94   0.42   0.07
8           0.0525            Coma and y-tilt            0.95   0.46   0.09
9           0.0463            Coma                       0.95   0.51   0.12
10          0.0401            Coma                       0.96   0.56   0.16
11          0.0377            Third-order spherical      0.96   0.58   0.17
12          0.0352                                       0.97   0.60   0.20
13          0.0328                                       0.97   0.62   0.22
14          0.0304                                       0.97   0.64   0.24
15          0.0279                                       0.97   0.67   0.27
16          0.0267                                       0.97   0.68   0.29
17          0.0255                                       0.97   0.69   0.31
18          0.0243                                       0.98   0.70   0.32
19          0.0232                                       0.98   0.71   0.34
20          0.0220                                       0.98   0.72   0.36
21          0.0208                                       0.98   0.74   0.38


For modes higher than 21, Ref. 1 provides the following equation:

SR = exp[−0.2944 M^(−√3/2) (D/r0)^(5/3)].

For example, for M = 3 and D/r0 = 5,

σ3² = 0.134 (D/r0)^(5/3) = 1.96,

and e^(−1.96) = 0.14, which is the value shown in the table. This value is used to compute the SR using

SR = e^(−σM²).

Aberration names are derived from SPIE references. Another way to view these results is

SR = e^(−n(M)(D/r0)^(5/3)),

as depicted in Fig. 2.13.

Figure 2.13 The Strehl ratio improvement achieved by perfectly compensating the first n atmospheric Zernike modes using the data in Table 2.8 and the equation SR = exp[−n(M)(D/r0)^(5/3)].


With this result, we can determine the parameters that define a desired value of the SR in terms of the number of aberrations to be corrected and the ratio of the aperture to the Fried parameter:

D/r0 = [−ln(SR)/n(M)]^(3/5).

This type of result is shown in Figs. 2.14-2.15. Tyson7 points out that the uncompensated turbulence wavefront variance is

σ²uncomp = 1.02 (D/r0)^(5/3)

and that removal of the tilt component leads to the following result:

σ²tiltcomp = 0.134 (D/r0)^(5/3).

Figure 2.14 The Strehl ratio that can be achieved as a function of the modes corrected and the relative size of the aperture to the Fried parameter r0. The results are tabulated in terms of the Zernike system of categorizing optical aberrations. The figure makes it clear that it takes many modes of correction for high-SR performance for large apertures.


That is, removing tilt alone can improve the SR by a factor of

improvement factor = e^(−0.134(D/r0)^(5/3)) / e^(−1.02(D/r0)^(5/3)) = e^(0.886(D/r0)^(5/3)).

Andrews and Phillips8 provide a different form of the SR that they find more accurate for a wider range of phase distortions:

SR = [1 + (D/r0)^(5/3)]^(−6/5).
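The two forms are easy to compare numerically; a short sketch (ours), using the uncompensated coefficient of 1.02 in the exponential form:

import math

def sr_exponential(n_m, d_over_r0):
    """SR = exp[-n(M) (D/r0)^(5/3)], the modal-correction form."""
    return math.exp(-n_m * d_over_r0**(5/3))

def sr_andrews(d_over_r0):
    """SR = [1 + (D/r0)^(5/3)]^(-6/5), the Andrews and Phillips form."""
    return (1 + d_over_r0**(5/3))**(-6/5)

for x in (0.5, 1, 2, 5):
    print(x, round(sr_exponential(1.02, x), 3), round(sr_andrews(x), 3))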

  Figure 2.15 This figure provides the foundation for a trade study that optimizes a design. Given a value of r0, the designer can trade the aperture diameter and number of Zernikes removed. Removing more Zernikes means a more complex and expensive adaptive optic system; reducing the number of Zernikes removed can be compensated by choosing a smaller value of D/r0 that will reduce the radiometric performance of the system. The figure also shows that a very high Strehl ratio requires both a high level of aberration correction and an aperture that is smaller than the local Fried parameter.


Figure 2.16 Improvement in the Strehl ratio by removing tilt as a function of D/r0.

The benefit of removing tilt can be seen in Fig. 2.16, which shows that for more turbulent media (smaller r0) the benefit of removing tilt is significant. References 1. J. Winocur, “Modal compensation of atmospheric turbulence induced wave front aberrations,” Appl. Opt. 21(3) (1982).   2. S. R. Robinson, Ed., Emerging Systems and Technologies, p. 207, Vol. 8 of The Infrared and Electro-Optical Systems Handbook, J. S. Accetta and D. L. Shumaker, Eds., Infrared Information Analysis Center and SPIE Press (1993). 3. Marcos A. van Dam et al., “The W. M. Keck Observatory Laser Guide Star Adaptive Optics System: Performance Characterization,” Publications of the Astronomical Society of the Pacific 118, 310–318, (2006). 4. J. H. Churnside, “Aperture averaging of optical scintillations in the turbulent atmosphere,” Appl. Opt. 30(15) (1991). 5. R. Noll, “Zernike polynomials and atmospheric turbulence,” J. OSA 66(3), 207 (1976). 6. P. Bely, Ed., The Design and Construction of Large Optical Telescopes, p. 338, Springer (2003). 7. R. Tyson, Principles of Adaptive Optics, CRC Press, p. 36 (2010). 8. L. C. Andrews and R. L. Phillips, Laser Beam Propagation through Random Media, SPIE Press (1998).  

56

Chapter 2

Aperture Averaging Atmospheric scintillation effects can be reduced through the use of aperture averaging, which increases with aperture size. Discussion

Scintillation effects are a result of turbulence in the atmosphere and result in the time-varying intensity of signals being detected. The effect is a function of the wavelength of the transmitted light, as well as the size of the receiving aperture. Increasing the size of the receiver aperture reduces the power variance. The common metric to quantify this effect determines the variance of a receiver of aperture D relative to the variance that would occur if the aperture were a simple point (no averaging). Aperture averaging benefits from the nonlinear reduction in scintillation with an increasing aperture size. In many cases, aperture averaging is related to optical communication systems since there is an effort to make the aperture just big enough to support effective power detection and scintillation suppression without making the aperture unnecessarily large. In this rule, we show the wavelength and aperture dependence of the averaging factor (often called F). Several forms exist for F. Yura and McKinley1 show that a relatively simple equation can be used to represent the scintillation averaging effect as a function of aperture size:

F = 1 / [1 + 1.1 (D²/(λ h0 sec θ))^(7/6)],

where D is the aperture size, λ is the wavelength, h0 is the scale height (computed below), and θ is the zenith angle. h0 is defined as

h0 = [∫₀^L Cn²(ξ) ξ² dξ / ∫₀^L Cn²(ξ) ξ^(5/6) dξ]^(6/7)

and indicates a height that exhibits the average Cn² (the atmospheric turbulence structure function discussed in detail elsewhere in this chapter). L is the length of the path over which the light propagates. The integrals are performed along the path. The parameter ξ is zero at the receiver and equal to L at the light source. Another way to express the aperture averaging parameter F is2

F = 1 / [1 + (D/s0)^(7/3)],

where s0 is defined as


so  0.96 ho sec . Yuksel et al.5 provide other approaches to computing F, one of which is from Andrews:6   kD 2   F  1  1.062    4L  

7 / 6

.

It applies to weak turbulence situations. This version is convenient for illustrating the wavelength and aperture size dependence of F (shown in Fig. 2.17). The results show that the greatest opportunity for aperture averaging (smallest averaging factor) occurs for the shortest wavelengths and the largest apertures. For longer wavelengths, the effects of turbulence are reduced, and aperture averaging is less effective. In addition, smaller apertures have less effect in aperture averaging, as expected.
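A minimal sketch (ours; parameter values are illustrative) of the Andrews weak-turbulence form, showing the aperture dependence:

import math

def averaging_factor(d_m, wavelength_m, path_m):
    """Andrews' weak-turbulence aperture-averaging factor F."""
    k = 2 * math.pi / wavelength_m
    return (1 + 1.062 * k * d_m**2 / (4 * path_m))**(-7/6)

for d in (0.01, 0.05, 0.15):
    # larger apertures give smaller F, i.e., more scintillation suppression
    print(d, round(averaging_factor(d, 1.55e-6, 2000), 4))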

Figure 2.17 The combined effects of aperture size and wavelength in aperture averaging (vertical axis). Larger apertures result in a lower value of F (more suppression of scintillation). The wavelength ranges from 0.4–3.5 microns, and the aperture diameter ranges from near 0 to 0.15 m. A pathlength of 2000 m was assumed.


Churnside3 provides additional estimates of the aperture averaging factor for a variety of operating conditions. The result for some simplifying assumptions is

F = 1 / [1 + 1.07 (kD²/(4L))^(7/6)].

Another estimate is provided by Isterling,4 where σI²(D) is the irradiance flux variance over the circular aperture D:

σI²(D) = exp{ 0.49σI²/[1 + 0.18d² + 0.56σI^(12/5)]^(7/6) + 0.51σI²[1 + 0.69σI^(12/5)]^(−5/6)/[1 + 0.90d² + 0.62d²σI^(12/5)] } − 1,

where

d = [kD²/(4L)]^(1/2).

Additional computations are needed to fully determine σI²(D). The first step is to determine the influence of the pathlength as follows:

σχ²(L) = 0.56 k^(7/6) ∫₀^L Cn²(ξ) (ξ/L)^(5/6) (L − ξ)^(5/6) dξ,

where k is the optical wave number given by k = 2π/λ, and L is the path length. Next, the log-normal distribution must be determined:

σI² = exp(4σχ²) − 1.

This result is used in the σI²(D) equation above.

References

1. H. Yura and W. McKinley, “Aperture averaging of scintillation applications,” Appl. Opt. 22(11) (1983). 2. J. F. Belsher, “Moments of Common Atmospheric Turbulence Profiles,” Report No. TR-1038, The Optical Sciences Company (1990). 3. J. H. Churnside, “Aperture averaging of optical scintillations in the turbulent atmosphere,” Appl. Opt. 30(15) (1991). 4. J. Isterling, “Electro-Optical Propagation through Highly Aberrant Media,” Ph.D. Dissertation, University of Adelaide, pp 42, 45, digital.library.adelaide.edu.au/ dspace/bitstream/2440/64292/8/02whole.pdf (2020). 5. H. Yuksel et al, “Averaging and Correlation Function Measurements in Strong Atmospheric Turbulence for Optical Wireless Applications,” Proc. SPIE 7091, 70910N (2008). 6. L. C. Andrews and R. L. Phillips, Laser Beam Propagation through Random Media, SPIE Press (1998).


Adaptive Optics Influence Function

Actuation of an element of an adaptive optics array will influence neighboring actuators. The magnitude of this effect (which is characterized by a single number, the mirror fitting parameter) will determine the wavefront correction performance of the array, as described here. Optimal performance is achieved by statistical methods.

The wavefront variance σ² resulting from the use of adaptive optics depends on the atmosphere through which the light is propagating (through the Fried parameter r0), on the spacing d of the actuators, and on the influence function, according to

σ²fit (for minimum error) = κi (d/r0)^(5/3) ≡ σ²fit/min.

We can find the Strehl ratio reduction for the mirror fitting error according to Table 2.9.

Table 2.9 Mirror fitting parameter κi.1

Influence function     κi
Gaussian               0.229
Pyramid                0.284
Itek mirror            0.399
Piston only            1.297
Tyson (Gaussian)       0.319
Greenwood and Fried    0.141
Noll                   0.134
Fried                  0.130

For example, a Gaussian fitting function (Tyson's value of 0.319) applied to adaptive optics with 3-cm actuator spacing in an atmosphere where r0 = 10 cm results in a Strehl ratio of

SR = exp(−σ²fit/min) = exp[−0.319 (3/10)^(5/3)] ≈ 0.96.
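A small helper (our sketch) evaluates the fitting-error Strehl ratio for any influence function in Table 2.9:

import math

KAPPA = {"Gaussian": 0.229, "Pyramid": 0.284, "Itek mirror": 0.399,
         "Piston only": 1.297, "Tyson (Gaussian)": 0.319,
         "Greenwood and Fried": 0.141, "Noll": 0.134, "Fried": 0.130}

def fitting_strehl(influence, actuator_spacing_m, r0_m):
    """SR = exp[-kappa (d/r0)^(5/3)] for a given influence function."""
    return math.exp(-KAPPA[influence] * (actuator_spacing_m / r0_m)**(5/3))

print(round(fitting_strehl("Tyson (Gaussian)", 0.03, 0.10), 2))  # ~0.96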

The structure of the equations shows that increasing the density of actuators (smaller d) and/or better atmospheric conditions (larger r0) increases the Strehl ratio. Reference

1. S. R. Robinson, Ed., Emerging Systems and Technologies, Vol. 8 of The Infrared and Electro-Optical Systems Handbook, J. S. Accetta and D. L. Shumaker, Eds., Infrared Information Analysis Center and SPIE Press (1993).  

Shack–Hartmann Noise The variance (square of the standard deviation) in the wavefront resulting from non-perfect sensing in a Shack–Hartmann sensor can be expressed (in radians2) as1


variance = σ²tilt + 0.86 (L/r0)² (Kb + Ks + KN² Nd)/Ks²,

where Ks is the average number of detected signal photons in each subarray of the Hartmann sensor (explained later), Kb is the number of background photons detected in each subarray, KN is the read noise in each pixel, Nd is the number of detectors in each subarray, L is the outer scale of the atmosphere, r0 is the Fried parameter, and tilt2 is the variance in the tilt measured at each element of the Shack–Hartmann sensor. Discussion

The Shack–Hartmann sensor is widely used to determine the wavefront error so that it can be corrected by an active optical system. The sensor divides the telescope pupil into a number of small areas (or subarrays) and determines the wavefront tilt in each area. The ensemble of tilts measured in each subarray is used to fit the overall wavefront shape and to command the correction system. Noise or other error corrupts the tilt measurements and leads to incomplete correction of wavefront error. To implement such a system, groups of pixels in a typical focal plane are assigned to a particular location in the pupil. Each group of pixels (a subarray) represents a point in the array of samples of the wavefront sensor. A wavefront entering such a subarray is imaged in the part of the focal plane assigned to it, falling in the center of the subarray only if there is no tilt. A tilted wavefront will be brought to focus in a noncentered location. The center of light of the spot that is formed is determined by measuring the intensity of the light in each pixel and performing something akin to a center of mass calculation. The location of the center of the spot indicates the 2D tilt for the particular subarray. The outer scale describes a feature of the turbulence of the atmosphere that indicates the source of the kinetic energy of the atmosphere. In most cases, the outer scale at low altitude is about one-half the altitude, becoming 100 m or more away from the ground where free flow occurs. The Fried parameter indicates the lateral spatial scale over which the phase of rays arriving at the sensor is about the same. The following formula is a simpler version of the one in the rule. It explicitly shows the role of the signal-to-noise ratio (SNR) in determining the variance in the measurements. The noise-equivalent angle of the individual tilt measurements2 is

σ²tilt = 0.35 λ²/(ds² (SNRv)²),

where σtilt is the rms tilt error, λ is the wavelength, ds is the subaperture diameter, and SNRv is the voltage SNR. Figure 2.18 illustrates the layout and function of the Shack–Hartmann sensor. Note that the spots vary in position over the pupil. That is, they do not form a uniform grid. The locations of the spots provide the information that indicates what type of tilt imperfections are present in the subaperture of the wavefront. This type of data can be input to a computer to command positioners to deform a mirror (for phase conjugation) and reduce these wavefront errors. The result is a much cleaner image.


Figure 2.18 Image of subaperture spots.3,4 A close look reveals that the spots do not form a uniform grid, which results from the wavefront tilt that is being measured. © User:2pem / Wikimedia Commons / CC-BY-SA-3.0.

References 1. D. Dayton, M. Duncan, and J. Gonglewski, “Performance Simulations of a Daylight Low- Order Adaptive Optics System with Speckle Postprocessing for Observation of Low-Earth Orbit Satellites,” Opt. Eng. 36(7), 1910–1917 (1997). 2. S. R. Robinson, Ed., Emerging Systems and Technologies, p. 215, Vol. 8 of The Infrared and Electro-Optical Systems Handbook, J. S. Accetta and D. L. Shumaker, Eds., Infrared Information Analysis Center and SPIE Press (1993). 3. ecee.colorado.edu/ecen5606/2014/Wavefront_SensingAOL2014.pdf (2020). 4. en.wikipedia.org/wiki/Shack%E2%80%93Hartmann_wavefront_sensor (2020).

Laser Beam Wander Variance is Approximately Proportional to the Cube of the Pathlength

A number of researchers have addressed the issue of wander and jitter of a laser beam propagating in the atmosphere. The difference between these two effects is that wander defines the movement of the beam center (or the location of the point of maximum intensity), whereas jitter is a short-term effect that results in wavefront tilt in the receiver plane. Jitter is seen as an apparent pointing error, and it is much smaller than wander.4

Discussion

Simple model

A relatively simple beam wander variance ⟨ρ²⟩ model is1

⟨ρ²⟩ = 1.83 Cn² λ^(−1/6) L^(17/6),


where L is the path length, and the path is horizontal. The square root of this number is the standard deviation in the beam wander. Cn² is the atmospheric structure constant, and λ is wavelength in consistent units. More recent and detailed models are described later in this rule. The nighttime value of Cn² is about 10⁻¹⁴ m²/³. For a path length of 5000 m and a wavelength of 0.5 micron, the rms wander is ≈78 mm. (The title of the rule derives from the fact that 17/6 is approximately 3.) A whole generation of atmospheric scientists have worked on the problem of laser beam propagation in the atmosphere. Ultimately, all of the work derives from seminal analysis performed by Rytov and Kolmogorov. Fried has also made important contributions to the theory. Military and astronomical scientists have extended the theory and have made considerable progress in demonstrating agreement between theory and experiment. The theory is too complex to repeat here. Fortunately, the effect on propagation can be expressed with relatively simple algebraic expressions, such as the one shown above. As with any rule related to the atmosphere, the details of the conditions really determine the propagation that will be observed. This result assumes that the value of Cn² along the path is constant and is of such a value that the turbulence effect falls into the category of "weak." This means that the variance in the beam intensity is less than about 0.54. Otherwise, the assumptions inherent in Kolmogorov's adaptation of Rytov's work no longer apply, and the results are flawed. Use of this rule defines the size that a receiver must have to encounter the bulk of a beam used for communications, tracking, or other pointing-sensitive applications. The mathematics behind this analysis, which was first done by Tatarski, is beyond the scope of this book. Suffice it to say that the result shown above is a substantial simplification of the analysis that must be performed. For example, Wolfe and Zissis2 provide a more complete analysis and show how the beam wander is translated into motion of the centroid of the beam in the focal plane of a receiver.
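A quick check of the 78-mm example in Python (our sketch):

import math

def beam_wander_rms(cn2, wavelength_m, path_m):
    """rms beam wander (m) for a horizontal path, simple model."""
    variance = 1.83 * cn2 * wavelength_m**(-1/6) * path_m**(17/6)
    return math.sqrt(variance)

print(round(beam_wander_rms(1e-14, 0.5e-6, 5000), 3))  # ~0.078 m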

More modern analysis4 expands the first result by including the effects of the beam size in both wander and jitter. This is illustrated in Fig. 2.19. In Andrews et al.,4 the parameter rc defines the radius of the average shape over which the center of the beam wanders. This size includes the impact of beam wander and diffraction. Results were obtained for both collimated and focused beams, as shown below (note that for these results, the impact of path length is at the power of 3):

collimated beam (κ0 ≠ 0): ⟨rc²⟩ = 2.42 Cn² L³ W0^(−1/3) [1 − (κ0²W0²/(1 + κ0²W0²))^(1/6)].

For a focused beam, the comparable result is well approximated by

focused beam (κ0 ≠ 0): ⟨rc²⟩ = 2.72 Cn² L³ W0^(−1/3) [1 − (8/9)(κ0²W0²/(1 + 0.5κ0²W0²))^(1/6)]


Figure 2.19 (a) Beam wander as described by movement of the “hot spot” (instantaneous center) within the beam. (b) The long-term spot size WLT is the result of beam wander, beam breathing, and diffraction. The shaded circles depict random motion of the short-term spot size WST in the receiver plane.4

for a Gaussian beam of radius W0.0 is 2/L0, where L0 is the outer scale of the turbulence, which is typically tens of meters.5 L is the path length. In addition, jitter-induced pointing error (defined as pe2) will occur with results for both collimated and focused beams.4 This refers to motion of the center of the distribution of light of radius rc: 2 5/3 1/6  L   2W0    Cr2W02 / r02   , collimated beam:   0.48    1   2 2 2   2W0   r0    1  Cr W0 / r0   2 pe

where Cr is a scaling factor, usually ~2; and 2

5/3

 L   2W0  focused beam:   0.54      2W0   r0  2 pe

 8  C 2W 2 / r 2 1/6  r 0 0 1   . 2 2 2   9  1  0.5Cr W0 / r0  

References

1. H. Weichel, Laser System Design, SPIE Course Notes (1988).
2. W. L. Wolfe and G. F. Zissis, Eds., The Infrared Handbook, Environmental Research Institute of Michigan (1978).
3. J. Isterling, "Electro-Optical Propagation through Highly Aberrant Media," Ph.D. Dissertation, University of Adelaide, digital.library.adelaide.edu.au/dspace/bitstream/2440/64292/8/02whole.pdf (2020).
4. L. C. Andrews et al., "Beam wander effects on the scintillation index of a focused beam," Proc. SPIE 5793 (2005).
5. www.ing.iac.es/Astronomy/development/hap/10.html (2020).


Pulse Stretching in Scattering Environments

The increase in transmitted laser pulse duration Δτ caused by scattering in the atmosphere can be estimated as1

Δτ = (1.5L/c)(0.3/(a θ²rms))[(1 + 2.25 a² θ²rms τ)^(1/2) − 1],

where L is the propagation distance, c is the speed of light, a is the single scatter albedo ≈ 1, and τ is the product of the scatter cross-section per unit volume and the propagation distance. τ ranges from 12 to 268 in the experiments illustrated in the reference, and it is unitless. θrms is the rms scatter angle ≈ 30° for water, and it is expressed in radians.

This result comes from an analysis, using some simplifying assumptions, of the multiple scattering that occurs in the atmosphere. In addition, the formulation uses some results from the classical theory of electron scattering of electromagnetic waves. The rule breaks down for optical depths  in excess of about 300, so it will not work well for dense fogs and clouds. Those involved in laser communications in the atmosphere will find this rule helpful because pulse stretching limits the data rate of the channel, as shown by the situations in which the last photons to arrive from a first pulse are still arriving (having gone through many multiple scattering paths) as the first light arrives from a second pulse. This would confound the receiver and prevent it from properly interpreting the data. Stotts1 shows that this formulation compares well with both experiments and more complex simulations using Monte Carlo methods. In view of the simplicity of the rule, this is a most attractive place to start. Systems that require more accuracy can use the Monte Carlo methods referenced in Stotts.1 The results therein show that pulse stretching on the order of microseconds results from situations in which fog or clouds are present. This will have a profound impact on the ability of a pulsed communication system. Reference

1. L. Stotts, "Closed Form Expression for Optical Pulse Broadening in Multiple-Scattering Media," Appl. Opt. 17(4), 504 (1978).

Optimal Truncation of a Gaussian Beam Propagating in the Atmosphere

In a series of papers,1–3 Yura provided details on the optimum truncation ratio d/D that yields the maximum mean on-axis far-field irradiance of a laser beam propagating in an atmosphere with a certain value of the Fried parameter r0. A simple polynomial expression provides for an accurate estimate of the result:

 d / D Optimal  3 105 x5  0.0007 x 4  0.0063x3  0.0194 x 2  0.0377 x  0.8981,


where D is the transmitting aperture diameter, x is the ratio of D/r0, and d is the size of the beam defined as the 1/e2 beam diameter. Discussion

Many experimenters deal with the propagation of Gaussian laser beams in atmospheric conditions. Analysis shows the optimal aperture through which the beam should be projected to maximize the on-axis intensity. The rest of this rule provides details for maximizing the far-field irradiance. The effort described in the reference deals with balancing the size of the far-field beam at the expense of the total power delivered. The maximum far-field on-axis irradiance is about 81 percent of what would be obtained if the same total power were uniformly distributed over the circular aperture (so-called “top hat” situation). For beam projection in vacuum, the following equations compute the angular size of the projected power:

G_A(θ) ∝ exp(−2θ²/θ0²),

θ0² = (2λ/(πd))² + b (2λ/(πD))²,

b = α²/[exp(α²) − 1],

where α = D/d. The equations have the property that the effective diameter of the aperture is

Deff = d [tanh(D²/2d²)]^(1/2).

When the same beam is projected through an atmosphere characterized by the Fried parameter r0, the resulting Strehl ratio is

SR = 1/[1 + 2(Deff/r0)²].

Yura has a single transcendental equation, involving α0 and D/r0, whose solution defines the conditions for optimal irradiance, where

α0 = (D/d)Optimal.


Figure 2.20 shows the on-axis intensity as a function of the key parameters. Figure 2.21 shows the optimal value of d/D as a function of D/r0; the solid line shows the results of manipulating the previous equation, and the dotted line shows the fit provided by the polynomial of the rule. Perhaps more useful is the graph that shows the maximum on-axis intensity as a function of D/r0, as shown in Fig. 2.22. An encouraging result of the analysis is that for D/r0 = 0, the same on-axis intensity is obtained as is found for propagation in vacuum.

Figure 2.20 On-axis intensity as a function of the key parameters as a function of the ratio of the beam diameter in the aperture to the truncation diameter for various values of D/r0.1

Figure 2.21 The optimal d/D choice as a function of D/r0.1


  Figure 2.22 The on-axis irradiance for uncompensated turbulence as a function of D/r0.1  The optimum value of d/D has been used to obtain the maximum mean on-axis irradiance compared with the irradiance that could have been achieved if the transmitting aperture was uniformly illuminated, rather than by a Gaussian profile. The computations for this graph use the polynomial presented in the rule.

References

1. H. T. Yura, “Optimum Truncation of a Gaussian Beam for Propagation Through Atmospheric Turbulence,” Aerospace Report No. TR-94(4925)-215 (Nov. 1994). 2. H. T. Yura, “Optimum truncation of a Gaussian beam for propagation through atmospheric turbulence,” Appl. Opt. 34(15) (1995). 3. H. T. Yura, “Optimum truncation of a Gaussian beam in the presence of random jitter,” J. OSA A 12(2), 375 (1995).

Increased Requirement for Rangefinder SNR to Overcome Atmospheric Effects

In weak atmospheric turbulence, the required SNR of a rangefinder using laser pulses must be increased to overcome the added scintillation. The increase is about

ESNR = exp[√2 σI erf⁻¹(2Pd − 1) − σI²/2],

where ESNR is the required increase to the SNR (required signal to noise in turbulence is the ESNR multiplied by the required signal to noise in calm air), Pd is the probability of detection requirement, and erf-1(x) is the inverse error function, which is defined in any number of


books on advanced engineering, mathematics, or statistics. Note that erf⁻¹ is not the complementary error function erfc(x). erf⁻¹(x) has the property that erf⁻¹(erf(x)) = x. σI² is the variance of the log intensity ratio; examples are provided below.

Propagation through atmospheric turbulence broadens a laser beam. In addition, it causes the center of the beam to meander. This combination of broadening and beam wander causes the energy to be distributed over a larger angular area than when the atmospheric effect is not present. This means that less energy is put on the target, thus reducing the signal reflected to the sensor. Therefore, the SNR will be smaller in turbulent conditions than in calm conditions. To achieve a fixed level of probability of detection and probability of false alarm, the SNR must be increased. This rule does not include the attenuation effects that also occur in the atmosphere as a result of particulate scattering and absorption. Those effects are rather easily included in the calculation. Attenuation properties of the atmosphere are discussed in the rule “Atmospheric Attenuation and Visibility: Beer’s Law” in the DVE chapter. The following examples show how this rule can easily be used to determine the impact of the atmosphere on the performance of any type of active system. For wavelengths associated with doubled YAG laser light (532 nm), near the ground Cn2 is about 10–14 at night and 1.7 × 10–14 during the day. For light turbulence,2

σI²(L) = 0.56 k^(7/6) ∫₀^L Cn²(ξ) (ξ/L)^(5/6) (L − ξ)^(5/6) dξ.

After doing the integral, we find that

2I  L  0.123k7/6Cn2L11/6 , where L is the distance over which the observations are made, and k = 2/ . To achieve a probability of detection of 0.99 at night, we complete the remainder of the terms in the equation:  E SNR  exp  





2  I erf 1 (2  0.99  1 

1 2 I . 2 

Using the equation for this rule, we find that the enhancement in the SNR must be 1.49. That is, the radiometrics of the system must be considered to ensure that the combination of laser power and receiver sensitivity leads to a value of SNR 50-percent larger than would be needed to conduct the same experiment in turbulence-free air. The prior calculations were for nighttime conditions. During the day, Cn² is about 1.7 × 10⁻¹⁴, and the enhancement requirement jumps to 68 percent.
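The calculation chain is compact in code. A sketch (ours); the pathlength is a free parameter set by the scenario, and erf⁻¹ is obtained from the standard normal quantile:

import math
from statistics import NormalDist

def inv_erf(y):
    """Inverse error function via the normal quantile:
    erf^-1(y) = Phi^-1((y + 1)/2) / sqrt(2)."""
    return NormalDist().inv_cdf((y + 1) / 2) / math.sqrt(2)

def esnr(cn2, wavelength_m, path_m, pd):
    """Required SNR multiplier in weak turbulence."""
    k = 2 * math.pi / wavelength_m
    var_i = 0.123 * k**(7/6) * cn2 * path_m**(11/6)
    s = math.sqrt(var_i)
    return math.exp(math.sqrt(2) * s * inv_erf(2 * pd - 1) - var_i / 2)

print(round(esnr(1e-14, 532e-9, 650, 0.99), 2))  # ~1.5 for an illustrative 650-m path at night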


References

1. R. Byren, "Laser Rangefinders," p. 103, in Active Electro-Optical Systems, C. Fox, Ed., Vol. 6 of The Infrared and Electro-Optical Systems Handbook, J. S. Accetta and D. L. Shumaker, Eds., Infrared Information Analysis Center and SPIE Press (1993).
2. J. Isterling, "Electro-Optical Propagation through Highly Aberrant Media," p. 43, Ph.D. Dissertation, University of Adelaide, digital.library.adelaide.edu.au/dspace/bitstream/2440/64292/8/02whole.pdf (2020).

Free-Space Link Margins

The atmosphere has a distinct impact on the performance of terrestrial laser communications. The data in Table 2.10 for wavelengths of 1550 nm indicates the relative impact of different conditions.

Discussion

Atmospheric absorption, scatter, and scintillation will all decrease the SNR and, if bad enough, will eliminate the ability of an electro-optical system to detect a target or send information to another location. Table 2.10 provides some guidelines for the link margins suitable for various weather types. As in all weather conditions, the real world can exhibit both spatial and temporal variation, so some care must be taken when using the numbers in the table. The practitioner is encouraged to get the local weather statistics for his link to determine the link margin needed for a given locale. Obviously, Adelaide and Tucson will need a lower margin for a given reliability than Seattle or Halifax. Visible wavelengths perform slightly worse, and the long-wave infrared (LWIR) slightly better. Carlson1 gives the first five entries; the last entry is from the author. Other rules in this chapter, the Appendix, and the DVE chapter provide additional information about how attenuation varies with rainfall or snowfall rates.
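A trivial helper (ours) for turning the per-kilometer values of Table 2.10 into a total margin for a given path:

def required_margin_db(condition, range_km):
    """Total link margin (dB) = per-km margin from Table 2.10 x path length."""
    per_km = {"urban haze": 0.5, "typical rainfall": 3, "heavy rainfall": 6,
              "typical snow or light fog": 10, "white-out snowfall or fog": 20,
              "heavy-to-severe fog": 120}  # worst case of the 30-120 dB/km range
    return per_km[condition] * range_km

print(required_margin_db("heavy rainfall", 2.5))  # 15 dB over a 2.5-km link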

1. R. Carlson, "Reliability and Availability in Free Space Optical Systems," Optics in Information Systems 12(2) (2001).

Table 2.10 Suitable link margins.

Weather Condition                              Required Link Margin (dB/km)
Urban haze                                     0.5
Typical rainfall                               3
Heavy rainfall                                 6
Typical snow, heavy downpour, or light fog     10
White-out snowfall or fog                      20
Heavy-to-severe fog                            30-120


Summary of Phase Modulators for Adaptive Optics Adaptive optics systems need a method for inducing phase delays in the light propagating through the system. Mechanical actuation of a deformable mirror (DM) similar to a pizza pan is the conventional way of describing what happens in such a system. Mansel et al.1 provide a summary of the many other ways that the phase delay can be implemented. Discussion

A number of factors go into the choice of the implementation of a phase modulator; those listed in Table 2.11 are summarized by Mansel et al.1 The key technologies are lead zirconate titanate piezoelectric (PZT), a ceramic material; lead magnesium niobate (PMN:RE), an electrostrictive multilayer actuator; and micro-electro-mechanical systems (MEMS).

1. J. Mansel et al., "Deformable Mirror Development at Stanford University in High-Resolution Wavefront Control," Proc. SPIE 4493 (2002).

Table 2.11 Values to consider for phase modulators.

Characteristic             Ideal DM   PZT or PMN   Surface MEMS   Membrane DMs   Liquid Crystal
Throw (μm)                 Large      4            ~1             ~10            ~1
Spacing (mm)               Small      7            0.3            2              ~0.1
Crosstalk                  Low        Low          Low            High           Low
Influence Function         Smooth     Smooth       Smooth         Smooth         Piston
Cost                       Low        High         Low            Low            Low
Pixel Flatness             50         Good         Poor           Good           Good
Roughness                  Low        Low          High           Low            Low
Scalable to 1M Actuators   Yes        No           Yes            No             Yes
Circuitry Integration      Yes        No           Maybe          Yes            Maybe
Laser Power Handling       High       Good         Poor           Medium         Poor
Response Time (μsec)       10         1000         15             1000           10,000
Fabrication Complexity     Low        High         Low            Low            Low
Thermal Variations         Zero       High         Low            Low            Medium
Response to Voltage        Linear     Linear       Parabolic      Parabolic      Nonlinear


Telescope Seeing Created by a Dome Floor

Bely1 has a rule defining the "seeing" [imaging quality, shown here as θencl and measured as the full width half maximum (FWHM) of the point spread function] that can be achieved in a dome as a function of heat flux from the floor qs and the enclosure diameter Dd:

θencl = 0.9 Dd^(1/5) qs^(4/3).

Discussion

Heat from the floor of an enclosure leads to atmospheric turbulence inside the dome that degrades the imaging performance that can be achieved. This simple rule captures the key parameters involved. The units of seeing in this case are arc seconds. Bely1 indicates that typical heating rates are about 3 W/m2. Reference

1. P. Bely, Ed., The Design and Construction of Large Optical Telescopes, p. 353, Springer (2003).

Telescope Seeing Due to Still or Ventilated Air

A number of experiments have shown a simple formula for the optical disturbance in a telescope dome with still, isothermal air. Image quality is indicated as θm and the angular image spread (FWHM) in arc seconds:1

θm = 0.4 ΔTm^(6/5),

where ΔTm is the difference between the optical mirror surface and ambient air temperature in degrees Celsius.

Discussion

An additional related result pertains to domes where a ventilation system is used. This improves the image quality according to the following equation:

θm = 0.18 Fr^(−0.3) ΔTm,

where Fr abbreviates the Froude number, which is a function of the air velocity V of the ventilation system; D is the aperture diameter; and g is the acceleration of gravity. T and ΔT are the temperature and temperature fluctuations inside the dome, respectively:

Fr = ΔT V²/(T g D).

A different expression for the Froude number is2

Fr = ρ V²/(Δρ g L),


where L is a length scale (which is the same as D for a heated mirror), and ρ and Δρ are the air density and density fluctuation, respectively. Note that there is inconsistency in the definition of the Froude number: it is commonly expressed as the square root of the equation that appears above. It is clear from the equations that Fr increases with V so that θm decreases, thereby improving image quality.

References

1. P. Bely, Ed., The Design and Construction of Large Optical Telescopes, p. 357, Springer (2003).
2. J. Cheng, The Principles of Astronomical Telescope Design, Astrophysics and Space, Springer (2009).

 

Chapter 3

Acquisition, Tracking, and Pointing

Introduction

The fundamental purpose of an acquisition, tracking, and pointing (ATP) system is to keep a moving target in the FOV of a sensor, allowing the development of a time history of the object. As such, ATP is a system-level problem involving sensors, algorithms to detect targets, algorithms to assign multiple detections to multiple active target tracks (detect-to-track assignment), control loops to drive a pointing system (typically gimbals) to maintain the FOV centered on the target, and finally the gimbals themselves. Successful development of an ATP system requires expertise in optics, sensors, algorithms, real-time processing, control loops, and gimbal systems. Note that this is very different from an intelligence, surveillance, and reconnaissance (ISR) system, which simply gathers imagery of an area with functions such as detection, recognition, and identification (DRI) performed at least quasi-independently (although rules for DRI are included in this chapter). While functions such as ISR or "scene understanding" can benefit from temporal data, they are not fundamentally time-based functions, whereas ATP at its core is a temporal process. While ATP usually involves a pointing system, a special case involves the use of a fixed line of sight (LOS) sensor collecting a video sequence. In this case, the target is detected and tracked as its image moves across the image (focal plane). ATP as a field of study has its roots in radar processing, and in fact, many detection and tracking algorithms were first developed in the radar community and later adapted to electro-optical (visible) and infrared (EO/IR) systems. Radar systems usually emit a scanning beam, which has implications for data rates and coverage rates compared to today's staring array focal planes, which provide faster coverage and high data rates. Early EO/IR systems commonly used scanning techniques with one or a few pixels scanned across the scene to build an image. With these early systems, the adaptation of radar detection algorithms was relatively straightforward due to the similarity between a scanning radar beam and a (passive) scanning EO/IR sensor. As EO/IR technology evolved to produce high-pixel-count staring array focal plane arrays (FPAs), while many of the radar detection principles are still applicable, the algorithms have needed to be re-cast to more accurately represent the simultaneous collection of millions of pixels (as opposed to the sequential collection of a scanning system). As discussed in the "Tracker vs. Detection" rule in this chapter, we often separate detection and tracking into two independent functions, but they are ultimately a single

 


problem in which the detection involves summing target energy along a trajectory. Even after we separate the problem into detection and tracking, tracker complexity (in this case, tracker means the detect-to-track assignment problem) is determined by detection and false alarm statistics. While ATP is a rather mature field, there remain significant challenges and exciting opportunities. Tracking through obscuration, feature-aided tracking, and tracking many interacting targets represent a few of the current challenges. The use of multi- or hyperspectral data for target detection is a separate and evolving field, at this time limited for ATP applications by the data volumes and processing requirements. Integration and utilization of other data sources to help with either detection or detect-to-track assignment is another evolving area. Three of the big names in ATP are included in the references below. Blackman and Popoli1 is a comprehensive classic in the tracking community. Bar-Shalom and Li2 addressed (and, as of this writing, continue to address) radar tracking problems of multiple, maneuvering targets with various levels of detection and false alarm performance. The SPIE series of annual conferences on detection and tracking of unresolved targets chaired by Drummond,3 as well as his own technical contributions to multisensor tracking, is a rich source of material, as is the SPIE series of conferences on acquisition, tracking, and pointing.4 The paper by Sanders-Reed5 and the references contained therein illustrate the migration of maximum likelihood detection from radar to staring array focal plane systems.

References

1. S. Blackman and R. Popoli, Design and Analysis of Modern Tracking Systems, Artech House (1999).
2. Y. Bar-Shalom and X.-R. Li, Estimation and Tracking: Principles, Techniques, and Software, Yaakov Bar-Shalom (1998).
3. Proc. SPIE, "Signal and Data Processing of Small Targets" (1989-2015), O. E. Drummond, Ed. and author.
4. Proc. SPIE, "Acquisition, Tracking, & Pointing" (1987-2006), then "Acquisition, Pointing, & Tracking and Laser Systems Technologies" (2007-2011).
5. J. N. Sanders-Reed, "Maximum Likelihood Detection of Unresolved Moving Targets," IEEE Trans. AES 34(3) (1998).

Correct Measure of Detection Performance

Detection performance needs to be characterized by both the probability of detection Pd and probability of false alarm Pfa.

Discussion

Each of the authors has seen requirements specifications that mandate a given Pd or Pfa but not both. Of course, what the developer of the specification probably means in this case is, "Give me at least this Pd, and I'll live with whatever (reasonable) Pfa results," or more commonly, "Do the best you can with Pd, but the Pfa must be no more than specified." This is sloppy thinking. If a Pd is specified but no Pfa is specified, we just declare continuous detection, regardless of conditions. We will have 100-percent Pd but also 100-percent Pfa


(no sensor needed). Similarly, if a maximum Pfa is specified but no Pd, we can turn the system off and get zero false alarms (but zero detections, as well). While the above may appear somewhat farcical (but all too common), a more subtle error is specifying a SNR. This is generally done on the assumption that the signal has a Gaussian distribution such that for a given SNR threshold, one can use the error function to compute Pd and Pfa. However, when one is far out on the tails of a Gaussian (e.g., 6 standard deviations), small changes in the shape of the curve can have a large impact on performance, and many distributions are not truly Gaussian, either in the original signal or following some preprocessing step. For example, photon statistics obey a Poisson distribution, and certain maximum likelihood processing algorithms result in a Beta distribution.1,2 In these cases, specifying a target SNR instead of Pd and Pfa may lead to unexpected and disappointing results. A graph that plots Pd on one axis and Pfa on the other axis, with a set of curves for various target characteristics (SNR, temperature), is called a receiver operator curve (ROC).1 References 1. J. N. Sanders-Reed, “Maximum Likelihood Detection of Unresolved Moving Targets,” IEEE Trans. Aerospace and Electronic Systems 34(3) (1998). 2. S. C. Pohlig, “An Algorithm for Detection of Moving Optical Targets,” IEEE Trans. Aerospace and Electronic Systems 25(1), 56–63 (1989).

Tracker vs. Detection Detection and tracking are part of one mathematical process, often separated into detection and tracking. However, they are really one operation, which means that if detection is improved, then tracking can be simplified. Discussion In the following discussion, “tracker” refers to the detect-to-track assignment algorithm in which M current detections are assigned to N active tracks with any unused detections used to initiate new tracks. Maximum likelihood detection theory for faint, unresolved, moving targets (Fig. 3.1) results in both optimal detection performance (maximum probability of detection Pd with a minimum probability of false alarm Pfa) and good trajectory estimation.1 The result consists of a “data normalizer” to reduce background clutter to essentially white noise, combined with assumed velocity filters (AVFs) that sum all target energy along different hypothesized trajectories (Fig. 3.2). If the summed energy exceeds a threshold, a detection along that trajectory is declared. Numerous approximations trade-off computational requirements against performance. The first approximation uses the clutter suppression normalizer and then performs a pairwise binary quantization to generate a sequence of binary quantized images.2 The AVF now takes the form of an M out of N test in which all possible (valid) trajectories are tested (Fig. 3.3). The binary quantization threshold is set to pass a constant


number of hits [a constant false alarm rate (CFAR) process], which should correspond to the maximum number of hits that can be processed in a given timeframe. The above M out of N test is one form of a multiple hypothesis tracker (MHT), widely regarded as the “best” detect-to-track assignment algorithm.3 An MHT is distinguished by the fact that all detections are assigned to all tracks with a probability based on how well the particular detection matches a given track. In principle, all possible trajectories are evaluated, but in practice, this rapidly leads to an unmanageable number of combinations, and hence various techniques are used to reduce the number of possible trajectories.
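The assumed-velocity summation itself is compact. A toy sketch (ours; it uses wraparound shifts and a hand-set threshold purely for illustration) of summing whitened frames along one hypothesized constant-velocity trajectory:

import numpy as np

def assumed_velocity_filter(frames, vx, vy, threshold):
    """Sum energy along a constant-velocity trajectory across a frame stack.

    frames: (T, H, W) array of normalized (whitened) images.
    vx, vy: hypothesized target motion in pixels per frame.
    Returns a detection map where the summed energy exceeds the threshold.
    """
    t, h, w = frames.shape
    acc = np.zeros((h, w))
    for i in range(t):
        # shift frame i back along the hypothesized trajectory, then sum
        acc += np.roll(frames[i], shift=(-round(i * vy), -round(i * vx)),
                       axis=(0, 1))
    return acc > threshold

# Testing one velocity hypothesis on noise-only data (typically zero hits):
rng = np.random.default_rng(0)
stack = rng.normal(size=(8, 64, 64))
hits = assumed_velocity_filter(stack, vx=1.0, vy=0.5, threshold=12.0)
print(hits.sum())

In practice, a bank of such filters is run over the grid of plausible velocities, and the binary quantization threshold is set by the CFAR logic described above rather than by hand.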

Figure 3.1 Maximum likelihood detection using an assumed velocity filter bank. Reprinted with permission from IEEE Trans AES 34(3) (1998).

Figure 3.2 Simultaneous detection and tracking summing target energy along an assumed velocity trajectory.


Figure 3.3 Binary quantization M out of N test.

Detect-to-track assignment algorithms can generally be divided into single, hard assignment algorithms (in which each detection is assigned once and permanently to a track) and soft, multiple assignment algorithms (such as an MHT, in which each detection is assigned to multiple tracks). The required sophistication for the tracker strongly depends on the complexity of detection and false alarm statistics. Consider the case in which it is known that there is one and only one target, Pd is high, and Pfa is very low. In this case, the detect-to-track assignment is straightforward: assign THE detection to THE track. When there is more than one target but still high Pd and low Pfa (in other words, most of the hits are valid target detections), we move to simple single, hard assignment algorithms such as global nearest neighbor (GNN) algorithms that attempt to minimize the total distance between detections and their assigned tracks.4 As the false alarm rate increases, one is led to more complex, multiple assignment algorithms such as the probabilistic data association/joint probabilistic data association (PDA/JPDA) classes of algorithms,5 and finally, in the case of high Pfa and low Pd (due to low signal), to MHTs. With good detection and low false alarm rates, a much simpler tracker (GNN) can be used. As detection performance decreases and more false alarms are generated, more complex trackers are required.

References

1. J. N. Sanders-Reed, "Maximum Likelihood Detection of Unresolved Moving Targets," IEEE Trans AES 34(3) (1998).
2. F. C. Shelly and J. N. Sanders-Reed, "GEODSS ATDS Detection Algorithm Evaluation," MIT Lincoln Lab Project Report ETS-126 (Jan. 1996).
3. S. Blackman and R. Popoli, Design and Analysis of Modern Tracking Systems, Artech House (1999).
4. J. N. Sanders-Reed, "Multi-Target, Multi-Sensor, Closed Loop Tracking," Proc. SPIE 5430 (2004).
5. Y. Bar-Shalom and E. Tse, "Tracking in a Cluttered Environment with Probabilistic Data Association," Automatica 11 (Sept. 1975).


6. P. Chu, “Efficient Detection of Small Moving Objects,” MIT Lincoln Lab Technical Report 846 (July 1989).

Detection Criteria

Targets can be easily detected if half (or more) of the photons in the pixel are from the target and half are from noise sources.

Discussion

The SNR of a detector is

SNR = es⁻ / √(es⁻ + en⁻).

In this equation, es⁻ represents the number of electrons generated in the detector by signal photons [and for point sources, the background photons within the instantaneous field of view (IFOV)]. It is equal to the rate of arrival at the focal plane of signal photons times the quantum efficiency of the detector and the integration time. en⁻ is the number of electrons generated by all of the noise sources. It consists of contributions from photon noise in the background along with the contribution from leaked background clutter along with internal detector noise. If es⁻ = en⁻, then the SNR is

es⁻ / √(2es⁻), or 0.71 √es⁻.

Because the generation of electrons from the signal is proportional to the arrival of photons from the target, and this number is generally quite large, the SNR can be quite high. This is rarely the result expected by the uninitiated. Signal-to-noise ratios must still be greater than about 6 for good target detection prospects;1 therefore, more than 70 photons per integration time should fall on the detector (assuming a 70-percent quantum efficiency). If no noise sources are present, then this equation is reduced to √es⁻, and only about 25 signal electrons are needed for detection. Photons from noise sources in the scene generate electrons in direct proportion to the quantum efficiency at their wavelength. This is because both the signal and background photon fluxes follow Poisson statistics. For laser detection, avalanche photodiodes, and UV and visible systems, the noise is frequently dominated by photon noise from the target and background. Therefore, this rule holds true if the total of the target and background photo-generated electrons is higher (on the order of 50 electrons or more), because this ensures a current SNR in excess of 6 or 7. This also holds true for IR systems, as both their targets and backgrounds generate many more electrons. This rule does not apply for detection in high clutter or background conditions where noise sources other than that of the detector dominate.
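A two-line helper (ours) makes the arithmetic concrete:

import math

def detector_snr(signal_electrons, noise_electrons):
    """SNR = e_s / sqrt(e_s + e_n), shot-noise-limited detection."""
    return signal_electrons / math.sqrt(signal_electrons + noise_electrons)

print(detector_snr(49, 49))   # ~0.71 * sqrt(49) = 5.0 when e_s = e_n
print(detector_snr(36, 0))    # sqrt(36) = 6.0 with no background noise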


Reference

1. Burle Electro-Optics Handbook, Burle Industries, Lancaster, PA, pp. 111-112, ftp://ftp.bartol.udel.edu/jmc/pmt/rca_eo_hb.pdf (1974).

Signal-to-Noise Ratio Requirements An SNR of 6 is adequate to perform most sensing and tracking functions. Any more than that is not needed, and any less will not work well. Discussion Targets can be sensed with certainty to a range defined by the SNR and allowable false alarm rate. Beyond that range, they are not usually detectable. This rule derives directly from standard results in a variety of texts that deal with target detection in noise. See the rule “Probability of Detection Estimation” in this chapter for additional details on how to compute that factor for a variety of conditions. There are some cases in which the probability of false alarm Pfa can be raised, which allows a small drop in the SNR for a required probability of detection Pd. For example, if one is willing to tolerate a Pfa of 1 in 100, then the requirement on SNR drops from 6 to about 4 for 90-percent Pd. Conversely, there are some applications in which a much higher SNR is required (e.g., optical telecommunication receivers where a high SNR is required to achieve the required very low bit error rate).1 This rule assumes “white noise,” in which noise is present in all frequencies with the same probability, and no clutter (spatially varying background structure). The situation is different with noise concentrated at a particular frequency or with other characteristics. In general, if you have a priori knowledge of the characteristics, you can design a filter to improve performance. However, if you do not know the exact characteristics (which is usually the case with clutter), then your performance will be worse than expected by looking at tables tabulated for white noise. The more complex case in which the noise has a non-uniform power as a function of frequency cannot be dealt with so easily, because all sorts of possible characterizations can occur. The probability of detection increases rapidly as the SNR increases. For example, at a false alarm of 10–4, doubling the SNR from 3 to 6 results in the probability of detection increasing from about 0.10 to well above 0.95. A probability of detection in excess of 0.90 can be achieved only with a probability of false alarm of around 1 in 1 million if the SNR is about 6. In most systems, a Pfa of at most 1 in 1 million is about right. It must also be noted that typical false alarm rates are “per pixel.” For a large focal plane, there may be around 1 million pixels. Therefore, the requirement of a Pfa of around 1 in 1 million limits the system to 1 false alarm per frame. References 1. J. Miller and E. Friedman, Optical Communications Rules of Thumb, McGrawHill, p. 43 (2003). 2. Burle Electro-Optics Handbook, Burle Industries, p. 112, ftp://ftp.bartol.udel.edu/ jmc/pmt/rca_eo_hb.pdf (1974).

References
1. J. Miller and E. Friedman, Optical Communications Rules of Thumb, McGraw-Hill, p. 43 (2003).
2. Burle Electro-Optics Handbook, Burle Industries, p. 112, ftp://ftp.bartol.udel.edu/jmc/pmt/rca_eo_hb.pdf (1974).
3. D. Wilmot et al., "Warning Systems," p. 61, in Countermeasure Systems, D. Pollock, Ed., Vol. 7 of The Infrared and Electro-Optical Systems Handbook, J. S. Accetta and D. L. Shumaker, Eds., Infrared Information Analysis Center and SPIE Press (1993).

Psychometric Function
The psychometric function (Fig. 3.4) is well matched by a Weibull function as follows:1

P  x   1  1    2

x    



,

where P is the fraction of correct responses, x is the stimulus strength,  is a parameter that determines the steepness of the curve,  is the guess rate (0.5), and  is the stimulus strength at which 75 percent of the responses are correct. Discussion Bijl and Valeton1 point out that “the probability of a correct response (in a detection task) increases with stimulus strength (that is, contrast). If the task is impossible because the contrast is too low, the probability of a correct response is 50 percent (guess rate), and if the task is easy, the observer score will be 100-percent correct. The relationship between stimulus strength and probability of a correct response is called the psychometric function.”
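The function is trivial to evaluate numerically. The following Python sketch implements the Weibull form above; the β values used in the check are illustrative:

```python
def psychometric(x, alpha, beta, gamma=0.5):
    """Weibull psychometric function of Bijl and Valeton.

    x: stimulus strength (e.g., contrast)
    alpha: stimulus strength giving 75% correct (with gamma = 0.5)
    beta: steepness of the curve
    gamma: guess rate of the forced-choice task
    """
    return 1.0 - (1.0 - gamma) * 2.0 ** (-((x / alpha) ** beta))

# All curves pass through 75% correct at x = alpha, regardless of beta
for beta in (1.0, 2.0, 4.0):
    print(beta, round(psychometric(3.0, alpha=3.0, beta=beta), 3))  # 0.75
```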

Figure 3.4 Example of the psychometric function. The fraction of correct responses gradually increases with stimulus strength (i.e., contrast) from 50 percent (representing chance) to 100 percent. The threshold is defined as the stimulus strength at which the observer scores 75 percent correct. The threshold is independent of the decision criterion of the observer.

An example is plotted in the figure. Threshold is defined as the stimulus strength at which the observer scores a predefined correct level (e.g., 75 percent). This figure plots the function for various values of β. These plots assume a γ value of 0.5 and an α value of 3 (hence, they all converge at that stimulus point).

Similarly, Burle2 and Bailey3 give a related value for a probability of detection Pd as a function of contrast as

Pd = 1/2 ± (1/2){1 − exp[−4.2(C/Ct − 1)²]},

where C is the contrast of interest, other than Ct, which is the threshold contrast of 50-percent correct detection. The minus sign is used when C < Ct where the ± is shown.

The probability P1 that a human will search a field that is known to contain one target and lock onto the target with his foveal vision for a sufficient time (say, 1/4 of a second) is difficult to estimate, but Burle2 and Bailey3 suggest the relationship

P1 = 1 − exp[−(700/G)(at/as)t],

where at is the area of the target, as is the area to be searched, t is the time, and G is a congestion factor, usually between 1 and 10, that represents the density of image intersections. See the chapter on Human Vision for a discussion of the fovea. The total probability of detection is P1·Pd·η, where η is the overall degradation factor arising from noise.

References
1. P. Bijl and J. Valeton, "Bias-Free Procedure for the Measurement of the Minimum Resolvable Temperature Difference and Minimum Resolvable Contrast," Opt. Eng. 38(10), 1735–1742 (1999).
2. Burle Electro-Optics Handbook, Burle Industries, pp. 120–124, ftp://ftp.bartol.udel.edu/jmc/pmt/rca_eo_hb.pdf (1974).
3. H. Bailey, "Target Detection Through Visual Recognition: A Quantitative Model," Rand Corporation (Feb. 1970).

Additional References

P. Bijl, A. Toet, and J. Valeton, "Psychophysics and Psychophysical Measurement Procedures—Introduction," in The Encyclopedia of Optical Engineering, R. Driggers, Ed., Marcel Dekker (2003).
K. Brunnstroem, B. Schenkman, and B. Jacobson, "Object Detection in Cluttered Infrared Images," Opt. Eng. 42(2), 388–399 (2003).

Optical Blur Should Be Oversampled by FPA Pixels (Don't Overdo It!)
If one is attempting to both detect and locate a target, to maximize the information content of the image, the optimal relationship between the PSF and the 1D physical dimension of a pixel w is

w = (PSF dimension)/X.

The suggested value for X is 1.5 for target pointing but 2 to 4 for imaging.

Discussion
Aperture diameter, wavelength, and optics quality determine the spot size (defined as the point spread function, or PSF) on an FPA. The diameter of a point target imaged onto an FPA is set by the Airy disk relating the PSF diameter d to the aperture diameter D, the focal length f, and the wavelength of light λ:

d = αλf/D,

where α is a geometric shape factor. For a diffraction-limited, uniformly illuminated circular aperture, α = 2.44, and 84 percent of the energy is present in this disk. For non-diffraction-limited optics, it is larger, and it takes a different value if the aperture is square or triangular.

It is advantageous to oversample the optical blur spot, ideally so that pixelation causes no modulation transfer function (MTF) degradation. This has been the architecture employed in consumer cellphones, which have 80-cent plastic optics sampled by many-megapixel focal planes. This results in unused oversampling at best and a loss in sensitivity at worst. The oversampling should be optimized, but that is a difficult, nuanced subject. Recent advances in small-pixel FPAs can take this oversampling too far, with useless results.

For ATP applications, if the spot size d is much smaller than the pixel size w, then all of the energy is deposited in a single pixel, which maximizes the probability of detection Pd, but one has no idea where within the pixel the target spot is located. On the other hand, if the spot size is much larger than the pixel dimension, one can compute the centroid of the target signal across multiple pixels to obtain subpixel location accuracy. For faint targets, however, spreading the signal among multiple pixels will reduce the SNR such that the target may no longer be detectable. Balancing the desire to simultaneously detect and locate an unresolved target results in the optimal relationship between the pixel size and PSF with X = 1.5.1

However, there are some nuanced issues. First, most things imaged are not point sources, and optics are rarely diffraction limited. Second, most images are not created in a single wavelength but over a broad band pass. Third, and most important, the phasing or the spacing of the Airy disk over a given pixel cannot be guaranteed and is in fact random for almost all applications. Last, the smaller the pixel is, the fewer photons it will collect (dividing the Airy disk multiple times), resulting in lower signal to noise, likely higher crosstalk, and eventually poorer image quality.
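A short Python sketch of this sizing exercise follows; the MWIR wavelength and f-number used in the example are illustrative, not recommendations:

```python
def airy_diameter_um(wavelength_um, f_number, alpha=2.44):
    """Airy disk diameter at the focal plane, d = alpha * lambda * (f/D)."""
    return alpha * wavelength_um * f_number

def pixel_pitch_um(wavelength_um, f_number, x=1.5):
    """Pixel pitch from w = (PSF dimension)/X; X = 1.5 for pointing, 2-4 for imaging."""
    return airy_diameter_um(wavelength_um, f_number) / x

# Example: 5-um (MWIR cut-off) light through f/2.5 optics
d = airy_diameter_um(5.0, 2.5)
print(round(d, 1), round(pixel_pitch_um(5.0, 2.5), 1))  # ~30.5 um blur, ~20.3 um pixel
```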

The wavelength selected should be to the long side of the maximum radiant input or, in the worst case, the longest wavelength of the band pass. The latter is especially true for the mid-wave infrared (MWIR), as the maximum radiant flux for terrestrial scene temperatures is close to the 5-micron cut-off. In the long-wave infrared (LWIR), most terrestrial objects peak around 11 microns, so 12 microns is appropriate. For the visible and short-wave infrared (SWIR), it depends on the circumstances; it is suggested to use the longest wavelength of the band pass, not the middle wavelength.

There is a misplaced consumer notion that more pixels mean better resolution and better images. This was historically correct only when the optics supported the pixel count; it is no longer valid with the 1.3-micron and smaller pixels of many-megapixel FPAs in consumer electronics. The same is becoming true in the IR.

References
1. M. J. Tsai and F. A. Rogal, "Position Estimation of Optical Point Targets Using Staring Array Detectors," Proc. SPIE 1096 (1989).
2. R. Driggers, "Infrared detector size: how low should you go?" Opt. Eng. 51(6), 063202 (2012).
3. G. Holst and R. Driggers, "Small detectors in infrared system design," Opt. Eng. 51(9), 096401 (2012).
4. J. Jenkins et al., "Fabrication of small pitch, high definition (HD) 1kx2k/5μm MWIR focal-plane-arrays operating at high temperature (HOT)," Proc. SPIE 10177 (2018).
5. R. N. Clark, "Digital Cameras: Does Pixel Size Matter? Factors in Choosing a Digital Camera," www.clarkvision.com/articles/does.pixel.size.matter (2020).

Dwell in Cell
To detect a faint moving target, the optimal sensor integration time is the time it takes the target to cross a single pixel on the focal plane. This is known as the "dwell in cell" criterion.

Discussion
All sensor systems exhibit dark current dc, whose noise increases as the square root of the integration time dt, whereas the signal contribution from a target increases linearly with time. So long as the target remains in a single pixel, the SNR increases as the square root of the integration time:

SNR = signal·dt/√(dc·dt) ∝ √dt.

Thus, with a sufficiently long integration time, even the faintest target will (usually) eventually generate a sufficient SNR to be detected. This principle is used in many astronomical observations using low-noise CCD detectors (Fig. 3.5). However, in the case of high background levels, the background signal can saturate the pixel before the target signal exceeds the noise level.
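A minimal Python sketch of this behavior follows, assuming a dark-current-limited pixel; the electron rates and the 0.1-s crossing time are illustrative:

```python
import math

def snr_vs_integration(signal_rate, dark_rate, dwell_time, t):
    """SNR of a dark-current-limited pixel versus integration time t.

    signal_rate: target electrons/s while the target is in the pixel
    dark_rate: dark-current electrons/s
    dwell_time: time for the target to cross one pixel
    """
    signal = signal_rate * min(t, dwell_time)  # signal stops growing after crossing
    noise = math.sqrt(dark_rate * t)           # dark-current shot noise keeps growing
    return signal / noise

dwell = 0.1  # seconds for the target to cross one pixel (illustrative)
for t in (0.05, 0.1, 0.2):
    print(t, round(snr_vs_integration(100.0, 400.0, dwell, t), 2))
# SNR peaks at t = dwell (0.1 s) and falls for longer integration times
```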

Figure 3.5 SNR as a function of time showing the effect of a target crossing a pixel.

However, once the target moves out of the pixel, the only contribution is dark current, and hence the SNR begins to decrease. Thus, the optimal integration time is the time it takes the target to cross the pixel. Shorter integration times do not maximize signal collection, leading to lower SNRs, whereas longer integration times add only noise, thus reducing the SNR. If the SNR generated by the dwell-in-cell integration time is insufficient to detect the target, then multiple frames should be used to perform multiframe detection.1

Reference
1. J. N. Sanders-Reed, "Maximum Likelihood Detection of Unresolved Moving Targets," IEEE Trans. AES 34(3) (1998).

Probability of Detection Estimation
A good approximation for the computation of the probability of detection is1

Pd = (1/2){1 + erf[(Is − It)/(√2·In)]},

where Is, It, and In are the detector currents associated with the signal, threshold, and noise, respectively, and erf refers to the error function.

Discussion
Kamerman2 has a different approximation, as follows:

Pd ≈ (1/2){1 − erf[√(ln(1/Pfa)) − √(SNR + 1/2)]}

for SNR > 2 and Pfa between 10⁻¹² and 10⁻³. As in any other approximation, the limits of the application of the simplified version must be considered before using the rule. However, the rules shown above are quite broad in their range of application.

These rules assume "white noise," which means that the noise that is present in the system has equal amplitude at all frequencies. This approximation is required to develop any general results because the spectral

characteristics of noise in real systems are more complex and can, of course, take on an infinite number of characteristics. This assumes point-source detection, without consideration of resolution or "lines across a target." The detection of targets is described in mathematical terms in a number of contexts. However, there is commonality between the formulations used for radar, optical tracking, and other applications. The above equations are based on theoretical analysis of the detection of point sources in white noise and backed by empirical observations.

The calculation of the probability of detection Pd of a signal in white noise is unfortunately quite complex. The exact expression of this important parameter is2

Pd = (1/π) ∫[VT, ∞] x·exp[−(x² + A²)/2] { ∫[0, π] exp(xA·cos y) dy } dx,

where x and y are the integration variables, A = √(2·SNR), and VT = √[−2·ln(Pfa)].

The mention of erf refers to the following standard statistical function:

erf(x) = (1/√π) ∫[−x, x] exp(−t²) dt = (2/√π) ∫[0, x] exp(−t²) dt.

A useful approximation4 of erf shows that the complementary error function erfc(x) may be expressed as a polynomial:

erfc(x) = T exp(−Z² − 1.26551223 + T(1.00002368 + T(0.37409196 + T(0.09678418 + T(−0.18628806 + T(0.27886807 + T(−1.13520398 + T(1.48851587 + T(−0.82215223 + T(0.17087277)))))))))),

where Z is the absolute value of x, and

T = 1/(1 + 0.5Z).

Once erfc(x) has been computed, erf(x) is found by the simple formula erf(x) = 1 − erfc(x), which provides accurate results for x from 0 to 1.2. This approximation is useful for SNRs of 1 to 10 and is accurate to within 1.2 × 10⁻⁷. Figures 3.6 and 3.7 provide graphical relationships of Pd, SNR, and FAR (false alarm rate). Figure 3.8 shows related results for active sensing of two classes of targets: (1) specular or well-resolved rough targets, and (2) rough targets.5 Both examples in Fig. 3.8 assume a Pfa of 10⁻⁶.
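The polynomial is easily coded and checked against a library implementation. The following Python sketch implements the approximation as quoted above and compares it with the math module's erf; the test points are illustrative:

```python
import math

def erfc_approx(x):
    """Polynomial approximation to erfc(x) quoted in the text (Numerical
    Recipes form); fractional error on the order of 1e-7."""
    z = abs(x)
    t = 1.0 / (1.0 + 0.5 * z)
    y = t * math.exp(
        -z * z - 1.26551223
        + t * (1.00002368
        + t * (0.37409196
        + t * (0.09678418
        + t * (-0.18628806
        + t * (0.27886807
        + t * (-1.13520398
        + t * (1.48851587
        + t * (-0.82215223
        + t * 0.17087277)))))))))
    return y if x >= 0.0 else 2.0 - y  # erfc(-x) = 2 - erfc(x)

def erf_approx(x):
    """erf(x) = 1 - erfc(x)."""
    return 1.0 - erfc_approx(x)

# Compare against the library erf over the stated range of validity
for x in (0.0, 0.5, 1.0, 1.2):
    print(x, abs(erf_approx(x) - math.erf(x)))  # differences ~1e-7 or smaller
```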

Figure 3.6 The relation of Pd, FAR, and SNR. Reprinted with permission from Ref. 1.

Figure 3.7 Probability of detection and false alarm for various SNRs (3, 4, 5, and 6).

Figure 3.8 SNR and Pd for active sensors for two classes of surface roughness presuming a FAR of 10⁻⁶. The left panel describes performance for specular or well-resolved rough targets. The right panel shows performance against rough targets.5

References
1. Burle Electro-Optics Handbook, Burle Industries, p. 111, ftp://ftp.bartol.udel.edu/jmc/pmt/rca_eo_hb.pdf (1974).
2. G. Kamerman, "Laser Radar," p. 45, in Active Electro-Optical Systems, C. Fox, Ed., Vol. 6 of The Infrared and Electro-Optical Systems Handbook, J. S. Accetta and D. L. Shumaker, Eds., Infrared Information Analysis Center and SPIE Press (1993).
3. K. Seyrafi and S. Hovanessian, Introduction to Electro-Optical Imaging and Tracking Systems, Artech House, pp. 147–157 (1993).
4. W. Press et al., Numerical Recipes, Cambridge University Press, p. 164 (1986).
5. "Military Handbook: Quantitative Description of Obscuration Factors for Electro-Optical and Millimeter Wave Systems," DOD-HDBK-178(ER), Section 5, p. 10 (1986).


Limits of Position Estimation
When measuring the location of a point target using a quad cell (which is analogous to using 4 pixels on an FPA to sample the PSF distribution), the standard deviation of the angular position error σ can be estimated using

σ ≈ PSF/(2·SNR).

Discussion
It should be little surprise that the position error estimate is proportional to the PSF and inversely proportional to the SNR. This rule provides a quantitative assessment of that relationship. The limiting accuracy with which a target can be located using a quad cell is1

σ = √[(3π/16)² + (πn/8)²]·(λ/D)/SNR,

where (λ/D) is the angular resolution of the imaging system, λ is the wavelength of radiation, and D is the aperture diameter. The variable n is the angular subtense of the target divided by the angular resolution (λ/D) of the system. In other words, n is the target angular extent expressed in units of the system angular resolution. The first term, (3π/16)², is for an unresolved point target, whereas the second term, (πn/8)², is added in the case of extended targets. In both cases, we see that as the SNR increases, the angular error decreases. For an unresolved point target, the angular error reduces to

σ = (3π/16)·(λ/D)·(1/SNR).

We introduce the aperture shape parameter α (α = 2.44 for a circular aperture) and use the expression for the angular extent of the PSF, PSF = αλ/D, to get

σ = (3π/16)·(λ/D)·(1/SNR) = 0.589·(PSF/α)·(1/SNR) ≈ PSF/(2·SNR).

The above is valid for SNRs up to about 100. Beyond 100 or 200, a variety of other effects limit the computationally enhanced noise-equivalent angle. The limit in performance of such systems is about 1/100 of the pixel IFOV, although higher performance has been demonstrated in some specialized scientific systems after much calibration. Of course, the minimum pixel SNR must exceed about 6, or the target will not be reliably tracked. A number of other authors have derived similar or related expressions. From Held and Barry,3

subpixel resolution = 1.22πλ/(8D·SNR).

Compared to the fourth equation, the constant is π/8 = 0.39 instead of 3π/16 = 0.589. Shao and Colavita4 provide the same form as the third equation:

subpixel resolution = (3π/16)·λ/(D·SNR); constant = 0.59.

Stanton et al.5 provide this version:

LSB = (pixel IFOV)/SNR,

where LSB is the least significant bit of greatest resolution, and pixel IFOV is the field of view of a pixel. Pixel IFOV is the larger of diffraction, aberration blur, or the geometric pixel footprint. In most cases, a root sum of squares of these factors is the proper value to use. McComas7 provides this form:

subpixel resolution = 0.61λ/(D·SNRaverage).

Lastly, the measurement error in the angular separation of two objects is just like the rule above except that if the SNRs are approximately the same, then the effective SNR is

SNR = SNR(1 or 2)/√2,

and the angular resolution is

subpixel resolution = √2·(pixel field of view)/SNR.

Rowlands et al.8 have measured the likely noise-equivalent angle (NEA) of the fine guidance sensor for the James Webb Space Telescope. The data was obtained for a variety of pixel elements in a 4×4 detection array at wavelengths between 0.8 and 2.5 μm. We can compare the theory above with their results in the following way. As shown in Fig. 3.9, a typical value for the angular subtense of a single pixel in the detection array is 60 milliarcseconds (mas). Since four detectors are used, the value for λ/D in the analysis is 240 mas. For the case shown in the figure of 500 electrons, the SNR is 22 (assuming a photon-noise-limited, low-detector-noise design), and the estimate from the sixth equation for NEA is 6.5 mas. The figure shows that the simulation yields a value of about 6.8 mas. The figure also shows that the comparison of theory and results from the simulation is very good in both absolute value and the relationship between NEA and SNR.

Finally, we address the performance of a scanning system. In a scanning system, the blur circle produced by the target at the focal plane is scanned over the FPA element. The FPA element can be sampled faster than the time it takes the blur to move over the FPA. This provides a rise-and-fall profile with which the location can be calculated with accuracy greater than the pixel footprint or blur extent. The higher the SNR is, the faster the samples can be added and the more accurate the amplitude level will be, both of which increase the accuracy of the rise-and-fall profile. Lloyd6 points out that the accuracy for cross-scan, or when the only knowledge is that the target location falls within a detector angular subtense (DAS), is

subpixel resolution = DAS/√12.
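A short Python sketch of the quad-cell estimate follows; the wavelength, aperture, SNR, and target extent n used in the printout are illustrative:

```python
import math

def nea(wavelength, aperture, snr, n=0.0):
    """Angular position error (rad) from the quad-cell result:
    sigma = sqrt((3*pi/16)**2 + (pi*n/8)**2) * (lambda/D) / SNR,
    where n is the target extent in units of lambda/D (n = 0 for a point)."""
    k = math.sqrt((3.0 * math.pi / 16.0) ** 2 + (math.pi * n / 8.0) ** 2)
    return k * (wavelength / aperture) / snr

# Point target: 1-um light, 10-cm aperture, SNR of 20
print(nea(1e-6, 0.1, 20.0))         # ~2.9e-7 rad, i.e., ~0.06 arcsec
print(nea(1e-6, 0.1, 20.0, n=4.0))  # extended target: the error grows with n
```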

Figure 3.9 Comparison of theory and simulation for NEA. The thick curves derive from the sixth equation in this rule. The thin lines derive from Rowlands et al.8 The values '36,' '60,' and '84' refer to the size of the pixels used in sampling the light, in milliarcseconds.

References
1. G. A. Tyler and D. L. Fried, "Image-position error associated with a quadrant detector," J. Opt. Soc. Am. 72(6), 804–808 (1982).
2. J. Miller, Principles of Infrared Technology, Kluwer, pp. 60–61 (1994).
3. K. J. Held and J. D. Barry, "Precision Optical Pointing and Tracking from Spacecraft with Vibrational Noise," Proc. SPIE 616 (1986).
4. M. Shao and M. Colavita, "Long-Baseline Optical and Infrared Stellar Interferometry," Annual Rev. Astronomy and Astrophysics 30, 457–498 (1992).
5. C. Stanton et al., "Optical Tracking Using Charge Coupled Devices," Opt. Eng. 26, 930–938 (Sept. 1987).
6. M. Lloyd, "Fundamentals of Electro-Optical Imaging Systems Analysis," pp. 42–44, in Electro-Optical Systems Design, Analysis, and Testing, M. Dudzik, Ed., Vol. 4 of The Infrared and Electro-Optical Systems Handbook, J. S. Accetta and D. L. Shumaker, Eds., Infrared Information Analysis Center and SPIE Press (1993).
7. B. K. McComas and E. Friedman, "Wavefront Sensing for Deformable Space-Based Optics Exploiting Natural and Synthetic Guide Stars," Opt. Eng. 41(8) (2002).
8. N. Rowlands, J. Hutchings, R. G. Murowinski, and R. Alexander, "NGST fine guidance sensor," Proc. SPIE 4850 (2003).

Multisensor Tracking
When multiple sensors are used to track an object, the optimal method of combining detections depends on the available bandwidth between sensors. There are four recognized ways of combining detections from multiple sensors:1–7
1. Independent tracking (no fusion),
2. Measurement fusion,
3. Track or hierarchical fusion, and
4. Central fusion.

Discussion

Independent tracking involves, as the name implies, each sensor performing its task independent of all other sensors.

Measurement fusion combines raw data from multiple sensors to generate a composite data set prior to detection and tracking. This implies very high bandwidth and low latency (usually all sensors are located on a common platform). A classic example of measurement fusion combines images from co-aligned red, green, and blue (RGB) focal planes to form a single composite image prior to detection processing. More complex methods might involve combining 2D imagery with a laser rangefinder to provide azimuth, elevation, and range data for detection and tracking.

Track or hierarchical fusion is used when bandwidth is limited and latency is long. In this case, each sensor performs its own tracking and periodically provides a track file update to a central track fusion processor. The central processor may or may not provide feedback in the form of updated, composite track files to the individual sensors. A classic example would be widely distributed sensors, perhaps on separate aircraft. In this case, each sensor must be controlled, and the pointing updated to keep the target in the FOV. The latency of communicating between such distributed sensors is often sufficiently long that good LOS pointing to keep the target in the FOV cannot be maintained if the sensor requires continuous updating from other sensors.

Central fusion can be used if all sensors are co-located with high bandwidth and low latency. In this case, each detection from each sensor is supplied to a common tracking function. Each detection can be weighted based on the accuracy of the individual sensor, but all detections are assigned to the track(s). A classic example of this approach (described in Sanders-Reed1) might involve multiple co-aligned sensors looking through a common aperture. The pointing control for the aperture is provided by the central track function, using appropriately weighted input from all sensors.
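As a concrete illustration of weighting detections by individual-sensor accuracy, the sketch below fuses co-registered measurements by inverse-variance weighting. This is a common textbook weighting, not the specific algorithms of the references, and the azimuth values are invented for the example:

```python
def fuse(measurements, sigmas):
    """Inverse-variance weighted combination of co-registered measurements.

    measurements: per-sensor estimates of the same quantity (e.g., azimuth)
    sigmas: 1-sigma accuracy of each sensor
    Returns the fused estimate and its 1-sigma uncertainty.
    """
    weights = [1.0 / s**2 for s in sigmas]
    est = sum(w * m for w, m in zip(weights, measurements)) / sum(weights)
    sigma = sum(weights) ** -0.5
    return est, sigma

# Two sensors reporting azimuth (deg); the more accurate sensor dominates
print(fuse([10.2, 10.8], [0.1, 0.4]))  # ~(10.235, 0.097)
```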

References
1. J. N. Sanders-Reed, "Multi-Target, Multi-Sensor, Closed Loop Tracking," Proc. SPIE 5430 (2004).
2. O. E. Drummond and S. S. Blackman, "Challenges of Developing Algorithms for Multiple Sensor, Multiple Target Tracking," Proc. SPIE 1096 (1989).
3. O. E. Drummond, "Multiple Sensor Tracking with Multiple Frame, Probabilistic Data Association," Proc. SPIE 2561 (1995).
4. O. E. Drummond, "Track Fusion with Feedback," Proc. SPIE 2759 (1996).

5. O. E. Drummond, "Tracklets and a Hybrid Fusion with Process Noise," Proc. SPIE 3163 (1997).
6. O. E. Drummond, "Hybrid Sensor Fusion Algorithm Architecture and Tracklets," Proc. SPIE 3163 (1997).
7. O. E. Drummond, "On Track and Tracklet Fusion Filtering," Proc. SPIE 4728 (2002).

Johnson Criteria
The Johnson criteria1 are a well-accepted industry norm describing the resolution required for a well-trained human observer to detect, recognize, and identify (DRI) an object in a single still image. The original Johnson criteria (Table 3.1) give the number of cycles (1 cycle = 2 pixels) across an object needed to achieve a 50-percent success rate (to extend the Johnson criteria to other success rates, see the rule "Extension of the Johnson Criteria to Other than 50 Percent").

Discussion

To properly apply this rule, we begin with a few critical definitions and explanations:
- Detection is the recognition of an anomaly, something different than the background.
- Orientation applies to nonsymmetric objects and represents the ability to determine, at least crudely, the direction of the long axis of an object.
- Recognition is the ability to determine the class of an object, e.g., vehicle, person, building, and so on.
- Identification is the ability to determine the specific type of object within a class, e.g., tank, truck, or car within the class "vehicle," or to differentiate a uniformed, armed soldier from a civilian in the class "person." This task does not usually include identifying a specific person or vehicle.

A line pair is a particular way to define spatial resolution. It is equal to a dark bar and a white space, often called one cycle, across the critical dimension. Neophytes sometimes confuse this with the number of pixels, but it is not the same: it can be roughly assumed to be twice the number of pixels, assuming perfect phasing across the target, focal plane, and display. When 4.0 line pairs are quoted, it is equal to identifying a pattern with 4 bars and equal-width spaces between them. This requires a minimum of the footprints of 8 pixels across the target and often as many as 11 (8.0/0.7).

Table 3.1 Johnson criteria for target detect, orient, recognize, and identify.

Capability       Cycles across Target
Detection        0.75
Orientation      2.00
Recognition      3.00
Identification   6.00

The standard USAF resolution test pattern13 shows the typical resolution pattern and the alternating dark and white lines mentioned above (Fig. 3.10). When discussing DRI and the number of cycles across a non-symmetric object such as a person, the "critical" dimension to use is usually the smaller dimension (the width of the person, not their height). Resolution as described above will result in proper performance by 50 percent of the observers asked to perform the observational task under nominal conditions. More detail on the expected number of line pairs on a specific target is contained in Table 3.2, from Johnson.1 If you are unsure about the number of cycles required and need to do a calculation, use the nominal values in Table 3.3 as a guide.

The Johnson criteria were developed by a military analyst viewing single still imagery to identify military forces. In this case, the "trained observer" is indeed well trained, familiar with the type of imagery (backgrounds, resolution, etc.), and looking for well-known objects. As a result, many practitioners tend to round up slightly, say, to 1 full cycle for detection and 4 cycles for recognition. Orientation does not seem to be commonly used in current literature or practice, hence the reference to DRI instead of DORI.

The Johnson criteria were developed at a time when photographic film was the standard for surveillance applications. As a result, degradation factors common to modern digital electronic systems, including sensor pixelization, display resolution limits, and other factors, must be compensated for prior to applying the Johnson criteria.

Figure 3.10 USAF resolution test target.

Table 3.2 More detail on the expected number of line pairs on a target.2

Target          Detection  Orientation  Recognition  Identification
Truck           0.9        1.25         4.5          8.0
M48 tank        0.75       1.2          3.5          7.0
Stalin tank     0.75       1.2          3.3          6.0
Centurion tank  0.75       1.2          3.5          6.0
Half-track      1.0        1.5          4.0          5.0
Jeep            1.2        1.5          4.5          5.5
Command car     1.2        1.5          4.3          5.5
Soldier         1.5        1.8          3.8          8.0
105 Howitzer    1.0        1.5          4.8          6.0

Table 3.3 If you are unsure and need to do a calculation, use the corresponding nominal value as a guide.

Task                  Typical Minimum Required  Nominal Value (When More Detail Is Unknown)  Typical Maximum Required
Detection             N/A                       0.75                                         1.5
Classification        1                         2                                            3
Reading alphanumeric  2.5                       2.8                                          3.5
Recognition           2.5                       3                                            4
Identification        5                         6                                            N/A

The Johnson criteria assume grey-scale imagery. High-contrast color (or multispectral) imagery could (depending on target and background characteristics) produce significantly different results. In a related vein, the Johnson criteria do not directly address SNR or contrast issues. Obviously, a high-contrast target will be easier to detect, whereas a zero-contrast target may be impossible to detect.

Significant work has been done in the years since Johnson first published his results. That work has confirmed his results but also extended them in many areas: to probabilities other than 50 percent (see the rule "Extension of the Johnson Criteria to Other than 50 Percent"), and to address levels and types of background clutter, SNR and blur, aspect ratio and viewing angle, visible light versus infrared, the importance of specific distinguishing characteristics, and weather. As of publication, the most recent and comprehensive review and summary of this material was by Sandia National Laboratories.6

The advent of full motion video (FMV) is a game changer for both human and computer DRI. At a simple detection level, an unresolved point target of low contrast may be impossible to detect in a single image but easy to detect as it moves in a video sequence. Similarly, the human brain does a wonderful job of integrating across a video sequence and subconsciously combining information from multiple frames (super-resolution). As a result, expect a performance improvement when using FMV.

Finally, as noted above, the Johnson criteria were developed for military objects and may not apply for all classes of objects. Power line and cable detection come to mind. A cable may be severely subpixel (perhaps filling 10 percent of a pixel) in the cross-cable direction, but because it extends tens or hundreds of pixels along the line, it is still easily detectable.

The Johnson criteria use the highest spatial frequency visible FJ at the apparent target-to-background contrast to quantify the target acquisition range. Range is proportional to FJ, with the proportionality depending on task difficulty. To use the criteria correctly, target area and contrast refer to averages over the group of targets involved in the scenario. The Johnson criteria are also used to quantify sensor image quality. First, one calculates the target-to-background contrast and determines the highest spatial frequency visible (to a human) through the entire system (telescope, focal plane, electronics, and display) with all of its noise sources. Then, the range is given for a particular observation task based on the Johnson criteria. The actual range depends on task difficulty, which is

determined by experiment and/or experience. Although the Johnson criteria have some difficulty when noise is present (as there is spread in the data), since the Johnson criteria are applied at a system level to the final display image, noise is nevertheless included in the criteria.

The Johnson criteria are inherent to the classic FLIR performance codes of NVTHERM-IP and the Night Vision Integrated Performance Model (NV-IPM) and have become part of the basic lexicon in the electronic imaging community. These codes were developed by the Night Vision and Electronic Sensors Directorate [a part of U.S. Army Communication–Electronics Command (CECOM)], which has a rich history in this field. Its staff were pioneers of the basic concept of using the number of resolution elements across a target to detect, recognize, or identify it. Historically, the subject first appeared in 1940s literature, authored by Albert Rose, J. Coltman, and Otto Shade, involving research into the human eye and perception. In the late 1950s, John Johnson (of the U.S. Army) experimented with the effect of resolution on one's ability to perform target detection, orientation, recognition, and identification functions using image intensifiers. This was followed by Johnson, Ratches, Lawson, and others from the 1950s through 1970s. The effects of signal to noise were added in the 1970s by Rosell, Wilson, and Gerhart, and Vollmerhausen added to the models in the 1990s. Driggers,12 Vollmerhausen, and others have refined this concept in the early twenty-first century.

The work continues; for example, the U.S. Army is developing a new metric based on the Johnson criteria to accommodate digital imagery. The Johnson criteria use the highest spatial frequency seen through the sensor and display at the apparent target-to-background contrast to quantify image "quality." The Johnson criteria relate to the average contrast at one frequency, so there are problems (e.g., with sampled imagers, image boost, and digital filters) that make the Johnson criteria conservative for 2D imaging sensors. The new metric will attempt to accommodate these situations. Called the target task performance (TTP) metric, it is equal to2

TTP = ∫ [CTGT·MTF(ξ)/CTF(ξ)]^(1/2) dξ,

where CTGT is the contrast of the target, ξ is the sampling frequency, MTF(ξ) is the modulation transfer function as a function of ξ, and CTF(ξ) is the contrast transfer function as a function of ξ. Then the range is calculated from

range = (√ATGT/Nrequired)·TTP,

where ATGT is the area of the target and Nrequired is the number of cycles required. Nrequired represents the task difficulty and desired probability of success; the value of Nrequired is established experimentally for a particular target set and task.

Johnson never addressed reading alphanumerics. However, this has become increasingly important for security, identification, and surveillance systems, and a body of work is developing that indicates that the required number of cycles is about 2.5 for a 50-percent probability.4,5

Last, there has been much discussion and controversy about the actual number of line pairs needed to do the functions, depending on clutter, spectral region, quality of images, test control, display brightness, and a priori knowledge. However, rarely have there been suggestions that the above numbers are incorrect by more than a factor of 3.
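A first-order range calculator based on the Johnson criteria is sketched below in Python. It assumes 2 pixels per cycle, ignores the atmosphere and system MTF, and the target size and IFOV used in the example are illustrative:

```python
JOHNSON_N50 = {"detection": 0.75, "recognition": 3.0, "identification": 6.0}

def johnson_range_km(target_dim_m, ifov_mrad, task):
    """Range (km) at which a staring sensor puts N50 cycles across the target.

    Geometric estimate only: a target of dimension w meters subtends
    w/R milliradians at range R kilometers, and one cycle needs 2 pixels.
    """
    pixels_needed = 2.0 * JOHNSON_N50[task]
    return target_dim_m / (pixels_needed * ifov_mrad)

# 3-m critical dimension viewed with a 0.05-mrad IFOV
for task in JOHNSON_N50:
    print(task, round(johnson_range_km(3.0, 0.05, task), 1), "km")
# detection ~40 km, recognition ~10 km, identification ~5 km
```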

1. J. Johnson, "Analysis of image forming systems," in Image Intensifier Symposium, AD 220160, Warfare Electrical Engineering Department, U.S. Army Research and Development Laboratories, Ft. Belvoir, VA, pp. 244–273 (1958).
2. J. Howe, "Electro-Optical Imaging System Performance Prediction," pp. 92, 99, in Electro-Optical Systems Design, Analysis, and Testing, M. Dudzik, Ed., Vol. 4 of The Infrared and Electro-Optical Systems Handbook, J. S. Accetta and D. L. Shumaker, Eds., Infrared Information Analysis Center and SPIE Press (1993).
3. R. H. Vollmerhausen, E. L. Jacobs, and R. G. Driggers, "New Metric for Predicting Target Acquisition Performance," Opt. Eng. 43(11) (2004).
4. J. Miller and J. Wiltse, "Resolution Requirements for Reading Alphanumerics," Opt. Eng. 42(3), 846–852 (2003).
5. J. Wiltse, J. Miller, and C. Archer, "Experiments and Analysis on the Resolution Requirements for Alphanumeric Readability," Proc. SPIE 5076 (2003).
6. T. S. Sjaardema, C. S. Smith, and G. C. Birch, "History and Evolution of the Johnson Criteria," Sandia Report SAND2015-6368 (2015).
7. www.zieg.com/links/sensors/NVThermManual5.pdf (2019).
8. L. Biberman, "Introduction: A Brief History of Imaging Devices for Night Vision," in Electro-Optical Imaging: System Performance and Modeling, L. Biberman, Ed., SPIE Press, pp. 1-11 through 1-16 (2000).
9. physicstoday.scitation.org/doi/abs/10.1063/1.3069240?journalCode=pto (2019).
10. J. Ratches et al., "Night Vision Laboratory Static Performance Model for Thermal Viewing Systems," ECOM Report ECOM-7043, apps.dtic.mil/dtic/tr/fulltext/u2/a011212.pdf (1975).
11. G. Gerhart et al., "The Evaluation of Delta T Using Statistical Characteristics of the Target and Background," Proc. SPIE 1969, 11–20 (1993).
12. R. G. Driggers and M. H. Friedman, Introduction to Infrared and Electro-Optical Systems, 2nd edition, Artech House (2012).
13. "Military Standard Photographic Lenses," MIL-STD-150A, Sec. 5.1.1.7 (1959).
14. "Military Handbook: Quantitative Description of Obscuration Factors for Electro-Optical and Millimeter Wave Systems," DOD-HDBK-178(ER), Fig. 5-5 (1986).
15. Burle Electro-Optics Handbook, Burle Industries, p. 121, ftp://ftp.bartol.udel.edu/jmc/pmt/rca_eo_hb.pdf (1974).

Extension of the Johnson Criteria to Other than 50 Percent
The Johnson criteria provide for a 50-percent probability of accomplishing target acquisition tasks (N50). To estimate higher or lower probabilities, one can use this empirical curve fit:1–3

P(N) = (N/N50)^E/[1 + (N/N50)^E],

where N50 is the number of cycles needed to be resolved across the target dimension for 50 percent of the observers to get the target choice correct (e.g., the Johnson criteria values of 0.75 cycles for detection, 3 cycles for recognition, and 6 cycles for identification), N is the number of cycles actually resolved across the target, and E is an empirical scaling factor equal to

E = 2.7 + 0.7(N/N50).

Discussion

The Johnson criteria were originally developed to yield the probability of detection, recognition, and identification (DRI) of military targets with a 50-percent correct probability. Much work has been done to extend the 50 percent to higher numbers (e.g., for weapon targeting systems) and lower probabilities (e.g., for general surveillance systems). Frequently, the engineer will be asked for various levels of probability. The above equations are based on curve fits to empirical data. A similar equation is provided by Vollmerhausen.4 The difference is that MTFcutoff/K replaces N/N50, where K is fitted by experimental data and the modulation transfer function, a measure of the optical performance of the system. Table 3.4 can be used for some specific probabilities and represents modernized data from Holst.5

The equation assumes that the observer has plenty of time for the observation task. Also, the term probability has a unique meaning in this case. Given that a group of observers try to detect, recognize, or identify each vehicle in a group of targets, the probability is the fraction of correct answers out of the total number of tries.

Table 3.4 Suggested multiplier from N50 to Nx. For example, to go from N50 to N95, multiply N50 by 2.

Probability Nx   Multiplier to Go from N50 to the Probability on the Left
0.98             >2 and ≤3
0.95             2
0.80             1.5
0.70             1.2
0.50             1
0.30             0.75–0.80
0.10             0.5

Figure 3.11 Probability of detection, classification, recognition, and identification as a function of the number of pixels across the critical dimension.

The empirical equation for E has varied over the years, and related but different versions may be found in the references. The version used here was the most recent and acceptable as of the publication of this book. However, authorities are refining this, so the equation might experience slight changes in the future.

Figure 3.11 shows the performance described by the above equation as a function of N for each of the tasks. On the horizontal axis is the number of resolution elements across the critical dimension, and the vertical axis shows the probability of successfully completing the task of detection, classification, recognition, or identification. This is based on the number of resolution elements (not line pairs), with N50 of 1.5 for detection, 3.0 for classification, 6.0 for recognition, and 12.0 for identification. Thus, when there are 3 cycles across the critical dimension of a target, there are 6 pixels, and the plot shows a 50-percent probability of classification. When there are 10 pixels, the probability of correct recognition jumps to almost 90 percent.
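The curve fit is easily scripted. The following Python sketch evaluates the target transfer probability function above for the recognition task (N50 = 3); the sample cycle counts are illustrative:

```python
def p_task(n, n50):
    """Probability of accomplishing a task when n cycles are resolved,
    given the task's N50, using P = r**E / (1 + r**E), r = n/n50,
    E = 2.7 + 0.7*(n/n50)."""
    e = 2.7 + 0.7 * (n / n50)
    r = (n / n50) ** e
    return r / (1.0 + r)

# Recognition (N50 = 3): 50% at 3 cycles, rising steeply with more cycles
for n in (1.5, 3.0, 6.0):
    print(n, round(p_task(n, 3.0), 2))  # ~0.11, 0.50, ~0.94
```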

References
1. J. Johnson and W. Lawson, "Performance modeling methods and problems," Proc. IRIS Imaging Systems Group (1974). (This is the original reference publication but is not readily available; the following two references are more accessible.)
2. J. Howe, "Thermal Imaging Systems Modeling—Present Status and Future Challenges," Proc. SPIE 2269, 538–550 (1994).

3. T. A. Sjaardema, C. S. Smith, and G. C. Birch, "History and Evolution of the Johnson Criteria," Sandia Report SAND2015-6368, Sandia National Laboratories, prod-ng.sandia.gov/techlib-noauth/access-control.cgi/2015/156368.pdf (2020).
4. R. Vollmerhausen et al., "Influence of Sampling on Target Recognition and Identification," Opt. Eng. 38(5), 763 (1999).
5. G. Holst, CCD Arrays, Cameras, and Displays, JCD Publishing, pp. 364–365 (1996).
6. J. Howe, "Electro-Optical Imaging System Performance Prediction," p. 92, in Electro-Optical Systems Design, Analysis and Testing, M. Dudzik, Ed., Vol. 4 of The Infrared and Electro-Optical Systems Handbook, J. S. Accetta and D. L. Shumaker, Eds., Infrared Information Analysis Center and SPIE Press (1993).
7. "Night Vision Thermal Imaging Systems Performance Model: User's Manual & Reference Guide," U.S. Army Night Vision and Electronic Sensors Directorate, Modeling & Simulation Division, Fort Belvoir, VA, www.zieg.com/links/sensors/NVThermManual5.pdf (2020).
8. FLIR92 User's Manual, U.S. Army.
9. Software User's Manual for TV Performance Modeling, p. A36 (Sept. 1991).
10. R. Harney, "Information-Based Approach to Performance Estimation and Requirements Allocation in Multisensor Fusion for Target Recognition," Opt. Eng. 36(3) (1997).
11. R. Driggers et al., "Targeting and Intelligence Electro-Optical Recognition Modeling: A Juxtaposition of the Probabilities of Discrimination and the General Image Quality Equation," Opt. Eng. 37(3), 789–797 (1998).
12. J. Ratches et al., "Night Vision Laboratory Static Performance Model for Thermal Viewing Systems," ECOM Report ECOM-7043 (1975).

Identification and Recognition Improvement for Interpolation
The performance increase depends on the conditions and the interpolation technique, and results vary widely. Generally, identification range improves by 20 to 65 percent with simple interpolation.

Discussion

The authors of this book have long heard anecdotal evidence of increased DRI ranges as a result of simple interpolation between pixels to pseudo-enhance the display resolution. The above rule is based on one of the few actual tests of this concept.1

Pixel interpolation is very useful when combined with electronic zoom (e-zoom). E-zoom makes the picture bigger and overcomes the limited MTF of the display and eye. However, e-zoom is often the trivial duplication of pixels. When the duplicated group of pixels becomes easily visible to the eye, the pixel structure corrupts the original image; when the pixel structure is visible, the eye cannot spatially integrate the underlying image. To avoid this situation, interpolation is used, which can make a substantial difference in the operation of real-world sensors.

For example, imagine the electronic zoom being so large that the pixelation is readily apparent. In this case, the loss of resolution (MTF compression) from the display or video recorders has little impact on the final system resolution, ignoring spurious response. This becomes especially important when dealing with surveillance and security legal criteria and post-surveillance processing issues, as these situations involve factors other than detection and are frequently recorded or digitally transmitted.

Many algorithms exist for obtaining super-resolution, which take pixel interpolation a step further. Rather than simply interpolating, they increase the spatial content of the signal frame. This is typically done by sampling a scene faster than the displayed rate and imposing a slight pixel offset (sometimes using the natural optical flow, a deliberately imposed "dither," target motion, or jitter to accomplish the offset). For point sources (or sharp edges), a Gaussian blur across several pixels can be used to calculate a position much more accurately than the IFOV and to provide super-resolution accuracy.

One of the authors (Sanders-Reed) has worked extensively with super-resolution techniques.6 The utility appears to depend substantially on the "consumer" of the enhanced video. Given a raw video sequence and a super-resolution-enhanced video sequence, about the most he (as the "consumer" of the video) is comfortable stating is that "the two sequences appear different, but I have been unable to extract additional information from the super-resolution video." This appears to be because the human eye/brain combination does a very good job of integrating information across a video sequence and achieving the same net effect. However, given a single still image from the raw video sequence and a single image from the super-resolution-enhanced video, there can be a noticeable improvement. The bottom line appears to be that for a human "consumer" of video, super-resolution may be of marginal value, but for algorithms that operate on a single image at a time, super-resolution may provide improved performance.

The above discussion assumes that resolution is the limiter, not noise. Super-resolution and pixel interpolation do not help to reduce noise.
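The difference between trivial pixel replication and interpolation is easy to demonstrate. The following Python/NumPy sketch implements both a replication e-zoom and a bilinear-interpolation e-zoom; it is a minimal illustration, not the algorithm tested in Ref. 1:

```python
import numpy as np

def ezoom_replicate(img, k):
    """Trivial e-zoom: duplicate each pixel k x k times; blocky at large k."""
    return np.repeat(np.repeat(img, k, axis=0), k, axis=1)

def ezoom_bilinear(img, k):
    """E-zoom by bilinear interpolation, which hides the blocky pixel
    structure so the eye can spatially integrate the underlying image."""
    rows, cols = img.shape
    r = np.linspace(0.0, rows - 1.0, rows * k)  # fractional row coordinates
    c = np.linspace(0.0, cols - 1.0, cols * k)  # fractional column coordinates
    r0 = np.floor(r).astype(int); r1 = np.minimum(r0 + 1, rows - 1)
    c0 = np.floor(c).astype(int); c1 = np.minimum(c0 + 1, cols - 1)
    fr = (r - r0)[:, None]  # fractional part, column vector
    fc = (c - c0)[None, :]  # fractional part, row vector
    top = img[np.ix_(r0, c0)] * (1 - fc) + img[np.ix_(r0, c1)] * fc
    bot = img[np.ix_(r1, c0)] * (1 - fc) + img[np.ix_(r1, c1)] * fc
    return top * (1 - fr) + bot * fr

img = np.arange(9.0).reshape(3, 3)
print(ezoom_replicate(img, 2))          # blocky 6x6 result
print(ezoom_bilinear(img, 2).round(2))  # smooth 6x6 result
```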

References
1. R. Vollmerhausen and R. Driggers, Analysis of Sampled Imaging Systems, SPIE Press, pp. 105–108 (2000).
2. J. Schuler and D. Scribner, "Dynamic Sampling, Resolution Enhancement, and Super Resolution," in Analysis of Sampled Imaging Systems, R. Vollmerhausen and R. Driggers, SPIE Press, pp. 125–138 (2000).
3. J. Miller, Principles of Infrared Technology, Kluwer, pp. 60, 61, 292 (1994).
4. N. Nguyen and P. Milanfar, "A Computationally Efficient Superresolution Image Reconstruction Algorithm," IEEE Trans. Image Processing 10(4) (2001).
5. Private communications with Rich Vollmerhausen (2003).
6. D. R. Gerwe and P. Menicucci, "A Real Time Superresolution Image Enhancement Processor," Advanced Maui Optical and Space (AMOS) Surveillance Technologies Conference, amostech.com/2009-technical-papers (2020).

Resolution Requirement
1. For scanning sensors, the required system cut-off spatial frequency in cycles per milliradian (generally determined by the detector cut-off spatial frequency) can be estimated from

Fcutoff = 1.5·N·R/w.

2. For staring sensors, the required half-sample frequency in cycles per milliradian can be estimated from

Fhalf-sample = 0.85·N·R/w,

where Fcutoff is the required system resolution in units of cycles per milliradian (1/[detector active-area subtense]), Fhalf-sample is the half-sample rate of the sensor (0.5/[detector pitch subtense]), N is the required number of cycles or line pairs across the target (typically 1 for detection, 3 for recognition, and 6 for identification), w is the square root of the target projected area in meters, and R is the slant range in kilometers.

Discussion

This rule assumes that the resolvable spatial frequency is 65 percent of the system's cut-off frequency for scanning systems. In some cases, the resolvable frequency can be nearer the cut-off frequency, so the factor of 1.5 can vary from 1.5 down to 1.2. For staring sensors, the factor can vary from 0.7 to 1.0; that is, the resolved frequency is normally beyond the half-sample rate (sometimes called the Nyquist frequency).

The required resolution of an electro-optical system supporting a human operator depends on the target recognition (or detection) range and the number of pixel pairs or line pairs required for a human to perform the function at a given level. In general, humans can resolve frequencies in the range of 60–80 percent of the system's resolution. The Johnson criteria rule in this chapter describes the number of line pairs needed to resolve various types of targets. A system that can resolve about 5 line pairs across a target can provide very effective target recognition. If you have a 3-m target at a range of 1 km, using this rule, we obtain

FR = 1.5·N·R/w = (1.5)(5)(1)/3 = 2.5.

Therefore, the required number of cycles per milliradian for recognition of the target is about 2.5.

References

1. B. Tsou, "System Design Considerations for a Visually Coupled System," pp. 520–521, in Emerging Systems and Technologies, S. Robinson, Ed., Vol. 8 of The Infrared and Electro-Optical Systems Handbook, J. S. Accetta and D. L. Shumaker, Eds., Infrared Information Analysis Center and SPIE Press (1993).

2. R. Vollmerhausen and R. Driggers, Analysis of Sampled Imaging Systems, SPIE Press, pp. 92–95 (2000).
3. "Night Vision Thermal Imaging Systems Performance Model: User's Manual & Reference Guide," U.S. Army Night Vision and Electronic Sensors Directorate, Modeling & Simulation Division, Fort Belvoir, VA, www.zieg.com/links/sensors/NVThermManual5.pdf (2020).

Resolution Required to Read a Letter
To reliably read an English alphanumeric character (letter), the system should clearly display at least 2.8 cycles along the letter's height.

Discussion
By definition, one's ability to read an alphanumeric depends on resolution. Generally, letters have a high probability of being correctly read when the human viewing the letter (whether printed or on a display) can resolve between 2.5 and 3.0 cycles across the height of the letter. The curve for identifying a letter (Fig. 3.12) is very steep between 2 and 3 cycles: with 2 or fewer cycles, a letter is practically indistinguishable, and it is almost always readable when 3 or more cycles are displayed, as shown here. One of the authors (Miller) has conducted several field experiments leading to the results presented in the figures. You can prove this for yourself by observing the letters in Fig. 3.13: it is almost impossible to read the letters in the low-resolution image, yet it is easy to read those in the high-resolution image. Of course, the ability to resolve two or three cycles across the letter depends on its distance from the observer and the quality of the imaging system (eyes).

This rule is based on English block letters only; it does not apply to highly detailed language characters, such as Kanji, Hindi, and Arabic symbols. A cycle defines the resolution required to separate high-contrast white and black lines (generally 2 pixels). Assuming a Kell2 factor (which defines the effective resolution of a discrete display device required to avoid beat-frequency patterns) of approximately 0.7, digital imaging requires about 4 cycles, or 8 pixels, across the height of the letter.

Figure 3.12 The numbers of correct readings of an alphanumeric as a function of cycles. The plot represents data from 50 individuals viewing three different letters (C = diamonds, F = triangles, and B = squares).

Figure 3.13 Images of letters at various resolutions. When these were originally displayed, they were of 2.0, 2.5, 3.0, and 4.0 cycles, respectively; however, they have suffered an additional resolution loss due to the frame grabbing and book printing process.

Figure 3.14 Contour plot of the Fourier transform of the entire Helvetica alphanumeric set, based on a 32- × 32-pixel frame where the letter height occupied the vertical 32 pixels.

In addition to the empirical evidence of Figs. 3.12 and 3.13, theoretical analysis supports these assertions. Figure 3.14 clearly indicates that the power spectral content of English alphanumerics is concentrated below four cycles per letter height. The requirement for a human to successfully read these alphanumerics is somewhat lower and is demonstrated by empirical results to be about 2.5 cycles for roughly 50-percent success and 3.0 cycles for a much higher success rate. Figure 3.15 is a cumulative plot of the power in Fourier space for the entire alphabet and numerals in the Helvetica font. This represents the power spectral content of the differences between the characters. About half of this distribution falls below 2.5 cycles, indicating that most letters can be distinguished with such resolution.

Figure 3.15 Cumulative distribution of the standard deviation of the 36 alphanumeric characters in the Helvetica font.

Another interesting fact: the number of photons (ph) per second entering your eye from a single printed letter on a piece of paper (not a display) is ~10⁹. The assumptions made for this calculation are
1. normal reading light intensity of 500–1000 lux, which is almost exactly 1 W/m² at the maximum eye sensitivity (555 nm);
2. a distance from the reading material to the eye of 35 cm;
3. an area of a typical 12-point-font letter of 2 mm²; and
4. an iris diameter when reading of 5 mm.
Working through the numbers results in a rate of 2.0 × 10⁹ ph/sec. Of course, our reading involves "negative" detection; we read black on white, so the actual letters send no signals to the reader. To deal with this situation, the calculation assumes that a black piece of paper has one white letter on it.

References
1. J. Miller and J. Wiltse, "Resolution Requirements for Alphanumeric Readability," Opt. Eng. 42(3), 846–852 (2003).
2. S. C. Hsu, "The Kell Factor: Past and Present," SMPTE Journal 95(2), 206–214 (1986).
3. W. J. Smith, Modern Optical Engineering, McGraw-Hill, p. 355 (1990).
4. J. Wiltse, J. Miller, and C. Archer, "Experiments and Analysis on the Resolution Requirements for Alphanumeric Readability," Proc. SPIE 5076 (2003).
5. Private communications with John Wiltse and Cynthia Archer (2003 and 2019).

Detection Nomograph
The complexities of detection, classification, recognition, and identification can be simplified using a nomograph (Fig. 3.16).

Discussion
Reference 1 provides a technique for incorporating all the key factors of detection in one handy graphic. Figure 3.16 shows a nomograph that can be used to avoid the complex calculations relating the conditions under which various detection criteria might be met. The criteria (the Johnson criteria) are provided in Table 3.5 (Ref. 1) as the spatial frequency needed for a 50-percent probability of performing certain tasks using different targets and imaging technologies.

Figure 3.16 This nomograph provides a simple way to relate various types of imaging performance, target contrast, and lighting conditions as detailed in the Discussion.

Table 3.5 Johnson criteria n for task accomplishment (50-percent probability level).1

Number of cycles resolved across the target critical dimension (probability of accomplishing the task = 0.5 in all cases):

Sensor and Target                                 Detection  Classification  Recognition  Identification
Thermal sensors, tank target                      1.0        2.0             4.0          8.0
Hand-held thermal viewer, man target              1.5        1.5             1.5          -
Day sights and image intensifiers, tank target    1.0        2.0             3.0          6.0
Image intensifiers, man target                    1.0        1.0             1.0          -

Figure 3.16 illustrates two example situations. In both, the analysis applies to nighttime full-moon viewing. In the first, indicated by solid lines, the target has an inherent contrast of 4 that is reduced by the atmosphere to an apparent contrast of 0.5. This set of conditions produces a spatial frequency of about 2.7, which results in the following performance: detection at about 5.7 km, classification at 3 km, recognition at 2 km, and identification at 1 km. The other example (dotted lines) changes the inherent contrast of the target to a much lower level; this dramatically impacts the performance: detection at about 1.5 km, classification at 0.6 km, recognition at 0.5 km, and identification at 0.2 km. Reference

1. "Military Handbook: Quantitative Description of Obscuration Factors for Electro-Optical and Millimeter Wave Systems," DOD-HDBK-178(ER), p. 8 (1986).

Correcting for Probability of Chance
Vollmerhausen1 states that the probability of chance must be accounted for before using experimental data to calibrate DRI models:

Pmodel = (Pmeasured − Pchance)/(1 − Pchance),

where Pmodel is the probability to use in the model, Pmeasured is the probability measured, and Pchance is the probability of correctly identifying (or recognizing) the target or target class just by chance.

Discussion

Even a blind squirrel will occasionally get a nut. That is, by chance, a number of correct answers will result even if you are guessing. Models for DRI must have this "chance" probability of guessing correctly removed using the above equation. Guessing does not inherently reduce the chance of doing things right; this can be determined only if you know the truth with certainty. The point is that developing models requires that the amount of guessing be known and appropriately considered. If four targets (target classes) are used in the experiment, then Pchance is 0.25, and if 12 targets are used, then Pchance is 0.08.

To compare model predictions with field data, the above formula is inverted as follows:

Pmeasured = Pmodel(1 − Pchance) + Pchance.

As an example, assume that the Johnson criteria are used to find the probability of classifying tracked versus wheeled vehicles. The critical dimension of the vehicles (square root of area) is 4 m. From the Johnson criteria rule, the N50 (the number of cycles across the critical dimension for 50 percent of respondents to correctly perform the observation task) for this task is 2. In general, if there are only two categories, then the observer will be correct half the time with their eyes closed! The model probabilities must be corrected for chance as follows:

$$P_{measured} = P_{model}(1 - P_{chance}) + P_{chance} = 0.5(1.0 - 0.5) + 0.5 = 0.75.$$

Now consider another example. A scanning forward-looking infrared (FLIR) system with an IFOV of 0.1 mrad (and thus a cutoff frequency F_cutoff of 1/0.1 = 10 cycles/mrad) is used to detect a target with a critical dimension w of 4 m. Therefore, from the resolution requirement rule,

$$R = \frac{w\,F_{cutoff}}{1.5\,N_{50}} = \frac{(4)(1/0.1)}{(1.5)(2)} = 13.3\ \mathrm{km}.$$

Thus, at a range of 13.3 km, the task is performed correctly 75 percent of the time.
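Both steps of the example are easy to script. The sketch below is our own illustration (function names are ours); it reproduces the 0.75 probability and the 13.3-km range above.

```python
def correct_for_chance(p_measured: float, p_chance: float) -> float:
    """Model probability with the guessing floor removed."""
    return (p_measured - p_chance) / (1.0 - p_chance)

def predict_measured(p_model: float, p_chance: float) -> float:
    """Inverse relation: expected field-measured probability."""
    return p_model * (1.0 - p_chance) + p_chance

def johnson_range_km(w_m: float, f_cutoff_cyc_per_mrad: float, n50: float) -> float:
    """Range (km) at which a sensor resolves 1.5*N50 cycles across a target
    of critical dimension w (m), per the resolution requirement rule."""
    return w_m * f_cutoff_cyc_per_mrad / (1.5 * n50)

print(predict_measured(0.5, 0.5))             # 0.75, as in the example
print(johnson_range_km(4.0, 1.0 / 0.1, 2.0))  # 13.33... km
print(correct_for_chance(0.75, 0.5))          # recovers the model value 0.5
```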

References

1. Private communication with Rich Vollmerhausen, 2003.
2. R. Duda, P. Hart, and D. Stork, Pattern Classification, 2nd ed., John Wiley & Sons, New York, pp. 20–83 (2000).


National Image Interpretability Rating Scale (NIIRS)

Examples of exploitation tasks are listed for each rating level for visible, multispectral, and infrared imagery; GSD is the ground sample distance.

Level 0
  Visible: Interpretability of the imagery is precluded by obscuration, degradation, or very poor resolution.
  Multispectral: Interpretability of the imagery is precluded by obscuration, noise, poor registration, degradation, or very poor resolution.
  Infrared: Interpretability of the imagery is precluded by obscuration, noise, degradation, or very poor resolution.

Level 1 (>9 m GSD)
  Visible: Distinguish between major land use classes (urban, forest, water, etc.); detect a medium-sized port facility; distinguish between taxiways and runways.
  Multispectral: Distinguish between urban and rural areas; identify a large wetland (>100 acres); delineate coastal shoreline.
  Infrared: Distinguish between runways and traffic ways; detect large areas (>1 km²) of marsh or swamp.

Level 2 (4.5–9.0 m GSD)
  Visible: Detect large buildings (hospitals, factories); detect military training areas.
  Multispectral: Detect multilane highways; detect strip mining; delineate extent of cultivated land.
  Infrared: Detect large aircraft and large buildings; distinguish between naval and commercial port facilities.

Level 3 (2.5–4.5 m GSD)
  Visible: Detect individual houses in residential neighborhoods; detect trains on tracks, but not individual cars; detect a helipad; identify a large surface ship in port by type.
  Multispectral: Detect vegetation/soil-moisture differences along a linear feature; identify golf courses; detect reservoir depletion.
  Infrared: Distinguish between large and small aircraft; distinguish between freighters and tankers of 200 m or more in length; identify individual thermally active flues running between the boiler hall and smokestacks at a thermal power plant.

Level 4 (1.2–2.5 m GSD)
  Visible: Identify farm buildings as barns, silos, or residences; identify, by general type, tracked vehicles and field artillery; identify large fighters by type.
  Multispectral: Distinguish between two-lane improved and unimproved roads; detect small boats (3–5 m) in open water.
  Infrared: Identify the wing configurations of small fighter aircraft; detect a 50-m² electrical transformer yard in an urban area.

Level 5 (0.75–1.20 m GSD)
  Visible: Detect large animals in grasslands; identify a radar as vehicle mounted or trailer mounted.
  Multispectral: Detect an automobile in a parking lot; detect disruptive or deceptive use of paints or coatings on buildings at a ground-forces installation.
  Infrared: Distinguish between single-tail and twin-tail fighters; identify outdoor tennis courts.

Level 6 (0.40–0.75 m GSD)
  Visible: Identify individual telephone/electric poles in residential neighborhoods; identify the spare tire on a medium-sized truck.
  Multispectral: Detect a foot trail through tall grass; detect recently installed minefields in ground-forces deployment areas; detect navigational channel markers and mooring buoys in water.
  Infrared: Distinguish between thermally active tanks and APCs, or between a 2-rail and 4-rail launcher; identify thermally active engine vents atop diesel locomotives.

Level 7 (0.2–0.4 m GSD)
  Visible: Identify individual railroad ties; identify fitments and fairings on a fighter-sized aircraft.
  Multispectral: Detect small marine mammals on sand or gravel beaches; distinguish crops in large truck farms; detect underwater pier footings.
  Infrared: Identify automobiles as sedans or station wagons; identify antenna dishes on a radio relay tower.

Level 8 (0.1–0.2 m GSD)
  Visible: Identify windshield wipers on a vehicle or rivet lines on a bomber aircraft.

… >90°, the azimuth must discontinuously change to 180°, and elevation then begins to decrease from 90°. Also note that this definition of azimuth is somewhat tied to operations on the surface of Earth in a latitude-and-longitude coordinate system. As discussed in the Photogrammetry chapter, if one is using a Cartesian coordinate system such as UTM, 0° azimuth will not point along a line of northing, and so a correction must be applied.

Photonic Noise Sources

The photon signal-to-noise ratio (SNR) of an EO sensor is

$$SNR_{Photons} = \frac{\dot{k}_{Signal}}{\sqrt{\dot{k}_{Signal} + \dot{k}_{Photon} + \dot{k}_{Noise}}},$$

where k̇_Signal is the photon rate from the signal source; k̇_Photon is the photon rate from the various noise terms discussed in this rule, including thermal emission from the optics (lenses and mirrors), the housing around the detector, and background sources both in and out of the field of view; and k̇_Noise is the equivalent photon rate associated with other sources of noise, such as dark current, read noise, and thermal noise generated in the detector and read-out electronics.
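A minimal numeric sketch of this expression follows (our own illustration; all rates and the integration time are assumed values). It treats the counts as Poisson, so the variance equals the mean counts.

```python
import math

def snr_photons(k_signal: float, k_photon: float, k_noise: float,
                t_int: float = 1.0) -> float:
    """Photon SNR for rates in e-/s over integration time t_int (s),
    assuming Poisson statistics (variance equals mean counts)."""
    counts = k_signal * t_int
    variance = (k_signal + k_photon + k_noise) * t_int
    return counts / math.sqrt(variance)

print(snr_photons(1e6, 4e6, 1e6, t_int=0.01))  # ~40.8 for these assumed rates
```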


Note that the SNR computed above is the same as the "current" SNR generated by a detector exposed to the various noise sources mentioned in the equation. The current SNR is designated SNR_Current in the remainder of this discussion.

Discussion

The discussion is divided into separate sections that show how the various terms in the SNR are computed. These subsections include emission from the sensor's optical elements and the background created by blackbody emission from the light fields both within and outside the sensor FOV.

Contribution from Sensor Optical Elements

Consider a flat lens or mirror radiating in the direction of the detector. The geometry of the problem is shown in Fig. A.11. The radial size of the mirror is r_m. The irradiance at the detector must take into account the contribution from each area element of the mirror to each area element of the detector. The distance l from a mirror area element to the detector is l = √(L² + r²), where L is the distance from the mirror center to the detector, and r is the distance of the area element from the optical axis.

Figure A.11 The various parameters used in the equations of this discussion.

In the equations below, A_d is the detector area, ε_m is the emissivity of the mirror surface, N_λ is the radiance of the mirror surface in W/m²/sr/μm, and λ, as usual, is the wavelength. H_λ is the spectral power imposed on the detector by the mirror under consideration. Throughout, we will use the fact that for Lambertian surfaces N_λ = W_λ/π. The collected power is

$$H_\lambda = \varepsilon_m A_d \int N_\lambda \cos\theta\, d\Omega = \varepsilon_m A_d \int\!\!\int N_\lambda \cos\theta\, \frac{\cos\theta}{l^2}\, r\, dr\, d\phi,$$

since

$$\cos\theta = \frac{L}{\sqrt{L^2 + r^2}}.$$

Carrying out the azimuthal integration and substituting,

$$H_\lambda = 2\pi\, \varepsilon_m A_d N_\lambda \int_0^{r_m} \frac{\cos^2\theta\; r\, dr}{L^2 + r^2} = 2\pi L^2\, \varepsilon_m A_d N_\lambda \int_0^{r_m} \frac{r\, dr}{\left(L^2 + r^2\right)^2},$$

which evaluates to

$$H_\lambda = \pi\, \varepsilon_m A_d N_\lambda\, \frac{r_m^2}{r_m^2 + L^2} = \pi\, \varepsilon_m A_d N_\lambda\, \frac{1}{1 + L^2/r_m^2}.$$

If L, the position of the contributing optical element, is about a focal length from the detector, then L/r_m ≈ 2(f/#), and we can make the following approximation (using W_λ = πN_λ):

$$H_\lambda \approx \frac{\varepsilon_m A_d W_\lambda}{1 + 4(f/\#)^2} \approx \frac{\varepsilon_m A_d W_\lambda}{4(f/\#)^2}.$$

Ω is the solid angle that the mirror subtends with respect to the detector, described by

$$\Omega = \frac{\pi r_m^2}{r_m^2 + L^2} \approx \frac{A}{L^2},$$

where A is the area of the mirror. Using the definition of solid angle above,

$$H_\lambda = N_\lambda\, \Omega\, \varepsilon_m A_d.$$

Exactly the same result is obtained when one views the detector as being within a radiation cavity. In that case, the irradiance of any area of the cavity equals the radiance of the wall times the solid angle seen by the detector times the wall emissivity. This is in W/cm² and so must be multiplied by the detector area to compute the watts incident on the detector. Each source of radiation directly in the view of the detector produces a contribution equal to that above. Any source seen as a reflection from a closer mirror contributes

$$\rho\left(N_\lambda\, \Omega\, \varepsilon_m A_d\right),$$

where the terms in parentheses are properties of the source of radiation, and ρ is the reflectivity of the intervening element in the optics train. For any particular mirror, the emissivity and reflectivity are related by ε = 1 − ρ. If we assume that all components form the same solid angle with the detector, then some factoring can be done, and for n components

$$H_\lambda = N_\lambda \Omega A_d \left(\varepsilon_m + \varepsilon_m \rho + \varepsilon_m \rho^2 + \cdots + \varepsilon_m \rho^{\,n-1}\right) = N_\lambda \Omega\, \varepsilon_m A_d \sum_{p=1}^{n} \rho^{\,p-1} = N_\lambda \Omega\, \varepsilon_m A_d\, \frac{\rho^n - 1}{\rho - 1},$$

where p is a counting index.


The contribution of the housing is similar:

$$H_\lambda = N_{h\lambda}\, \varepsilon_h\, \Omega_h\, A_d,$$

where Ω_h is the solid angle of the housing as viewed from the detector, and ε_h is the emissivity of the housing surface. The sum of the two solid angles will equal π steradians, since the detector is sensitive to the hemisphere of surfaces around it.

Now we review how we will use this result to determine the impact of mirror emission on SNR. In all cases, H is the total power reaching the detector in its sensitive band. This can be converted to the photoelectron rate by

$$\dot{k} = \int \eta(\lambda)\, H(\lambda)\, \frac{\lambda}{hc}\, d\lambda,$$

where k̇ is the photoelectron rate associated with the blackbody emission of the mirrors and housing, h is Planck's constant, c is the speed of light, and η is the quantum efficiency of the detector. This is also the variance in the photoelectron noise. Thus, the SNR of the system is

kSignal

2 2 kSignal  k  k Mirror

Housing

 

kSignal

2 k Signal  k  k Mirror

,

Housing

where the definitions of the terms in the radical appear below, and τ is the integration time. The total rates are found by integrating over the wavelength range to which the detectors are sensitive:

$$\dot{k}_{Mirror} = \frac{\varepsilon_m A_d \Omega}{hc}\,\frac{\rho^n - 1}{\rho - 1} \int N(\lambda)\, \eta(\lambda)\, \lambda\, d\lambda,$$

$$\dot{k}_{Housing} = \frac{\varepsilon_h A_d \Omega_h}{hc} \int N_h(\lambda)\, \eta(\lambda)\, \lambda\, d\lambda.$$

Note that the SNR increases with the square root of the integration time in this idealized case. This calculation (and those that follow) does not include all the noise sources that will affect performance; for example, additional terms would be needed to describe detector and background noise. In real systems, perpetual integration is defeated by 1/f noise and other factors. In each case, the range of integration will depend on the location of the filter (or on the detector's spectral sensitivity, if it limits the wavelength performance of the sensor). If the filter is at the front of the optical train, then only the background is filtered, and all emissions from the housing and mirrors are able to proceed to the detector unfiltered. If the filter is in the proximity of the detector, all sources are filtered, and the range of integration is over the filter band.
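A numerical sketch of the k̇_Mirror integral follows. This is our own illustration, with assumed values for every parameter (a 300-K, five-element LWIR train at f/2, a 15-μm pixel, and flat quantum efficiency); the quadrature is a simple trapezoid rule.

```python
import math

H = 6.626e-34    # Planck's constant, J*s
C = 2.998e8      # speed of light, m/s
KB = 1.381e-23   # Boltzmann's constant, J/K

def planck_radiance(lam: float, temp: float) -> float:
    """Blackbody spectral radiance N(lambda) in W/(m^2 sr m)."""
    return (2.0 * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * temp))

def k_mirror(temp=300.0, eps_m=0.02, rho=0.98, n=5, f_num=2.0,
             ad=(15e-6)**2, eta=0.7, lam1=8e-6, lam2=12e-6, steps=200):
    """Photoelectron rate from an n-element optical train (trapezoid rule)."""
    omega = math.pi / (4.0 * f_num**2)       # solid angle of the optics
    chain = (rho**n - 1.0) / (rho - 1.0)     # n-element train factor
    dlam = (lam2 - lam1) / steps
    total = 0.0
    for i in range(steps + 1):
        lam = lam1 + i * dlam
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * planck_radiance(lam, temp) * eta * lam * dlam
    return eps_m * ad * omega * chain * total / (H * C)

print(f"{k_mirror():.2e} photoelectrons/s")  # ~6e9 e-/s for these assumptions
```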


External Background Noise

The calculation that follows derives the rms noise associated with backgrounds in the FOV. We shall find that the result has the same form as that for emission of the optical elements. This should not be surprising when the process of translating radiation to the focal plane is considered: the radiation from each source is superimposed, so one can expect the same functional dependence on the optical properties of the system as for radiation emitted by the mirrors and other optical elements.

The field of view of a single detector is IFOV = √A_d / FL, where FL is the telescope focal length, and √A_d is the length dimension of the detector. Thus, the area seen by the detector is

$$\mathrm{Area} = \left(\frac{R\sqrt{A_d}}{FL}\right)^2,$$

where R is the range from sensor to target. Each element of that area contributes to the background noise by imposing an average photon flux and the associated photon statistics. The average flux per unit area of background is N_b Ω, where N_b is the radiance of the background in W/cm²/sr/μm, and Ω is the solid angle formed by the sensor entrance aperture with respect to each element of the viewed background; that is, Ω = πD²/4R². Therefore, the DC component of the background is

$$Signal_{DC} = N_b \left(\frac{\pi D^2}{4R^2}\right)\left(\frac{R\sqrt{A_d}}{FL}\right)^2 = \frac{\pi A_d N_b}{4(f/\#)^2}.$$

The rms noise associated with this background is its square root times the square root of the detection bandwidth:

$$Noise_{DC} = \sqrt{\mathrm{bandwidth}}\;\sqrt{\frac{\pi A_d N_b}{4(f/\#)^2}}.$$

Out-of-FOV Sources

Sources out of the FOV contribute to the focal plane flux just as if they were in the FOV, except that the intensity is reduced by the rejection performance of the optical baffling and geometry. That is, an out-of-field source still produces a photon variance that contributes to system noise, but the impact of the source is reduced by the performance of the baffling design. If N_eff is the flux that would have to appear in the field to generate the same irradiance as the out-of-field source, and N_oa is the radiance of the out-of-field source, then

$$N_{eff} = N_{oa}\, e^{-0.30\,\theta},$$


where θ is the angle between the optical axis and the out-of-field source. The functional relationship and the coefficient 0.30 have been derived from empirical data in Wolfe.¹
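As a quick illustration (ours; the source does not state the angular unit, so degrees are assumed here):

```python
import math

def n_effective(n_oa: float, theta_deg: float) -> float:
    """Equivalent in-field radiance of an out-of-field source.
    The angle is assumed to be in degrees (not stated in the source)."""
    return n_oa * math.exp(-0.30 * theta_deg)

print(n_effective(1.0, 10.0))  # ~0.05: a 10-deg off-axis source is ~20x rejected
```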

Reference

1. W. L. Wolfe, "Imaging Systems," Chapter 19 in The Infrared Handbook, W. L. Wolfe and G. F. Zissis, Eds., Environmental Research Institute of Michigan, p. 19-25 (1978).

Guidelines for Writing SI Units

The National Institute of Standards and Technology (NIST) in the U.S. has published a helpful guide to the rules and style conventions for using the International System of Units (SI) in written documents.¹ Following it allows for clarity and disambiguation of technical content in formal manuscripts. A sample of the basic rules is provided below.

The prefix symbols Y (yotta), Z (zetta), E (exa), P (peta), T (tera), G (giga), and M (mega) are printed in uppercase letters, while all other prefix symbols are printed in lowercase letters (see Table 5 of the guide). Spelled-out prefix names are normally printed in lowercase letters.

When spelled out in full, unit names are treated like ordinary English nouns. Thus, the names of all units start with a lowercase letter, except at the beginning of a sentence or in capitalized material such as a title. In keeping with this rule, the correct spelling of the name of the unit °C is "degree Celsius" (the unit "degree" begins with a lowercase "d," and the modifier "Celsius" begins with an uppercase "C" because it is the name of a person).

Unit symbols are printed in lowercase letters, except that (a) the symbol, or the first letter of the symbol, is uppercase when the name of the unit is derived from the name of a person; and (b) the recommended symbol for the liter in the United States is L (see Table 6, footnote b, of the guide). Examples: m (meter), s (second), V (volt), Pa (pascal), lm (lumen), Wb (weber).

The guide also provides a useful checklist for reviewing manuscripts, as well as a comprehensive list of conversions between SI units.
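The prefix-case rule lends itself to a tiny lookup helper. The sketch below is our own illustration (the micro sign µ is replaced with an ASCII "u" here rather than the proper Greek mu).

```python
# Illustrative helper encoding the prefix-symbol case rule quoted above.
SI_PREFIXES = {
    "yotta": "Y", "zetta": "Z", "exa": "E", "peta": "P",   # uppercase group
    "tera": "T", "giga": "G", "mega": "M",
    "kilo": "k", "hecto": "h", "deka": "da", "deci": "d",  # lowercase group
    "centi": "c", "milli": "m", "micro": "u", "nano": "n",
    "pico": "p", "femto": "f", "atto": "a", "zepto": "z", "yocto": "y",
}

def prefix_symbol(name: str) -> str:
    """Return the SI prefix symbol with the case mandated by the guide."""
    return SI_PREFIXES[name.lower()]

print(prefix_symbol("Mega"), prefix_symbol("milli"))  # M m
```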

Reference

1. A. Thompson and B. Taylor, "Guide for the Use of the International System of Units (SI)," NIST Special Publication 811, pp. 12, 14, 31 (2008).


Derivation of the Third Equation in the Rule "The Relation of Ideal D* to View Angle"

Consider a detector as shown in Fig. A.12. It views an optical system, schematically shown here as a single lens or mirror.

Figure A.12 Geometry of detector and optics.

As shown in the first equation of the rule, D* is inversely proportional to the square root of the irradiance I impinging on the detector. Each differential area dA radiates to the detector via Planck's radiation law. In this case, the Planck function is presented in terms of the frequency of the radiation rather than the more common wavelength presentation; this is done to allow the reader to see this alternate form. The irradiance at the detector is

$$I = \int\!\!\int\!\!\int \frac{2h\nu^3}{\pi c^2}\,\frac{\cos\theta}{e^{h\nu/kT} - 1}\, d\Omega\, dA_d\, d\nu,$$

where the first integral is over the solid angle formed by the area element of the detector and the lens, the second is over the area of the detector, and the third is over the frequencies for which the detector is sensitive. Here, Ω measures the solid angle from the area element of the detector to the lens. The lens is assumed to be Lambertian, and the π in the denominator converts from irradiance to radiance (W/cm²/sr). Every element of the detector is assumed to see all of the lens, so the integral over A_d is trivialized, and

$$I = A_d \int\!\!\int \frac{2h\nu^3}{\pi c^2}\,\frac{\cos\theta}{e^{h\nu/kT} - 1}\, d\Omega\, d\nu,$$

where the frequencyintegral now ranges from c to ∞, where c is the lowest frequency for which the detector is sensitive. This is equivalent to the longest wavelength, which is called the cut-off wavelength. The solid angle is da/r2, as shown below, and

704

Appendix

da = r2 sind2. Thus, d   2 sin d . The integral is

 2h d  3

I  Ad 2

c 2  e h / kT  1 

sin    cos    d ,

where the angle integral runs from 0 to θ_max/2, θ_max being the full angle viewed by the detector; the angle integral is equal to sin²(θ_max/2)/2. We have already defined

$$D^* = \sqrt{A_d\,\Delta f}\; D = \frac{\sqrt{A_d\,\Delta f}}{\sqrt{2h\nu\, P_b\,\Delta f}} = \sqrt{\frac{A_d}{2h\nu\, P_b}},$$

where D is the detectivity and Δf is the noise bandwidth. Since we anticipate integrating over ν, we must express P_b as a function of frequency: P_b = n_ν hν, where n_ν is the background photon rate. Thus,

$$D^* = \frac{1}{h\nu}\sqrt{\frac{A_d}{2 n_\nu}}.$$

In the earlier integrals, I was expressed in power. To convert to the photon rate, we divide by hν (the energy per photon). I then becomes

$$n_i = \frac{2 A_d}{c^2}\,\sin^2\!\left(\frac{\theta_{max}}{2}\right)\int_{\nu_c}^{\infty} \frac{\nu^2\, d\nu}{e^{h\nu/kT} - 1},$$

where n_i is the number of photons incident on the detector per second. The integral may now be completed by substituting x = hν/kT:

$$n_i = 2 A_d \sin^2\!\left(\frac{\theta_{max}}{2}\right)\left(\frac{kT}{h}\right)^3 \frac{1}{c^2} \int_{x_c}^{\infty} \frac{x^2}{e^x}\, dx,$$

where the limits of integration are x_c to infinity; x_c = hν_c/kT defines the cutoff of the detector performance in scaled frequency units. The approximation 1/(eˣ − 1) ≈ e⁻ˣ has been used, which applies when hν/kT ≫ 1; this is true in optical and infrared cases for room-temperature backgrounds. The integral can be shown to be

e x  x 2  2 x  2  ,


so

$$n_i = \frac{2 A_d}{c^2}\left(\frac{kT}{h}\right)^3 \sin^2\!\left(\frac{\theta_{max}}{2}\right) e^{-x_c}\left(x_c^2 + 2x_c + 2\right).$$

After some manipulation, this is found to be (for a background at 300 K)

$$D^* = \left(\frac{300}{T}\right)^{5/2}\,\frac{1.3\times 10^{11}\; e^{x_c/2}}{\sin\!\left(\frac{\theta_{max}}{2}\right)\, x_c\,\sqrt{x_c^2 + 2x_c + 2}},$$

which is the third equation in the rule.
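Evaluated numerically, the expression reproduces familiar background-limited values. The sketch below is our own illustration, assuming a 12-μm cutoff and an f/2-like 28° full cone angle; D* is in the usual cm·Hz^1/2/W.

```python
import math

H = 6.626e-34    # Planck's constant, J*s
C = 2.998e8      # speed of light, m/s
KB = 1.381e-23   # Boltzmann's constant, J/K

def d_star(temp: float, lam_cutoff: float, theta_max_rad: float) -> float:
    """Ideal D* (cm Hz^0.5/W) vs. view angle, per the final equation above."""
    xc = H * C / (lam_cutoff * KB * temp)    # scaled cutoff frequency
    return ((300.0 / temp) ** 2.5 * 1.3e11 * math.exp(xc / 2.0) /
            (math.sin(theta_max_rad / 2.0) * xc *
             math.sqrt(xc**2 + 2.0 * xc + 2.0)))

# 12-um cutoff viewing a 300-K background through a 28-deg full cone (~f/2):
print(f"{d_star(300.0, 12e-6, math.radians(28.0)):.2e}")  # ~2e11
# Opening the cone to a full hemisphere (theta_max = 180 deg) lowers D*:
print(f"{d_star(300.0, 12e-6, math.pi):.2e}")             # ~5e10
```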

Reference

1. R. Kingston, Detection of Optical and Infrared Radiation, Springer-Verlag, p. 17 (1978).

$$B > \frac{1}{n}\sum_{i=1}^{n} x_i$$

Be Greater than Average

Index

1/f noise, 199, 218, 219, 220, 221
Abbe limit, 602
Abbe number, 143, 433, 639
aberrations, 410, 435, 529
  spherical, 424
  third-order, 424
  transverse, 423
absolute humidity, 639
absolute magnitude, 4
absorptance, 639
absorption, 196
  coefficient, 393
  plankton, 396
achromatopsia, 227, 639
A-class star, 20
acousto-optical tunable filter (AOTF), 468, 639
acquisition, tracking, and pointing (ATP), 73
acute vision, 243
adaptation time, 258
adaptive optics, 1, 16, 498, 639
adhesive, 516
advanced driver assistance systems (ADAS), 161, 173
aerodynamic heating, 634
air breakdown, 286
airmass, 20, 640
Airy disk, 274, 275, 422, 440, 464, 599, 601, 640
albedo, 640
algal bloom, 395
Allard's law, 640
aluminum, 342
analog-to-digital conversion (ADC), 185, 639
anastigmat, 640
Andrade's beta law, 321
angle of incidence (AOI), 428, 486, 640
angular diffraction blur spot, 601
angular resolution, 242
aperture averaging, 56
aperture shape factor, 276
aplanat, 640
apodization, 640
Arrhenius equation, 379
ASICS, 132
aspect ratio, 291, 316
assumed velocity filter (AVF), 75
astigmatism, 410, 524, 535
atmospheric seeing, 12
Auger recombination, 203
avalanche photodiode (APD), 193, 194, 641
averted vision, 235
AVI, 361, 363, 364
axial stress, 312
azimuth, 355
background limited in performance (BLIP), 198, 205, 642
background-limited infrared photodetector (BLIP), 582
background-limited performance (BLIP), 198
bandgap, 186, 189
bathymetry, 390, 396, 642
beam intensity, 278
Beer's law, 29, 162, 196, 393, 398, 642
Benham disk, 228

bidirectional reflectance distribution function (BRDF), 282, 623, 624, 630, 643
bidirectional scatter distribution function (BSDF), 446, 447
bidirectional transmittance distribution function (BTDF), 447
binocular efficiency, 481
bioluminescence, 400, 642
Black's equation, 379
blackbody, 2, 567, 570
  temperature, 2
blazed grating, 472
Bmp, 362
bonded mirror, 313
brightness, 271, 272, 277, 567
brightways, 252
Cn², 30, 33
calibration, 590
carbon-silicon carbide, 340
Cassegrain, 425, 643
Cauchy equation, 337
centering, 520
centroid, 643
charge capacity, 184
charge-coupled device (CCD), 187, 217, 643
chemical polishing, 497
chip yield, 159
chlorophyll a, 395
circular variable filter, 471
clean room, 375
cleaning, 515
coating
  anti-reflection (AR), 484, 640
  black, 335
  shift, 485, 486
coefficient of thermal expansion (CTE), 313, 340
cold shield, 644
collimating lens, 532
collimating optics, 532
color blindness. See achromatopsia
color vision, 231


commercial off the shelf (COTS), 146
complementary metal–oxide semiconductor (CMOS), 132, 191, 196, 197, 217, 644
compressive stress, 306, 308, 312
cone density, 238
cones, 225, 234, 236
Conrady formula, 338
constant false alarm rate (CFAR), 76
contrast, 358, 359, 398
contrast threshold function (CTF), 245
Cooke triplet, 434
corneal luminance, 232
corner cube, 286
correlated double sampling (CDS), 223, 644
cost, 129
  optics, 142, 150
  reduction, 138
creep, 314, 321
Critical Laser Enhancing Atmospheric Research (CLEAR), 31
cryocooler, 644
cryogen, 381
cyanoacrylates, 522
Dall–Kirkham, 425
damage, 333
damping, 534
dark current, 210, 214, 235
dark ring, 599
DC background, 212
DC pedestal, 219
declination, 355
defect density, 159
deflection, 293, 295, 298, 301, 318, 320, 322, 329
defocus, 414
deformable mirror, 70, 302
deformation, 330
degraded visual environments (DVEs), 161
depth of field, 415, 416
depth of focus, 414
design process, 596


detection, recognition, and identification (DRI), 73, 359
detectivity, 181, 207
detector angular subtense (DAS), 418
deuteranomaly, 255
deuteranopia, 255
dewar, 383, 645
diamond turning, 523, 524
diffraction, 230, 436, 438, 439, 440
  limited, 242, 577, 601
  spreading, 273
  theory, 277
diffuse attenuation coefficient, 393, 396
digital focal plane array (DFPA), 185
digital image, 156
digitization noise (DN), 213
digitizer, 213, 645
dimensional analysis, 364
dimensional stability, 497
Dirac delta function, 645
distributed aperture sensor (DAS), 265, 645
dome, 71, 305
Doppler frequency shift, 269, 270
double-Gauss, 434
doublet, 516
downwelling, 646
dyschromatopsia, 254, 646
early investment, 135
eccentricity, 238, 239, 646
economics, 129
effective diameter, 65
effective focal length, 421, 602, 646
elastomeric bond, 313
electromagnetic spectrum, 564
electro-optical and infrared (EO/IR) system, 73
electro-optics (EO), 1
emissivity, 584, 620, 631, 646
encircled energy (EE), 458, 460
energy density, 273
Engelbart's law, 134
entrance pupil diameter, 418
error function, 67


etalon, 647
etendue, 272, 279, 576, 578, 647
Fabry–Pérot etalon, 474, 477
failure pressure, 371
false alarm rate (FAR), 85, 647
Faraday rotation, 350
fast Fourier transform (FFT), 494
field of regard (FOR), 648
field of view (FOV), 417, 418, 540, 648
figure error, 524
fill factor, 182
film
  chemical, 360
  thin metal, 346
finesse, 475–477
fixed pattern noise (FPN), 217
flexural rigidity, 293
f-number (f/#), 419, 421, 424, 435, 647
foam, 294, 324
focal plane array (FPA), 73, 132, 159, 175, 199, 214, 648
focal plane arrays (FPAs), 175
forward-looking infrared (FLIR), 107, 186
Foucault knife-edge test, 527, 528
fovea, 225, 238, 648
fracture toughness, 308
Fraunhofer line, 2
free spectral range, 477
friction coefficient, 310
Fried parameter, 16, 17, 41, 43, 649
Froude number, 71
f-stop, 397
full motion video (FMV), 94
full width half maximum (FWHM), 71, 279
fundamental frequency, 301, 302, 331
galactic latitude, 6, 7
galling, 370
Gaussian beam, 274, 278, 279
Gaussian function, 236
general image quality equation (GIQE), 609


generation–recombination current, 204
generation–recombination noise, 199
GIF, 362
gimbal, 374
glass strength, 307
graphene, 186
gray body, 631, 649
Greenwood frequency, 18, 649
ground sample distance, 609, 649
guide star, 13
H.264, 363
Hagen–Rubens relationship, 627
Hartmann test, 530
haze, 260
head-mounted display (HMD), 264, 650
helium, 382
Herzberger formula, 338
Hickman, G. Daniel, 389
HITRAN, 30
Hopkins ratio, 450
horizon, 357
Hufnagel–Valley (HV) 5/7 model, 30
human body, 629
human eye, 225, 271
human skin, 630
humidity, 25
hyperfocal distance, 416
illuminance, 650
image intensifier, 384, 385
image orientation, 506
imperfections, 515
index of refraction, 337, 391, 433, 650
Infrared Astronomical Satellite (IRAS), 7
inhomogeneity, 533
insolation, 622
instantaneous field of view (IFOV), 78, 416–419, 541, 602
integral, 366
integrated circuit (IC), 130, 131
integrating sphere, 531
integration time, 216

intelligence, surveillance, and reconnaissance (ISR), 73
irradiance, 622
ISO 10110, 519
isoplanatic angle, 30, 49
James Webb Space Telescope, 2, 651
jansky, 6, 651
jitter, 15, 61, 499
job search, 158
Johnson noise, 211
Joule–Thomson (JT)
  cryocooler, 383
  effect, 651
JPEG, 362
Karow factor, 520, 521
Keck, 1, 651
kelvin, 651
key length value (KLV), 363
kTC noise, 199, 217, 223
lab tools, 535
Lambertian, 435, 449, 568, 575, 580, 591, 623, 624, 652
lamprosity, 237, 652
laser, 267, 390, 652
  beam quality, 272, 273, 276
  cross-section, 636
  HeNe, 272, 650
  rod, 283
  YAG, 68
latency, 264, 265
lead zirconate titanate (PZT), 70
learning curves, 139, 140
lidar, 269
  Doppler, 281
  scanning, 270
line of sight (LOS), 73, 264, 463, 653
line pair, 376
linear polarizer, 493
liquid water content (LWC), 169
log intensity ratio, 68
long-wave infrared (LWIR), 83
lossless compression, 363


lossy compression, 362
lubricant, 374
macular pigment, 260
magnification power, 464
magnitude zero, 6
Maiman, 268
main-sequence star, 2
mean time to failure (MTTF), 379
mercury-cadmium-telluride (HgCdTe), 188, 189, 191, 209
  dark current, 200
mesopic, 239
metal, 370
  dissimilar, 370
Metcalfe's law, 133
microbolometer, 192
micro-electro-mechanical systems (MEMS), 70
micrometeoroids, 333, 653
microyield stress, 313
mid-wave infrared (MWIR), 83
Mie scattering, 36
Mie theory, 24
Miles equation, 613
MIL-PRF-13830B, 518, 519
minimum resolvable contrast, 607
model-based pose estimation (MBPE), 539, 543, 556
MODTRAN, 25, 30, 654
modulation transfer function (MTF), 82, 450, 451
monochromatic light, 231
Monte Carlo, 23
Moon, 10
Moore's law, 130–134
Motion JPEG, 361
MOV, 361
MPEG, 363, 654
MPEG2, 361, 363
MPEG4, 361, 363
multilayer insulation (MLI), 351
multiple hypothesis tracker (MHT), 76
nadir, 655


Narcissus, 603
natural frequency, 302, 533
Navarro model, 230
neutral density filter, 377
Newton's rings, 527
noise bandwidth, 216
noise variance, 198
noise-equivalent power (NEP), 181, 655
noise-equivalent temperature difference (NEΔT), 223, 581, 582, 593, 655
nomograph, 436, 655
nonuniformity, 221, 222
normalized difference vegetation index (NDVI), 122, 638
numerical aperture, 602
Nyquist frequency, 656
obscuration, 656
ocular field, 243
off the shelf (OTS), 146, 656
old age, 250
opponent processing, 253
optical density (OD), 377
optical mounting, 536
optical path difference (OPD), 304, 310, 527, 529, 656
optical tabletop, 533
optical transfer function (OTF), 246, 657
oversizing, 520
Paranal, 50
parsec, 4
particulate scattering, 68
peak-to-valley (PV) error, 465
percent learning rate (PLR), 139
permeability, 346
phase change, 382
phase conjugation, 60
phase modulators, 70
photoconductive (PC), 195, 206
photolithography, 132
  yield, 158
photometry, 561
photomultiplier tube (PMT), 386


photopic, 263
  response, 237
photovoltaic (PV), 195
pigment concentration, 396
pixel, 360, 602, 658
Planck function, 3
plane parallel plate (PPP), 409, 410
plumbing, 378
plume, 619, 631–633, 658
point spread function (PSF), 82, 462, 659
pointing error, 63, 373
Poisson statistics, 198
Poisson's ratio, 659
power spectral density (PSD), 114, 444, 534, 659
precision elastic limit, 313
preload, 309, 311
prism, 411
probability of detection, 608
protanomaly, 255
protanopia, 255
pupil diameter, 230
pupil size, 226, 233, 481
quantization error, 386
quantum dot (QD), 185, 191
quantum efficiency, 15, 181, 183, 185, 186, 187, 234, 660
R0A, 209, 661
radiometry, 561, 587, 591, 660
random vibrations, 612
range equation, 608
rate reversal, 373, 374
RAW, 362
Rayleigh, 24, 628
  criterion, 599
  limit, 242
  range, 279
  scattering, 660
read-out integrated circuit (ROIC), 132, 159, 174, 191, 197, 199, 661
red, green, and blue (RGB), 91
reduced refraction, 39


reflectance, 401, 402
reflectivity, 627, 628
relative edge response (RER), 609, 610
resolution, 361, 376
resonant frequency, 537
responsivity, 180, 183, 193
retainer, 311
retaining ring, 310
retina, 228, 229, 262
  transfer function, 246
retroreflector, 285
Ritchey–Chretien, 425
rod density, 239, 240
rods, 235, 236
root mean squared (rms), 466
root sum squared (rss), 217, 461, 462
saccades, 227, 247, 248, 661
safety factor, 304, 377, 611
sag, 517
satellite laser communications (SLC), 32
scan mirror, 320
scatter, 531
scattering coefficient, 396
scintillation, 45, 67, 523, 661
  saturated, 46
scissor integration, 366
scotopic response, 236
scratch-dig, 518, 662
screw threads, 372, 373
seawater
  absorption coefficient, 397
  extinction coefficient, 397
  salinity, 391, 396
self-weight deflection, 296, 298, 302
Sellmeier formula, 338
Serrurier truss, 328
Shack–Hartmann, 662
  sensor, 59, 530
shipping, 374, 375, 377
short-wave infrared (SWIR), 83
shot noise, 211
signal-to-clutter ratio (SCR), 113
signal-to-noise ratio (SNR), 14, 67, 75, 221, 223, 359, 606, 609


size, weight, power, cost (SWAP-C), 156, 597
slope of waves, 402
small-angle approximation, 408
small pixels, 156
Snell angle, 624
Snell's law, 623, 630
Snellen chart, 241
Snellen fraction, 240
solar irradiance, 579
solar radiation, 622
solar reflection, 621
solid angle, 11, 271, 367, 368
Sparrow limit, 602
specific detectivity, 181, 184, 207
speckle, 378
spectral radiant power, 181
spectral responsivity, 181
spectrometer, 434
speed of light, 369
spin-cast mirror, 327
stability, 321
stagnation temperature, 634
Stahl, 152
star test, 532
stereo baseline length, 261
stereograph, 261
strained layer superlattice (SLS), 662
Strategic Defense Initiative (SDI), 1
stray light, 336, 445, 446, 447
Strehl ratio, 13, 50, 53, 59, 65, 273, 452, 453, 456–458, 663
stress birefringence, 310
stress optic coefficient, 311
stress relaxation, 498
stroke of actuators, 44
structural response, 611
subaperture, 663
subarray, 60
support points, 298
surface
  irregularities, 530
  morphology, 620
  roughness, 444
  scatter, 442


synthetic guide stars, 1
telescope, 147, 315, 327, 330, 513
  Schmidt, 418
temperature gradient, 347
tensile stress, 306, 308
test failure, 381
thermal conductivity, 351
thermal gradient, 507
thermal noise, 198
three-mirror anastigmat (TMA), 430
throughput, 588, 589
thrust, 632
TIFF, 362
tilt, 45
time constant, 346
  thermal, 347
tolerance, 143, 150, 153, 154, 462, 523
torque, 309
total integrated scatter (TIS), 442, 448
transmissibility, 534
transmission, 377
transmittance, 27, 28, 29
triangulation, 543–545, 549–553
trichromatic color theory, 252
tritanopia, 255
troland, 228, 230
turbidity, 390
Verdet constant, 350, 666
vernal equinox, 355
vibration isolation, 614
vibrations, 302
video compression, 361
visible glass, 432
visual magnitude, 3
von Neumann, 132
wander, 61, 652
water, 369
wavefront, 666
wavefront error (WFE), 272, 330, 426, 450, 525, 530, 666
waveguide, 477
Weibull statistics, 307


well capacity, 184, 213
white noise, 218, 220
wind force, 617
wind speed, 402
window, 303
Zernike, 53, 466
ZERODUR®, 497
Zipf's law, 133


John Lester Miller has over 40 years of experience in the infrared, laser, and electro-optics industry. He has published over 100 papers and four previous books, and he holds eight patents (with several pending). For several years, Miller has been one of the chairs of the planet's largest infrared technology conference at SPIE DCS, and he has chaired many sessions for the Military Sensing Symposia and a corporate photonic conference. He currently leads his own consulting company, Cascade Electro-Optics (www.cascadeeo.com), which serves our community with consulting and executive board advisories. Previously, he was with FLIR Systems for 19 years and one week in positions including Chief Technical Officer for the Government Division. Prior to that, Miller was with Lockheed Martin for 12 years in Orlando, Utica, and Denver, working on electro-optical systems. Before that, he worked at the Mauna Kea, Mount Wilson, and Palomar observatories in infrared electro-optic astronomy, and at Rockwell International on space-based infrared systems and infrared systems on the Space Shuttle. Miller has worked on 17 mergers and acquisitions and is on several boards of directors for companies in the photonics industry. Recently, he has led photonics R&D projects on 3D infrared, IRST, degraded visual systems, ADAS sensors for self-driving vehicles, robotic vision, gas sensing, cubesats, vision for artificial intelligence, and other emerging topics. He has degrees in physics, graduate studies in astronomy, and an MBA in management of technology. He is a SPIE Fellow.

Edward J. Friedman is a retired rocket scientist with 48 years of space telescope experience. He holds a B.S. in Physics from the University of Maryland (1966) and a Ph.D. in Physics from Wayne State University (1972). His career focused on space-based optical systems for monitoring Earth's climate and weather, laser communications, ocean science, planet detection, astronomy/astrophysics, and defense and intelligence systems. Since retiring, he has become a regular consultant to NASA, concentrating on evaluating proposals for space science missions. While working, he was a Boeing Technical Fellow, based on his expertise in the field of electro-optics; a Visiting Scientist at the National Center for Atmospheric Research, studying the application of remote sensing technology in climate studies; Chairperson of the organizing committee of the 2009 Guidance and Control Conference of the American Astronautical Society and editor of its Proceedings; and Adjunct Faculty and thesis supervisor of a successful doctoral candidate in optics at the University of Colorado. That student was Brian McComas, who is one of the authors of this book. Friedman is the author of three books on optics, including the first and second editions of Photonics Rules of Thumb with John Miller (2nd Edition, McGraw-Hill, 2004).

Jack Sanders-Reed has over 40 years of experience covering the spectrum from hard x ray through millimeter wave, from basic theoretical development through full system flight-test demonstrations and operational system implementation. Sanders-Reed is a recognized leader in degraded visual environment (DVE) sensing and phenomenology, photogrammetry, and acquisition, tracking, and pointing (ATP). He holds a Ph.D. in Physics from Case Western Reserve University and a high-tech MBA from Northeastern. He is a Fellow of SPIE and a Boeing Technical Fellow and Chief Technologist in the Avionics Systems Technology portion of Boeing's corporate R&D organization, Boeing Research & Technology (BR&T). He has worked in medical imaging, developing dual-energy digital radiography theory and laser film camera optical systems at Picker X-Ray; in surface science and laser light scattering at Case Western Reserve University, using Fourier optics and heterodyne detection to measure Angstrom-level capillary ripples on liquid surfaces (as well as x-ray diffraction studies using synchrotron radiation at Brookhaven National Lab); in maximum-likelihood faint moving target detection theory and multitarget tracking at MIT Lincoln Laboratory; and in pilot vision systems at Boeing, involving distributed aperture sensors (DAS) combining live and synthetic vision for head-mounted displays (HMDs) for use in DVE conditions, range-resolved Doppler (lidar) imaging (RRDI), and sensing for automation. Along the way, he built two laser laboratories. He is the author and developer of the Visual Fusion motion analysis software package (detection, tracking, photogrammetry). Sanders-Reed is the author of 29 peer-reviewed and conference papers and six patents, is an invited guest lecturer at MIT for an annual short course in high-speed imaging and motion analysis, and chairs the SPIE Situation Awareness in Degraded Environments conference.

Katie Schwertz is a Senior Design Engineer with Edmund Optics in Tucson, AZ, and has a decade of optical and optomechanical design experience. She received her B.S. in Optics from the University of Rochester's Institute of Optics and an M.S. in Optical Sciences from the University of Arizona's James C. Wyant College of Optical Sciences. Her Master's work focused on optomechanics and culminated in co-authoring the SPIE Field Guide to Optomechanical Design and Analysis with Jim Burge. Her expertise lies in both optical and optomechanical design for commercial and industrial applications across a wide variety of sectors, including life sciences, defense, and machine vision. She is also well versed in designing for manufacturability and production, as well as product and process development. She is a Senior Member of SPIE and has served in a variety of volunteer capacities through the society, including her current role on the SPIE Board of Directors.

Brian McComas has over 30 years of experience working on electro-optical systems for military, astronomical, remote sensing, and industrial use. He holds a B.S. in Physics from Cal Poly State University, an M.S. in Physics from the University of Connecticut, Storrs, and an M.S. and Ph.D. in Electrical Engineering from the University of Colorado, Boulder. He started his career at IBM East Fishkill as an Optical Engineer working on the automated inspection systems for IBM's mainframe computer components. After leaving IBM, he joined Santa Barbara Research Center as a Member of Technical Staff, where he served as a calibration REA for the Moderate Resolution Imaging Spectroradiometer and worked on the Enhanced Thematic Mapper Plus. Later, he joined Ball Aerospace and Technologies Corp. in Boulder, Colorado, where he worked on astronomical and remote sensing instruments and several national assets as an Electro-Optical Systems Engineer. After leaving BATC, he joined Northrop Grumman Space and Technology as a Senior Systems Engineer, where he served as the Optical Verification Lead for the James Webb Space Telescope. McComas is currently a Senior Principal Systems Engineer at an aerospace company in Tucson, AZ, where he develops visible and infrared electro-optical systems.