Robotic Navigation and Mapping with Radar [1 ed.] 9781608074839, 9781608074822


English, 377 pages, 2012


Contents Overview

Part I: Fundamentals of Radar and Robotic Navigation. A Brief Overview of Radar Fundamentals. An Introduction to Detection Theory. Robotic Navigation and Mapping.
Part II: Radar Modeling and Scan Integration. Predicting and Simulating FMCW Radar Measurements. Reducing Detection Errors and Noise with Multiple Radar Scans.
Part III: Robotic Mapping with Known Vehicle Location. Grid-Based Robotic Mapping with Detection Likelihood Filtering. Feature-Based Robotic Mapping with Random Finite Sets.
Part IV: Simultaneous Localization and Mapping. Radar-Based SLAM with Random Finite Sets. X-Band Radar-Based SLAM in an Off-Shore Environment.

About the Authors

Martin Adams is a professor in the Department of Electrical Engineering and a member of the Advanced Mining Technology Centre (AMTC) at the University of Chile. He received his D.Phil. in engineering science at the University of Oxford.
John Mullane is a research scientist at Projective Space Pte. Ltd., Singapore. He holds a Ph.D. in electrical and electronic engineering from Nanyang Technological University.
Ebi Jose is a research scientist at Projective Space Pte. Ltd., Singapore. He holds a Ph.D. in electrical and electronic engineering from Nanyang Technological University.
Ba-Ngu Vo is a professor at the School of Electrical, Electronic and Computer Engineering at the University of Western Australia. He received his Ph.D. in engineering from Curtin University.

ISBN-13: 978-1-60807-482-2 ISBN-10: 1-60807-482-X

BOSTON

LONDON

www.artechhouse.com


Focusing on autonomous robotic applications, this cutting-edge resource offers a practical treatment of short-range radar processing for reliable object detection at ground level. This unique book demonstrates probabilistic radar models and detection algorithms specifically for robotic vehicles on land and at sea, in coastal environments. It examines grid-based robotic mapping with radar, based on measurement likelihood estimation. Moreover, it reformulates the feature-based robotic mapping and simultaneous localization and map building (SLAM) problems so that detection uncertainty, as well as spatial uncertainty, can be incorporated in a principled manner. Engineers and researchers will find detailed coverage of SLAM, an area referred to as the Holy Grail of autonomous robotic research.


Robotic Navigation and Mapping with Radar

Martin Adams, John Mullane, Ebi Jose, Ba-Ngu Vo


Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the U.S. Library of Congress.

British Library Cataloguing in Publication Data
A catalog record for this book is available from the British Library.

ISBN-13: 978-1-60807-482-2 Cover design by Vicki Kane © 2012 Artech House All rights reserved. Printed and bound in the United States of America. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the publisher. All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. Artech House cannot attest to the accuracy of this information. Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark. 10 9 8 7 6 5 4 3 2 1

To Hui-Shin, James, and our child to be, for their patience despite the midnight hours spent preparing this book. M. A.

Contents

Preface
Acknowledgments
Acronyms
Nomenclature

Chapter 1 Introduction

1.1 Isn’t Navigation and Mapping with Radar Solved?
1.1.1 Applying Missile/Aircraft Guidance Technologies to Robotic Vehicles
1.1.2 Placing Autonomous Navigation of Robotic Vehicles into Perspective
1.2 Why Radar in Robotics? Motivation
1.3 The Direction of Radar-Based Robotics Research
1.3.1 Mining Applications
1.3.2 Intelligent Transportation System Applications
1.3.3 Land-Based SLAM Applications
1.3.4 Coastal Marine Applications
1.4 Structure of the Book
References

Part I Fundamentals of Radar and Robotic Navigation

Chapter 2 A Brief Overview of Radar Fundamentals

2.1 Introduction
2.2 Radar Measurements
2.3 The Radar Equation
2.4 Radar Signal Attenuation
2.5 Measurement Power Compression and Range Compensation
2.5.1 Logarithmic Compression
2.5.2 Range Compensation
2.5.3 Logarithmic Compression and Range Compensation During Target Absence
2.5.4 Logarithmic Compression and Range Compensation During Target Presence
2.6 Radar-Range Measurement Techniques
2.6.1 Time-of-Flight (TOF) Pulsed Radar
2.6.2 Frequency Modulated, Continuous Wave (FMCW) Radar
2.7 Sources of Uncertainty in Radar
2.7.1 Sources of Uncertainty Common to All Radar Types
2.8 Uncertainty Specific to TOF and FMCW Radar
2.8.1 Uncertainty in TOF Radars
2.8.2 Uncertainty in FMCW Radars
2.9 Polar to Cartesian Data Transformation
2.9.1 Nearest Neighbor Polar to Cartesian Data Conversion
2.9.2 Weighted Polar to Cartesian Data Conversion
2.10 Summary
2.11 Bibliographical Remarks
2.11.1 Extensions to the Radar Equation
2.11.2 Signal Propagation/Attenuation
2.11.3 Range Measurement Methods
2.11.4 Uncertainty in Radar
References

Chapter 3 An Introduction to Detection Theory

3.1 Introduction
3.2 Concepts of Detection Theory
3.3 Different Approaches to Target Detection
3.3.1 Non-adaptive Detection
3.3.2 Hypothesis Free Modeling
3.3.3 Stochastic-Based Adaptive Detection
3.4 Detection Theory with Known Noise Statistics
3.4.1 Constant Pfa^CFAR with Known Noise Statistics
3.4.2 Probability of Detection PD^CFAR with Known Noise Statistics
3.4.3 Probabilities of Missed Detection PMD^CFAR and Noise Pn^CFAR with Known Noise Statistics
3.5 Detection with Unknown Noise Statistics - Adaptive CFAR Processors
3.5.1 Cell Averaging - CA-CFAR Processors
3.5.2 Ordered Statistics - OS-CFAR Processors
3.6 Summary
3.7 Bibliographical Remarks
References

Chapter 4 Robotic Navigation and Mapping

4.1 Introduction
4.2 General Bayesian SLAM - The Joint Problem
4.2.1 Vehicle State Representation
4.2.2 Map Representation
4.3 Solving Robot Mapping and Localization Individually
4.3.1 Probabilistic Robotic Mapping
4.3.2 Probabilistic Robotic Localization
4.4 Popular Robotic Mapping Solutions
4.4.1 Grid-Based Robotic Mapping (GBRM)
4.4.2 Feature-Based Robotic Mapping (FBRM)
4.5 Relating Sensor Measurements to Robotic Mapping and SLAM
4.5.1 Relating the Spatial Measurement Interpretation to the Mapping/SLAM State
4.5.2 Relating the Detection Measurement Interpretation to the Mapping/SLAM State
4.6 Popular FB-SLAM Solutions
4.6.1 Bayesian FB-SLAM - Approximate Gaussian Solutions
4.6.2 Feature Association
4.6.3 Bayesian FB-SLAM - Approximate Particle Solutions
4.6.4 A Factorized Solution to SLAM (FastSLAM)
4.6.5 Multi-Hypothesis (MH) FastSLAM
4.6.6 General Comments on Vector-Based FB-SLAM
4.7 FBRM and SLAM with Random Finite Sets
4.7.1 Motivation: Why Random Finite Sets
4.7.2 RFS Representations of State and Detected Features
4.7.3 Bayesian Formulation with a Finite Set Feature Map
4.7.4 The Probability Hypothesis Density (PHD) Estimator
4.7.5 The PHD Filter
4.8 SLAM and FBRM Performance Metrics
4.8.1 Vehicle State Estimate Evaluation
4.8.2 Map Estimate Evaluation
4.8.3 Evaluation of FBRM and SLAM with the Second Order Wasserstein Metric
4.9 Summary
4.10 Bibliographical Remarks
4.10.1 Grid-Based Robotic Mapping (GBRM)
4.10.2 Gaussian Approximations to Bayes Theorem
4.10.3 Non-Parametric Approximations to Bayesian FB-SLAM
4.10.4 Other Approximations to Bayesian FB-SLAM
4.10.5 Feature Association and Management
4.10.6 Random Finite Sets (RFSs)
4.10.7 SLAM and FBRM Evaluation Metrics
References


Part II Radar Modeling and Scan Integration

Chapter 5 Predicting and Simulating FMCW Radar Measurements

5.1 Introduction
5.2 FMCW Radar Detection in the Presence of Noise
5.3 Noise Distributions During Target Absence and Presence
5.3.1 Received Power Noise Estimation
5.3.2 Range Noise Estimation
5.4 Predicting Radar Measurements
5.4.1 A-Scope Prediction Based on Expected Target RCS and Range
5.4.2 A-Scope Prediction Based on Robot Motion
5.5 Quantitative Comparison of Predicted and Actual Measurements
5.6 A-Scope Prediction Results
5.6.1 Single Bearing A-Scope Prediction
5.6.2 360° Scan Multiple A-Scope Prediction, Based on Robot Motion
5.7 Summary
5.8 Bibliographical Remarks
References

Chapter 6 Reducing Detection Errors and Noise with Multiple Radar Scans

6.1 Introduction
6.2 Radar Data in an Urban Environment
6.2.1 Landmark Detection with Single Scan CA-CFAR
6.3 Classical Scan Integration Methods
6.3.1 Coherent and Noncoherent Integration
6.3.2 Binary Integration Detection
6.4 Integration Based on Target Presence Probability (TPP) Estimation
6.5 False Alarm and Detection Probabilities for the TPP Estimator
6.5.1 TPP Response to Noise: Pfa^TPP
6.5.2 TPP Response to a Landmark and Noise: PD^TPP
6.5.3 Choice of ap, T^TPP(ap, l), and l
6.5.4 Numerical Method for Determining T^TPP(ap, l) and PD^TPP
6.6 A Comparison of Scan Integration Methods
6.7 A Note on Multi-Path Reflections
6.8 TPP Integration of Radar in an Urban Environment
6.8.1 Qualitative Assessment of TPP Applied to A-Scope Information
6.8.2 Quantitative Assessment of TPP Applied to Complete Scans
6.8.3 A Qualitative Assessment of an Entire Parking Lot Scene
6.9 Recursive A-Scope Noise Reduction
6.9.1 Single A-Scope Noise Subtraction
6.9.2 Multiple A-Scope - Complete Scan Noise Subtraction
6.10 Summary
6.11 Bibliographical Remarks
References

Part III Robotic Mapping with Known Vehicle Location

Chapter 7 Grid-Based Robotic Mapping with Detection Likelihood Filtering

7.1 Introduction
7.2 The Grid-Based Robotic Mapping (GBRM) Problem

7.2.1 GBRM Based on Range Measurements
7.2.2 GBRM with Detection Measurements
7.2.3 Detection versus Range Measurement Models
7.3 Mapping with Unknown Measurement Likelihoods
7.3.1 Data Format
7.3.2 GBRM Algorithm Overview
7.3.3 Constant False Alarm Rate (CFAR) Detector
7.3.4 Map Occupancy and Detection Likelihood Estimator
7.3.5 Incorporation of the OS-CFAR Processor
7.4 GBRM-ML Particle Filter Implementation
7.5 Comparisons of Detection and Spatial-Based GBRM
7.5.1 Dataset 1: Synthetic Data, Single Landmark
7.5.2 Dataset 2: Real Experiments in the Parking Lot Environment
7.5.3 Dataset 3: A Campus Environment
7.6 Summary
7.7 Bibliographical Remarks
References

Chapter 8 Feature-Based Robotic Mapping with Random Finite Sets

8.1 Introduction
8.2 The Probability Hypothesis Density (PHD)-FBRM Filter
8.3 PHD-FBRM Filter Implementation
8.3.1 The FBRM New Feature Proposal Strategy
8.3.2 Gaussian Management and State Estimation
8.3.3 GMM-PHD-FBRM Pseudo Code
8.4 PHD-FBRM Computational Complexity
8.5 Analysis of the PHD-FBRM Filter
8.6 Summary
8.7 Bibliographical Remarks
References

Part IV Simultaneous Localization and Mapping

Chapter 9 Radar-Based SLAM with Random Finite Sets

9.1 Introduction
9.2 SLAM with the PHD Filter
9.2.1 The Factorized RFS-SLAM Recursion
9.2.2 PHD Mapping - Rao-Blackwellization
9.2.3 PHD-SLAM
9.3 Implementing the RB-PHD-SLAM Filter
9.3.1 PHD Mapping - Implementation
9.3.2 The Vehicle Trajectory - Implementation
9.3.3 Estimating the Map
9.3.4 GMM-PHD-SLAM Pseudo Code

9.4 RB-PHD-SLAM Computational Complexity
9.5 Radar-Based Comparisons of RFS and Vector-Based SLAM
9.6 Summary
9.7 Bibliographical Remarks
References

Chapter 10 X-Band Radar-Based SLAM in an Off-Shore Environment

10.1 Introduction
10.2 The ASC and the Coastal Environment
10.3 Marine Radar Feature Extraction
10.3.1 Adaptive Coastal Feature Detection - OS-CFAR
10.3.2 Image-Based Smoothing - Gaussian Filtering
10.3.3 Image-Based Thresholding
10.3.4 Image-Based Clustering
10.3.5 Feature Labeling
10.4 The Marine Based SLAM Algorithms
10.4.1 The ASC Process Model
10.4.2 RFS SLAM with the PHD Filter
10.4.3 NN-EKF-SLAM Implementation
10.4.4 Multi-Hypothesis (MH) FastSLAM Implementation
10.5 Comparisons of SLAM Concepts at Sea
10.5.1 SLAM Trial 1 - Comparing PHD and NN-EKF-SLAM
10.5.2 SLAM Trial 2 - Comparing RB-PHD-SLAM and MH-FastSLAM
10.6 Summary
10.7 Bibliographical Remarks
References

Appendix A The Navtech FMCW MMW Radar Specifications

Appendix B Derivation of g(Zk|Zk–1, Xk) for the RB-PHD-SLAM Filter
B.1 The Empty Strategy
B.2 The Single Feature Strategy

Appendix C NN-EKF and FastSLAM Feature Management

Index

Preface

A reviewer’s comment made at the review stage of this book stated that “There are many designers, analysts, and users of radar who would be interested in studying how radar can be used in the control of robotic vehicles. There is an entire community currently trying to apply radar technology to automobile guidance and safety, an area having much in common with the robotics application. It is time for these diverse communities to communicate.”

. . . a comment with which we, the authors, very much agree. It is true that much of the robotic navigation literature is aimed entirely at robotics researchers, and a similar statement could be made of the radar research literature, even when the two encroach on each other’s areas. In robotics, sensory information is often considered in the form of simple range value decisions, with accompanying bearing and possibly elevation information. This allows information from the world surrounding a robotic vehicle to be represented relative to the vehicle’s estimated location in space, forming what is known as a robotic map. Uncertainty in the sensing process is then typically represented spatially by “smearing” the range, bearing, and elevation values in space, yielding a region, rather than a point, from which the range decision actually originated. This type of analysis is not surprising, given the sensors typically used by robotics engineers. Affordable laser range finders provide outputs which are range decisions at various bearing/elevation values. Stereo vision can be used to infer the spatial coordinates of points in space, based on image disparity. Therefore, possibly as a result of the experience gained with these sensors, the analysis of radar in robotics often follows a similar line of thinking. By the use of suitable received power thresholds, range decisions are once again extracted, and sensing uncertainty is treated in the spatial domain as just described. Interestingly, radar and tracking engineers have interpreted sensory information differently. To a radar engineer, received power values at successive range, bearing, and possibly elevation values are related to the possible existence of an object in space. Hence, uncertainty in the sensing process is primarily formulated in terms of the decision itself as to whether or not a landmark even exists at the range, bearing/elevation coordinates in question.
Such an analysis usually gives rise to the well-known concepts (in the radar domain) of probabilities of detection and false alarm, and significantly less emphasis, if any, is placed on the possible spatial uncertainty of a hypothesised landmark. Roughly stated, it can be said that robotics and radar engineers seem to think in different spaces. Robotics engineers generally translate sensor uncertainty into the spatial space, whereas radar engineers translate sensor uncertainty into the detection or existence space. The obvious question is: Who is right? A core aim of this book is to show that each school of thought has important mathematical consequences,


resulting in their corresponding engineering restrictions, and that both should be considered. In the opinions of the authors, this has led to a great divergence in the “language” used in the robotics and radar literature. For example, to the radar engineer, it has long been considered elementary that: radar can detect more than one landmark within a beam; detecting an object is enhanced if the power of the receiver output, as a function of range, is considered; integration over several repetition periods within each beam is useful; and clutter should be considered when forming a radar map. To the robotic navigation engineer, since the location in space of landmarks (possibly obstacles) is of prime importance, the above concepts are rarely, if at all, considered, and various forms of maps are interpreted from sensor data, in which the uncertainty is considered to be spatial. Going back to the reviewer’s comment, he/she continued with some very useful advice: “If the authors wish to communicate outside their own community, they must use a common language. The entire IEEE community relies on the IEEE Dictionary to standardize the special terminology of electrical and electronics terms. The robotics engineers are making a mistake if they set themselves apart from these other users.”

. . . it is time indeed for “these diverse communities to communicate!” This is the main motivation for this book. The application of radar to robotic vehicle navigation has enlightened us to the “language barrier” that seems to exist between robotics and radar engineers. So have we broken this language barrier in writing this book? To say so would be somewhat overly ambitious, but we would like to think that we have made a step in the right direction. The first three authors of this book are robotics engineers and the fourth is a tracking specialist. Throughout the book, we have tried to encompass the interests of robotics, radar, and tracking researchers. On occasions we may make statements which seem elementary to radar experts - if so, we apologise and ask for your patience. Similarly, on occasions we may make statements which seem controversial to robotics engineers - if so, we ask for your open-mindedness. A major contribution of the book is the joint probabilistic treatment of both landmark existence and spatial uncertainty in Chapters 4, 8, 9, and 10. Recent methods from stochastic geometry and point process theory are shown to have a direct impact on the treatment of radar data and the modelling of robotic maps. These methods provide a means of bridging the above “language barrier” in terms of interpreting measurements. The joint probabilistic analysis encompasses probabilities of detection and false alarm, representing target birth and clutter instantiations - terms which are common in the radar engineer’s dictionary. It further encompasses spatial uncertainty information, often considered insignificant by radar engineers, but common in the robotic engineer’s dictionary. The great benefits of combining radar, and its accompanying decades of associated literature, with the relatively recent plethora of research in robotic navigation and mapping are demonstrated throughout the book.
Radar and robotic navigation and mapping fundamentals are the focus of the earlier chapters, in order to provide the background knowledge necessary for their combination in later chapters. Concepts for “robotic navigation and mapping with radar” are then presented, with pseudo-code implementation examples and experimental demonstrations and results in urban and off-shore coastal environments.

Acknowledgments

All of the experimental work demonstrated in this book was carried out at Nanyang Technological University (NTU), Singapore, within the coastal environment of Singapore, and at the Universidad de Chile, Santiago, Chile. In particular, the land-based experimental work was funded under the Academic Research Fund (AcRF) Grant, RG 10/01, Singapore, and the Conicyt - Fondecyt Research Project No. 1110579, Ministerio de Educación, Chile. The marine-based research was funded in part by the Singapore National Research Foundation (NRF) through the Singapore-MIT Alliance for Research and Technology (SMART), Center for Environmental Sensing and Modelling (CENSAM). The work of Ba-Ngu Vo is supported by the Australian Research Council under the Future Fellowship FT0991854. The application of Finite Set Statistics (FISST) to robot mapping and SLAM has been inspired by the multi-source, multi-target work of Ronald P.S. Mahler, Lockheed Martin Tactical Systems, Minnesota, USA. For the experiments carried out in Singapore, the authors are indebted to Samuel Keller, ETH Zürich, Switzerland, for his master’s dissertation work, which contributed to Chapters 2 and 10. Thanks are also given to Nick Patrikalakis, Franz Hover, and Anthony Yeo, all of whom, during their time at the SMART Center, helped in the coordination of the coastal sea trials used to generate the results in Chapter 10. We are also grateful for the help of Akshay Rao in gathering marine data during the sea trials. The committed technical support of Chai Soon Tan and Chiu Kiat Chia at NTU is also gratefully acknowledged. The software developed during the NTU undergraduate final year project of Hui Chin Tan and the NTU master’s thesis of Tobias Gunawan was used in the acquisition of land-based radar data in some of the experiments. For the work performed in Chile, the authors would like to thank Daniel Lühr for his contributions throughout the book and particularly in Chapter 6.
The patient help of Paul Vallejos, Isao Parra, and Fernando Bernuy during field trials is also gratefully acknowledged. Permission to use the Advanced Mining Technology Center (AMTC) was kindly given by its director, Javier Ruiz del Solar. The authors would like to thank the Department of Electrical Engineering, Universidad de Chile, for the provision of the facilities for completing this book. Finally, we would like to acknowledge our families for their support and patience during the writing, correcting, editing, and seriously late nights spent on this book.


Acronyms

Acronyms Introduced in Chapter 1

RADAR  Radio Detection and Ranging
GPS  Global Positioning System
GOT  Go Onto Target
GOLIS  Go Onto Location in Space
IMU  Inertial Measurement Unit
INS  Inertial Navigation System
PPI  Plan Position Indicator
HD  Hausdorff Distance
SAR  Synthetic Aperture Radar
FMCW  Frequency Modulated Continuous Wave
SLAM  Simultaneous Localization and Map building
ACC  Adaptive Cruise Control
FISST  Finite Set Statistics
PHD  Probability Hypothesis Density
ASC  Autonomous Surface Craft
MMW  Millimeter Wave radar
EKF  Extended Kalman Filter

Acronyms Introduced in Chapter 2

RCS  Radar Cross Section
TOF  Time Of Flight
AMCW  Amplitude Modulated Continuous Wave
VCO  Voltage Controlled Oscillator
FFT  Fast Fourier Transform
SNP  Signal to Noise Power
SCR  Signal to Clutter Ratio
SNR  Signal to Noise Ratio
CFTD  Constant Fraction Timing Discriminator
LADAR  Laser Detection and Ranging

Acronyms Introduced in Chapter 3

CFAR  Constant False Alarm Rate
CA-CFAR  Cell Averaging - CFAR
OS-CFAR  Ordered Statistics - CFAR


GM  Gaussian Mixture
PDF  Probability Density Function
CUT  Cell Under Test
IID  Independent and Identically Distributed

Acronyms Introduced in Chapter 4

GA  Genetic Algorithm
RM  Robotic Mapping
GB  Grid Based
GBRM  Grid Based Robotic Mapping
FB  Feature Based
FBRM  Feature Based Robotic Mapping
FastSLAM  Factored Solution to SLAM
JCBB  Joint Compatibility Branch and Bound
MH  Multiple Hypothesis
RFS  Random Finite Set
PHD  Probability Hypothesis Density
DOF  Degrees of Freedom
ML  Maximum Likelihood
NN  Nearest Neighbor
NNSF  Nearest Neighbor Standard Filter
ICNN  Individual Compatibility Nearest Neighbor
SCNN  Sequential Compatibility Nearest Neighbor
MHT  Multiple Hypothesis Tracking
FoV  Field of View
CPHD  Cardinalized PHD
MeMBer  Multi-target Multi-Bernoulli
MC  Monte Carlo
SMC  Sequential Monte Carlo
RB  Rao-Blackwellized
GMM  Gaussian Mixture Model
EM  Expectation Maximization
SEIF  Sparse Extended Information Filter
D-SLAM  Dense SLAM
ESEIF  Exactly Sparse Extended Information Filter
SAM  Smoothing and Mapping
iSAM  Incremental Smoothing and Mapping
PDAF  Probabilistic Data Association Filter
JPDAF  Joint Probabilistic Data Association Filter
TDOA  Time Difference of Arrival
RMSE  Root Mean Square Error


Acronyms Introduced in Chapter 6

TPP  Target Presence Probability
MSE  Mean Square Error
SSA-CFAR  Scan by Scan Averaging CFAR
TBD  Track Before Detect

Acronyms Introduced in Chapter 7

GBRM-SL  Grid Based Robotic Mapping Based on Spatial Likelihoods
ML  Measurement Likelihood
GBRM-ML  Grid Based Robotic Mapping Based on Measurement Likelihoods
O  Occupied
E  Empty
U  Unknown
DL  Detection Likelihood
FAL  False Alarm Likelihood
MDL  Missed Detection Likelihood
NDL  Noise Detection Likelihood
HMM  Hidden Markov Model
NASSE  Normalized Average Sum of Squared Error

Acronyms Introduced in Chapter 9

UKF  Unscented Kalman Filter
MAP  Maximum A Posteriori
EAP  Expected A Posteriori

Acronym Introduced in Chapter 10

AIS  Automatic Identification System

Nomenclature

Nomenclature Introduced in Chapter 2

q  Discrete range bin location within a radar A-Scope
Q  Maximum number of range bins within a radar A-Scope
Sradar  Radar A-Scope array of general received power values
Sradar(q)  A-Scope general received power value component at range bin q
Slin  Radar A-Scope array of linear received power values
Slin(q)  A-Scope linear received power value component at range bin q
Slog  Radar A-Scope array of logarithmic received power values (dB)
Slog(q)  A-Scope logarithmic received power value component at range bin q (dB)
Slog_rc  Radar A-Scope array of logarithmic received power values (dB) with range compensation
Slog_rc(q)  A-Scope logarithmic received power value component at range bin q (dB) with range compensation
ST  Transmitted radar power
GAntenna  Radar antenna gain
r  Target range from the radar
r(q)  Range value corresponding to discrete range bin q
GRCS  Target radar cross section
AR  Effective radar antenna receiver area
λ  Radar electromagnetic radiation wavelength
L  Radar losses (≥ 1)
fb  FMCW radar beat frequency variable
AFMCW  Constant of proportionality relating fb to r(q)
Snoise(q)  Linear received radar power comprising only noise
ri  Range to target i down range
c  Speed of electromagnetic energy propagation
τi  Time taken for energy to reach target i from the radar, and return
Δf  Radar frequency/chirp bandwidth
τm  Pulse width of TOF radar
Δr  Radar range resolution
t  Radar signal transmission time
vT(t)  Transmitted FMCW radar signal
AT  FMCW carrier signal amplitude
ωc  FMCW carrier frequency
Td  FMCW chirp repetition time
vR(t – τi)  FMCW received echo signal


AR  FMCW received signal amplitude
vbeat(·)  Beat frequency signal produced in the FMCW process
fbi  FMCW beat frequency corresponding to target i
fo  FMCW chirp start (lowest) frequency
ri  Range to target i
fri  Doppler shift beat frequency component caused by range of target i, relative to the radar
fdi  Doppler shift beat frequency component caused by velocity of target i, relative to the radar
Vi  Radial velocity of target i, relative to the radar
Slin_min  Radar’s minimum detectable signal
rmax  Radar’s maximum range capability
STh_noise  Thermal noise generated by the radar
KBoltz  Boltzmann constant
T  Absolute temperature
T0  Standard temperature (IEEE standards, T0 = 290K)
Δf_BW  Radar receiver bandwidth
SRec_Noise  Actual noise generated by the radar
Fn  Radar receiver’s noise figure
SRec_In  Input power at the radar’s receiver
SRec_Out  Output power from the radar’s receiver
GRec  Radar receiver gain
p(GRCS)  PDF representing target RCS fluctuations
ḠRCS  Expected (average) value of GRCS
θ  Radar scanner bearing angle
Slin(q, θ)  Received power in A-Scope range bin q, at bearing angle θ
Slin(x, y)  Received power transformed to Cartesian coordinates x, y
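As a rough illustration of how the FMCW symbols above fit together, under the standard linear-chirp, stationary-target assumption (no Doppler component): a chirp of bandwidth Δf swept over Td produces a beat frequency fb = 2·Δf·r/(c·Td) for a target at range r, so the constant of proportionality relating fb to range (AFMCW above) is c·Td/(2·Δf). This sketch is our own illustration, not the book's code, and the sweep values in the example are assumed:

```python
# Illustrative FMCW beat-frequency/range relation (stationary target, no Doppler).
# Assumed example values; function names are ours, not the book's.

C = 3.0e8  # speed of electromagnetic energy propagation (m/s)

def beat_frequency(r, chirp_bandwidth, chirp_time, c=C):
    """Beat frequency f_b (Hz) for a stationary target at range r (m),
    given chirp bandwidth Δf (Hz) and chirp repetition time T_d (s)."""
    return 2.0 * chirp_bandwidth * r / (c * chirp_time)

def range_from_beat(f_b, chirp_bandwidth, chirp_time, c=C):
    """Invert the relation: r = c * T_d * f_b / (2 * Δf)."""
    return c * chirp_time * f_b / (2.0 * chirp_bandwidth)

if __name__ == "__main__":
    df, td = 600e6, 1e-3                    # assumed: 600 MHz sweep over 1 ms
    fb = beat_frequency(50.0, df, td)
    print(fb)                               # 200000.0 (Hz) for a target at 50 m
    print(range_from_beat(fb, df, td))      # 50.0
```

The inverse mapping is what turns each FFT bin q of the beat signal into the range value r(q) listed above.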

Nomenclature Introduced in Chapter 3

k  Discrete time index
Xk  Robot/radar location at time k
M  Mathematical representation of a map
g(r|M, Xk)  Measurement likelihood of measuring r given Xk and M
N(·; μ, Σ)  Gaussian PDF, with mean μ and variance Σ
H0  Target absence hypothesis
H1  Target presence hypothesis
p(Slin|H0/1)  Received linearized power PDF given target absence/presence hypotheses
Exponential(λta)  Exponential noise PDF with parameter λta
S̄lin  Expected (mean) linearized received radar power
Σtp  Variance of the assumed Gaussian received power PDF during target presence
Pfa  Probability of false alarm
PD  Probability of detection
PMD  Probability of missed detection
Pn  Probability of noise
SCFAR  Adaptive received power threshold to achieve a Constant False Alarm Rate
Slin_sum  Sum of the linearized power in a CFAR window
W  Half window width of a CFAR test window
T(Slin(q))  Test statistic applied to a CFAR window
ŜNP(q)  Estimated signal to noise power in range bin q
ηg  Threshold to achieve the CFAR property
G  Number of CFAR processor guard cells
kos  The chosen, ordered statistic in the OS-CFAR processor
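To illustrate how Pfa, SCFAR, and the exponential target-absence noise model above relate: for exponentially distributed noise power with known mean μ, P(Slin > SCFAR | H0) = exp(−SCFAR/μ), so the fixed threshold SCFAR = −μ·ln(Pfa) yields the desired constant false alarm rate. A minimal sketch of this known-noise-statistics case (our own illustration, not the book's pseudo-code; function names are ours):

```python
import math
import random

def cfar_threshold(mean_noise_power, pfa):
    """Power threshold giving false-alarm probability `pfa` when the
    linearized noise power is exponential with the given mean
    (the target-absence hypothesis H0 with known noise statistics)."""
    return -mean_noise_power * math.log(pfa)

def empirical_pfa(mean_noise_power, threshold, trials=200_000, seed=1):
    """Monte Carlo check: fraction of pure-noise samples exceeding the threshold."""
    rng = random.Random(seed)
    exceed = sum(1 for _ in range(trials)
                 if rng.expovariate(1.0 / mean_noise_power) > threshold)
    return exceed / trials

if __name__ == "__main__":
    mu, pfa = 2.0, 0.01
    thr = cfar_threshold(mu, pfa)
    print(round(thr, 2))            # -mu * ln(0.01) ≈ 9.21
    print(empirical_pfa(mu, thr))   # close to the nominal 0.01
```

When μ is unknown, it must be estimated from the cells surrounding the cell under test, which is exactly what the adaptive CA-CFAR and OS-CFAR processors of Section 3.5 do.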

Nomenclature Introduced in Chapter 4

Uk  Control input applied to the robotic vehicle at time k
U^k  History of control inputs applied to the robotic vehicle from time 0 up to and including time k
X  State space in which the vehicle maneuvers
X^k  History of vehicle poses (trajectory) from time 0 up to and including time k
nX  Dimensionality of the state space X
Zk  Exteroceptive spatial measurement(s) recorded from the robotic vehicle at time k
Z^k  History of exteroceptive measurements from time 0 up to and including time k
Mk  Portion of map M which has been observed by the exteroceptive sensors up to, and including, time k
pk|k–1(Xk, Mk|·)  Predicted, conditional joint PDF on Xk and Mk at time k
pk|k(Xk, Mk|·)  Posterior, conditional joint PDF on Xk and Mk at time k
g(Zk|·)  Conditional vector based measurement likelihood
nM  Dimensionality of an individual map element
fveh( )  Vector function representing the vehicle motion model
vk  Noise vector representing uncertainties in the vehicle motion model
vkV  Forward velocity noise component of vk
vkγ  Steering angle noise component of vk
ẋ  Continuous time robotic vehicle velocity component along the x axis
ẏ  Continuous time robotic vehicle velocity component along the y axis
φ̇  Continuous time robotic vehicle yaw rate about the z axis
xk, yk, φk  Discrete time robotic pose coordinates at time k
V  Continuous time desired forward velocity input signal to the robotic vehicle
γS  Continuous time desired steering angle signal to the robotic vehicle
LWB  Robotic vehicle’s wheel base between rear and front wheels
ΔT  Time between discrete updates k – 1 and k
fmap( )  Vector function representing the map transition model
xki, yki  Cartesian coordinates of the ith environmental landmark at time k
p  Number of point landmarks in map M
m(x, y)  Random variable representing the occupancy state of a cell at x, y
m̄(x, y)  Inverse of m(x, y)
mi  Location of the ith landmark in a feature based map
p̄ (≤ p)  Number of features observed up to time k (in Mk)
(σr²)ki  Range measurement variance of the ith landmark at time k
(σθ²)ki  Bearing measurement variance of the ith landmark at time k
Rki  Spatial measurement covariance of the ith landmark at time k
(xkradar, ykradar)  Cartesian coordinates of the radar’s location at time k
hspatial(Mk, Xk)  Vector function relating Xk and Mk to a spatial measurement
wkspatial  Noise vector representing uncertainties in a spatial measurement
(wkr)i  Range noise component of wkspatial
(wkθ)i  Bearing noise component of wkspatial
Zkpower  Vector of A-Scope power values at time k
Sklin(θ: 0 → 2π)  A stacked vector of A-Scope power values, for all bearing angles comprising a full radar scan, at time k
hpower(Mk, Xk)  Vector of functions relating relative map locations to measured, received power values
wkpower  Noise vector representing uncertainties in a received power measurement
Zkdet, H0/1  Vector of binary detection decision hypotheses at time k
hdetection( )  Vector of detection functions (e.g. CFAR)
O(·)  Log-odds representation of a probability
ζk  True joint SLAM state, containing Xk and Mk, at time k
ζ̂k|k–1  Predicted mean vector of ζk, at time k, given all measurements up to and including time k – 1
ζ̂k|k  Posterior, estimated mean vector of ζk, at time k
fSLAM( )  Vector function representing the SLAM transition model
Pk|k–1  Predicted error covariance matrix of ζk, at time k, given all measurements up to and including time k – 1
Pk|k  Posterior estimated error covariance matrix of ζk at time k
Kk  Kalman gain
∇X  Jacobian operator containing first derivatives with respect to state X
∇U  Jacobian operator containing first derivatives with respect to the input vector U
Ek  Measurement covariance matrix at time k
Qk  SLAM transition model process noise matrix
dM²( )  Mahalanobis statistical difference between two vectors


z_k^i           ith measurement in Z_k
ζ^(i)           ith sample (particle) drawn from p_k|k(ζ_k | .)
w_k^(i)         Weight associated with particle ζ_k|k^(i)
(X^k)^(i)       The ith hypothesized vehicle trajectory
M               Feature based map represented as a Random Finite Set (RFS)
M_k             Explored RFS map up to, and including, time k
M̂_k             Estimate of M_k
Z_k             RFS sensor measurement at time k
Z^k             History of RFS measurements up to, and including, time k
g(Z_k | .)      Conditional set based measurement likelihood
FoV(X_k)        Region of the Field of View (FoV) of the exteroceptive sensor(s) from location X_k
B_k             RFS of the new features at time k
D_k(m, X_k)     Predicted measurement set resulting from map landmark m and robot location X_k
C_k             Clutter measurement set caused by spurious measurements, which may depend on robot location X_k
M \ M_k         Map elements of M not in M_k, at time k
f^M( )          Transition function representing the RFS map transition model
f_k^B( )        Transition function representing the RFS of the new landmarks which enter the FoV at time k
z_k             Number of measurements in Z_k
z_k^i           ith measurement in Z_k
m_k             Number of map landmark elements in M_k
m_i             ith feature in M_k
m̂_k|k-1         Predicted number of map landmark elements in M_k
m̂_k|k           Estimated number of map landmark elements in M_k
v_k( )          Intensity or Probability Hypothesis Density (PHD) function at time k
v_k|k-1( )      Predicted intensity (PHD) function at time k
v_k|k( )        Estimated intensity (PHD) function at time k
b_k( )          Intensity (PHD) function representing B_k
c_k(z | .)      Intensity (PHD) function representing C_k
Y_k             General state variable, with which the PHD filter is introduced
d^(c)(M̂, M)     2nd-order Wasserstein error metric for comparing estimated RFS map M̂ with true RFS map M
c               2nd-order Wasserstein error metric cut-off parameter

Nomenclature Introduced in Chapter 5

a_T(t)          Thermal noise present in the transmitted FMCW signal
ψ(t)            Phase noise in the transmitted FMCW signal
a_R(t - τ)      Thermal noise in the received FMCW signal



Δψ(t, τ)        Differential phase noise present in the FMCW beat frequency
r̂               Estimated range to a target
S̄^log_rc(u)     Expected received power after logarithmic compression and range compensation in range bin u
σ^tp            Rayleigh distribution parameter signifying target presence
Rayleigh(σ^tp)  Rayleigh distribution with parameter σ^tp
(S^lin(u))^(i)  ith linearized received power sample in range bin u, sample drawn from a distribution
x̂_k|k-1         Predicted x coordinate of robot at time k (similarly for ŷ_k|k-1 and φ̂_k|k-1)
x̂_k-1|k-1       Estimated x coordinate of robot at time k-1 (similarly for ŷ_k-1|k-1 and φ̂_k-1|k-1)
X̂_k|k-1         Predicted robot pose vector at time k
X̂_k-1|k-1       Estimated robot pose vector at time k-1
x̂_k|k-1^i, ŷ_k|k-1^i      Predicted Cartesian coordinates of the ith landmark at time k
x̂_k-1|k-1^i, ŷ_k-1|k-1^i  Estimated Cartesian coordinates of the ith landmark at time k-1
r̂_k|k-1^i, θ̂_k|k-1^i      Predicted range and bearing values to the ith landmark at time k
Ĝ_k|k-1^RCS,i   Predicted RCS of landmark i at time k
Ĝ_k-1|k-1^RCS,i Estimated RCS of landmark i at time k-1
P²[Ŝ^lin . S^lin]   Pearson's coefficient of determination, to determine the correlation between Ŝ^lin and S^lin

Nomenclature Introduced in Chapter 6

s               Laplace variable
f_S^lin-sum(s)  Characteristic function of a sum of linearized power values
L               Total number of scans integrated
P_D^CA-CFAR     Probability of detection of the CA-CFAR processor, based on a single scan
P_fa^CA-CFAR    Probability of false alarm of the CA-CFAR processor, based on a single scan
P_D^NC          Probability of detection of the non-coherent integrator
P_fa^NC         Probability of false alarm of the non-coherent integrator
Γ( )            Gamma function
M^binary        Threshold number of scans for which a CFAR threshold must be exceeded, to declare a detection
P_D^binary      Probability of detection of the binary detection integrator
P_fa^binary     Probability of false alarm of the binary detection integrator
l               Scan number
I(q, l)         Indicator function, indicating a detection in range bin q and scan l
P̂^TPP(q, l)     Estimated Target Presence Probability (TPP) in range bin q and scan l



α_p             Estimated TPP smoothing parameter
H_0/1^TPP       Landmark absence/presence hypothesis according to the TPP processor
P_D^TPP         Probability of detection of the TPP integrator
P_fa^TPP        Probability of false alarm of the TPP integrator
T^TPP(α_p, l)   TPP threshold
P[P̂^TPP(q, l)]  Probability of estimating TPP value P̂^TPP(q, l) in range bin q and scan l
l_crit          Number of TPP iterations at which the threshold envelopes meet
N_0/1           Number of 0/1 values in a sequence of indicator function values I(q, l)
n(q, l)         Noise signal in range bin q and scan l
Σ_n             Noise variance in an A-Scope during target absence
Σ̂_n             Estimate of the noise variance in an A-Scope during target absence
α_d             Estimated noise variance smoothing parameter
α̂_d(q, l)       Time varying estimated noise variance smoothing parameter
Ŝ^RN_lin(q, l)  Reduced noise, linearized received power value in range bin q, scan l
Ŝ^RN_log(q, l)  Reduced noise, logarithmically compressed received power value in range bin q, scan l
c^over          Over subtraction factor for spectral noise subtraction
δ_floor         Floor parameter for spectral noise subtraction
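As a numerical aside on the binary integration quantities above (standard M-of-L detection theory rather than anything specific to this book's derivations), the integrated probabilities follow from the binomial distribution, with a detection declared when the single-scan threshold is exceeded in at least M^binary of L scans:

```python
from math import comb

def binary_integration(p, L, M):
    """Probability that at least M of L independent scans exceed the
    single-scan threshold, each scan exceeding it with probability p."""
    return sum(comb(L, j) * p**j * (1 - p)**(L - j) for j in range(M, L + 1))

# Illustrative single-scan figures (assumed, not from the book):
# Pd = 0.6, Pfa = 1e-3, with a 2-of-4 detection rule.
pd_binary = binary_integration(0.6, L=4, M=2)     # detection improves
pfa_binary = binary_integration(1e-3, L=4, M=2)   # false alarms collapse
```

The same function gives both integrated probabilities; only the single-scan probability fed to it differs, which is why binary integration can raise P_D^binary while driving P_fa^binary far below the single-scan P_fa.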

Nomenclature Introduced in Chapter 7

p_k|k(M_k | Z^k, X^k)      Posterior estimate of the map density at time k, given all measurements and robot locations up to, and including, time k
p_k|k-1(M_k | Z^k-1, X^k)  Prediction of the map density at time k, given all measurements up to, and including, time k-1 and the current robot location X_k
p(Z_k | M_k, X_k)          General sensor model giving a distribution on Z_k, given M_k, X_k
m_k^i, m_k      Estimate of the probability of a landmark existing in grid cell i, at time k, with predefined spatial coordinates
m̄_k^i, m̄_k      Estimate of the probability of no landmark existing in grid cell i, at time k, with predefined spatial coordinates
m               Number of a-priori assigned grid cells
P(m_k | z^k)    Probability of landmark occupancy of a given cell, given all individual measurements up to, and including, time k
M_z( )          Mass function containing sensor evidence
M_m( )          Mass function containing map evidence
H_0/1^detector  Landmark absence/presence hypothesis according to a chosen detector
S^detector      Power threshold of a chosen detector, above which a landmark detection is assumed
P_D^detector    Probability of detection of the chosen detector



P_fa^detector   Probability of false alarm of the chosen detector
X               Cell occupancy transition matrix
P_oo            Probability of an occupied cell remaining occupied
P_eo            Probability of an empty cell becoming occupied
P_oe            Probability of an occupied cell becoming empty
P_ee            Probability of an empty cell remaining empty
m′              Predicted occupancy variable, obtained from X
f^detection( )  Detection function of received power data
P(z_k^det)      Detection measurement likelihood (ML) at time k
P̂(z_k^det)      Estimate of the detection ML at time k
SNP^h           Actual received SNP from the radar
ψ_k             Joint state containing the occupancy variable m_k and the ML P(z_k^det), at time k
ψ̂_k             Estimate of the joint state containing the occupancy variable m_k and the ML P(z_k^det), at time k
p(ψ_k | (z^power)^k)   Posterior joint density on cell occupancy, and associated ML
ψ_k^(i)         Particle representing p(ψ_k | (z^power)^k)
w_k^(i)         Weight associated with particle ψ_k^(i)
m_o             Number of occupied cells, according to ground truth
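For concreteness, the transition probabilities above act on a cell's occupancy probability as a two-state Markov chain; a generic sketch of one prediction step (an illustration of the definitions, not code from the book):

```python
def predict_occupancy(p_occ, P_oo, P_eo):
    """One Markov prediction step for a grid cell's occupancy probability:
    m' = P_oo * m + P_eo * (1 - m). Rows of the transition matrix sum to
    one, so P_oe = 1 - P_oo and P_ee = 1 - P_eo."""
    return P_oo * p_occ + P_eo * (1.0 - p_occ)

# A cell currently believed occupied, repeatedly predicted forward with
# assumed probabilities P_oo = 0.9, P_eo = 0.1: the belief relaxes toward
# the chain's stationary occupancy probability.
p = 1.0
for _ in range(50):
    p = predict_occupancy(p, P_oo=0.9, P_eo=0.1)
```

With these (arbitrary) values the stationary occupancy is P_eo / (1 - P_oo + P_eo) = 0.5, reflecting how, without new measurements, the prediction step alone steadily discards old occupancy evidence.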

Nomenclature Introduced in Chapter 8

w_k^(j)         Weight of the jth Gaussian component in a GMM representation of v_k|k(m | X^k) at time k
J_k             Number of Gaussians used to represent v_k|k(m | X^k) at time k
μ_k^(j)         Mean value of the jth Gaussian in v_k|k(m | X^k)
P_k^(j)         Covariance matrix of the jth Gaussian in v_k|k(m | X^k)
w_b,k^(j)       Weight of the jth Gaussian component in a GMM representation of the birth (new landmarks) intensity b_k(m | X^k) at time k
J_b,k           Number of Gaussians used to represent b_k(m | X^k) at time k
μ_b,k^(j)       Mean value of the jth Gaussian in b_k(m | X^k)
P_b,k^(j)       Covariance matrix of the jth Gaussian in b_k(m | X^k)
v_G,k^(j)       Intensity (PHD) function corresponding to each Gaussian component of the GMM
q^(i)(z, X_k)   Gaussian spatial sensor likelihood
E_k^(j)         Measurement covariance associated with the jth Gaussian
K_k^(j)         Kalman gain associated with the jth Gaussian component
V               Sensor surveillance region volume
U( )            Uniform distribution



Nomenclature Introduced in Chapter 9

m^chosen        A chosen landmark, according to a given strategy
(X^k)^(i)       ith sample trajectory up to time k
η_k^(i)         Weight of the ith sample trajectory at time k
v_k|k^(i)(m | (X^k)^(i))   ith trajectory conditioned PHD of M_k
ω_k^(i,j)       Weight of the jth Gaussian component of the ith particle trajectory conditioned PHD
ω_b,k^(i,j)     Weight of the jth Gaussian component of the birth feature PHD, conditioned on the ith particle trajectory
μ_k^(i,j)       Mean of the jth Gaussian component of the ith particle trajectory conditioned PHD
P_k|k^(i,j)     Covariance of the jth Gaussian component of the ith particle trajectory conditioned PHD
X̃_k^(i)         ith sample vehicle pose at time k, before resampling
η̃_k^(i)         Weight of the ith sample trajectory at time k, before resampling
m̂_k|k-1^(i)     Predicted number of features in M_k, at time k
m̂_k|k^(i)       Estimated number of features in M_k, at time k

Nomenclature Introduced in Chapter 10

λ_clutter       Exponential clutter parameter
K               Gaussian smoothing kernel
δφ_k-1          Rotation angle, treated as a vehicle input, recorded at time k from a gyroscope
v_k-1^x         Random perturbation in the ASC x direction
v_k-1^y         Random perturbation in the ASC y direction
v_k-1^φ         Random perturbation in the ASC's yaw angle

Chapter 1

Introduction

Since the machinery became available to permit the transmission and reception of electromagnetic waves in the late 1800s, radio detection and ranging (RADAR) sensors have been used in a plethora of applications. These range from aircraft detection and ranging, weather monitoring, missile detection, and police speed trap sensing, to the complex application of interplanetary measurement, such as mapping the surface of Mars from Earth. In the past decades, advances in digital technologies and antenna designs have resulted in more compact radars, capable of relative range and velocity estimation within the limits necessary for car collision warning systems and unmanned air vehicle applications. Such systems are also now economically viable in automation applications such as mining. Correspondingly, over the past decades, robotic vehicle applications that attempt to exploit radar's versatility have emerged. In most robotic vehicle applications, range-measuring devices are crucial for estimating the distance to obstacles and inferring the location of the vehicle itself. Often based on cameras and laser range finders, many autonomous robot applications are limited to short ranges, due to the restricted distances at which these sensors can supply reliable information.

As the title of this book implies, the aim is to present solutions to the autonomous navigation and mapping of robotic vehicles based on the use of radar as the exteroceptive sensor. However, since the beginning of World War II, a plethora of navigation systems containing radar have been extensively used. Therefore the aim of Section 1.1 is to define the navigation and mapping problems of robotic vehicles, and to place them into perspective with current applications that successfully employ radar for navigation and mapping. Section 1.2 then asks why specifically radar should be used in robotic vehicle navigation and mapping.
Past autonomous robotic projects have most often used sensors such as sonar and vision, with a recent trend that favors scanning laser range finders. Therefore, Section 1.2 focuses on comparisons of radar measurements with laser range readings to motivate the application of radar in robotics. Section 1.3 discusses the general trend of radar-based autonomous robotics, giving examples of current applications and projects. Finally, Section 1.4 briefly outlines the structure of the book.

1.1  Isn't Navigation and Mapping with Radar Solved?

Radar has been, and still is, an integral component of very sophisticated navigation and mapping systems. Research and development into its design characteristics, data

1. Note that radar is an acronym, although from here on it shall be used as a word and written in its lowercase form.






interpretation, and integration into military applications were greatly accelerated during World War II. Radar is fundamental to aircraft tracking from the ground, aircraft early warning collision detection systems, ground terrain imaging, and, mainly prior to the installation of the global positioning system (GPS), missile/bomb guidance. Since aircraft successfully navigate, and are tracked, all over the world, and based on past demonstrations of precise missile navigation over large distances, an intuitive thought could be that navigation and mapping with radar is a solved issue. Based on this reasoning, the application of radar in the navigation and mapping of robotic vehicles may be thought to be trivial, and the questions naturally arise as to why:

1. The autonomous navigation and mapping of robotic vehicles is still an unsolved fundamental problem;
2. Relatively few robotic vehicle projects and applications deploy radars;
3. This book should be written.

Since so many "forms" of navigation and mapping have already been achieved, the main aim of this section is to place the robotic vehicle navigation and mapping problems into perspective. It answers the questions of why the navigation and mapping technologies of such systems are not directly transferable to land-based robotic vehicles and why autonomous robot navigation and mapping are still fundamental research areas.

This section therefore briefly highlights the current state of the art in radar navigation and mapping. It makes particular reference to missile guidance with radar, since such devices also attempt driverless navigation to a target. This can include the autonomous interpretation of sensor data for autonomous navigation, which, in general, is the aim of robotic vehicles. In addressing this problem, it will be shown that the direct application of missile guidance technology to robotic vehicle navigation at ground level is far from trivial.
Further comparisons will be made with aircraft navigation systems, which, already after World War II, were capable of combining radar and known map data for the purpose of navigation. Important differences between the use of the terms "navigation" and "mapping" in the radar and robotics communities will be highlighted.

1.1.1  Applying Missile/Aircraft Guidance Technologies to Robotic Vehicles

The term missile guidance refers to the techniques used to guide a missile or bomb successfully to its target. Missiles have their own source of power, such as an engine or a rocket, whereas guided bombs do not, and depend on an initial launch speed and altitude to reach their targets. Two guidance categories for such weapons are go-onto-target (GOT) systems and go-onto-location-in-space (GOLIS) systems [1, 2]. GOT systems are guided either by remote control or by homing methods. Such homing methods usually require external support in the form of an externally located radar broadcaster to continuously "illuminate" the target with radio waves for detection by the missile's radar. Fully autonomous homing methods exist, but due to the small size limitations of radars that form part of a missile, only large targets can be reliably intercepted.

2. In comparison with robotic vehicles that use cameras, laser range finders, and sonars.
3. The detection precision of a radar is approximately inversely proportional to the size of its antenna.




GOLIS systems operate based on preset target information and require navigational guidance. Sometimes referred to as self-contained guidance systems, they come in three categories:

1. Fully autonomous, in which the missile relies on preset target coordinates and inertial guidance only;
2. Dependent on natural sources, in which self-contained inertial sensor information is combined with that of other sensors capable of inferring positional information from celestial, terrestrial, or magnetic sources;
3. Dependent on artificial sources, in which self-contained inertial sensor information is combined with that of other sensors capable of inferring positional information from artificial sources such as GPS.

The general aim of an autonomous robotic vehicle is to reach a target based on preset information. The vehicle can use onboard sensors capable of inferring information about its surroundings, but not, in general, the target, which may simply be a location rather than an object. This aim is in line with that of the GOLIS system just described, and therefore the possibilities of applying the three categories highlighted above to land-based robotic vehicles are now addressed.

1.1.1.1  Autonomous Robotic Navigation Based Only on Inertial Guidance

In the missile guidance literature, navigation based only on inertial guidance is referred to as the "fully autonomous case." A self-contained navigation system, also extensively used in aircraft and missiles, is based on inertial measurement units (IMUs). IMUs contain a triad of linear accelerometers and a triad of rate gyroscopes, capable of measuring linear accelerations with respect to, and angular velocities about, a set of orthogonal axes. IMUs exist in two forms. In the gimbaled version, they are mounted on a gyro-stabilized platform, so that their orthogonal axes should correspond to an Earth-centered navigation frame. In the simpler to design, and often less expensive, strap-down versions, the orthogonal axes are fixed with respect to the vehicle's body frame, meaning that accelerations and angular rates are measured with respect to that frame [3]. In each case, the linear accelerations can be integrated twice with respect to time to give relative displacements, and the rate gyroscope readings integrated once to give angular displacements. In the case of strap-down systems, the integrated displacement information must be transformed to a global (e.g., Earth-centered) navigation frame, using the integrated angular rate information.

The advantage of such inertial navigation systems (INSs) is that they are self-contained and do not communicate with the outside world. They are therefore not prone to any kind of jamming from external influences. The initial versions of the German V-2 missiles during World War II were guided based only on IMU technology and a preset estimate of the time necessary to reach their targets [4]. Modern systems, which adopt solid state ring laser gyros, have been reported to provide accuracies of just meters over intercontinental ranges of 10,000 km, without additional inputs [1], although such reports are also the subject of great controversy [5].
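As an informal illustration of the strap-down case (the planar simplification, Euler integration, and all names here are assumptions for illustration, not the book's formulation), the gyro is integrated once for heading, the body-frame accelerations are rotated into the global frame, and the result is integrated twice for position:

```python
import math

def dead_reckon(accels_body, yaw_rates, dt, pose=(0.0, 0.0, 0.0), vel=(0.0, 0.0)):
    """Planar strap-down dead reckoning: integrate the rate gyro once for
    heading, rotate body-frame accelerations into the global frame, then
    integrate twice for position. A 2D toy version of the 3D IMU equations."""
    x, y, phi = pose
    vx, vy = vel
    for (ax_b, ay_b), w in zip(accels_body, yaw_rates):
        phi += w * dt                       # integrate angular rate -> heading
        c, s = math.cos(phi), math.sin(phi)
        ax = c * ax_b - s * ay_b            # rotate acceleration to global frame
        ay = s * ax_b + c * ay_b
        vx += ax * dt                       # first integration -> velocity
        vy += ay * dt
        x += vx * dt                        # second integration -> position
        y += vy * dt
    return x, y, phi

# Constant forward acceleration, no turning: motion stays along the x axis.
n, dt = 100, 0.01
x, y, phi = dead_reckon([(1.0, 0.0)] * n, [0.0] * n, dt)
```

Note that the heading obtained from the gyro enters the rotation of every acceleration sample, which is exactly how gyro errors leak into the displacement estimates of strap-down systems.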




In comparison, land-based robotic vehicles that have attempted location estimation based on IMUs have had very limited success [6–12]. Such vehicles are shown to drift within seconds, or at best minutes, from their intended paths when using such technologies. The main reason for this is that land-based vehicles undergo very low accelerations in comparison with air vehicles. IMUs measure accelerations, which are corrupted both with noise, due to their internal electronics, and with offsets due to misalignments. In most land-based robotic applications, the actual accelerations are often comparable to, or only slightly larger than, the noise values of the typically low cost IMUs used. Since these need to be integrated (twice), the effects of biases and random walk are greatly magnified in the resulting displacement estimates. Also, in most land-based vehicles, strap-down IMUs are typically used, due to their massively lower cost and size in comparison with their gimbaled (gyro-stabilized) counterparts. The displacement estimates from these devices, however, must be transformed to a global navigation frame, which introduces further errors, since the integrated angular rate information must be used in the transformation equations. Nebot et al. showed that the errors in the resulting displacement estimates increase with the sixth power of time in such unaided systems [11].

To demonstrate the problems of location estimation based only on an IMU, Figure 1.1 shows a car test vehicle with different types of IMU mounted on its roof, as well as a GPS unit. The ability of the different IMUs to estimate the path, relative to an initially given coordinate, was assessed in the trial. The vehicle was manually driven around a flat test loop three times, and its IMU-aided GPS and IMU-only location estimates plotted [12]. The data from the most accurate of the three IMUs was used in this test.
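To see concretely why integration magnifies small errors, note that a constant uncompensated accelerometer bias b double-integrates to a position error of b·t²/2 for a stationary vehicle. A toy sketch (the bias figure is an arbitrary assumption, not a value from the trial described here):

```python
def position_error_from_bias(bias, t):
    """Position error from double-integrating a constant accelerometer
    bias over time t (vehicle assumed stationary): e(t) = 0.5 * bias * t**2."""
    return 0.5 * bias * t ** 2

# A hypothetical 0.01 m/s^2 bias already yields meters of drift within
# a minute, and hundreds of meters within five:
errors = {t: position_error_from_bias(0.01, t) for t in (10, 60, 300)}
```

Because the error grows quadratically even for this idealized constant-bias case, and faster still once gyro-induced frame errors are included, unaided IMU localization at ground vehicle speeds degrades quickly.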
Figure 1.2 shows both the combined GPS and IMU (dashed line, labeled "GPS + IMU") and IMU-only (solid line) path estimates, superimposed on the same graph. Since the vehicle was manually driven, it was known that the trajectory started at approximately the same location (to within a meter) as the end point (0, 0). The drift in the trajectory estimated from the IMU only is evident, in comparison with the more accurate IMU-aided GPS estimate. It should be noted that higher accuracy trajectory estimation may be possible with an IMU for longer periods of time than that shown in Figure 1.2, with advanced calibration, tuning, and fusion methods [11, 12]. However, the figure highlights the problem of the inevitable drift in location estimates based on the use of an IMU at ground vehicle speeds. IMUs have reportedly been used with greater, although still limited, success in airborne robotic applications [13].

1.1.1.2  Autonomous Navigation Dependent on Natural Sources

Due to the limited navigational accuracy possible with an IMU/INS in land-based robotic vehicles, aiding information based on exteroceptive sensors has been investigated. Again, before analyzing robotic systems of this type, it is worth briefly studying the history of the use of exteroceptive sensors in navigation. Indeed, some of the earliest sensors used for aiding navigation were radars, used in aircraft.

4. In this test, the DMARS-i strap-down IMU from Inertial Science, Inc.




[Figure 1.1 photograph: annotations "GPS Receiver" and "Various IMUs".]

Figure 1.1  A zoomed view of Figure 1.2, showing the estimated vehicle's path based on GPS only.

The first radar systems deployed for aiding aircraft navigation were discussed as early as 1947 in Ridenour's Radar System Engineering and Hall's edited collection of articles in Radar Aids to Navigation, as part of the work carried out at the MIT Radiation Laboratory between 1940 and 1945 [14, 15]. In 1943, large-scale use of the Plan Position Indicator (PPI) form of radar display was evident, in which received power information was plotted in polar coordinates with respect to the radar's position.

[Figure 1.2 plot: x(m) axis from 0 to 120, y(m) axis from 0 to 60, with markers "Start," "End (IMU)," and "End (GPS + IMU)."]

Figure 1.2  The estimated vehicle's path based on IMU aided GPS (dashed line) and IMU only (solid line).

5. More commonly referred to as the "Rad Lab."

It was clear that such information could be useful for navigation over land, as human navigators could match the PPI display data to maps of the landscape over which they were flying. Figure 1.3 demonstrates this possibility with a photograph taken from a PPI radar scan, recorded at an altitude of 1,600 ft, and its corresponding sketch map. Trained navigators would be able to search simple sketch maps for matches with the radar PPI display. Upon a successful match, the aircraft's location and yaw angle could be determined. Such manual matching of PPI displays and maps could be made more robust with the introduction of artificial microwave beacons on the ground, which were easily detected by radar, providing high-powered returns of the radio energy at large distances. Such beacons would have been located at known positions.

At that time, radar systems were too large and heavy for installation in many aircraft, and they were complicated to use, with large power requirements. Development in radar design therefore focused on minimum weight and size, and simplicity of use, resulting in the AN/APS-10 system. This radar was able to display PPI information at ranges between 30 and 50 nautical miles, distances considered useful in aircraft traveling at relatively high speeds. Reference [14] states that the primary success of the AN/APS-10 radar system was due first to its low weight (approximately 56 kg) and size, and the fact that it typically took only 1 hour of flight experience to train a navigator to adjust and use the set correctly. The position of the radar, and hence the aircraft, over the surface of the Earth could be updated by manually and visually matching the displayed PPI scan information to known land maps.
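A PPI display is, in essence, a polar-to-Cartesian mapping: each A-scope (received power versus range at one bearing) is painted along a ray from the radar's position. A minimal sketch of such a rasterization (array sizes and the nearest-cell painting rule are illustrative assumptions):

```python
import math

def render_ppi(scan, max_range, img_size):
    """Rasterize a radar scan onto a square Cartesian grid (a crude PPI).
    `scan` maps bearing (radians) -> list of power values per range bin."""
    img = [[0.0] * img_size for _ in range(img_size)]
    c = (img_size - 1) / 2.0                  # radar sits at the image center
    scale = c / max_range                     # meters -> pixels
    for bearing, a_scope in scan.items():
        bin_size = max_range / len(a_scope)
        for q, power in enumerate(a_scope):
            r = (q + 0.5) * bin_size          # range at the bin center
            col = int(round(c + r * scale * math.cos(bearing)))
            row = int(round(c - r * scale * math.sin(bearing)))
            if 0 <= row < img_size and 0 <= col < img_size:
                img[row][col] = max(img[row][col], power)
    return img

# Two bearings, eight range bins each; a strong return at bearing 0.
scan = {0.0: [0, 0, 0, 0, 0, 0, 9, 0], math.pi / 2: [0, 3, 0, 0, 0, 0, 0, 0]}
ppi = render_ppi(scan, max_range=80.0, img_size=21)
```

Strong returns thus appear at the Cartesian location of the reflecting object, which is what allowed navigators to compare the display directly against a map.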
This was the birth of navigation based on map matching, and after World War II the natural progression of this technology would be to automate the matching, and hence the navigation process, by removing the human operator. Automatic map-matching techniques, which compare radar images with information stored in computer memory, have dominated much of the radar navigation literature [16–18]. Such techniques attempt to determine location through a

(a) Photograph of the PPI display of the Potomac River.

(b) Simple map of the Potomac River corresponding to the PPI radar image.

Figure 1.3  Photograph of a PPI radar scan taken from an altitude of 1,600 ft and a corresponding sketch from the map. (Adapted from Ridenour, 1947 [14].)




map-matching or correlation process and are also capable of determining and correcting biases in radar measurements [19, 20]. A technique for automatically matching ship-borne radar images to geo-referenced satellite images is discussed by Torres-Torriti in [20]. Here it is noted that, due to the polar distortions of radar scans, and the possible partial or total occlusion of objects, a robust correspondence technique can be based on the Hausdorff distance (HD) metric. A ship's location, speed, and radar biases, which affect range and bearing measurements, are estimated by formulating the HD metric between the radar data and geo-referenced satellite images. Computations can be simplified by representing satellite images in terms of edges. The HD metric is a criterion for measuring how dissimilar the shapes of two sets of points in space are. The radar image is translated, rotated, and scaled until a best fit, with respect to the HD metric, is determined. Successful matches of imaged coastline data were presented, allowing ship location, speed, and biases to be estimated. Possible estimation errors were also noted when angular sectors of the radar data contained too few useful measurements. Observability issues related to the problems of lacking radar data were addressed by Guesalaga in [21].

A comparison of radar map-matching correlation algorithms is given by Novak in [18]. The aim of the comparisons was to select suitable correlation criteria for accurate position location and computational efficiency. Correlation criteria based on "normalized cross correlation," "edge difference," "edge ratio," and "edge difference normalized" were analyzed. Of particular importance in the evaluations were the ease with which optical image data could be compared with radar image data, whether or not a constant false alarm rate and resulting probability of match detection could be mathematically derived for the correlation method, and its computational efficiency.
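Returning to the Hausdorff distance used in [20]: the directed distance from set A to set B is the largest nearest-neighbour distance from a point of A to B, and the symmetric metric takes the maximum of the two directions. A minimal sketch of the metric alone (the search over translation, rotation, and scale described above is omitted, and the point sets are invented for illustration):

```python
import math

def directed_hausdorff(A, B):
    """max over a in A of the distance from a to its nearest point in B."""
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    """Symmetric Hausdorff distance: how dissimilar two point sets are."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# Identical sets have distance 0; a single displaced point dominates the
# metric, which is why partial occlusion must be handled with care.
coast = [(0, 0), (1, 0), (2, 1)]
radar = [(0, 0), (1, 0), (2, 4)]
d = hausdorff(coast, radar)
```

In a registration loop, the radar point set would be transformed repeatedly and the transform minimizing this distance kept as the best fit.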
Correlation based on the "edge ratio" method was found to be favorable, in which the objective was to determine the least number of radar data values, or cells, that need to be averaged to detect an edge with a low probability of false alarm and a correspondingly high probability of detection. This was analyzed when data sets contained water only, land only, and both water and land areas within the cells under consideration, for the purpose of using coastal images and rivers for navigation.

Other techniques that correlate radar and map data for location estimation are based on template matching [17, 22]. In [17], single look, synthetic aperture radar (SAR) images were correlated with map features. It was proposed that certain types of features within SAR images are brighter (hedges and leading edges of woodland) and darker (roads and rivers) than the data from their surroundings. These features were sought for matching purposes and represented with a binary template. This work showed that standard template-matching techniques often failed with real data and that methods were needed that take proper account of the nature of radar noise for successful template matching and hence location estimation.

In the early 1970s, an airborne radar data acquisition experiment was carried out by the Sandia Laboratories, Albuquerque, NM, to determine the "degree of uniqueness" of wide beam radar images of different ground target types [16, 23]. The aim was to experimentally determine the "memory" against which measured radar data could be compared for location estimation, and to synthesize this memory to represent different terrain types. By synthesizing such a memory, it was hoped that prior operational flights would no longer be necessary and that




successful location estimation would be possible based on map matching with this synthesized memory only. By combining experimental scan results and theoretical radar reflection models, different terrain types were modeled as dense ensembles of scatterers, corresponding to expected received power contour data. If such a computerized memory of received power contour data provided unique enough patterns to allow the robust matching of airborne radar data to take place when flying over diverse land areas, its implications would have been immense. The necessity of having prerecorded data from a site would be removed, and robust navigation could take place when flying into previously unmapped regions. The Sandia Laboratory synthetic memories were proposed for use in a scenario in which satellite images of the ground terrain below a missile's/aircraft's path would be used to trigger components from the memory for map matching and hence location estimation. Such radar-adapted synthetic memories were generated from large areas in the vicinities of Albuquerque, NM, and Tonopah, NV, which included differing radar reflectivity information. The results of then comparing actual radar data with this memory database, however, remained inconclusive regarding the levels of robustness required for reliable location estimation.

In the robotics literature, in 1997, Lu and Milios presented the robot navigation problem as constraints between robot positions and used these to fuel a technique similar to map matching, known in the robotics literature as scan matching [24]. Iterative algorithms were developed that would register a range scan to previously acquired scans to compute relative robot motion in unknown environments. The concept considered the entire robot trajectory as the frame of reference for globally consistent range scan alignment.
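When point correspondences between two scans are assumed known (an illustrative simplification; the method of [24] must also handle unknown correspondences and global consistency), the least-squares rigid alignment of one 2D scan to another has a closed form: center both point sets, recover the rotation angle from the summed cross and dot products, then the translation from the centroids. A sketch:

```python
import math

def align_2d(src, dst):
    """Least-squares rigid alignment of corresponding 2D points:
    returns (theta, tx, ty) so that rotating src by theta and translating
    by (tx, ty) best matches dst."""
    n = len(src)
    cx_s = sum(x for x, _ in src) / n
    cy_s = sum(y for _, y in src) / n
    cx_d = sum(x for x, _ in dst) / n
    cy_d = sum(y for _, y in dst) / n
    # Rotation from summed cross/dot products of the centered points.
    s_cross = s_dot = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs, ys = xs - cx_s, ys - cy_s
        xd, yd = xd - cx_d, yd - cy_d
        s_cross += xs * yd - ys * xd
        s_dot += xs * xd + ys * yd
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, tx, ty

# Recover a known 90-degree rotation plus a (1, 2) translation.
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
dst = [(-y + 1.0, x + 2.0) for x, y in src]
theta, tx, ty = align_2d(src, dst)
```

An iterative scan matcher alternates this closed-form step with re-pairing each point to its current nearest neighbour, which is why a good initial alignment matters so much.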
This procedure showed promising results, being able to correct already existing estimated robot paths based on odometry, by using an iterative scan registration technique. The collected data could also then be corrected accordingly, to produce consistent maps. However, in contrast to navigation in the air, at ground level, vehicles can pass in close proximity to objects, or pass from one room to another, temporarily or indefinitely occluding crucial parts of the environment, which need to be sensed for reliable, continuous scan matching. Hence, in map-matching techniques for ground-based robotic vehicles, only parts of the registered scans can be expected to overlap. For this reason, the scan-matching method of [24] is iterative and depends on an initial approximate alignment of the scans under consideration. Hence, with poor relative robot motion estimates, these registration techniques can become fragile.

1.1.1.3  Autonomous Navigation Dependent on Artificial Sources

A useful artificial source of information is the GPS system, which became fully operational in 1994 [25]. Since the introduction of GPS, a natural choice has been to integrate this technology into robotic vehicle navigation systems. The GPS system provides synchronized location and time information anywhere in the world, in any weather conditions, where there is a direct line of sight to at least 4 of the 24 satellites currently used in the system. While it is an obvious navigational choice for vehicles that maneuver in the air or at sea, where such direct lines of sight usually exist, its use is much more restrictive in robotic vehicle applications. Problems that restrict the use of GPS in autonomous robotic applications are:

1. GPS signals are unavailable indoors and underwater, eliminating its possible use with robotic vehicles in such environments.
2. GPS signals are often fully, or partially, blocked in built, forested, or jungle environments, depending on a vehicle’s location, meaning that continuous signal reception, and hence localization, is interrupted.
3. In built and forested environments, multipath signals are possible, which can corrupt the GPS-based location estimates.
4. When using GPS, the satellites with which the receiver is communicating can change their configurations. Such changes are known to introduce estimated location jumps.
5. GPS does not provide instantaneous direction of motion information, which can only be inferred over time by interpolating sequential GPS location estimates during vehicle motion. For slow-moving (e.g., ground) vehicles this can result in inaccurate estimates of this critical parameter.

Problems 2 to 4 are compounded by the fact that the estimation accuracy of location and direction of travel, in land-based robotic vehicle applications, is usually far more critical than for missiles, aircraft, and ships. A few meters, or even tens of meters, of location error in the air or at sea is usually of little consequence, whereas at ground level, on roads which are only a few meters wide, or in narrow corridors, such errors are intolerable. A great deal of work in land-based robotics has therefore focused on the potential gains in location estimation accuracy of combining GPS and IMU [6, 11, 12]. In much of the literature, the IMU’s short-term displacement estimation ability is used to smooth over any intermittent communications between a vehicle and the GPS satellites. To demonstrate problems 2 to 4, an example that continues the experiment of Figure 1.2 is given.
Figure 1.4 shows a zoomed view of the vehicle’s path estimated by a fused GPS and IMU system, during the same multiloop trajectory of Figure 1.2. During this trajectory, GPS communication, based on at least four satellites, was initially established, but because the experiment was carried out in the vicinity of tall buildings, GPS communications were intermittent. First note the great improvement over the IMU-estimated trajectory of Figure 1.2, shown as the dashed line in Figure 1.4. It should be further noted that the vehicle was manually driven and therefore was known to have completed its multiloop trajectory within a meter of its starting location (0, 0). However, the combined GPS + IMU estimated path shows the final position of the vehicle to be approximately 10 m away from its initial position. The GPS-only path was improved by combining it with that of the IMU, which provided relative displacement information, albeit drifting, during GPS outages. In the path section between labels A and B, a GPS outage occurred, so that the estimated path became dependent on the IMU only. This section of the path is highlighted as a thicker line in Figure 1.4. At point C, GPS data became available again, initializing a new location estimate at C. A resulting jump (labeled) from B to C of approximately 6 m is apparent in the data, at the moment GPS reception resumed.


Figure 1.4  A zoomed view of Figure 1.2, showing the vehicle’s path estimated by the combined GPS + IMU system. The GPS outage section between points A and B, the re-initialized estimate at C, and the resulting jump are labeled; axes are x (m) and y (m).

In the case of Figure 1.4, the labeled jump could be “smoothed” over; however, if the resumed GPS location estimate at point C is to be believed, this would require the trajectory estimate before point B to be corrected. It is also possible to observe such “jumps” in the location estimates during continuous GPS reception. Such jumps can be the result of changes in the satellite configuration or in the number of available satellites used to estimate a vehicle’s position. The problem with attempting to “smooth over” any such jumps in this case is that it is not immediately clear whether the GPS location estimate before or after the jump is more correct.

To further demonstrate the problem of using combined IMU and GPS position estimation systems, even in areas with no buildings, an integrated GPS-aided IMU system was mounted on a car and tested in terms of its ability to estimate the vehicle’s location, when driven around a known trajectory, in a jungle-type environment. Figure 1.5(a) shows the known ground truth path, over which the vehicle was driven. This corresponds to the dirt track in the image. Figure 1.5(b) shows the path estimated by the system, when two GPS outages occur. These “outages” can result from signal blocking by trees or from satellite configuration changes as mentioned earlier. When GPS outages occur at points A and C, the aiding of the IMU is evident, as it attempts to provide continuing path estimates beyond these points based on the measured vehicle accelerations and roll, pitch, and yaw angular rates. As expected, these location estimates drift with time and distance traveled, and the scale of the accumulated errors is evident when the GPS signals return at points B and D, placing the estimated vehicle trajectory back on course.
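The behavior described above, in which IMU dead reckoning bridges GPS outages while the location uncertainty grows until a fix returns, can be sketched with a simple one-dimensional Kalman filter. This is an illustrative sketch only, not the filter used in these experiments; the function name, noise parameters, and measurement model are assumptions:

```python
import numpy as np

def fuse_gps_imu(accels, gps, dt=0.1, q=0.5, r=2.0):
    """1-D position/velocity Kalman filter: IMU acceleration drives the
    prediction; a GPS position fix (or None during an outage) corrects it."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
    B = np.array([0.5 * dt**2, dt])         # acceleration input mapping
    H = np.array([[1.0, 0.0]])              # GPS observes position only
    Q = q * np.outer(B, B)                  # process noise from IMU error
    x = np.zeros(2)                         # state: [position, velocity]
    P = np.eye(2)                           # state covariance
    out = []
    for a, z in zip(accels, gps):
        # Predict with the IMU acceleration reading (dead reckoning).
        x = F @ x + B * a
        P = F @ P @ F.T + Q
        if z is not None:                   # GPS fix available: correct
            S = H @ P @ H.T + r             # innovation covariance
            K = (P @ H.T) / S               # Kalman gain
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
        out.append((x[0], P[0, 0]))         # estimate and position variance
    return out
```

During the `None` (outage) steps only the IMU-driven prediction runs, so the position variance grows, mimicking the drift seen between the labeled outage points; the first GPS fix after the outage pulls the estimate back, producing exactly the kind of jump labeled in Figure 1.4.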


(a) True vehicle path. (b) Estimated IMU-GPS path.

Figure 1.5  Satellite images with the superimposed true vehicle path and the path estimated by a combined IMU-GPS system (scale: 0–1.0 km; the start/end location and outage points A–D are labeled).

It should again be noted that these experimental results do not necessarily represent the state of the art in achievable GPS-aided IMU navigation at ground level, but are intended to demonstrate typical navigational errors which can occur and how critical these are at ground level.

1.1.2  Placing Autonomous Navigation of Robotic Vehicles into Perspective

In view of this analysis, a number of key issues differentiate the problems of the robotic navigation of vehicles, and its related technology of robotic mapping, from those of missile and aircraft guidance. These can be summarized as follows:

1. The scale of the location accuracy required is significantly different. When planning robotic tasks, which require maneuvers in corridors or on roads, vehicle location estimation errors of less than a meter can be catastrophic.
2. GPS does not, in general, solve the problem, since satellite outages and configuration changes cause interruptions in the location estimates, as well as unacceptable estimated location errors.
3. The use of IMUs of a size and cost reasonable enough to fit in most ground vehicles can only aid, but not solve, the problem. In contrast to missiles and aircraft, the accelerations and angular velocities executed by most robotic vehicles are comparable to, or sometimes even below, the noise levels of many IMUs.
4. In contrast to most autonomous missile and aircraft tasks, at the scale at which many robotic vehicle tasks are to take place, limited, or often no, a priori map information of the environment is available.
5. In a similar vein to radar-based map matching, robotic scan-matching methods have been developed [24]. However, such methods are iterative and require an initial approximate alignment of the scans for success, and their performance suffers when significant occlusion occurs.
6. In indoor applications, where plan maps of buildings may be available, many symmetries in the prior maps (plans) typically exist. Attempting to match sensor data to such maps can therefore result in multiple potential location estimates.


7. As a consequence of differences 4 and 6, robotics engineers have made their own definitions of “robotic maps” as mathematical descriptions of environmental objects to which relative vehicle-to-object sensor measurements can be related.

As a result of these differences, it seems that the research paths of robotics engineers have somewhat diverged from those of navigation, tracking, and radar engineers. In light of these discussions, some of these diversions seem to have been necessary, given the different problems at hand. At the same time, however, the research endeavors of the authors of this book have benefited greatly from the plethora of past experience and literature in the general application of radar to navigation and mapping. We would suggest that the integration of radar into the fundamental navigation and mapping problems of robotics can only be possible with direct recourse to this literature. In addition, although only acknowledged to a limited extent in the robotics community, we would also argue that prior navigation work on radar is of direct relevance to robotic navigation and mapping concepts which do not employ radar. Automatic object detection and feature extraction algorithms applied to camera images, laser range finders, and sonar are prone to misinterpreting even the existence of true objects, let alone their precise locations. This has been acknowledged for decades in the radar detection and navigation literature, with principled methods to minimize uncertainty in these combined areas. The book therefore continues by further motivating the use of radar at the relatively small spatial scales typical of robotic vehicle applications, and throughout presents concepts based on research in robotics, radar, tracking, and general navigation.

1.2  Why Radar in Robotics? Motivation

Autonomous robotic navigation is widely researched in a diverse range of environments, be it indoors [26], a general outdoor scene [27], in mines [28], underwater [29], or even in planetary domains [30]. A given environment can contain a vast array of environmental landmarks and may require the deployment of various active and passive sensing methods that respond to different frequencies within the electromagnetic spectrum. In order to obtain a truly autonomous system, absolute information from the surrounding environment needs to be incorporated into estimation theoretic localization and mapping algorithms. Such information, however, is subject to numerous sources of uncertainty.

To introduce the type of information produced by imaging radars, the results of an experiment will be used. Figure 1.6 shows a vehicle carrying a 360° scanning radar capable of range measurements of up to 200 m. The vehicle also has two 180° scanning laser range finders (labeled) and a forward-looking camera (not visible in the photo). Scanning laser range finders of this type are very common in recent autonomous robotic systems, and are considered by many robotics engineers

1. The vehicle is called the “RobuCAR” from Robosoft (http://www.robosoft.com).
2. The radar is from Navtech Radar Ltd., and is a 77-GHz frequency modulated continuous wave (FMCW) device.
3. The laser range finders are the Sick LMS 200 devices, capable of reported range measurements of up to 80 m with a distance resolution of 10 mm.


Figure 1.6  A vehicle carrying 2 laser range finders and a 360° scanning MMW radar (both labeled).

as one of the most reliable exteroceptive sensors for reliable robot navigation. This robotic vehicle and sensor configuration is the source of many of the experimental results throughout this book.

The robotic vehicle was remote-controlled to follow a 5-km-long trajectory around a campus environment. Figures 1.7 and 1.8 show two sets of measurements of a typical roadside scene from the camera, laser, and radar, which operate at three different electromagnetic frequencies. Note the differing environmental representations and measurements reported by each sensor. The passive camera sensor interprets

Figure 1.7  Example of an outdoor scene as captured from a monocular camera, laser, and radar sensor (grass and tree reflections are labeled). White dots (left scene) and black dots (right scene) depict the laser range measurements, while in the right scene the radar data is represented on a shaded scale.


Figure 1.8  A section of the outdoor scene captured by the monocular camera, laser range finders, and radar, showing a missed landmark detection by the laser range finder.

the scene through optical waves impinging on photo-diodes, whose raw measurements are difficult to statistically quantify in terms of landmark/non-landmark detections. The raw laser measurements comprise the relative range and bearing to hypothesized landmarks, whereas the imaging radar’s raw measurements consist of reflected signal intensities at discrete relative range and bearing intervals.

Raw laser measurements are projected onto the camera image plane and plotted as white dots in the left-hand images of Figures 1.7 and 1.8. In these vehicle-centric image planes, it is evident that many close landmarks exist and intersect the horizontal scanning planes of both the radar and laser range sensors. The close range tree, lamp post, and fire hydrant landmarks intersect these scanning planes as small, almost circular regions, whereas the more distant grass embankments and buildings intersect these planes as less defined and typically larger regions. The right-hand images of Figures 1.7 and 1.8 can be considered to be similar to a classical radar sector plan position indicator (PPI) display, with data, plotted as shaded values, proportional to the received power from different parts of the environment. The vehicle center reference point is at the center of the lower edge of the display. The lighter areas correspond to higher power returns, located at the range bearing values from which they were sensed. The laser range measurements are also superimposed onto the radar measurement plane as black dots in the right figures.

It is immediately apparent in the right-hand images that, although some regions of high radar reflectivity correspond to the black laser range points, laser and radar data are significantly different. The laser appears to provide highly accurate range-bearing point values when it correctly detects an object.
On the other hand, the radar data emanating from the landmarks appears to be reasonably precise for the landmarks at close range but becomes “smeared” in both range and angular space for the more distant landmarks. Also, at larger ranges, it can be seen that the radar produces relatively high power returns from the grass embankment in the scene. Initially then, in terms of detecting the roadside landmarks, the laser range finder may appear to be the more


useful sensor. Figure 1.8, however, highlights one serious and common source of sensor uncertainty: a missed detection by the laser sensor.

Due to the inherent randomness in any process, all of the raw sensor measurements are subject to measurement noise, be it in range and/or bearing values or in the reflected intensities for the radar and camera sensors. The reported measurements may also be subject to the less widely considered internal and unknown sensor biases that can corrupt the accuracy of the raw measurements [31]. Over successive frames, uncertainty may also be introduced as to which landmark generated which measurement. This is due to sensor noise and/or platform location uncertainty and is commonly referred to as data or feature association uncertainty [32–34]. Note that the raw measurements reported by the laser are in fact the output reported from an internal landmark detection algorithm, hypothesizing the presence or absence of environmental objects in the noisy reflected laser beam. Due to the large dimensionality of the raw measurements from the camera and radar sensors, which include received power values at multiple ranges, and the resulting tractability problems in processing them, landmark detection [35, 36] and/or feature extraction methods [37–39] are typically adopted to condense the data for use in autonomous navigation. Consequently, the total uncertainty associated with the measurement is expanded to include detection (or feature extraction) uncertainty as well as spurious measurements. Collectively, in this book, all these sources of “randomness” are referred to as measurement uncertainty.

To continue the experimental analysis and motivation for radar, further experimental measurement comparisons based on the vehicle and sensor suite of Figure 1.6 are now made. Laser range and radar measurement data was recorded from the location marked “X” in the parking lot environment of Figure 1.9.
Various objects are labeled within the photograph. The results of single 360° radar and laser range scans of the environment are superimposed onto the sector PPI type representation in Figure 1.10. Again, received power values from the radar are represented on a shaded scale, with lighter shades indicating higher power returns. Laser range values are represented with black dots. Note that at the time the scans were recorded, there was a single car in the lot (labeled), which was not present when the photograph was taken.

From Figure 1.10, it can be seen that the radar receives high power reflections from the two labeled lamp posts, which are not detected by the laser range finder. The radar also receives high powered signals from the low lying bush (labeled) in front of the four large coconut trees (in the top right of Figure 1.9), as well as from all four of the trees themselves. Due to the occluding car and bush (labeled in Figure 1.10), the laser sensor fails to detect two out of the four trees (labeled as “occluded trees”), as well as completely missing the low lying bush. Hence, despite the fact that radar tends to spatially blur the information available in the scene, it is less susceptible to occlusions, penetrating further into an environment than optical waves, and has the potential to provide more complete information in terms of fewer missed detections.

This robotic vehicle and sensor suite, together with the parking lot environment of Figure 1.9, will be used for further experiments in radar modeling, improvement of radar detection probabilities, and grid-based mapping with radar in Chapters 5, 6, and 7, respectively.
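As a toy illustration of the detection stage mentioned above, which condenses raw received power measurements into landmark detections (the subject of later chapters, not the specific algorithms used there), a cell-averaging CFAR detector over a single radar range profile can be sketched as follows; all names and parameter values are assumptions:

```python
import numpy as np

def ca_cfar(power, num_train=8, num_guard=2, pfa=1e-3):
    """Cell-averaging CFAR over a 1-D range profile of received power.

    The noise level at each cell under test is estimated from the
    surrounding training cells (guard cells excluded), and the threshold
    multiplier alpha is set for the desired false-alarm probability,
    assuming exponentially distributed noise power.
    """
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    num_cells = 2 * num_train
    alpha = num_cells * (pfa ** (-1.0 / num_cells) - 1.0)
    for i in range(num_train + num_guard, n - num_train - num_guard):
        lead = power[i - num_guard - num_train : i - num_guard]
        lag = power[i + num_guard + 1 : i + num_guard + 1 + num_train]
        noise = (lead.sum() + lag.sum()) / num_cells   # local noise estimate
        detections[i] = power[i] > alpha * noise       # adaptive threshold
    return detections
```

Because the threshold adapts to the locally estimated noise, a strong return such as a lamp post stands out even when the surrounding clutter (e.g., grass) raises the background power, which is the essence of maintaining a constant false-alarm rate.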

Figure 1.9  A photographic overview of the parking lot testing ground within a university campus (the entrance passage, short wall, trees, and low lying bush are labeled). The approximate vehicle path is shown, along with the static location “X” at which single radar and laser range finder scans were recorded.

Figure 1.10  Superimposed PPI type scans from a radar and laser corresponding to the environment and vehicle location shown in Figure 1.9 (the lamp posts missed by the laser, a single car, and the low lying bush are labeled). Note that during the scan, a single car was present at the position shown, which was not present when the photograph of Figure 1.9 was taken.


A final experiment is shown here to compare the mapping, as opposed to single scan, capabilities of radar with laser range finders. This experiment involved the acquisition of several successive radar and, for comparison purposes, laser range scans recorded along the white trajectory shown superimposed onto a satellite image of a campus environment in Figure 1.11. The exact details of the mapping techniques used in this experiment will be the subject of Chapter 7, and here Figures 1.11, 1.12, and 1.13 are “borrowed” from that chapter to motivate the use of radar. Due to the difficulty of quantifying ground truth (i.e., what the actual map should be), the satellite image is provided, at this stage, to give qualitative evidence of the true landmarks that could be mapped by the sensors.

Figure 1.12 shows an occupancy grid map generated based on all of the radar data collected during the trajectory. As its name implies, this is simply a grid in which each cell is assigned an occupancy probability, based on the collected and processed sensor data. Lighter cells are deemed to have a low probability of occupancy, and darker cells have a high probability of occupancy. The gray regions at the edges were not captured within the field of view of the radar and hence remain at their initialized probabilities of occupancy of 0.5 (unknown). Comparison with the satellite image of Figure 1.11 shows that the concrete walls of the moat as well as the front of the Heritage building (both labeled) are represented on the grid map with high probabilities of occupancy. The map also shows the ability of the radar to penetrate deeply into the foliage.
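The occupancy grid idea described above can be sketched with a minimal log-odds implementation. This is an illustrative sketch, not the detection likelihood method of Chapter 7; the class name and the inverse sensor probability passed to `update` are assumed placeholders:

```python
import numpy as np

def logodds(p):
    """Convert a probability to its log-odds representation."""
    return np.log(p / (1.0 - p))

class OccupancyGrid:
    """Minimal log-odds occupancy grid: every cell starts at p = 0.5
    (unknown, log-odds 0) and is updated with an inverse sensor model."""

    def __init__(self, rows, cols):
        self.L = np.zeros((rows, cols))     # log-odds; 0 <=> p = 0.5

    def update(self, cell, p_meas):
        """Bayes update of one cell: add the log-odds of the measured
        occupancy probability reported by the (assumed) sensor model."""
        r, c = cell
        self.L[r, c] += logodds(p_meas)

    def probability(self):
        """Recover the per-cell occupancy probabilities."""
        return 1.0 / (1.0 + np.exp(-self.L))
```

Working in log-odds makes each scan’s contribution a simple addition, so repeated high-power radar returns from the same cell steadily push its probability toward 1, while unobserved cells remain at 0.5, exactly as in the gray border regions of Figure 1.12.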

Figure 1.11  A campus environment with the vehicle’s path (white) superimposed (the foliage, moat, and Heritage building are labeled).

Figure 1.12  Occupancy grid mapping with radar (both axes in meters).

Figure 1.13 shows the occupancy grid map constructed using the same techniques as in Figure 1.12, this time using only the acquired laser range data. It can be seen in the figure that, although the sections of the map with high probability of occupancy appear less blurred than those of the radar, a great deal of map occupancy information is lost in comparison. In particular, the front wall of the Heritage building is incomplete, due to occlusions from the moat and bushes in front of the building, a problem that affects the radar map to a much lesser extent. Further, in the lower left parts of the image in Figure 1.11, buildings can be seen with foliage occluding them from the vehicle’s trajectory. In the radar-based map of Figure 1.12, these buildings are clearly visible, but they are missed in the laser-based map of Figure 1.13.

Although based largely on qualitative analyses, this section has shown the ability of radar to outperform popular laser range finders. This motivates further investigation into the use of radar in robotics and is one of the driving forces behind the scope of this book, which is to provide quantified performance analyses for radar-based navigation in robotics based on principled scientific techniques.

Figure 1.13  Occupancy grid mapping with laser (both axes in meters).

1.3  The Direction of Radar-based Robotics Research

Since the early 1990s, radars with good short-range capabilities, focused pencil-beam antenna patterns, and relatively small radar antenna apertures have been used as range-measuring devices in various applications. Indeed, in the current ground-based autonomous robotics community worldwide, radar has been the sensing solution of choice in applications such as the mapping of mines [40], automotive driving aids [41, 42], autonomous straddle carriers in container ports [43, 44], and autonomous field robotics in general [45–48]. This recent interest is largely due to the advantages radars offer over other range-measuring sensors, as their performance is less affected by dust, smoke, fog, rain, or snow, and obviously by any ambient lighting conditions.

1.3.1  Mining Applications

Examples of such hostile sensing conditions occur on a daily basis in the mining industry. For safety purposes, radar can play an important role in this application area. Recently, several accidents have occurred in mines. In August 2010, 33 miners


(a) Sensor suite.

(b) Harsh operating environmental conditions.

Figure 1.14  The ACFR sensor suite, including radar, laser range, and vision-based sensors, and the extreme dust conditions within which reliable sensing must take place. (Courtesy of the Australian Centre for Field Robotics (ACFR), The University of Sydney.)

were trapped underground during the Copiapó mining accident in Chile, and a successful rescue was achieved in October 2010. Also, damage to buildings has occurred on a regular basis due to nearby mining activities that took place decades, or even centuries, before such buildings were constructed. Such mines, often long abandoned, can cause mining subsidence damage. The mapping of underground mines is therefore of enormous importance to society [49], as a lack of accurate maps of inactive mines poses a serious threat to public safety. According to a recent article, “tens of thousands, perhaps even hundreds of thousands, of abandoned mines exist

(a) The Acumine 360° scanning radar.

(b) The AMTC vehicle.

Figure 1.15  Photographs of the 94-GHz Acumine millimeter wave radar and an experimental, autonomous vehicle from the Advanced Mining Technology Center (AMTC), University of Chile, used for some of the experiments. (Vehicle picture courtesy of AMTC, University of Chile.)


today worldwide” [50]. Current technologies for creating such maps rely on manually obtained triangulation data and dated maps, which must be referenced to GPS data recorded above ground. With its relatively good atmospheric penetrating ability, short-range radar has the potential to provide such maps.

Also in the mining industrial application example, radar can form a crucial component of the typically stringent production flow requirements. For planning, modeling, and conducting tasks in underground mines, accurate data from such environments are needed. Obtaining data becomes more difficult in areas where humans have restricted access. Dangerous working conditions and difficult access suggest the use of autonomous mobile robots to acquire maps of such environments [28]. For example, autonomously acquiring the volume of the material already removed from a mine is of critical importance when assessing the profitability of remining a previously excavated mine. Therefore, accurate volumetric maps are of great commercial interest. Autonomous sensing solutions for determining the surface profile of regions in underground and open-pit mines are also a research issue, required for the safe approach of drilling machines.

For these reasons, many of the world’s robotics research groups which use radar are financed by mining industries. Two such centers are the Australian Centre for Field Robotics (ACFR), University of Sydney, and the Advanced Mining Technology Center (AMTC), University of Chile. The ACFR has a long history of working with and designing FMCW W-band radar for use in mobile robotics. Recent work has focused on the fusion of radar data with laser and vision-based sensors, as shown in the sensor configuration of Figure 1.14(a) [51].
Multiple sensor data acquisition in challenging environments, such as those of extreme dust shown in Figure 1.14(b), has been carried out for the purposes of generating synchronized sensor data sets [52] and the modeling of perceptual failure modes in autonomous operations [53]. Also at the ACFR, the seminal simultaneous localization and map building (SLAM) work of [54] used a W-band radar sensor for feature-based SLAM experimental analysis, while reflectivity patterns from leisure craft were examined in [55].

In a similar vein, the AMTC is also exploring the application of short-range radar in both underground and open-pit mine scenarios. Figure 1.15(a) shows the Acumine scanning FMCW radar, which is currently being applied to surface profiling in open-pit mines, underground mine mapping, and outdoor autonomous robot navigation. Within the Automation and Robotics Group, radar modeling, sensor fusion, and automatic process control are under investigation for mining safety and efficiency improvement activities.

1.3.2  Intelligent Transportation System Applications

Many automotive radar systems are currently under development. Such radars necessarily have small antenna apertures, allowing them to be fitted easily into cars. Examples of three such systems are shown in Figure 1.16. Radars of this type must be designed for minimum power requirements, a useful range capability, and minimum mutual interference when in the presence of other cars with similar equipment [56]. Various devices are equipped with electronic scanning systems, such as the FMCW


(a) Toyota radar. (b) Denso radar. (c) Bosch radar.

Figure 1.16  Photographs of various radar systems for use in automatic driving aids (the receiving and transmitting antennas are labeled; scale bar: 100 mm).

(a) An automotive long-range radar. (Courtesy of C. Lundquist, Linköping University.)

(b) Production radar mounted into a car. (Courtesy of Volvo Car Corporation.)

Figure 1.17  Radar devices used for road mapping at Linköping University, Sweden.


(a) Camera view 1.

(b) Intensity map 1.

(c) Camera view 2.

(d) Intensity map 2.

Figure 1.18  The image in (a) shows the driver’s view of the intensity map in (b), and the image in (c) shows the driver’s view of the intensity map in (d). Darker areas in the intensity map represent a higher concentration of mapped objects. (Courtesy of C. Lundquist, Linköping University.)

Bosch radar (Figure 1.16(c)), which relies on beam conversion, and the Denso unit (Figure 1.16(b)), which uses phased array methods. The physical dimensions of each radar in Figure 1.16 are in the vicinity of 10 cm by 10 cm, with a depth of only approximately 7 cm, allowing them to be fitted with ease in the vicinity of most car front grills. At the same time, they are capable of reported detection ranges of between 2 m and 120 to 150 m.

Such systems are currently envisaged to provide reliable vehicle detection for a driver in the presence of fog or other undesirable atmospheric conditions and are


envisaged for use in autonomous platooning and highway lane following. Solutions for adaptive cruise control (ACC) based on automotive radar were introduced by the Jaguar and Mercedes car companies in 1999, in which the throttle and brakes were automatically operated to maintain driver headway. Commercially available driver warning systems for trucks have been produced by Eaton Vorad, with systems capable of detecting and tracking many vehicles up to hundreds of meters away within their field of view. Technical specifications and a review are given in the DARPA Grand Challenge team’s technical paper “Arctic Tortoise” [57].

At Linköping University, Sweden, in collaboration with the Volvo Car Corporation, the problems of joint clustering and estimation are being investigated for the purpose of road-based mapping of stationary objects [58]. As can be seen in Figure 1.17, this is again based on a compact automotive radar sensor, mounted discreetly in the front grill section of a car. This work demonstrates the potential usefulness of a compact radar unit providing long-range obstacle map information to a driver, which in adverse weather conditions could be invaluable. Figure 1.18 shows initial radar-based estimates of the current and future surroundings of the car, together with the driver’s view, on both urban and country highways. In line with concepts introduced later in this book, mapping is based on the recently introduced concepts of finite set statistics (FISST) [59] and adopts the probability hypothesis density (PHD) filter [60]. This concept allows the detection and spatial estimation errors of objects reported by the radar to be minimized in the resulting map.

1.3.3  Land-Based SLAM Applications

At the Pascal Institute, Blaise Pascal University, Clermont-Ferrand, France, the SLAM problem has been addressed, based on FMCW radar as the sole exteroceptive sensor. The device has a K-band, 24-GHz transmitter and is referred to as the K2Pi panoramic radar, shown in Figure 1.19. In this work, ego motion based on the Fourier-Mellin transform method of matching consecutive scans is introduced [61, 62]. Impressive SLAM results, as shown in Figure 1.20, have been achieved without the

Figure 1.19  The K2Pi MMW FMCW scanning radar. (Courtesy of IRSTEA, TSCF Research Unit, Aubiere, France.)

1.3  The Direction of Radar-based Robotics Research

(a) A plan view of an environment for SLAM experiments. (b) SLAM path and map estimates (axes: Distance X (m) and Distance Y (m), 0–450 m).

Figure 1.20  SLAM results using the K2Pi panoramic radar. The dotted line represents the estimated vehicle path, and the mapped objects are represented as black dots. (Courtesy of the Pascal Institute, Blaise Pascal University, Clermont-Ferrand, France.)

need for explicit data association techniques, due to the scan-matching methods used. Currently, static environments are assumed in the work, but the K2Pi radar has an FMCW Doppler measurement capability, giving the potential for SLAM in dynamic environments in the future. Further work has focussed on the reduction of radar distortion due to vehicle motion during the scanning process [63]. At Nanyang Technological University, Singapore, the authors of this book started their work with a land-based 77-GHz FMCW radar from Navtech Radar Ltd. This device was introduced in Figure 1.6, and most of the land-based analyses and results in this book are based on this radar, with initial modeling, mapping, and SLAM investigations appearing in [64–66].

1.3.4  Coastal Marine Applications

At the University of KwaZulu-Natal, marine radar is used as part of a project on autonomous sea craft development for deep sea rescue operations [67]. A test vehicle, based on a rigid, inflatable pleasure craft, shown in Figure 1.21, uses a commercially available radar for obstacle detection. The work focusses on modeling marine radars and exploiting the multiple line-of-sight imaging capabilities of radar for obstacle detection. Under the Singapore–MIT Alliance, Center for Environmental Sensing and Monitoring (CENSAM) project, the radar modeling, mapping, and SLAM concepts of the authors were extended to applications in a marine environment [68]. Figure 1.22 shows an autonomous surface craft (ASC), based on a standard kayak with side stabilizers, carrying a commercially available marine radar used for coastal SLAM experiments. The aim of the CENSAM project is to automate the monitoring


Figure 1.21  Autonomous marine craft navigation, based on a rigid, inflatable pleasure craft and a commercially available radar. (Courtesy of Chiemela Onunka, Mechatronics and Robotics Research Group, University of KwaZulu-Natal, Durban, South Africa.)

Figure 1.22  Photograph of an ASC used for SLAM experiments in a cluttered coastal marine environment.


of coastal environments. To this end, autonomous surface and underwater vehicles are necessary for the deployment of environmental monitoring sensors at strategic coastal locations. This requires vehicles to be able to estimate their own location and the state of their surroundings at all times, for which autonomous navigation and mapping are crucial components. Chapter 10 will present the results of processing marine radar data for SLAM at sea.

1.4  Structure of the Book

The book is divided into four parts. In Part I, Fundamentals of Radar and Robotic Navigation, the foundations are laid for understanding radar measurements and their related uncertainty and extracting useful mapping information from them. Chapter 2 provides the tools to understand the principles of operation of the radars commonly used for the relatively short distance applications in robotics and their sources of error. Chapter 3 provides the necessary background knowledge of detection theory, with principled methods for extracting landmark information from radar data, with quantifiable probabilities of detection and false alarm. Chapter 4 gives an overview of state-of-the-art vector-based mapping and SLAM methods used in autonomous robotics research and highlights reasons why many of these methods are inappropriate for radar. The chapter then focusses on recent developments from the field of stochastic geometry, which provide a set-based framework suitable for radar-based mapping and SLAM.

Part II, Radar Modeling and Scan Integration, presents recent work that focuses on modeling FMCW radar and increasing the probability of detection of targets based on scan integration methods. In particular, Chapter 5 focuses on the type of radar used in most robotic applications in the world—the FMCW device. The principle of operation of these radars, in the presence of experimentally determined noise sources, will be examined for the purposes of both simulating their data and providing predicted measurements within stochastic SLAM frameworks. Chapter 6 analyzes the possibility of using multiple scans of static scenes to improve the probability of detection of landmarks while maintaining a low probability of false alarm. A computationally efficient recursive method will be introduced, which not only increases the probability of detection of landmarks, but can also be exploited further to reduce noise in radar images.
Part III, Robotic Mapping with Known Vehicle Location, addresses the problem of autonomous robotic mapping with radar. The two most common classical metric maps, namely grid- and feature-based, are implemented and extended to make them viable for robust mapping with radar in the presence of realistic measurement uncertainty. Chapter 7 introduces the concept that the variable to be estimated in occupancy grid mapping is not related to the spatial measurements, which are typically used in classical approaches. A reformulation of the problem, which estimates the true measurement likelihoods of a sensor, is introduced and applied to grid-based mapping experiments in urban environments. Chapter 8 then applies the set formulation introduced in Chapter 4 to the feature-based map representation problem. Implementation details of the PHD filter will be explained in which, contrary to classical robotic methods, the probabilities of detection, based on methods from Chapter 3, will be used to account for the joint nature of map object existence and spatial uncertainties.


Finally, Part IV, Simultaneous Localization and Mapping, directly applies the radar and robotics analyses of Parts I, II, and III to the full SLAM problem. Chapter 9 uses FMCW radar for land-based SLAM experiments. Comparisons of the random set-based methods, deemed more appropriate to radar, with state-of-the-art, vector-based EKF and FastSLAM are given. Chapter 10 then implements EKF, FastSLAM, and a version of the set-based SLAM approach, termed Rao-Blackwellized PHD SLAM, in a marine environment, with time-of-flight X-band radar. It is demonstrated that for classical, vector-based methods to function at all in such environments, where many spurious returns from the sea surface are possible, stringent radar data noise reduction is necessary. This results in large information loss from the marine radar data. In contrast, the recently devised set-based methods for SLAM do not require such noise reduction methods, allowing successful SLAM to take place with all feature detections in the presence of high clutter.

Each chapter is accompanied by its own introduction and summary, and culminates with a bibliographical remarks section to allow references to related work to be read separately from the concepts presented.

References

[1] Zarchan, P., Tactical and Strategic Missile Guidance, AIAA Tactical Missile Series—Progress in Astronautics and Aeronautics, Vol. 157, American Institute of Aeronautics and Astronautics, Washington, DC, 1994.
[2] Tsipis, K., "Cruise Missiles," Scientific American, June 1977, pp. 20–29.
[3] Titterton, D., and J. Weston, Strapdown Inertial Navigation Technology, Radar, Sonar and Navigation Series 17, London, UK, and Reston, VA: Institution of Engineering and Technology and American Institute of Aeronautics, 2004.
[4] Dungan, T. D., V-2: A Combat History of the First Ballistic Missile, Cambridge, MA: Westholme Publishing, 2005.
[5] Mackenzie, D., Inventing Accuracy: A Historical Sociology of Nuclear Missile Guidance, Cambridge, MA: MIT Press, 1993.
[6] Sukkarieh, S., E. M. Nebot, and H. F. Durrant-Whyte, "A High Integrity IMU/GPS Navigation Loop for Autonomous Land Vehicle Applications," IEEE Transactions on Robotics and Automation, Vol. 15, June 1999, pp. 572–578.
[7] Barshan, B., and H. F. Durrant-Whyte, "Inertial Navigation Systems for Mobile Robots," IEEE Transactions on Robotics and Automation, Vol. 11, June 1995, pp. 328–342.
[8] Dissanayake, G., S. Sukkarieh, E. M. Nebot, and H. F. Durrant-Whyte, "The Aiding of a Low-Cost Strapdown Inertial Measurement Unit Using Vehicle Model Constraints for Land Vehicle Applications," IEEE Transactions on Robotics and Automation, Vol. 17, October 2001, pp. 731–747.
[9] Borenstein, J., and L. Feng, "Gyrodometry: A New Method for Combining Data from Gyros and Odometry in Mobile Robots." In Proceedings of the IEEE International Conference on Robotics and Automation, Minneapolis, MN, April 1996, pp. 423–428.
[10] Lamon, P., and R. Siegwart, "Inertial and 3D-Odometry Fusion in Rough Terrain: Towards Real 3D Navigation." In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, September 2004.
[11] Nebot, E., and H. F. Durrant-Whyte, "Initial Calibration and Alignment of Low Cost Inertial Navigation Units for Land Vehicle Applications," Journal of Robotic Systems, Vol. 16, No. 2, February 1999, pp. 81–92.


[12] Liu, B., M. D. Adams, and J. I. Guzman, "Multi-Aided Inertial Navigation for Ground Vehicles in Outdoor Uneven Environments." In Proceedings of the IEEE International Conference on Robotics and Automation, Barcelona, Spain, April 2005, pp. 4703–4708.
[13] Kim, J., and S. Sukkarieh, "Real-Time Implementation of Airborne Inertial-SLAM," International Journal of Robotics and Autonomous Systems, Vol. 55, No. 1, 2007, pp. 62–71.
[14] Ridenour, L. N., Radar System Engineering, MIT Radiation Laboratory Series, Vol. 1, New York and London: McGraw-Hill, 1947.
[15] Hall, J. S., Radar Aids to Navigation, New York and London: McGraw-Hill, 1947.
[16] Cooper, J. A., and L. T. James, "Location Estimation and Guidance Using Radar Imagery," IEEE Transactions on Aerospace and Electronic Systems, Vol. 11, No. 6, November 1975, pp. 1383–1388.
[17] Caves, R. G., P. J. Harley, and S. Quegan, "Matching Map Features to Synthetic Aperture Radar (SAR) Images Using Template Matching," IEEE Transactions on Geoscience and Remote Sensing, Vol. 30, No. 4, pp. 680–685.
[18] Novak, L., "Correlation Algorithms for Radar Map Matching." In IEEE Int'l. Conf. on Decision and Control, Quebec, Canada, December 1976, pp. 780–790.
[19] Taylor, D. W., "Local Area Navigation: A Tool for GPS-Denied Geolocation." In Proceedings of the SPIE-Aerosense Conference, Orlando, FL, 2003, pp. 125–136.
[20] Torres-Torriti, M., and A. Guesalaga, "Automatic Ship Positioning and Radar Biases Correction Using the Hausdorff Distance." In 10th Int'l. Conf. on Information Fusion, Quebec, Canada, July 2007, pp. 1–8.
[21] Guesalaga, A. R., "Recursive Estimation of Radar Biases Using Electronic Charts," IEEE Transactions on Aerospace and Electronic Systems, Vol. 40, No. 2, April 2004, pp. 725–733.
[22] Duda, R. O., P. E. Hart, and D. G. Stork, Pattern Classification and Scene Analysis, New York: John Wiley and Sons, 1996.
[23] James, L. T., "Map-Matching Estimation of Location Using Range Scan Radar Images." In IEEE National Aerospace and Electronics Conference, May 1974, pp. 423–428.
[24] Lu, F., and E. Milios, "Robot Pose Estimation in Unknown Environments by Matching 2D Range Scans," Journal of Intelligent and Robotic Systems, Vol. 18, No. 3, 1997, pp. 249–275.
[25] Kaplan, E., and C. Hegarty, Understanding GPS: Principles and Applications, Norwood, MA: Artech House, 2005.
[26] Leonard, J. J., and H. F. Durrant-Whyte, "Dynamic Map Building for an Autonomous Mobile Robot." In Proceedings of the IEEE International Workshop on Intelligent Robots and Systems, Ibaraki, Japan, July 1990, pp. 89–96.
[27] Guivant, J., E. Nebot, and S. Baiker, "Autonomous Navigation and Map Building Using Laser Range Sensors in Outdoor Applications," Journal of Robotic Systems, Vol. 17, No. 10, October 2000, pp. 565–583.
[28] Nuchter, A., H. Surmann, K. Lingemann, J. Hertzberg, and S. Thrun, "6D SLAM with an Application in Autonomous Mine Mapping." In International Conference on Robotics and Automation (ICRA), New Orleans, LA, April 2004, pp. 1995–2003.
[29] Eustice, R. M., H. Singh, and J. J. Leonard, "Exactly Sparse Delayed-State Filters for View-Based SLAM," IEEE Transactions on Robotics, Vol. 22, No. 6, December 2006, pp. 1100–1114.
[30] Lacroix, S., A. Mallet, D. Bonnafous, G. Bauzil, S. Fleury, et al., "Autonomous Rover Navigation on Unknown Terrains: Functions and Integration," International Journal of Robotics Research, Vol. 21, No. 10, October 2002, pp. 917–942.
[31] Perera, L. D. L., W. S. Wijesoma, and M. D. Adams, "The Estimation Theoretic Bias Correction Problem in Map Aided Localisation," International Journal of Robotics Research, Vol. 25, No. 7, July 2006, pp. 645–667.

[32] Makarsov, D., and H. F. Durrant-Whyte, "Mobile Vehicle Navigation in Unknown Environments: A Multiple Hypothesis Approach," IEE Proceedings of Control Theory Applications, Vol. 142, No. 4, July 1995, pp. 385–400.
[33] Wijesoma, W. S., L. D. L. Perera, and M. D. Adams, "Toward Multidimensional Assignment Data Association in Robot Localization and Mapping," IEEE Transactions on Robotics, Vol. 22, No. 2, April 2006, pp. 350–365.
[34] Neira, J., and J. D. Tardos, "Data Association in Stochastic Mapping Using the Joint Compatibility Test," IEEE Transactions on Robotics and Automation, Vol. 17, December 2001, pp. 890–897.
[35] Clark, S., Autonomous Land Vehicle Navigation Using Millimetre Wave Radar, PhD Thesis, Australian Centre for Field Robotics, University of Sydney, 1999.
[36] Williams, S., Efficient Solutions to Autonomous Mapping and Navigation Problems, PhD Thesis, Australian Centre for Field Robotics, University of Sydney, 2001.
[37] Adams, M. D., F. Tang, W. S. Wijesoma, and C. Sok, "Convergent Smoothing and Segmentation of Noisy Range Data in Multiscale Space," IEEE Transactions on Robotics, Vol. 24, No. 3, June 2008, pp. 746–753.
[38] Se, S., D. Lowe, and J. Little, "Mobile Robot Localisation and Mapping with Uncertainty Using Scale-Invariant Visual Landmarks," International Journal of Robotics Research, Vol. 21, No. 8, August 2002, pp. 735–758.
[39] Ribas, D., P. Ridao, J. D. Tardos, and J. Neira, "Underwater SLAM in a Marina Environment." In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, October 2007.
[40] Brooker, G. M., S. Scheding, M. V. Bishop, and R. C. Hennessy, "Development and Application of Millimeter Wave Radar Sensors for Underground Mining," IEEE Sensors Journal, Vol. 5, No. 6, December 2005, pp. 1270–1280.
[41] Langer, D., and T. Jochem, "Fusing Radar and Vision for Detecting, Classifying and Avoiding Roadway Obstacles." In IEEE Symposium on Intelligent Vehicles, September 1996.
[42] Rohling, H., and R. Mende, "OS CFAR Performance in a 77-GHz Radar Sensor for Car Applications." In Proceedings of the CIE International Conference of Radar, October 1996, pp. 109–114.
[43] Durrant-Whyte, H. F., "An Autonomous Guided Vehicle for Cargo Handling Operations," International Journal of Robotics Research, Vol. 15, No. 5, 1996, pp. 407–440.
[44] Durrant-Whyte, H. F., D. Pagac, B. Rogers, M. Stevens, and G. Nelmes, "An Autonomous Straddle Carrier for Movement of Shipping Containers," IEEE Robotics and Automation Magazine, September 2007.
[45] Brooker, G., M. Bishop, and S. Scheding, "Millimetre Waves for Robotics." In Australian Conference for Robotics and Automation, Sydney, Australia, November 2001.
[46] Langer, D., "An Integrated MMW RADAR System for Outdoor Navigation." In IEEE International Conference on Robotics and Automation, Minneapolis, MN, April 1996, pp. 417–422.
[47] Foessel, A., "Radar Sensor Model for Three-Dimensional Map Building." In Proc. SPIE, Mobile Robots XV and Telemanipulator and Telepresence Technologies VII, H. M. Choset, D. W. Gage, and M. R. Stein, editors, Vol. 4195, SPIE, November 2000.
[48] Foessel, A., J. Bares, and W. R. L. Whittaker, "Three-Dimensional Map Building with MMW RADAR." In Proceedings of the 3rd International Conference on Field and Service Robotics, Helsinki, Finland, June 2001.
[49] Morris, A., D. Kurth, W. Whittaker, and S. Thayer, "Case Studies of a Borehole Deployable Robot for Limestone Mine Profiling and Mapping." In International Conference on Field and Service Robotics (FSR '03), Lake Yamanaka, Japan, 2003.
[50] Belwood, J., and R. Waugh, "Bats and Mines: Abandoned Does Not Always Mean Empty," BATS Magazine, Vol. 9, No. 3, 1991.


[51] Underwood, J. P., A. Hill, T. Peynot, and S. J. Scheding, "Error Modeling and Calibration of Exteroceptive Sensors for Accurate Mapping Applications," Journal of Field Robotics, Vol. 27, No. 1, 2010, pp. 2–20.
[52] Peynot, T., S. Scheding, and S. Terho, "The Marulan Data Sets: Multi-Sensor Perception in a Natural Environment with Challenging Conditions," International Journal of Robotics Research, Vol. 29, No. 13, 2010, pp. 1602–1607.
[53] Peynot, T., J. Underwood, and S. Scheding, "Towards Reliable Perception for Unmanned Ground Vehicles in Challenging Conditions." In International Conference on Intelligent Robots and Systems (IROS), St. Louis, MO, October 2009.
[54] Dissanayake, M. W. M. G., P. Newman, S. Clark, H. F. Durrant-Whyte, and M. Csorba, "A Solution to the Simultaneous Localization and Map Building (SLAM) Problem," IEEE Transactions on Robotics and Automation, Vol. 17, No. 3, June 2001, pp. 229–241.
[55] Brooker, G., C. Lobsey, and R. Hennessy, "Low-Cost Measurement of Small Boat RCS at 94 GHz." In Proceedings of IEEE International Conference on Control, Automation and Robotics, 2006.
[56] Rohling, H., and M. M. Meinecke, "Waveform Design Principles for Automotive Radar Systems." In CIE International Conference on Radar, Beijing, China, October 2001.
[57] Ruhkick, R., "DARPA Grand Challenge: From the Last Frontier to the Next Frontier," Technical Report, Arctic Tortoise technical paper, 2004.
[58] Lundquist, C., L. Hammarstrand, and F. Gustafsson, "Road Intensity Based Mapping Using Radar Measurements with a Probability Hypothesis Density Filter," IEEE Transactions on Signal Processing, Vol. 59, No. 4, April 2011.
[59] Mahler, R., Statistical Multisource Multitarget Information Fusion, Norwood, MA: Artech House, 2007.
[60] Mahler, R., "Implementation and Application of PHD/CPHD Filters." In International Conference on Information Fusion, Seattle, WA, 2009.
[61] Checchin, P., F. Gerossier, C. Blanc, R. Chapuis, and L. Trassoudaine, "Radar Scan Matching SLAM Using the Fourier-Mellin Transform." In Proceedings of the Field and Service Robotics Conference, 2009, pp. 151–161.
[62] Gerossier, F., P. Checchin, C. Blanc, R. Chapuis, and L. Trassoudaine, "Trajectory-Oriented EKF-SLAM Using the Fourier-Mellin Transform Applied to Microwave Radar Images." In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, October 2009.
[63] Vivet, D., P. Checchin, and R. Chapuis, "On the Fly Localization and Mapping Using a 360° Field-of-View Microwave Radar Sensor." In International Conference on Intelligent Robots and Systems (IROS) Workshop on Planning, Perception and Navigation for Intelligent Vehicles, St. Louis, MO, October 2009.
[64] Mullane, J., B. N. Vo, M. Adams, and W. S. Wijesoma, "A Random Set Approach to SLAM." In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) Workshop on Visual Mapping and Navigation in Outdoor Environments, Japan, May 2009.
[65] Mullane, J., M. Adams, and W. S. Wijesoma, "Robotic Mapping Using Measurement Likelihood Filtering," International Journal of Robotics Research, Vol. 28, No. 2, 2009, pp. 172–190.
[66] Jose, E., M. D. Adams, J. Mullane, and N. Patrikalakis, "Predicting Millimetre Wave Radar Spectra for Autonomous Navigation," IEEE Sensors Journal, Vol. 10, No. 5, May 2010, pp. 960–971.
[67] Onunka, C., and C. Bright, "Autonomous Marine Craft Navigation on the Study of Radar Obstacle Detection." In Proceedings of the 11th IEEE International Conference on Control, Automation, Robotics and Vision (ICARCV 2010), Singapore, December 2010.
[68] Mullane, J., S. Keller, A. Rao, M. Adams, A. Yeo, et al., "X-Band Radar Based SLAM in Singapore's Off-Shore Environment." In The Eleventh International Conference on Control, Automation, Robotics and Vision (ICARCV 2010), Singapore, December 2010.

Part I  Fundamentals of Radar and Robotic Navigation

Chapter 2

A Brief Overview of Radar Fundamentals

2.1  Introduction

This chapter is designed to serve as a source of reference for the remaining chapters of this book. An understanding of the fundamentals of radar is essential before it can be applied to the complex tasks of object detection, scene understanding, and autonomous mapping and navigation. Although the chapter touches on many of the fundamental principles of radar operation, it in no way can, nor is it meant to, replace classic radar textbooks, which present the fundamentals of radar in much more detail, such as those of Skolnik [1], Barton [2, 3], Toomay et al. [4], Levanon [5], and, for millimeter wave (MMW) radar, Currie et al. [6]. The chapter focuses on the fundamentals needed for the radar modeling, detection, mapping, and SLAM applications covered in Chapters 5 through 10. Section 2.2 begins with an explanation of the form that radar measurements can take, explaining the multiple line-of-sight imaging capabilities of radar, with example data. This is followed by a derivation and analysis of the simple radar equation in Section 2.3, upon which many of the ensuing analyses are based. Section 2.4 highlights the imaging capabilities of radar, due to its ability to provide received power information not only from objects directly in its line of sight, but also from those that are physically masked. This offers the potential for radar to provide more information than its single line-of-sight counterparts, such as laser range finders, and examples demonstrating the penetration abilities of radio waves are given. The measured power returned from environmental objects can span several orders of magnitude, and Section 2.5 explains common forms of signal dynamic range compression that are applied. The effects of logarithmic compression and range compensation on the received signals, during both object absence and presence, are examined with examples.
The radars used in autonomous robotic applications mainly adopt two range measurement methods, and in Section 2.6, detailed explanations of these are given. In particular, the time of flight (TOF) and frequency modulated continuous wave (FMCW) range estimation methods are derived, including FMCW Doppler relative velocity estimation methods. A topic of extreme importance in autonomous robotic applications is the sources of uncertainty in radar, which is the subject of Section 2.7. Sources common to all radar types are introduced, such as receiver noise, interference and clutter, object radar cross section (RCS) fluctuations, multi-path effects, and speckle. Essential for modeling particular radar types is an understanding of the uncertainties introduced due to the range estimation techniques employed. Therefore, Section 2.8 explains the reasons for range errors in the commonly used TOF and FMCW radars, and methods to


minimize such errors. Finally, Section 2.9 shows some of the problems in converting scanned, received radar data from its natural polar form to the Cartesian form necessary for Cartesian-based image processing or grid representation. To maintain the application-oriented nature of the book, the principles presented in this chapter are illustrated as far as possible with examples from actual radar measurements and scans. Also, extensive references are made to concepts and algorithms within the modeling, detection, mapping, and SLAM chapters that rely on the underlying radar principles presented.
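Ahead of the full derivations in Section 2.6, both range estimation principles mentioned above reduce, in their simplest forms, to one-line relations: the standard TOF relation r = ct/2 and the linear-sweep FMCW relation r = c f_b T / (2B). The sketch below uses these textbook forms only; the function names and parameter values are illustrative choices, not those of any radar described in this book.

```python
C = 3.0e8  # speed of light (m/s)

def tof_range(round_trip_time_s):
    """Time-of-flight radar: r = c * t / 2, with t the echo round-trip time."""
    return C * round_trip_time_s / 2.0

def fmcw_range(f_beat_hz, sweep_time_s, sweep_bandwidth_hz):
    """FMCW radar with a linear frequency sweep and a stationary target:
    r = c * f_b * T / (2 * B), with beat frequency f_b, sweep time T,
    and swept bandwidth B."""
    return C * f_beat_hz * sweep_time_s / (2.0 * sweep_bandwidth_hz)

# Both methods locating the same 21-m target (illustrative parameters):
r_tof = tof_range(140.0e-9)                   # 140-ns round trip -> 21 m
r_fmcw = fmcw_range(84.0e3, 1.0e-3, 600.0e6)  # 84-kHz beat -> 21 m
```

The contrast between the two relations already hints at the different error sources examined in Section 2.8: TOF accuracy rests on timing precision, while FMCW accuracy rests on sweep linearity and beat-frequency estimation.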

2.2  Radar Measurements

A radar is an active sensor that transmits electromagnetic energy into an environment and receives a component of that energy after reflection from single or multiple objects. Many such devices, and all of those used in the experiments presented throughout this book, provide an array of received power values, the indices of which correspond to discrete range bins, between the minimum and maximum working range of the radar. A plot of the received power values versus range is known as an A-scope display (sometimes also referred to as an R-scope display) and, in the early days of radar, was displayed on a cathode ray tube. A simple way of interpreting the A-scope is to consider the power "seen" by the receiver as a function of time, after the transmission of the radio energy. This time is proportional to the range from which each received power value results, whether it be due to noise, interference, or an actual object. Figure 2.1 shows an A-scope that resulted from the MMW FMCW radar of Figure 1.15(a), with its minimum and maximum working ranges set to 5 m and 100 m, respectively, and a range resolution (range bin size) set to 1.82 m. This A-scope corresponds to a single bearing value taken from a scan, with a metal pole at a range of 21 m from the radar. Note that the A-scope is made up of discrete power values, corresponding to the discrete range bins (one unique power value every 1.82 m of range). The A-scope is generally referred to as

Sradar = [Sradar(1)  Sradar(2)  …  Sradar(q)  …  Sradar(Q)]    (2.1)

where Sradar is an array of Q received power values corresponding to discrete range values. Hence, with all of the radar data used in this book, the range space can be considered to be divided into Q discrete range bins, and Sradar(q) refers to the general received power at range bin q. It should be noted, however, that the received power can be expressed in many different ways, with correspondingly different units. First, it can be expressed as a normalized linear variable, with no units, as in Figure 2.1, since the received power value may be normalized relative to the transmitted power or some arbitrary power value. In this form, the A-scope will be referred to as Sradar ← Slin, where

Slin = [Slin(1)  Slin(2)  …  Slin(q)  …  Slin(Q)]    (2.2)


Figure 2.1  Discrete received power values shown on a linear scale, corresponding to each range bin. (Vertical axis: Slin; horizontal axis: distance in m, 0–100.)
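The discretization just described (a metal pole at 21 m falling into one of Q bins of width 1.82 m, starting at the 5-m minimum working range) can be sketched in a few lines. The helper names and the 0-based indexing below are illustrative choices, not the book's notation:

```python
R_MIN = 5.0     # minimum working range (m), as in Figure 2.1
R_MAX = 100.0   # maximum working range (m)
DELTA_R = 1.82  # range resolution / bin size (m)

Q = int((R_MAX - R_MIN) / DELTA_R)  # number of discrete range bins

def range_to_bin(r_m):
    """0-based index q of the range bin containing range r_m."""
    if not R_MIN <= r_m < R_MAX:
        raise ValueError("range outside the radar's working range")
    return int((r_m - R_MIN) / DELTA_R)

def bin_to_range(q):
    """Range (m) at the centre of bin q."""
    return R_MIN + (q + 0.5) * DELTA_R

q_pole = range_to_bin(21.0)  # bin holding the metal pole's return
```

Note that the bin centre returned for the pole (about 20.5 m here) differs from the true 21-m range by up to half a bin width: the quantization component of the range errors discussed in Section 2.8.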

Also, as will be demonstrated in Section 2.5, it may be more useful to express the received power in logarithmic form, in decibels (dB), so that Sradar ← Slog, where

Slog = [Slog(1)  Slog(2)  …  Slog(q)  …  Slog(Q)]    (2.3)

Finally, the received power values can also be further processed, in an attempt to remove the effect of range, as will be discussed in both Sections 2.3 and 2.5. In this form, Sradar ← Slog_rc, where "rc" indicates "range compensation," i.e.,

Slog_rc = [Slog_rc(1)  Slog_rc(2)  …  Slog_rc(q)  …  Slog_rc(Q)]    (2.4)
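The three representations of Equations 2.2 through 2.4 can be related with a short sketch. The plain dB conversion and the additive 40 log10(r) range-compensation term (which anticipates the 1/r⁴ power fall-off derived in Section 2.3) are assumed forms used for illustration only; a real radar's internal compression and compensation laws are manufacturer specific:

```python
import math

def to_log(s_lin):
    """Slin -> Slog: logarithmic compression to decibels (Equation 2.3)."""
    return [10.0 * math.log10(s) for s in s_lin]

def to_log_rc(s_lin, ranges_m):
    """Slin -> Slog_rc: dB compression plus range compensation
    (Equation 2.4). Adding 40*log10(r) cancels a 1/r^4 power fall-off,
    so equal-RCS targets give similar compensated values at any range."""
    return [s_db + 40.0 * math.log10(r)
            for s_db, r in zip(to_log(s_lin), ranges_m)]

# Two equal reflectors whose raw linear power differs only through range:
ranges = [10.0, 40.0]
s_lin = [1.0e8 / r ** 4 for r in ranges]  # 1/r^4 fall-off in linear power
s_log_rc = to_log_rc(s_lin, ranges)       # both entries equal after compensation
```

In the example, the raw linear powers differ by a factor of 256, yet the compensated values coincide, which is precisely the effect range compensation is meant to achieve.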

Equation 2.4 is the format of the received A-scope outputs of the radars in Figures 1.6 and 1.15(a), since the internal electronics logarithmically compress the power signal and apply range compensation. Throughout this book, however, processing of the received power values, for example, for object description, detection, and noise reduction, will be applied to the most convenient form of Sradar, and in every case, for clarity, the array Sradar and its elements Sradar(q) will be expressed in their appropriate forms Slin, Slog, or Slog_rc and Slin(q), Slog(q), or Slog_rc(q), respectively.

At a particular scanning bearing angle, Sradar can be considered to comprise the complete information received by the radar over its entire range space when it is plotted against the range bin number q. As the radar swash plate (the scanning surface that reflects the radio waves, so that they are directed at the bearing, and for 3D radars, elevation, angle(s) required) scans the environment in a horizontal plane, multiple A-scope arrays, Sradar(q : 1 → Q, θi), similar to the A-scope recorded at bearing angle θi, are recorded, resulting in a complete, 2D

Figure 2.2  A PPI section of the 360° received power scan Slog_rc(q : 1 → Q, Θ) from which the single A-scope of Figure 2.1 was extracted. The metal pole is located at coordinates (–1, –21) and darker shades indicate higher received power values. The shaded bar to the right shows the received power values, in dB. (Axes: x/m and y/m.)

360° set of measurements Sradar(q : 1 → Q, Θ). A section from the full 360° scan from which the A-scope of Figure 2.1 was extracted is shown in Figure 2.2. This form of the data is referred to as a plan position indicator (PPI) when the radar is located at the origin, as in Figure 2.2, or as a sector PPI, which was already introduced in Figures 1.7 and 1.8, when the origin is shifted relative to the radar. In Figure 2.2, Slog_rc(q : 1 → Q, Θ) is plotted, creating a reduced dynamic range for the range of shades necessary to represent all received power values. Higher received power values are represented as darker shaded areas, and the metal pole is evident at approximate coordinates (–1, –21). Note that this particular radar has a minimum working range of 5 m, represented by the circular region centered on the origin. Before continuing the discussion on radar fundamentals, an understanding of the relationship between the received power from an object and its range from the radar is necessary. This relationship describes the dependence of the linear form of the received power Slin and an object's range, as well as other parameters, and is referred to as the radar equation.
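A PPI display is simply the polar scan rendered in Cartesian coordinates, a conversion whose pitfalls are examined in Section 2.9. A minimal nearest-cell sketch follows; the grid size, cell size, and overwrite policy are illustrative assumptions, not the algorithm used for Figure 2.2:

```python
import math

R_MIN, DELTA_R = 5.0, 1.82  # minimum range and bin size (m)
CELL = 1.0                  # Cartesian cell size (m)
HALF = 100                  # grid half-width, in cells

def ppi(scan, bearings_rad):
    """Render scan[i][q] (received power at bearing i, range bin q) onto
    a Cartesian grid; None marks cells into which no polar sample falls."""
    grid = [[None] * (2 * HALF) for _ in range(2 * HALF)]
    for a_scope, theta in zip(scan, bearings_rad):
        for q, power in enumerate(a_scope):
            r = R_MIN + (q + 0.5) * DELTA_R   # bin-centre range
            x = r * math.cos(theta)
            y = r * math.sin(theta)
            ix = math.floor(x / CELL) + HALF  # nearest-cell assignment
            iy = math.floor(y / CELL) + HALF
            if 0 <= ix < 2 * HALF and 0 <= iy < 2 * HALF:
                grid[iy][ix] = power          # later samples overwrite
    return grid
```

Near the radar, many polar samples collapse into a single Cartesian cell, while far from it, cells between adjacent beams receive no sample at all; these are exactly the resampling problems examined in Section 2.9.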

2.3  The Radar Equation

The simple form of the radar equation gives a relationship between the expected received power Slin from a target (in most texts considered to be a continuous variable; hence the range-bin argument (q) is dropped in this section), its range r, and its RCS ΓRCS, as well as the characteristics of the transmitter and receiver antenna(s). As its name implies, it is a simplification of the true relationship between these quantities; however, in much of the literature, the simple radar equation forms the basis of many radar processing applications. It is


therefore of fundamental importance, and will be referred to throughout this book, along with reasons for the noncompliance of real radar systems with this equation.

The derivation of the simple radar equation begins with an expression for the power density that would result at points in space from an isotropic antenna (a theoretically perfect antenna, which radiates energy uniformly in all directions) [1]. Hence, at distance r from an antenna with transmission power ST,

Power density (isotropic antenna) = ST / (4π r²)    (2.5)

A radar becomes useful when a directive, as opposed to isotropic, antenna is used to radiate S­t into a particular direction. The ratio of increased power in a particular direction compared with that from an isotropic antenna is defined as the antenna gain GAntenna, which case

Power density (directive antenna) = ST GAntenna / (4π r^2)    (2.6)

The portion of this power that is incident on the target is determined by the actual area of the target that intersects the sphere, centered at the antenna, with radius r. An isotropic target would theoretically reradiate this power in all directions. For detection by the radar, the quantity of importance is the power reradiated back in the direction of the radar's antenna, a quantity that defines a target's RCS ΓRCS as follows

Power density of received signal at radar = [ST GAntenna / (4π r^2)] [ΓRCS / (4π r^2)]    (2.7)

Hence ΓRCS encapsulates many properties of the target and corresponds to its size as seen by the radar, which is, in general, unrelated to its physical size. The units of ΓRCS are m^2. Since Equation 2.7 gives the power density, the actual power received by the radar will be Slin, given by

Slin = [ST GAntenna / (4π r^2)] AR [ΓRCS / (4π r^2)]    (2.8)

where AR is the effective receiver area of the antenna. From antenna theory,

GAntenna = 4π AR / λ^2    (2.9)

where λ is the electromagnetic radiation wavelength [7]. Most radars utilize the same antenna for transmission and reception. Therefore, with the addition of a radar loss factor L (L ≥ 1), substituting Equation 2.9 into Equation 2.8 yields the classical radar equation

Received power: Slin = ST (GAntenna)^2 λ^2 ΓRCS / [(4π)^3 r^4 L] ∝ ΓRCS / r^4    (2.10)

assuming constant L for a given radar.

A theoretically perfect antenna, which radiates energy uniformly in all directions.
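As a numerical illustration, Equation 2.10 can be evaluated directly. The parameter values below (transmit power, antenna gain, RCS) are illustrative assumptions for a sketch, not specifications of any radar described in this book.

```python
import math

def received_power(s_t, g_antenna, wavelength, rcs, r, loss=1.0):
    """Simple radar equation (Equation 2.10): linear received power from
    a target of RCS `rcs` (m^2) at range r (m), with loss factor L >= 1."""
    return (s_t * g_antenna**2 * wavelength**2 * rcs) / (
        (4 * math.pi) ** 3 * r**4 * loss)

# Illustrative values only: 1-W transmitter, gain 1000, 77-GHz wavelength.
wavelength = 3e8 / 77e9  # roughly 3.9 mm
p_10 = received_power(1.0, 1000.0, wavelength, rcs=10.0, r=10.0)
p_20 = received_power(1.0, 1000.0, wavelength, rcs=10.0, r=20.0)
print(p_10 / p_20)  # doubling the range costs a factor of 2^4 = 16
```

The inverse fourth-power range dependence shown here is precisely what the range compensation processing of Section 2.5 is designed to counteract.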


A Brief Overview of Radar Fundamentals

The radar equation can be written in its alternate form, which gives the range to a target as

Target range: r = [ST (GAntenna)^2 λ^2 ΓRCS / ((4π)^3 Slin L)]^(1/4) ∝ (ΓRCS / Slin)^(1/4)    (2.11)

As stated above, Equations 2.10 and 2.11 constitute a simple form of the radar equation, which in practice often fails to correctly characterize radar performance. In particular, the actual received power is typically less than that predicted by Equation 2.10, due to many possible factors, including atmospheric conditions along the radio wave propagation path. Also, the target RCS ΓRCS is best described as a statistical quantity, since it varies with the radar-to-target viewing angle and the target's material properties. Further, the received power Slin must be extracted in the presence of other received power signals, caused by interference from other sources. Such interference is statistical in nature, as is the receiver's noise power that corrupts Slin. Therefore, instead of attempting to improve the deterministic modeling of radar detection and range measurement, much of the successful work with radar bases its modeling on the simple radar equation accompanied by stochastic analyses. Later, in Section 2.7, the sources of uncertainty in radar power and range measurements will be discussed, which naturally leads to the stochastic-based detection theory concepts introduced in Chapter 3.

2.4  Radar Signal Attenuation

Radio waves can penetrate certain nonmetallic objects, which sometimes explains the multiple line-of-sight objects within a received A-scope. Documented power loss (total attenuation in dB) curves, as a function of a radar's transmission frequency, are given in Figure 2.3 for various materials. This limited, but useful, penetration property can be exploited in mobile robot navigation, since extra information can be available from multiple line-of-sight targets. According to the curves in Figure 2.3, radar waves at the transmission frequencies of the two MMW and the X-band radars used in this book can be expected to penetrate materials such as glass, plywood, and Kevlar with little attenuation. Even clay brick suffers only a 13 dB loss at 77 GHz, the frequency of the radar used in most of the land-based experiments throughout this book. This implies the possibility of such a radar detecting objects of sufficient RCS through such materials. To introduce the target penetration and reflection capabilities of the radars used, tests were carried out with three different masking objects. In a section of a scan, shown in Figure 2.4, a trihedral reflector of high RCS and a sheet of wood of thickness 0.8 cm were placed at ranges of 14 m and 8.5 m, respectively, so that the wood visually occluded the high RCS reflector from the RADAR. This

5. Although it should be noted that, depending on the environmental conditions, these can be the results of multiple path reflections also, as will be introduced in Section 2.7.1.5.


[Figure 2.3 plot: total attenuation (dB) versus frequency (GHz) curves for painted board, concrete block, clay brick, 3/4" plywood, glass, and a Kevlar sheet.]

Figure 2.3  The expected received power loss when radio waves penetrate various materials, versus radio frequency. The dashed vertical lines correspond to the frequencies of the radars used in this work: 9.41 GHz (marine TOF radar), 77 GHz, and 94 GHz (FMCW land-based radars). (Adapted from Ferris and Currie, 1998 [8].)

[Figure 2.4 plot: PPI section with shaded received power (dB); labeled returns include a mud bank, the RADAR reflector, and the wooden plane; axes in range (m).]

Figure 2.4  A section of a PPI containing a RADAR reflector of RCS 177 m^2, 14 m from the RADAR, and a wooden sheet of thickness 0.8 cm visually occluding the reflector. The wooden sheet was located 8.5 m from the RADAR. The shaded scale represents the received power in dB.


ensured that no part of the RADAR reflector fell directly within the beamwidth of the RADAR, so that if it was detected, it must have been due to the radio waves penetrating the wood. Figure 2.4 shows relatively high-powered returns from the two features down range, even though, visually, one occludes the other. The experiment was also repeated for a perspex sheet of thickness 1.0 cm (Figure 2.5). The results show the penetrating capabilities of the radar, as the radar reflector and the mud bank still yield discernible received power values at larger range values than the occluding objects. In contrast, Figure 2.6 shows a similar experiment using an occluding metal sheet of thickness 3 mm. In this case, the metal sheet "masks out" any power returns from objects behind it. This is to be expected, as radio waves are known to exhibit minimum penetration into and through metallic objects [8]. The results of object penetration by RADAR waves motivate further development of A-scope prediction and mapping based on multiple line-of-sight feature detection, which is one of the aims of this book. Radar detection for multiple line-of-sight features will be the subject of Chapter 6, and a framework for incorporating such detections, taking into account their probabilities of detection as well as their spatial uncertainty, will be the subject of Chapters 7, 8, 9, and 10.

[Figure 2.5 plot: PPI section with shaded received power (dB); labeled returns include the mud bank, RADAR reflector, and perspex sheet; axes in range (m).]

Figure 2.5  A RADAR reflector of RCS 177 m^2, 14 m from the RADAR, and a perspex sheet of thickness 1.0 cm, 8.5 m from the RADAR. Again, the reflector is visually occluded from the RADAR.


[Figure 2.6 plot: PPI section with shaded received power (dB); labeled returns include a building, the mud bank, and the metal sheet; axes in range (m).]

Figure 2.6  A RADAR reflector of RCS 177 m^2, 14 m from the RADAR, and a steel metal sheet of thickness 0.3 cm, 7.0 m from the RADAR. Again, the reflector is visually occluded from the RADAR, and this time no apparent power fluctuations are received from the radar reflector.

2.5  Measurement Power Compression and Range Compensation

Based on the simple radar equation introduced in Section 2.3, each received power component Sradar(q) may or may not be dependent on the range r(q) corresponding to range bin q, depending on whether or not an object is present at range r(q). In the case of target presence at range bin q, the simple radar equation gives

Slin(q) = ST (GAntenna)^2 λ^2 ΓRCS / [(4π)^3 L [r(q)]^4] = β ΓRCS / [r(q)]^4    (2.12)

where ST, GAntenna, λ, and L are the assumed constant values defined in Equation 2.10, which can be combined into the constant β = ST (GAntenna)^2 λ^2 / [(4π)^3 L]. It can be seen from Equation 2.12 that Slin(q) can have values spanning several orders of magnitude, for two reasons:

1. The RCS of a target, ΓRCS, can span several orders of magnitude.
2. The range dependency [r(q)]^4 in the denominator has a large effect on the received power. For example, if a target is detected at a range of r(q) = 10 m, the received power would be 160,000 times larger than that received from the same target (same RCS ΓRCS) at a range of r(q) = 200 m.
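The 160,000 figure follows directly from the fourth-power range dependence of Equation 2.12; a one-line check:

```python
# Equation 2.12: received power scales as 1/r^4, so two equal-RCS targets
# at 10 m and 200 m differ in received power by a factor of (200/10)^4.
r_near, r_far = 10.0, 200.0
ratio = (r_far / r_near) ** 4
print(ratio)  # -> 160000.0
```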

44

A Brief Overview of Radar Fundamentals

For these reasons, the received power signal within radars is typically processed in two ways to greatly reduce the dynamic range of the received signal, so that the signal is computationally manageable. This processing takes the form of:

1. Logarithmic compression, so that the received power is represented in dB, i.e.,

Slog(q) = 10log10[Slin(q)]    (2.13)

2. Range compensation, to remove, or in some cases reduce, the dependency of the received power signal Slog(q) on range, making it depend only on a target's RCS ΓRCS.

2.5.1  Logarithmic Compression

Substituting Equation 2.12 into Equation 2.13 yields

Slog(q) = 10log10(β ΓRCS / [r(q)]^4)    (2.14)

and expanding the logarithm of the right side gives

Slog(q) = 10log10(β) + 10log10(ΓRCS(q)) − 40log10(r(q))    (2.15)

Hence, on a logarithmic scale, it can be seen that Slog(q) varies logarithmically with both ΓRCS(q) and r(q).

2.5.2  Range Compensation

Note that the last term in Equation 2.15 corresponds to the large attenuation in the received power due to the range of the target. Because of this attenuation, many radar systems include range compensation electronics in their design to minimize its effect. Such range compensation electronics in general attempt to cancel the effect of the final −40log10(r(q)) term in Equation 2.15.

2.5.2.1  FMCW Radar Range Compensation

It will be shown in Section 2.6.2 that for a range measuring radar that uses the FMCW process, its range estimate is directly proportional to an electronically derived beat frequency signal. Hence range bins correspond to discrete spectral frequency values. Therefore, in the case of FMCW radar, the A-scope values Sradar can be represented as a received power versus range spectrum by plotting the appropriate form of Sradar(q) against the range bin number q. Since range r(q) is proportional to a beat frequency variable fb,

fb = AFMCW r(q)    (2.16)


a simple +40 dB/decade high pass filter would theoretically eliminate the final term in Equation 2.15, removing this range dependency and giving

Slog-rc(q) = Slog(q) + 40log10(fb) = Slog(q) + 40log10(AFMCW) + 40log10(r(q))    (2.17)

Substituting Equation 2.15 into 2.17 gives

Slog-rc(q) = 10log10(β) + 40log10(AFMCW) + 10log10(ΓRCS(q)) = Constant + 10log10(ΓRCS(q))    (2.18)
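The compression and compensation steps of Equations 2.13 through 2.18 can be sketched numerically. The value of β and the RCS values below are illustrative assumptions, and AFMCW is taken as 1 so that the additive constant vanishes.

```python
import math

def s_log(beta, rcs, r):
    """Equation 2.15: log-compressed received power (dB) at range r (m)."""
    return 10 * math.log10(beta) + 10 * math.log10(rcs) - 40 * math.log10(r)

def s_log_rc(beta, rcs, r):
    """Equation 2.18 with A_FMCW = 1: after the +40 dB/decade range
    compensation filter, the result depends only on the target's RCS."""
    return s_log(beta, rcs, r) + 40 * math.log10(r)

beta = 1e-4  # assumed lumped constant from Equation 2.12
near = s_log_rc(beta, rcs=10.0, r=21.0)
far = s_log_rc(beta, rcs=10.0, r=36.0)
# Equal-RCS targets at 21 m and 36 m give equal compensated power,
# mirroring the near-identical peaks seen in Figures 2.10 and 2.13.
print(round(near - far, 9))  # -> 0.0
```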

Equation 2.18 is important, since it gives a direct theoretical calibration between a target's RCS and the received power value, provided the received power is represented on a logarithmic scale with range compensation applied. In all of the experiments involving FMCW radar, this equation will be assumed to represent the actual form of the radar's output power value, since logarithmic and range compression of this nature is typical in FMCW radars and is present in the FMCW radars used in this book. It is also important to note, however, that this high pass filter term 40log10(fb) has two effects on the power spectrum (A-scope):

1. During noise-only sections of the A-scope, a ramp signal of +40 dB/decade is superimposed onto the noise.
2. During target presence sections of the A-scope, the received power theoretically becomes proportional to the log of the target's RCS and independent of range.

2.5.3  Logarithmic Compression and Range Compensation During Target Absence

Point 1 in the previous section is evident, as during noise-only sections of the A-scope, where the received power Slin(q) = Snoise(q),

Slog-rc(q) = 10log10(Snoise(q)) + 40log10(fb) = 10log10(Snoise(q)) + 40log10(r(q)) + Constant    (2.19)

Hence the noise is logarithmically compressed and is superimposed onto a range-dependent 40 dB/decade ramp, implemented by a 40 dB/decade high pass filter. This can be seen in Figure 2.7, where Slog-rc is plotted for 1 ≤ q ≤ Q, where in this case the total number of range bins Q = 800, corresponding to a maximum range of 200 m. The ramp effect of the range compensation high pass filter is evident up to ranges of approximately 140 m. Theoretically, the high pass, range compensation filter should maintain an increasing additive signal as range increases. In reality, however, the effect of "roll-off" causes the attenuation of the actual signal at ranges beyond approximately 140 m (i.e., at the highest beat frequencies), due to other amplification stages within the radar.

[Figure 2.7 plot: Slog-rc (dB) versus distance (m), 0-200 m.]

Figure 2.7  The actual output of the radar, for a single swash plate bearing angle, during target absence (received noise only). This corresponds to the logarithmically compressed received power, with range compensation applied. Note the “roll-off” at high beat frequencies (ranges beyond approximately 140 m) for this radar.

For much of the processing of the received A-scopes in the ensuing chapters, it is necessary to make use of the linear received power values. This requires a reversal of the range compensation effect, followed by applying an inverse log operation. The removal of the range compensation effect can be seen in Figure 2.8, where Slog(q) = Slog-rc(q) − 40log10(r(q)) is plotted. As expected, the effect of range on the received noise signal is reduced, as Equation 2.19 now becomes

Slog(q) = 10log10(Snoise(q))    (2.20)

The dependence on range, however, is not completely eliminated, as theoretically suggested. This is because of nonideal amplification within the actual radar and the fact that radar Equation 2.12, upon which the fourth power of range dependency is assumed, gives only an approximate relationship between received power and range. This less-than-ideal operation becomes greatly exaggerated if the resulting noise power A-scope is then linearized, as shown in Figure 2.9, where

Slin(q) = 10^(Slog(q)/10)    (2.21)
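Reversing the radar's on-board processing, as done for Figures 2.8 and 2.9, amounts to subtracting the 40log10(r(q)) term and then inverting the logarithm (Equation 2.21); a sketch assuming an ideal compensation filter:

```python
import math

def to_linear(s_log_rc_db, r):
    """Undo an ideal 40 dB/decade range compensation, then undo the
    logarithmic compression (Equation 2.21) to recover S_lin."""
    s_log_db = s_log_rc_db - 40 * math.log10(r)
    return 10 ** (s_log_db / 10)

# A 60-dB compensated return at 10 m: 60 - 40 = 20 dB, i.e. S_lin = 100.
print(to_linear(60.0, 10.0))  # -> 100.0
```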

[Figure 2.8 plot: Slog (dB) versus distance (m), 0-200 m.]

Figure 2.8  Logarithmic received power during target absence, with the range compensation filter effect reversed.


[Figure 2.9 plot: Slin versus distance (m), 0-200 m.]

Figure 2.9  Linear received power during target absence, with the range compensation filter effect reversed.

is plotted for 1 ≤ q ≤ 800. The fact that the linearized A-scope does not display uniform noise across the range/frequency spectrum is evidence that, within the actual radar, the logarithmic compression and range compensation electronics are less than ideal, which is most likely the result of electronic cross-talk within the radar (i.e., power at certain frequencies "leaking" into the radar's receiver). This will be explained in Section 2.7.1.2. However, by analyzing similar A-scope displays with targets present, it will be shown that although the actual radar electronics do not appear to exactly mimic the theoretical concepts, this is of minor consequence due to the typically huge difference in the received power values when a target is present.

2.5.4  Logarithmic Compression and Range Compensation During Target Presence

The logarithmically compressed received A-scope with range compensation applied, Slog-rc, corresponding to the single target case (at a range of 21 m) of Figure 2.1 is shown in Figure 2.10, where in this case the radar's range is limited to 100 m. This is the output A-scope from the radar, since logarithmic compression and range compensation take place electronically within it. The effect of removing the range compensation filter (assuming it to be a perfect 40 dB/decade high pass filter) and linearizing the received power is shown in Figures 2.11 and 2.12, respectively. Note that in the linearized plot of Figure 2.12, the noise cannot be seen on the scale necessary to display the received power from the target. It is interesting to display the same three forms of the A-scope in Figures 2.10 through 2.12 when the same cylindrical target is simply moved to a new range of approximately 36 m. The corresponding A-scopes are shown in Figures 2.13 through 2.15. This experiment utilized a target that was a cylindrical metal pole, so that it exhibits approximately the same RCS when viewed from any direction. Hence, when the pole was moved from 21 m to 36 m, its RCS can be assumed to have remained approximately the same. Note that in Figures 2.10 and 2.13, the value of Slog-rc corresponding to the pole in each case (almost 80 dB on the scale shown) is almost the same. This demonstrates the usefulness of the range compensation filter, as the


[Figure 2.10 plot: Slog-rc (dB) versus distance (m), 0-100 m.]

Figure 2.10  Logarithmic received power during target presence, with the range compensation filter applied, for a target at 21 m from the radar.

received power value Slog-rc is almost independent of range and is only a function of the target's RCS, and therefore in this case is approximately the same. Figures 2.11 and 2.14 show logarithmically compressed received power values Slog, in which both the spike from the target and the noise can be visualized, but a drop in received power is evident (from approximately 26 dB to 18 dB) as the same target is moved from 21 m to 36 m, showing a range dependency. Finally, Figures 2.12 and 2.15 show the huge difference in the received power Slin on a linear power scale (power axis limits set the same in each case), as predicted by radar range Equation 2.10, when ΓRCS remains constant but range r increases.

[Figure 2.11 plot: Slog (dB) versus distance (m), 0-100 m.]

Figure 2.11  A-scope of Figure 2.10 with the range compensation filter effect reversed.


[Figure 2.12 plot: Slin versus distance (m), 0-100 m.]

Figure 2.12  Linearized version of the A-scope of Figure 2.11.

Importantly, note that in both Figures 2.12 and 2.15, the received power values are orders of magnitude higher than the received noise power, possibly caused by interference, shown in Figure 2.9. All of the A-scope displays in Figures 2.10 through 2.15 show valid forms of the received power signal in each range bin. The A-scopes in the form of Slog-rc in Figures 2.10 and 2.13 are very useful, as they allow a target to be identified by its RCS

[Figure 2.13 plot: Slog-rc (dB) versus distance (m), 0-100 m.]

Figure 2.13  Logarithmic received power during target presence, with the range compensation filter applied, for the same cylindrical target as shown in Figure 2.10, now moved to an increased range of 36 m.


[Figure 2.14 plot: Slog (dB) versus distance (m), 0-100 m.]

Figure 2.14  A-scope of Figure 2.13 with the range compensation filter effect reversed.

value at different ranges, since the effect of range on the received power value is approximately removed. Due to the logarithmic compression, they also allow both the target and noise power to be visualized on the same graphs. Therefore, this is the form of most of the results shown throughout this book. However, in order to further process the A-scopes, it is often necessary to manipulate the linearized A-scopes

[Figure 2.15 plot: Slin versus distance (m), 0-100 m.]

Figure 2.15  Linearized version of the A-scope of Figure 2.14.


Slin with the range compensation effect removed. For example, when applying the detection theoretic methods to be introduced in Chapter 3, the linearized power values are necessary. Due to the large dynamic range of the power values, however, detection results will still often be displayed in logarithmic form. Importantly, in all cases, the form of the signal displayed (Slog-rc, Slog, Slin, and so on) will be given.

2.6  Radar-Range Measurement Techniques

As seen in the previous sections, the location of objects in space can be estimated when a radar transmits electromagnetic energy and observes the returned echo. The estimation of position can be achieved through many means, the simplest of which, at a theoretical level, is the transmission of a pulse, after which the receiver is activated to receive the echo. The reception of an echo indicates the presence of a target, and the elapsed time between pulse transmission and reception is a measure of the range to the target. A radar can also be operated in a continuous, rather than a pulsed, mode, provided that a means exists to separate the high-powered transmitted signal from the often extremely low-powered received signal. Amplitude modulated continuous wave (AMCW) range estimation is one of the simplest forms of continuous mode range measurement. This consists of measuring the number of wavelengths and the phase (fractions of a wavelength) between transmitted and received sinusoids. Radars that implement this form of range estimation are rare, since phase detection improves with large chosen values of the modulating wavelength, which, without further processing, results in impractical electromagnetic frequencies for useful radar transmission. Therefore, the range measurement techniques explained in the following sections are based on time-of-flight (TOF) and frequency modulated continuous wave (FMCW) methods. For the reasons highlighted in the following, and since most robotics applications are based on FMCW radar, particular attention is paid to this type of range estimation process.

2.6.1  Time-of-Flight (TOF) Pulsed Radar

In time-of-flight (TOF) radar, separation of the echo and transmitted signals is achieved on the basis of time. Since the possibility of multiple targets down range has been noted in Section 2.4, each target is given an index i. The TOF principle is simple, since the range ri to target i is proportional to the time τi taken for the transmitted energy to reach the target and return, i.e.,

ri = c τi / 2    (2.22)

where c is the speed of electromagnetic energy propagation, often taken as the speed of light. This simple sequence is shown schematically, for ideal radar characteristics, in Figure 2.16. The pulse is considered received if it exceeds a detection threshold, simplistically shown in the lower diagram. In a real TOF radar, the pulse of electromagnetic energy can be generated as a result of “extreme” amplitude modulation, in which the transmitted signal amplitude is at 100 percent for a very short time and

[Figure 2.16 diagram: transmitted power pulse, received power pulse delayed by τi, and a detection threshold, each plotted against time/range; the detection electronics select the output range ri.]

Figure 2.16  TOF range estimation based on perfect radar pulse transmission with no noise.

0 percent for the remaining time. This type of radar, a commercially available X-band TOF marine device, is used in the coastal navigation experiments of Chapter 10. This is used for feature detection at sea, as part of an autonomous navigation experiment. The X-band is the frequency range corresponding to 8-12.5 GHz (corresponding to transmission wavelengths of 2.40-3.75 cm), and the particular device used in the experiments operates at a frequency of 9.410 GHz. It is shown in [9] that if the X-band sinusoid is modulated with a pulse in the time domain of width τm, then the frequency bandwidth of the pulse can be determined as the difference in frequencies at which the pulse power level drops to 50 percent of its maximum value (the –3 dB points). This bandwidth can be shown to be

∆fBW = 1 / τm    (2.23)

The TOF radar range resolution, which is the radar-to-target return distance corresponding to the pulse high time τm, is given by

∆r = c τm / 2    (2.24)

Between them, Equations 2.23 and 2.24 can be used to determine the range resolution in terms of the bandwidth of the transmitted signal, i.e.,

∆r = c / (2 ∆fBW)    (2.25)
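Equations 2.23 through 2.25 link pulse width, bandwidth, and resolution. A quick check against the marine radar's quoted 20-m resolution; the bandwidth and pulse width inferred here are illustrative deductions, not quoted specifications.

```python
C = 3e8  # propagation speed (m/s)

def range_resolution(bandwidth_hz):
    """Equation 2.25: TOF range resolution from pulse bandwidth."""
    return C / (2 * bandwidth_hz)

def pulse_width(bandwidth_hz):
    """Equation 2.23 inverted: pulse width tau_m = 1 / bandwidth."""
    return 1.0 / bandwidth_hz

bw = C / (2 * 20.0)           # bandwidth implied by a 20-m resolution
print(bw)                     # -> 7500000.0 (7.5 MHz)
print(range_resolution(bw))   # -> 20.0
print(pulse_width(bw))        # ~133 ns: hence the very high peak power needed
```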

For TOF radar to achieve a high range resolution (small ∆r in Equations 2.24 and 2.25), the pulse bandwidth must be very high, requiring a very short time duration for the pulse. This, in turn, implies that for reliable signal reception, a large amount of electromagnetic energy must be transmitted over the very short pulse time duration, implying extremely high peak power transmission. For example, the X-band TOF radar used in the experiments of Chapter 10 produces a


peak output power of 4 kW during its pulse transmission and has a range resolution of 20 m over a maximum range capability of 36 nautical miles. While such specifications are adequate at sea, for an autonomous vehicle operating on land, requiring accurate range-to-target estimation over much shorter distances, this range resolution would be unacceptable. A useful way of greatly increasing the range resolution and decreasing the high transmission peak power is to effectively lengthen the pulse and apply some form of modulation. This is the reason that short range radars often use continuous wave (CW) methods for range estimation, as demonstrated in the land-based experiments in this book.

2.6.2  Frequency Modulated Continuous Wave (FMCW) Radar

For the reasons of lower transmission power and relatively high range resolution explained in Section 2.6.1, radars used in robotic applications, such as those of Figures 1.6 and 1.15(a), base their range estimation techniques on the FMCW process. An introduction to the FMCW technique for obtaining target range is therefore given in this section. An understanding of this process is necessary for quantifying the noise in the range/power estimates, a detailed analysis of which will be the subject of Chapter 5. This will ultimately be applied to the prediction of measurements based on a current estimate of the vehicle and environmental state in a mobile robot navigation framework. Figure 2.17 shows a schematic block diagram of a typical FMCW radar transceiver. In the figure, the input voltage to the voltage controlled oscillator (VCO) is a ramp signal. The VCO generates a signal of linearly increasing frequency ∆f over a frequency sweep period Td. This linearly increasing chirp is transmitted via the antenna. An FMCW radar measures the distance to an object by mixing the received signal with a portion of the transmitted signal [10]. The mixer output (point

[Figure 2.17 block diagram: ramp input and linearizer driving a VCO; coupler; circulator; antenna; mixer (output point B); low noise amplifier; high pass filter; low pass anti-aliasing filter; A/D converter; FFT; power spectral density; logarithmic compressor; DSP processor.]

Figure 2.17  Schematic block diagram of a MMW radar transceiver. (© 2010 IEEE. Reprinted with permission from E. Jose et al., "Predicting Millimeter Wave Radar Spectra for Autonomous Navigation," IEEE Sensors Journal, Vol. 10, No. 5, 2010.)


B in Figure 2.17) then contains a signal with a beat frequency proportional to the target range. This signal is then passed through the 40 dB/decade, high pass (range compensation) filter (Section 2.5.2.1) for ultimately obtaining a constant returned power value, which is independent of range, for a target of given RCS. This signal is then convolved with a Blackman window [11] to reduce the spectral leakage in the frequency spectrum output of the fast Fourier transform (FFT) processor [12]. Each element of the FFT result is multiplied by its complex conjugate to obtain the signal power spectral density. The logarithm of the power spectral density is then obtained to compress the large dynamic range of the power value estimates [13], which is the output of the radar.

2.6.2.1  Relationship Between Range and Beat Frequency

From Figure 2.17, the transmitted signal in the FMCW process, υT(t), as a function of time t is

υT(t) = AT cos[ωc t + 2π(∆f/Td) ∫0^t t dt] = AT cos[(ωc + π(∆f/Td) t) t]    (2.26)

where AT is the amplitude of the carrier signal, ωc is the carrier frequency, ∆f is the FMCW swept frequency (chirp) bandwidth, and Td is the chirp repetition time, as defined in Figure 2.18. At any instant of time, the received echo signal, υR, from target i is shifted in time from the transmitted signal by a round-trip time τi. The received signal is

υR(t − τi) = AR cos[(ωc + π(∆f/Td)(t − τi))(t − τi)]    (2.27)

where AR is the received signal amplitude. In the mixer, the received signal is mixed with a portion of the transmitted signal in an analog multiplier, which can be mathematically described by



υT(t) υR(t − τi) = AT AR × cos[(ωc + π(∆f/Td) t) t] × cos[(ωc + π(∆f/Td)(t − τi))(t − τi)]    (2.28)

Using the well-known trigonometric identity for the product of two cosines, cos A cos B = 0.5[cos(A + B) + cos(A − B)], the output of the mixer, υout(t − τi), can be written as

υout(t − τi) = (AT AR / 2)(B1 + B2)    (2.29)

6. And decreasing the dynamic range of the signal necessary for further processing.


[Figure 2.18 plot: transmitted and received frequency chirps versus time (sweep of ∆f from fo over period Td, received chirp delayed by the chirp return delay time τi), and the resulting beat frequency fb^i versus time.]

Figure 2.18  Ideal FMCW frequency chirp definitions.

where

B1 = cos[2(ωc + π(∆f/Td)(t − τi)) t − (ωc − π(∆f/Td) τi) τi],
B2 = cos[2π((∆f/Td) τi) t − (ωc + π(∆f/Td) τi) τi]    (2.30)

The second cosine term, B2, is the signal containing the beat frequency. The output of the low-pass filter (which is intrinsic in the mixer stage) consists of this beat frequency component B2 and noise components with frequencies similar to the beat frequency, while other frequency components are removed. The output of the filter within the mixer stage of Figure 2.17, which selects the difference frequency signal component in Equation 2.29, is given by



υbeat(t, τi) = (AT AR / 2) cos[2π((∆f/Td) τi) t − (ωc + π(∆f/Td) τi) τi]    (2.31)

It can be seen that the received power in each radar range bin is derived from the amplitude of this signal, and that the beat frequency carries the range information. In particular, the beat frequency fb^i in Equation 2.31 is given by

fb^i = (∆f/Td) τi    (2.32)


Assuming the speed of radio waves to be c, the delay time between the transmitted and received chirps will be

τi = 2 ri / c    (2.33)

where ri is the range to the object being sensed. Substituting Equation 2.33 into 2.32 gives the relationship between the electronically measured beat frequency and range

fb^i = 2 ri ∆f / (c Td)   →   ri = (c Td / (2∆f)) fb^i    (2.34)
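Equation 2.34, together with the windowing/FFT chain of Figure 2.17, can be sketched end to end: simulate a beat signal for a target at a known range, window it, locate the spectral peak, and map the peak bin back to range. The sample rate, FFT length, and chirp parameters below are illustrative assumptions, not the parameters of the radars used in this book.

```python
import numpy as np

C = 3e8
DF, TD = 600e6, 1e-3   # chirp bandwidth (Hz) and sweep period (s): assumed
FS, N = 200e3, 1024    # A/D sample rate and FFT length: assumed

r_true = 20.0                         # metres
f_beat = 2 * r_true * DF / (C * TD)   # Equation 2.34: 80 kHz

t = np.arange(N) / FS
beat = np.cos(2 * np.pi * f_beat * t)      # noiseless beat signal (Eq. 2.31)
windowed = beat * np.blackman(N)           # window against spectral leakage
psd = np.abs(np.fft.rfft(windowed)) ** 2   # power spectral density
a_scope_db = 10 * np.log10(psd + 1e-12)    # logarithmic compression

q_peak = int(np.argmax(psd))               # peak range bin
f_est = q_peak * FS / N                    # bin -> beat frequency
r_est = C * TD * f_est / (2 * DF)          # Equation 2.34 inverted
print(round(r_est, 2))                     # close to 20 m (bin-quantized)
```

The estimated range is quantized to the FFT bin spacing, which is one reason real A-scopes report power per discrete range bin q rather than continuous range.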

Hence closer objects produce signals with lower beat frequencies, and vice versa. Equation 2.34 can also be derived directly by examination of Figure 2.18, which shows the FMCW frequency shift versus time, known as the transmitted and received "chirps." The derivation based on this diagram becomes useful when Doppler measurements are taken into consideration, as in Section 2.6.2.2. The Navtech Electronics and Acumine radars used in this book both use the sawtooth-type modulation shown, although triangular modulation is also possible [1]. In reality, it is impossible for the swept frequency to return from fo + ∆f back to fo instantaneously, and a small delay is inevitable (exaggerated in Figure 2.18). The solid line indicates the transmitted frequency chirp, and the dashed line corresponds to the frequency recorded at the receiver a time τi later. As in Equation 2.22, τi corresponds to the delay time of the transmitted signal traveling to and back from target i and is given by Equation 2.33. By applying similar triangles in Figure 2.18, it can be seen that

$$ f_b^i = \frac{\Delta f}{T_d} \tau_i \qquad (2.35) $$

and substituting Equation 2.33 into 2.35 yields

$$ f_b^i = \frac{2 r_i \Delta f}{c\, T_d} \quad \rightarrow \quad r_i = \frac{c\, T_d}{2 \Delta f}\, f_b^i \qquad (2.36) $$

as in Equation 2.34. The beat frequency signal of Equation 2.31 will be pursued further in Chapter 5, where the effect of receiver power and system leakage phase noise will be addressed, so that FMCW spectra (A-scope displays) can be realistically simulated and predicted. However, the diagrammatic form of the chirp in Figure 2.18 is useful to gain an understanding of Doppler FMCW radar and is the subject of the next section.

2.6.2.2  Doppler Measurements

In the previous analysis, target i was assumed to be stationary. If this is not the case, a Doppler frequency shift is superimposed onto the received chirp signal, as shown in Figure 2.19. In this case, with the correct processing electronics, both the target’s range and radial velocity can be estimated. In Figure 2.19, triangular, as opposed to sawtooth, modulation is shown. The FMCW radar of Figure 1.19 has this capability, using the triangular FMCW modulation method shown.

Figure 2.19  FMCW chirp under Doppler frequency shift.

From Figure 2.19, it can be seen that the beat frequencies are now made up of a component due to the range to target i, f_r^i, and a component due to the relative velocity, f_d^i. If one assumed that the measured beat frequencies (now f_r^i − f_d^i and f_r^i + f_d^i) were proportional to range, the direct use of Equation 2.36 would yield incorrect range estimates. The received FMCW chirp is now shifted (upward in the example shown in the figure). This results in one section of the beat frequency graph being decreased to f_r^i − f_d^i and the other increased to f_r^i + f_d^i. For a target approaching the radar, the portion of the beat frequency f_b^i(up), which results during the increasing section of the FM chirp, will be given by

$$ f_b^i(\mathrm{up}) = f_r^i - f_d^i \qquad (2.37) $$

and similarly, during the decreasing portion of the FM chirp,

$$ f_b^i(\mathrm{down}) = f_r^i + f_d^i \qquad (2.38) $$

The frequency component f_r^i caused by the range to the target can be extracted by averaging the beat frequency over an entire modulation cycle, giving

$$ f_r^i = \frac{1}{2}\left( f_b^i(\mathrm{up}) + f_b^i(\mathrm{down}) \right) \qquad (2.39) $$

Similarly, the Doppler frequency can be determined either by halving the difference between f_b^i(up) and f_b^i(down) or by the installation of a frequency counter, which can be switched every half modulation cycle, allowing f_b^i(up) and f_b^i(down) to be measured separately. The danger with this approach occurs if f_r^i < f_d^i (target at close range, with high speed). This results in the averaging and differencing methods just described being reversed. Once f_r^i has been determined, the range to target i, from Equation 2.36, is now given by

$$ r_i = \frac{c\, T_d}{2 \Delta f}\, f_r^i \qquad (2.40) $$

and the target’s radial velocity V_i can be determined from

$$ V_i = \frac{c\, f_d^i}{2 f_o} \qquad (2.41) $$
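As a sketch, the recovery of range and radial velocity from the up- and down-chirp beat frequencies (Equations 2.37 through 2.41) can be written directly. The radar parameters and function name below are hypothetical, chosen only to exercise the formulas:

```python
C = 3.0e8  # speed of radio waves (m/s)

def range_and_velocity(fb_up, fb_down, delta_f, t_d, f_o):
    """Recover range and radial velocity of target i from the beat
    frequencies measured during the rising and falling chirp sections.

    fb_up, fb_down : beat frequencies f_b^i(up), f_b^i(down) (Hz)
    delta_f        : swept frequency bandwidth (Hz)
    t_d            : chirp duration (s)
    f_o            : chirp start (carrier) frequency (Hz)
    """
    f_r = 0.5 * (fb_up + fb_down)             # range component (Eq. 2.39)
    f_d = 0.5 * (fb_down - fb_up)             # Doppler component, approaching target
    r_i = (C * t_d / (2.0 * delta_f)) * f_r   # range (Eq. 2.40)
    v_i = C * f_d / (2.0 * f_o)               # radial velocity (Eq. 2.41)
    return r_i, v_i

# Hypothetical parameters: 150-MHz sweep over 1 ms at a 77-GHz carrier.
delta_f, t_d, f_o = 150e6, 1e-3, 77e9
# Forward model: a target at 50 m approaching at 10 m/s.
f_r_true = 2.0 * 50.0 * delta_f / (C * t_d)
f_d_true = 2.0 * 10.0 * f_o / C
r, v = range_and_velocity(f_r_true - f_d_true, f_r_true + f_d_true,
                          delta_f, t_d, f_o)
```

Running the forward model through the inverse recovers the assumed 50 m range and 10 m/s closing speed, confirming that the averaging and differencing steps undo the Doppler coupling.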

2.6.2.3  Multiple Line-of-Sight Targets

In the presence of multiple targets down range, the mixer output (point B in Figure 2.17) will contain multiple beat frequency signals accordingly. In linear FMCW radars (in which a linear chirp is produced, as in Figures 2.18 and 2.19), there will be a beat frequency component for each target, already referred to as f_b^i corresponding to target i. The FFT section of the FMCW radar (Figure 2.17) will then produce a power value corresponding to each beat frequency value f_b^i (as well as to all other frequency bins in the interval f_o to f_o + Δf). Hence Equation 2.36 (for range-only FMCW radars) and Equations 2.40 and 2.41 (in the case of Doppler FMCW radar) can be applied for each target i. Indeed, in the case of FMCW radar, Equation 2.36 gives the relationship between the range to the discrete range bins r(q) introduced in Section 2.2 and the electronically measured beat frequencies. The beat frequency space f_o to f_o + Δf is divided into Q discrete frequency values, and the FFT processor, after logarithmic compression, then yields the received power values S_log_rc(q), 1 ≤ q ≤ Q, at those discrete beat frequency values, which is the final output of the FMCW radar. This is the reason that, uniquely under the FMCW process, the well-known A-scope can be referred to as a received power spectrum.
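A minimal simulation (with hypothetical sweep parameters) illustrates how the FFT stage turns a multi-target beat signal into a received power value per frequency bin, with each bin frequency mapped to a range bin via Equation 2.36:

```python
import numpy as np

# Hypothetical FMCW parameters: 150-MHz sweep over a 1-ms chirp, 8192 samples.
C, DELTA_F, T_D, N = 3.0e8, 150e6, 1e-3, 8192
fs = N / T_D                      # sample rate over one chirp
t = np.arange(N) / fs

def beat_signal(ranges, amps):
    """Sum of beat-frequency sinusoids, one per line-of-sight target,
    with f_b = 2 r \u0394f / (c T_d) as in Equation 2.36."""
    sig = np.zeros(N)
    for r, a in zip(ranges, amps):
        fb = 2.0 * r * DELTA_F / (C * T_D)
        sig += a * np.cos(2.0 * np.pi * fb * t)
    return sig

# Two line-of-sight targets down range, at 40 m and 120 m.
sig = beat_signal([40.0, 120.0], [1.0, 0.5])
spectrum = np.abs(np.fft.rfft(sig)) / N              # per-bin received amplitude
bin_freqs = np.fft.rfftfreq(N, 1.0 / fs)             # discrete beat frequencies
bin_ranges = C * T_D * bin_freqs / (2.0 * DELTA_F)   # range bins r(q), Eq. 2.36

# The two strongest bins should sit at the two target ranges.
peaks = bin_ranges[np.argsort(spectrum)[-2:]]
```

With these parameters the bin spacing is 1/T_d = 1 kHz, so both targets fall on exact bins and appear as two clean spectral peaks at 40 m and 120 m.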

2.7  Sources of Uncertainty in Radar

Radar noise is the unwanted power that impedes the performance of the radar. If the received power from a target is comparable to, or falls below, the mean value of the noise power, then the detection of objects and correct estimation of their range is likely to fail. This gives rise to the concept of a minimum detectable signal S_min^lin, which is the weakest received power value that the receiver can detect. From the radar equation (Equation 2.11), the value of S_min^lin can be used to compute a term known as the maximum radar range r_max, given by

$$ r_{max} = \left( \frac{S_T (G_{Antenna})^2 \lambda^2\, \Gamma^{RCS}}{(4\pi)^3 S_{min}^{lin}\, L} \right)^{\frac{1}{4}} \propto \left( \frac{\Gamma^{RCS}}{S_{min}^{lin}} \right)^{\frac{1}{4}} \qquad (2.42) $$

7. Averaging f_b^i(up) and f_b^i(down) will yield f_d^i instead of f_r^i and vice versa.


In radar design and use, it is therefore imperative that the mechanisms by which radar noise is formed and propagated are understood. In the following sections, an overview of the causes and effects of uncertainty common to all radar types is given, followed by sections that specifically focus on uncertainty in TOF and FMCW radars.

2.7.1  Sources of Uncertainty Common to All Radar Types

2.7.1.1  Receiver Noise

Equation 2.42 gave the theoretical maximum possible range capability of a radar in terms of the minimum power S_min^lin, which its receiver can detect. Often, however, S_min^lin is not known for a particular amplifier, and even if it is, Equation 2.42 gives an overestimate of the true maximum range r_max. Amplifiers are typically accompanied by data that give their minimum operating signal-to-noise power (SNP) and a quantity known as their noise figure, F_n. This section therefore derives the maximum radar range attainable in terms of these typically available quantities. Thermal noise is generated in any radar receiver’s electronics. The theoretical noise power is given by S_Th_Noise, where

$$ S_{Th\_Noise} = K_{Boltz}\, T\, \Delta f_{BW} \qquad (2.43) $$

where K_Boltz is the Boltzmann constant, T is the temperature, and Δf_BW is the receiver bandwidth [14]. Therefore, an ideal receiver with power gain G_Rec would produce a noise power G_Rec S_Th_Noise. In actual radar receivers, however, the noise power is usually greater than G_Rec S_Th_Noise. The exact causes of this are in general complex, and a useful way to characterize a real radar receiver is to consider its noise power to be equal to the ideal thermal noise of Equation 2.43, multiplied by the noise figure F_n, which is defined as

$$ F_n = \frac{S_{Rec\_Noise}}{G_{Rec}\, S_{Th\_Noise}} = \frac{S_{Rec\_Noise}}{K_{Boltz}\, T_0\, \Delta f_{BW}\, G_{Rec}} = \frac{\text{Actual Receiver Noise}}{\text{Ideal Receiver Noise at Standard Temperature}} \qquad (2.44) $$

where S_Rec_Noise is the actual receiver’s output noise power and T_0 is the standard temperature. In most radars, S_Rec_Noise represents the noise power at the output of the intermediate frequency (IF) amplifier, which, for the FMCW radar in Figure 2.17, is labeled as “Low noise amplifier” and follows the mixer. The gain of the receiver G_Rec is defined as

$$ G_{Rec} = \frac{S_{Rec\_Out}}{S_{Rec\_In}} \qquad (2.45) $$

meaning that Equation 2.44 can be rewritten as

$$ F_n = \frac{S_{Rec\_In} / S_{Th\_Noise}}{S_{Rec\_Out} / S_{Rec\_Noise}} \qquad (2.46) $$

8. According to IEEE standards, T_0 = 290 K.


which shows that the noise figure can be viewed as the loss in signal-to-noise power as a signal passes through the receiver. Between them, Equations 2.44 and 2.46 can be further rearranged to give

$$ S_{Rec\_In} = \frac{K_{Boltz}\, T_0\, \Delta f_{BW}\, F_n\, S_{Rec\_Out}}{S_{Rec\_Noise}} \qquad (2.47) $$

This is a useful relationship, since it allows the minimum detectable power S_min^lin to be defined in terms of a minimum receiver SNP, i.e.,

$$ S_{min}^{lin} = K_{Boltz}\, T_0\, \Delta f_{BW}\, F_n \left( \frac{S_{Rec\_Out}}{S_{Rec\_Noise}} \right)_{min} \qquad (2.48) $$

Hence, with knowledge of a receiver’s noise figure and the minimum SNP of the IF amplifier necessary for detection, the radar equation yields a maximum range capability in terms of these quantities. Therefore r_max can be expressed in terms of available data, rather than S_min^lin, giving

$$ r_{max} = \left( \frac{S_T (G_{Antenna})^2 \lambda^2\, \Gamma^{RCS}}{(4\pi)^3 K_{Boltz}\, T_0\, \Delta f_{BW}\, F_n\, (S_{Rec\_Out}/S_{Rec\_Noise})_{min}\, L} \right)^{\frac{1}{4}} \propto \left( \frac{\Gamma^{RCS}}{(S_{Rec\_Out}/S_{Rec\_Noise})_{min}} \right)^{\frac{1}{4}} \qquad (2.49) $$
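As a sketch of how Equations 2.43, 2.48, and 2.49 combine in practice, the following computes a minimum detectable signal and the corresponding maximum range. All transmitter, antenna, and receiver parameters below are hypothetical illustrative values, and the helper names are not from the text:

```python
import math

K_BOLTZ, T0 = 1.380649e-23, 290.0  # Boltzmann constant (J/K), IEEE standard temperature (K)

def s_min(noise_figure_db, bw_hz, snp_min_db):
    """Minimum detectable power (Equation 2.48) from the receiver noise
    figure and the minimum output SNP required for detection."""
    fn = 10.0 ** (noise_figure_db / 10.0)
    snp = 10.0 ** (snp_min_db / 10.0)
    return K_BOLTZ * T0 * bw_hz * fn * snp

def r_max(s_t, g_ant, lam, rcs, loss, noise_figure_db, bw_hz, snp_min_db):
    """Maximum radar range (Equation 2.49)."""
    smin = s_min(noise_figure_db, bw_hz, snp_min_db)
    return (s_t * g_ant**2 * lam**2 * rcs /
            ((4.0 * math.pi)**3 * smin * loss)) ** 0.25

# Hypothetical 77-GHz radar: 1-W transmit power, 30-dB antenna gain,
# 1-m^2 RCS, 3-dB (factor 2) losses, 5-dB noise figure, 10-MHz bandwidth,
# 13-dB minimum SNP at the IF amplifier output.
lam = 3.0e8 / 77e9
rmax = r_max(1.0, 1000.0, lam, 1.0, 2.0, 5.0, 10e6, 13.0)
```

The fourth-root dependence is worth noting: because r_max ∝ (Γ^RCS)^(1/4), a sixteen-fold increase in RCS only doubles the maximum range.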

This analysis has demonstrated the effect of a receiver’s noise figure and minimum detectable SNP on the maximum range achievable for any radar, based on the radar equation. In particular, a minimum detectable signal is necessary for any kind of radar operation. Since the analysis has focused on the received power and its associated noise, no assumptions have been made regarding the radar type or its method of range estimation.

Logarithmic Compression of Noise

It is interesting to note the effect of receiver noise under the logarithmic compression of the received power signal from a target. Assuming additive receiver noise S^noise, Equation 2.12 becomes

$$ S^{lin}(q) = \beta \frac{\Gamma^{RCS}}{[r(q)]^4} + S^{noise}(q) \qquad (2.50) $$

Under the logarithmic compression described in Section 2.5.1, Equation 2.14 becomes

$$ S^{log}(q) = 10 \log_{10} \left( \frac{\beta\, \Gamma^{RCS}}{[r(q)]^4} + S^{noise}(q) \right) \qquad (2.51) $$

which can be written as

$$ S^{log}(q) = 10 \log_{10} \left[ \frac{\beta\, \Gamma^{RCS}}{[r(q)]^4} \left( 1 + \frac{S^{noise}(q)}{\beta\, \Gamma^{RCS} / [r(q)]^4} \right) \right] \qquad (2.52) $$


and expanding the logarithm gives

$$ S^{log}(q) = 10 \log_{10} \left[ \frac{\beta\, \Gamma^{RCS}}{[r(q)]^4} \right] + 10 \log_{10} \left[ 1 + \frac{S^{noise}(q)}{\beta\, \Gamma^{RCS} / [r(q)]^4} \right] \qquad (2.53) $$

Note that if the target’s RCS Γ^RCS is large and/or the range r(q) is small,

$$ \frac{S^{noise}(q)}{\beta\, \Gamma^{RCS} / [r(q)]^4} \ll 1 \qquad (2.54) $$

and the second term in Equation 2.53 can be expanded as a Taylor series, which, to first order, allows it to be approximated as

$$ S^{log}(q) \approx 10 \log_{10} \left[ \frac{\beta\, \Gamma^{RCS}}{[r(q)]^4} \right] + 10\, \frac{S^{noise}(q)}{\beta\, \Gamma^{RCS} / [r(q)]^4} \log_{10}(e) \qquad (2.55) $$

This shows that, on a logarithmic power scale, the noise in the received power signal S^log(q) from a target at range r(q) (range bin q) and RCS Γ^RCS is approximately additive and scaled by a factor ([r(q)]^4 log_10(e) / βΓ^RCS). However, as the range r(q) to a target increases and/or its RCS Γ^RCS decreases,

$$ \frac{S^{noise}(q)}{\beta\, \Gamma^{RCS} / [r(q)]^4} \gg 1 \qquad (2.56) $$

and according to Equation 2.52,

$$ S^{log}(q) \approx 10 \log_{10} \left( S^{noise}(q) \right) \qquad (2.57) $$
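The two regimes of Equations 2.55 and 2.57 can be checked numerically. The constants β, Γ^RCS, and S^noise below are arbitrary illustrative values, not taken from the text:

```python
import math

BETA, RCS, S_NOISE = 1.0e4, 10.0, 1.0e-3  # hypothetical constants: beta, Gamma^RCS, S^noise

def s_log(r):
    """Exact log-compressed return, Equation 2.51 (noise taken as constant)."""
    return 10.0 * math.log10(BETA * RCS / r**4 + S_NOISE)

def s_log_near(r):
    """First-order approximation for strong returns, Equation 2.55."""
    target = BETA * RCS / r**4
    return 10.0 * math.log10(target) + 10.0 * (S_NOISE / target) * math.log10(math.e)

# Close range: the scaled-additive-noise approximation tracks the exact value...
near_err = abs(s_log(5.0) - s_log_near(5.0))
# ...while at long range the return collapses onto the noise floor (Eq. 2.57).
far_err = abs(s_log(500.0) - 10.0 * math.log10(S_NOISE))
```

At r = 5 m the target term dominates and the approximation error is negligible; at r = 500 m the r^4 attenuation leaves the log-compressed signal within a few hundredths of a decibel of the noise floor.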

Equations 2.55 and 2.57 between them explain why, on a logarithmic power scale, the received power signal from a fixed target (constant Γ^RCS) appears to be corrupted with reduced noise values at small ranges but soon becomes buried in a signal with higher noise power, corresponding to S^noise, as its range r(q) is increased.

2.7.1.2  Interference and Clutter

Uncertainty in radar measurements is also attributable to external noise sources. In the case of radar, such disturbances range from natural sources, such as sunlight, to manmade sources, such as mobile transmitting devices, fluorescent lights, and engine ignition systems. If a radar will be operated near known sources of interference, it can be useful to determine the frequency bandwidth(s) within which the interfering source(s) radiate their energy. Choosing a radar operating frequency outside of these bandwidths would be a useful initial deterrent. A radar’s front-end receiver amplifier electronics is typically tuned to have an extremely high gain at the transmitted frequencies, but low gain to all other signals. This will diminish the effect of many sources of interference, except those that emit broadband energy, such as sunlight, which emits energy over a very wide frequency bandwidth, typically


encompassing the operating frequencies of all radar devices. A further protection against interference is that radar receiver antenna gains are designed to make them directional, so that, theoretically, energy from only one particular direction (normally the opposite of the direction in which the energy was transmitted) is detected, thus eliminating interference from other directions. Radar antenna gain characteristics, however, often include side lobes—angular regions outside the reception direction of interest, in which local gain maxima occur. These are a means by which interference can degrade radar performance. An obvious form of interference is that of cross talk, sometimes referred to as the ghost in the machine problem. One transmitting device that operates near to the receiver is the transmitter itself, and, even worse, it operates at the frequencies the receiver is tuned to receive. In a perfect radar system, the receiver should be completely isolated from the transmitter, so that the only way it can be influenced by any of the transmitted energy is after that energy is reflected by a target in the direction in which the receiver’s aperture points. In reality, however, leakage occurs due to the relatively high frequencies used, and stray capacitances between the transmitter and receiver result in an undesired direct communications link between them. This form of interference must be minimized at the design stage. Another form of interference is that which is caused deliberately, possibly to mask real targets, known as jamming. This can take the form of electronic countermeasures. Certain types of jammers take advantage of the side lobes just discussed. Known as standoff jammers, these direct high-powered signals toward known radar locations from a particular direction, attempting to shield targets that would normally be detected by the radar in another direction. The only way in which such jamming can occur is through the side lobes of the radar.
A self-screening jammer on the other hand is carried by the target attempting to elude detection [9]. If a radar attempts to track such a target, jamming energy is directed into the radar’s main detection lobe. Another form of interference is clutter, which can be deliberate or natural. During World War II, aluminum metal foil or metalized glass fiber (collectively referred to as chaff) was often dispersed into the air to try and direct radar detectors away from aircraft. Chaff appears as targets on radar screens or can swamp the entire field of view with false targets. Clutter is defined as unwanted echoes that exceed the detection threshold of a radar. These correspond to the detection of objects of noninterest. Clutter can result from ground, sea, rain, animals, insects, chaff, and atmospheric turbulence. Discrimination between true targets and clutter is an extremely active research topic in radar and is of such relevance that a signal-to-clutter ratio (SCR) as well as the usual SNR must be defined. The effects of clutter play an important role in autonomous navigation applications, since the detection of such objects misleads the navigation filters used in automatic vehicle localization and mapping. Techniques to minimize the effects of clutter involve stochastic navigation filters, which exploit the randomness of clutter detections as opposed to the “less random” nature of actual target detections. This is a major topic in Chapters 8 and 9, where stochastic geometry is applied to the problems of autonomous mapping and SLAM. Also, in Chapter 10, the effects of sea clutter will be


discussed, along with methods to minimize its disturbance on SLAM in coastal environments.

2.7.1.3  Received Power Fluctuations—Swerling Models

In much of the radar literature, the effects of receiver noise during target presence are assumed to be greatly outweighed by target RCS fluctuations, resulting in a random behavior of the received power at the radar’s antenna [1]. Under the Swerling models, a target’s RCS is assumed to fluctuate because a real target typically consists of many scattering elements, which result in correspondingly many returns to the radar. Further, even a single scattering element will, in general, produce many different power returns to a radar over time, due to changes in the relative incidence angle between the radar and target, which is a result of relative motion, and due to the frequency and polarization of the transmitted wave. As a result, Swerling models are used to characterize these fluctuations. These assume that a target’s RCS follows a Rayleigh distribution with statistical independence occurring either on a scan-to-scan basis (slow fluctuations) or between individual A-scopes, termed pulse to pulse (fast fluctuations). In particular, a scan corresponds to a complete 360° revolution of the radar’s swash plate, after which it has completely scanned the surveillance region. During the detection of a target, a radar typically receives many signals (termed pulses) within a small angular increment of the surveillance region. If the received signal amplitude is constant over that small angular increment, but changes the next time that increment is scanned, the RCS of the target may have changed. This slow type of fluctuation is known as scan-to-scan fluctuation. Conversely, if the received power values change fast enough that they appear to be independent between each “pulse,” the fluctuation is termed pulse to pulse. These phenomena give rise to four of the five Swerling models as follows.

·  Swerling I model: The received power per pulse within an incremental section of a scan is assumed constant but is uncorrelated between scans. The distribution of the target’s RCS, p(Γ^RCS), used to describe this scan-to-scan fluctuation is Rayleigh in form, and is given by

$$ p(\Gamma^{RCS}) = \frac{1}{\hat{\Gamma}^{RCS}} \exp\left( -\frac{\Gamma^{RCS}}{\hat{\Gamma}^{RCS}} \right) \qquad (2.58) $$

where \hat{\Gamma}^{RCS} is the average RCS over all target fluctuations.

·  Swerling II model: The fluctuations are much faster than under the Swerling I model and assumed uncorrelated on a pulse-to-pulse basis. This pulse-to-pulse fluctuation model is described by the same PDF as Equation 2.58.

·  Swerling III model: This model assumes scan-to-scan correlation as in the Swerling I model, but with a Chi-squared PDF given by

$$ p(\Gamma^{RCS}) = \frac{4\, \Gamma^{RCS}}{\left( \hat{\Gamma}^{RCS} \right)^2} \exp\left( -\frac{2\, \Gamma^{RCS}}{\hat{\Gamma}^{RCS}} \right) \qquad (2.59) $$

·  Swerling IV model: In this model, the fluctuations are pulse to pulse, as under the Swerling II model, but the PDF is that of Equation 2.59.

Models I and II correspond to targets that comprise many independent scatterers, of which none are dominant, and are often used to represent received power fluctuations from aircraft. Models III and IV correspond to targets made up of a single dominating scatterer, with smaller, independent scatterers, and are used to represent the fluctuations from objects such as a missile [15]. To complete the set of Swerling models, a final case remains:

·  Swerling V model: This represents an assumed non-fluctuating target. The received power is assumed unknown with no RCS fluctuation.
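As an illustrative sketch (not from the text), RCS draws under the fluctuating models can be generated from the PDFs of Equations 2.58 and 2.59, noting that Equation 2.58 is an exponential density in Γ^RCS and Equation 2.59 is a Gamma density with shape 2 and scale Γ̂^RCS/2:

```python
import random

def swerling_1_2_rcs(mean_rcs, rng=random):
    """Draw an RCS from the Rayleigh-power PDF of Equation 2.58
    (Swerling I: one draw per scan; Swerling II: one draw per pulse)."""
    return rng.expovariate(1.0 / mean_rcs)

def swerling_3_4_rcs(mean_rcs, rng=random):
    """Draw an RCS from the chi-squared-type PDF of Equation 2.59,
    i.e. a Gamma(shape=2, scale=mean_rcs/2) variate."""
    return rng.gammavariate(2.0, mean_rcs / 2.0)

rng = random.Random(0)
n, mean_rcs = 200_000, 5.0
m1 = sum(swerling_1_2_rcs(mean_rcs, rng) for _ in range(n)) / n
m3 = sum(swerling_3_4_rcs(mean_rcs, rng) for _ in range(n)) / n
# Both PDFs have mean RCS equal to mean_rcs; only their spread differs,
# the Swerling III/IV density being more concentrated about its mean.
```

Whether a fresh draw is taken per scan (models I and III) or per pulse (models II and IV) is what distinguishes the slow from the fast fluctuation cases.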

Note that the Swerling V model could correspond only to a target that is a perfect sphere, exhibiting exactly the same material properties when viewed from all angles. Other fluctuation models considered in the literature include Ricean distributions, with comparisons given in [16]. In this book, the Swerling I model is assumed and adopted for radar data modeling and grid-based mapping in Chapters 6 and 7, respectively.

2.7.1.4  Speckle

Most objects can be considered to consist of several reflectors or scatterers (a concept that gave rise to the Swerling models of Section 2.7.1.3). The radio energy produced by radar is coherent and, in the case of MMW radar, the wavelength of the energy is in the millimeter range. This implies that the reflected energy from the multiple scatterers can result in constructive as well as destructive interference at the radar’s antenna. Hence, the received power from each scatterer can combine constructively, resulting in a high-powered reception, or destructively, resulting in a received power value that possibly falls below the minimum detectable signal. As a result, the received power in each range bin is typically the result of many scatterers, which gives rise to high- and low-powered bins in a random fashion, known as speckle. To demonstrate the effect, Figure 2.20 shows an aerial view of an urban scene, which is scanned at ground level by the K2Pi MMW FMCW radar of Figure 1.19. The effect of speckle is clearly visible in the grainy PPI image of Figure 2.21(a). Here, the darker dots correspond to constructive interference, and a grainy image is apparent. Minimizing the effect of this noise has been a major research area, with methods of reducing it ranging from multilook filtering [17], which is applied during the acquisition process, to spatial filtering methods, such as the median filter or the more sophisticated Lee-Sigma or Frost filters [18], applied to the radar image as a whole. Figure 2.21(b) shows speckle reduction based on multilook imaging, achieving an incoherent average over successive looks. As with all noise-reduction methods, the challenge in speckle reduction is that of minimizing the graininess of images while preserving the true targets.

Figure 2.20  An aerial image of a scene scanned by the K2Pi radar. (Courtesy of IRSTEA, TSCF Research Unit, Aubière, France.)

Figure 2.21  The effect of speckle in single antenna radar images: (a) speckle increases image noise; (b) speckle reduction with multilook techniques. The dark areas show high power returns. (Courtesy of IRSTEA, TSCF Research Unit, Aubière, France.)

In the scans acquired throughout this book, the radar beam is mechanically scanned, and line-of-sight A-scopes are recorded in an asynchronous manner. This means that, when multiple scans are recorded, a particular A-scope in one scan is not exactly 360° ahead of the corresponding A-scope from the previous scan, although it is as close as possible to it. In a static or slowly changing scene, this allows scan integration to take place, and averaging multiple A-scopes at slightly different bearing angles can aid in the reduction of speckle, since reflections should be statistically independent and result from different scatterer configurations. A scan integration method of this type will be the subject of Chapter 6.

2.7.1.5  Multi-Path Effects

In many autonomous robotic applications, radars are operated near to ground level, and possibly in the presence of flat surfaces such as walls. This means that multipath effects are likely, which result in the possibility of multiple versions of a single target being detected, at incorrect bearing and elevation angles. This effect is demonstrated in Figure 2.22. In the figure, the paths of the radio waves from and back to a radar, when incident on the true target, can be many. Multipath reflections are possible along the direct line-of-sight path OAO and, as a result of the vertical plane, along the indirect path OBABO (and possibly path OBAO). Since it is possible that the radio wave is received along direction BO, a multipath image results in bearing direction OB, at a range equivalent to the range O to B plus the range B to A. This results in the false image at C, which is behind the vertical plane. A similar effect can take place in the horizontal plane, resulting in a false image below that plane, in elevation direction ODE. In principle, such effects can cause severe problems for autonomous robotic applications.

Figure 2.22  The principle of multipath reflections.

To demonstrate the effect with real data, a “bad scenario” environment was scanned in a horizontal plane in Figure 2.23. This scan was taken in a sports hall, with four planar walls and an open door (labeled), and shows successive A-scope, S^log, displays at discrete bearing angular increments of 2.4°. The relatively high-powered peaks can be observed at the locations of the walls, as well as the corresponding reduction in received power at the open door.

Figure 2.23  Multiple superimposed A-scope displays at discrete bearing angles, from a complete scan in an indoor sports hall, rectangular in shape with an open door.

Figure 2.24 shows the result of applying a standard cell averaging, constant false alarm rate (CA-CFAR) processor to the linearized data S^lin of Figure 2.23. The CA-CFAR processor will be explained in Chapter 3. At this stage, it suffices to say that the CA-CFAR processor yields statistically based decisions on whether or not a received power value S^lin(q) at range bin q, in a given A-scope, at a particular bearing angle, is a target or not. These decisions, or target hypotheses, with values 0 or 1, are plotted as a function of q along each bearing angle in Figure 2.24. In this radar representation, many multipath targets are visible. For example, false images of the walls, as well as the open door, are evident (labeled), due to the multiple radio wave paths shown.

Figure 2.24  Landmarks extracted along each line of sight, radial A-scope, from the scan shown in Figure 2.23.

The detection and removal of such multipath effects is challenging. It is evident in Figure 2.22 that if the beam width of the radar could be reduced as much as possible, then maximum energy would only be directed along path OA, and, further, the radar antenna would only be receptive in this direction. All real radar antennas, however, have an antenna gain beam pattern, which contains side lobes (Section 2.7.1.2). This means that energy is released and can be received at angles other than the direction in which the radar is considered to point, opening the possibility of multipath reflections. The maxima and minima in a radar antenna’s beam pattern can further result in a phenomenon known as multipath fading, in which radio energy that has undergone multiple reflections enters the radar’s aperture at angles in and out of the side lobes [1]. This causes false images to appear and disappear as the radar and/or target moves. The radars of Figures 1.6 and 1.15(a) have antennas with pencil beam gain characteristics. This concentrates the transmitted energy over a beamwidth of just 1.8° and results in a focused system.
Close observation of the received power scan of Figure 2.23 shows that the received power from the erroneous images is typically much lower than that from the true walls of the sports hall, so that further processing could be possible to identify such false targets based on their relative power value. However, if it is intended to use the radar for multiple line-of-sight sensing, by using the penetration abilities outlined in Section 2.4, care must be taken not to confuse secondary line-of-sight targets with multi-path reflections.
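Although the CA-CFAR processor itself is the subject of Chapter 3, a minimal sketch of the cell-averaging idea behind the 0/1 target hypotheses of Figure 2.24 can already be given. The window sizes and threshold scale below are illustrative only, not those used for the figure:

```python
def ca_cfar(power, n_train=8, n_guard=2, scale=4.0):
    """Minimal cell-averaging CFAR sketch: for each cell under test,
    estimate the local noise power from n_train training cells on each
    side (skipping n_guard guard cells) and declare a target hypothesis
    of 1 when the cell exceeds scale times that estimate."""
    q_max = len(power)
    hypotheses = [0] * q_max
    for q in range(q_max):
        train = []
        for side in (-1, 1):
            start = q + side * (n_guard + 1)
            for k in range(n_train):
                idx = start + side * k
                if 0 <= idx < q_max:
                    train.append(power[idx])
        if train and power[q] > scale * sum(train) / len(train):
            hypotheses[q] = 1
    return hypotheses

# Flat unit-power noise with one strong return in range bin 30.
scan = [1.0] * 64
scan[30] = 25.0
hyp = ca_cfar(scan)
```

Because the threshold adapts to the local noise estimate, the single strong return is declared a target while the flat noise floor produces no false alarms; the guard cells keep the target’s own power out of the noise estimate.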

2.8  Uncertainty Specific to TOF and FMCW Radar

The sources of uncertainty addressed so far are common to almost any form of radar, irrespective of their principle of operation. Further sources of uncertainty exist, which are specific to particular detection and range estimation types. To match the radar range measurement techniques considered most appropriate for robotic systems, the following sections analyze radar uncertainty based specifically on the TOF and FMCW range estimation techniques.

TOF range measurements can be achieved by counting very-high-speed clock pulses from the leading edge of the transmitted pulse (sometimes called the bang pulse) to that of the echo pulse. Due to the high speed of electromagnetic wave propagation (c ≈ 3 × 10⁸ m/s), this requires clock frequencies in the GHz range. At such frequencies, pulses do not have the “nice” rectangular form of the schematic diagram shown in Figure 2.16 but, due to stray capacitances, are distorted, with heavily rounded edges, as shown schematically in Figure 2.25. Receiver noise also corrupts the received power signal, meaning that the already distorted pulses must be discernible from this noise. Assuming a detection threshold can be successfully set, so that the noise power values do not exceed it (this will be the subject of Chapter 3), uncertainty still arises due to the exact location on the received pulse’s rising edge at which the threshold is crossed. Due to pulse distortion, this rising edge is no longer vertical, resulting in ambiguities in the timing from the transmitted to received pulses, known as range walk. This is shown schematically in Figure 2.26.

Figure 2.25  TOF pulses in the presence of noise. Note the critical choice of detection threshold for correct TOF estimation.

Figure 2.26  The production of range walk errors in TOF pulsed detection systems.
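The clock-rate requirement can be quantified: one clock period of round-trip timing ambiguity corresponds to a range quantization of c/(2 f_clock). A small sketch (the function name is illustrative):

```python
C = 3.0e8  # speed of electromagnetic wave propagation (m/s)

def tof_range_resolution(clock_hz):
    """Range quantization of a counter-based TOF measurement: one clock
    period of round-trip time corresponds to c / (2 * f_clock) of range."""
    return C / (2.0 * clock_hz)

# Even a 1-GHz counter quantizes range to 15 cm; centimeter-level
# quantization demands a multi-GHz clock.
res_1ghz = tof_range_resolution(1e9)
res_10ghz = tof_range_resolution(10e9)
```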

2.8.1.1  Constant Fraction Timing Discriminators

Constant fraction timing discriminators (CFTDs) are often implemented to reduce this effect. CFTDs employ the trick of applying the received pulse, and a delayed version of it, to the inputs of a comparator. This then allows the point at which the delayed version rises above the undelayed version to stop the timer. In some versions, the undelayed received pulse is attenuated before being input to the comparator [9]. The delay time applied and the level of attenuation used, if any, can then be calibrated to minimize range walk. This technique works well if the received pulse’s shape always follows a predictable form but is still prone to error when target characteristics affect the received pulse shape. Further refinements include the application of split-gate trackers, crossover timing discriminators, and their combinations [3]. In the latter technique, the derivative of the received pulse is determined, and its zero crossing is used to stop the timer. This method yields a stop time that is independent of the received pulse’s amplitude but still assumes pulse symmetry and amplifies high-frequency noise, increasing the danger of noise triggering the timer stop. In practice, combinations of these techniques are applied to both the transmitted and received pulses to ensure that any delays and/or distortions introduced by each technique are reproduced on both pulses. This is so that they cancel when the relative time between them is determined.
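The comparator trick described above can be sketched on synthetic samples. The pulse shape, delay, and attenuation values here are illustrative; the point is that the crossing tracks a fixed fraction of the pulse amplitude:

```python
def cftd_stop_index(pulse, delay, attenuation=0.5):
    """Constant-fraction timing sketch: return the first sample index at
    which a delayed copy of the received pulse rises above an attenuated,
    undelayed copy, as seen at the comparator inputs."""
    for k in range(delay, len(pulse)):
        if pulse[k - delay] > attenuation * pulse[k]:
            return k
    return None

# A rounded (triangular) pulse; scaling its amplitude should not move the
# constant-fraction crossing, unlike a fixed detection threshold, which
# produces the range walk of Figure 2.26.
pulse = [0, 1, 2, 3, 4, 5, 4, 3, 2, 1, 0]
weak = [0.2 * p for p in pulse]
k_strong = cftd_stop_index(pulse, delay=3)
k_weak = cftd_stop_index(weak, delay=3)
```

Both the strong and the five-times-weaker pulse trigger the comparator at the same sample, which is exactly the amplitude insensitivity that mitigates range walk.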

2.8.2  Uncertainty in FMCW Radars

The two main sources of error in FMCW radars are receiver noise and phase noise. Receiver noise affects the power reading, as already described in Section 2.7.1.1, while phase noise affects the range estimate itself.

2.8.2.1  Phase Noise

In FMCW radars, a source of noise that affects the A-scope displays (received power versus range spectra) is phase noise. Phase noise is generated by frequency instability of the oscillator due to thermal noise and can result from the leakage of the transmitted signal into the mixer [19]. Ideally for a particular input voltage to the VCO (Figure 2.17), the output should have a single spectral component. In reality, the VCO generates a spectrum of frequencies with finite bandwidth, which constitutes phase noise. This will be examined further in Chapter 5, where Equation 2.26 is expanded to include both received power and phase noise terms. To summarize its effect, the phase noise broadens the received power peaks and reduces the sensitivity of range detection [20]. This introduces noise into the range estimate itself. Experimental data provides insight into the effects of phase noise. Figure 2.27 shows 1000 superimposed A-scope displays, obtained from the Navtech FMCW radar (Figure 1.6), for a constant RADAR swash plate bearing angle. Figure 2.28 shows a zoomed view of the A-scopes obtained from the landmark, which was located at 10.25 meters. A broadening of the power spike is evident, which is a direct

Figure 2.27  1000 superimposed A-scope displays, recorded at the same bearing angle, from a static scene containing 3 landmarks down range. (Vertical axis: Slog_rc (dB), 0–50 dB; horizontal axis: range (m), 0–200 m.)

Figure 2.28  Zoomed received power returns from the feature at a distance of 10.25 meters from Figure 2.27. (Vertical axis: Slog_rc (dB), 0–30 dB; horizontal axis: range (m), 7–14 m.)


A Brief Overview of Radar Fundamentals

result of phase noise. It should be remembered, however, that the vertical scale in Figure 2.28 is logarithmic, so that the noise and signal can be viewed on the same graph. On a linear scale, the target power far exceeds the noise power (in this case), and the power level at which the detection threshold is set will determine the width of the resulting range ambiguity. In Chapter 5 it will be shown that the effect of receiver noise is of far greater importance in detecting landmarks than the phase noise shown here. Phase noise does play a role, however, when applying constant false alarm rate (CFAR) processors, as guard cells become necessary to prevent the received power values of neighboring range bins from corrupting the adaptive noise estimate. This will be the subject of Chapter 3.
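The broadening effect can be illustrated with a toy simulation. The sketch below (illustrative only, not the model developed in Chapter 5; the sample rate, beat frequency, and random-walk jitter level are all assumed) compares the spectrum of a clean beat tone with one whose phase drifts randomly:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 8192, 4096                 # sample rate (Hz) and number of samples
f_beat = 1024                      # beat frequency of a single target (Hz)
t = np.arange(n) / fs

def beat_spectrum(phase_jitter_std):
    """Magnitude spectrum of a unit-amplitude beat tone whose phase
    performs a random walk (a crude stand-in for oscillator phase noise)."""
    phase_noise = np.cumsum(rng.normal(0.0, phase_jitter_std, n))
    s = np.cos(2 * np.pi * f_beat * t + phase_noise)
    return np.abs(np.fft.rfft(s * np.hanning(n)))

def peak_concentration(spec, half_width=3):
    """Fraction of spectral power within +/- half_width bins of the peak."""
    k = int(np.argmax(spec))
    window = spec[max(k - half_width, 0) : k + half_width + 1]
    return float(np.sum(window ** 2) / np.sum(spec ** 2))

clean = beat_spectrum(0.0)
noisy = beat_spectrum(0.2)
print(peak_concentration(clean), peak_concentration(noisy))
```

With phase jitter, a smaller fraction of the spectral power stays within a few bins of the peak. This is the broadening of the received power peaks, and the corresponding loss of range sensitivity, described above.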

2.9  Polar to Cartesian Data Transformation

The raw data received from the radar is in polar coordinates (r(q), θi), and the received power information can be plotted without distortion in the form known as a B-scope, as shown in Figure 2.29. Here, the received power is plotted as a shaded cell (lighter shades correspond to higher power) at each range and bearing coordinate on a rectangular grid. Hence each column of data corresponds to a single A-scope. Typically, for the purposes of visualization and robot navigation, Cartesian coordinates are used, and the received power observations and their inferred range values must be transformed to this coordinate system. This is particularly

Figure 2.29  B-scope radar display (range bin, 0–500, versus bearing, 0–360 degrees), in which the received power content of each range bin is plotted as a shaded value on a rectangular grid. The grid has been filled based on multiple scans, which fill in the gaps left after a single sweep of the environment, since corresponding bearing angles do not exactly overlap between scans. Dark areas correspond to low received power and vice versa.


Figure 2.30  Transforming polar power values to Cartesian form. The four polar samples Slin(q, θi), Slin(q + 1, θi), Slin(q, θi+1), and Slin(q + 1, θi+1) surround the Cartesian grid point at which Slin(x, y) is required.

important for the application of Cartesian-based image processing algorithms, which will be used in Chapter 10, and in grid-based navigation, where all state estimates and measurements must be referred to a Cartesian grid. This produces a dilemma, since, particularly at larger ranges, the points in space from which the power information was received can be relatively far from the Cartesian point at which the measurement information is required. This is illustrated in Figure 2.30, where it can be seen that it is not immediately clear which polar power value should be used to infer the received power value Slin(x, y) at Cartesian coordinates (x, y).

2.9.1  Nearest Neighbor Polar to Cartesian Data Conversion

A simple method to convert received power values and locations to Cartesian coordinates is to set the value of Slin(x, y) in Figure 2.30 to the same value as its closest polar neighbor. This is referred to as nearest neighbor polar to Cartesian data conversion. The results of applying this technique to the received power data of Figure 2.29 are shown in Figure 2.31. It can be seen that objects at larger distances are heavily pixelated. This is caused by discretization errors, in which the same polar power value can erroneously be converted to multiple Cartesian values.

2.9.2  Weighted Polar to Cartesian Data Conversion

An alternative solution is therefore used to convert polar radar images into Cartesian form throughout this book. First, the polar coordinates of the Cartesian point of interest (x, y) must be determined, which are expressed as r(x, y) and θ(x, y). For each required Cartesian value Slin(x, y), the surrounding four received polar power


Figure 2.31  Received power data of Figure 2.29 converted to Cartesian form, using nearest neighbor data conversion. Note the pixelated appearance of the image, particularly at higher ranges. (Axes: distance / bin.)

values, Slin(q, θi), Slin(q + 1, θi), Slin(q, θi+1), and Slin(q + 1, θi+1), located at polar coordinates (r(q), θi), (r(q + 1), θi), (r(q), θi+1), and (r(q + 1), θi+1) (Figure 2.30), are each weighted according to their radial and angular distances from Slin(x, y). The value of Slin(x, y) is determined by linear interpolation as follows

S̄lin(q, q + 1, θi) = Slin(q, θi) + (Slin(q + 1, θi) − Slin(q, θi)) · (r(x, y) − r(q)) / (r(q + 1) − r(q))        (2.60)

S̄lin(q, q + 1, θi+1) = Slin(q, θi+1) + (Slin(q + 1, θi+1) − Slin(q, θi+1)) · (r(x, y) − r(q)) / (r(q + 1) − r(q))        (2.61)

Slin(x, y) = S̄lin(q, q + 1, θi) + (S̄lin(q, q + 1, θi+1) − S̄lin(q, q + 1, θi)) · (θ(x, y) − θi) / (θi+1 − θi)        (2.62)

where S̄lin(q, q + 1, θi) and S̄lin(q, q + 1, θi+1) represent the weighted average powers along directions θi and θi+1, between range bins q and q + 1, respectively, and Slin(x, y) is the interpolated Cartesian power value with polar coordinates (r(x, y), θ(x, y)) and Cartesian coordinates (x, y). Figure 2.32 shows the result of this transformation, after application to Figure 2.29. The weighting method of Equations 2.60–2.62 yields an improved representation of the data and a sharper image with less pixelation, particularly at large ranges. This results in less distortion after transformation, and it is the method used to transform scanned radar data into Cartesian images in Chapter 10.

Figure 2.32  Received power data of Figure 2.29 converted to Cartesian form, using the four point weighted method. (Axes: distance / bin.)
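Equations 2.60–2.62 amount to bilinear interpolation on the polar grid. The following Python sketch (an illustrative implementation, not the authors' code; the grid layout, spacings, and parameter names are assumed) evaluates one Cartesian query point:

```python
import numpy as np

def polar_to_cartesian_power(S_polar, r0, dr, theta0, dtheta, x, y):
    """Bilinearly interpolate a polar power grid S_polar[q, i] (range bin q,
    bearing bin i) at the Cartesian point (x, y), per Equations 2.60-2.62."""
    r_xy = np.hypot(x, y)
    theta_xy = np.arctan2(y, x) % (2 * np.pi)

    # Fractional polar coordinates of the query point
    q = (r_xy - r0) / dr              # fractional range bin index
    i = (theta_xy - theta0) / dtheta  # fractional bearing bin index
    q0, i0 = int(np.floor(q)), int(np.floor(i))
    a, b = q - q0, i - i0             # radial and angular weights

    # Equations 2.60 and 2.61: interpolate along range at theta_i and theta_i+1
    s_lo = (1 - a) * S_polar[q0, i0] + a * S_polar[q0 + 1, i0]
    s_hi = (1 - a) * S_polar[q0, i0 + 1] + a * S_polar[q0 + 1, i0 + 1]
    # Equation 2.62: interpolate the two results along bearing
    return (1 - b) * s_lo + b * s_hi

# Example on a small polar grid with 1-m range bins and 1-degree bearing bins:
S = np.random.default_rng(1).uniform(0, 50, size=(200, 360))
print(polar_to_cartesian_power(S, 0.0, 1.0, 0.0, np.radians(1.0), 30.0, 40.0))
```

A quick sanity check of such an interpolator is that a constant-valued polar grid must return that constant at any Cartesian query point.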


More principled methods exist for representing data sets that are converted from polar to Cartesian format, which use surface reconstruction algorithms based on Delaunay triangulation and Ruppert’s algorithm [21].

2.10  Summary

The principles upon which radar measurements are formed, and their sources of uncertainty, have been the subject of this chapter. An understanding of these principles is central to the modeling, detection, robot mapping, and SLAM chapters of this book. For example, in Chapter 5, the principle of range estimation based on the FMCW beat frequency measurement technique, introduced in Section 2.6.2, will be reevaluated in the presence of both the receiver and phase noise explained in Sections 2.7.1.1 and 2.8.2.1, respectively. This will result in the generation of a model for the prediction and simulation of realistic radar scans, based on the FMCW technique.

The penetrating ability of radar, introduced in Section 2.4, will inspire the use of multiple line-of-sight detection algorithms, to be introduced in Chapter 3. This will further inspire the use of a multiple scan integration method in Chapter 6 to improve the probability of detection of both unoccluded and occluded targets.

The sources of radar uncertainty introduced in Section 2.7 were shown to affect both a possible target’s existence and spatial uncertainty. This is because, as shown in Section 2.8.1 for TOF radars and Section 2.8.2.1 for FMCW radars, noise sources that affect the received power in the signal also indirectly affect the range estimate to a possible target. Hence, object existence and spatial uncertainties are coupled through common noise sources, such as receiver noise, clutter, target RCS fluctuation, multipath effects, and speckle. Therefore, target existence, number, and location uncertainties should, ideally, be estimated in a joint manner. This has inspired the multi-object, multi-source autonomous robot mapping concepts, based on finite set statistics, to be introduced in Chapter 4, and their applications in complex environments in Chapters 8, 9, and 10.
Further, the polar to Cartesian transformation method of Section 2.9 plays an important part in the vision-based scan smoothing and feature extraction algorithms applied in Chapter 10, where feature extraction in a cluttered coastal marine environment is necessary.

2.11  Bibliographical Remarks

As mentioned in Section 2.1, there are many classical texts that provide a greater depth and understanding of many of the radar principles introduced in this chapter. In the following sections, reference is made to these and others under their corresponding themes.

2.11.1  Extensions to the Radar Equation

The radar Equations 2.10 and 2.11 are fundamental to much of the prior work on radar. They are often expressed in their logarithmic (dB) form due to the large


dynamic range of the parameters. Deviations from the ideal performance of the radar equation are often accounted for via the system loss parameter L (>1) in the radar equation. Hence, when written in its more usual dB form, all of the known loss terms can be subtracted from the ideal power value received by a perfect radar, in Equation 2.15. Examples of system losses that can be quantified were introduced by Skolnik [1]. These include antenna beam shape loss, collapsing loss, and microwave circuit losses. Other losses highlighted by Skolnik include field degradation loss and even operator fatigue/motivation loss, losses that are clearly difficult to quantify, at least in a deterministic manner.

In the type of radars used in robotics, the antenna pattern is often of the pencil-beam type, meaning a very narrow beamwidth. Beam shape loss in these devices often results in received power values being a function of the angular position of a target within the beam, resulting in often unmodeled power fluctuations. Methods to determine this loss factor are given by Hahn et al. [22].

In general radar applications, the curvature of the earth should be taken into account when predicting radar coverage, particularly at low elevation angles. This has motivated research into the propagation of radar waves in the interference and diffraction regions, for long-range radar applications. Conversely, for most robotic applications, the curvature of the earth can be neglected, and the effects of flat earth are of importance [1].
In this case, in a manner similar to the multi-path effect explained in Section 2.7.1.5, the plane earth assumption analyzes the effects of targets being illuminated directly by the radar, but returning the reflected energy along an indirect path back to the radar, via reflection at the “plane earth.” As a result of this effect, a parameter should be introduced into the radar equation that models the ratio of the field strength at a target in the presence of a surface to that which would result in free space. This parameter depends on the heights above the ground of the radar and target, as well as the direct line-of-sight range and the radar transmission wavelength. The parameter also introduces the effect of “lobing” into the radar equation, as the received power undergoes large changes in value as the height of the target and/or radar changes. Work that has modeled this effect in the vicinity of ground planes was carried out by Milsom [23] and, in the vicinity of the sea surface, by Ward et al. [24].

Regarding the use and accuracy of the radar equation in defining radar performance, Skolnik provides an interesting philosophy, which could be taken as the final word (page 64, [1]): “The predicted value of the range as found from the radar equation cannot be expected to be checked experimentally with any degree of accuracy.... The safest means to achieving a specified range performance is to design conservatively and add a safety factor.”

2.11.2  Signal Propagation/Attenuation

Since the environment in which a radar operates influences its ability to detect objects, the ability of radio waves to penetrate the various components of the atmosphere is well documented. Curves that quantify the expected one-way power loss in dB/km, as radio waves penetrate fog, drizzle, and heavy rain, have been published by Barton [2], and recently collated by Brooker [9]. Radar losses as a result of fog, categorized as


“radiation fog” or “advection fog,” have been documented by Bhartia et al. [25], and as a result of smoke and dust particles by Pinnick et al. [26].

2.11.3  Range Measurement Methods

Besides the TOF and FMCW range estimation methods presented in Section 2.6, other range estimation techniques exist, and an overview of some of these is given in [9]. A simple technique to determine range is to measure the phase shift of the received radar signal relative to the transmitted signal. This can be achieved based on the radio signal itself (unmodulated carrier) or a modulated carrier signal. The first successful microwave electronic distance measuring device based on phase measurement was the Tellurometer, introduced in 1959 by T. L. Wadley [27]. This allowed large distance measurements to be made by introducing slight changes into the modulation wavelength of transmitted waves to greatly extend the ambiguity range interval of the single wavelength phase shift method. The device was capable of range measurement up to 70 km and was used in applications ranging from mapping to coastal surveying. For the reasons explained in Section 2.6, range estimation via electronic phase estimation is now rarely used in radar applications. It is, however, applied in its optical counterpart, the laser detection and ranging (ladar) system. This method of range estimation requires AMCW transmission and has been successfully implemented in short-range applications, typically up to a few hundred meters. A good review of AMCW laser ranging techniques is given in [28]. Further analyses of the sources and estimation of range uncertainty in AMCW ladar systems are given in [29, 30].

2.11.4  Uncertainty in Radar

2.11.4.1  Swerling Models

The application and validity of the Swerling models have received a great deal of attention in the literature. For example, it has been noted that for targets that yield a large SNR, the Swerling I model produces very “wide spreading” power distributions, resulting in pessimistic values for the probability of detection PD of the target [31]. On the other hand, the Swerling III model yields overly optimistic values. Many proposed distributions exist, including log-normal and Weibull, which have been used to model received power fluctuations between scans when the received SNR is high. In 2003, Shnidman proposed expanded Swerling models that produce the required probability of detection (PD) during high SNR receptions [31]. All of the proposed distributions have certain computational drawbacks, often yielding analytically intractable PD values. In particular, the Swerling models and gamma probability density functions assume that the reflections are made up of zero-mean Gaussian random variables. As will be demonstrated throughout this book, many targets yield reflections that are persistent and should be modeled by distributions with a nonzero mean.

2.11.4.2  Phase Noise

Reasons for, and analysis of, FMCW phase noise have been studied in the literature. For example, Foessel et al. showed that the peaks and skirts, an example of which


was shown in Figure 2.27, occur due to the leakage of signals from the transmitter into the mixer through the circulator and also due to antenna impedance mismatch [20]. Further analyses of phase noise in FMCW systems and minimizing its effect on object detection are given in [19, 32, 33].

References

  [1] Skolnik, M. I., Introduction to Radar Systems, New York: McGraw-Hill, 1982.
  [2] Barton, D., Modern Radar Systems Analysis, Norwood, MA: Artech House, 1988.
  [3] Barton, D., and S. Leonov, Radar Technology Encyclopedia, Norwood, MA: Artech House, 1997.
  [4] Toomay, J. C., Radar Principles for the Non-Specialist, Lifetime Learning Publications, 1982.
  [5] Levanon, N., Radar Principles, New York: Wiley, 1988.
  [6] Currie, N. C., and C. E. Brown, Principles and Applications of MMW Radar, Dedham, MA: Artech House, 1987.
  [7] Balanis, C. A., Antenna Theory: Analysis and Design, Hoboken, NJ: John Wiley & Sons, 2005.
  [8] Ferris, D. D., and N. C. Currie, “Microwave and Millimeter-Wave Systems for Wall Penetration,” Proceedings of the SPIE—The International Society for Optical Engineering, Vol. 3375, 1998, pp. 269–279.
  [9] Brooker, G., Introduction to Sensors for Ranging and Imaging, Raleigh, NC: SciTech Publishing, 2009.
 [10] Scheding, S., G. Brooker, M. Bishop, R. Hennessy, and A. Maclean, “Terrain Imaging Millimetre Wave Radar,” in International Conference on Control, Automation, Robotics and Vision, Singapore, November 2002.
 [11] Oppenheim, A. V., and R. W. Schafer, Digital Signal Processing, Englewood Cliffs, NJ: Prentice-Hall, 1975.
 [12] Clark, S., Autonomous Land Vehicle Navigation Using Millimetre Wave Radar, PhD Thesis, Australian Centre for Field Robotics, University of Sydney, 1999.
 [13] Langer, D., “An Integrated MMW Radar System for Outdoor Navigation,” in International Conference on Robotics and Automation, IEEE, Minneapolis, MN, April 1996, pp. 417–422.
 [14] Connor, F. R., Noise: Introductory Topics in Electronics and Telecommunications, 2nd Ed., London: Edward Arnold, 1982.
 [15] Barkat, M., Signal Detection and Estimation, Norwood, MA: Artech House, 2005.
 [16] Rohling, H., “Some Radar Topics: Waveform Design, Range (CFAR) and Target Recognition,” in NATO: Advances in Sensing with Security Applications, Il Ciocco, Italy, July 2005.
 [17] Henderson, F. M., and A. J. Lewis, eds., Principles and Applications of Imaging Radar: Manual of Remote Sensing, Chapter 2: Radar Fundamentals: Technical Perspective, New York: John Wiley & Sons, 1998.
 [18] Meenakshi, A. V., and V. Punitham, “Performance of Speckle Noise Reduction Filters on Active Radar and SAR Images,” International Journal of Technology and Engineering Systems (IJTES), Vol. 2, No. 1, January–March 2011, pp. 111–114.
 [19] Wu, L., S. S. Peng, and X. Q. Shi, “Effects of Transmitter Phase Noise on Signal Processing in FMCW Radar,” in Proceedings of the International Conference on Signals and Electronic Systems (ICSE2000), Ustronie, Poland, 2000, pp. 51–56.
 [20] Foessel, A., Scene Modeling from Motion-Free Radar Sensing, PhD Thesis, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, January 2002.
 [21] Ruppert, J., “A Delaunay Refinement Algorithm for Quality 2-Dimensional Mesh Generation,” Journal of Algorithms, Vol. 18, No. 3, May 1995, pp. 548–585.
 [22] Hahn, P. M., and S. D. Gross, “Beam Shape Loss and Surveillance Optimization for Pencil Beam Arrays,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 5, No. 4, July 1969, pp. 674–675.
 [23] Milsom, J. D., “HF Groundwave Radar Equations,” in Seventh International Conference on HF Radio Systems and Techniques, Nottingham, UK, July 1997, pp. 285–290.
 [24] Ward, K. D., R. J. A. Tough, and S. Watts, Sea Clutter: Scattering, the K Distribution and Radar Performance, London, UK: Institution of Engineering and Technology, 2006.
 [25] Bhartia, P., and I. Bahl, Millimeter Wave Engineering and Applications, New York: John Wiley and Sons, 1984.
 [26] Pinnick, R., G. Fernandez, and B. Hinds, “Explosion Dust Particle Measurements,” Applied Optics, Vol. 22, No. 1, 1983, pp. 95–102.
 [27] Smith, J. R., “The Tellurometer,” Coordinates (Monthly Magazine on Positioning, Navigation and Beyond), Vol. l2, December 2009, pp. 15–16.
 [28] Amman, M. C., T. Bosch, M. Lescure, R. Myllyla, and M. Rioux, “Laser Ranging: A Critical Review of Usual Techniques for Distance Measurement,” Proceedings of the SPIE—The International Society for Optical Engineering, Vol. 40, No. 1, 2001, pp. 10–19.
 [29] Adams, M. D., “Lidar Design, Use, and Calibration Concepts for Correct Environmental Detection,” IEEE Transactions on Robotics, Vol. 16, No. 6, December 2000, pp. 753–761.
 [30] Adams, M. D., Sensor Modelling, Design and Data Processing for Autonomous Navigation, Singapore: World Scientific, 1999.
 [31] Shnidman, D. A., “Expanded Swerling Target Models,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 39, No. 3, July 2003, pp. 1059–1069.
 [32] Beasley, P. D. L., “The Influence of Transmitter Phase Noise on FMCW Radar Performance,” in Proceedings of the 3rd European Radar Conference (EuRAD), Manchester, UK, September 2006, pp. 331–334.
 [33] Adamski, M. E., K. S. Kulpa, M. Nalecz, and A. Wojtkiewicz, “Phase Noise in Two-Dimensional Spectrum of Video Signal in FMCW Homodyne Radar,” in 13th International Conference on Microwaves, Radar and Wireless Communications (MIKON-2000), Vol. 2, 2000, pp. 645–648.

Chapter 3

An Introduction to Detection Theory

3.1  Introduction

In any sensory-based application, methods are necessary to decide when the power detected at the receiver corresponds to a valid object and when it corresponds to noise or clutter. The underlying mathematical tools for making these decisions come from detection theory. In many commercially available ranging sensors, such decisions take place “inside the machine” and are hidden from the user. However, as shown throughout Chapters 1 and 2, most radars are imaging sensors, capable of range estimation. They provide entire received power versus range information—the A-scope—allowing the user to make the critical detection decisions. In the robotics literature, sensors often feed feature-detection algorithms, which in turn are required to make feature-detection decisions. Importantly, whether hidden from the user or not, for any form of useful application, all sensors and/or their accompanying feature-detection algorithms require some form of detection theory.

This chapter begins by comparing concepts based on non-adaptive detection, hypothesis free modeling, and stochastic-based adaptive detection. In the presence of all of the disturbances and nonideal radar behaviors introduced in Chapter 2, the fundamental importance of the stochastic-based methods is highlighted. The concept of choosing a received power threshold based on an acceptable probability of false alarm is introduced. How well separated the target absence and target presence noise power distributions are determines the probability of detecting a true landmark. This in turn allows the probability of detection to be determined as a function of the received power.
In realistic radar detection scenarios, the type of received noise power distributions may be known, but not their moments (mean, variance, and so on), since they typically change under different environmental conditions or with the object being sensed, the distance to the object, and many other factors, as illustrated in Chapter 2. This has led to the development of a certain class of detectors, which attempt to estimate some of these parameters and which then provide an adaptive received power threshold, to achieve a constant probability of false alarm. Known as constant false alarm rate (CFAR) processors, these are the main focus of this chapter. There are many types of CFAR processors, all of which are derived based on slightly different underlying power noise assumptions. This chapter gives the implementation details of arguably the two most popular versions applied in short-range radar applications, namely, the cell averaging (CA) and the ordered statistics (OS) CFAR processors. They will be further applied to the problems of improving target detection probabilities in Chapter 6 and directly for landmark detection in feature-based robot navigation experiments in Chapters 8, 9, and 10.


The chapter proceeds as follows. In Section 3.2, the difficulty of extracting the power returns corresponding to true landmarks is demonstrated using a radar scan as an example. This is followed in Section 3.3 by different concepts aimed at extracting target returns, which have been applied in both the radar and robotics literature, accompanied by comparisons highlighting the importance of the stochastic-based detection methods. Section 3.4 then introduces classical, probabilistic detection theory, based on known received power distributions and their parameters, during target absence and presence. In realistic target detection scenarios, power noise distribution parameters are typically unknown and varying; hence, Section 3.5 introduces adaptive power thresholding CFAR methods that can still maintain a constant probability of false alarm when detecting objects. After the summary in Section 3.6, a more comprehensive overview of the vast array of CFAR processors is given in the Bibliographical Remarks in Section 3.7, with more details to be found in [1, 2].

3.2  Concepts of Detection Theory

Figure 3.1 shows a 360° radar PPI and laser scan superimposed onto a satellite image of a campus environment. Within the PPI scan, received power values are


Figure 3.1  Aerial view of a campus environment with a single 360° radar PPI and laser range finder scan superimposed. The black dots represent laser range finder returns, and the shaded scale is used to represent the received power values from the radar.


Figure 3.2  A-scope (Slog) extracted from the radar PPI in Figure 3.1. Superimposed is the single spike corresponding to the laser range finder’s target location decision.

represented as a shaded scale (lighter shades indicating higher powered returns) at their corresponding discretized range values from the center of the scan. The returned range values from the laser range finder are represented as black dots. Whereas at each bearing, complete power versus range information is produced by the radar, the laser sensor output produces single target range values. Hence some form of detection takes place within the laser range finder. This is demonstrated in Figure 3.2, in which the particular A-scope corresponding to the 0° bearing angle in Figure 3.1 is shown. The single target range decision from the laser range finder is also superimposed as a dashed line. This verifies the radar’s range measuring capabilities but also demonstrates the radar’s ability to propagate its energy down range and receive energy back from multiple landmarks at a single bearing angle. As predicted in Section 2.4, its potential for detecting multiple targets at individual bearing angles is clearly evident. As can be seen in Figure 3.2, multiple peaks are present in the A-scope display, only some of which are from valid landmarks. Principled methods to estimate the true landmarks are based on the concepts of detection theory.

Detection theory concerns the detection of landmark signals from noisy sensor measurement data such as that of Figure 3.2. Feature extraction then concerns the reduction of the dimensionality of the hypothesized landmark detections into feature models, such as point features, lines, circles, and planes [3–5]. In robotics, commonly used laser range finders and sonars (such as the Polaroid sonar) [6] internally perform landmark detection, and the sensor output is the range and bearing value of each single hypothesized detection at each bearing angle. As shown here, however, imaging sensors such as radar return raw power measurements without

1. And in fact X-ray, thermal vision, and certain types of sonars.


any landmark hypotheses. It is therefore necessary to apply detection theory to the received A-scopes to extract, as robustly as possible, the true targets and their range values from the noisy data.

3.3  Different Approaches to Target Detection

Previous landmark detection methods applied to imaging sensors are typically based on non-adaptive detection, hypothesis free modeling, and stochastic-based adaptive detection.

3.3.1  Non-adaptive Detection

Much of the previous work in the robotics domain applies fixed thresholding methods or heuristics to hypothesize landmarks within received data [7–9]. Single landmark presence is assumed, and generally only one landmark is extracted at each bearing angle. This may suffice for some applications; however, it discards much useful information in terms of map building and localization capabilities. Figure 3.3 shows the A-scope of Figure 3.2 with two possible constant thresholds Sthresh_1 and Sthresh_2 applied to perform landmark detection. While computationally simple to apply, a non-adaptive detection method requires the threshold power to be set a priori, which is nonintuitive. In Figure 3.3 it can be seen that if the threshold Sthresh_2 were used, then the power spike at a range of approximately 17 m would be considered as noise only, whereas if the lower threshold Sthresh_1 were used, it would be hypothesized as a valid landmark detection. Importantly, no statistical information can be implied from the landmark detections, and subsequently, since the power reflected from landmarks fluctuates with respect to radar position (Section 2.7.1.3), fixed thresholds can be prone to many missed detections [10].

Figure 3.3  The A-scope Slog(q) of Figure 3.1 with constant received power thresholds Sthresh_1 and Sthresh_2 (horizontal dashed lines).

Figure 3.4  The A-scope Slog(q) of Figure 3.1 with its Gaussian mixture approximation (labeled “GM model”). The vertical axis shows the normalized power Snorm.

3.3.2  Hypothesis Free Modeling

To overcome the shortcomings of constant thresholding methods, other researchers have attempted to model A-scopes with Gaussian mixture models [11]. Figure 3.4 shows the normalized power, Snorm, corresponding to the A-scope of Figure 3.2. A Gaussian mixture (GM), labeled “GM model,” is fitted to the normalized A-scope. The number of Gaussians can be set a priori or based on a function of the number of peaks detected in the A-scope. While this approach aims to reduce the dimensionality of the data (to NG Gaussian functions), it possesses some probabilistic flaws. The method attempts to transform power versus range values into a range-based measurement likelihood of the form

g(r | M, Xk) = Σ_{i=1}^{NG} wi N(r; μi, Σi).        (3.1)

2. The area under the Snorm A-scope is normalized to unity.


where g(r | M, Xk) represents the likelihood of range measurement value r, given map M and robot location Xk, and N(r; μi, Σi) represents a Gaussian distribution on r with mean μi and variance Σi, which is weighted by the factor wi. A fundamental property of a PDF requires that the integral of g(·) over its entire range space is unity. Therefore, such a mixture model can only model a normalized A-scope (i.e., such a transformation assumes that there is a single landmark present within each A-scope). As seen previously in Figure 3.2, this is not the case in general, as multiple line-of-sight targets are possible. Furthermore, the A-scope cannot be interpreted as modeling the positional noise of a single landmark estimate, since it in fact represents the power received in each range bin. Range and bearing measurement noise are a property of the sensor and not of the power received in each range bin.

3.3.3  Stochastic-Based Adaptive Detection

While often overlooked in the robotics community, imaging sensors generally obey well-founded statistical principles that can be exploited to increase the information content of a measurement. Such principles are well understood in the radar and tracking communities [12–14]. From an imaging sensor’s perspective, the environment can be considered to consist of a random number of density functions of signal power, which are spatially distributed about that environment. These model the power that may be reflected to the radar at a given location. In an a priori unknown environment, such signal power PDFs are unknown both in terms of their distribution type(s) and moments.

Stochastic detection methods have been introduced that generate a range reading from the A-scope information using statistically founded adaptive thresholds. In these methods, signal detection is typically formulated as a likelihood of landmark signal presence versus noise signal presence, which is then compared to a threshold value. Such methods, as opposed to the techniques reviewed in Sections 3.3.1 and 3.3.2, will be the focus of target detection from radar data throughout this book. These methods, which can detect landmarks with predefined, acceptable probabilities of false alarm, Pfa, and missed detection, PMD, have been applied extensively within the radar community and are based on a constant false alarm rate (CFAR) [21]. A CFAR processor dynamically determines a detection threshold by estimating the local background noise and/or clutter power and multiplying this estimate by a scaling constant based on the desired Pfa. Hence, CFAR processor performance can be maintained in an unknown environment where the background noise/clutter power varies [15]. CFAR processors assume that the noise samples within the received power signal are independent and identically distributed (IID) and that the noise follows certain distribution types.
When these assumptions are not valid, the CFAR algorithm may fail to determine the range to true objects. The following sections introduce the concepts of CFAR processors, first based on known noise and target power distribution statistics and then based on adaptive techniques for unknown distribution moments. Target detection methods based on CFAR techniques, which address the violations of the IID and distribution type assumptions, will be given in Chapters 6 and 10.

3. Incorrectly declared landmarks and/or outliers (infrequent spurious returns).


3.4  Detection Theory with Known Noise Statistics

The method for detecting targets is based on the type of received power distributions during target absence (noise only), which is referred to as hypothesis H0, and target presence (target and noise), referred to as hypothesis H1. Figure 3.5 shows the basic assumed received power distributions under hypotheses H0 and H1, used in classical detection theory analysis.

3.4.1  Constant Pfa^CFAR with Known Noise Statistics

The shaded received power distribution is assumed to represent the noise only (target absence) distribution. The typically assumed exponential distribution is shown, given by

    p(S^lin | H0) = Exponential(λ_ta) = λ_ta exp(−λ_ta S^lin)   if S^lin ≥ 0,   and 0 otherwise        (3.2)

where S^lin is the continuous, linear received power variable (horizontal axis in Figure 3.5) and λ_ta is the "target absence" exponential noise parameter. The Gaussian distribution is assumed to represent the target and noise (target present) distribution. In the case of target presence, the Gaussian distribution, with mean received power value Ŝ^lin and variance S_tp (tp = "target presence"), is evident, which can be expressed as

    p(S^lin | H1) = (1/√(2π S_tp)) exp(−(S^lin − Ŝ^lin)² / (2 S_tp))        (3.3)

If exponential noise and Gaussian target distributions are assumed, standard statistical decision theory can be used to derive a detector which achieves a desired Pfa and a corresponding maximum probability of detection, PD. Based on a single received power sample, a desired Pfa can be chosen that will be a function of the radar target absence (noise only) distribution's mean value parameter λ_ta.
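As a concrete illustration of the two hypothesis models of Equations 3.2 and 3.3, the following sketch evaluates both densities and the resulting likelihood ratio at a low and a high received power. The parameter values and the helper names `p_h0`/`p_h1` are purely illustrative, not taken from the text.

```python
import math

def p_h0(s, lambda_ta):
    """Eq. 3.2: exponential noise-only (target absence) density."""
    return lambda_ta * math.exp(-lambda_ta * s) if s >= 0 else 0.0

def p_h1(s, s_hat, s_tp):
    """Eq. 3.3: Gaussian target-plus-noise (target presence) density
    with mean received power s_hat and variance s_tp."""
    return math.exp(-(s - s_hat) ** 2 / (2.0 * s_tp)) / math.sqrt(2.0 * math.pi * s_tp)

# Illustrative values: lambda_ta = 0.5, target mean 100, variance 400.
# A weak return favors H0; a return near the target mean favors H1.
lr_low = p_h1(1.0, 100.0, 400.0) / p_h0(1.0, 0.5)
lr_high = p_h1(100.0, 100.0, 400.0) / p_h0(100.0, 0.5)
```

The likelihood ratio crossing unity is what a Neyman-Pearson detector thresholds in the analysis that follows.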

Figure 3.5  The theoretical, standard assumed noise distributions during target absence (hypothesis H0) and target presence (hypothesis H1). The CFAR-based hypotheses H0^CFAR and H1^CFAR result based on S^lin < S^CFAR and S^lin ≥ S^CFAR, respectively.


An Introduction to Detection Theory

In particular, if λ_ta can be assumed to be constant, then selecting a constant received power threshold S^CFAR for the received power value from the radar S^lin (shown schematically in Figure 3.5) will result in a constant false alarm rate (i.e., Pfa = Pfa^CFAR). Pfa^CFAR can be determined according to the Neyman-Pearson (NP) approach, applied to the continuous p(S^lin | H0) curve as a function of the continuous received power value S^lin:

    Pfa^CFAR = ∫_{S^CFAR}^{∞} p(S^lin | H0) dS^lin = λ_ta ∫_{S^CFAR}^{∞} exp(−λ_ta S^lin) dS^lin        (3.4)

yielding

    S^CFAR = −(1/λ_ta) log_e Pfa^CFAR        (3.5)
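Equation 3.5 can be checked numerically. The short sketch below (with illustrative λ_ta and Pfa^CFAR values, and a hypothetical function name) computes the fixed threshold and confirms by Monte Carlo that noise-only exponential samples exceed it at roughly the requested rate.

```python
import math
import random

def cfar_threshold(lambda_ta, pfa):
    """Eq. 3.5: fixed threshold achieving a desired Pfa for
    exponential noise with known parameter lambda_ta."""
    return -math.log(pfa) / lambda_ta

random.seed(0)
lambda_ta, pfa = 0.5, 1e-2
s_cfar = cfar_threshold(lambda_ta, pfa)

# Fraction of noise-only samples exceeding the threshold ~ Pfa (Eq. 3.4).
samples = (random.expovariate(lambda_ta) for _ in range(200_000))
empirical_pfa = sum(s > s_cfar for s in samples) / 200_000
```

With λ_ta = 0.5 and Pfa^CFAR = 10^−2, the threshold evaluates to −ln(0.01)/0.5 ≈ 9.21, and the empirical exceedance rate of the simulated noise stays close to 10^−2.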

Hence, with a constant exponential noise parameter λ_ta, the threshold S^CFAR can be chosen according to a desired false alarm rate Pfa^CFAR, making it truly a CFAR processor. For the more general case when λ_ta is unknown and variable, an adaptive threshold for S^CFAR is necessary, which will be the subject of Section 3.5.

3.4.2  Probability of Detection PD^CFAR with Known Noise Statistics

With assumed Gaussian noise statistics for p(S^lin | H1) in Figure 3.5, the probability of detection is given by:

    PD^CFAR = ∫_{S^CFAR}^{∞} p(S^lin | H1) dS^lin        (3.6)

so that in the qth range bin,

    PD^CFAR(q) = (1/√(2π S_tp)) ∫_{S^CFAR}^{∞} exp(−(S^lin − Ŝ^lin(q))² / (2 S_tp)) dS^lin
               = Q((S^CFAR(q) − Ŝ^lin(q)) / √S_tp)        (3.7)

where S^CFAR(q) is the received power threshold (Equation 3.5) required to achieve the desired Pfa^CFAR and S_tp is the a priori (assumed known) variance of the received power distribution during target presence. Q(·) is the standard Q-function corresponding to the tail probability of a Gaussian distribution. The aim of any stochastic-based detection method is to determine the relationship between its probability of detection and the received signal power for chosen, constant probabilities of false alarm Pfa^CFAR. Figure 3.6 shows curves of PD^CFAR(q) versus Ŝ^lin(q), based on Equation 3.7, for chosen values of Pfa^CFAR in Equation 3.5. Detection curves of the type shown in Figure 3.6 form the basis of any radar-based detection technique, and similar curves will be derived in the following sections and in Chapter 6, where radar detection based on multiple scans will be addressed.
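Under the Gaussian target-presence assumption, Equation 3.7 reduces PD^CFAR(q) to a Q-function evaluation. A minimal sketch, with illustrative values and Q implemented via the complementary error function:

```python
import math

def q_function(x):
    """Tail probability of a standard Gaussian (the Q-function)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pd_cfar(s_cfar, s_hat, s_tp):
    """Eq. 3.7: detection probability for a Gaussian target-presence
    distribution with mean s_hat and variance s_tp."""
    return q_function((s_cfar - s_hat) / math.sqrt(s_tp))

# PD rises monotonically with the mean received power, as in Figure 3.6.
pds = [pd_cfar(s_cfar=100.0, s_hat=m, s_tp=400.0) for m in (50.0, 100.0, 150.0)]
```

When the target mean equals the threshold, PD^CFAR = 0.5; well below it the target is almost always missed, well above it almost always detected.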

Figure 3.6  PD^CFAR(q) curves, under the assumption of Gaussian target presence statistics, based on 3 different values for Pfa^CFAR.

3.4.3  Probabilities of Missed Detection PMD^CFAR and Noise Pn^CFAR with Known Noise Statistics

For completeness, from Figure 3.5, the probabilities of missed detection PMD^CFAR and noise Pn^CFAR can also be defined as



    PMD^CFAR = ∫_{0}^{S^CFAR} p(S^lin | H1) dS^lin = 1 − PD^CFAR        (3.8)

    Pn^CFAR = ∫_{0}^{S^CFAR} p(S^lin | H0) dS^lin = 1 − Pfa^CFAR        (3.9)
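The complementarity in Equations 3.8 and 3.9 can be verified numerically for the exponential noise model. The sketch below (illustrative parameters) integrates p(S^lin | H0) over [0, S^CFAR] with the trapezoidal rule and recovers Pn^CFAR = 1 − Pfa^CFAR.

```python
import math

lambda_ta, pfa = 0.5, 1e-3
s_cfar = -math.log(pfa) / lambda_ta              # Eq. 3.5

# Trapezoidal integration of the noise-only density over [0, S_CFAR].
n = 100_000
h = s_cfar / n
p0 = [lambda_ta * math.exp(-lambda_ta * i * h) for i in range(n + 1)]
pn = h * (sum(p0) - 0.5 * (p0[0] + p0[-1]))      # Eq. 3.9 integral
```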

3.5  Detection with Unknown Noise Statistics—Adaptive CFAR Processors

In contrast to the previous analysis, a more typical starting point for the processing of single scans is the standard assumption that the target absence and presence distribution types are known, but not their moments, since these typically vary and should be estimated. In this case, a test statistic must be derived. If such a statistic can be found that is independent of any unknowns, then the detection method is known as a CFAR detector. Many variants of CFAR processors continue to be reported in the literature, each tailored to particular distribution types for H0 and H1. In this section, the commonly applied cell averaging (CA) and ordered statistics (OS) CFAR processors are explained. They will be used and extended in Chapters 6 and 10 where, respectively, detection improvements through the integration of multiple scans and an autonomous navigation application in a marine environment are explained.


Figure 3.7  The CA-CFAR processor. The shaded cells marked “G” correspond to guard cells and are excluded from the CA-CFAR calculation. For brevity in the figure, the received power S corresponds to Slin.

3.5.1  Cell Averaging—CA-CFAR Processors

3.5.1.1  Calculation of Pfa^CA-CFAR

In most FMCW radar signal processing literature to date, a cell averaging (CA)-CFAR detector is the preferred method of target detection, which automatically adapts to noise variance changes by monitoring the received power values in the cells neighboring the cell under test (CUT) [16–18]. The CA-CFAR detector also assumes IID, exponentially distributed environmental noise and clutter, as given in Equation 3.2, where λ_ta is now unknown and can vary between scans. The detector has a test statistic T(S^lin(q)) that estimates the mean of the noise distribution, (1/λ_ta). This estimate is the sample mean of the neighboring cells' power values, which is determined over a window of width 2W surrounding the CUT. This is shown in Figure 3.7. In the figure, the sums U and V of the received power values are determined in the W cells on either side of the shaded guard cells surrounding the CUT. This power sum is denoted as S^{lin-sum}(q), given by

    S^{lin-sum}(q) = Σ_{j=q−W}^{q+W} S^lin(j)   for j ≠ q, q ± 1, q ± 2, …, q ± G        (3.10)

The test statistic is then equivalent to (U + V)/(2W) and is given by

    T(S^lin(q)) = (1/(2W)) S^{lin-sum}(q)        (3.11)

4. The sum in Equation 3.10 typically excludes the values of the 2G cells immediately neighboring the CUT, since these could be affected by the broadened received spikes that result from radar imperfections, causing ambiguity in the range value of a particular spike. In FMCW radars, this largely results from phase noise, as explained in Section 5.3.2.


The likelihood test is then

    η^SNP(q) = S^lin(q) / T(S^lin(q)) ≥ γ        (3.12)

where η^SNP(q) is the estimated SNP in the CUT and γ is a threshold set to achieve a constant Pfa^CA-CFAR. Hence the likelihood test can be written as

    S^lin(q) ≥ T(S^lin(q)) γ        (3.13)

Since the threshold itself is a random variable, Pfa^CA-CFAR must be evaluated as

    Pfa^CA-CFAR = E{P[S^lin(q) > T(S^lin(q)) γ | H0]}        (3.14)

Using the property that the sum of IID exponential distributions is a gamma distribution, [2] shows that for a CA-CFAR detector of window width 2W, Pfa^CA-CFAR is given by

    Pfa^CA-CFAR = (1 + γ/(2W))^{−2W}        (3.15)

and, therefore, the corresponding threshold value γ can be calculated, based on a desired Pfa^CA-CFAR, as

    γ = 2W ((Pfa^CA-CFAR)^{−1/(2W)} − 1)        (3.16)

From Inequality 3.12, the target presence likelihood test is then to check whether the received power value S^lin(q) in the CUT satisfies

    S^lin(q) ≥ S^CA-CFAR(q) = γ T(S^lin(q))        (3.17)

so that

    S^CA-CFAR(q) = ((Pfa^CA-CFAR)^{−1/(2W)} − 1) × Σ_{j=q−W}^{q+W} S^lin(j)   for j ≠ q, q ± 1, q ± 2, …, q ± G        (3.18)
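Putting Equations 3.16-3.18 together, a CA-CFAR pass over an A-scope can be sketched as below. This is a simplified illustration, not the book's implementation: near the array edges it averages over however many reference cells are available rather than exactly 2W.

```python
def ca_cfar(s_lin, W=10, G=2, pfa=1e-5):
    """CA-CFAR hypothesis test: return the range bins q declared H1."""
    gamma = 2 * W * (pfa ** (-1.0 / (2 * W)) - 1.0)    # Eq. 3.16
    detections = []
    for q in range(len(s_lin)):
        # Reference cells: |j - q| <= W, excluding the CUT and the G
        # guard cells on either side (Eq. 3.10).
        window = [s_lin[j] for j in range(q - W, q + W + 1)
                  if 0 <= j < len(s_lin) and abs(j - q) > G]
        t = sum(window) / len(window)                  # Eq. 3.11
        if s_lin[q] >= gamma * t:                      # Eq. 3.17
            detections.append(q)
    return detections

# A flat noise floor with a single strong return: only bin 50 is declared.
a_scope = [1.0] * 100
a_scope[50] = 500.0
print(ca_cfar(a_scope))   # -> [50]
```

Note how the adaptive threshold follows the local mean power, so no fixed power level needs to be chosen in advance.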

Hence S^lin(q) ≥ S^CA-CFAR(q) defines the CA-CFAR-based target presence hypothesis H1^CA-CFAR, and S^lin(q) < S^CA-CFAR(q) defines the target absence hypothesis H0^CA-CFAR for CUT q.

3.5.1.2  Calculation of PD^CA-CFAR

Under the same IID assumptions used to derive Equations 3.16 and 3.17, [2] shows that the detection probability of a Rayleigh fluctuating target, embedded in Rayleigh noise or clutter (Swerling I model), can be determined by simply replacing γ in Equation 3.15 with γ/(1 + η^SNP(q)), which gives

    PD^CA-CFAR(q) = (1 + γ/(2W(1 + η^SNP(q))))^{−2W}        (3.19)

Figure 3.8  Block diagram summarizing the received power detection threshold and corresponding PD^CA-CFAR based on a single A-scope.

where γ is defined in Equation 3.16, and η^SNP(q) is the estimated received SNP from Equation 3.12. The detection threshold and corresponding PD^CA-CFAR(q) estimation algorithm is summarized in the block diagram of Figure 3.8. Hence, for chosen values of W, G, and Pfa^CA-CFAR, the PD^CA-CFAR(q) of a target, within a single A-scope, as a function of the estimated received SNP η^SNP(q), can be derived. Figure 3.9 shows the expected PD^CA-CFAR versus η^SNP for 4 values of the CFAR window width, W = 10, 15, 20, and the theoretical best curve for W → ∞, all corresponding to Pfa^CA-CFAR = 10^−5. As the curves demonstrate, a larger window

Figure 3.9  PD^CA-CFAR curves, for Pfa^CA-CFAR = 10^−5, under the assumption of Rayleigh target presence statistics, based on the CA-CFAR processor. Three curves for values of the CA-CFAR window width W = 10, 15, and 20 are shown, together with the limiting curve for W → ∞.


width 2W results in a higher probability of detection for a given SNP. However, increasing W clearly increases the danger of nonhomogeneous power samples, such as power values from clutter or even other targets, corrupting the CA-CFAR's noise estimate. This results in an overestimate of the noise threshold and ultimately in missed detections. This will be demonstrated in the next section.

3.5.1.3  Performance of the CA-CFAR Processor

The detection performance of the CA-CFAR processor is optimum in a homogeneous background, when the reference cells contain IID observations governed by an exponential distribution. In principle this requires that actual landmarks are well separated within the A-scope, so that the sliding window of width 2 W only contains a single target when centered on a particular range bin. This is demonstrated in Figure 3.10, where an A-scope display from two landmarks (one at a true distance of 10 m and the other at 136.5 m) is shown. The third detection corresponds to a multipath reflection at approximately 150 m and is misclassified as a valid target by all detection methods from a single radar position (Section 2.7.1.5). In the figure, the CA-CFAR threshold is also shown, calculated from Equation 3.18. A-scope power values that exceed this threshold are labeled with triangles. In practice, the environment is usually non-homogeneous due to the presence of multiple targets and/or clutter in the reference window. There is a significant decrease in performance when the assumption of a homogeneous

Figure 3.10  Landmark detections and false alarms corresponding to a single A-scope S^log with actual landmarks present at 10 m, 136.5 m, and a multi-path reflection at 150 m.


Figure 3.11  The effect of target masking in the CA-CFAR processor. When two or more landmarks are in close proximity and lie within the 2W window width, missed detections occur.

environment is not met. Figure 3.11 shows an A-scope display containing two actual landmarks in close proximity. The first landmark, a tree, is located 25 m from the radar, and the second, a metal sign post, at 32 m. It can be seen in the figure that, due to the close proximity of the landmarks, the CA-CFAR detection threshold overestimates the noise power, and the target with the lower RCS, in this case the tree, is masked by the larger RCS target. Hence the metal sign post is correctly detected, but the tree results in a missed detection.

3.5.2  Ordered Statistics—OS-CFAR Processors

To overcome the commonly occurring violations of the homogeneous environmental conditions of the CA-CFAR processor, an ordered statistics (OS)-CFAR processor allows its test statistic to be chosen within the ordered set of received power samples from the CUT’s leading and lagging windows [19]. This processor simply ranks the received power values in the windows neighboring the CUT according to their power value. This results in the sequence

    S_1^lin ≤ S_2^lin ≤ … ≤ S_2W^lin        (3.20)

with S_1^lin denoting the minimum received power value from one of the neighboring cells (excluding the guard cells), and S_2W^lin the maximum value. One of these ranked values, S_kos^lin, is then selected to represent the mean noise and clutter power in the window. This eliminates the assumption of uniform clutter within the reference window and can result in nonhomogeneous background noise and outliers being discarded from the background noise statistic estimate, if kos is chosen correctly, thus alleviating the problems associated with the homogeneous environment assumptions. This comes at the price of a small loss of detection power, when homogeneous background noise conditions are satisfied, when compared with the CA-CFAR processor [20]. In the coastal marine-based mapping experiments of Chapter 10, due to the nonhomogeneity of the background radar returns, the OS-CFAR processor is the detector used. In this environment, X-band marine radar is used for landmark detection in the presence of high sea clutter. This takes the form of unwanted returns from the sea surface due to robotic sea vessel rolling and pitching. Its mathematical explanation, and proof of its CFAR properties, are outlined here.

3.5.2.1  Calculation of Pfa^OS-CFAR

As in the case of the CA-CFAR detector, the OS-CFAR detector also assumes IID, exponentially distributed environmental noise and clutter (Equation 3.2), where λ_ta is again unknown and can vary between scans. The decision threshold is similarly

    S^OS-CFAR(q) = γ T(S^lin(q))        (3.21)

where again, T(S^lin(q)) is an estimate of the average noise power (1/λ_ta), and γ is a scaling factor to achieve a desired false alarm rate, now referred to as Pfa^OS-CFAR, given by

    Pfa^OS-CFAR = E{P[S^lin(q) > γ T(S^lin(q)) | H0]}        (3.22)

Contrary to the CA-CFAR concept, in an OS-CFAR processor the test statistic T(S^lin(q)) is chosen as the kos-th sample from the ordered samples within the leading and lagging windows. Its PDF under the null hypothesis must be evaluated. First, the cumulative density function (CDF) F(S^lin) of the continuous function S^lin can be evaluated as

    F(S^lin) = ∫_{0}^{S^lin} p(S^lin | H0) dS^lin        (3.23)

and from Equation 3.2

    F(S^lin) = 1 − exp(−λ_ta S^lin)        (3.24)

The PDF of the kos-th value of the ordered statistic, P_{S_kos^lin}(S^lin), is then given by [2]

    P_{S_kos^lin}(S^lin) = p_kos(S^lin) = kos (2W choose kos) (1 − F(S^lin))^{2W−kos} (F(S^lin))^{kos−1} p(S^lin | H0)        (3.25)

In the case of IID, exponentially distributed random variables,

    p_kos(S^lin) = kos λ_ta (2W choose kos) (exp(−λ_ta S^lin))^{2W−kos+1} (1 − exp(−λ_ta S^lin))^{kos−1}        (3.26)


This then represents the PDF of the test statistic, T(S^lin(q)). Then,

    Pfa^OS-CFAR = ∫_{0}^{∞} P[S^lin(q) ≥ γ T(S^lin) | H0] p_kos(S^lin) dS^lin        (3.27)

which is shown in [19] to yield

    Pfa^OS-CFAR = kos (2W choose kos) (kos − 1)! (γ + 2W − kos)! / (γ + 2W)!        (3.28)
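Equation 3.28 cannot be solved for γ in closed form, but Pfa^OS-CFAR decreases monotonically with γ, so a simple numerical zero-finder suffices. The sketch below (illustrative W, kos, and Pfa values) interprets the factorials via the gamma function, so that non-integer γ is handled, and recovers γ by bisection.

```python
import math

def pfa_os_cfar(gamma, W, k_os):
    """Eq. 3.28, with the factorials written as gamma functions so
    that non-integer values of gamma are handled."""
    n = 2 * W
    log_pfa = (math.log(k_os)
               + math.lgamma(n + 1) - math.lgamma(k_os + 1)
               - math.lgamma(n - k_os + 1)              # log C(2W, k_os)
               + math.lgamma(k_os)                      # log (k_os - 1)!
               + math.lgamma(gamma + n - k_os + 1)      # log (gamma + 2W - k_os)!
               - math.lgamma(gamma + n + 1))            # log (gamma + 2W)!
    return math.exp(log_pfa)

def solve_gamma(pfa_desired, W, k_os, lo=0.0, hi=1e4, iters=100):
    """Bisection on the monotonically decreasing Pfa(gamma)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if pfa_os_cfar(mid, W, k_os) > pfa_desired:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

W, k_os = 20, 30              # k_os ~ 3W/2
gamma = solve_gamma(1e-6, W, k_os)
```

At γ = 0 the expression correctly reduces to Pfa = 1, and the recovered γ reproduces the requested false alarm rate when substituted back.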

Again the false alarm probability is independent of any unknowns; thus, a scale factor γ can be set to achieve the desired constant rate of false alarm. Unlike the case of the CA-CFAR processor, an equation that directly gives the scaling factor γ (as Equation 3.16 does) is not available for the OS-CFAR processor; once Pfa^OS-CFAR is chosen, γ can be iteratively determined by nonlinear, numerical zero-finding techniques, applied to Equation 3.28.

3.5.2.2  Calculation of PD^OS-CFAR

Considering the detection probability,

    PD^OS-CFAR = ∫_{0}^{∞} P[S^lin(q) ≥ γ T(S^lin) | H1] p_kos(S^lin) dS^lin        (3.29)

where

    P[S^lin(q) ≥ γ T(S^lin) | H1] = ∫_{γ S^lin}^{∞} f(S^lin | H1) dS^lin        (3.30)

Here, the distribution f(S^lin | H1) (under the target presence hypothesis) is usually assumed a priori by an SNR fluctuation model derived from one of the RCS fluctuation (typically Swerling) models of Section 2.7.1.3. Reference [2] shows that Pfa^OS-CFAR in Equation 3.28 can also be expressed as

    Pfa^OS-CFAR = ∏_{i=0}^{kos−1} (2W − i) / (2W − i + (γ/2W))        (3.31)

and again, under the Swerling I assumption for target and noise statistics, PD^OS-CFAR(q) results by replacing γ in Equation 3.31 with γ/(1 + η̂^SNP(q)), giving

    PD^OS-CFAR(q) = ∏_{i=0}^{kos−1} (2W − i) / (2W − i + (γ/(2W(1 + η̂^SNP(q)))))        (3.32)

where

    η̂^SNP(q) = S^lin(q) / S_kos^lin        (3.33)
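The ranking test of Equations 3.20, 3.21, and 3.33 can be sketched as follows. This is a simplified illustration: the scale factor `gamma` is an arbitrary constant here, whereas in practice it would be solved from Equation 3.28 for the desired Pfa^OS-CFAR, and edge cells with fewer than kos reference samples are simply skipped.

```python
def os_cfar(s_lin, W=10, G=2, k_os=None, gamma=20.0):
    """OS-CFAR test: compare each CUT against gamma times the k_os-th
    ordered neighbouring power sample (guard cells excluded)."""
    if k_os is None:
        k_os = (3 * W) // 2            # Rohling's recommendation
    detections = []
    for q in range(len(s_lin)):
        ranked = sorted(s_lin[j] for j in range(q - W, q + W + 1)
                        if 0 <= j < len(s_lin) and abs(j - q) > G)
        if len(ranked) < k_os:
            continue                   # too close to the array edge
        t = ranked[k_os - 1]           # k_os-th ordered statistic, Eq. 3.20
        if s_lin[q] >= gamma * t:      # Eq. 3.21
            detections.append(q)
    return detections

# Two closely spaced targets on a flat floor: both are declared, since
# the ranked noise estimate ignores the neighbouring strong return.
a_scope = [1.0] * 100
a_scope[40], a_scope[45] = 300.0, 200.0
print(os_cfar(a_scope))   # -> [40, 45]
```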


Figure 3.12  The OS-CFAR processor. Note that in this case, Equation 3.31 must be solved to determine γ for given Pfa^OS-CFAR, W, and kos. Again, for brevity in the figure, S corresponds to S^lin.

Figure 3.12 shows the OS-CFAR concept. In the article that introduced the OS-CFAR processor, Rohling showed that choosing kos ≈ 3W/2 minimized a parameter called the average decision threshold, corresponding to the separation between two probability of detection curves. Hence choosing kos ≈ 3W/2 allows the OS-CFAR processor to perform almost as well as the CA-CFAR processor under truly homogeneous background clutter, but avoids the danger of allowing nonhomogeneous power values to corrupt the noise estimate when such assumptions are not true.

3.5.2.3  Performance of the OS-CFAR Processor

The results of applying the OS-CFAR processor, with parameters W = 20, Pfa^OS-CFAR = 10^−6, kos = 3W/2, and G (number of guard cells) = 2, to the A-scope of Figure 3.2 are shown in Figure 3.13. The adaptive threshold correctly yields 4 landmark signals within the data, without reporting any false alarms. The local maximum of each peak that exceeds the threshold can then be used to generate a range estimate at the given bearing angle.

Figure 3.13  Single A-scope of Figure 3.2, with its OS-CFAR adaptive threshold superimposed. Received power values that exceed the OS-CFAR threshold yield the detection of 4 out of 5 correct landmarks present in the data.

To further demonstrate the performance of the OS-CFAR processor, the case of target masking, caused by the CA-CFAR processor, is again revisited. Figure 3.14 shows an extreme case of target masking with the CA-CFAR processor for an A-scope, again recorded from the two landmarks in close proximity, one at a range of 25 m and the other at 32 m, introduced in Section 3.5.1.3. In this particular A-scope and application of the CA-CFAR processor, both landmarks were missed due to the high-powered returns from the actual landmarks occurring within the CA-CFAR window. This in turn corrupts the noise estimate, making the value of S^CA-CFAR in Equation 3.18 too high in the corresponding cells surrounding the actual landmarks. Figure 3.15 shows the results of applying the OS-CFAR detector to the same A-scope display as Figure 3.14. It can be seen in the figure that, due to the selection of a single power value S_kos^lin, rather than an averaged power value, as in the CA-CFAR processor, the OS-CFAR power threshold is more responsive to power changes in the A-scope and adapts more rapidly to the change in power between the closely spaced landmarks. This results in multiple detections of both targets, shown as solid circles, at the expense of multiple false alarms at a range of 180 m.

Figure 3.14  The effect of target masking in the CA-CFAR processor. With two landmarks in close proximity, and within 2W range bins of each other, in this case both result in missed detections.


Figure 3.15  OS-CFAR performance with landmarks in close proximity. In contrast to the CA-CFAR detector (Figure 3.14), both landmarks are successfully detected.

Figure 3.16  A comparison of OS-CFAR and CA-CFAR, again demonstrating the ability of the OS-CFAR processor to overcome target masking. For clarity, the CA-CFAR detection markers (triangles) are displaced vertically above the OS-CFAR detection markers (circles).


Finally, Figure 3.16 shows a comparison of the CA-CFAR and OS-CFAR processor thresholds and detections for yet another A-scope display, corresponding to the two closely spaced targets. Again in this example, the OS-CFAR detector correctly detects both landmarks, whereas the CA-CFAR detector misses the tree with the lower RCS. Again it can be seen in the figure that the OS-CFAR adaptive threshold responds more rapidly to the A-scope power spikes than the CA-CFAR threshold. Due to the averaging of power values within the CA-CFAR processor, its adaptive threshold (dashed line) remains just above the missed landmark’s power value for almost the entire window width, 2W, causing it to be masked. Note, however, the slight increase in the number of false alarms in the OS-CFAR method.
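The masking mechanism compared above can be reproduced on synthetic data. In the sketch below (illustrative numbers throughout), a strong return 5 bins away inflates the CA noise estimate at the weaker target's cell, while the ordered-statistic estimate stays at the noise floor.

```python
def noise_estimates(s_lin, q, W=10, G=2):
    """Return (CA mean, k_os-th ordered sample) over the reference
    window of cell q, with guard cells excluded."""
    window = [s_lin[j] for j in range(q - W, q + W + 1)
              if 0 <= j < len(s_lin) and abs(j - q) > G]
    k_os = (3 * W) // 2
    return sum(window) / len(window), sorted(window)[k_os - 1]

a_scope = [1.0] * 100                    # flat unit noise floor
a_scope[40], a_scope[45] = 300.0, 20.0   # strong and weak targets, 5 bins apart

gamma = 15.0                             # illustrative scale factor
ca_t, os_t = noise_estimates(a_scope, q=45)
masked_by_ca = a_scope[45] < gamma * ca_t      # weak target missed by CA
detected_by_os = a_scope[45] >= gamma * os_t   # weak target found by OS
```

The CA estimate at bin 45 is pulled up by the 300-power neighbour, so the weak return falls below the scaled threshold, exactly the behaviour seen in Figures 3.14-3.16.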

3.6  Summary

For imaging sensors, such as underwater imaging sonars, cameras, and radars, the measurement can be considered to contain raw signal intensity measurements without any landmark detection preprocessing. It was shown (in Figures 3.2 and 3.13 and in Chapter 2) that radar can yield information from multiple down-range landmarks in a single A-scope. Section 3.3 demonstrated that, because of this, many of the robotics-based detection algorithms either result in a significant loss of information by applying fixed thresholds to the received power data or misleadingly attempt to fit spatial distributions to the A-scope, which inadvertently, and possibly erroneously, assume the existence of a single target within it. Interestingly, the radar literature has followed a significantly different path in the detection of objects. Adaptive received power threshold methods, exemplified in Section 3.5, with CFAR properties have been introduced for performing landmark detection with such imaging sensors. Such detection methods do not suffer from the loss of information problems associated with constant thresholds and make no assumptions regarding the number of targets present in an A-scope. As shown in Section 3.5, however, CFAR detectors also rely on certain assumptions. In particular, they assume that the received power signal obeys certain statistical conditions and exploit them to obtain a statistical threshold, analogous to the normalized innovation square threshold used for data association in much of the robotics literature [21, 22]. In reality, however, the actual rate of false alarms is much higher than that predicted by the CFAR processors of Section 3.5, as will be demonstrated in Chapter 6. This is primarily due to violations of the idealistic assumptions made in the detectors' derivations.
However, the CFAR detectors examined in this chapter are based on principles of greater relevance to the data produced by imaging sensors, such as radar, than the detection schemes currently typical in robotics, as highlighted in Section 3.3. Indeed, in terms of maximizing the detection probability while minimizing the false alarm probability, CFAR approaches have been demonstrated to outperform standard methods of landmark detection used in various autonomous navigation applications with imaging sensors [23]. This will be demonstrated further in Chapter 10, where reliable landmarks must be detected in a coastal environment in the presence of noise and high sea clutter. A comparison of the CA-CFAR and OS-CFAR processors demonstrated the superior ability of the OS-CFAR processor in detecting targets that lie in close proximity to one another. This came at a slight cost of increased false alarms. In the detection literature, both processors are used and chosen according to the type of environment to be sensed. Both CA-CFAR and OS-CFAR processors will therefore be the landmark detection techniques of choice in the radar-based feature extraction and navigation studies throughout this book.

3.7  Bibliographical Remarks

An overview of CFAR processing techniques, with derivations of their probabilities of false alarm and detection, is given in [1, 2]. Many versions of the CFAR processor have been demonstrated under various noise and clutter conditions [19, 24–26]. The most common CFAR processor is the cell averaging (CA)-CFAR processor [15]. In 1994, Gandhi et al. proved that a cell averaging (CA)-CFAR processor was the optimal detector in the presence of unknown noise power [17]. In Habib et al., closed form expressions for the probabilities of false alarm and detection were derived for applications of the CA-CFAR processor to target detection at sea, amid sea clutter [27]. In this work, it was demonstrated that many target distribution models based on Gaussian, log-normal, or Weibull statistics were unsuitable for modeling sea clutter. A distribution deemed appropriate to model this kind of clutter was the K-distribution; however, a closed form expression for the probability of detection under this model remains elusive [28]. Therefore, Shnidman proposed a noncentered chi-2 gamma distribution, which he demonstrated to be a suitable model for high-resolution sea clutter [29]. The work by Habib et al. then derived closed form expressions for both the probabilities of false alarm and detection for this type of target distribution under the CA-CFAR processor. In Raghavan's paper, an analysis of linear-law detection CA-CFAR processors was given in comparison with the usual square-law detection techniques [30]. The work showed that Rayleigh target distributions could be approximated by a mixture of chi-squared densities, allowing derivations of the test statistics in a CA-CFAR processor to proceed based on these densities, instead of the sums of IID Rayleigh variates. It was demonstrated that the probability of detection versus signal-to-clutter ratio (SCR) curves for each detection law were very similar at most SCRs.
Various modifications have been applied to improve the detection performance of the CFAR processor under nonideal situations. In order to preserve a constant FAR property during clutter transitions, the greatest-of (GO) CFAR processor was proposed [31]. In the GO-CFAR processor, the greater average of either the leading or lagging windows is used to estimate the noise power. Although the GO-CFAR processor can regulate the FAR at clutter transitions to some extent, CFAR loss is higher compared to the CA-CFAR processor, resulting in target masking. The smallest-of (SO) CFAR processor, which utilizes the smallest average of the leading and lagging windows, can resolve closely lying features [32]. SO-CFAR detector performance deteriorates when the IID assumption is violated, such as in the presence of the previously mentioned clutter transitions in the reference window. The OS-CFAR processor can overcome the issue of masking, thereby resolving closely lying features [33]. The trimmed-mean (TM) CFAR processor is reported as being a more generalized improvement over the OS-CFAR processor, which combines ordering and arithmetic averaging [34]. False alarm occurrences in TM-CFAR processors are reported to be reduced compared to the OS-CFAR processors [25]. To extract features within cluttered environments accurately, techniques including the censored greatest-of (CGO) [35, 36], automatic censored mean level detector (ACMLD) [37], and ordered statistics greatest-of (OSGO) [38] CFAR processors have been proposed. The performances of the CGO and ACMLD-CFAR detectors in the presence of clutter transitions are similar to the performance of the GO-CFAR processor. However, in the case of the CGO-CFAR processor, if the number of interferers exceeds a predefined limit, its detection performance degrades severely. The ACMLD-CFAR processor requires intense computation, due to its relatively complex algorithm [10]. Similarly, the OSGO-CFAR processor requires a great deal of processing time to decide upon which of the two reference subwindows has to be selected to determine the noise/clutter power. A composite CFAR processor approach was adopted in [25], known as the variability index (VI)-CFAR processor, which utilizes the advantages of various standard CFAR techniques. The VI-CFAR processor is composed of the CA, GO, and SO-CFAR processors. A slight modification in the VI-CFAR architecture is shown in [26], where the SO is replaced by an OS-CFAR processor in the design. Threshold selecting rules are also modified in [26].

5. As the moving window of the CFAR processor slides past a cluttered region within the A-scope display, a portion of the window may contain clutter and the rest may be occupied by noise. This will cause a sudden change in the average power inside the window, which violates the IID noise assumptions.

6. CFAR loss is the incremental signal-to-noise ratio (SNR) required by a given CFAR threshold estimation scheme to achieve a specified detection probability, PD, over that required with known background interference.
The VI-CFAR processor shows superior clutter rejection over other standalone CFAR processors [25]. However, detection performance degrades if the interfering targets are not confined to one half of the reference window [26].

References

[1] Currie, N. C., and C. E. Brown, Principles and Applications of MMW Radar, Dedham, MA: Artech House, 1987.
[2] Barkat, M., Signal Detection and Estimation, Norwood, MA: Artech House, 2005.
[3] Guivant, J., E. Nebot, and S. Baiker, "Autonomous Navigation and Map Building Using Laser Range Sensors in Outdoor Applications," Journal of Robotic Systems, Vol. 17, No. 10, October 2000, pp. 565–583.
[4] Makarsov, D., and H. F. Durrant-Whyte, "Mobile Vehicle Navigation in Unknown Environments: A Multiple Hypothesis Approach," IEE Proceedings of Control Theory Applications, Vol. 142, No. 4, July 1995, pp. 385–400.
[5] Adams, M. D., F. Tang, W. S. Wijesoma, and C. Sok, "Convergent Smoothing and Segmentation of Noisy Range Data in Multiscale Space," IEEE Transactions on Robotics, Vol. 24, No. 3, June 2008, pp. 746–753.
[6] Tardos, J., J. Neira, P. Newman, and J. Leonard, "Robust Localization and Mapping in Indoor Environments Using Sonar Data," The International Journal of Robotics Research, Vol. 21, No. 4, April 2004, pp. 311–330.


  [7] Ribas, D., P. Ridao, J. D. Tardos, and J. Neira, “Underwater SLAM in a Marina Environment.” In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, October 2007.
  [8] Clark, S., and G. Dissanayake, “Simultaneous Localisation and Map Building Using Millimetre Wave Radar to Extract Natural Features.” In International Conference on Robotics and Automation (ICRA), IEEE, Detroit, MI, May 1999, pp. 1316–1321.
  [9] Chandran, M., and P. Newman, “Motion Estimation from Map Quality with Millimeter Wave Radar.” In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Beijing, China, October 2006.
 [10] Khalighi, M. A., and M. H. Bastani, “Adaptive CFAR Processor for Nonhomogeneous Environments,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 36, 2000.
 [11] Durrant-Whyte, H. F., S. Majumder, M. de Battista, and S. Scheding, “A Bayesian Algorithm for Simultaneous Localisation and Map Building.” In The Tenth International Symposium of Robotics Research (ISRR), Victoria, Australia, 2001.
 [12] Skolnik, M. I., Introduction to Radar Systems, New York: McGraw Hill, 1982.
 [13] DiFranco, J. V., and W. L. Rubin, Radar Detection, Dedham, MA: Artech House, 1980.
 [14] Blackman, S., and R. Popoli, Design and Analysis of Modern Tracking Systems, Norwood, MA: Artech House, 1999.
 [15] Gandhi, P. P., and S. A. Kassam, “Analysis of CFAR Processors in Nonhomogeneous Background,” IEEE Transactions on AES, Vol. 24, No. 4, July 1988, pp. 427–445.
 [16] Langer, D., “An Integrated MMW Radar System for Outdoor Navigation.” In International Conference on Robotics and Automation, IEEE, Minneapolis, MN, April 1996, pp. 417–422.
 [17] Gandhi, P. P., and S. A. Kassam, “Optimality of the Cell Averaging CFAR Detector,” IEEE Transactions on Information Theory, Vol. 40, July 1994, pp. 1226–1228.
 [18] Foessel, A., J. Bares, and W. R. L. Whittaker, “Three-Dimensional Map Building with MMW Radar.” In Proceedings of the 3rd International Conference on Field and Service Robotics, Helsinki, Finland, June 2001.
 [19] Rohling, H., “Radar CFAR Thresholding in Clutter and Multiple Target Situations,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 19, No. 4, 1983, pp. 608–621.
 [20] Magaz, B., A. Belouchrani, and M. Hamadouche, “Automatic Threshold Selection in OS-CFAR Radar Detection Using Information Theoretic Criteria,” Progress in Electromagnetics Research B, Vol. 30, 2011, pp. 157–175.
 [21] Mahalanobis, P. C., “On the Generalized Distance in Statistics,” Proceedings of the National Institute of Sciences of India, Vol. 2, No. 1, 1936, pp. 49–55.
 [22] Bar-Shalom, Y., and T. E. Fortmann, Tracking and Data Association, New York: Academic Press, 1988.
 [23] Mullane, J., Autonomous Navigation: On Issues Concerning Measurement Uncertainty, PhD Thesis, School of Electrical & Electronic Engineering, Nanyang Technological University, 2008.
 [24] Weiss, M., “Analysis of Some Modified Cell-Averaging CFAR Processors in Multiple-Target Situations,” IEEE Transactions on AES, Vol. 18, January 1982, pp. 102–113.
 [25] Smith, M. E., and P. K. Varshney, “Intelligent CFAR Processor Based on Data Variability,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 36, No. 3, 2000, pp. 837–847.
 [26] Hammoudi, Z., and F. Soltani, “Distributed IVI-CFAR Detection in Non-Homogeneous Environments,” Signal Processing, Vol. 84, No. 7, 2004, pp. 1231–1237.
 [27] Habib, M. A., M. Barkat, B. Alssa, and T. A. Denidni, “CA-CFAR Detection Performance of Radar Targets Embedded in ‘Non-Centered Chi-2 Gamma’ Clutter,” Progress in Electromagnetics Research (PIER), Vol. 88, 2008, pp. 135–148.


 [28] Gini, F., “Suboptimum Coherent Radar Detection in a Mixture of K-Distributed and Gaussian Clutter,” IEE Proceedings: Radar, Sonar and Navigation, Vol. 144, No. 1, February 1997, pp. 292–299.
 [29] Shnidman, D. A., “Generalized Radar Clutter Model,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 35, No. 3, July 1999, pp. 857–865.
 [30] Raghavan, R. S., “Analysis of CA-CFAR Processors for Linear-Law Detection,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 28, No. 3, July 1992, pp. 661–665.
 [31] Hansen, V. G., “Constant False Alarm Rate Processing in Search Radars.” In Proceedings of IEE International Radar Conference, London, 1973, pp. 325–332.
 [32] Trunk, G. V., “Range Resolution of Targets Using Automatic Detectors,” IEEE Transactions on AES, Vol. 14, September 1978, pp. 750–755.
 [33] Shor, M., and N. Levanon, “Performances of Order Statistics CFAR,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 27, No. 2, 1991, pp. 214–224.
 [34] El Mashade, M. B., “Detection Performance of the Trimmed-Mean CFAR Processor with Noncoherent Integration,” IEE Proceedings: Radar, Sonar and Navigation, Vol. 142, No. 1, February 1995, pp. 18–24.
 [35] Al-Hussaini, E. K., “Performance of the Greater-of and Censored Greater-of Detectors in Multiple Target Environments,” IEE Proceedings F: Radar and Signal Processing, Vol. 135, 1988, pp. 193–198.
 [36] Ritcey, J. A., and J. L. Hines, “Performance of Max-Mean Level Detector With and Without Censoring,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 25, 1989.
 [37] Himonas, S. D., and M. Barkat, “Automatic Censored CFAR Detection for Nonhomogeneous Environments,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 28, 1992.
 [38] Elias-Fuste, A. R., M. G. de Mercado, and E. R. Davo, “Analysis of Some Modified Ordered Statistic CFAR: OSGO and OSSO CFAR,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 26, January 1990, pp. 197–202.
 [39] Ritcey, J. A., and J. L. Hines, “Performance of Max Family of Order-Statistic CFAR Detectors,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 27, 1991.

Chapter 4

Robotic Navigation and Mapping

4.1  Introduction

An autonomous robot consists of a mobile platform that receives command inputs to its motion controller, while having the goal of maneuvering through an environment to perform some task. A suite of onboard sensors is assumed available to the platform, consisting of both proprioceptive and exteroceptive devices. Process models are used in conjunction with the proprioceptive measurements to generate an initial prediction of the vehicle's location. Via appropriate sensor models, exteroceptive measurements are then combined with this prediction to correct the location estimate and to reduce, and place bounds on, the location estimation error. This process is known as autonomous navigation.

As an autonomous robot navigates, the information gathered via its exteroceptive sensors can be used to create a mathematical representation of the environment, referred to as the map. This process is referred to as autonomous mapping. In many applications it is necessary for a robot to carry out mapping at the same time as estimating its location, based only on its proprioceptive and exteroceptive sensors and assumed process and sensor models. This process is known as simultaneous localization and map building (SLAM), which, in the robotics literature, has been referred to as the “holy grail” of autonomous robotics research [1].

Such an autonomous system is subject to multiple sources of uncertainty. Errors in the vehicle process model, as well as unmodeled disturbances that occur at the motion controller inputs and noise in the proprioceptive measurements, lead to incremental localization errors. The exteroceptive measurements are subject to landmark detection or feature extraction uncertainty, spurious measurements, and data association uncertainty, as well as spatial (typically range and bearing) measurement noise. Furthermore, these errors are correlated, due to the dependence between the map and localization estimates.
To contend with these multiple sources of uncertainty, a multitude of probabilistic and nonprobabilistic approaches have been proposed. Some researchers cast the problem into nonprobabilistic frameworks such as genetic algorithms (GAs) or kinematic link problems. Dong [1] and Begum [2] approach the problem from a GA perspective, with a kinematic link approach being presented by Porta [3]. However, following the seminal paper by Smith, Self, and Cheeseman [4], which cast the autonomous navigation problem into the probabilistic domain of a joint Kalman filtering framework, stochastic approaches have gained popularity. Probabilistic robotics has emerged as the most widely used framework, as it statistically integrates both the imperfect process models and sensor models through well-established probabilistic and statistical tools. As probabilistic approaches are also the focus of the radar-based mapping and navigation strategies of this book, in this chapter, the classical probabilistic


formulations of the problems are first outlined. A review of the most significant, commonly adopted solutions is then presented, highlighting both advantages and pitfalls, which are addressed in this book. The aim of this chapter is therefore to introduce principled autonomous mapping and SLAM concepts, which utilize the quantified radar measurement uncertainties, such as the probabilities of detection and false alarm introduced in Chapters 2 and 3, as well as the spatial uncertainties commonly used by robotics researchers. The latter often take the form of measurement range and bearing uncertainties. The chapter therefore defines the meaning of the terms mapping and SLAM. It also forms the foundations of the robotic mapping (RM) and SLAM experiments given in Chapters 7, 8, 9, and 10, and is an essential prerequisite to these chapters for those not familiar with probabilistic RM and SLAM concepts.

In Section 4.2, the SLAM problem is introduced in its most general probabilistic form, as a Bayesian recursion. This calculates a posterior joint distribution on the robot's pose and map state, which is updated every time new proprioceptive and exteroceptive sensor data is available, to optimally take into account the new information. This section then further highlights how the robot's pose and its surrounding map can be represented as a single state in a mathematically joint manner. A robotic mapping example will be used to introduce the classical grid and feature map representations, which form the foundations for advances in these representations, presented in Chapters 7, 8, 9, and 10.

Section 4.3 provides the general Bayesian equations for solving the robot mapping only problem, with assumed known vehicle pose, and the robot localization only problem, with an assumed known map. This is followed by Section 4.4, in which the two most popular robotic mapping solutions, grid-based robotic mapping (GBRM) and feature-based robotic mapping (FBRM), are examined.
Fundamental issues in the classical implementations of these methods are highlighted, in terms of the way measurements are represented and processed in both techniques. Then, in Section 4.5, the differing approaches that researchers from the robotics community, on the one hand, and the radar and tracking communities, on the other, have taken in relating sensor measurements to the mapping/SLAM state of interest are noted. These differences are of fundamental importance and, in fact, render the classical robotics-based applications of the GBRM and FBRM frameworks almost inapplicable to radar. They also highlight problems in their application to other sensors that require feature extraction algorithms, all of which, in some form, suffer from nonzero probabilities of false alarm and non-unity probabilities of detection. This section therefore re-examines a sensor measurement at its most fundamental level and presents different measurement models based on spatial uncertainty and object existence uncertainty.

The full Bayesian recursion solution to SLAM, presented in Section 4.2, is mathematically intractable for any realistic robotic application, and hence approximations are required. Therefore, in Section 4.6, arguably the two most popular approximations, based on approximate Gaussian (extended Kalman filter (EKF) SLAM) and particle solutions (a factorized solution to SLAM (FastSLAM)), are presented in detail, with other approximate methods summarized in the bibliographical remarks of Section 4.10.4. As well as the approximation errors introduced by these methods, they are further compromised by the necessary accompanying feature management and association methods, which are particularly fragile in the


presence of closely lying features, clutter, and missed detections. To provide the foundations for the implementations of EKF-SLAM and FastSLAM in Chapters 8, 9, and 10, three notable methods of feature association are presented, based on maximum likelihood, joint compatibility branch and bound (JCBB), and multiple hypothesis (MH) feature association.

Section 4.7 presents a detailed analysis of the recently introduced random finite set (RFS) paradigm for feature map representation, with concepts applied from the field of stochastic geometry. This section forms the focus of this chapter, and the RFS is used as the map representation of choice in the more advanced autonomous navigation experiments of Chapters 8, 9, and 10. In this section, motivation for the RFS, as opposed to the random vector representations of the previously introduced methods, is provided. This is based on the fact that it is the only concept to date that can propagate, via Bayesian methods, both the spatial and existence measurement uncertainties, highlighted in Section 4.5, in a joint manner. Building on this, a principled approximation of a random set, known as the probability hypothesis density (PHD), is introduced, which allows Bayesian estimation with RFSs to take place without the need for any fragile feature management and association methods. The PHD filter is introduced in terms of the estimation of a general RFS state, along with an intuitive interpretation.

Finally, it should be noted that estimation has very little meaning without a clear concept of estimation error. Therefore, in Section 4.8, an analysis of state-of-the-art performance metrics used in autonomous vehicle localization, mapping, and SLAM is presented, together with a discussion of their limitations in estimating the performance of realistic autonomous mapping algorithms.
In this section it will be further shown that the RFS map representation concept of Section 4.7 naturally allows a concise metric to be defined, which penalizes FBRM and SLAM algorithms for both feature location and feature number errors. The metric defined in Section 4.8.3, based on a second-order Wasserstein construct, will be used to gauge and compare different FBRM and SLAM algorithms, again in Chapters 8, 9, and 10. Following the summary in Section 4.9, detailed bibliographical remarks are provided in Section 4.10.
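To give a feel for how a set-based metric can penalize both feature location errors and feature number errors, the following is a hedged sketch of a second-order, OSPA-style distance between an estimated and a true point-feature map (a simplified stand-in for the metric of Section 4.8.3; the cut-off parameter c and the brute-force optimal assignment are illustrative choices):

```python
import numpy as np
from itertools import permutations

def ospa2(est, truth, c=2.0):
    """Second-order OSPA-style distance between two 2D point sets.
    Point-to-point distances are cut off at c, each unaccounted-for
    feature is penalized by c, and a brute-force optimal assignment
    is used (adequate for small maps)."""
    m, n = len(est), len(truth)
    if m == 0 and n == 0:
        return 0.0
    if m > n:                      # ensure est is the smaller set
        est, truth, m, n = truth, est, n, m
    best = np.inf
    for perm in permutations(range(n), m):
        d2 = sum(min(np.linalg.norm(np.asarray(est[i]) - np.asarray(truth[j])), c) ** 2
                 for i, j in enumerate(perm))
        best = min(best, d2)
    # location error plus a c^2 cardinality penalty per missing feature
    return np.sqrt((best + c ** 2 * (n - m)) / n)
```

A perfectly estimated map scores zero, while every missing or spurious feature contributes the full cut-off penalty c, so cardinality errors are penalized on the same scale as location errors.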

4.2  General Bayesian SLAM—The Joint Problem

The vast majority of recent autonomous navigation algorithms are derived from approximations to the classical Bayesian SLAM recursion. The work in this book also focuses on Bayesian probabilistic solutions; therefore, its derivation is briefly outlined here. The SLAM scenario is a vehicle moving through an environment represented by an unknown number of landmarks. At each time step, k, the vehicle is assumed to have received the noisy control inputs, U_{k-1}, to drive it to its next way point. Let the history of all vehicle control inputs, up to and including the input at time k - 1, be denoted U^{k-1} = [U_0, ..., U_{k-1}]. The vehicle state, X, is confined to exist on

1. Terminology: a “landmark” refers to any physical object in the given environment.
2. A notational point: “[ ]” denotes a vector (i.e., a known number of elements, in this case k, whose order is rigid and significant, indicating the order of sequential control inputs).


the state space X ∈ R^{n_X}, where n_X is the dimensionality of X. This is generally a three-dimensional state consisting of an (x, y) Cartesian pair, indicating the vehicle's location on a global metric frame, and the vehicle heading, φ. The vehicle trajectory at time k is denoted X^k = [X_0, ..., X_k]. The vehicle also receives a measurement, Z_k, at time k, of the environment, denoted M, using exteroceptive measuring sensors located on the vehicle. Let the vector Z^k = [Z_0, ..., Z_k] denote the history of all measurements, up to and including the measurement at time k. Note that the formulation here is presented for the general case of a map, M, and (assumed) state-related measurement, Z. Later sections outline the mathematical structure and theoretical significance of the popular feature and grid map representations, as well as the associated measurements.

The SLAM problem requires a robot to begin in a known location, X_0, in an unknown environment, M, and use its suite of onboard sensors to both construct a map and localize itself within that map, without the use of any a priori information. Thus, the problem is to estimate the robot pose, X, and the map, M. For a real-world application, this should be performed online and incrementally as the robot maneuvers about the environment. Due to vehicle motion and a limited sensor field of view, at any given time, only a portion of the entire map, M, will be visible to the mobile vehicle's sensor(s). Therefore, let M_k represent the portions of the map M that have been observed, at least once, by the onboard exteroceptive sensor. Depending on the vehicle trajectory, it may therefore not be possible to determine an estimate of the entire map as, in general, M_k ≠ M.

As robot motion introduces error, coupled with landmark sensing error, both localization and mapping must be performed simultaneously [6]. Probabilistic approaches model the system by a joint probability density function (PDF) on both the map and the vehicle location.
Therefore, at each time step k, the joint PDF, assuming that such a density exists, can be written as

p_{k|k}(X_k, M_k | Z^k, U^{k-1}, X_0)        (4.1)

which, from an optimal Bayesian perspective, captures all relevant statistical information about the vehicle state and the map (which has been observed over the vehicle trajectory X^k). Recursive probabilistic solutions generally take on a predictor/corrector form, where the control inputs are used to predict the joint state, p_{k|k-1}(X_k, M_k | Z^{k-1}, U^{k-1}, X_0), while the observations are used to provide a correction to the predicted state, to yield the posterior of Equation 4.1. Using the total probability theorem and the Chapman-Kolmogorov equation, the predicted density (or time update) can be written as

p_{k|k-1}(X_k, M_k | Z^{k-1}, U^{k-1}, X_0)
  = ∫ p_{k|k-1}(X_k, X_{k-1}, M_k | Z^{k-1}, U^{k-1}, X_0) dX_{k-1}
  = ∫ p_{k|k-1}(X_k | X_{k-1}, M_k, Z^{k-1}, U^{k-1}, X_0)
      × p_{k-1|k-1}(X_{k-1}, M_{k-1} | Z^{k-1}, U^{k-2}, X_0) dX_{k-1}        (4.2)


Since the vehicle motion is independent of the observations, the map is assumed time-invariant, and the vehicle inputs are temporally independent, the predicted joint density can be written as

p_{k|k-1}(X_k, M_k | Z^{k-1}, U^{k-1}, X_0)
  = ∫ p_{k|k-1}(X_k | X_{k-1}, U_{k-1}) p_{k-1|k-1}(X_{k-1}, M_{k-1} | Z^{k-1}, U^{k-2}, X_0) dX_{k-1}        (4.3)

Equation 4.3 consists of the joint vehicle and map posterior density at time k - 1 and the transition density, p_{k|k-1}(X_k | X_{k-1}, U_{k-1}), which describes the predicted vehicle location given the control input at time k - 1. The latter is assumed to obey a first-order Markov process; thus, the predicted vehicle state at time k depends solely on the previous state X_{k-1} and the input U_{k-1}. According to Bayes rule, the corrector, based on a measurement update, is then written as

p_{k|k}(X_k, M_k | Z^k, U^{k-1}, X_0)
  = g(Z_k | M_k, X_k) p_{k|k-1}(X_k, M_k | Z^{k-1}, U^{k-1}, X_0)
    / ∫∫ g(Z_k | M_k, X_k) p_{k|k-1}(X_k, M_k | Z^{k-1}, U^{k-1}, X_0) dX_k dM_k        (4.4)

where g(Z_k | M_k, X_k) is the measurement likelihood function (i.e., given states X_k and M_k at time k, the probability density of receiving the observation Z_k is given by g(Z_k | M_k, X_k); at this stage it is assumed that integration with respect to the map is well defined, whatever its representation). Since the posterior joint density at time k is a function of the measurement, Z_k, the vehicle control input, U_{k-1}, and the previous joint density, p_{k-1|k-1}(X_{k-1}, M_{k-1} | Z^{k-1}, U^{k-2}, X_0), a recursive probabilistic framework for solving the SLAM problem has been established. This formulation assumes a single mobile vehicle and a single exteroceptive measurement sensor.

The SLAM problem requires the estimation of both the vehicle state and the map state. The following sections therefore discuss popular methods of modeling these states.

3. Note the notational usage for the measurement likelihood. Throughout this book, the p_k notation is used only on the densities from which state estimates are to be extracted via a suitable Bayes optimal estimator. While not commonly used, we believe that denoting the measurement likelihood by g_k adds clarity and improves readability.

4.2.1  Vehicle State Representation

Representing the vehicle state, X_k, in the PDF of Equation 4.4 is relatively straightforward, as most SLAM algorithms proposed in the literature are formulated for a single autonomous platform that can maneuver around a continuous state space in response to its control inputs. The number of degrees of freedom (DOF) of the vehicle state thus depends on the dimensionality of an individual map element, n_m (with dim(M) representing the arbitrarily large dimensionality of the space represented by the map), as this defines the space about which the vehicle can maneuver. Generally,

n_X = 3 if n_m = 2        (4.5)

n_X = 6 if n_m = 3        (4.6)

The majority of SLAM algorithms adopt a 3 DOF vehicle state that contains a Cartesian (x, y) pair with a heading angle φ. This is because most of the literature on SLAM assumes a planar, n_m = 2, navigation environment [1, 6–13]. This is a common assumption for both indoor and outdoor land-based autonomous vehicles, and n_X = 3 is assumed for all of the mapping and navigation concepts throughout this book. An important component of all navigation concepts is the vehicle motion model, represented by the general equation

X_k = f^{veh}(X_{k-1}, U_{k-1}, υ_{k-1})        (4.7)

where f^{veh}() represents the, in general, nonlinear vehicle state transition function, relating the pose of the vehicle at time k to its previous pose, X_{k-1}, and the previous input, U_{k-1}. υ_{k-1} is a vector representing the uncertainty in the state transition model. An example of such a state transition function now follows.

4.2.1.1  Simple 2D Vehicle Motion Model

A simple planar vehicle motion model can be based on the Ackerman motion model [14]. The control inputs to the vehicle can be considered to be its time-varying forward speed, V, and steering angle, γ_s. In terms of these inputs, the vehicle state can be predicted from the following kinematic equations [1]

ẋ = V cos φ
ẏ = V sin φ
φ̇ = (V tan γ_s) / L_WB        (4.8)

where L_WB is the wheel base length defined in Figure 4.1. Note that the local origin on the vehicle is defined as the center of the rear axle. In discrete time form, with the inclusion of noise terms in the control inputs, Equations 4.8 can be expressed as

[x_k]   [x_{k-1}]   [ ΔT (V_{k-1} + υ^V_{k-1}) cos φ_{k-1}                             ]
[y_k] = [y_{k-1}] + [ ΔT (V_{k-1} + υ^V_{k-1}) sin φ_{k-1}                             ]
[φ_k]   [φ_{k-1}]   [ ΔT (V_{k-1} + υ^V_{k-1}) tan(γ_{s,k-1} + υ^{γ_s}_{k-1}) / L_WB   ]        (4.9)

Figure 4.1  Relating target location to vehicle motion. (The figure depicts the vehicle pose (x_k, y_k, φ_k) at time k, with wheel base L_WB, steering angle (γ_s)_k, and speed V_k, a radar mounted at (x_k^radar, y_k^radar), and a target i at (x_k^i, y_k^i), observed at range r_k^i and bearing θ_k^i, with radar cross section G^i_RCS.)

where x_k represents the x coordinate of the vehicle at time k; x_{k-1} represents the previous x coordinate of the vehicle at time k - 1 (and similarly for y and heading φ); ΔT is the time between updates k - 1 and k; and V_{k-1} and (γ_s)_{k-1} are the now discretized forward velocity and steering angle commands given to the vehicle at time k - 1. υ^V_{k-1} ~ N(0, (σ²_V)_{k-1}) and υ^{γ_s}_{k-1} ~ N(0, (σ²_{γ_s})_{k-1}) represent the input noise terms in the model, where N(μ, σ²) represents a Gaussian distribution with mean μ and variance σ². Note that Equation 4.9 represents a particular case of the general vehicle model of Equation 4.7.

Assuming a static environment with objects that can be spatially represented by points, the point target coordinate state transition equations, for the ith landmark, are simply

M_k = f^{map}(M_{k-1}, U_{k-1}, υ_{k-1})        (4.10)

where, in a 2D point feature-based environment, the point vector [x_k^i  y_k^i]^T defines map feature M_k^i, for all p point features, i = 1, ..., p. If these vectors are concatenated, the static map update equation results:

M_k = M_{k-1}        (4.11)
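A direct transcription of the discrete-time Ackerman model of Equation 4.9, together with the static map assumption of Equation 4.11, can be sketched as follows (the wheel base, time step, and noise standard deviations used here are illustrative values only):

```python
import numpy as np

def predict_pose(pose, V, gamma_s, dT=0.1, L_WB=2.5,
                 sigma_V=0.0, sigma_gs=0.0, rng=None):
    """One prediction step of the discrete Ackerman model (Equation 4.9).
    pose = [x, y, phi]; V and gamma_s are the speed and steering inputs,
    each corrupted by zero-mean Gaussian noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    x, y, phi = pose
    Vn = V + rng.normal(0.0, sigma_V)            # noisy speed input
    gn = gamma_s + rng.normal(0.0, sigma_gs)     # noisy steering input
    return np.array([x + dT * Vn * np.cos(phi),
                     y + dT * Vn * np.sin(phi),
                     phi + dT * Vn * np.tan(gn) / L_WB])

def predict_map(M):
    """Static map assumption (Equation 4.11): the predicted map is
    simply the previous map estimate."""
    return M
```

For example, with zero noise, a vehicle at the origin heading along the x axis and driving straight at 1 m/s advances 0.1 m per 0.1 s time step, as Equation 4.9 predicts.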

It should be noted that for some 2D vehicle motion models, this analysis is not adequate, such as in a coastal environment, as will be demonstrated in Chapter 10. Further, for some applications such as airborne [15] or underwater autonomous vehicles [16], extensions to 6 DOF are necessary, where the planar navigation environment assumption is no longer valid. Some researchers have also examined 6 DOF

4. These could correspond to the centroids of clusters of radar data within an environment or the outputs of point feature extraction algorithms.


algorithms for land vehicles [17], with the aim of constructing 3D maps while maintaining accurate estimates of the vehicle's six-dimensional coordinates.

4.2.2  Map Representation

As the SLAM problem involves the estimation of the map state (M_k in the PDF of Equation 4.1), a brief overview of the popular methods of modeling the map is given in this section. The map is assumed to consist of landmarks that can take on any physical form and comprise everyday objects that can be detected in any given environment. Furthermore, these landmarks are assumed static [6]. To handle the inherent difficulties in solving the map-building problem due to sensor uncertainty, as well as possible vehicle location estimation error, two major probabilistic metric spatial map representations are commonly adopted in the autonomous navigation community. These take the form of the GBRM and FBRM methods already introduced. In a practical, planar autonomous navigation application (n_m = 2), the mapping state space is regarded as a continuous region in which landmarks are free to exist anywhere with an a priori uniform probability.

4.2.2.1  Grid-Based (GB) Maps

As the name implies, a grid-based map partitions the naturally continuous mapping state space into a grid of p discrete cells at predefined Cartesian coordinates m_(x,y), with reference to a global coordinate system [9, 18–26]. Each cell, m_(x,y), contains a further dimension, commonly referred to as an occupancy value, implying that even in a planar navigation scenario, the dimensionality of a grid-based map state is in fact 3. The occupancy state is a binary random variable where

m_(x,y) = 0 denotes complete certainty of no landmark occupying the cell at (x, y);
m_(x,y) = 1 denotes complete certainty of a landmark occupying the cell at (x, y);

and conversely, for the inverse, m̄_(x,y) = 1 - m_(x,y),

m̄_(x,y) = 0 denotes complete certainty of a landmark occupying the cell at (x, y);
m̄_(x,y) = 1 denotes complete certainty of no landmark occupying the cell at (x, y).

An example of an outdoor scene with radar detections (white) superimposed can be seen in Figure 4.2. A vehicle-centric view, inset in Figure 4.2, shows the typical landmarks in this environment, with superimposed laser range measurements. These measurements will be used for comparisons with radar detections in later chapters. Landmarks such as trees, lamp posts, fire hydrants, and embankments are all represented in the grid map, as they all represent landmarks intersecting the assumed sensor's plane of navigation. An occupancy grid map representation of the scene in Figure 4.2 can be seen in Figure 4.3. For every cell, m_(x,y), in the map M_k that is scanned by the sensor, an estimate of the occupancy random variable is made. Successive fusion of sensor data can lead to very informative map representations, at the cost of increased computational memory requirements. Therefore, a GB map, M = [m¹_(x,y) ... m^p_(x,y)]^T, is assumed to be comprised of p discrete cells, each containing a value indicating


Figure 4.2  Plan view of a campus environment with measurement data superimposed. The inset window shows the laser range data from a vehicle-centric perspective.

the probability of occupancy by a landmark at the discrete location m_(x,y) in the mapping space. Since discretization is an a priori task, the total number of map elements (cells), p, is a known constant. At any given time, however, the observed map, M_k, will comprise p̄ ≤ p cells, M_k = [m¹_(x,y) ... m^p̄_(x,y)]^T, containing posterior occupancy estimates as a result of registering a sensor measurement. For the cells,

Figure 4.3  An occupancy grid map representation of the scene in Figure 4.2.


Figure 4.4  A feature-based map representation of the scene depicted in Figure 4.2. (Figure annotations: the vehicle trajectory and the point features extracted from the environment.)

of M not in M_k, occupancy estimates are typically left at their a priori assumed value. Estimating a GB map, therefore, involves estimating the occupancy random variable for each of the (known) p discrete cells from which sensor measurements have been received.

4.2.2.2  Feature-Based (FB) Maps

Due to the vast diversity of landmarks that may appear in a vehicle's working environment, simplified representations of landmarks are commonly employed in SLAM algorithms. For instance, a tree may be parameterized by a single point in space corresponding to the estimated center of its assumed circular cross section [27], whereas a wall may be represented by an Euler angle and a perpendicular distance [28]. Such simplified landmark representations are referred to as features, with FB maps naturally referring to maps represented by a collection of features. An example of the same outdoor scene of Figure 4.2, but represented by an FB map, can be seen in Figure 4.4. Here, a point feature model [27] is used to represent cylindrical landmarks such as the trees, lamp posts, and fire hydrants. Note that the embankment on the left of Figure 4.2 is a physical landmark in the environment; however, as a consequence of its failure to fit the point feature model, it is not represented in the FB map of Figure 4.4.

FB maps are widely employed in the SLAM literature [1, 8, 11, 15, 28–35]. Therefore, a feature-based map, M = {m_1, ..., m_p}, is assumed to be comprised of an unknown number, p, of features, with m_1, ..., m_p denoting their locations in R² for planar maps. In a similar manner to the previous GB approach, let M_k = {m_1, ..., m_p̄}

5. Notation: “{ }” denotes a set (i.e., an unknown number of elements whose order is insignificant).
6. The symbol p thus represents the number of “elements” in the map. In the case of a GB map, it is the number of cells, and in an FB map, it denotes the number of features.


denote the estimates of the (unknown number of) features already observed, where p̄ ≤ p. Therefore, contrary to GB maps, estimating a feature-based map involves estimating both the number of observed features, p̄, in M_k, and their corresponding locations.

4.2.2.3  Comments on Grid- and Feature-Based Map Representations

An important advantage that inspires the use of occupancy grid mapping over the feature-based methods should be noted. Not only is information incorporated into the grid at the range reading reported by the sensor, but all of the cells between the sensor and the cell at the reported range can be assigned to be empty. Hence, information regarding the occupancy status of cells other than those at the sensed range/bearing value are updated. This is not the case in feature-based mapping, where only the feature coordinates at the location of the detection are updated, and no information is updated regarding the space between the sensor and that point in space. GB approaches, however, suffer from many disadvantages. Standard occupancy grid maps do not maintain the dependencies of the occupancy probability of each cell. Also, a full map posterior is generally intractable, due to the huge combinatorial number of maps that can be represented by a grid. Further, the grid size and its resolution (cell size) must be once and for all determined prior to navigation, thus restricting the area that can be mathematically represented by the robot. Feature-based robot mapping (FBRM) approaches offer the advantage that the sensor data is compressed into features (such as point clusters, circles, lines, and corners). The FB map representation has gained widespread popularity, particularly in the outdoor robotics domain due to its ability to estimate large scale maps and provide correctional information for simultaneous vehicle pose estimation. FBRM approaches are also computationally appealing, since few features need to be detected and updated per sensor scan, and feature-to-feature and feature-tovehicle correlations can be maintained. However, they fundamentally differ from their grid-based counterparts in that they are required to estimate the location of an initially unknown number of objects, represented as features. 
In GB approaches, the occupancy probability of an initially known number of cells is required.
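The beam-wise grid update described above can be sketched in a few lines of code. The log-odds bookkeeping, inverse-sensor increments, and cell size below are illustrative assumptions, not values from the text; the point is only that cells short of the detected range are pushed towards empty, while the cell at the detection is pushed towards occupied:

```python
import math

# Illustrative inverse-sensor log-odds increments (assumed values).
L_FREE = -0.4    # applied to cells between the sensor and the detection
L_OCC = 0.85     # applied to the cell containing the detection
CELL_SIZE = 0.5  # meters per cell (assumed grid resolution)

def update_beam(log_odds, detected_range):
    """Update a 1D array of cell log-odds with a single range detection.

    Cells between the sensor (cell 0) and the detection are pushed
    towards 'empty'; the cell containing the detection towards 'occupied'.
    Cells beyond the detection are left unchanged."""
    hit_cell = int(detected_range / CELL_SIZE)
    for i in range(min(hit_cell, len(log_odds))):
        log_odds[i] += L_FREE
    if hit_cell < len(log_odds):
        log_odds[hit_cell] += L_OCC
    return log_odds

def occupancy_probability(l):
    """Recover P(occupied) from a cell's log-odds value."""
    return 1.0 / (1.0 + math.exp(-l))

grid = [0.0] * 20                  # 20 cells, prior log-odds 0 (P = 0.5)
update_beam(grid, detected_range=4.2)
probs = [occupancy_probability(l) for l in grid]
```

Because each cell is filtered independently, repeated beams through the same region accumulate evidence cell by cell, which is exactly the decomposition the standard GB formulation assumes.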

4.3  Solving Robot Mapping and Localization Individually

The full SLAM recursion of Equation 4.4 propagates the joint density on the vehicle state and map. Much work has, however, also focused on the individual problems, namely, localization only and mapping only. While not optimal in an autonomous navigation sense, these approaches propagate a density on either the vehicle state or the map, assuming the other to be known. In probabilistic approaches, which dominate current autonomous mapping and localization research [13], the Bayesian formulations outlined here again form the basis for almost all proposed solutions.

116

Robotic Navigation and Mapping

4.3.1  Probabilistic Robotic Mapping

In this book, probabilistic robotic mapping refers to stochastic methods of estimating the posterior density on the map (either FB or GB), when at each time instant the vehicle trajectory, X^k, is assumed a priori known. The posterior density for the RM problem is therefore

p_{k|k}(M_k | Z^k, X^k)    (4.12)

This posterior encapsulates all relevant statistical information of the map Mk. Measurements registered by the onboard exteroceptive sensor can be incorporated through the Bayesian update

p_{k|k}(M_k | Z^k, X^k) = \frac{g(Z_k | M_k, X_k)\, p_{k|k-1}(M_k | Z^{k-1}, X^k)}{\int g(Z_k | M_k, X_k)\, p_{k|k-1}(M_k | Z^{k-1}, X^k)\, dM_k}    (4.13)

Since a static map is commonly assumed [22],

p_{k|k-1}(M_k | Z^{k-1}, X^k) = p_{k-1|k-1}(M_{k-1} | Z^{k-1}, X^{k-1})    (4.14)

that is, the time update density in the Bayes recursion is simply the posterior density at time k − 1. Note that, in general, a static map assumption does not necessarily imply that Equation 4.14 is valid. This is due to occlusions that may result in corrupted segments of M_k, which consequently cannot be observed by the sensor or represented by the likelihood g(Z_k | M_k, X_k). To model this added uncertainty, an extended formulation is required, with vehicle state-dependent Markov transition matrices or state-dependent detection probabilities incorporated into the measurement likelihood. This will be re-examined in Chapter 7, Section 7.3.4. Although not explicitly formulated in this manner, this observation was considered in the seminal scan-matching paper of Lu and Milios [36].

4.3.2  Probabilistic Robotic Localization

Robotic localization requires the recursive evaluation of the posterior of the vehicle state, when the map, M, is known a priori. The density encapsulating the vehicle location is then given by

p_{k|k}(X_k | M, Z^k)    (4.15)

Measurements registered by the onboard exteroceptive sensor can then be incorporated through the Bayesian update

p_{k|k}(X_k | M, Z^k) = \frac{g(Z_k | M, X_k)\, p_{k|k-1}(X_k | M, Z^{k-1}, U^{k-1})}{\int g(Z_k | M, X_k)\, p_{k|k-1}(X_k | M, Z^{k-1}, U^{k-1})\, dX_k}    (4.16)

While this book does not concern itself directly with the individual problem of robotic localization, localization is used for the evaluation of maps produced by the various algorithms discussed.


4.4  Popular Robotic Mapping Solutions

This section focuses on popular solutions to the problems associated with autonomous navigation.

4.4.1  Grid-Based Robotic Mapping (GBRM)

Since the grid map representation was first proposed in the seminal work of Moravec and Elfes [23], GBRM has been widely researched in the autonomous navigation community. The framework primarily addresses sensor detection uncertainty and spurious measurements, and does not explicitly deal with measurement noise. Data association uncertainty is typically approached using correlation-based methods. As described previously in Section 4.2.2.1, the map M_k is comprised of p cells, each indicating a probability of occupancy by a landmark at discrete locations in the mapping space. Since a trivial time update is typically employed [9], of critical importance to GB robotic mapping is the form of the measurement likelihood, g(Z_k | M_k, X_k), in Equation 4.13. The original formulation discussed in [9, 18, 37] assumed a zero-order Markov random field model for the grid map, M = [m^1_{(x,y)} … m^p_{(x,y)}]^T. Thus, instead of jointly solving the full posterior over the entire map, p_{k|k}(M | Z^k, X^k), the problem could be decomposed into p independent filtering problems for each individual cell; that is,

p_{k|k}(M | Z^k, X^k) = \prod_{i=1}^{p} p_{k|k}(m^i_{(x,y)} | Z^k, X^k)    (4.17)

which consequently implies

p_{k|k}(M_k | Z^k, X^k) = \prod_{i=1}^{q} p_{k|k}(m^i_{(x,y)} | Z^k, X^k)    (4.18)

Furthermore, the recursion of Equation 4.13 was written in the form of a likelihood ratio [37] of the density that the cell, m^i_{(x,y)}, is occupied by a landmark, or that it is empty, \bar{m}^i_{(x,y)},

\frac{p_{k|k}(m^i_{(x,y)} | Z^k, X^k)}{p_{k|k}(\bar{m}^i_{(x,y)} | Z^k, X^k)} = \frac{p(m^i_{(x,y)} | Z_k, X_k)\, p(\bar{m}^i_{(x,y)})\, p_{k-1|k-1}(m^i_{(x,y)} | Z^{k-1}, X^{k-1})}{p(\bar{m}^i_{(x,y)} | Z_k, X_k)\, p(m^i_{(x,y)})\, p_{k-1|k-1}(\bar{m}^i_{(x,y)} | Z^{k-1}, X^{k-1})}    (4.19)

Note that in this recursive formulation, the measurement likelihood, g(Z_k | M_k, X_k), is obtained through Bayes' rule, requiring the density on the map cell, m^i_{(x,y)}, given the measurement and vehicle location, p(m^i_{(x,y)} | Z_k, X_k), to be known. The prior density on the occupancy random variable of a given cell is denoted p(m^i_{(x,y)}). From a theoretical perspective, the model p(m^i_{(x,y)} | Z_k, X_k) directly infers an estimate of the map given the measurement, which is contrary to the general form of the optimal Bayesian recursion of Equation 4.4. Furthermore, a deterministic measurement of the occupancy random variable is assumed, which allows the posterior likelihood ratio to be written in odds form,



\frac{P_{k|k}(m^i_{(x,y)} | Z^k, X^k)}{P_{k|k}(\bar{m}^i_{(x,y)} | Z^k, X^k)} = \frac{P(m^i_{(x,y)} | Z_k, X_k)\, P(\bar{m}^i_{(x,y)})\, P_{k-1|k-1}(m^i_{(x,y)} | Z^{k-1}, X^{k-1})}{P(\bar{m}^i_{(x,y)} | Z_k, X_k)\, P(m^i_{(x,y)})\, P_{k-1|k-1}(\bar{m}^i_{(x,y)} | Z^{k-1}, X^{k-1})}    (4.20)

or, adopting the well-known odds notation \left( O_{k|k}(m^i_{(x,y)}) = \frac{P_{k|k}(m^i_{(x,y)})}{P_{k|k}(\bar{m}^i_{(x,y)})} \right),

O_{k|k}(m^i_{(x,y)} | Z^k, X^k) = O(m^i_{(x,y)} | Z_k, X_k)\, O_{k-1|k-1}(m^i_{(x,y)} | Z^{k-1}, X^{k-1})\, O(m^i_{(x,y)})^{-1}    (4.21)

Determining the probabilities, P(m^i_{(x,y)} | Z_k, X_k), requires extensive off-line prior training in the specific environment of deployment, thus making the approach unsuitable for general outdoor navigation applications. Furthermore, the lack of a statistical measurement likelihood compromises the optimality of the results. A summary of many notable research contributions to the GBRM problem is given in the bibliographical remarks in Section 4.10.1. In all of these approaches, however, the true uncertainty caused by measurement noise is not explicitly maintained in the statistical sense. Instead, 2D Gaussian spread functions are used to "smear" the occupancy posterior of a given cell, p_{k|k}(m_{(x,y)} | ·), onto neighboring cells according to the range/bearing measurement noise magnitude. Furthermore, a discrete version of Equation 4.13 is used to propagate estimates of the occupancy random variable. This is because the likelihood of a range measurement corrupted by Gaussian noise is used as the occupancy measurement. The evaluation of the cumulative distribution of the Gaussian in the cell, m_{(x,y)}, is then used as the likelihood of occupancy. While a range measurement can be used as a likelihood of a hypothesized landmark's location, it should not be used as a likelihood of cell occupancy. Based on initial results published in [38, 39], which will be extended in Chapter 7, this book shows that, from a theoretical perspective, only the detection statistics can infer any meaningful, state-related measurement of the occupancy random variable. Feature detections should generally be achieved based on concepts related to the detection theoretic methods of Chapter 3, allowing detection statistics to be incorporated, in a theoretically concise manner, into the GB robotic mapping problem.

4.4.2  Feature-Based Robotic Mapping (FBRM)

FB robotic mapping addresses the estimation of a FB map by a robotic feature mapping algorithm. FBRM can also refer to the map produced by any FB-SLAM algorithm in which the vehicle model is assumed correct and the vehicle control input noises are zero. Even with a known vehicle trajectory, the interpretation of sensor data and the consequent generation of a map estimate is far from trivial. Detection decisions or feature extraction routines [31, 40] applied to raw sensor measurements can be highly uncertain, resulting in frequent missed detections or numerous spurious readings.

7. Note that p(·) denotes a probability density function, whereas P(·) denotes a discrete probability.


In areas of dense features, even with a known vehicle trajectory, data association decisions can be challenging. Sensor biases [41] can also have detrimental effects on the estimated feature map, even from known vehicle locations. As highlighted previously in Section 4.2.2.2, a FB map is assumed to consist of an unknown number, p̄, of features. At any given time, a random number, p ≤ p̄, of these features can be present within the sensor's field of view; thus, for an a priori unknown map, p is a random number while p̄ is a fixed but unknown quantity. This was acknowledged many years ago in [42], which phrased the SLAM problem as "a state estimation problem involving a variable number of dimensions." Assuming an infinite sensor field of view, the measurement likelihood used in FB algorithms, g(Z_k | M_k, X_k), represents the likelihood of receiving a measurement from each of the p̄ features assumed to exist in the map. That is,

Z_k = h_k(M, X_k) + w_k    (4.22)

where h_k(·) is generally a nonlinear function mapping the landmark and vehicle locations into the relative range and bearing measurement, with w_k being the additive (usually assumed Gaussian) noise component. As written in the seminal work of Smith, Self, and Cheeseman [5], "the sensor function is a function of all the variables in the state vector." However, it is immediately clear that this equation contains no accommodation for spurious measurements, or for detection and data association uncertainty, and therefore a real sensor measurement, such as that from a radar, is not simply a function of all the variables in the state vector, as was demonstrated in Chapter 3. The likelihood from the sensor function in Equation 4.22 represents only the likelihood of measurements from each of the features in the map state, and when used in the FB Bayes recursion of Equation 4.13 it can result only in the posterior p_{k|k}(m_1, …, m_p | Z^k, X^k). It cannot jointly consider the problem of estimating the a priori unknown number of features, p̄. Furthermore, the standard measurement model of Equation 4.22 assumes that all features have a unity probability of detection/extraction. Thus, it is argued in this book that this FB measurement model, adopted by a multitude of FB mapping (and FB-SLAM) algorithms [1, 5, 11, 31, 34, 35, 43], fails to encapsulate all the uncertainty in a practical measuring sensor. Even though it is recognized that a FB approach has a variable number of dimensions, and that measurements are subject to numerous sources of uncertainty, the FB representation and resulting estimation theoretic algorithms primarily address only the sensor uncertainty resulting from spatial measurement noise, typically in range and bearing.
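The argument above can be made concrete with a small simulation, in which all parameter values are assumptions for illustration only. A radar-like scan generated from a known feature map has a randomly varying number of returns, because of missed detections and spurious measurements, which the fixed-dimension model of Equation 4.22 cannot express:

```python
import math
import random

# All parameter values below are illustrative assumptions.
P_D = 0.9             # probability of detecting each map feature
CLUTTER_RATE = 1.5    # expected number of spurious detections per scan
SIGMA_R, SIGMA_B = 0.25, 0.02  # range/bearing noise standard deviations
MAX_RANGE = 50.0

def sample_poisson(lam):
    """Draw a Poisson sample (Knuth's method); the stdlib has no poisson."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def simulate_scan(features, pose):
    """A scan is a *set* of detections of random size: each feature is
    seen only with probability P_D (missed detections), and a Poisson
    number of false alarms is added, unlike the ideal model of
    Equation 4.22, where Z_k is a fixed-dimension function of the map."""
    x, y, phi = pose
    scan = []
    for fx, fy in features:
        if random.random() < P_D:
            r = math.hypot(fx - x, fy - y) + random.gauss(0, SIGMA_R)
            b = math.atan2(fy - y, fx - x) - phi + random.gauss(0, SIGMA_B)
            scan.append((r, b))
    for _ in range(sample_poisson(CLUTTER_RATE)):  # spurious returns
        scan.append((random.uniform(0, MAX_RANGE),
                     random.uniform(-math.pi, math.pi)))
    return scan
```

Running this repeatedly over the same three-feature map yields scans of differing sizes, so any estimator fed such data must jointly infer the number of features as well as their locations.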
Due to the variable dimensionality of the FB map, its estimation involves jointly solving for both the random number of features within the field of view at each time instant, p (and consequently the total number of features, p̄), as well as their corresponding locations in a 2D Cartesian state space (in the case of planar, n_m = 2, mapping). Due to the subtle oversight in the measurement equation, Equation 4.22, FBRM algorithms typically estimate p̄ by incorporating ad hoc fixes, such as intuitive feature management routines that exploit the static map assumption.

8. Which is usually independent of discrete time k, but in general can be time varying.


Separate data association algorithms are required as a direct consequence of modeling the map state, M_k, as a vector [5, 6, 8, 29, 43, 44]. Consequently the measurement, which is a function of the map and vehicle state, is a vector. Since the order of elements in a vector is rigid, modeling in such a manner assumes that both the measurement and state vectors are perfectly aligned (i.e., z_1 is from m_1, z_2 is from m_2, and so on), with z ∈ Z_k and m ∈ M_k. Clearly, when a mobile platform is considered, this is, in general, not the case. Therefore, data association routines are required to solve the measurement-map relationship prior to the Bayesian map update. Mathematically, since both the measurement likelihood g_k(Z_k | M_k, X_k) and the predicted SLAM state p_{k|k-1}(X_{0:k}, M_k | Z_{0:k-1}, U_{0:k-1}, X_0), in the numerator of Equation 4.4, are represented by random vectors, they must be of compatible dimensions before the Bayes update can be carried out. This is why the independent data association steps are necessary. While in SLAM approaches it has long been acknowledged [5] that the problems of localization and map building must be jointly considered, no vector-based approach jointly solves the FB robotic mapping problem in its optimal Bayesian structure (i.e., the joint estimation of the number of features and their corresponding locations). The FB framework, in its vector-based form, is unable to deal with detection and data association uncertainty, as well as spurious measurements, in an optimal manner, an issue which is addressed in Section 4.7, where the SLAM problem will be reformulated under an RFS, as opposed to a random vector, concept.

4.5  Relating Sensor Measurements to Robotic Mapping and SLAM

Based on the discussions so far, a fundamental difference in the way that robotics researchers and radar/tracking researchers relate measurements to map/SLAM state estimation is evident. To highlight these fundamentally different approaches, reference is made to Figure 4.1, in which landmark i is "sensed" from the vehicle at time k. For ease of explanation, uncertainty in the bearing measurement θ_k^i is ignored in this section. In the case of radar, a "theoretical A-scope," which could result when the radar onboard the vehicle points in the direction of landmark i, is illustrated in Figure 4.5. Graph (a) represents the A-scope display itself. Graph (b) illustrates the concept of detection theory, applied to the A-scope in graph (a). This would typically be based on the CFAR concepts of Chapter 3. Note that in this hypothetical example, the received power from the true target exceeds the detection threshold, as does a false alarm further down range. Graph (c) illustrates the way in which most of the spatial modeling approaches in the robotics literature have interpreted this measurement information. Its interpretation exists only in the spatial domain. A probability distribution is created in the range space, with its mean located at the range bin of the closest range detection and spatial variance based on prior information about the sensor. Note that, by definition, the area under the distribution in range space must be unity. Graph (d) illustrates the way in which most of the detection modeling approaches in the radar and tracking literature have interpreted this form of measurement information. Based on a prechosen probability of false alarm, P_fa, from which the detection threshold is derived, every range bin q is accompanied by a calculated probability of detection, P_D, and a hypothesis value H_0 or H_1, corresponding to "no landmark detected" or "landmark detected," respectively. Note also that the area under graph (d) is not, in general, unity, since each range bin has its own unique value of P_D; graph (d) is not a PDF. Hence, under the interpretation in graph (d), two landmarks would be hypothesized in this case, the closer one corresponding to the true target and the further one corresponding to a false alarm.

The question naturally arises: which interpretation is correct? In the case of the spatial interpretation of graph (c), since a probability distribution in the spatial space is used, the area under this distribution is, by definition, unity, implying that one landmark is assumed to exist, with unity probability, somewhere in the range space. This is in conflict with the information provided by P_D, which, in general, is not unity. This creates theoretical issues in GBRM, in which the state to be estimated is that of grid cell occupancy, and it is not theoretically clear how to relate spatial measurements to an occupancy variable. This issue will be addressed in detail in Chapter 7, in which GBRM is reanalyzed. In the case of the detection-based interpretation of graph (d), each range bin comprises a binary landmark hypothesis, H_0 or H_1, each accompanied by values for P_D and P_fa. Since more than one range bin can be associated with hypothesis H_1, note that, contrary to the spatial interpretation, no assumption on the number of possible landmarks at bearing angle θ_k^i is made. Further, in the case of GBRM, if the measurement is considered to be a binary value H_{0/1} at each discrete range value r(q), its relation to the occupancy state in GBRM is derivable in a straightforward manner, as will be shown in Chapter 7.
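As a minimal sketch of the detection interpretation of graph (d), each range bin of a hypothetical A-scope can be assigned H0 or H1 by thresholding its received power. A fixed threshold is assumed here purely for illustration, whereas the CFAR processors of Chapter 3 would compute it adaptively per bin:

```python
def detect_bins(power, threshold):
    """Assign hypothesis H1 ('landmark detected') to every range bin
    whose received power exceeds the detection threshold, and H0
    otherwise. The fixed threshold is an illustrative stand-in for an
    adaptive CFAR threshold."""
    return ["H1" if p > threshold else "H0" for p in power]

# Hypothetical A-scope: a true target at bin 3 and a false alarm at bin 7.
a_scope = [0.2, 0.3, 0.4, 2.5, 0.5, 0.3, 0.2, 1.9, 0.4, 0.1]
hypotheses = detect_bins(a_scope, threshold=1.0)
```

With the illustrative values above, two bins exceed the threshold, so two landmarks would be hypothesized at this bearing angle, mirroring the true target plus false alarm of Figure 4.5.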
However, under this interpretation, contrary to the spatial interpretation of graph (c), it is not immediately clear how any prior knowledge of spatial uncertainty could be incorporated into the measurement information. The answer to the question of which interpretation is correct is therefore neither one, since both interpretations fail to represent all of the available information. It is also clear that care must be taken when interpreting measurements for the state estimation problem at hand. The following sections therefore introduce typical measurement models under each interpretation. These models are a fundamental component in the navigation and mapping analyses in Chapters 7 through 10. Fundamentally, in Section 4.7, the concept of a set-based measurement and state representation will be introduced, which allows all of the measurement information, spatial and detection, to be incorporated into joint Bayesian navigation and mapping frameworks.

4.5.1  Relating the Spatial Measurement Interpretation to the Mapping/SLAM State



Based on graph (c) in Figure 4.5, once a detection is made at range r_k^i, the measurement can be interpreted in a stochastic manner, under simple Gaussian assumptions, as the vector

Z_k^i = \begin{bmatrix} r_k^i \\ \theta_k^i \end{bmatrix}    (4.23)

122

Robotic Navigation and Mapping

with measurement covariance matrix

R_k^i = \begin{bmatrix} (\sigma_r^2)_k^i & 0 \\ 0 & (\sigma_\theta^2)_k^i \end{bmatrix}    (4.24)

where (\sigma_{r/\theta}^2)_k^i is the variance associated with the range/bearing measurement to landmark i at time k. From Figure 4.1, it is possible to relate the spatial measurement vector [r_k^i \; \theta_k^i]^T to the vehicle and environmental feature states, defined in Equations 4.9 and 4.10:

r_k^i = \sqrt{(x_k^i - x_k^{radar})^2 + (y_k^i - y_k^{radar})^2} + (w_k^r)^i    (4.25)

\theta_k^i = \arctan\left[\frac{y_k^i - y_k^{radar}}{x_k^i - x_k^{radar}}\right] - \phi_k + (w_k^\theta)^i    (4.26)

where (w_k^r)^i \sim N(0, (\sigma_r^2)_k^i) and (w_k^\theta)^i \sim N(0, (\sigma_\theta^2)_k^i) are the assumed additive, spatial measurement noise terms in range and bearing, respectively, and, due to the radar's offset distance from the vehicle's origin,

x_k^{radar} = x_k + a\cos\phi_k - b\sin\phi_k
y_k^{radar} = y_k + a\sin\phi_k + b\cos\phi_k    (4.27)

where a and b are defined in Figure 4.1. Equations 4.25 and 4.26 can be substituted into Equation 4.23 and written as a single vector equation. When augmented for all landmarks detected at different sensor bearing angles within a scan, this takes the form of the general measurement equation, Equation 4.22. From here on, however, it is necessary to clarify whether the measurement model assumed in the general measurement equation is of the spatial or detection type. Hence, for the spatial measurement interpretation, the general measurement equation will now be referred to as

Z_k = h_k^{spatial}(M_k, X_k) + w_k    (4.28)

where M_k and X_k were defined in Equations 4.11 and 4.7, and w_k corresponds to the augmented vector of measurement noises [(w_k^r)^i \; (w_k^\theta)^i]^T over all detected features within a scan.

4.5.2  Relating the Detection Measurement Interpretation to the Mapping/SLAM State

Techniques to relate the detection-type interpretation of a measurement to a robotic map or SLAM state will specifically be the subjects of Sections 4.7.2.3 and 7.2.2. In Section 4.7.2.3, once the robotic map state and measurements are defined as set-based quantities, it is possible to relate the detection-type measurements of graph (d) in Figure 4.5 to the state: previously estimated landmarks are updated based on their probabilities of detection, the birth of new landmarks is initiated according to the probability of false alarm, and spatial uncertainty is incorporated in a joint manner.
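Returning to the spatial interpretation of Section 4.5.1, Equations 4.25 through 4.27 can be sketched as follows; the radar offsets, poses, and landmark coordinates are illustrative assumptions:

```python
import math

def h_spatial(landmark, pose, offsets=(0.5, 0.0)):
    """Noise-free range/bearing of one landmark from the radar
    (Equations 4.25 and 4.26, with the noise terms w_k set to zero).
    The radar position is offset from the vehicle origin by (a, b)
    in the vehicle frame (Equation 4.27); 0.5 m / 0.0 m are assumed."""
    xi, yi = landmark
    x, y, phi = pose
    a, b = offsets
    xr = x + a * math.cos(phi) - b * math.sin(phi)
    yr = y + a * math.sin(phi) + b * math.cos(phi)
    r = math.hypot(xi - xr, yi - yr)
    theta = math.atan2(yi - yr, xi - xr) - phi
    return r, theta

# Vehicle at the origin facing along x, landmark 10 m ahead:
r, theta = h_spatial((10.0, 0.0), (0.0, 0.0, 0.0))
# r -> 9.5 (the radar sits 0.5 m ahead of the vehicle origin), theta -> 0.0
```

Stacking such (r, θ) pairs for every detected landmark in a scan yields the augmented vector form of Equation 4.28.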


[Figure 4.5 comprises four graphs plotted against range r at sensor bearing angle θ_k^i: (a) a theoretical A-scope display of received power; (b) detection theory applied to the A-scope, with a detection threshold (e.g., CFAR) exceeded at the true target range r_k^i and at a false alarm further down range; (c) the extracted spatial measurement information (common in robotics research), a density over range with unit area, centered at r_k^i; and (d) the extracted detection measurement information (common in radar research), a probability of detection per range bin r(q).]

Figure 4.5  The interpretation of measurement detection in robotics and radar/tracking research.

In Section 7.2.2, the measurement space is defined in a binary form, in which each measurement constitutes the hypothesis H_{0/1} in each range bin. This measurement is then related to the occupancy state variable of each cell within a GBRM implementation. The occupancy probability, together with the sensor likelihood densities, is estimated in a joint manner, based on these detection-type measurements. With appropriately defined measurements Z_k and state X_k, M_k, a robotic measurement model of this kind can be expressed in the form

Z_k = h_k^{detection}(M_k, X_k) + w_k    (4.29)

The details of different implementations of Equation 4.29 are specific to Sections 4.7.2.3 and 7.2.2, and their explanation is left until then.


4.6  Popular FB-SLAM Solutions

The Bayesian SLAM formulation outlined in Section 4.2 forms the basis of most probabilistic autonomous navigation algorithms. Direct implementations of the recursions are typically computationally intractable, and thus approximations are used. Gaussian and particle approximations remain the dominant methods within the autonomous navigation research community [6].

4.6.1  Bayesian FB-SLAM—Approximate Gaussian Solutions

This section reviews Gaussian-based solutions to the Bayesian FB-SLAM problem. As the optimal solution requires the joint estimation of both the vehicle pose and the map, let

\zeta_k = [X_k \; M_k]^T    (4.30)

be the true joint state on the vehicle and map, at time k. The most popular approximation of the recursion of Equation 4.4 is to assume that the map can be represented as a vector and that the SLAM posterior density at time k is Gaussian distributed according to

p_{k|k}(\zeta_k | Z^k, U^{k-1}, X_0) \approx \mathcal{N}(\zeta_k; \hat{\zeta}_{k|k}, P_{k|k})    (4.31)

with estimated mean, \hat{\zeta}_{k|k}, and covariance, P_{k|k}, given all the information up to and including time k. The observation, Z_k, is modeled as in Equation 4.22, with measurement covariance R_k = E[w_k^{spatial} (w_k^{spatial})^T]. Assuming a static map,

M_k = M_{k-1}    (4.32)

so that Equations 4.7 and 4.32 can be concatenated to give

\begin{bmatrix} X_k \\ M_k \end{bmatrix} = \begin{bmatrix} F^{veh}(X_{k-1}, U_{k-1}, V_{k-1}) \\ F^{map}(M_{k-1}, U_{k-1}, V_{k-1}) \end{bmatrix}    (4.33)

In terms of the joint SLAM state, \zeta_k = [X_k \; M_k]^T, Equation 4.33 can therefore be written as the first-order Markov process

\zeta_k = F^{SLAM}(\zeta_{k-1}, U_{k-1}, V_{k-1})    (4.34)

It can be shown that the predicted density can also be approximated by a Gaussian distribution,

p_{k|k-1}(\zeta_k | Z^{k-1}, U^{k-1}, X_0) \approx \mathcal{N}(\zeta_k; \hat{\zeta}_{k|k-1}, P_{k|k-1})    (4.35)

9. Note that the state transition function in Equation 4.34 is usually considered not to be time varying in robotic vehicle navigation, although in general it can vary with k and hence is denoted F_k(·).


through appropriate linearization methods, with \hat{\zeta}_{k|k-1} and P_{k|k-1} being the predicted mean and covariance, respectively. If a Gaussian distributed measurement likelihood is assumed, it can be shown that the first two moments of the Gaussian state can be propagated through the Bayesian recursion to produce a Gaussian distributed posterior p_{k|k}(\zeta_k | Z_{0:k}, U_{0:k-1}, X_0) \approx \mathcal{N}(\zeta_k; \hat{\zeta}_{k|k}, P_{k|k}), where [45]

\hat{\zeta}_{k|k} = \hat{\zeta}_{k|k-1} + K_k (Z_k - h^{spatial}(\hat{\zeta}_{k|k-1}))    (4.36)

P_{k|k} = [I - K_k \nabla_X h_k^{spatial}] P_{k|k-1}    (4.37)

with \hat{\zeta}_{k|k} and P_{k|k} the estimated mean and covariance of the true joint state \zeta_k, and h_k^{spatial}(·) introduced in Section 4.5.1. The components required to evaluate the above are given by

K_k = P_{k|k-1} (\nabla_X h^{spatial})^T (E_k)^{-1}    (4.38)

E_k = R_k + \nabla_X h^{spatial} P_{k|k-1} (\nabla_X h^{spatial})^T    (4.39)

P_{k|k-1} = \nabla_X F^{SLAM} P_{k-1|k-1} (\nabla_X F^{SLAM})^T + \nabla_U F^{SLAM} Q_{k-1} (\nabla_U F^{SLAM})^T    (4.40)

with Q_{k-1} = E[V_{k-1} V_{k-1}^T] and \nabla_X F^{SLAM} indicating the Jacobian matrix of the state transition function F^{SLAM}(·), differentiated with respect to the state \zeta_{k-1}. Similarly, \nabla_U F^{SLAM} is the Jacobian of the same state transition function, but differentiated with respect to the control input U_{k-1}, and \nabla_X h^{spatial} is the Jacobian of the measurement equation with respect to the state. All of these Jacobian matrices are numerically evaluated at the most recent predicted state value, \hat{\zeta}_{k|k-1}, which, by taking expectations of Equation 4.34, is

\hat{\zeta}_{k|k-1} = F^{SLAM}(\hat{\zeta}_{k-1|k-1}, U_{k-1})    (4.41)

These equations are those commonly found in the Kalman (or extended Kalman) filtering literature [45–48]. The approximate Gaussian SLAM concept is demonstrated for 2D SLAM in Figure 4.6, in which the EKF representation of the robot pose and map features is shown. Note that under this formulation, robot and feature positional uncertainties are represented as ellipses. These correspond to regions in 2D space of minimum area known to contain the robot and feature positions, with a constant, predefined probability based on the estimated means and covariances only. The aim of EKF SLAM is that the estimated robot (×) and feature (+) positions converge as close to their true positions (black triangles and circles, respectively) as possible, with the resulting feature and most recent robot error covariance ellipses having the minimum areas.
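The update step of Equations 4.36 through 4.39 can be illustrated with a deliberately minimal one-dimensional SLAM state, comprising one vehicle coordinate and one landmark coordinate observed by a linear relative-position "sensor"; all numerical values are assumptions for illustration:

```python
def ekf_slam_update(zeta, P, z, R):
    """One EKF measurement update (Equations 4.36-4.39) for a minimal
    1D SLAM state zeta = (vehicle position x, landmark position m).
    The 'sensor' measures the landmark relative to the vehicle, so
    h(zeta) = m - x and the Jacobian is H = [-1, 1]. A real radar model
    is nonlinear and would use the Jacobians of Equations 4.25-4.26."""
    x, m = zeta
    H = (-1.0, 1.0)
    # P H^T, needed for both the gain and the innovation covariance
    PHt = (P[0][0] * H[0] + P[0][1] * H[1],
           P[1][0] * H[0] + P[1][1] * H[1])
    # Innovation covariance E_k = R + H P H^T  (Equation 4.39)
    E = R + H[0] * PHt[0] + H[1] * PHt[1]
    # Kalman gain K_k = P H^T E^-1  (Equation 4.38)
    K = (PHt[0] / E, PHt[1] / E)
    # State update (Equation 4.36)
    innov = z - (m - x)
    zeta_new = (x + K[0] * innov, m + K[1] * innov)
    # Covariance update P = (I - K H) P  (Equation 4.37)
    IKH = ((1 - K[0] * H[0], -K[0] * H[1]),
           (-K[1] * H[0], 1 - K[1] * H[1]))
    P_new = tuple(
        tuple(sum(IKH[i][k] * P[k][j] for k in range(2)) for j in range(2))
        for i in range(2))
    return zeta_new, P_new

# Vehicle believed at 0, landmark believed at 9; relative measurement 10.
zeta, P = (0.0, 9.0), ((1.0, 0.0), (0.0, 4.0))
zeta, P = ekf_slam_update(zeta, P, z=10.0, R=0.5)
```

Note how a single update both pulls the landmark estimate towards the measurement and introduces a nonzero vehicle-to-feature covariance term, the correlation structure that EKF SLAM maintains.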


[Figure 4.6 legend: estimated feature position (+); actual feature position (circle); estimated robot position (×); actual robot position (triangle).]

Figure 4.6  EKF-based SLAM, representing uncertainty in the estimated robot and feature positions with covariance values (ellipses) only.

4.6.2  Feature Association

For each landmark detected, the EKF SLAM algorithm has to decide which landmark in the existing map it corresponds to, or whether a new landmark has been observed and has to be added to the map. This complex data association issue is often the weakest link in many feature-based SLAM algorithms and has therefore received a great deal of attention in the literature. To accompany the EKF SLAM algorithm, many data association techniques are possible, a summary of which is given in the bibliographical remarks in Section 4.10.5. Two popular methods, which are used for comparing SLAM algorithms in Chapters 9 and 10, are given in this section.

4.6.2.1  Maximum Likelihood (ML) Feature Association

Maximum likelihood (ML) estimation calculates and compares the Mahalanobis distances between the detected feature locations and their predicted locations. The Mahalanobis distance is a distance measure introduced by P. C. Mahalanobis in 1936 [49]. It differs from the Euclidean distance between points by taking into account the correlations of the multidimensional points, and it is scale invariant. The Mahalanobis distance measure in SLAM is defined as d_M^2(z_k^j, \hat{z}_k^i), which provides a measure of the spatial difference between measurement z_k^j and predicted feature measurement \hat{z}_k^i, given by

d_M^2(z_k^j, \hat{z}_k^i) = (z_k^j - \hat{z}_k^i)^T S_k^{-1} (z_k^j - \hat{z}_k^i)    (4.42)


This value has to be calculated for all possible (z_k^j, \hat{z}_k^i) combinations, for which

d_M^2(z_k^j, \hat{z}_k^i) \le \alpha    (4.43)

Often referred to as a validation gate, the region in measurement space described by Inequality 4.43 corresponds to an ellipse. For measurement/prediction pairs for which Inequality 4.43 is true (measurements fall within the validation ellipse, centered on a predicted feature's location), the maximum likelihood (ML) correspondence is selected as

i(j) = \arg\min_i \left( d_M^2(z_k^j, \hat{z}_k^i) \right)    (4.44)
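Gated NN association, per Equations 4.42 through 4.44, can be sketched as follows. A diagonal innovation covariance is assumed for simplicity, and the gate value 9.21 corresponds to a 99% chi-squared gate with two degrees of freedom; all numerical values are illustrative:

```python
def mahalanobis_sq(z, z_hat, S_inv_diag):
    """Squared Mahalanobis distance (Equation 4.42), assuming for
    simplicity a diagonal innovation covariance S_k (its inverse is
    passed as a diagonal)."""
    return sum(si * (a - b) ** 2
               for a, b, si in zip(z, z_hat, S_inv_diag))

def nn_associate(detections, predictions, S_inv_diag, gate):
    """Gated nearest-neighbor association (Equations 4.43 and 4.44):
    each detection is matched to the predicted feature minimising the
    distance, provided it falls inside the validation gate; otherwise
    it starts a tentative new feature (returned as index None)."""
    matches = []
    for z in detections:
        d2 = [mahalanobis_sq(z, zh, S_inv_diag) for zh in predictions]
        best = min(range(len(d2)), key=d2.__getitem__)
        matches.append(best if d2[best] <= gate else None)
    return matches

# Two predicted features and two (range, bearing) detections; the
# second detection gates with neither prediction, so it would be
# initialized as a tentative new feature.
preds = [(10.0, 0.1), (20.0, -0.5)]
dets = [(10.3, 0.12), (35.0, 1.0)]
print(nn_associate(dets, preds, S_inv_diag=(4.0, 100.0), gate=9.21))
# -> [0, None]
```

In a full implementation, S_k would come from Equation 4.39 and would, in general, not be diagonal, which is precisely the correlation that the joint compatibility test of the next section exploits.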

If any detected feature does not "gate" with any predicted feature's location (Inequality 4.43 is not satisfied for any (z_k^j, \hat{z}_k^i) combination), then measurement z_k^j is added to the map as a tentative new feature. If the measurement is associated with a previously unassociated feature, the feature counter is incremented. This method of data association is often referred to as the nearest neighbor standard filter (NNSF), or simply the (gated) nearest neighbor (NN) approach. NN data association, together with EKF robotic mapping (labeled NN-EKF), will be applied to radar-based mapping experiments in urban and marine environments and used for comparison with other techniques in Chapters 8 and 10.

4.6.2.2  Joint Compatibility Branch and Bound (JCBB) Data Association

In 2001, Neira et al. highlighted limitations of the NN approach explained in Section 4.6.2.1 [33]. The NN method examines each possible match between each sensor observation and predicted feature location independently, ignoring the fact that measurement prediction errors are correlated. This is known as the individual compatibility nearest neighbor (ICNN) algorithm. It can lead to possible misassociations, based on an apparent good match between individual detected and predicted features, which results in a poor overall match between the entire sets of detected and predicted features. This, in turn, can result in inconsistent feature association hypotheses. For this reason, the joint compatibility branch and bound (JCBB) data association method attempts to optimize the match between a set of detected and predicted feature pairs, which can yield more reliable association decisions, and takes into account the measurement prediction correlations.

One way to obtain consistent hypotheses is to carry out a sequential compatibility nearest neighbor (SCNN) test. This calculates the same Mahalanobis distance for detected feature j and predicted feature i using Equation 4.42, assuming that a hypothesis based on all pairings of feature detections 1 to j − 1 with predictions i_1 to i_{j-1} has successfully been established and used to obtain an estimate of the map state based on the j − 1 associations. In this case, if Inequality 4.43 is satisfied for detected feature j and predicted feature i, the association of detection j and prediction i will be consistent with all prior associations. This can then be used to obtain a new estimate of the map state, now based on j (rather than j − 1) associations. This new estimate forms a new basis for the predicted observations and measurement covariance matrix, defined in Equation 4.39, which can be used to consider a further feature detection, j + 1, and the cycle continues until all feature detections are matched or rejected according to Inequality 4.43.

The SCNN method is computationally attractive; however, it ignores the fact that prior associations may themselves be incorrect (due, for example, to a false alarm being associated with a predicted feature). Therefore, the JCBB algorithm calculates the Mahalanobis distance measure of Equation 4.42, but instead of using individual feature detections z_k^j and predictions \hat{z}_k^i at time k, it uses the entire vector of currently associated features, augmented with the current feature detection and prediction pair in question. Equation 4.42 then becomes

d_M^2\left(Z_k^{1 \ldots j}, \hat{Z}_k^{i_1 \ldots i_j}\right) = \left(Z_k^{1 \ldots j} - \hat{Z}_k^{i_1 \ldots i_j}\right)^T \left(S_k^j\right)^{-1} \left(Z_k^{1 \ldots j} - \hat{Z}_k^{i_1 \ldots i_j}\right)    (4.45)

where Z_k^{1…j} now represents the entire vector of detected features 1 to j at time k, \hat{Z}_k^{i_1…i_j} represents the vector of j predicted features (i.e., predictions i_1 to i_j), and S_k^j is the covariance of the joint innovation, determined from Equation 4.39, based on a map state containing the current set of feature associations under test. Evaluating Equation 4.45 can be computationally expensive, since it is no longer based on a single association hypothesis, resulting in a potentially large matrix S_k^j, which must be inverted. Therefore, under the JCBB algorithm, Equation 4.45 is incrementally evaluated, using a recursion that gives S_k^j in terms of S_k^{j−1}. This relationship is derived and given in [33]. Finally, d_M^2(Z_k^{1…j}, \hat{Z}_k^{i_1…i_j}) from Equation 4.45 is used in Inequality 4.43 to test for compatibility of the hypothesized set of associations. JCBB-based data association will be applied to EKF-SLAM in Chapter 9 in a cluttered campus environment, where it will be used as a basis of comparison with the SLAM techniques to be presented in Section 4.7.

4.6.3  Bayesian FB-SLAM—Approximate Particle Solutions

Although difficult to implement in its full form, the SLAM representation of Equation 4.1 can in principle be directly approximated by samples, known as particles, as first suggested by Gordon et al. [50]. In the SLAM context, each sample, or particle, consists of a vehicle state and a map. Thus the particles themselves may have varying dimensions, due to the potential dimensionality differences of the map estimates. Assume that at time k − 1, a set of weighted particles {w_{k−1}^{(i)}, \hat{ζ}_{k−1|k−1}^{(i)}}_{i=1}^N (where \hat{ζ}_{k−1|k−1}^{(i)} is the ith of N samples taken from Equation 4.1 at time k − 1 and w_{k−1}^{(i)} is its weight), representing the posterior p_{k−1|k−1}(ζ_{k−1} | Z_{k−1}, U_{k−2}, X_0), is available; that is,

p_{k-1|k-1}(\zeta_{k-1} \mid Z_{k-1}, U_{k-2}, X_0) \approx \sum_{i=1}^{N} w_{k-1}^{(i)} \, \delta\left(\zeta_{k-1} - \hat{\zeta}_{k-1|k-1}^{(i)}\right)    (4.46)

The particle filter proceeds to approximate the posterior at time k by a new set of weighted particles {w_k^{(i)}, \hat{ζ}_{k|k}^{(i)}}_{i=1}^N. The main practical problem with the particle SLAM filter presented here is the need to perform importance sampling in very high dimensional spaces if the number of features is large. Moreover, it can be difficult to find an efficient importance sampling density.
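The weighted-particle representation of Equation 4.46, together with the resampling step that concentrates particles on high-weight hypotheses, can be sketched as follows. This is a minimal illustration, not the book's implementation: each particle's SLAM state is reduced to a scalar for brevity, and the weights are chosen by hand.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: returns indices of the particles to keep.

    A low-variance scheme commonly used in particle filters; particles
    with large weights tend to be duplicated, low-weight ones dropped.
    """
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0  # guard against floating-point round-off
    return np.searchsorted(cumulative, positions)

rng = np.random.default_rng(0)

# N weighted particles approximating a posterior (Equation 4.46).
# Here each "SLAM state" is a scalar, standing in for a vehicle
# state plus map of varying dimension.
particles = np.array([0.0, 0.1, 5.0, 5.1, 5.2])
weights = np.array([0.05, 0.05, 0.3, 0.3, 0.3])  # must sum to 1

idx = systematic_resample(weights, rng)
resampled = particles[idx]

# After resampling, the surviving particles cluster around the
# high-weight hypothesis (near 5), and weights are reset to uniform.
print(np.sort(resampled))
```

Systematic resampling is one common low-variance choice; any unbiased resampling scheme plays the same role here.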


4.6.4  A Factorized Solution to SLAM (FastSLAM)

In the case of SLAM, however, the extremely useful Rao-Blackwellization strategy can be applied by exploiting the structure of the problem to reduce the computational cost. Such a strategy was introduced by Montemerlo et al. [34, 35], where the state to be estimated was modified as ζ_k = [X_k M_k]^T to include the entire trajectory of the vehicle, X_k, as opposed to just its most recent state. The posterior distribution on this modified, joint state could then be factorized as

p_{k|k}(\zeta_k \mid Z_k, U_{k-1}, X_0) = p_{k|k}(X_k \mid Z_k, U_{k-1}, X_0) \prod_{l=1}^{m} p_{k|k}(m_l \mid X_k, Z_k, U_{k-1}, X_0)    (4.47)

yielding a factorized solution to SLAM (FastSLAM), in which the particles represent entire estimated vehicle trajectories (X_{k|k})^{(i)}, accompanied by their own estimate of the map. Hence FastSLAM allows the SLAM problem to be decomposed into m independent state estimators, one for each feature in the map, m ∈ M, conditioned on each vehicle trajectory (X_{k|k})^{(i)}. A Rao-Blackwellization strategy thus allows for a particle approximation of the density p_{k|k}(X_k | Z_k, U_{k−1}, X_0), given by

p_{k|k}(X_k \mid Z_k, U_{k-1}, X_0) \approx \sum_{i=1}^{N} w_k^{(i)} \, \delta\left(X_{0:k} - (X_{k|k})^{(i)}\right)    (4.48)

(X_{k|k})^{(i)} represents a hypothesized vehicle trajectory, while independent EKFs (necessary due to the nonlinear measurement model) are used to estimate each feature density, p_{k|k}(m_l | (X_{k|k})^{(i)}, Z_k, U_{k−1}, X_0) [51]. This technique is illustrated in Figure 4.7.
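A minimal sketch of this factorized structure follows. The class layout and field names are illustrative assumptions (not from the text), and the measurement model is simplified to direct observation of feature position so that the per-feature EKF update stays compact; a range-bearing model would replace the identity Jacobian.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Particle:
    """One FastSLAM particle: a trajectory hypothesis plus its own map.

    Each map feature is tracked by an independent 2D EKF conditioned
    on this particle's trajectory (field names are illustrative).
    """
    trajectory: list                              # hypothesized poses X_{0:k}
    features: dict = field(default_factory=dict)  # id -> (mean, cov)
    weight: float = 1.0

def update_feature(particle, feat_id, z, R):
    """EKF update of one feature, conditioned on this particle's pose.

    The sensor is assumed to observe feature position directly in the
    world frame, so the measurement Jacobian is the identity.
    """
    mean, cov = particle.features[feat_id]
    S = cov + R                      # innovation covariance
    K = cov @ np.linalg.inv(S)       # Kalman gain
    mean = mean + K @ (z - mean)
    cov = (np.eye(2) - K) @ cov
    particle.features[feat_id] = (mean, cov)

p = Particle(trajectory=[np.zeros(3)])
p.features["tree"] = (np.array([4.0, -8.0]), np.eye(2) * 4.0)

# A detection near the feature pulls the estimate toward it and
# shrinks its covariance, independently of every other particle.
update_feature(p, "tree", z=np.array([5.0, -8.0]), R=np.eye(2) * 1.0)
mean, cov = p.features["tree"]
print(mean, cov[0, 0])
```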

Figure 4.7  Illustration of the FastSLAM concept. The robot trajectory is maintained as a set of particles. Each trajectory particle maintains its own EKF to represent the map features, conditioned on its own trajectory estimate (only two EKF map representations are shown, for clarity). (Legend: estimated feature position; actual feature position; estimated robot trajectory sample; actual robot position.)


In the figure, particles are used to represent the vehicle's trajectory, shown as dots in the (x, y) space. Although only sample points are shown, each particle would also have a hypothesized direction associated with it. Note that this nonparametric representation is not bound to the elliptical robot pose distributions of EKF-SLAM shown in Figure 4.6. Relative to each robot trajectory sample, an EKF is used to estimate the feature map, conditioned on that particle's trajectory. For clarity, only two such EKF maps are shown in the figure, conditioned on two individual trajectory samples, shown as the two dashed lines. The feature observations are then used to generate a weight for each trajectory particle, based on the likelihood of the feature measurements, conditioned on each trajectory. Resampling of the particles, based on those weights, then yields the most likely trajectory, and its accompanying map is taken as the FastSLAM solution.

4.6.5  Multi-Hypothesis (MH) FastSLAM

FastSLAM estimates the map on a per-particle basis, meaning that different particles can be associated with different features. This means that the FastSLAM filter is able to maintain different tracks for each possible hypothesis of each detected feature, known as multiple hypothesis tracking (MHT) [52]. The technique used in the MH-FastSLAM experiments of Chapters 9 and 10 is based on [53]. A new particle is created for each new hypothesis of each measurement, meaning that each particle is split into n + 2 new particles: one for each of the n hypotheses (possible associations), one for the non-association hypothesis, and one for a new feature hypothesis. Clearly this would correspond to an explosive growth in the number of particles, except that during the resampling phase of FastSLAM, based on further measurements, most of the incorrect hypotheses should be eliminated. Particles with incorrect data association are, on average, more likely to be eliminated than those that were based on correct associations. This step reduces the number of particles back to its original number.

4.6.6  General Comments on Vector-Based FB SLAM

In state-of-the-art stochastic FBRM and SLAM, the number of features and their locations are typically represented by varying the size of a vector. Map management methods are then introduced that augment this vector when new features are detected. Data association techniques, such as the ML and JCBB techniques of Sections 4.6.2.1 and 4.6.2.2, are then necessary to determine which feature elements of this vector correspond to which elements of the current detections, which are also represented as a vector containing the measured attributes of the currently sensed features. Only then can a Bayesian update of the feature map take place. This concept, which summarizes the current state of the art in FBRM and SLAM, is shown in Figure 4.8. It can be seen in the figure that Bayes optimality can only take place with the number of map states (p in Figure 4.8) that are considered to exist according to the map management heuristics/filters just described. These heuristics/filters are, however, external to the application of Bayes theorem. Therefore, with error-free data association and optimal feature initialization

Figure 4.8  Vector-based feature mapping, for a single update cycle, in both FBRM and SLAM. (Sensor data is passed through detection theory to form a detected feature vector at time k. Map management filters/heuristics determine p feature associations, q feature initializations, and r feature terminations; the dimension and feature order of the associated detection and prediction vectors must be equal. A Bayesian estimator (e.g., an EKF) then updates the p associated features, and vector augmentation/truncation produces the state vector ready for prediction at time k + 1. Bayes optimality is therefore based only on the p associated features.)

Figure 4.9  RFS-based feature mapping, for a single update cycle, in both FBRM and SLAM. Notice that no map management or data association filters/heuristics are necessary. (Sensor data is passed through feature detection to form a detected feature set {z^1, z^2, …, z^{z_k}} at time k. The predicted feature state set {m^1, m^2, …, m^{m_k}} and the detected feature set are statistically represented in terms of feature number and their attributes (locations), for example by PHD functions. A Bayesian estimator updates the statistical representation of the set, from which the updated feature set, with an updated number of elements and their attributes, can be reconstructed, ready for prediction at time k + 1. Bayes optimality is therefore based on all z_k observations and all m_k previous state estimates.)

routines, optimal estimates of a predefined number of feature locations are realizable using vector-based, linear Gaussian approaches [1]. However, when the intrinsic properties of the map are considered (i.e., an unknown number of features with no significant ordering), Bayes optimality of the true problem is not realizable with vector-based methods. As noted in the field of multi-target filtering by Mahler ([54], page 571):

…having a good estimate of target number is half the battle in multi-target tracking. If one has 1,000 measurements but we know that roughly 900 of them are false alarms, then the problem of detecting the actual targets has been greatly simplified.

This chapter advocates that the same principle applies to feature maps in robotics. As shown for the case of radar throughout this book, realistic feature detection algorithms, which process any form of sensor data, produce false alarms and missed detections, and estimating the true number of features is therefore central to the FBRM and SLAM problems. Note that in the SLAM concepts presented so far, no account is taken of the probabilities of detection of the landmarks, which for radar can be readily obtained according to the analyses of Chapter 3. In fact, in the concepts presented so far, these probabilities are subtly assumed to be unity. This chapter therefore continues by redefining the concept of Bayes optimality for estimation with unknown feature number, by formulating it as a random set estimation problem. The formulation unifies the independent filters adopted by previous, vector-based solutions, highlighted in Figure 4.8, through the recursive propagation of a distribution of a random finite set of features. This allows for the joint propagation of the FB map density and leads to optimal map estimates in the presence of unknown map size, spurious measurements, feature detection, and data association uncertainty. The RFS framework further allows for the joint treatment of error in feature number and location estimates, as it jointly propagates both the estimate of the number of landmarks and their corresponding states, and consequently eliminates the need for feature management and data association algorithms. The RFS approach to FBRM and SLAM is depicted in Figure 4.9.

4.7  FBRM and SLAM with Random Finite Sets

4.7.1  Motivation: Why Random Finite Sets

Until the use of RFSs, a map representation that unified the existence filtering state-space of the occupancy map representation and the spatial state-space of the feature map representation remained elusive. While previous researchers generally adopted separate filters to propagate the spatial and existence posteriors of a vector feature map, such an approach led to some theoretical inconsistencies. For instance, consider the posterior density for a single feature map, p_{k|k}(M = [m] | Z_{0:k}), where m represents the coordinates of the single feature and Z_{0:k} represents all measurements from time 0 to k inclusive. In order for the Bayesian recursion of Equation 4.13 to be valid, the density must be a PDF (i.e., ∫ p_{k|k}(M | Z_{0:k}) dM = 1). This implies that the


feature definitely exists somewhere in the map. By using a separate existence filter to obtain an existence probability of α, the implication is that ∫ p_{k|k}(M | Z_{0:k}) dM = α, which violates a fundamental property of a PDF for all α ≠ 1. For such a case, it is evident that a vector-valued feature map representation cannot jointly incorporate feature existence and location uncertainty.10

An RFS framework can readily overcome these issues. For instance, an analogous joint recursion can be obtained by adopting a Poisson RFS to represent the feature map. This approach does not maintain an existence estimate on each feature, but instead propagates a density that represents the mean number of features in the map as well as their spatial densities.

In general, the number of features detected in a single sweep of the environment will not be the same as the current number of feature estimates in a mapping or SLAM filter. Further, the order in which the features are detected in a single scan will typically not be the same as the order of the current feature estimates, if a vector is used to represent them. Therefore, current vector-based formulations of the FBRM and SLAM problems require map management and feature association to trim the observed (extracted) feature vector and its current estimate to be of the same dimension, and then the correspondences between them must be solved prior to the Bayesian update. This is because feature estimates and measurements are rigidly ordered in their respective finite, vector-valued map states. It can be seen in Figure 4.8 that this data association step is necessary before any vector-based Bayesian update can take place. The RFS approach, on the other hand, represents both features and measurements as finite-valued sets M and Z, respectively, which assume no distinct ordering of the features, as shown in representations 4.49 and 4.50, corresponding to an example of a map containing five features.

M = [m^1 m^2 m^3 m^4 m^5]^T
M = [m^4 m^2 m^3 m^1 m^5]^T
⋮
M = [m^5 m^4 m^3 m^2 m^1]^T
  } → M = {m^1, m^2, m^3, m^4, m^5}    (4.49)

Z = [z^1 z^2 z^3 z^4 z^5]^T
Z = [z^4 z^2 z^3 z^1 z^5]^T
⋮
Z = [z^5 z^4 z^3 z^2 z^1]^T
  } → Z = {z^1, z^2, z^3, z^4, z^5}    (4.50)
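The order-invariance expressed by representations 4.49 and 4.50 can be seen directly with native set types; the sketch below stands integer labels in for the feature and measurement vectors m^1 … m^5 and z^1 … z^5.

```python
# Vector (ordered) representations of the same five features:
M_a = [1, 2, 3, 4, 5]
M_b = [4, 2, 3, 1, 5]

# As vectors, the two orderings are different objects, so a
# vector-based filter must first solve the correspondence
# (data association) between their elements.
print(M_a == M_b)            # False

# As finite sets, every permutation collapses to the same object,
# so no rigid ordering between estimate slots and measurement
# slots is implied.
print(set(M_a) == set(M_b))  # True

# A set also carries its own cardinality, the quantity a
# vector-based map must manage externally via augmentation and
# truncation heuristics.
print(len(set(M_a)))         # 5
```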

10. It should be noted that in grid-based approaches, the number of grid cells, m, is inherently known a priori. This has a fundamental impact on the optimality of the Bayesian recursion since it means that only the occupancy of each cell needs to be estimated and not the number of grid cells. Thus, a vector-valued map state can be used to represent the grid cells since, in this case, it is not necessary to encapsulate uncertainty in the number of states.


It should be noted that throughout this book M and Z are vector representations of an FB map and extracted features, whereas M and Z represent set-based representations of the map and extracted features, respectively. While defining a vector or set valued feature map representation may appear to be a trivial case of terminology, in fact it has already been demonstrated that the vector valued feature map has numerous mathematical consequences [54], namely an inherent rigid ordering of variables and a fixed length. Since the finite set representations 4.49 and 4.50 naturally encapsulate all possible permutations of the feature map and measurement, feature association assignment does not have to be addressed. This will be demonstrated throughout the chapter. 4.7.2  RFS Representations of State and Detected Features 4.7.2.1  Robot Vehicle State

The vehicle state in autonomous robotics frameworks is typically defined as a vector X_k representing its spatial coordinates at discrete time k, as explained in Section 4.2.1. Since, for a given problem, the dimensions of this state are fixed as time progresses, and the order of the variables in the vector remains the same, the robot vehicle state is adequately modeled as a random vector. The vehicle dynamics can therefore be modeled by the standard Markov process with the transition density of Equation 4.7.

4.7.2.2  RFS Dynamic Map State

For the reasons explained in Section 4.7.1, the map state and detected features should be modeled as RFSs. Let M be the RFS representing the entire unknown and unexplored static map, and let M_{k−1} be the RFS representing the subset of that map that has been explored (i.e., that has passed through the field(s) of view (FoV) of the vehicle's sensor(s)). That is,

M_{k-1} = M \cap FoV(X_{k-1})    (4.51)

Note that FoV(X_{k−1}) = FoV(X_0) ∪ FoV(X_1) ∪ … ∪ FoV(X_{k−1}). M_{k−1} therefore represents the set of features that intersects with the union of the individual FoVs over the vehicle trajectory up to and including time k − 1. Given this representation, the explored map M_{k−1} evolves in time according to

M_k = M_{k-1} \cup \left( FoV(X_k) \cap \bar{M}_{k-1} \right)    (4.52)

where \bar{M}_{k−1} = M − M_{k−1} (note that the difference operator used here is the set difference) is the set of features that are not in M_{k−1}. Let the new features that have entered the FoV (i.e., the second term of Equation 4.52) be modeled by the independent RFS B_k(X_k). In this case, the RFS map transition density is given by

f_M(M_k \mid M_{k-1}, X_k) = \sum_{W \subseteq M_k} f_M(W \mid M_{k-1}) \, f_k^B(M_k - W \mid X_k)    (4.53)


where f_M(W | M_{k−1}) is the transition density, from time k − 1 to time k, of the set of features that were in FoV(X_{k−1}) at time k − 1, and f_k^B(M_k − W | X_k) is the density of the RFS, B_k(X_k), of the new features that pass within the FoV at time k. Including the vector-based vehicle state transition density of Equation 4.7, the joint transition density of the map and the vehicle pose can be written as

f_{SLAM}(M_k, X_k \mid M_{k-1}, X_{k-1}, U_{k-1}) = f_M(M_k \mid M_{k-1}, X_k) \, f_{veh}(X_k \mid X_{k-1}, U_{k-1})    (4.54)
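The explored-map evolution of Equations 4.51 and 4.52 can be sketched with elementary set operations. The toy 1D world, the interval-shaped FoV, and all numeric values below are illustrative assumptions.

```python
# Toy sketch of Equations 4.51-4.52: features as labeled 1D points.
M = {0.5, 2.0, 3.5, 7.0, 9.0}          # entire (unknown) static map

def fov(x, half_width=2.0):
    """Hypothetical field of view: features within half_width of pose x."""
    return {m for m in M if abs(m - x) <= half_width}

# Explored map after the first pose (Equation 4.51):
M_k1 = M & fov(1.0)

# The vehicle moves; the newly visible features are those in the new
# FoV that are NOT already explored (second term of Equation 4.52):
birth = fov(3.0) & (M - M_k1)

# Equation 4.52: the explored map grows monotonically.
M_k = M_k1 | birth

print(sorted(M_k1), sorted(birth), sorted(M_k))
```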

4.7.2.3  RFS Detected Feature State

To encapsulate detection and data association uncertainty, as well as spatial measurement noise, the detected features, Z_k, received by a sensor mounted on a vehicle with pose X_k at time k, can be mathematically modeled by an RFS equation as

Z_k = \left( \bigcup_{m \in M_k} D_k(m, X_k) \right) \cup C_k(X_k)    (4.55)

where D_k(m, X_k) is the RFS of measurements generated by a feature at location m, and C_k(X_k) is the RFS of the spurious measurements at time k, which may depend on the vehicle pose X_k. Therefore, Z_k = {z_k^1, z_k^2, …, z_k^{z_k}} consists of a random number, z_k, of measurements, whose order of appearance has no physical significance with respect to the estimated map of features. It is also assumed that D_k(m, X_k) and C_k(X_k) are independent RFSs when conditioned on X_k. For each feature, m ∈ M_k, and z_k^i ∈ Z_k,

D_k(m, X_k) = \{z_k^i\}    (4.56)

with probability density P_D(m, X_k) g_k(z_k^i | m, X_k), and D_k(m, X_k) = ∅ with probability 1 − P_D(m, X_k), where P_D(m, X_k) is the probability of the sensor detecting the mth feature from pose X_k, and g_k(z_k^i | m, X_k) represents the sensor's likelihood of detecting z_k^i, given m and X_k. Values for the probability of detection of radar-based features can be calculated using the CFAR techniques of Chapter 3. Using finite set statistics [54], the probability density that the sensor produces the measurement set Z_k, given the vehicle state X_k and map M_k at time k, is then given by [55]

g(Z_k \mid M_k, X_k) = \sum_{W \subseteq Z_k} g_D(W \mid M_k, X_k) \, g_C(Z_k - W)    (4.57)

with g_D(W | M_k, X_k) denoting the density of the RFS of observations, D_k(m, X_k), generated from the features in the observed map M_k given the state of the vehicle, and g_C(Z_k − W) denoting the density of the clutter RFS, C_k(X_k). g_D(W | M_k, X_k) describes the likelihood of receiving a measurement from the elements of the set-valued map, which incorporates detection uncertainty and spatial measurement noise. g_C(Z_k − W) models the spurious measurement rate of the sensor and is typically


assigned a priori based on the probability of false alarm P_fa, which can also be calculated based on the CFAR methods of Chapter 3.

4.7.3  Bayesian Formulation with a Finite Set Feature Map

To consider the estimation of a finite set feature map, a clear definition of an RFS is necessary. An RFS is a random variable that takes values as finite sets. It is defined by a discrete distribution that characterizes the number of elements, m_k, in the set, and a family of joint distributions that characterize the distribution of the elements' values, conditioned on the cardinality [54]. Therefore, in a similar vein to the vector FB-SLAM state of Equation 4.4, an RFS FB-SLAM state can be described by its PDF

p_{k|k}\left(X_k, M_k = \{m_1, m_2, \ldots, m_{\hat{m}_k}\} \mid Z_k, U_k\right)    (4.58)

and propagated through a Bayesian recursion as follows:

· Predict (time update), based on the previous vehicle pose X_{k−1} and previous inputs U_{k−1} to the robot:

p_{k|k-1}(X_k, M_k \mid Z_{k-1}, U_{k-1}, X_0) = \int f_{SLAM}(X_k, M_k \mid X_{k-1}, M_{k-1}, U_{k-1}) \times p_{k-1|k-1}(X_{k-1}, M_{k-1} \mid Z_{k-1}, U_{k-2}, X_0) \, dX_{k-1}    (4.59)

· Acquire the measurement set Z_k.
· Perform the measurement update:

p_{k|k}(X_{0:k}, M_k \mid Z_{0:k}, U_{0:k-1}, X_0) = \frac{g_k(Z_k \mid M_k, X_k) \, p_{k|k-1}(X_{0:k}, M_k \mid Z_{0:k-1}, U_{0:k-1}, X_0)}{\int \int g_k(Z_k \mid M_k, X_k) \, p_{k|k-1}(X_{0:k}, M_k \mid Z_{0:k-1}, U_{0:k-1}, X_0) \, dX_k \, \delta M_k}    (4.60)

where δ implies set integration. Contrary to the vector-based implementation of Bayes theorem in Equations 4.3 and 4.4, it is important to note that the measurement likelihood g(Z_k | M_k, X_k) and the predicted SLAM state p_{k|k−1}(X_k, M_k | Z_{k−1}, U_{k−1}, X_0) in the numerator of Equation 4.60 are finite set statistics (FISST) densities representing the RFSs, which do not have to be of compatible dimensions. Integration over the map in the denominator of Equation 4.60 requires integration over all possible feature maps (all possible locations and numbers of features). By adopting an RFS map model, integrating over the map becomes a set integral. This feature map recursion therefore encapsulates the inherent feature number uncertainty of the map, introduced by detection uncertainty, spurious measurements, and vehicle maneuvers, as well as the feature location uncertainty introduced by spatial measurement noise. Features are not rigidly placed in a map vector, nor are


measurements simply a direct function of the map state, due to the explicit modeling of clutter. Therefore, contrary to vector representation approaches, no explicit measurement-feature assignment, often referred to as the data association problem, is required. Hence, by adopting an RFS representation of the mapped and detected features, Bayes theorem can be applied to jointly estimate the feature state, number, and vehicle pose for SLAM.

4.7.4  The Probability Hypothesis Density (PHD) Estimator

It should be noted that in any realistic implementation of the vector-based Bayes filter, the recursion of Equation 4.4 is, in general, intractable. Hence, as explained in Sections 4.6.1 and 4.6.3, the well-known extended Kalman filter (EKF) and particle filters, and their combinations, can be used to approximate multifeature, vector-based densities. Other approximations are summarized in the Bibliographical Remarks, Section 4.10.4. Unfortunately, the general RFS recursion in Equation 4.60 is also mathematically intractable, since multiple integrals on the space of features are required. This chapter therefore introduces principled methods that can propagate approximations of the full multifeature posterior density. Techniques borrowed from recent research in point process theory, known as the probability hypothesis density (PHD) filter, the cardinalized probability hypothesis density (C-PHD) filter, and the multi-target, multi-Bernoulli (MeMBer) filter, all offer principled approximations to RFS densities. In this chapter, special attention is given to one of the simplest of these, the PHD filter. This filter will be explained and implemented to execute both FBRM and SLAM with radar data sets in Chapters 8, 9, and 10.

A simple approach to set-based estimation is to exploit the physical intuition of the first moment of an RFS, known as its PHD or intensity function. This corresponds to the multi-feature equivalent of an expected value: the expectation of an RFS. This section starts by giving an explanation of what the PHD is and how it should be statistically interpreted in Section 4.7.4.1. This is followed by an intuitive derivation of the PHD in Section 4.7.4.2, and in Section 4.7.4.3 the recovery of the probability density of the map from the PHD is explained.

4.7.4.1  Interpretation of the PHD

The intensity function at a point gives the density of the expected number of features occurring at that point. A PHD function must obey the following two properties:

1. The mass (integral of the density over the volume of the space) of the PHD gives the expected number of features.
2. The peaks of the intensity function indicate locations with a relatively high concentration of expected number of features; in other words, locations with a high probability of feature existence.

Such a concept relates well to radar detection problems, which exhibit feature existence as well as location uncertainty.
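Both properties can be checked numerically for a simple Gaussian-mixture intensity with two well-separated features; the locations (x = 1, 4) and unit variance below anticipate the 1D example that follows.

```python
import numpy as np

# Gaussian-mixture PHD for two features at x = 1 and x = 4, sigma = 1.
def phd(x, locs=(1.0, 4.0), sigma=1.0):
    return sum(np.exp(-(x - m) ** 2 / (2 * sigma ** 2))
               / (np.sqrt(2 * np.pi) * sigma) for m in locs)

x = np.linspace(-5.0, 10.0, 20001)
dx = x[1] - x[0]
v = phd(x)

# Property 1: the mass (integral) of the PHD gives the expected
# number of features -- here 2.
mass = float(v.sum() * dx)
print(round(mass, 3))  # -> 2.0

# Property 2: the peaks of the intensity mark the locations with the
# highest concentration of expected features (close to x = 1 and 4).
is_peak = (v[1:-1] > v[:-2]) & (v[1:-1] > v[2:])
peaks = x[1:-1][is_peak]
print(peaks)
```

Note that the two detected peaks sit very slightly inside the component means, since each Gaussian contributes a small tail to the other; with well-separated features the shift is negligible.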


To provide an intuitive interpretation, consider a simple 1D example of a valid PHD representing two targets located at x = 1 and x = 4, each with spatial variance σ² = 1 (example taken from [54], page 569). A corresponding Gaussian mixture (GM) representation of the PHD for this problem is

PHD(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \left[ \exp\left( -\frac{(x-1)^2}{2\sigma^2} \right) + \exp\left( -\frac{(x-4)^2}{2\sigma^2} \right) \right]    (4.61)

PHD(x) versus x is plotted in Figure 4.10. Note that the maxima of PHD(x) occur near the target locations (x = 1, 4). The integral of PHD(x) is m̂, where

\hat{m} = \int PHD(x) \, dx = \int N(1, \sigma^2) \, dx + \int N(4, \sigma^2) \, dx = 1 + 1 = 2    (4.62)

where N(μ, σ²) represents a Gaussian distribution with mean μ and variance σ². From Equation 4.62, m̂ equals the actual number of targets. Note that PHD(x) satisfies both conditions 1 and 2 and is therefore a valid PHD. Here we note that a PHD is not a PDF, since its integral over the space of its variable is not, in general, unity.

Figure 4.10  A PHD for the 1D, two-target problem of Equation 4.61.

To illustrate the application of the PHD estimator to 2D robotic mapping, graphical depictions of posterior PHDs after two consecutive measurements, approximated by GMs, are shown in Figures 4.11 and 4.12. In each figure the intensity function is plotted as a function of the spatial coordinates. Since the integral of the intensity function (or PHD) is, by definition, the estimated number of features in the map, the mass (or integral) of each Gaussian can be interpreted as the number of features it represents. In the case of closely lying features (and large spatial measurement noise), the PHD approach may not be able to resolve the features, as demonstrated for the feature of Figure 4.11 at approximate coordinates (5, −8). However, the PHD will represent the spatial density of L features by a single Gaussian with a corresponding mass of L, which may improve the feature number estimate. This is only theoretically possible using the RFS framework. A graphical example for L = 2 is illustrated in Figure 4.11, which is then resolved through measurement updates into individual Gaussian components for each feature of mass L ≈ 1, as shown in Figure 4.12 (the two peaks near coordinates (5, −8)). The PHD estimator has been proven to be powerful and effective in multi-target tracking [54].

Figure 4.11  A sample map PHD at time k − 1 with the true map represented by black crosses. The left-hand map PHD is the plan view of the right-hand 3D view. The measurement at k − 1 is represented by the dashed lines. The peaks of the PHD represent locations with the highest concentration of expected number of features. The local PHD mass in the region of most features is 1, indicating the presence of one feature. The local mass close to some unresolved features (for instance at (5, −8)) is closer to 2, demonstrating the unique ability of the PHD function to jointly capture the number of features.

4.7.4.2  The PHD as the Limit of an Occupancy Probability

Figure 4.12  A sample map PHD and measurement at time k. Note that the features at (5, −8) are resolved due to well-separated measurements, while at (−3, −4), a lone false alarm close to the feature measurement contributes to the local PHD mass. At (−5, −4), a small likelihood over all measurements, coupled with a moderate c_k(z|X_k), results in a reduced local mass.

Intuitively, the PHD can be derived as a limiting case of the occupancy probability used in grid-based methods. Following [56], consider a grid system and let m^i denote the center, and B(m^i) the region defined by the boundaries, of the ith grid cell. Let P^{(occ)}(B(m^i)) denote the occupancy probability and δ(B(m^i)) the area of the ith grid cell. Assume that the grid is sufficiently fine so that each grid cell contains at most one feature. The expected number of features over the region R_J = ∪_{i∈J} B(m^i) is then given by

E\left[|M \cap R_J|\right] = \sum_{i \in J} P^{(occ)}(B(m^i)) = \sum_{i \in J} \upsilon(m^i) \, \delta(B(m^i))    (4.63)

where υ(m^i) = P^{(occ)}(B(m^i)) / δ(B(m^i)). Intuitively, any region R_J can be represented by ∪_{i∈J} B(m^i) for some J. As the grid cell area tends to zero (i.e., for an infinitesimally small cell), δ(B(m^i)) → dm. The sum then becomes an integral, and the expected number of features in R becomes

E\left[|M \cap R|\right] = \int_R \upsilon(m) \, dm    (4.64)

υ(m) defines the PHD, and it can be interpreted as the occupancy probability density at the point m. The (coordinates of the) peaks of the intensity function are points (in the space of features) with the highest local concentration of expected number of features and hence can be used to generate optimal estimates for the elements of M. According to Equation 4.64, the integral of the PHD gives the expected number of features, and the peaks of the PHD function can be used as estimates of the positions of the features.

4.7.4.3  Recovering the Map from the PHD Intensity Function

Since υ(m) is a density, it can be readily approximated by standard sequential Monte Carlo (SMC) or GM methods, as shown in Figures 4.10, 4.11, and 4.12, and as will be demonstrated in Chapters 8, 9, and 10. The PHD filter recursion therefore propagates the intensity function υ(m) of the RFS state and uses the RFS measurement in the update stage. Since the intensity is the first order statistic of an RFS, the PHD filter is analogous to the constant gain Kalman filter, which propagates the first order statistic (the mean) of the vector-based state. However, the intensity υ(m) can be used to estimate both the number of features in the map and their corresponding states, along with the uncertainty in the state estimates [55]. If the RFS M_k is Poisson (i.e., the number of points is Poisson distributed and the points themselves are IID), then the probability density of M_k, p_k(M_k), can be constructed exactly from the PHD:



p_k(M_k) = \frac{\prod_{m \in M_k} \upsilon_k(m)}{\exp\left( \int \upsilon_k(m) \, dm \right)}    (4.65)


where υ_k(m) is the map intensity function at time k and M_k is the RFS map that has passed through the FoV of the vehicle's onboard sensor(s) up to and including time k. Under these approximations, it has been shown that, similar to standard recursive estimators, the PHD recursion has a predictor-corrector form [55].

4.7.5  The PHD Filter

To define the PHD filter in a form general enough to be applied to the FBRM and SLAM experiments of Chapters 8, 9, and 10, a state variable Ψk is now defined, which corresponds to the state of interest to be estimated. In [57], different forms of the state Ψk are propagated via the PHD filter. In particular, [57] gives details of the mapping-only case, FBRM, in which Ψk → (m | Xk); an augmented-state approach in which each feature is augmented with a hypothesized vehicle trajectory, under which Ψk → (m, Xk); and a Rao-Blackwellized (RB) strategy in which the PHD filter propagates N conditionally independent PHDs based on each of the N hypothesized trajectories, under which Ψk → (m | X0:k). In Chapter 8, the FBRM case will be analyzed, in which Ψk → (m | Xk); that is, the feature at m, given the vehicle location Xk. The RB case Ψk → (m | X0:k), that is, N conditionally independent PHDs based on N hypothesized trajectories represented as particles, is presented in Chapter 9, where land-based experiments with an FMCW short-range radar are carried out. In Chapter 10, coastal SLAM is examined with a much longer range X-band, TOF marine radar, also using the RB strategy Ψk → (m | X0:k). This will allow the Bayes optimal, expected trajectories and expected maps in each case to be evaluated. This section therefore focuses on the foundation of all of these algorithms by considering the estimation of the general state variable Ψk. In terms of Ψk, the prediction of the map intensity function υk|k−1(Ψk) is given by

υk|k−1(Ψk) = υk−1|k−1(Ψk−1) + bk(Ψk)    (4.66)

where bk(Ψk) is the PHD of the new feature RFS, B(Xk). Note that υk−1|k−1(Ψk−1) corresponds to the estimate of the intensity function at time k − 1, given all observations up to and including time k − 1. For ease of notation, however, this will be referred to as υk−1(·) in future instances. The PHD corrector equation is then [55],

υk(Ψk) = υk|k−1(Ψk) [ 1 − PD(Ψk) + Σ_{z∈Zk} PD(Ψk) gk(z | Ψk) / ( ck(z) + ∫ PD(ξk) gk(z | ξk) υk|k−1(ξk) dξk ) ]    (4.67)

where υk|k−1(Ψk) is the predicted intensity function from Equation 4.66, ξk is a subset of Mk, and

PD(Ψk) = the probability of detecting a feature at m, from vehicle location Xk (encapsulated in Ψk),
gk(z | Ψk) = the measurement model of the sensor at time k,
ck(z) = the intensity of the clutter RFS Ck(Xk) (in Equation 4.55) at time k.    (4.68)
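Equations 4.66 and 4.67 admit a closed-form implementation when the intensity is a Gaussian mixture. The following is a simplified one-dimensional sketch of the corrector, assuming a scalar direct position-measurement model with constant PD and constant clutter intensity; all numeric values are illustrative assumptions, not parameters from the text:

```python
import math

def gauss(x, mu, var):
    """Scalar Gaussian density N(x; mu, var)."""
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2.0 * math.pi * var)

def phd_update(components, measurements, p_d, clutter, r_noise):
    """Gaussian mixture sketch of the PHD corrector, Equation 4.67 (1D).

    components: list of (weight, mean, variance) of the predicted intensity.
    clutter: constant clutter intensity c_k(z); p_d, r_noise assumed constant.
    """
    # Missed-detection terms: weights scaled by (1 - P_D), components unchanged.
    updated = [(w * (1.0 - p_d), mu, var) for (w, mu, var) in components]
    for z in measurements:
        # g_k(z | m) integrated against each predicted Gaussian component.
        q = [gauss(z, mu, var + r_noise) for (_, mu, var) in components]
        denom = clutter + p_d * sum(w * qj for (w, _, _), qj in zip(components, q))
        for (w, mu, var), qj in zip(components, q):
            k = var / (var + r_noise)                       # Kalman gain
            updated.append((p_d * w * qj / denom, mu + k * (z - mu), (1 - k) * var))
    return updated

# One predicted feature near 5.0, and one measurement close to it (illustrative).
prior = [(1.0, 5.0, 0.5)]
post = phd_update(prior, [5.2], p_d=0.95, clutter=0.001, r_noise=0.1)

# Integrating the corrected intensity (summing the weights) gives the
# expected feature number, as in Equation 4.69.
m_hat = sum(w for (w, _, _) in post)
print(round(m_hat, 3))   # close to 1: the measurement confirms the single feature
```

Note that each measurement adds one corrected component per predicted component, so practical GM implementations prune low-weight and merge near-duplicate components after each update.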

4.7  FBRM and SLAM with Random Finite Sets

143

4.7.5.1  Intuitive Interpretation of the PHD Filter

A detailed, intuitive interpretation of the PHD filter Equations 4.66 and 4.67 is given in Chapter 16 of [54]. The predictor Equation 4.66 comprises the addition of the previous PHD correction and the PHD of the set of features hypothesized to enter the sensor’s FoV and is self-explanatory. The corrector Equation 4.67 can be more clearly interpreted in its integrated form since, by definition,

∫ υk|k(Ψk) dΨk = m̂k|k    (4.69)

where m̂k|k is the number of estimated features at time k. To simplify the interpretation further, a constant (state independent) probability of detection is assumed in this section; that is,

PD(Ψk) = PD    (4.70)

Therefore, from Equation 4.67,

m̂k = ∫ υk(Ψk) dΨk
   = (1 − PD) m̂k|k−1 + PD Σ_{z∈Zk} [ ∫ gk(z | Ψk) υk|k−1(Ψk) dΨk ] / [ ck(z) + PD ∫ gk(z | ξk) υk|k−1(ξk) dξk ]    (4.71)

Notice that the integrals in the numerator and denominator of the final term within the summation of Equation 4.71 are identical, and to simplify the equation we introduce

Dk|k−1(gk, υk|k−1) ≜ ∫ gk(z | Ψk) υk|k−1(Ψk) dΨk = ∫ gk(z | ξk) υk|k−1(ξk) dξk    (4.72)

where gk abbreviates gk(z | Ψk) and υk|k−1 abbreviates υk|k−1(Ψk). Therefore, the integral of the PHD corrector Equation 4.67, with constant PD, can be written as the feature number corrector equation

m̂k = (1 − PD) m̂k|k−1 + PD Σ_{z∈Zk} Dk|k−1(gk, υk|k−1) / ( ck(z) + PD Dk|k−1(gk, υk|k−1) )    (4.73)

Equation 4.73 is useful for intuitively interpreting the PHD corrector equation and is governed by the following effects:

1. Probability of detection PD. If the map feature at m is not in the FoV of the sensor, it could not have been observed, thus PD = 0. Therefore, from Equation 4.73,

m̂k = (1 − 0) m̂k|k−1 + 0 = m̂k|k−1    (4.74)

(i.e., the updated number of features simply equals the predicted number; no new information is available). Similarly, from Equation 4.67,


υk(Ψk) = υk|k−1(Ψk)[1 − 0 + 0] = υk|k−1(Ψk)    (4.75)

(i.e., the updated PHD will simply equal the predicted value). On the other hand, if m is within the sensor FoV and if PD ≈ 1, the summation in Equation 4.67 tends to dominate the PHD update and

υk(Ψk) ≈ υk|k−1(Ψk) [ Σ_{z∈Zk} gk(z | Ψk) / ( ck(z) + ∫ gk(z | ξk) υk|k−1(ξk) dξk ) ]    (4.76)

Then the predicted PHD is modified by the sum of terms dependent on the measurement likelihood and clutter PHD.

2. False alarms ck(z). A particular feature observation could have originated from a feature or be a false alarm. Assume that the number of false alarms, nfa (represented by its intensity ck(z)), is large and uniformly distributed in some region R, so that ck(z) = nfa/R. If the observed feature is in R, the term within the summation of Equation 4.73 becomes

PD Dk|k−1(gk, υk|k−1) / ( ck(z) + PD Dk|k−1(gk, υk|k−1) ) = PD Dk|k−1(gk, υk|k−1) / ( nfa/R + PD Dk|k−1(gk, υk|k−1) ) ≈ PD (R/nfa) Dk|k−1(gk, υk|k−1) ≈ 0    (4.77)

since nfa is so large that it dominates the denominator. Therefore, if an observation originates from R, it is likely to be a false alarm, and it contributes almost nothing to the total posterior feature count, as it should. On the other hand if a measurement originates from a region other than R, with low clutter, then it is unlikely to be a false alarm. This means that ck(z) ≈ 0 and

PD Dk|k−1(gk, υk|k−1) / ( ck(z) + PD Dk|k−1(gk, υk|k−1) ) ≈ PD Dk|k−1(gk, υk|k−1) / ( 0 + PD Dk|k−1(gk, υk|k−1) ) = 1    (4.78)

so that the measurement now contributes one feature to the total feature number. In general, if the number of false alarms, governed by the clutter PHD ck(z | Xk), is high, this increases the denominator of the summation, thus lowering the effect of the sensor update, as it should.

3. Prior information PD gk(z | Ψk). Assume that the sensor model is good, and PD gk(z | Ψk) is large for a particular state Ψk that produces z. If z is consistent with prior information (the observation model), PD Dk|k−1(gk, υk|k−1) will tend to dominate the denominator of the summation in Equation 4.73, and the term corresponding to that feature in the summation will become

PD Dk|k−1(gk, υk|k−1) / ( ck(z) + PD Dk|k−1(gk, υk|k−1) ) ≈ 1    (4.79)

Hence, a feature that is consistent with the observation model tends to contribute one feature to the total feature count.


Conversely, if the observation z is inconsistent with prior information (is unlikely according to the sensor model), then the product PD gk(z | Ψk) will be small, and its corresponding term in the summation in Equation 4.73 will tend to be ignored. Comparisons of the performance of the PHD filter with the vector-based EKF and FastSLAM algorithms of Sections 4.6.1 and 4.6.3 will be given, based on radar navigation experiments in cluttered urban and marine environments, in Chapters 9 and 10.
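The three limiting behaviors above can be checked numerically against the feature number corrector of Equation 4.73. In this minimal sketch, the predicted count, the value of Dk|k−1(gk, υk|k−1), and the clutter intensities are all illustrative assumptions:

```python
def feature_count_update(m_pred, p_d, clutter_intensities, d_val):
    """Feature number corrector, Equation 4.73, with one summation term per
    measurement; d_val stands in for D_{k|k-1}(g_k, v_{k|k-1}) and is assumed
    equal for every measurement in this illustrative sketch."""
    m = (1.0 - p_d) * m_pred
    for c_z in clutter_intensities:
        m += p_d * d_val / (c_z + p_d * d_val)
    return m

m_pred, d_val = 1.0, 0.4   # one predicted in-FoV feature (illustrative values)

# Effect 1: P_D = 0 (feature outside the FoV) -> count unchanged (Equation 4.74).
print(feature_count_update(m_pred, 0.0, [], d_val))                 # 1.0

# Effect 2: one measurement from a heavily cluttered region contributes ~0.
print(round(feature_count_update(m_pred, 1.0, [100.0], d_val), 3))  # 0.004

# Effect 3: one clutter-free, model-consistent measurement contributes ~1 (Equation 4.78).
print(feature_count_update(m_pred, 1.0, [0.0], d_val))              # 1.0
```

Each print corresponds to one of the three effects: no information, a probable false alarm, and a model-consistent detection.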

4.8  SLAM and FBRM Performance Metrics

This section discusses the importance of evaluating SLAM and FBRM algorithms, with reviews of popular methods of evaluation being left until the Bibliographical Remarks Section 4.10.7. Performance metrics are critical to any stochastic estimation problem such as FBRM and SLAM. Since SLAM involves the joint estimation of both the vehicle location and the map, the results from both estimates must be extensively analyzed to prove the performance of any solution. While most SLAM algorithms rigorously analyze the performance of the localization estimate, statistical evaluation of the resulting map is less widely performed and, in some cases, not performed at all.

4.8.1  Vehicle State Estimate Evaluation

For a single vehicle system, which is the case for most SLAM algorithms, estimation error metrics are well established in the tracking literature and are frequently adopted to analyze the localization estimate of a SLAM algorithm. When available, GPS signals are commonly used as the ground truth reading for the vehicle location. For Gaussian-based solutions (Section 4.6.1), common vehicle estimation error metrics examine the absolute difference between the ground truth and estimated vehicle locations [27]. Estimated error covariances can also be used to check the consistency of the filter’s vehicle location estimate [1]. Note, however, that this gives no information on the consistency of the corresponding map estimate.

4.8.2  Map Estimate Evaluation

While error metrics are well established for the localization aspect of a SLAM algorithm, methods of map estimate evaluation are less well studied. Since the goal of a SLAM algorithm is to also estimate the map, its estimation error is equally as important as that of localization. The primary difficulty with obtaining a map error estimate is obtaining the ground truth. For feature-based mapping, the GPS coordinates of a certain number of features can be manually obtained [1, 27], for which the estimation errors are examined in a similar fashion to that of the vehicle. However, since a map is inherently a joint multiple-feature state, individual feature error evaluation does not evaluate the estimate of the joint state, specifically for the case of either falsely declared or missed features. Even in simulated tests, where ground truth is available, adequate map estimate quantification is rare.


From the literature, it is evident that most research assumes that a “good” vehicle estimate inherently implies a “good” map estimate with no explicit evaluation of the map estimate itself (see Section 4.10.7). In the presence of large sensor discrepancy such as detection/feature extraction uncertainty, spurious measurements, and data association uncertainty, this assumption overlooks a major aspect of a SLAM algorithm. 4.8.3  E  valuation of FBRM and SLAM with the Second Order Wasserstein Metric

The primary difficulty in determining map estimation error is due to differences between the estimated and true number of features, and the need to satisfy the four metric axioms [58].11 Error metrics for fixed-dimension problems, such as a sum of squared errors, can be readily obtained from grid-based robotic frameworks [23], or metrics based on the Hausdorff distance [59] from the template matching literature. Such metrics, however, cannot encapsulate cardinality estimation error and are thus not suited to the evaluation of FB map estimates. A finite set representation of the map naturally allows for a metric to gauge map estimation error. While there are several metrics for finite-set-valued estimation [13], a feature map estimation error metric, which can be used to evaluate SLAM results and which jointly considers errors in feature location and number estimates, is based on a second-order Wasserstein construction. If |M| > |M̂|, it is given by





d(c)(M, M̂) := [ (1/|M|) ( min_{π∈Π} Σ_{i=1}^{|M̂|} d(c)(m̂i, mπ(i))² + c²( |M| − |M̂| ) ) ]^{1/2}    (4.80)

where Π denotes the set of one-to-one assignments of the |M̂| estimated features to the |M| true features, and

d(c)(m̂i, mj) = min( c, ||m̂i − mj|| )    (4.81)

is the minimum of the cut-off parameter, c, and the Euclidean distance between the estimated feature location, m̂i, and the true feature location, mj. If |M| < |M̂|, the metric is obtained through d(c)(M̂, M). Note that the standard Euclidean distance used in the definition of d(c)(m̂i, mj) is arbitrary. To incorporate oriented features, other vector distances, such as a Mahalanobis distance, could be adopted, where

d(c)(m̂i, mj) = min( c, ((m̂i − mj)ᵀ (Pi)⁻¹ (m̂i − mj))^{1/2} )    (4.82)

with Pi being the estimated covariance of the ith feature. The Wasserstein construction is commonly adopted in theoretical statistics as a measure of similarity between probability distributions. It has been shown previously that for distributions of equal dimensionality, the Wasserstein distance reduces to an optimal assignment problem [58]. Minimization of a global distance between the estimated and ground

11. The four metric axioms can be defined as follows. Let χ be an arbitrary, nonempty set. Then the function d is a metric iff: 1) d(x, y) ≥ 0, for all x, y ∈ χ; 2) d(x, y) = 0 iff x = y, x, y ∈ χ (identity axiom); 3) d(x, y) = d(y, x), for all x, y ∈ χ (symmetry axiom); 4) d(x, y) ≤ d(x, z) + d(y, z), for all x, y, z ∈ χ (triangle inequality axiom).


Figure 4.13  A sample FBRM trial illustrating the ground truth (left) and raw measurement data (right).

truth maps inherently considers the problem in its joint-estimation framework, as opposed to analysis of individual estimates of features assigned through nearest neighbor methods [1]. The metric of Equation 4.80 explicitly accounts for the dimensionality estimation errors through the term c²(|M| − |M̂|), which assigns a user-defined cost, c, for each unassigned feature state. This threshold decides whether a given feature estimate is an incorrectly declared feature or is a correctly declared feature that is poorly localized.
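A direct implementation makes the metric concrete. The sketch below searches over assignments by brute force (feasible only for small maps, rather than using an optimal assignment solver), and the maps are illustrative, not data from the text:

```python
import itertools
import math

def wasserstein_map_error(true_map, est_map, c):
    """Second-order Wasserstein map metric in the spirit of Equations 4.80-4.81.

    Unassigned features are penalized by the cut-off c; the assignment
    minimizing the total squared cut-off distance is found by brute-force
    enumeration, which is adequate only for small maps."""
    # Order so that the smaller map is assigned into the larger one,
    # handling both |M| > |M_hat| and |M| < |M_hat|.
    small, large = sorted([true_map, est_map], key=len)
    best = float("inf")
    for perm in itertools.permutations(range(len(large)), len(small)):
        total = 0.0
        for i, j in enumerate(perm):
            dist = math.hypot(small[i][0] - large[j][0], small[i][1] - large[j][1])
            total += min(c, dist) ** 2
        best = min(best, total)
    best += c ** 2 * (len(large) - len(small))   # dimensionality (cardinality) penalty
    return math.sqrt(best / len(large))

truth = [(0.0, 0.0), (5.0, 5.0)]               # illustrative ground truth map
good = [(0.1, 0.0), (5.0, 4.9)]                # two well-localized estimates
extra = [(0.1, 0.0), (5.0, 4.9), (9.0, 9.0)]   # same estimates plus one false feature

print(round(wasserstein_map_error(truth, good, c=1.0), 3))   # 0.1: location error only
print(round(wasserstein_map_error(truth, extra, c=1.0), 3))  # larger: penalized false feature
```

The false feature raises the reported error through the c² penalty term even though the two correctly declared features are just as well localized in both estimates.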

4.8.3.1  Analysis of the Second-Order Wasserstein Metric

A simple demonstration of the ability of the second-order Wasserstein metric to provide a measure of estimated map accuracy is presented here. Figure 4.13 shows a simulated vehicle trajectory and map, containing 12 true (simulated) features with locations represented as circles (left figure). The right-hand figure shows the


Figure 4.14  Posterior FB map estimates from the vector-based NN-EKF-FBRM filter (left) and the RFS-based GMM-PHD-FBRM filter (right). Visual inspection indicates an improved map estimate from the GMM-PHD-FBRM method, since all features are correctly declared, without false alarm, at “closer” distances to their ground truth locations.



Figure 4.15  FBRM error versus the c parameter of Equations 4.80 and 4.81. The c parameter primarily determines the contribution of dimensionality errors to the total reported error.

true path and map as well as simulated sensor detections, shown as dots, in the presence of simulated clutter. The simulated, assumed known trajectory and noisy sensor detections are used as inputs to two FBRM algorithms, one of which uses the vector-based NN-EKF concept, details of which are given in Section 4.6.2.1, and the other based on a Gaussian mixture model (GMM)-PHD filter implementation, the details of which will be the subject of Chapter 8. In this section, the efficacy of the metric in quantifying the results is of importance, rather than the details of each algorithm’s implementation. The resulting posterior feature map estimates of both the NN-EKF-FBRM and GMM-PHD-FBRM algorithms are depicted in Figure 4.14. Visual inspection reveals that, for the same set of measurements, a single false feature is declared with the NN-EKF approach, coupled with increased feature localization error when compared with the GMM-PHD-FBRM method. For a quantitative comparison of the map estimate, the c parameter in Equations 4.80 and 4.81 gives the relative weighting of feature number and location estimation error. Further insight into the effects of a given choice of c is shown in Figure 4.15. As is evident in the figure, an increasing c parameter results in an increasing overall error for the NN-EKF estimate. This is due to the contribution of the single false feature, which correspondingly has no effect on the error reported by the GMM-PHD posterior estimate, since in this particular trial it has correctly estimated the true number of features. The c parameter also determines the maximum distance at which an estimate is classified as a poorly localized feature estimate, as opposed to a false feature declaration, and should be chosen based on the maximum allowable estimated feature location error in a given application. Equation 4.80 will be used to gauge the FBRM estimation performance, in the cases of known ground truth maps, in Chapters 8, 9, and 10.
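The c-sweep of Figure 4.15 can be reproduced in miniature. The maps below are illustrative stand-ins for the NN-EKF-FBRM estimate (one false feature) and the GMM-PHD-FBRM estimate (correct cardinality), not the book's simulation data:

```python
import itertools
import math

def map_error(true_map, est_map, c):
    """Second-order Wasserstein map error (Equations 4.80-4.81), brute force."""
    small, large = sorted([true_map, est_map], key=len)
    best = min(
        sum(min(c, math.hypot(small[i][0] - large[j][0],
                              small[i][1] - large[j][1])) ** 2
            for i, j in enumerate(perm))
        for perm in itertools.permutations(range(len(large)), len(small))
    )
    return math.sqrt((best + c ** 2 * (len(large) - len(small))) / len(large))

truth = [(2.0, 2.0), (8.0, 6.0)]
with_false_feature = [(2.1, 2.0), (8.0, 6.1), (15.0, 15.0)]   # NN-EKF-like result
correct_cardinality = [(2.2, 2.1), (8.1, 6.2)]                # GMM-PHD-like result

# Sweep c: the error of the estimate with the false feature grows with c,
# while the correct-cardinality estimate keeps its fixed localization error.
for c in (1.0, 2.0, 4.0):
    print(c,
          round(map_error(truth, with_false_feature, c), 3),
          round(map_error(truth, correct_cardinality, c), 3))
```

This mirrors the trend reported for Figure 4.15: an increasing c inflates the reported error only for the estimate whose cardinality is wrong.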

4.9  Summary

This chapter highlighted how exteroceptive measurement uncertainty is handled by the most popular robot navigation GBRM, FBRM, and SLAM concepts. A general formulation of the SLAM problem was shown, and the popular grid-based (GB) and feature-based (FB) methods of autonomous map representation were introduced. While many seemingly successful implementations have been reported in the robotics literature, this chapter highlighted some subtle theoretical oversights in modeling the sensor’s measurement uncertainty in many of the frameworks. Emphasis was initially placed on the mapping aspect of autonomous navigation algorithms, which is equally as important as the localization aspect. The structures of the estimation theoretic RM algorithms are dependent on the form of the measurement likelihood, which furthermore limits what can be represented by the resulting posterior density. These were discussed for both the GBRM and FBRM approaches. The analysis was then extended to the full Bayesian FB-SLAM problem, and Sections 4.6.1 and 4.6.3 introduced the classical, vector-based EKF and FastSLAM concepts. An analysis of these concepts was presented that questioned their optimality, based on subtle assumptions in the standard measurement equations. It was shown that general vector-based filtering approaches do not encapsulate the entire uncertainty directly in the filter recursion, thus compromising filter performance. Section 4.7 presented a random set theoretic framework for feature-based map estimation, which was a large focus of this chapter. Chapter 3 has shown that with radar, feature existence uncertainty modeling is at least as important as spatial uncertainty modeling. By adopting an RFS map state and measurement, the uncertainty in the map size and feature locations can be jointly propagated and estimated. This approach therefore generalizes the notion of Bayes optimality to the realistic situation of an a priori unknown number of features. To demonstrate the applicability of the RFS framework to solve real FB autonomous navigation problems, it was shown how its first-order approximation, the PHD filter, could be implemented.
The PHD filter alleviates the need for data association and map dimensionality estimation filters, as it incorporates these sources of uncertainty into the Bayesian recursion. Given its non-reliance on data association, the RFS framework is better suited to applications with high clutter and is therefore essential in radar-based mapping algorithms. Evaluation metrics for FB map estimates are lacking in much of the SLAM literature. Since SLAM jointly considers the map estimate and the vehicle location estimate, this chapter argued that both are equally important. Therefore, Section 4.8.3 introduced a principled metric to assign a performance score to FBRM and SLAM algorithms, based on their abilities to jointly estimate both feature location and number. The FBRM and SLAM concepts based on the EKF (Section 4.6.1) and FastSLAM (Section 4.6.3), with feature association based on ML (Section 4.6.2.1), JCBB (Section 4.6.2.2), and MH (Section 4.6.5), will be compared with the RFS methods presented in Section 4.7 in challenging environments in Chapters 8, 9, and 10. The comparisons will be based on the second-order Wasserstein metric of Section 4.8.3.

4.10  Bibliographical Remarks

4.10.1  Grid-Based Robotic Mapping (GBRM)

In the work of Thrun [25], the GBRM problem was approached from an optimization perspective, as opposed to a filtering and state estimation one. The pitfalls of modeling the sensor by the probability P(m(x, y) | Zk, Xk) were highlighted, namely those of potential conflicting inferences on a cell causing inconsistent maps. A discrete measurement likelihood, with respect to the occupancy random variable, was established, and an expectation-maximization (EM) algorithm was used to find the map that maximized the likelihood over all measurements and the assumed known vehicle trajectory. Impressive results were produced in indoor environments, and off-line batch processing was used to generate the nonunique, maximum-likelihood map for a given sequence of measurements, Zk. Binary maps were produced, but the desired posterior in the Bayesian formulation, Pk|k(M | Zk, Xk), was not estimated. The problem of GB mapping in its original recursive filtering form was examined by Konolige [22], where a log-odds formulation of Equation 4.13 was used. An intuitive measurement likelihood for G(Zk | m(x, y), Xk) was derived based on Gaussian profiles of the range measurement noise, whereas the converse likelihood G(Zk | m̄(x, y), Xk) was modeled by a uniform constant, due to an implicit assumption that there is at least one occupied cell within range. Thus the possibility of a spurious measurement (i.e., a range reading when all cells are empty) was not considered. A recursive log-odds mapping approach was adopted by Foessel [19] for occupancy grid mapping with a millimeter wave radar sensor. The measurement odds value was a deterministic function based on the signal-to-noise ratio of a reflection from a target. While intuitively appealing, the justification of the model was based on a heuristic rather than statistical analysis. GB mapping is also popular in the scan-based SLAM [21, 26, 36] community, where successive raw (usually laser) measurements are aligned to estimate the vehicle pose. As the maps from scan-based approaches lack the formal statistical representation of their feature-based counterparts, the GB mapping methods outlined in these articles were employed to fuse scans into a meaningful probabilistic map.
Extensive mapping experiments of indoor and semistructured outdoor environments were presented. Evidential approaches to GBRM have also been proposed. While lacking statistical mathematical models, such approaches have produced some impressive results [24, 60].

4.10.2  Gaussian Approximations to Bayes Theorem

4.10.2.1  Extended Kalman Filter (EKF)

The origins of the feature map can be traced back to the seminal work of Smith et al. [5], in which the environment was assumed to consist of simplified representations of the physical landmarks—the features. The work of Smith et al. also first established the “vector of all the spatial variables, which we call the system state vector” (i.e., M = [m1, m2, …, mm]). This joint vector-state-based solution to the SLAM problem, with a nonlinear Gaussian measurement model, was implemented under an extended Kalman filter (EKF) framework. This work focused on propagating the joint vehicle-map estimate in the presence of the interfering Gaussian noise from the vehicle process model and the range-bearing exteroceptive sensor measurements. Moutarlier [61] then implemented the framework on a land robot. The SLAM solution presented in Smith et al. [5] can be viewed as a special case (Gaussian noise and a joint vehicle-map state) of the Bayesian FB-SLAM filter recursion, and thus one of the first proposed approximate Gaussian solutions to the Bayesian FB-SLAM problem. Under Gaussian noise assumptions, the Kalman filter produces the optimal minimum-variance Bayesian estimate of the joint vehicle-map state. The work emphasized that the consistent maintenance of the correlations between the vehicle and map feature estimates was critical to the structure of the problem. The example discussed in Smith et al. [5] presented a map with features of unity detection probability and assumed that the measurement-to-map association was known, the sensor had no biases, and there were no spurious measurements. With these strict assumptions on the measurement, the Kalman-based SLAM estimate is indeed Bayes optimal. This structure has received much attention, and the convergence properties of the map estimates were analyzed by Dissanayake et al. [1]. First-order linearization errors introduced in the extended Kalman framework were addressed by using higher-order Hessian functions [45] or the unscented Kalman filter of Julier and Uhlmann [62]. Standard noise power thresholding12 of A-scopes was used by S. Clark [8] using an FMCW radar for feature detection. The range and bearing measurements of the detected point were then propagated through an extended Kalman filter framework to perform navigation and mapping. The method was shown to work in an environment containing a small number of well-separated, highly reflective beacons. The method was extended slightly in [63], where even-bounce specularities were used to extract pose-invariant landmarks. Again the environment contained reflective metallic containers.

4.10.2.2  Gaussian Mixture (GM) Implementations

In [64, 65], a Gaussian mixture (GM) implementation of the Bayesian SLAM recursion was described. The joint posterior was assumed to be a Gaussian mixture density, as was the measurement likelihood, g(Zk | zk). The work emphasized the fact that explicit data association decisions were not required in the implementation, as the measurement was incorporated by convolution with the predicted density. However, this work assumed the same standard measurement model of Equation 4.22, in that detection uncertainty was not considered in the formulation. Using a GM density for the measurement likelihood can only describe the uncertainty in the location of a single feature. The condition that the weights of each Gaussian component must sum to unity assumes that a feature exists somewhere within the measurement, Zk (i.e., spurious detections are not considered). Furthermore, the Bayes posterior distribution is not useful unless information about the state can be extracted, which was not clearly defined. As highlighted in [66], when applied to an underwater sonar sensor [64], the measurement can contain either multiple point features or multiple readings from the surface of the same (extended) feature. Thus, the convolution with the predicted density would result in the divergence of the filter. However, the work was significant in the fact that convolution, instead of explicit data association decisions, was used to calculate the posterior density.

12. Fixed threshold detection is indeed the optimal detector in the case of spatially uncorrelated and homogeneous noise distributions of known moments, but only under these circumstances.

4.10.3  Non-Parametric Approximations to Bayesian FB-SLAM

The Rao-Blackwellized particle filter FastSLAM concept was established by Montemerlo et al. [67, 68]. FastSLAM 2.0 provided an extension to this principle, using the most recent measurements to refine the robot pose's proposal distribution, which in the initial FastSLAM algorithm was derived purely from the vehicle's motion model [35]. This concept demonstrated superior performance to the initial FastSLAM algorithm, based on the idea that a vehicle's exteroceptive sensors usually provide more reliable data than a vehicle's control input estimators (such as odometry). The RFS approach to SLAM was first suggested in [69], with preliminary studies using “brute force” implementations. The approach modeled the joint vehicle trajectory and map as a single RFS and recursively propagated its first-order moment. Initial results of a Rao-Blackwellized (RB) implementation of the PHD-SLAM filter were presented in [70]. This was extended to present a more rigorous analysis of the RFS approach to SLAM, an improved version of the RB-PHD-SLAM filter, as well as real and simulated experimental results in [71]. The merits of the RFS approach were demonstrated, particularly in situations of high clutter and data association ambiguity.

4.10.4  Other Approximations to Bayesian FB-SLAM

4.10.4.1  Graph-Based SLAM

Graph-based SLAM represents an environment via spatial constraints between robot poses in the form of a graph consisting of nodes and edges that can be recursively updated via matching techniques. In 1997, Lu and Milios presented the SLAM problem as constraints between robot poses and used these to fuel a technique known as scan matching for generating a map based on raw range/bearing data. The concept considered the entire robot trajectory as the frame of reference for globally consistent range scan alignment. In a similar fashion, graphSLAM represents robot poses and feature positions as nodes and connects them with arcs based on robot motion and measurement constraints. As shown in Golfarelli et al. [72], a good illustration of this concept is the representation of the motion and measurement constraints as springs and the robot poses as masses [57]. The springs model information constraints with stiffness values inversely proportional to the motion and measurement models' covariance values as appropriate. A smaller covariance (a more certain motion model) yields a stiffer spring, and vice versa [39]. GraphSLAM then determines the values of the entire history of robot poses and the feature poses by relaxing the entire mass-spring system, so that an equilibrium is achieved. The positions in which each robot pose and feature settles can be considered to be the graphSLAM solution. Mathematically, this is equivalent to minimizing the entire energy of the constrained system. The system can be mathematically modeled in its information form, in which an information matrix containing each spring's constraint (energy) information is updated. Note that, in contrast to EKF-SLAM, the graphSLAM concept summarized so far maintains estimates of the entire pose history of the robot, referred to as the “full” SLAM problem [51]. In 2004, Folkesson et al. also demonstrated a solution to the SLAM problem based on graphical methods [73]. In a similar fashion to the graphSLAM approach highlighted earlier, the graph in this work also represented the entropy of a joint PDF on the pose of the vehicle, the detected map feature coordinates, vehicle motion estimates, and the feature measurements, but also included the data association variables that linked feature measurements to detected map features. The criterion for incorporating new measurements was based on the energy change of the constrained system represented by the graph, caused by the corresponding data association decision. With this strategy, it was relatively easy to make changes to the graph by reversing data association decisions that were deemed to have resulted in adverse energy changes. Further, features could be merged and associations between features and particular poses eliminated by simply moving or removing graph edges based on changes in the energy. Such tasks would be extremely difficult, or even impossible, with EKF approaches. In [74], Frese et al. introduced an incremental graph-based technique, which optimized SLAM based on multigrid methods used to solve partial differential equations. This work noted the computational burden involved when a cycle had occurred within a vehicle's trajectory. When a vehicle revisits an area after a large cycle, it is often referred to as a “loop closure” in the robotics literature [75, 76], although a rigorous definition, which characterizes such an event in terms of its algorithmic difference to a normal measurement update, has not been precisely defined. It was noted that after such “loop closures,” the computational time of graphSLAM becomes very high, since the accumulated errors must be backpropagated and corrected throughout the entire map. Therefore, the work by Frese et al. suggested an optimization technique called “multilevel relaxation,” which increased the convergence speed of Gauss-Seidel relaxation methods by allowing the map to be optimized at different resolutions. On a similar note, the recent work by Grisetti et al. [77] also focused on a hierarchical optimization approach to the graphSLAM problem to reduce the computational burden. In this work, the state space was represented as a manifold, which removed the problem of singularities during the estimation of 3D rotations, and could be represented at differing resolutions. When a new observation was made, only those parts of the high-resolution graph manifold were updated which were necessary for data association decisions. This approach focused on minimizing computation time while providing fast support for data association decisions. Comparisons with other recent graphSLAM methods [78, 79] were made in this work.

4.10.4.2  Sparse Extended Information Filter (SEIF) SLAM

A popular method for reducing the inherent computational burden of EKF SLAM is to adopt the information form of the Kalman update [48], coupled with an interfeature information link reduction process known as sparsification. Based fundamentally on the same vector formulation and Bayesian recursion as classical EKF SLAM, the concept, first developed by Thrun et al. [80] and coined the sparse extended information filter (SEIF) SLAM algorithm, exploited the fact that the normalized information matrix is naturally almost sparse, and only closely located features are linked through nonzero elements. Therefore, with minimal loss, information links between certain features and the robot can be set to zero, while the remaining links of interest are strengthened to compensate for the approximation loss. The approach conceptually demonstrated numerous enhancements over classical EKF-SLAM in terms of computational requirements. However, due to its vector-based formulation, it was initially conditioned on known data association. When the necessary and costly process of covariance recovery to perform probabilistic data association is taken into consideration, the computational advantages become less dramatic. In addition, it was demonstrated that the sparsification process could lead to an inconsistent filter, which spurred the development of the exactly sparse extended information filter (ESEIF-SLAM) [81] and the so-called D-SLAM algorithm [82], both of which resolved the issue of inconsistency in different manners. Unlike SEIF-SLAM, ESEIF-SLAM limited the number of features linked to the vehicle pose via the sparsification process, performing a kidnapped robot localization when the number breached a predefined threshold. In contrast, D-SLAM decoupled the problem of mapping and localization by first updating the map with relative feature measurements and then obtaining a localization estimate via the combination of a kidnapped robot location estimate and an EKF-SLAM location estimate. These were based only on the features within the sensor FoV. Both methods resulted in consistent estimation processes based on sparse feature map information matrices. Wang et al. published a review article on various sparse information filtering methods in robotics, comparing the trade-offs necessary to achieve sparseness for computational benefits [83]. In particular, to eliminate the effects of linearization, 1D simulations comparing ESEIF and D-SLAM were carried out, as well as real experiments, which demonstrated ESEIF-SLAM’s superior mapping abilities, at the expense of robot location estimation. Alternative approaches, somewhat analogous to the FastSLAM algorithms, also gained popularity as a means of achieving sparse matrices. By maintaining an information vector of the historical pose estimates as well as its subsequent information matrix, Eustice et al. [16] showed that an exactly sparse matrix could be achieved, allowing the entire robot trajectory to be recursively updated in constant time. However, the propagation of the map was left to a post-processing batch operation or represented via the matching of measurements from the estimated poses. When all the features in the map (assuming a perfect estimate of the number of features had been obtained) were concatenated with a vector containing all the poses, then Dellaert et al. showed that SLAM could be formulated as a nonlinear stochastic estimation problem [84] in an approach known as square root smoothing and mapping (SAM). A sparse information matrix was realized since all poses and all features were maintained and estimated concurrently. However, the batch algorithm formulation and increasing memory requirements limited its practical applicability. To address these issues, an incremental version, coined iSAM, was developed by Kaess et al. [85] to recursively update the state vector and information matrix as measurements arrived. This algorithm updated only the necessary components,
as opposed to recalculating all entries, as in the batch approach of SAM. Given the stochastic vector basis and recursive nature of iSAM, data association was necessary and was carried out with an efficient method of extracting the relevant covariance values from the information matrix. As with SAM, unlike previous information matrix filters, the approach was not based on Kalman filtering but on QR factorization methods.

4.10.5  Feature Association and Management

Due to state and measurement vectors typically being of different dimensions, and due to the fact that features are typically not detected in the order in which they are stored within the state vector, researchers necessarily include data association and feature management routines in vector-based SLAM algorithms. Feature management is the name given to automatic routines that decide when to “add” or “remove” features to/from a map state vector. Feature association corresponds to automatically deciding which newly sensed feature belongs to which currently estimated feature. Together with a conditional measurement independence assumption, some impressive results were demonstrated with a variety of sensor types, which heavily relied on such feature association and management routines. MMW radar-based SLAM was demonstrated by Clark [63], laser-based SLAM by Guivant et al. [8, 27], sonar-based SLAM by Leonard [31], and underwater SLAM by Smith [86]. An extension of the NNSF, presented in Section 4.6.2.1, is the track splitting filter [47]. For every measurement that falls into the validation ellipsoid surrounding a predicted measurement, the track is split. This means that for each measurement, a separate, updated state and covariance are computed and augmented with the existing ones, for propagation with the appropriate state estimator. To curb the potential exponential growth of tracks, each track’s likelihood is computed, so that unlikely ones can be removed from the state and covariance estimates. Reference [47] further introduces the probabilistic data association filter (PDAF) and joint PDAF (JPDAF) for measurement-to-target matching. Whereas in the PDAF, all incorrect measurements are modeled as random interference, the JPDAF validates each measurement against each state estimate.
When multiple measurements gate with a predicted feature, this allows all such measurements to influence the state update, rather than only the measurement that is statistically closest. This improves the performance in multi-target (multi-feature) situations when the existence of neighboring features gives rise to interference, which is not correctly modeled under the PDAF framework. In the target tracking literature, this has been shown to improve the robustness of tracking multiple targets whose paths cross [87, 88]. A map management routine was introduced by Dissanayake et al. in which extracted features were stored in a tentative list and a “feature quality” criterion was calculated over multiple time instances [1]. When the quality measure exceeded a predefined threshold, features were considered nonspurious and included in the map. If the “quality” of a feature in the map dropped below a threshold, the feature was removed. For the purposes of map management, some approaches simply use the number of successful associations as the metric [11], while others evaluate a feature quality density function [1, 28] or adopt discrete occupancy estimation routines from GB mapping [35]. Approaches using exponential decay functions have also
been proposed [89]. These are usually performed as a post-processing step on the estimated map posterior after a data association decision has been made [28, 33]. Advanced methods that allow reversible data association across a finite window of time frames have also been considered [11, 90].

4.10.6  Random Finite Sets (RFSs)

The FB autonomous robotic framework is closely related to the multi-sensor, multi-target filtering problem, where the objective is to jointly estimate the time-varying number of targets and their states from sensor measurements in the presence of data association uncertainty, detection uncertainty, clutter, and noise. The first systematic treatment of this problem using random set theory was conceived by Mahler in 1994 [91], which later developed into finite set statistics (FISST). Moreover, this treatment was developed as part of a unified framework for data fusion using random set theory. Reference [92] provides an excellent overview of FISST. As indicated in Section 4.7.4, the FISST Bayes multi-object filter is generally intractable. Therefore, in 2000, Mahler proposed to approximate the multi-object Bayes recursion by propagating the probability hypothesis density (PHD) of the posterior multi-object state [54, 55]. Mahler refined FISST to what he called generalized FISST and published this along with the derivation of the PHD filter in 2003 [55]. Additionally, the relationship between FISST set derivatives and probability densities (as Radon-Nikodym derivatives of probability measures) for random finite sets was established in [93]. Further, generic SMC implementations of the multi-object Bayes filter and PHD filter, with convergence analyses, were also provided. Numerous independent SMC solutions were also proposed [93–96] and applied to a wide range of practical problems, including feature point tracking in image sequences [97], tracking acoustic sources from time difference of arrival (TDOA) measurements [98], and tracking using bistatic radar data [99]. The reader is referred to [100] for a more complete survey of applications. In 2005, a closed-form solution to the PHD recursion for the linear Gaussian multi-target model was published, together with the GM-PHD filter for linear and mildly nonlinear multi-target models [101].
While more restrictive than SMC approaches, GM implementations are much more efficient. Moreover, they obviate the need for clustering, which is an expensive step in the SMC implementation. Convergence results for the GM-PHD filter were established in [102]. Further extensions were introduced in 2006 through the cardinalized PHD (CPHD) recursion, which is a generalization of the PHD recursion that jointly propagates the posterior PHD and posterior distribution of the number of targets [103, 104]. A detailed treatment can be found in [54].
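The mechanics of one GM-PHD prediction-update cycle can be sketched in one dimension. This is a deliberately simplified illustration under assumed scalar linear models: the constants (pS, pD, the clutter intensity) and all names are illustrative, and the birth, pruning, and merging steps essential to a real implementation are omitted, so this is not the implementation of [101].

```python
import numpy as np

def gm_phd_step(w, m, P, Z, F=1.0, Q=0.01, H=1.0, R=0.04,
                pS=0.99, pD=0.9, clutter=0.1):
    """One prediction-update cycle of a toy 1D Gaussian-mixture PHD filter.

    The PHD (intensity) is v(x) = sum_i w[i] N(x; m[i], P[i]); its integral
    is the expected number of targets. Birth terms, pruning, and merging
    are omitted for brevity.
    """
    # Predict every Gaussian component through the motion model.
    w = pS * np.asarray(w, float)
    m = F * np.asarray(m, float)
    P = F * np.asarray(P, float) * F + Q
    # Missed-detection term: a down-weighted copy of each component.
    w_out = list((1.0 - pD) * w); m_out = list(m); P_out = list(P)
    # Kalman quantities, shared by all measurements.
    S = H * P * H + R              # innovation covariances
    K = P * H / S                  # Kalman gains
    P_upd = (1.0 - K * H) * P
    for z in Z:
        q = np.exp(-0.5 * (z - H * m) ** 2 / S) / np.sqrt(2 * np.pi * S)
        w_z = pD * w * q
        w_z /= clutter + w_z.sum()  # normalize against the clutter intensity
        w_out += list(w_z); m_out += list(m + K * (z - H * m)); P_out += list(P_upd)
    return np.array(w_out), np.array(m_out), np.array(P_out)

# Two well-separated targets, each detected once, no clutter returns:
w, m, P = gm_phd_step([1.0, 1.0], [0.0, 5.0], [0.5, 0.5], Z=[0.1, 4.9])
print(round(w.sum(), 2))  # expected target number stays close to 2
```

Summing the returned weights gives the expected number of targets, which is how the PHD recursion jointly estimates the number of targets and their states without explicit data association.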

4.10.7  SLAM and FBRM Evaluation Metrics

4.10.7.1  Vehicle State Estimate Evaluation

In [11], the square innovation of the 3-DOF pose is used as the error metric, with a chi-squared (due to the Gaussian assumptions) confidence limit used to show the consistency of the location estimate. Some algorithms [105, 106] simply use visual comparison of the GPS data with the estimated path, while other work (assuming GPS is not available) visually compares the path with a satellite image [44]. For particle-based solutions (Section 4.6.3), the root mean square error (RMSE) of the particle representation of p_{k|k}(X_k | Z^k, U^{k-1}, X_0) is evaluated at each time instant [32, 34, 35, 53]. As the particle representation models a non-Gaussian density, stochastic bounds on the estimation error are difficult to obtain. For indoor deployments where there is no GPS signal available, in some cases there is no explicit quantitative evaluation of the localization results [21, 26].

4.10.7.2  Map Estimate Evaluation

In [53, 107], where simulated tests are performed for FB-SLAM, the map estimate is typically evaluated through suboptimal methods of plotting the estimated features’ mean locations along with the ground truth. With multiple features, there is no method of evaluation that jointly considers the entire map estimate in a mathematically concise manner. Other FB mapping algorithms provide no apparent quantification of the map estimate [34, 35]. Indoor SLAM experiments generally rely on visual inspection and intuition of a structured (walls, corners, and so on) environment [20, 26], with no map (or localization) error metric shown.
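The shortcoming of comparing estimated feature locations with ground truth one at a time can be made concrete with a naive nearest-neighbor map error. The data and names below are synthetic illustrations, not code from the works cited; the metric is intentionally simplistic to show what it misses.

```python
import numpy as np

def naive_map_error(est_feats, true_feats):
    """Mean distance from each estimated feature to its nearest
    ground-truth feature.

    Deliberately naive: it ignores missed features, duplicate estimates,
    and the feature-count (cardinality) error, which is precisely why a
    metric that jointly evaluates the whole map estimate is needed.
    """
    est = np.atleast_2d(est_feats)
    true = np.atleast_2d(true_feats)
    # Pairwise Euclidean distances, shape (n_est, n_true).
    d = np.linalg.norm(est[:, None, :] - true[None, :, :], axis=2)
    return d.min(axis=1).mean()

true_map = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 5.0]])
est_map = np.array([[0.2, -0.1], [9.8, 0.3]])  # one feature never detected
print(round(naive_map_error(est_map, true_map), 3))  # small, despite the missed feature
```

The reported error stays small even though a whole feature is missing from the estimate, so a low score here says nothing about map completeness; set-based metrics that penalize cardinality errors address exactly this gap.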

References

  [1] Dissanayake, M. W. M. G., P. Newman, S. Clark, H. F. Durrant-Whyte, and M. Csorba, “A Solution to the Simultaneous Localization and Map Building (SLAM) Problem,” IEEE Trans. Robotics and Automation, Vol. 17, No. 3, June 2001, pp. 229–241.
  [2] Dong, D. F., W. S. Wijesoma, and A. P. Shacklock, “An Efficient RAO-Blackwellized Genetic Algorithmic Filter for SLAM.” In IEEE International Conference on Robotics and Automation, Rome, Italy, April 2007, pp. 2427–2432.
  [3] Begum, M., G. K. I. Mann, and R. Gosine, “A Fuzzy-Evolutionary Algorithm for Simultaneous Localisation and Mapping of Mobile Robots.” In IEEE Congress on Evolutionary Computation, Vancouver, Canada, July 2006, pp. 1975–1982.
  [4] Porta, J., “CuikSLAM: A Kinematic-Based Approach to SLAM.” In IEEE International Conference on Robotics and Automation, Barcelona, Spain, April 2005.
  [5] Smith, R., M. Self, and P. Cheeseman, “A Stochastic Map for Uncertain Spatial Relationships.” In The Fourth International Symposium of Robotics Research, 1987, pp. 467–474.
  [6] Durrant-Whyte, H., and T. Bailey, “Simultaneous Localisation and Mapping (SLAM): Part I, The Essential Algorithms,” Robotics and Automation Magazine, June 2006, pp. 99–110.
  [7] Nieto, J., T. Bailey, and E. Nebot, “Recursive Scan-Matching SLAM,” International Journal of Robotics and Autonomous Systems, Vol. 55, No. 1, 2007, pp. 39–49.
  [8] Clark, S., and H. F. Durrant-Whyte, “Autonomous Land Navigation Using Millimeter Wave Radar.” In Proceedings of the IEEE International Conference on Robotics and Automation, Belgium, May 1998, pp. 3697–3702.
  [9] Elfes, A., Occupancy Grids: A Probabilistic Framework for Robot Perception and Navigation, PhD Thesis, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, May 1989.
  [10] Foessel, A., Scene Modeling from Motion-Free Radar Sensing, PhD Thesis, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, January 2002.
  [11] Wijesoma, W. S., L. D. L. Perera, and M. D. Adams, “Toward Multidimensional Assignment Data Association in Robot Localization and Mapping,” IEEE Transactions on Robotics, Vol. 22, No. 2, April 2006, pp. 350–365.
  [12] Murphy, R., “Bayesian Map Learning in Dynamic Environments.” In Proc. Conf. Neural Inf. Process. Syst., Colorado, 1999, pp. 1015–1021.
  [13] Thrun, S., “Robotic Mapping: A Survey,” Technical Report CMU-CS-02-111, Carnegie Mellon University, Pittsburgh, PA, February 2002.
  [14] Gillespie, T. D., Fundamentals of Vehicle Dynamics, Warrendale, PA: Society of Automotive Engineers, 1992.
  [15] Kim, J., and S. Sukkarieh, “Real-Time Implementation of Airborne Inertial-SLAM,” International Journal of Robotics and Autonomous Systems, Vol. 55, No. 1, 2007, pp. 62–71.
  [16] Eustice, R. M., H. Singh, and J. J. Leonard, “Exactly Sparse Delayed-State Filters for View-Based SLAM,” IEEE Transactions on Robotics, Vol. 22, No. 6, December 2006, pp. 1100–1114.
  [17] Borrmann, D., J. Elseberg, K. Lingemann, A. Nüchter, and J. Hertzberg, “Globally Consistent 3D Mapping with Scan Matching,” International Journal of Robotics and Autonomous Systems, Vol. 56, No. 2, 2008, pp. 130–142.
  [18] Elfes, A., “Using Occupancy Grids for Mobile Robot Perception and Navigation,” IEEE Computer Society Press, Vol. 22, No. 6, June 1989, pp. 46–57.
  [19] Foessel, A., J. Bares, and W. R. L. Whittaker, “Three-Dimensional Map Building with MMW RADAR.” In Proceedings of the 3rd International Conference on Field and Service Robotics, Helsinki, Finland, June 2001.
  [20] Gutmann, J. S., and K. Konolige, “Incremental Mapping of Large Cyclic Environments.” In Conference on Intelligent Robots and Applications (CIRA), Monterey, CA, 1999.
  [21] Hähnel, D., W. Burgard, D. Fox, and S. Thrun, “An Efficient FastSLAM Algorithm for Generating Maps of Large-Scale Cyclic Environments from Raw Laser Range Measurements.” In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, 2003, pp. 206–211.
  [22] Konolige, K., “Improved Occupancy Grids for Map Building,” Autonomous Robots, Vol. 4, No. 4, 1997, pp. 351–367.
  [23] Moravec, H., and A. E. Elfes, “High Resolution Maps from Wide Angle Sonar.” In Proceedings of the IEEE International Conference on Robotics and Automation, New Jersey, March 1985, pp. 116–121.
  [24] Pagac, D., E. Nebot, and H. F. Durrant-Whyte, “An Evidential Approach for Map Building for Autonomous Vehicles,” IEEE Transactions on Robotics and Automation, Vol. 14, No. 4, August 1998, pp. 623–629.
  [25] Thrun, S., “Learning Occupancy Grids with Forward Models,” Autonomous Robots, Vol. 15, No. 2, 2003, pp. 111–127.
  [26] Grisetti, G., C. Stachniss, and W. Burgard, “Improved Techniques for Grid Mapping with Rao-Blackwellized Particle Filters,” IEEE Transactions on Robotics, Vol. 23, No. 1, February 2007, pp. 34–45.
  [27] Guivant, J., E. Nebot, and S. Baiker, “Autonomous Navigation and Map Building Using Laser Range Sensors in Outdoor Applications,” Journal of Robotic Systems, Vol. 17, No. 10, October 2000, pp. 565–583.
  [28] Makarsov, D., and H. F. Durrant-Whyte, “Mobile Vehicle Navigation in Unknown Environments: A Multiple Hypothesis Approach,” IEE Proceedings of Control Theory Applications, Vol. 142, No. 4, July 1995, pp. 385–400.
  [29] Se, S., D. Lowe, and J. Little, “Mobile Robot Localisation and Mapping with Uncertainty Using Scale-Invariant Visual Landmarks,” International Journal of Robotics Research, Vol. 21, No. 8, August 2002, pp. 735–758.
  [30] Lacroix, S., and G. Dudek, “On the Identification of Sonar Features.” In International Conference on Intelligent Robots and Systems (IROS), France, September 1997.
  [31] Leonard, J. J., and H. F. Durrant-Whyte, “Dynamic Map Building for an Autonomous Mobile Robot.” In Proceedings of the IEEE International Workshop on Intelligent Robots and Systems, Ibaraki, Japan, July 1990, pp. 89–96.
  [32] Montemerlo, M., and S. Thrun, “Simultaneous Localization and Mapping with Unknown Data Association Using FastSLAM.” In IEEE International Conference on Robotics and Automation, Vol. 1, September 2003, pp. 412–418.
  [33] Neira, J., and J. D. Tardos, “Data Association in Stochastic Mapping Using the Joint Compatibility Test,” IEEE Transactions on Robotics and Automation, Vol. 17, December 2001, pp. 890–897.
  [34] Montemerlo, M., and S. Thrun, “FastSLAM: A Factored Solution to the Simultaneous Localization and Mapping Problem.” In 18th National Conference on Artificial Intelligence, American Association for Artificial Intelligence, Menlo Park, CA, 2002, pp. 593–598.
  [35] Montemerlo, M., S. Thrun, D. Koller, and B. Wegbreit, “FastSLAM 2.0: An Improved Particle Filtering Algorithm for Simultaneous Localization and Mapping that Provably Converges.” In Proceedings of the 18th International Joint Conference on Artificial Intelligence, CA, 2003, pp. 1151–1156.
  [36] Lu, F., and E. Milios, “Robot Pose Estimation in Unknown Environments by Matching 2D Range Scans,” Journal of Intelligent and Robotic Systems, Vol. 18, No. 3, 1997, pp. 249–275.
  [37] Martin, M. C., and H. P. Moravec, “Robot Evidence Grids,” Technical Report CMU-RI-TR-96-06, Carnegie Mellon University, Pittsburgh, PA, March 1996.
  [38] Mullane, J., E. Jose, M. D. Adams, and W. S. Wijesoma, “Including Probabilistic Target Detection Attributes into Map Representations,” International Journal of Robotics and Autonomous Systems, Vol. 55, No. 1, January 2007, pp. 72–85.
  [39] Adams, M., J. Mullane, and B. N. Vo, “Laser and Radar Based Robotic Perception,” Now Publishers: Foundations and Trends in Robotics, Vol. 1, No. 3, 2010.
  [40] Adams, M. D., F. Tang, W. S. Wijesoma, and C. Sok, “Convergent Smoothing and Segmentation of Noisy Range Data in Multiscale Space,” IEEE Transactions on Robotics, Vol. 24, No. 3, June 2008, pp. 746–753.
  [41] Perera, L. D. L., W. S. Wijesoma, and M. D. Adams, “The Estimation Theoretic Bias Correction Problem in Map Aided Localisation,” International Journal of Robotics Research, Vol. 25, No. 7, July 2006, pp. 645–667.
  [42] Thrun, S., “Particle Filters in Robotics.” In Uncertainty in AI, 2002.
  [43] Durrant-Whyte, H. F., and T. Bailey, “Simultaneous Localization and Mapping: Part II,” IEEE Robotics and Automation Magazine, September 2006, pp. 108–117.
  [44] Newman, P., P. Cole, and K. Ho, “Outdoor SLAM Using Visual Appearance and Laser Ranging.” In IEEE International Conference on Robotics and Automation, May 2006, pp. 1180–1187.
  [45] Kirubarajan, T., and Y. Bar-Shalom, Multisensor-Multitarget Statistics in Data Fusion Handbook, D. Hall (Ed.), Boca Raton, FL: CRC Press, 2001.
  [46] Kalman, R. E., “A New Approach to Linear Filtering and Prediction Problems,” Transactions of the ASME-Journal of Basic Engineering, Vol. 82, Series D, 1960, pp. 35–45.
  [47] Bar-Shalom, Y., and T. E. Fortmann, Tracking and Data Association, New York: Academic Press, 1988.
  [48] Maybeck, P., Stochastic Models, Estimation and Control, Volume 1, New York: Academic Press, 1982.
  [49] Mahalanobis, P. C., “On the Generalized Distance in Statistics,” Proceedings of the National Institute of Sciences of India, Vol. 2, No. 1, 1936, pp. 49–55.
  [50] Gordon, N. J., D. J. Salmond, and A. F. M. Smith, “Novel Approach to Nonlinear/Non-Gaussian Bayesian State Estimation,” IEE Proceedings on Radar and Signal Processing, Vol. 140, No. 2, April 1993, pp. 107–113.
  [51] Thrun, S., W. Burgard, and D. Fox, Probabilistic Robotics, Cambridge, MA: MIT Press, 2005.
  [52] Reid, D. B., “An Algorithm for Tracking Multiple Targets,” IEEE Transactions on Automatic Control, Vol. 24, 1979, pp. 843–854.
  [53] Nieto, J., J. Guivant, E. Nebot, and S. Thrun, “Real Time Data Association for FastSLAM.” In IEEE International Conference on Robotics and Automation, Vol. 1, September 2003, pp. 412–418.
  [54] Mahler, R., Statistical Multisource Multitarget Information Fusion, Norwood, MA: Artech House, 2007.
  [55] Mahler, R., “Multi-Target Bayes Filtering via First-Order Multi-Target Moments,” IEEE Transactions on AES, Vol. 4, No. 39, October 2003, pp. 1152–1178.
  [56] Erdinc, O., P. Willett, and Y. Bar-Shalom, “The Bin-Occupancy Filter and Its Connection to the PHD Filters,” IEEE Transactions on Signal Processing, Vol. 57, No. 11, November 2009, pp. 4232–4246.
  [57] Mullane, J., B.-N. Vo, M. Adams, and B.-T. Vo, Random Finite Sets for Robot Mapping and SLAM, Springer Tracts in Advanced Robotics, Vol. 72, Berlin: Springer, 2011.
  [58] Schuhmacher, D., B. T. Vo, and B. N. Vo, “A Consistent Metric for Performance Evaluation of Multi-Object Filters,” IEEE Transactions on Signal Processing, Vol. 56, No. 8, 2008, pp. 3447–3457.
  [59] Huttenlocher, D. P., G. A. Klanderman, and W. J. Rucklidge, “Comparing Images Using the Hausdorff Distance,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, No. 9, September 1993, pp. 850–863.
  [60] Yang, T., and V. Aitken, “Evidential Mapping for Mobile Robots with Range Sensors,” IEEE Transactions on Instrumentation and Measurement, Vol. 55, No. 4, August 2006, pp. 1422–1429.
  [61] Moutarlier, P., and R. Chatila, “Stochastic Multisensory Data Fusion for Mobile Robot Location and Environmental Modeling.” In Fifth International Symposium of Robotics Research, 1989, pp. 85–94.
  [62] Julier, S. J., and J. K. Uhlmann, “A New Extension of the Kalman Filter to Nonlinear Systems.” In Int. Symp. Aerospace Defense Sensing, Simul. and Controls, Orlando, FL, 1997, pp. 35–45.
  [63] Clark, S., and G. Dissanayake, “Simultaneous Localisation and Map Building Using Millimetre Wave Radar to Extract Natural Features.” In Proceedings of the IEEE International Conference on Robotics and Automation, Michigan, May 1999, pp. 1316–1321.
  [64] Durrant-Whyte, H. F., S. Majumder, M. de Battista, and S. Scheding, “A Bayesian Algorithm for Simultaneous Localisation and Map Building.” In The Tenth International Symposium of Robotics Research (ISRR), Victoria, Australia, 2001.
  [65] Majumder, S., Sensor Fusion and Feature Based Navigation for Subsea Robots, PhD Thesis, University of Sydney, August 2001.
  [66] Bailey, T., Mobile Robot Localisation and Mapping in Extensive Outdoor Environments, PhD Thesis, Australian Centre for Field Robotics, University of Sydney, 2002.
  [67] Montemerlo, M., S. Thrun, D. Koller, and B. Wegbreit, “FastSLAM: A Factored Solution to the Simultaneous Localization and Mapping Problem.” In Proc. AAAI National Conference on Artificial Intelligence, 2004, pp. 593–598.
  [68] Montemerlo, M., S. Thrun, and B. Siciliano, FastSLAM: A Scalable Method for the Simultaneous Localization and Mapping Problem in Robotics, Berlin: Springer, 2007.
  [69] Mullane, J., B. N. Vo, M. Adams, and W. S. Wijesoma, “A Random Set Formulation for Bayesian SLAM.” In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, France, September 2008.
  [70] Mullane, J., B. N. Vo, and M. Adams, “Rao-Blackwellised PHD SLAM.” In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Alaska, May 2010.
  [71] Mullane, J., B. N. Vo, M. D. Adams, and B. T. Vo, “A Random-Finite-Set Approach to Bayesian SLAM,” IEEE Transactions on Robotics, Vol. 27, No. 2, April 2011, pp. 268–282.
  [72] Golfarelli, M., D. Maio, and S. Rizzi, “Elastic Correction of Dead-Reckoning Errors in Map Building.” In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Victoria, Canada, October 1998, pp. 905–911.
  [73] Folkesson, J., and H. Christensen, “Graphical SLAM: A Self-Correcting Map.” In IEEE International Conference on Robotics and Automation, New Orleans, LA, April 2004, pp. 383–390.
  [74] Frese, U., P. Larsson, and T. Duckett, “A Multilevel Relaxation Algorithm for Simultaneous Localisation and Mapping,” IEEE Transactions on Robotics, Vol. 21, No. 2, April 2005, pp. 196–207.
  [75] Bosse, M., P. M. Newman, J. J. Leonard, and S. Teller, “SLAM in Large-Scale Cyclic Environments Using the Atlas Framework,” International Journal of Robotics Research, Vol. 23, No. 12, December 2004, pp. 1113–1139.
  [76] Newman, P., and K. Leong Ho, “SLAM: Loop Closing with Visually Salient Features.” In IEEE International Conference on Robotics and Automation (ICRA), April 2005.
  [77] Grisetti, G., R. Kümmerle, C. Stachniss, U. Frese, and C. Hertzberg, “Hierarchical Optimisation on Manifolds for Online 2D and 3D Mapping.” In IEEE International Conference on Robotics and Automation (ICRA), May 2010, pp. 273–278.
  [78] Frese, U., “Treemap: An O(log n) Algorithm for Indoor Simultaneous Localization and Mapping,” Autonomous Robots, Vol. 21, No. 2, 2006, pp. 103–122.
  [79] Grisetti, G., C. Stachniss, and W. Burgard, “Nonlinear Constraint Network Optimization for Efficient Map Learning,” IEEE Transactions on Intelligent Transportation Systems, Vol. 10, No. 3, September 2009, pp. 428–439.
  [80] Thrun, S., Y. Liu, D. Koller, A. Ng, Z. Ghahramani, et al., “Simultaneous Localization and Mapping with Sparse Extended Information Filters,” International Journal of Robotics Research, Vol. 23, No. 7–8, August 2004, pp. 693–716.
  [81] Walter, M., R. Eustice, and J. Leonard, “Exactly Sparse Extended Information Filters for Feature Based SLAM,” International Journal of Robotics Research, Vol. 26, No. 4, April 2007, pp. 335–359.
  [82] Wang, Z., S. Huang, and G. Dissanayake, “D-SLAM: A Decoupled Solution to Simultaneous Localisation and Mapping,” International Journal of Robotics Research, Vol. 26, No. 2, 2007, pp. 187–204.
  [83] Wang, Z., S. Huang, and G. Dissanayake, “Tradeoffs in SLAM with Sparse Information Filters.” In Proceedings of the 6th International Conference on Field and Service Robotics, Chamonix, France, 2007.
  [84] Dellaert, F., and M. Kaess, “Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing,” International Journal of Robotics Research, Vol. 25, No. 12, December 2006, pp. 1181–1203.
  [85] Kaess, M., A. Ranganathan, and F. Dellaert, “iSAM: Incremental Smoothing and Mapping,” IEEE Transactions on Robotics, Vol. 24, No. 6, December 2008, pp. 1365–1378.
  [86] Smith, C. M., J. J. Leonard, A. A. Bennet, and C. Shaw, “Feature Based Concurrent Mapping and Localisation.” In Proceedings of Oceans ’97, October 1997, pp. 90–102.
  [87] Fitzgerald, R. J., “Development of Practical PDA Logic for Multitarget Tracking by Microprocessor.” In Multitarget-Multisensor Tracking: Advanced Applications, Norwood, MA: Artech House, 1990, pp. 1–23.
  [88] Bar-Shalom, Y., K. C. Chang, and H. A. P. Blom, “Tracking Splitting Targets in Clutter by Using an Interacting, Multiple Model Joint Probabilistic Data Association Filter.” In
Multitarget-Multisensor Tracking: Advanced Applications, Vol. II, Norwood, MA: Artech House, 1992, pp. 93–110.
  [89] Andrade-Cetto, J., and A. Sanfeliu, “Temporal Landmark Validation in CML.” In IEEE International Conference on Robotics and Automation (ICRA), Vol. 2, September 2003, pp. 1576–1581.
  [90] Bibby, C., and I. Reid, “Simultaneous Localisation and Mapping in Dynamic Environments (SLAMIDE) with Reversible Data Association.” In Robotics Science and Systems (RSS-III), Atlanta, GA, June 2007.
  [91] Mahler, R., “Global Integrated Data Fusion,” Proc. 7th Nat. Symp. on Sensor Fusion, Vol. 1, 1994, pp. 187–199.
  [92] Mahler, R., “An Introduction to Multisource-Multitarget Statistics and Applications,” Lockheed Martin Technical Monograph, March 2000.
  [93] Vo, B., S. Singh, and A. Doucet, “Sequential Monte Carlo Methods for Multi-Target Filtering with Random Finite Sets,” IEEE Transactions on AES, in press.
  [94] Sidenbladh, H., “Multi-Target Particle Filtering for the Probability Hypothesis Density.” In Proc. Int’l Conf. on Information Fusion, Cairns, Australia, 2003, pp. 800–806.
  [95] Zajic, T., R. Ravichandran, R. Mahler, R. Mehra, and M. Noviskey, “Joint Tracking and Identification with Robustness Against Unmodeled Targets.” In Proc. SPIE Signal Processing, Sensor Fusion and Target Recognition XII, Vol. 5096, 2003, pp. 279–290.
  [96] Sidenbladh, H., and S.-L. Wirkander, “Tracking Random Sets of Vehicles in Terrain.” In Proc. 2003 IEEE Workshop on Multi-Object Tracking, WI, June 2003.
  [97] Ikoma, N., T. Uchino, and H. Maeda, “Tracking of Feature Points in Image Sequence by SMC Implementation of the PHD Filter.” In Proc. SICE Annual Conference, Vol. 2, 2004, pp. 1696–1701.
  [98] Vo, B.-N., S. Singh, and W.-K. Ma, “Tracking Multiple Speakers with Random Sets.” In Proc. Int. Conf. Acoustics, Speech and Signal Processing, Vol. 2, Montreal, Canada, 2004.
  [99] Tobias, M., and A. Lanterman, “A Probability Hypothesis Density-Based Multi-Target Tracking with Multiple Bistatic Range and Doppler Observations,” IEE Radar Sonar and Navigation, Vol. 152, No. 3, 2005, pp. 195–205.
  [100] Mahler, R., “A Survey of PHD Filter and CPHD Filter Implementations.” In Proc. SPIE Defense and Security Symposium of Signal Processing, Sensor Fusion and Target Recognition XII, April 2007.
  [101] Vo, B. N., and W. K. Ma, “The Gaussian Mixture Probability Hypothesis Density Filter,” IEEE Transactions on Signal Processing, Vol. 54, No. 11, November 2006, pp. 4091–4104.
  [102] Clark, D. E., and B.-N. Vo, “Convergence Analysis of the Gaussian Mixture Probability Hypothesis Density Filter,” IEEE Transactions on Signal Processing, Vol. 55, No. 4, 2007, pp. 1204–1212.
  [103] Mahler, R., “PHD Filters of Higher Order in Target Number,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 43, No. 4, 2007, pp. 1523–1543.
  [104] Vo, B. T., B. N. Vo, and A. Cantoni, “Analytic Implementations of the Cardinalized Probability Hypothesis Density Filter,” IEEE Transactions on Signal Processing, Vol. 55, No. 7, July 2007, pp. 3553–3567.
  [105] Ramos, F. T., J. Nieto, and H. F. Durrant-Whyte, “Recognising and Modelling Landmarks to Close Loops in Outdoor SLAM.” In International Conference on Robotics and Automation (ICRA), Rome, Italy, April 2007.
  [106] Konolige, K., M. Agrawal, and J. Sola, “Large-Scale Visual Odometry for Rough Terrain.” In International Symposium on Research in Robotics (ISRR), Hiroshima, Japan, 2007.
  [107] Ruiz, I. T., S. de Raucourt, and Y. Petillot, “Concurrent Mapping and Localization Using Sidescan Sonar,” IEEE Journal of Oceanic Engineering, Vol. 29, No. 2, April 2004.

Part II Radar Modeling and Scan Integration

Chapter 5

Predicting and Simulating FMCW Radar Measurements

5.1  Introduction

This chapter addresses the issue of predicting the A-scope displays of MMW radars that use the FMCW range estimation technique. This is important for two reasons. First, in automotive and autonomous robotic applications, such sensors are used in conjunction with vehicle navigation and map state estimation filters, so that typically uncertain vehicle motion knowledge can be optimally fused with noisy sensor information. It is then possible to infer estimates of the state of interest, which is typically the vehicle's pose and/or the locations of the surrounding objects. Hence it is essential that predicted A-scopes can be computed, based on previous measurements and uncertain vehicle motion information, in order to apply a Bayesian recursive estimation framework. A method for predicting such observations for an FMCW MMW radar, based on estimates of the range to, and RCS of, targets is therefore given in this chapter. This is possible once prior robot and feature estimates are initialized, which include the vehicle's pose, the estimated feature locations, and their RCS estimates.

Second, it is extremely useful to be able to simulate MMW radar data for given environmental configurations. This aids the development of reliable object detection algorithms, based on theoretical sensor and noise models, which can then be applied more effectively to real MMW radar data, since random noise and clutter can be independently controlled in simulated scans. To achieve this, the simple radar range equation, modeling of the FMCW process, and an experimental analysis of the power and range noise statistics are used. The radar used in this chapter is the Navtech MMW, 77-GHz FMCW radar introduced in Figure 1.6. Its specifications are given in Appendix A.

The chapter is organized as follows. In Section 5.2, the FMCW equations of Chapter 2 are revisited to provide insight into A-scope generation in the presence of noise, which is essential for predicting an A-scope correctly. Experimentally determined received power noise statistics during object absence and presence are demonstrated in Section 5.3. This section further provides an experimental analysis of the spatial range noise in the FMCW process and shows that the received power noise is of far greater consequence than the spatial range noise. Radar A-scope prediction, using the experimentally determined noise statistics of Section 5.3, is the subject of Section 5.4. The aim is to generate realistic predicted radar A-scopes, with simulated noise, both


from expected target RCS and range estimates, and from predicted vehicle locations and prior A-scope arrays. Section 5.5 introduces an error metric, based on the Pearson correlation coefficient [1], for quantifying the accuracy of predicted A-scopes compared with actual data. The metric is used throughout the results in Section 5.6. A comparison of the predicted and actual scans demonstrates the feasibility of the proposed A-scope prediction techniques for providing spatial and received power predictions for trees, lamp posts, pillars, cars, and other artifacts in outdoor environments.

5.2  FMCW Radar Detection in the Presence of Noise

An in-depth understanding of the FMCW power versus range A-scope generation principle, introduced in Section 2.6.2, is essential for predicting the radar measurements. Here the inclusion of noise into the FMCW process is considered, and Equations 2.26 through 2.31 are extended accordingly. For the accurate prediction of radar A-scopes, the characterization of noise is important. It is therefore the aim of this section to demonstrate how radar noise affects the FMCW received power and range estimates. The two main noise components are thermal and phase noise. It will be shown that thermal noise predominantly affects the power reading, while phase noise affects the range estimate. In Section 5.3 the types of noise distribution affecting the received power and range estimates will be examined.

In the presence of noise, the transmitted signal in Equation 2.26, υT(t), as a function of time t, becomes

    \upsilon_T(t) = [A_T + a_T(t)] \cos\left[\omega_c t + 2\pi \left(\frac{\Delta f}{T_d}\right) \int_0^t t\, dt + \psi(t)\right]
                  = [A_T + a_T(t)] \cos\left[\left(\omega_c + \pi \left(\frac{\Delta f}{T_d}\right) t\right) t + \psi(t)\right]    (5.1)

where it is recalled that Δf is the FMCW swept frequency (chirp) bandwidth and T_d is the chirp repetition time. In addition to the terms defined in Equation 2.26, a_T(t) is the thermal noise and ψ(t) is the phase noise present in the signal, introduced inside the transmitting electronic sections as a result of nonideal VCO operation (Section 2.8.2.1). The corresponding received signal is then given by

    \upsilon_R(t - \tau) = [A_R + a_R(t - \tau)] \cos\left[\left(\omega_c + \pi \left(\frac{\Delta f}{T_d}\right)(t - \tau)\right)(t - \tau) + \psi(t - \tau)\right]    (5.2)

where a_R(t − τ) is the thermal noise and ψ(t − τ) is the phase noise. The sources of noise affecting the signal's amplitude were discussed in Chapter 2, Section 2.7.


The result of the FMCW mixing process is then given by

    \upsilon_T(t)\,\upsilon_R(t - \tau) = [A_T + a_T(t)][A_R + a_R(t - \tau)]
        \times \cos\left[\left(\omega_c + \pi \left(\frac{\Delta f}{T_d}\right) t\right) t + \psi(t)\right]
        \times \cos\left[\left(\omega_c + \pi \left(\frac{\Delta f}{T_d}\right)(t - \tau)\right)(t - \tau) + \psi(t - \tau)\right]    (5.3)

Again, applying the product of two sinusoids identity now yields υ_out(t − τ) as

    \upsilon_{out}(t - \tau) = \frac{[A_T + a_T(t)][A_R + a_R(t - \tau)]}{2}\,[B_1 + B_2]    (5.4)

where

    B_1 = \cos\left[2\left(\omega_c + \pi \left(\frac{\Delta f}{T_d}\right)(t - \tau)\right) t - \left(\omega_c - \pi \left(\frac{\Delta f}{T_d}\right)\tau\right)\tau + \psi(t) + \psi(t - \tau)\right],

    B_2 = \cos\left[2\pi \left(\frac{\Delta f}{T_d}\right)\tau\, t - \left(\omega_c + \pi \left(\frac{\Delta f}{T_d}\right)\tau\right)\tau + \psi(t) - \psi(t - \tau)\right]    (5.5)

Recall that the second cosine term, B_2, is the signal containing the beat frequency. Therefore, in FMCW radars, the selected output of the mixer consists of this beat frequency component, B_2, and noise components with frequencies similar to it. The output of the filter within the mixer stage, which selects the difference frequency signal component B_2 of Equation 5.4, is then given by

    \upsilon_{beat}(t, \tau) = \frac{A'}{2} \cos\left[2\pi \left(\frac{\Delta f}{T_d}\right)\tau\, t - \left(\omega_c + \pi \left(\frac{\Delta f}{T_d}\right)\tau\right)\tau + \Delta\psi(t, \tau)\right]    (5.6)

where A′ = [A_T + a_T(t)][A_R + a_R(t − τ)] is the product of the transmitted and received signal amplitudes, each corrupted by noise, and Δψ(t, τ) = ψ(t) − ψ(t − τ) is the differential phase noise. Hence, the noise-free amplitude and frequency terms of the beat frequency signal derived in Equation 2.31 in Section 2.6.2 can now be seen to be corrupted by terms that, in general, depend on both time t and delay time τ (which depends directly on range), i.e., the terms a_R(t − τ) and Δψ(t, τ). A brief discussion of these two noise components of Equation 5.6, present in the beat signal, and their effect on both the received power and range estimates is therefore considered next. This information is necessary for understanding the radar A-scope and an effective representation of targets.
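The beat signal of Equation 5.6 can be sketched numerically. The following is an illustrative simulation, not the Navtech radar's actual parameters: the chirp bandwidth, repetition time, sampling rate, and noise magnitudes below are all assumed values chosen only to show how the beat frequency maps linearly back to range.

```python
import numpy as np

# Illustrative FMCW beat-signal sketch based on Equation 5.6.
# All parameter values are assumptions for demonstration.
c = 3e8            # speed of light (m/s)
delta_f = 600e6    # chirp bandwidth Delta f (Hz), assumed
T_d = 1e-3         # chirp repetition time (s), assumed
fs = 2e6           # sampling rate (Hz), assumed
R = 50.0           # true target range (m)

tau = 2.0 * R / c                      # round-trip delay
f_beat = (delta_f / T_d) * tau         # beat frequency (Delta f / T_d) * tau

t = np.arange(0, T_d, 1.0 / fs)
rng = np.random.default_rng(0)
A = 1.0 + 0.05 * rng.standard_normal(t.size)   # amplitude with thermal noise
dpsi = 0.01 * rng.standard_normal(t.size)      # differential phase noise
v_beat = A * np.cos(2 * np.pi * f_beat * t + dpsi)

# The beat frequency maps back to range: R = c * T_d * f_b / (2 * Delta f).
spec = np.abs(np.fft.rfft(v_beat))
f_peak = np.fft.rfftfreq(t.size, 1.0 / fs)[np.argmax(spec[1:]) + 1]
R_est = c * T_d * f_peak / (2 * delta_f)
print(round(R_est, 1))
```

With these assumed parameters the FFT bin spacing corresponds to a 0.25 m range bin, matching the range resolution quoted later in this chapter.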


5.3  Noise Distributions During Target Absence and Presence

5.3.1  Received Power Noise Estimation

As shown in Equation 5.4, the beat frequency signal is affected by the thermal noise signal a_R(t − τ), which contributes to A′ in Equation 5.6. This noise source therefore directly corrupts the estimated received power. Estimating it is necessary for simulating the noisy received power values in range bins corresponding to target absence (target hypothesis H0) and presence (target hypothesis H1). A simple and useful technique to estimate the received noise power, in terms of its distribution type and moments, is to monitor several recorded A-scopes from the output of the radar.

5.3.1.1  Power Noise During Target Absence

Figure 5.1 shows a normalized distribution corresponding to 5,000 measurements of the received, linearized power S^lin. This was calculated from Equation 2.21 and recorded from the radar with its scanning swash plate stationary, pointing into the air in a direction with no targets [2]. The continuous curve is an exponential distribution with the same mean as the sample mean of the discrete data. It can be seen that the received power during target absence approximately follows an exponential distribution, as predicted in Chapter 3, Section 3.4 (Figure 3.5), and in much of the radar literature [3, 4]. In this case, the continuous exponential distribution is characterized by

    p(S^{lin} \mid H_0) = \mathrm{Exponential}(\lambda_{ta}) = \begin{cases} \lambda_{ta} \exp(-\lambda_{ta} S^{lin}) & \text{if } S^{lin} \geq 0 \\ 0 & \text{otherwise} \end{cases}    (5.7)

with target absence parameter λ_ta = 256.48.
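The target-absence parameter can be recovered from noise-only samples in exactly this way: for an exponential distribution, the maximum-likelihood estimate of the rate is the reciprocal of the sample mean. The sketch below uses synthetic draws as a stand-in for the 5,000 recorded A-scope values; only the value λ_ta = 256.48 comes from the text.

```python
import numpy as np

# Sketch: estimating lambda_ta of Equation 5.7 from noise-only power
# samples. For an exponential distribution, the maximum-likelihood
# estimate of the rate parameter is 1 / (sample mean). Synthetic data
# stand in for recorded A-scope values.
rng = np.random.default_rng(1)
lambda_true = 256.48                                   # value reported in the text
s_lin = rng.exponential(1.0 / lambda_true, size=5000)  # 5,000 noise-only samples

lambda_hat = 1.0 / np.mean(s_lin)                      # MLE of the rate
print(abs(lambda_hat - lambda_true) / lambda_true < 0.05)  # within a few percent
```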

Figure 5.1  Noise distribution during target absence: the actual noise distribution p(S^lin | H0) versus S^lin, with the equivalent exponential distribution overlaid.


5.3.1.2  Power Noise During Target Presence

In the literature, various distribution types have been applied to model the received power when a target is present. These vary from the simple Gaussian (as considered in the basic detection theory analysis of Chapter 3, Section 3.4) to Rayleigh, gamma, chi-squared, and more elaborate distributions. More details are given in the Bibliographic Remarks, Section 5.8, at the end of this chapter. As discussed in Section 2.7.1.3, received power fluctuations from a given target from one scan to the next can also be modeled by the classical Swerling models [5]. These provide probability distributions on the fluctuating RCS values of targets, which model changes in RCS due to aspect changes as a target moves relative to the radar. This is often considered to be the main influencing factor on fluctuations in the received power between radar scans.

Normalized distributions, corresponding to 5,000 discrete linearized power values recorded from the Navtech FMCW radar (Figure 1.6) using stationary targets of high and low RCS, are shown in Figure 5.2. The continuous curves show theoretical Rayleigh distributions with the same mean values as the sample means of the discrete data. As a first approximation, the Rayleigh distributions can be deemed appropriate, allowing the Swerling I model to be applied (Section 2.7.1.3). It is also reasonable to assume that the noise power statistics are independent of range and depend only on target aspect RCS fluctuations and the radar receiver electronics, an approximation that is supported by Figures 5.2(a) and (b).

Based on this analysis, noisy predicted power values can be simulated. In the case of known target absence at particular range bins, samples are randomly drawn from the exponential distribution of Equation 5.7 with λ_ta = 256.48. In the case of known target presence at a particular range bin, with a given expected received power, samples are drawn at random from a Rayleigh distribution with parameter σ_tp dependent on the expected received power value. The equations used to achieve this will be given in Section 5.4.1.

5.3.2  Range Noise Estimation

Another source of noise affecting the A-scopes is the phase noise Δψ(t, τ) in Equation 5.6. Phase noise is generated by the path leakage of the transmitted signal to the mixer, resulting in a spectrum of frequencies of finite bandwidth instead of a single beat frequency. This introduces noise into the range estimate itself, and experimental data show that it results in a slight broadening of the power peaks along the range axis, which will be noticeable in many of the actual scans shown in Section 5.6. To demonstrate and estimate the phase noise effect, a large number of superimposed A-scopes, obtained at the same radar swash plate bearing angle, can be plotted. An example, for 5,000 A-scope displays, is shown in Figure 5.3. Figure 5.3(a) shows the entire set of A-scope displays, with the returned power S^log_rc in dB, while Figure 5.3(b) shows the same set of A-scope displays using a linearly scaled power axis S^lin, with the effect of the range compensation filter removed. The A-scope displays contain three dominant targets, observed due to


Figure 5.2  Received power distributions p(S^lin | H1) during target presence, for high and low received power values. (a) Distribution for high received power values. (b) Distribution for low received power values.


Figure 5.3  5,000 superimposed A-scope displays, recorded at the same bearing angle. (a) Superimposed logarithmic, range compensated, A-scope displays S^log_rc. (b) Superimposed, linearized power A-scope displays S^lin.

the partial occlusion of objects that lie inside the radar's beam width.¹ To estimate the phase (and hence range) noise distribution, a statistical analysis is demonstrated here on the targets at approximately 10 and 136.5 meters, which have differing RCS values, to determine any power and/or range dependency of the range noise

1. In Figure 5.3(a) (received power with a dB scale), three power returns are evident, due to the range compensation filter and logarithmic scaling. Clearly, after the display is linearized and the range compensation filter effect is removed, targets at greater range (such as the third target at 150 m) cannot be visualized on a linear scale, as shown in Figure 5.3(b).


Figure 5.4  Experimental histograms of range noise for targets at different ranges. (a) Target at 10 m. (b) Target at 136.5 m.

statistics. The resulting range histograms are shown in Figures 5.4(a) and (b). The distributions show only the range values that correspond to the maximum power values recorded from a given target, with the radar swash plate fixed, taken from 5,000 linearized A-scopes S^lin. It is apparent that the resulting range standard deviations are approximately independent of range and returned power and, more importantly, that they are significantly smaller than the range resolution (0.25 m) of this particular radar. This can also be seen in Figure 5.3(b) since, on the linear scale, the power "peaks" are very sharp. Therefore, in the prediction of radar A-scopes, the broadening of peaks will not be considered significant in comparison with the


range resolution of the radar's range bins, and the phase (and hence range) noise is ignored. It is therefore reasonable to assume that the thermal noise, which affects the received power value, plays the more crucial role for this radar. This has a direct influence on the correct detection of a target: feature/target extraction algorithms must be able to minimize the effect of this noise on target existence decisions.

5.4  Predicting Radar Measurements

The tools are now complete to predict radar A-scope displays. In this chapter, two methods of predicting the displays are given. The first is based simply on generating the predicted received power value in each range bin for a target at an estimated range r̂ with an estimated RCS (Ĝ^RCS) value. The second method predicts such A-scopes based on a robot motion model.

5.4.1  A-Scope Prediction Based on Expected Target RCS and Range

First, the discretized radar range bin q = u closest to the predicted range (i.e., r(u) ≈ r̂) is chosen to contain the predicted power value from the target. The calculation of this predicted power value now follows. In Chapter 2, Equation 2.18 showed that, during target presence and in general for large SNR, S^log_rc(q = u) is dependent only on the RCS of the target in range bin q = u, as approximately verified in Figures 2.10 and 2.13. Therefore, Equation 2.18 gives S^log_rc(u) as a function of G^RCS(u). Given an expected target RCS in range bin q = u, Ĝ^RCS(u), the corresponding expected received power Ŝ^log_rc(u) is therefore determined from Equation 2.18

    \hat{S}^{log\_rc}(u) \approx \mathrm{Constant} + 10 \log_{10}(\hat{G}^{RCS}(u))    (5.8)

From Equations 2.17 and 2.21, the expected linear received power (with range compensation removed) is given by

    \hat{S}^{lin}(u) = 10^{(\hat{S}^{log\_rc}(u) - 40 \log_{10}(A_{FMCW}\, r(u)))/10}    (5.9)

Based on the statistical analysis of the received power during target presence in Section 5.3.1.2, Ŝ^lin(u) is used as the mean value of a Rayleigh power distribution,² from which a random sample power value (S^lin(u))* is drawn. This simulates the received power noise during target presence in range bin u. For a Rayleigh distribution, the parameter σ_tp is related to the mean Ŝ^lin(u) of the distribution³ as follows:

    \sigma_{tp} = \sqrt{\frac{2}{\pi}}\, \hat{S}^{lin}(u)    (5.10)

2. It should be noted from Figure 5.4, however, that the phase noise is significant enough to cause the range bins immediately neighboring a target to be affected by it. This is why the guard cells, introduced in Chapter 3, Section 3.5.1, are necessary to stop the bins immediately neighboring a power spike from corrupting the noise power estimate in CFAR processors.
3. Which is theoretically proportional to the received power, assuming the range compensation, high-pass filter to be ideal.


Hence, in range bin q = u, the received power value is simulated as

    (S^{lin}(u))^{(i)} \sim \mathrm{Rayleigh}(\sigma_{tp})    (5.11)

To generate the power values in the range bins q (1 ≤ q ≤ Q, q ≠ u), with Q being the number of bins in each A-scope, random samples are drawn from the exponential noise distribution of Equation 5.7:

    (S^{lin}(q \neq u))^{(i)} \sim \mathrm{Exponential}(\lambda_{ta})    (5.12)

where λ_ta was defined in Equation 5.7. Results demonstrating predicted A-scope displays based on expected target RCS and range will be shown in Section 5.6.1.1.
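The sampling scheme of Equations 5.10 through 5.12 can be sketched as follows. The bin index, expected power value, and units below are assumed for illustration; λ_ta = 256.48 and Q = 800 come from this chapter.

```python
import numpy as np

# Sketch of the A-scope simulation of Equations 5.10-5.12: the range bin
# containing the target receives a Rayleigh power sample whose mean equals
# the expected linear power, and all other bins receive exponential noise.
rng = np.random.default_rng(2)

Q = 800                    # number of range bins per A-scope
u = 41                     # bin containing the target (assumed: 10.25 m at 0.25 m bins)
S_lin_hat = 30.0           # expected linear target power (assumed value)
lambda_ta = 256.48         # target-absence noise parameter (Equation 5.7)

# Rayleigh parameter from the mean, Equation 5.10: sigma_tp = sqrt(2/pi) * mean,
# since a Rayleigh(sigma) variate has mean sigma * sqrt(pi/2).
sigma_tp = np.sqrt(2.0 / np.pi) * S_lin_hat

a_scope = rng.exponential(1.0 / lambda_ta, size=Q)   # Equation 5.12
a_scope[u] = rng.rayleigh(sigma_tp)                  # Equation 5.11

print(int(np.argmax(a_scope)))   # the target bin should dominate the scan
```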

5.4.2  A-Scope Prediction Based on Robot Motion

In a robot navigation scenario, radar A-scope prediction is possible given prior robot and feature estimates, which include the vehicle's pose, the estimated feature locations, and their estimated RCS values.

5.4.2.1  Vehicle Motion

The control inputs to the vehicle can be considered to be its time-varying forward speed V and steering angle γ_s, as already considered in the Ackerman-based vehicle model of Section 4.2.1.1. Therefore, in terms of these inputs, the vehicle state can be predicted by taking expectations of Equation 4.9 to give

    \begin{bmatrix} \hat{x}_{k|k-1} \\ \hat{y}_{k|k-1} \\ \hat{\phi}_{k|k-1} \end{bmatrix} = \begin{bmatrix} \hat{x}_{k-1|k-1} \\ \hat{y}_{k-1|k-1} \\ \hat{\phi}_{k-1|k-1} \end{bmatrix} + \begin{bmatrix} \Delta T\, V_{k-1} \cos[\hat{\phi}_{k-1|k-1}] \\ \Delta T\, V_{k-1} \sin[\hat{\phi}_{k-1|k-1}] \\ \dfrac{\Delta T\, V_{k-1} \tan[(\gamma_s)_{k-1}]}{L} \end{bmatrix}    (5.13)

where x̂_{k|k−1} represents the predicted x coordinate of the vehicle at time k, given all measurement and input data up to, and including, time k − 1; x̂_{k−1|k−1} represents the previously estimated x coordinate of the vehicle at time k − 1 (and similarly for y and heading φ); ΔT is the time between updates k − 1 and k; L is the vehicle's wheelbase; and V_{k−1} and (γ_s)_{k−1} are the discretized forward velocity and steering angle commands given to the vehicle at time k − 1. Note that Equation 5.13 represents the expected value of a particular case of the general vehicle model of Equation 4.7, introduced in Section 4.2.1, which includes the vehicle process noise term u_k. That is, Equation 5.13 is of the form

    \hat{X}_{k|k-1} = \mathbf{f}^{veh}(\hat{X}_{k-1|k-1}, U_{k-1})    (5.14)

Also, assuming the static, 2D point feature environment of Equation 4.10, the predicted coordinates of the ith point target are simply

    \begin{bmatrix} \hat{x}^i_{k|k-1} \\ \hat{y}^i_{k|k-1} \end{bmatrix} = \begin{bmatrix} \hat{x}^i_{k-1|k-1} \\ \hat{y}^i_{k-1|k-1} \end{bmatrix}    (5.15)

for all point features i.


5.4.2.2  Spatial Prediction

From Figure 4.1, it can be seen that at time k it is possible to predict the range r̂^i_{k|k−1} and bearing θ̂^i_{k|k−1} of the ith point target, in terms of the variables defined in Equations 5.13 and 5.15:

    \hat{r}^i_{k|k-1} = \sqrt{\left(\hat{x}^i_{k|k-1} - \hat{x}^{radar}_{k|k-1}\right)^2 + \left(\hat{y}^i_{k|k-1} - \hat{y}^{radar}_{k|k-1}\right)^2}    (5.16)

    \hat{\theta}^i_{k|k-1} = \arctan\left[\frac{\hat{y}^i_{k|k-1} - \hat{y}^{radar}_{k|k-1}}{\hat{x}^i_{k|k-1} - \hat{x}^{radar}_{k|k-1}}\right] - \hat{\phi}_{k|k-1}    (5.17)

where, due to the radar's offset distance from the vehicle's origin,

    \hat{x}^{radar}_{k|k-1} = \hat{x}_{k|k-1} + a \cos[\hat{\phi}_{k|k-1}] - b \sin[\hat{\phi}_{k|k-1}]
    \hat{y}^{radar}_{k|k-1} = \hat{y}_{k|k-1} + a \sin[\hat{\phi}_{k|k-1}] - b \cos[\hat{\phi}_{k|k-1}]    (5.18)

with a and b defined in Figure 4.1. When a radar scan is to be predicted after vehicle motion, feature i is predicted to lie in the A-scope S^log_rc that will be recorded at bearing angle θ̂^i_{k|k−1}, in the nearest range bin q̂^i corresponding to range value r̂^i_{k|k−1} (Equation 5.16).

5.4.2.3  Power Prediction

This section continues the analysis of Section 5.3.1.2, where power distributions during target presence were given. When an object is re-observed at time k, as in Figure 4.1, the range to the object, as well as the angle of incidence between the radar's transmitted signal and the object itself, will have changed. For an ideal radar, the range compensation filter should eliminate any effect that the new range value has on the received power at time k, which will be referred to as S^log_rc_k(q̂^i). In general, the angle θ^i_k at which an object is sensed by the radar will also affect its RCS value. In this chapter, sensed objects are assumed to be cylindrical in nature, exhibiting the same RCS value independent of the angle from which they are sensed (Swerling V model). That is,

    \hat{G}^{RCS_i}_{k|k-1} = \hat{G}^{RCS_i}_{k-1|k-1}    (5.19)

In general, this is clearly a very simplifying assumption, but experiments will show that such ideal conditions are approximately obeyed for cylindrical objects such as lamp posts, tree trunks, and pillars. The predicted power is related to the predicted RCS of Equation 5.19 through Equation 2.18 and is given by

    \hat{S}^{log\_rc}(q^i) \approx \mathrm{constant} + 10 \log_{10}(\hat{G}^{RCS_i})    (5.20)

where Ŝ^log_rc(q^i) corresponds to the predicted received power, with logarithmic compression and range compensation applied, in the range bin q^i that corresponds to feature i.
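The spatial prediction of Equations 5.16 through 5.18 can be sketched directly. The radar offsets, poses, feature position, and 0.25 m bin width below are assumed values for illustration; the lateral-offset sign follows the form of Equation 5.18 as printed.

```python
import numpy as np

# Sketch of the spatial prediction of Equations 5.16-5.18: predicted range
# and bearing to a feature from the radar's position, which is offset from
# the vehicle origin by (a, b). All numerical values here are assumptions.
def predict_range_bearing(xv, yv, phi, xf, yf, a=1.0, b=0.3):
    # Radar position from the vehicle pose (Equation 5.18, using the
    # sign convention of the printed equations for the offset b).
    xr = xv + a * np.cos(phi) - b * np.sin(phi)
    yr = yv + a * np.sin(phi) - b * np.cos(phi)
    r = np.hypot(xf - xr, yf - yr)                  # Equation 5.16
    theta = np.arctan2(yf - yr, xf - xr) - phi      # Equation 5.17
    return r, theta

r, theta = predict_range_bearing(0.0, 0.0, 0.0, xf=10.0, yf=0.3)
q_hat = int(round(r / 0.25))     # nearest 0.25 m range bin
print(round(r, 2), q_hat)
```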

Table 5.1  Summary of the Symbols Used to Represent a Target's State, Its Measurement, and Predicted Measurement

    Target State      Radar Measurement      Predicted Radar Measurement
    x^i_k, y^i_k      r_k(q^i), θ^i_k        r̂_{k|k−1}(q̂^i), θ̂^i_{k|k−1}
    G^RCS_i_k         S^log_rc_k(q^i)        Ŝ^log_rc_{k|k−1}(q̂^i)

Hence the target's state can be considered to consist of its planar x and y coordinates and its RCS value G^RCS. Actual and predicted observations of a target consist of the range to target i, measured to the nearest discrete range bin q and denoted r(q^i); the target's relative bearing angle θ^i; and the received power S^log_rc(q^i). This is summarized in Table 5.1.

5.5  Quantitative Comparison of Predicted and Actual Measurements

To assess the quality of the mathematically predicted scans, comparisons with actual scans are necessary, based on a principled metric. To quantify the "similarity" between a predicted A-scope, computed from data recorded at previous vehicle positions, and the current, corresponding real A-scope, Pearson's coefficient of determination is used [1]. This coefficient provides a measure of the strength of the linear relationship between the predicted A-scope Ŝ^lin, with predicted power values Ŝ^lin(q), and the observed A-scope S^lin, with power values S^lin(q), and will provide an indication of how accurate the predicted A-scopes are. This coefficient P²(Ŝ^lin, S^lin) (0 ≤ P²(Ŝ^lin, S^lin) ≤ 1) is given by

    P^2(\hat{S}^{lin}, S^{lin}) = \left[\frac{Q \sum \hat{S}^{lin}(q) S^{lin}(q) - \sum \hat{S}^{lin}(q) \sum S^{lin}(q)}{\sqrt{\left(Q \sum (\hat{S}^{lin}(q))^2 - \left(\sum \hat{S}^{lin}(q)\right)^2\right) \left(Q \sum (S^{lin}(q))^2 - \left(\sum S^{lin}(q)\right)^2\right)}}\right]^2    (5.21)

where each summation is taken from q = 1 to Q (in this case Q = 800) at a given scanning bearing angle. A value of P²(Ŝ^lin, S^lin) = 1 shows that the two A-scopes are fully correlated, and P²(Ŝ^lin, S^lin) = 0 shows that they are uncorrelated.

Before determining this coefficient, the actual received A-scope under comparison is linearized, and the effect of the range compensation filter is removed, assuming it behaves as a 40 dB/decade high-pass filter, as mentioned in Section 2.5.2. This allows comparison with a range-uncompensated, predicted, linear power A-scope. Due to the large dynamic range of the received signals, however, the predicted and actual A-scopes are still shown on a logarithmic scale for graphical comparison, with range compensation applied, simulated for the predicted FMCW A-scopes with a 40 dB/decade high-pass filter.
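Equation 5.21 is the square of the sample Pearson correlation between the two power arrays and can be evaluated directly. The short arrays below are synthetic stand-ins for Q = 800 bin A-scopes, used only to exercise the formula.

```python
import numpy as np

# Sketch of the Pearson coefficient of determination of Equation 5.21 for
# comparing a predicted and an actual linear-power A-scope. The arrays
# below are small synthetic stand-ins for Q = 800 bin A-scopes.
def p_squared(s_hat, s):
    Q = len(s_hat)
    num = Q * np.sum(s_hat * s) - np.sum(s_hat) * np.sum(s)
    den = np.sqrt((Q * np.sum(s_hat**2) - np.sum(s_hat)**2) *
                  (Q * np.sum(s**2) - np.sum(s)**2))
    return (num / den) ** 2

s_hat = np.array([0.1, 0.2, 5.0, 0.3, 0.1])   # predicted A-scope (assumed)
s     = np.array([0.2, 0.1, 4.6, 0.2, 0.2])   # observed A-scope (assumed)
print(round(p_squared(s_hat, s), 3))          # near 1: strongly correlated
print(round(p_squared(s_hat, s_hat), 3))      # identical A-scopes give 1.0
```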


5.6  A-Scope Prediction Results

5.6.1  Single Bearing A-Scope Prediction

5.6.1.1  Prediction from a Single Feature

As an example, a single A-scope is to be predicted for an object with a known RCS (10 m²), assumed to be at a distance of 10.25 meters (range bin q = 41) from the radar. This is then qualitatively and quantitatively compared with a real A-scope recorded under the same conditions. Analysis of the predicted (Figure 5.5) and actual (Figure 5.6) A-scope displays shows a slight mismatch in the noise frequency with respect to range, most likely due to the unmodeled phase noise throughout the entire A-scope. It should also be noted that the actual radar used here is unable to report received power values for objects at ranges below 5 m, as can be seen in Figure 5.6. The predicted magnitude of the received power from the target, calculated from Equations 5.8 through 5.11 as 30 dB, corresponds to that actually received in the controlled experiment. The P²(Ŝ^lin, S^lin) value (Equation 5.21) obtained for Figures 5.5 and 5.6 is 0.9741, indicating a close similarity between the predicted and actual A-scope displays.

5.6.1.2  Prediction from Multiple Features

If a predicted range A-scope contains multiple targets down-range, only the predicted power from the first (closest) target will be predicted correctly, since, in this work, no account is taken of the power absorption by the j − 1 targets occluding the jth target down-range (for j > 1). As an example, Figure 5.7 (predicted A-scope Ŝ^log_rc) and Figure 5.8 (real A-scope S^log_rc) show range compensated A-scope displays with

Figure 5.5  Predicted A-scope Ŝ^log_rc for a single target, with a 40 dB/decade range compensation filter applied.

Figure 5.6  A-scope S^log_rc obtained from an actual target of RCS 10 m² at 10.25 meters.

reflections from two features. The first reflection is from a tree, and the second is from a concrete pillar. As indicated in Figure 5.7, under the current model the power value from the second feature cannot be predicted. However, the remainder of the A-scope, including the received power from the closest target, is predicted adequately, as indicated by an accompanying correlation coefficient P²(Ŝ^lin, S^lin) = 0.9807. This value indicates that the predicted A-scope has a close "similarity" to the real A-scope with multiple features and that it is possible to predict A-scopes with multiple feature returns accurately.

Figure 5.7  Predicted A-scope Ŝ^log_rc for two co-aligned targets, with a 40 dB/decade range compensation filter applied. (The predicted power from the second feature is unknown under the current model.)

Figure 5.8  Actual A-scope S^log_rc obtained from the two co-aligned targets.

5.6.2  360° Scan Multiple A-Scope Prediction, Based on Robot Motion

This section compares full 360° predicted radar scans, derived from the motion and received power models of Section 5.4, with actual scans. The predicted full scans are statistically compared with the actual ones to quantify the goodness of the proposed prediction method, and the results are analyzed. Qualitative comparisons are also given, in terms of the known existence and position of objects within the scanned environments. The aim is to show how useful the predictive model is, based on detected features from an initial radar scan. These form an initial state that is then propagated through the vehicle's kinematic model of Section 5.4.2, together with the noise analysis, to produce predicted scans at other radar locations. The radar is then moved to each such location, and a scan is taken for comparison purposes.

For analyzing the proposed prediction method, tests were carried out in the outdoor car park environment of Figure 2.17, also shown in Figure 5.9 with many artifacts labeled. The real radar scan obtained from the initial vehicle location (marked "0" in the photograph) is shown in Figure 5.10. The shaded scale in the figure indicates the received power values. Features were extracted from this initial, raw scan using the CA-CFAR processor, explained in Section 3.5.1, with P_fa set to 10⁻⁵. The power values that exceeded the CA-CFAR threshold are shown in Figure 5.11 and are used to predict the feature locations in subsequent scans. Next, the vehicle was moved, and its predicted location was based on odometry and the vehicle kinematic model of Equation 4.9 [6]. A predicted scan was then calculated from that predicted vehicle location, by translating the previously extracted feature locations through the vehicle motion model (Equations 5.16 and 5.17) and using the previous A-scope prediction method. The full predicted 360° scan from the parking lot is shown in Figure 5.12. The vehicle was then commanded to move to that location (position "1" in Figure 5.9), which is approximately 2.3 meters


Figure 5.9  Photograph of the parking lot environment used for comparing predicted and actual radar scans, with certain artifacts and natural objects labeled. Note that the car in the foreground was not present when the radar scans were recorded. (© 2010, IEEE. Reprinted with permission from E. Jose et al., “Predicting Millimeter Wave Radar Spectra for Autonomous Navigation,” IEEE Sensors Journal, Vol. 10, No. 5, 2010.)

Figure 5.10  360° radar scan showing multiple A-scope displays S^log_rc at successive bearing angles, recorded from the initial vehicle location "0" in Figure 5.9, showing the power received from range values up to 50 m. The shaded scale on the right indicates the received power values in dB. Features include trees, lamp posts, concrete buildings, and pillars. (© 2010, IEEE. Reprinted with permission from E. Jose et al., "Predicting Millimeter Wave Radar Spectra for Autonomous Navigation," IEEE Sensors Journal, Vol. 10, No. 5, 2010.)


Figure 5.11  CA-CFAR target detection applied to the scan of Figure 5.10, plotted over X (m) and Y (m). The peaks show the received power values that have exceeded the adaptive CA-CFAR threshold and are considered to be valid detections.

away from the initial location, and an actual radar scan was then obtained. This is shown in Figure 5.13, in which certain identifiable objects are labeled for comparison with Figure 5.12. From a qualitative point of view, it can be seen that artifacts such as the pillars, cars, and concrete corner section correspond well between the predicted and actual scans. The most notable differences between them are caused by A-scope saturation, phase noise, and clutter present in the actual scan. A-scope saturation occurs when a target of very high RCS is present at a particular angle, which can cause a large portion of the power values at that bearing to be high, due to flooding of the receiver amplifier [7]. This is evident at the two labeled sections in Figure 5.13. As mentioned in Section 5.3.2, phase noise, which is also not modeled in the scan predictor, slightly broadens the power peaks, thus increasing the uncertainty in the range at which objects are detected. This can be seen as the received power from the wall, pillars, and so on is "widened" along the respective range axes of detection, in comparison with the predicted values in Figure 5.12. Clutter is also evident in the real data and is not modeled in the predicted scans.

For quantifying the statistical similarity between the predicted and actual scans, the correlation coefficient is again obtained between each actual and predicted A-scope. The correlation coefficient P²(Ŝ^lin, S^lin) for the full scan is plotted at each bearing angle in Figure 5.14, from which it is evident that relatively high correlation values (maxima) exist in the directions of the detected pillars, concrete corner section, large trees, and a small section of the left wall. However, not all of the

182

Predicting and Simulating FMCW Radar Measurements

Figure 5.12  Predicted 360° scan (Sˆlog_rc A-scopes) for the next predicted vehicle location (location “1” in Figure 5.9), with predicted features labeled. (© 2010, IEEE. Reprinted with permission from E. Jose et al., “Predicting Millimeter Wave Radar Spectra for Autonomous Navigation,” IEEE Sensors Journal, Vol. 10, No. 5, 2010.)

predicted A-scopes are correlated (or similar) with the real A-scopes, due to any of the following reasons:

·  There is no clutter modeling incorporated into the prediction model. Hence, when a real A-scope containing land clutter is compared with the predicted one (with no clutter), a lower correlation results.
·  Features may appear/disappear due to either partial occlusion of objects or due to object(s) being outside the radar's maximum visible range, at the position from which the predicted scan was computed.
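Where such per-bearing comparisons are computed, the Pearson correlation coefficient between each predicted and actual A-scope can be sketched as follows (the function name and array layout are illustrative assumptions, not the book's implementation):

```python
import numpy as np

def per_bearing_correlation(s_pred, s_act):
    """Pearson correlation coefficient between the predicted and actual
    A-scope at each bearing angle.

    s_pred, s_act: arrays of shape (n_bearings, n_range_bins) holding
    linearized received power values.
    """
    corr = np.zeros(s_pred.shape[0])
    for q in range(s_pred.shape[0]):
        p = s_pred[q] - s_pred[q].mean()
        a = s_act[q] - s_act[q].mean()
        denom = np.sqrt((p * p).sum() * (a * a).sum())
        # A flat (zero-variance) A-scope gives an undefined coefficient;
        # report zero correlation in that case.
        corr[q] = (p * a).sum() / denom if denom > 0 else 0.0
    return corr
```

Plotting this coefficient against bearing angle then yields curves of the form shown in Figures 5.14, 5.19, and 5.24, with maxima in the directions of correctly predicted features.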

Figure 5.15 (which is a zoomed view showing the pillars, cars, and concrete corner section labeled in Figure 5.12) shows a more detailed view of these predicted artifacts. Again, in these scans, the main predicted and detected objects are clear, as are the effects of phase noise and clutter in the real data. Importantly, most objects are reliably predicted, making them suitable as features within robot navigation frameworks. The corresponding zoomed actual scan is shown in Figure 5.16. Figure 5.17 shows the predicted 360° radar scan obtained from a vehicle location that is 11 meters away from the initial location (location "2" in Figure 5.9).

Figure 5.13  Actual 360° radar scan (multiple Slog_rc A-scopes) from the outdoor environment, 2.3 m away (location "1" in Figure 5.9) from the initial scan of Figure 5.10. (© 2010, IEEE. Reprinted with permission from E. Jose et al., "Predicting Millimeter Wave Radar Spectra for Autonomous Navigation," IEEE Sensors Journal, Vol. 10, No. 5, 2010.)

This demonstrates the usefulness of the predicted observations: for faster-moving vehicles, or for slower prediction/update times, it becomes necessary to predict a scan at greater distances away from the data set used to compute it. The actual scan from that vehicle location is shown in Figure 5.18. The correlation coefficient between each A-scope is shown in Figure 5.19. As the vehicle moves from position "1" to "2" in Figure 5.9, the magnitude of the correlations in Figure 5.19 in the directions of the concrete corner section and the pillars is notably reduced, while those in the direction of vehicle motion (corresponding to trees and lamp posts) increase. Again, importantly, most of the labeled features (which can be seen more clearly in the zoomed views of Figures 5.20 and 5.21) are accompanied by relatively high correlation coefficients (labeled in Figure 5.19). Finally, a predicted scan is generated for a large distance maneuver corresponding to 30 m of vehicle motion (placing the vehicle at position "3" in Figure 5.9). Note that a slight angular shift in the scan is apparent, due to a slight vehicle rotation reported by the odometry, which influences the transformation (Equation 4.9) used to predict the scan of Figure 5.22. When traversing such a large distance, one would expect fewer of the predicted features to match those in the real scan recorded at position "3," due to the occlusion or cluttering of distant objects recorded at position "0." Figure 5.23 shows the


Figure 5.14  Correlation coefficient of predicted and actual scans obtained from Figures 5.12 and 5.13. Note that 0° corresponds to the 0° line marked in Figure 5.12. (© 2010, IEEE. Reprinted with permission from E. Jose et al., “Predicting Millimeter Wave Radar Spectra for Autonomous Navigation,” IEEE Sensors Journal, Vol. 10, No. 5, 2010.)

Figure 5.15  A zoomed view of a section of the predicted scan of Figure 5.12 showing the multiple predicted A-scopes Sˆ log_rc in the vicinity of the radar. (© 2010, IEEE. Reprinted with permission from E. Jose et al., “Predicting Millimeter Wave Radar Spectra for Autonomous Navigation,” IEEE Sensors Journal, Vol. 10, No. 5, 2010.)


Figure 5.16  A zoomed view of the corresponding section from the actual scan (Slog_rc) of Figure 5.13. (© 2010, IEEE. Reprinted with permission from E. Jose et al., “Predicting Millimeter Wave Radar Spectra for Autonomous Navigation,” IEEE Sensors Journal, Vol. 10, No. 5, 2010.)


Figure 5.17  Predicted radar scan (Sˆ log_rc) from a vehicle location 11 meters away (location “2” in Figure 5.9) from the original scan of Figure 5.10. (© 2010, IEEE. Reprinted with permission from E. Jose et al., “Predicting Millimeter Wave Radar Spectra for Autonomous Navigation,” IEEE Sensors Journal, Vol. 10, No. 5, 2010.)



Figure 5.18  Actual 360° radar scan (Slog_rc) after the vehicle drives 11 meters (to position "2" in Figure 5.9) from the original location of Figure 5.10. (© 2010, IEEE. Reprinted with permission from E. Jose et al., "Predicting Millimeter Wave Radar Spectra for Autonomous Navigation," IEEE Sensors Journal, Vol. 10, No. 5, 2010.)

actual scan recorded at position "3," along with the correlation coefficient plotted in Figure 5.24, where it can be seen that many features are still correctly predicted. The trees, the lamp posts, and a sign post to the rear of the vehicle at position "3" (between scanning angles 0° and 180°) can be seen in both the predicted and actual scans and have correspondingly high correlation coefficients. This is logical, as this region was closer to the radar when the scan was predicted at position "0." Figures 5.25 and 5.26 show zoomed views in the vicinity of the robot, with a clearer view of the successfully predicted trees, lamp posts, and sign post, corresponding to the peaks in the correlation coefficient of Figure 5.24. It can be observed in the correlation plots that the correlation coefficient corresponding to the same object is different in different scans. In reality, the RCS of most objects will change as the sensor-to-object angle of incidence changes, which would also cause the correlation coefficient for the same object to change. It should also be noted that the correlation coefficient results from the correlation between entire A-scopes at a given radar bearing angle, which contain the object of interest as well as noise and/or clutter. Hence, the predicted A-scopes are created from the features sensed in one location and propagated to another location through the vehicle's kinematic model. These features will not, in general, lie at the same sensor


Figure 5.19  Correlation coefficient of the predicted and actual scans of Figures 5.17 and 5.18. (© 2010, IEEE. Reprinted with permission from E. Jose et al., “Predicting Millimeter Wave Radar Spectra for Autonomous Navigation,” IEEE Sensors Journal, Vol. 10, No. 5, 2010.)

Figure 5.20  A zoomed section of the predicted scan (Sˆlog_rc ) of Figure 5.17 with labeled objects. (© 2010, IEEE. Reprinted with permission from E. Jose et al., “Predicting Millimeter Wave Radar Spectra for Autonomous Navigation,” IEEE Sensors Journal, Vol. 10, No. 5, 2010.)



Figure 5.21  A zoomed view of the corresponding section from the actual scan (Slog_rc ) of Figure 5.18. (© 2010, IEEE. Reprinted with permission from E. Jose et al., “Predicting Millimeter Wave Radar Spectra for Autonomous Navigation,” IEEE Sensors Journal, Vol. 10, No. 5, 2010.)

bearing angles after the vehicle/sensor has moved, meaning that the new A-scope containing the object of interest is likely to contain radar data from other parts of the environment. If the new A-scope contains clutter and/or other objects, the correlation coefficient will change. The important result here is that, at the locations of the predicted objects, the correlation coefficients contain maxima, indicating highly likely feature predictions.

5.7  Summary

A technique for predicting scanned MMW radar data, based on the FMCW range measurement technique, has been presented. This allows the development of an estimation framework in the raw, high-dimensional measurement space, as opposed to the reduced-dimensional, post-processor detection space. This is useful for two reasons.

1. Observations can be predicted for the purpose of data association, based on predicted vehicle maneuvers. This is a fundamental component of current stochastic autonomous navigation techniques, which require the spatial prediction and then detection of targets for vehicle pose and map estimation. (Note that in Chapters 8, 9, and 10, robot navigation methods that eliminate the need for data association will be introduced, which remove this necessity.)


Figure 5.22  Predicted radar scan (Ŝlog_rc) 30 m from the position of the original scan of Figure 5.10. (© 2010, IEEE. Reprinted with permission from E. Jose et al., "Predicting Millimeter Wave Radar Spectra for Autonomous Navigation," IEEE Sensors Journal, Vol. 10, No. 5, 2010.)

2. The methods can be adapted to simulate radar scans in the presence of noise, for which different feature extraction algorithms can be tested, allowing the user to control target RCS values and the corrupting sensor noise parameters. Feature extraction algorithms (typically CFAR) require the noise within the neighboring bins of a feature to be estimated, which is why the prediction of entire A-scope displays is useful.

Qualitative comparisons of predicted and actual scans were shown in the results; however, quantified comparisons provide more insight into the prediction quality. Therefore, a metric based on the Pearson correlation coefficient was utilized and demonstrated, which provided quantified comparisons for validating the prediction technique. To test the concept, predictions of scans were made at progressively larger distances from the scan used to generate the predictions. Even at a distance of 30 m, robust feature prediction was demonstrated. Further research, in which clutter modeling is carried out, would be useful for refining the model; therefore, in Chapter 8, a recently formulated framework for robot mapping, based on random finite sets, will be presented, in which this analysis is extended to take into account false alarms caused by clutter and A-scope saturation.
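As a sketch of such a simulator (point 2 above) — exponentially distributed receiver noise plus fluctuating, Swerling I-like target returns — the following might be used. All function names, parameter values, and the two example targets are illustrative assumptions, not the book's radar model:

```python
import numpy as np

def simulate_a_scope(rng, n_bins=160, bin_size=0.25,
                     targets=((18.75, 20.0), (27.75, 10.0)),
                     noise_power=1.0):
    """Toy A-scope simulator: exponential receiver noise in every range
    bin, plus exponentially fluctuating target returns added at the
    given ranges.

    targets: iterable of (range_m, mean_power) pairs, where mean_power
    controls the mean target power relative to the noise floor.
    """
    power = rng.exponential(noise_power, n_bins)
    for range_m, mean_power in targets:
        idx = int(round(range_m / bin_size))
        if 0 <= idx < n_bins:
            power[idx] += rng.exponential(mean_power)
    ranges = np.arange(n_bins) * bin_size
    return ranges, power
```

Simulated A-scopes of this kind allow a CFAR detector to be exercised with known ground truth, since the noise statistics in the bins neighboring each feature are fully controlled.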



Figure 5.23  Actual scan (Slog_rc) 30 m from the position of the original scan of Figure 5.10. (© 2010, IEEE. Reprinted with permission from E. Jose et al., “Predicting Millimeter Wave Radar Spectra for Autonomous Navigation,” IEEE Sensors Journal, Vol. 10, No. 5, 2010.)

Figure 5.24  Correlation coefficient of predicted and actual scans of Figures 5.22 and 5.23. (© 2010, IEEE. Reprinted with permission from E. Jose et al., “Predicting Millimeter Wave Radar Spectra for Autonomous Navigation,” IEEE Sensors Journal, Vol. 10, No. 5, 2010.)


Figure 5.25  A zoomed section of the predicted scan (Sˆ log_rc) of Figure 5.22 with labeled objects. (© 2010, IEEE. Reprinted with permission from E. Jose et al., “Predicting Millimeter Wave Radar Spectra for Autonomous Navigation,” IEEE Sensors Journal, Vol. 10, No. 5, 2010.)


Figure 5.26  A zoomed view of the corresponding section from the actual scan (Slog_rc) of Figure 5.23. (© 2010, IEEE. Reprinted with permission from E. Jose et al., “Predicting Millimeter Wave Radar Spectra for Autonomous Navigation,” IEEE Sensors Journal, Vol. 10, No. 5, 2010.)


5.8  Bibliographical Remarks

An initial version of this chapter appeared in the IEEE Sensors Journal, May 2010 [8]. The importance of modeling both radar beam patterns and the FMCW process itself was recognized by Alex Foessel in his work on 3D mapping [9]. In this work, Foessel noted that the received FMCW radar signal contains a sum of sinusoids, each with a frequency proportional to the range of one of multiple objects down-range, and each corrupted by random phase delays caused by the reflection at each object [10]. He also noted that the fast Fourier transform (FFT) stage in FMCW radars limits the resolution of line-of-sight objects in close proximity. Based on observed data, recorded in controlled environments, he developed a sensor model based on the width of power peaks within the A-scopes. Using rules, the type of object sensed was inferred [11]. For example, narrow received power spikes were related to point-like objects, and wide pulses were determined to correspond to surfaces with a high angle of incidence to the radar beam, as opposed to thick objects. Further deductions regarding the origins of power values recorded beyond an initial power peak were also highlighted. It was hypothesized in this work that A-scopes containing many echoes provide strong evidence of empty space between the radar and the first echo, as well as between the echoes themselves. Finally, this work considered the experimentally determined radar beam width in the grid-map cell occupancy probability calculations.

The causes of less-than-ideal performance in FMCW radars have been highlighted in classical texts such as Skolnik [3], in which the accuracy of the beat frequency measurement device, transmission line effects, transmitter leakage, and multiple reflections have all been labeled as possible error sources. In Brooker's work, the problems of nonlinear chirp behavior and phase noise are mentioned [12].
Electronic design methods for improving chirp oscillator performance and reducing transmitter-to-received-signal leakage are discussed. The effects of phase noise are explained and shown to be potentially significant around target power values [13]. This results in the blurring of edges in a radar image and can occlude low-power reflections altogether. Beasley further addressed these issues in his 2006 paper [14]. He showed that the phase noise problem has more effect around very large RCS radar targets. This is verified in the work by Adamski et al. [16] and Wu et al. [15, 17]. Beasley specifically demonstrates that the target-to-phase-noise ratio can be increased by scan integration over a number of steps. This will be the subject of Chapter 6.

FMCW modeling work by Gurgel and Schlick has concentrated on the dynamic range of the radar [18]. They state that the dynamic range is a critical issue in such devices, since the strong signal needed for beat frequency comparison, taken directly from the transmitter, as well as the weak signals from the echoes, must be processed together. Electronic and physical spacing issues are addressed in this work to maximize this dynamic range. Further modeling work with FMCW radar by Krishnan focused on modeling by measurement and simulation, and modeling using a calibration target [19]. Here, the point spread function of an FMCW radar used to measure snow thickness on the surface of sea ice was assessed. This function provides an indication of


the nonideal behavior of such a radar in terms of its response to signals of different frequencies. This work demonstrated that modeling by considering each section of the radar and applying its transfer function to signals can be equivalent to directly using the radar with calibration targets specifically designed to yield the point spread function.

References

[1] Wilcox, R. R., Fundamentals of Modern Statistical Methods, Berlin: Springer, 2001.
[2] Brooker, G. M., D. Birch, and J. Solms, "W-Band Airborne Interrupted Frequency Modulated CW Imaging Radar," IEEE Transactions on Aerospace and Electronic Systems, Vol. 41, No. 3, July 2005, pp. 955–972.
[3] Skolnik, M. I., Introduction to Radar Systems, New York: McGraw-Hill, 1982.
[4] Barkat, M., Signal Detection and Estimation, Norwood, MA: Artech House, 2005.
[5] Swerling, P., "Probability of Detection for Fluctuating Targets," IRE Trans. Inform. Theory, Vol. 6, No. 2, April 1960, pp. 269–308.
[6] Dissanayake, M. W. M. G., P. Newman, S. Clark, H. F. Durrant-Whyte, and M. Csorba, "A Solution to the Simultaneous Localization and Map Building (SLAM) Problem," IEEE Trans. Robotics and Automation, Vol. 17, No. 3, June 2001, pp. 229–241.
[7] Brooker, G. M., S. Scheding, M. V. Bishop, and R. C. Hennessy, "Development and Application of Millimeter Wave Radar Sensors for Underground Mining," IEEE Sensors Journal, Vol. 5, No. 6, December 2005, pp. 1270–1280.
[8] Jose, E., M. D. Adams, J. S. Mullane, and N. M. Patrikalakis, "Predicting Millimeter Wave Radar Spectra for Autonomous Navigation," IEEE Sensors Journal, Vol. 10, No. 5, 2010, pp. 960–971.
[9] Foessel, A., "Radar Sensor Model for Three-Dimensional Map Building." In Proc. SPIE, Mobile Robots XV and Telemanipulator and Telepresence Technologies VII, H. M. Choset, D. W. Gage, and M. R. Stein, editors, Vol. 4195, SPIE, November 2000.
[10] Foessel, A., S. Chheda, and D. Apostolopoulos, "Short-Range Millimeter-Wave Radar Perception in a Polar Environment." In Proceedings of the Field and Service Robotics Conference, August 1999.
[11] Foessel, A., J. Bares, and W. R. L. Whittaker, "Three-Dimensional Map Building with MMW RADAR." In Proceedings of the 3rd International Conference on Field and Service Robotics, Helsinki, Finland, June 2001.
[12] Brooker, G. M., "Understanding Millimetre Wave FMCW Radars." In Proceedings of the 1st International Conference on Sensing Technology, Palmerston North, New Zealand, November 2005, pp. 152–157.
[13] Brooker, G., M. Bishop, and S. Scheding, "Millimetre Waves for Robotics." In Australian Conference for Robotics and Automation, Sydney, Australia, November 2001.
[14] Beasley, P. D. L., "The Influence of Transmitter Phase Noise on FMCW Radar Performance." In Proceedings of the 3rd European Radar Conference (EuRAD), Manchester, UK, September 2006, pp. 331–334.
[15] Wu, L., S. S. Peng, and X. Q. Shi, "Effects of Transmitter Phase Noise on Signal Processing in FMCW Radar." In Proceedings of the International Conference on Signals and Electronic Systems (ICSE 2000), Ustronie, Poland, 2000, pp. 51–56.
[16] Adamski, M. E., K. S. Kulpa, M. Nalecz, and A. Wojtkiewicz, "Phase Noise in Two-Dimensional Spectrum of Video Signal in FMCW Homodyne Radar." In 13th International Conference on Microwaves, Radar and Wireless Communications (MIKON-2000), Vol. 2, 2000, pp. 645–648.


[17] Wu, L., S. S. Peng, and X. Q. Shi, "Effects of Transmitter Phase Noise on Millimeter Wave LFMCW Radar Performance." In Proceedings of the International Conference on Microwave and Millimetre Wave Technology 2008 (ICMMT), Nanjing, China, April 2008, pp. 1415–1418.
[18] Gurgel, K. W., and T. Schlick, "Remarks on Signal Processing in HF Radars Using FMCW Modulation." In Proceedings of the International Radar Symposium (IRS'2009), Hamburg, Germany, September 2009, pp. 63–67.
[19] Krishnan, S., "Modeling and Simulation Analysis of an FMCW Radar for Measuring Snow Thickness," M.Sc. thesis, School of Electronics and Communication Engineering, University of Madras, 2000.

Chapter 6

Reducing Detection Errors and Noise with Multiple Radar Scans

6.1  Introduction

In Chapter 3, methods of radar detection were presented, which derived an adaptive received power threshold based on a desired probability of false alarm, given a single array of power versus range values (A-scope). For received power values that exceed this threshold, the relationship between the power values and their probabilities of detection was then derived. In many slow-moving robotic vehicle applications, such as the mapping of mines or surveying, the luxury of making repeated scanned A-scope measurements of the same scene exists, yielding the possibility to combine these measurements and improve detection. It then makes sense to utilize the potentially high landmark correlation between the scans to further reduce the uncertainty in assessing the probability of the existence of landmarks and to reduce the noise. In the radar literature, this concept is referred to as scan integration and is the main subject of this chapter.

The single A-scope-based CFAR power thresholds derived in Chapter 3 assumed exponential noise distributions with IID samples. In reality, these assumptions are often violated, particularly in the presence of clutter, yielding a significantly greater probability of false alarm than the theoretical values derived. This problem is compounded by a higher rate of missed detections than that predicted by the CFAR theory. For this reason, various articles have proposed analyses that attempt to derive more realistic expressions/approximations for Pfa and PD using many forms of CFAR, under various noise/landmark distribution assumptions. Details of these were given in the Bibliographical Remarks Section 3.7 of Chapter 3.

Scan integration improves actual probabilities of detection for a fixed false alarm probability. Standard methods of scan integration will therefore be introduced, based in particular on the concepts of noncoherent integration and binary integration.
The complexities of implementing each method will be highlighted, together with their theoretical detection probability improvements as a function of the received signal power. Also, a scan integration method coined the target presence probability (TPP) estimator, which exploits multiple radar scans to improve multiple scan–based probabilities of detection, will be explained. It is accompanied by a noise reduction technique, based on a method referred to as spectral subtraction, to reduce the


received power noise in radar A-scope displays. Performance comparisons of TPP, binary integration, and noncoherent detection will be given and discussed with reference to their ease of implementation. Due to its relative ease of implementation and good detection performance, the TPP integration technique will be the scan integration method of focus in this chapter. In particular, the TPP estimates are shown to reduce errors, in terms of both landmark number and location estimates, when compared to single scan CFAR hypotheses. Also, the reduced-noise A-scopes are shown to exhibit significantly less noise than the raw A-scopes after as few as three or four successive scans of an environment.

The analyses presented in this chapter are applicable to any sensor that outputs received power as a function of range at discrete bearing angles, in the presence of received power noise, irrespective of the range estimation methods used. As in Chapter 5, however, the results are derived from the same continuously scanning, MMW, FMCW radar.

The chapter proceeds as follows. In Section 6.2, the parking lot scene of Chapter 5 is re-examined in terms of actual radar scans and the objects that should be reliably detected. The single scan–based CA-CFAR processor is reviewed for the purpose of detecting such objects, together with remarks on its less than ideal performance in realistic environments. Section 6.3 therefore introduces the classical noncoherent and binary integration methods, respectively, together with analyses of their efficiency in increasing landmark detection probabilities. Section 6.4 then introduces the TPP integration method, which fuses current and previous CA-CFAR landmark decision hypotheses. Section 6.5 derives the false alarm (PfaTPP) and detection (PDTPP) probabilities for the TPP processor, based on a TPP threshold, using Monte Carlo (MC) methods. It further demonstrates the TPP estimator's performance by showing higher PD values for a given received SNP than the single scan–based CA-CFAR processor and the binary integrator. The noncoherent, binary, and TPP integration techniques are compared in Section 6.6, and a small note regarding the effect of multipath reflections is given in Section 6.7. Section 6.8 shows the application of the TPP integration method in an urban environment, and Section 6.9 extends the TPP method to noise reduction within A-scopes, again based on multiple scan integration. After the summary in Section 6.10, the chapter concludes with bibliographical comments in Section 6.11.

(Examples of such sensors include lidar and underwater sonar.)

6.2  Radar Data in an Urban Environment

Figure 6.1 shows a section of a cluttered outdoor parking lot, in which all of the radar scans in this chapter were recorded. A received power scan from the radar is shown in Figure 6.2, where the vertical bar indicates the magnitude of the received power Slog (in dB) at each location. The white line in Figure 6.1 indicates a particular bearing angle, relative to the radar, at which the A-scope of Figure 6.3 was recorded. For ease of comparison with ground truth reference, only ranges up to 40 m are used in the initial set of experiments. With the radar in the position shown in Figure 6.1, this distance covers most of the objects within the photo.

(Labeled in the photo: tree, hedge, radar.)

Figure 6.1  A parking lot scene within a university campus environment. The white line indicates the radar bearing angle at which the single A-scope of Figure 6.3 was observed.

In Figures 6.1 and 6.3,

(Labeled in the scan: small trees, target trees, lamp posts, palm trees.)

Figure 6.2  A scanned section of the parking lot environment of Figure 6.1, shown at approximately the same orientation. Note the spatial uncertainty of the trees, lamp posts, and walls, as well as noise. The vertical axis represents the received power Slog_rc in dB.



Figure 6.3  An A-scope display obtained from the parking lot environment of Figure 6.1. The environment contains two main landmarks, a hedge (at an actual range of 18.75 m) and a tree at 27.75 m.

the known locations of the hedge and tree are labeled as ground truth reference for the single line-of-sight landmarks. The distances to the hedge and tree were measured independently by making multiple measurements with a laser range finder. To the nearest range bin (i.e., at a resolution of 0.25 m), the ranges were 18.75 m and 27.75 m, respectively.

6.2.1  Landmark Detection with Single Scan CA-CFAR

As a basis for all of the scan integration methods examined in this chapter, initial single scans are processed with received power thresholds based on the classical cell averaging–constant false alarm rate (CA-CFAR) processor. Section 3.5.1.1 in Chapter 3 introduced the CA-CFAR processor for the detection of landmarks based on single A-scopes and on a constant PfaCA-CFAR. In particular, Slin(q) ≥ SCA-CFAR(q) defines the CA-CFAR-based landmark presence hypothesis (H1CA-CFAR), and Slin(q) < SCA-CFAR(q) defines the landmark absence hypothesis (H0CA-CFAR).
For each scan l, the indicator function I(q, l) is set such that I(q, l) = 1 if Slin(q, l) > SCA-CFAR(q, l) (detection assumed) and I(q, l) = 0 otherwise; that is, I(q, l) = 1 is equivalent to the CA-CFAR hypothesis decision H1CA-CFAR (landmark present) and I(q, l) = 0 is equivalent to hypothesis decision H0CA-CFAR (landmark absent), based on a single A-scope. The Mbinary out of L binary integrator then declares a detection when

	Σ_{j=1}^{L} I(q, j − 1) ≥ Mbinary	(6.15)

Let PL(d) denote the probability of d detections in L scans. The probability of declaring a detection, PDbinary, is the summation of all PL(d) for which d ≥ Mbinary [4]. That is,

	PDbinary = Σ_{l=Mbinary}^{L} PL(l)	(6.16)

Since PDCA-CFAR is the single scan probability of detection, 1 − PDCA-CFAR is its corresponding probability of a missed detection. Under the Swerling I model, these probabilities can be assumed constant from scan to scan, and PL(l) is given by the binomial probability distribution

	PL(l) = [L!/(l!(L − l)!)] (PDCA-CFAR)^l (1 − PDCA-CFAR)^(L−l)	(6.17)

so that, from Equation 6.16, the probability of detection of the binary integrator is

	PDbinary = Σ_{l=Mbinary}^{L} Cl,L (PDCA-CFAR)^l (1 − PDCA-CFAR)^(L−l)	(6.18)

for 1 ≤ Mbinary ≤ L and 0 ≤ PDCA-CFAR ≤ 1. Following a similar analysis of Equations 6.16 through 6.18, it can be seen that the probability of false alarm of the binary integrator is given by

	Pfabinary = Σ_{l=Mbinary}^{L} Cl,L (PfaCA-CFAR)^l (1 − PfaCA-CFAR)^(L−l)	(6.19)

for 1 ≤ Mbinary ≤ L and 0 ≤ PfaCA-CFAR ≤ 1, where, in both Equations 6.18 and 6.19, Cl,L is the binomial coefficient

	Cl,L = L!/(l!(L − l)!)	(6.20)

To determine Mopt for each L, a numerical method is necessary. Starting with Mbinary = 1, Pfabinary can be iteratively calculated from Equation 6.19 until a desired value is reached (typically such that Pfabinary ≤ PfaCA-CFAR). Based on just Mbinary = 1, Pfabinary is likely to be too large, and therefore Mbinary must be increased. Mbinary is first increased until Pfabinary ≤ PfaCA-CFAR. This defines the minimum value of Mbinary to ensure a satisfactory probability of false alarm. Subsequently, Mbinary is further iterated, and the corresponding probability of detection PDbinary, according to Equation 6.18, is calculated. The value of Mbinary that corresponds to the maximum value of PDbinary is chosen as Mopt, and integration is complete. Shnidman presented approximate values for Mopt as a function of the required probability of false alarm, L, and a chosen Swerling model, which can be used to initialize the iteration process, resulting in a faster convergence of the integrator [5]. Figure 6.5 shows probability of detection curves for the binary integrator versus received signal-to-noise power, for various values of the scan number L.
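The M-out-of-L search described above can be sketched as follows (function names are illustrative; the binomial tail sums follow the form of Equations 6.18 and 6.19):

```python
from math import comb

def p_at_least_m(p_single, m, big_l):
    """Probability that at least m of big_l independent single-scan
    events occur, each with probability p_single (the binomial tail
    of Equations 6.18/6.19)."""
    return sum(comb(big_l, l) * p_single**l * (1 - p_single)**(big_l - l)
               for l in range(m, big_l + 1))

def find_m_opt(pfa_single, pd_single, big_l):
    """Iterative choice of M_opt: raise M_binary until the integrated
    false alarm probability drops to or below the single-scan value,
    then keep the M that maximizes the detection probability."""
    m_min = next(m for m in range(1, big_l + 1)
                 if p_at_least_m(pfa_single, m, big_l) <= pfa_single)
    return max(range(m_min, big_l + 1),
               key=lambda m: p_at_least_m(pd_single, m, big_l))
```

For example, with an assumed single-scan PfaCA-CFAR of 10⁻³ and L = 4 scans, this search returns Mopt = 2, consistent with the value reported for the cases shown in Figure 6.5 (the single-scan Pfa used there is not restated here, so 10⁻³ is an assumption).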


Figure 6.5  The performance of the binary integrator for different scan number L shown together with that of the single scan CA-CFAR processor. For each value of L, Mopt was determined to be 2 in the examples shown.

Under the Swerling I model, for L = 1 or 2, binary integration simply reproduces the CA-CFAR processor, but for L ≥ 3 improvements in the probability of detection result.

6.4  Integration Based on Target Presence Probability (TPP) Estimation

In both the noncoherent and binary integrators, iterative solutions to nonlinear equations are necessary to determine thresholds to achieve the desired probability of false alarm and maximize detection. In contrast, in this section, a recursive smoothing technique is used to integrate radar scans, such that a target presence probability (TPP) threshold can be set to achieve a required probability of false alarm. This threshold will be shown to converge to a predictable value as the number of scans L increases.

In the TPP technique, a CFAR test statistic is used to set an indicator function, as for the binary integrator. A recursive measure of the TPP is then derived based on the current indicator function and past TPP values. This provides a recursively averaged probabilistic value of landmark presence based on multiple CA-CFAR hypotheses. The recursion exploits a smoothing parameter, which can be derived according to a desired probability of detection, after a desired number of iterations. It is shown that the TPP estimator can maintain the same probability of false alarm as the CA-CFAR processor but yields landmark estimates with a higher probability of detection. To further assess the performance of the estimator, a metric is defined, based on the second-order Wasserstein construct, which quantifies both the detection and spatial errors of estimated landmarks in a joint manner. This is an important concept in detection since, in the presence of false alarm and missed



Figure 6.6  Block diagram summarizing the TPP algorithm.

detection possibilities, an algorithm should be penalized for both landmark number and spatial errors. To derive a TPP estimate, the indicator function I(q, l) defined in Section 6.3.2 is used again. Due to the expected correlation between successive landmark hypotheses, an estimator for the conditional TPP, P̂^TPP(q, l), is

P̂^TPP(q, l) = α_p P̂^TPP(q, l − 1) + (1 − α_p) I(q, l)    (6.21)

where α_p is a smoothing parameter (0 ≤ α_p ≤ 1). This TPP estimate is useful because:

1. S^CA-CFAR(q, l) can be derived based upon a priori known, or assumed, radar receiver statistics and CFAR methods with quantified P_fa^CA-CFAR and P_D^CA-CFAR values;
2. Hence, I(q, l) is set based on a constant value of P_fa^CA-CFAR and represents CA-CFAR decision hypotheses H_0^CA-CFAR or H_1^CA-CFAR based on a single A-scope;
3. The high correlation of landmark presence between successive radar scans is used (due to α_p) to recursively estimate a probabilistic value of landmark presence, based on multiple A-scopes.

Points to note about the TPP estimate of Equation 6.21 are:

1. In the extreme case of a prior TPP (P̂^TPP(q, l − 1)) being 1, and the current CFAR processor yielding hypothesis H_1^CA-CFAR, P̂^TPP(q, l) = 1 independent of α_p.
2. Similarly, in the extreme case of a prior TPP P̂^TPP(q, l − 1) being 0, and the current CFAR processor yielding hypothesis H_0^CA-CFAR, P̂^TPP(q, l) = 0 independent of α_p.
3. The value of α_p determines the fading memory of the recursive TPP estimator. It has the ability to smooth contradictory CFAR results, which can occur due to violations of the CFAR assumptions. This is based on independent scans of the same scene.

The TPP algorithm is summarized in the block diagram of Figure 6.6.

Conditioned on the indicator function I(q, l).
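The recursion of Equation 6.21 is a one-line exponential smoother and is straightforward to implement. The following minimal Python sketch (all names are ours; the CA-CFAR indicator is simulated as a Bernoulli draw with an assumed single-scan detection probability, standing in for a real detector front end) illustrates how the TPP smooths contradictory single-scan hypotheses:

```python
import random

def tpp_update(p_prev, indicator, alpha_p):
    """One iteration of the TPP recursion (Equation 6.21)."""
    return alpha_p * p_prev + (1.0 - alpha_p) * indicator

random.seed(0)
p_d_cacfar = 0.6   # assumed single-scan CA-CFAR detection probability
alpha_p = 0.7

p_tpp = 0.0        # initialized with I(q, 0) = 0
for l in range(1, 30):
    # Simulated CA-CFAR hypothesis for a landmark-bearing range bin.
    indicator = 1 if random.random() < p_d_cacfar else 0
    p_tpp = tpp_update(p_tpp, indicator, alpha_p)

# The TPP stays in [0, 1], smoothing the sporadic missed detections.
print(round(p_tpp, 3))
```

Note how the two extreme cases listed above fall out directly: a prior TPP of 1 with a current detection returns exactly 1, and a prior of 0 with no detection returns exactly 0.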


6.5  False Alarm and Detection Probabilities for the TPP Estimator


In this section, the probability of detection (P_D^TPP) curves are evaluated for the TPP processor, as functions of the SNP, η^SNP, the number of TPP iterations l, and α_p. It will be shown that it is possible to maintain the same theoretical P_fa^CA-CFAR value as single-scan-based CA-CFAR (i.e., P_fa^TPP = P_fa^CA-CFAR) by varying the TPP threshold T^TPP(α_p, l) with l. At large values of l, T^TPP(α_p, l) approaches a steady state value. Depending on α_p, it will be shown that, below a critical number of iterations l_crit, P_D^TPP simply equals P_D^CA-CFAR. However, as soon as l > l_crit, P_D^TPP > P_D^CA-CFAR for a given η^SNP. Hence for the TPP processor, P̂^TPP(q, l) > T^TPP(α_p, l) defines its landmark presence hypothesis H_1^TPP and P̂^TPP(q, l) < T^TPP(α_p, l) defines the TPP landmark absence hypothesis H_0^TPP. The analysis is based on analytical and numerical methods. To analyze the TPP performance, it is tested on multiple A-scope arrays S^lin(l) under noise-only and landmark-and-noise conditions.

6.5.1  TPP Response to Noise: P_fa^TPP

To analyze the TPP response to noise only, sequential A-scope arrays made up of simulated received power samples drawn from the no-landmark H_0^CA-CFAR hypothesis are used. This corresponds to the continuous exponential distribution of Figure 3.5. These A-scope values, S^lin(q, l), provide the inputs for the CA-CFAR processor in Figure 6.6, and the output of the TPP processor, P̂^TPP(q, l), is analyzed for various values of α_p and l. This experiment is repeated as several Monte Carlo (MC) runs to yield histograms of the resulting TPP values, from which the threshold T^TPP(α_p, l) can be selected to achieve a desired P_fa^TPP. Here, the desired value is P_fa^TPP = P_fa^CA-CFAR, and the goal is that for a given η^SNP, P_D^TPP > P_D^CA-CFAR.
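This Monte Carlo procedure can be sketched in a few lines of Python (illustrative only; the indicator is drawn directly as a Bernoulli variable with the CA-CFAR false alarm probability, and P_fa is exaggerated to 0.1 so that the discrete TPP values are visible with a modest number of runs):

```python
import random

def tpp_noise_values(alpha_p, l, pfa, runs=100_000, seed=1):
    """Monte Carlo sketch of the TPP distribution after l iterations
    under the noise-only hypothesis (compare Figures 6.7 and 6.9)."""
    random.seed(seed)
    values = []
    for _ in range(runs):
        p = 1.0 if random.random() < pfa else 0.0      # I(q, 0)
        for _ in range(l):
            i = 1.0 if random.random() < pfa else 0.0  # I(q, j)
            p = alpha_p * p + (1.0 - alpha_p) * i
        values.append(p)
    return values

vals = tpp_noise_values(alpha_p=0.7, l=3, pfa=0.1)
# Most of the mass sits at TPP = 0 (no false alarm in any scan); a
# threshold can be read off the upper tail of the resulting histogram.
frac_zero = sum(v == 0.0 for v in vals) / len(vals)
print(round(frac_zero, 3))
```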


Figure 6.7  The histograms that would result for 0 and 1 iterations of the TPP processor, based on perfect CA-CFAR performance. For brevity, in the figures, P_fa = P_fa^CA-CFAR.


Figure 6.7(a) shows the theoretical result of an MC run with 0 iterations of the TPP processor, which corresponds to a distribution of CA-CFAR-based I(q, l) values. Under the usual IID and exponential noise assumptions, this corresponds to the CA-CFAR output where I(q, l) = 0 with probability 1 − P_fa^CA-CFAR and I(q, l) = 1 with probability P_fa^CA-CFAR. Figure 6.7(b) shows the histogram after l = 1 iteration. From Equation 6.21, the resulting TPP values, P̂^TPP(q, l = 1), and their corresponding probabilities P(P̂^TPP(q, l = 1)) are shown in column l = 1 (middle) of the table in Figure 6.8. It can be seen in the table that the TPP outputs occur in mirror-imaged pairs about the central, horizontal, dashed line. This is also evident in the graphs of Figure 6.7, where the lines of symmetry are the vertical dashed lines at TPP value 0.5. For example, for l = 2, TPP output P̂^TPP(q, 2) = 1 − α_p² is the mirror image of α_p². The probabilities of each output occurring are, however, in general, different. It can also be seen from the table that the TPP output has 2^(l+1) discrete values with probabilities dependent upon P_fa^CA-CFAR. Considering the "line of symmetry" in the table, an important observation is that, at iteration l, the smallest TPP output for all iteration sequences below this dashed line (which started with P̂^TPP(q, 0) = 1) occurs just below the line and has value P̂^TPP(q, l) = α_p^l. Due to the imaging property of the TPP outputs, the largest TPP output for any iteration sequence above the line of symmetry (which started with P̂^TPP(q, 0) = 0) at iteration l occurs just above the line and has value P̂^TPP(q, l) = 1 − α_p^l. Therefore, for values of α_p and l for which


α_p^l ≥ 1 − α_p^l    (6.22)

the distributions corresponding to iteration sequences above and below the line of symmetry in the table of Figure 6.8 remain separate. In this case, selecting the TPP threshold T^TPP(α_p, l) according to

1 − α_p^l < T^TPP(α_p, l) < α_p^l    (6.23)

yields the desired P_fa^TPP. This is because no TPP values can occur between 1 − α_p^l and α_p^l. This interval for the threshold is labeled in Figures 6.7(a) and (b) for l = 0 and 1.


Figure 6.8  The values of P̂^TPP(q, l) and their probabilities P(P̂^TPP(q, l)) (marked as "Prob.") for l = 0, 1, 2. P_fa corresponds to P_fa^CA-CFAR.


Another property evident in the table of Figure 6.8 is that for any column (chosen l)

Σ_{P̂^TPP(q,0)=0} P(P̂^TPP(q, l)) = 1 − P_fa^CA-CFAR   and   Σ_{P̂^TPP(q,0)=1} P(P̂^TPP(q, l)) = P_fa^CA-CFAR    (6.24)

This means that the sum of the probabilities of all values P̂^TPP(q, l) that stemmed from P̂^TPP(q, 0) = 0 (i.e., all TPP values above the dashed line of symmetry in Figure 6.8) equals 1 − P_fa^CA-CFAR, and the sum of the probabilities of all values P̂^TPP(q, l) that stemmed from P̂^TPP(q, 0) = 1 (i.e., all TPP values below the dashed line of symmetry in Figure 6.8) equals P_fa^CA-CFAR. The TPP processor becomes useful when either l and/or α_p are such that Inequality 6.22 is not true. In this case the TPP distributions for the sequences above and below the line of symmetry in the table of Figure 6.8 mix. This is demonstrated in Figure 6.9, where the case that α_p > 0.5, and enough iterations have taken place such that α_p^l < 0.5, is shown. Under these conditions, the T^TPP(α_p, l) value necessary to maintain P_fa^TPP = P_fa^CA-CFAR approaches a limit very rapidly for l > l_crit. This is demonstrated in the curves of Figure 6.10, where it can be seen that for the small P_fa^CA-CFAR chosen, T^TPP(α_p, l) → (1 − α_p). It can be seen in each graph that for low values of l an "envelope" exists within which T^TPP(α_p, l) can be chosen. However, once the envelope boundaries meet when l = l_crit, the threshold T^TPP(α_p, l) has a unique relationship with l, until it converges to a steady value. The envelope boundaries meet when the terms on either side of Inequality 6.22 become equal (i.e., when l = l_crit), which occurs when

l_crit = log 0.5 / log α_p    (6.25)
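Equation 6.25 is easy to evaluate; a quick illustrative check, using α_p values taken from the figures, follows:

```python
import math

def l_crit(alpha_p):
    """Critical iteration number (Equation 6.25): the l at which
    alpha_p**l = 1 - alpha_p**l, i.e. alpha_p**l = 0.5."""
    return math.log(0.5) / math.log(alpha_p)

# Below l_crit an envelope of thresholds satisfies Inequality 6.23;
# above it the two distributions mix and the threshold is unique.
for a in (0.55, 0.6, 0.7, 0.9):
    print(a, round(l_crit(a), 2))
```

For α_p = 0.7 this gives l_crit ≈ 1.94, consistent with the observation in Section 6.5.2 that α_p below about 0.707 yields an improvement after only two iterations, while α_p = 0.9 requires l to exceed roughly 6.58.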


Figure 6.9  A histogram that would result for l iterations of the TPP processor, based on perfect CA-CFAR performance. For brevity, in the figures, P_fa = P_fa^CA-CFAR.


Figure 6.10  Values of T^TPP(α_p, l) necessary to achieve P_fa^TPP = P_fa^CA-CFAR = 10^−5 for various α_p values. Note the "envelopes" (marked as "High T^TPP" and "Low T^TPP") corresponding to regions of permitted T^TPP(α_p, l) values that exist for l ≤ l_crit.

Therefore, P_D^TPP > P_D^CA-CFAR is only possible if the distributions cross, meaning that α_p > 0.5 and therefore l_crit > 1.

6.5.2  TPP Response to a Landmark and Noise: P_D^TPP

P_D^TPP can now be derived as a function of the received SNP, η^SNP, l, and α_p. The theoretical analysis utilizes Equation 3.19 to give a value for P_D^CA-CFAR corresponding to a particular η^SNP for the CA-CFAR processor. MC runs are used to produce multiple CA-CFAR responses to a landmark, with P(I(q, l) = 0) = 1 − P_D^CA-CFAR and P(I(q, l) = 1) = P_D^CA-CFAR. A similar table to that of Figure 6.8 can be constructed for these TPP output sequences, which would show that for l ≤ l_crit, the threshold T^TPP(α_p, l) selected from Section 6.5.1 would yield P_D^TPP = P_D^CA-CFAR. However, when l > l_crit, Figures 6.11, 6.12, 6.13, and 6.14 show that P_D^TPP ≥ P_D^CA-CFAR for a given η^SNP, demonstrating the usefulness of the estimator. In Figure 6.11, based on just two iterations, α_p = 0.55, 0.6, and 0.7 yield P_D^TPP > P_D^CA-CFAR. This means that for any value of α_p ≤ 0.707, as is the case in Figure 6.11, an improvement in P_D^TPP can be expected with just two TPP iterations. Figure 6.12 shows that after three iterations the three curves for α_p = 0.55, 0.6, and 0.7 are still indistinguishable from one another but now all exhibit a significantly higher P_D^TPP than after two TPP iterations. Figure 6.13 shows the results after l = 6 iterations, where it can be seen that the α_p = 0.7 curve now yields the highest P_D^TPP values of the three values of α_p shown. Figure 6.14 shows P_D^TPP for α_p = 0.7 after various iteration numbers l. Qualitatively speaking, when l > l_crit and α_p > 0.5, TPP outputs corresponding to noise are "pushed" to the left of the distributions in Figures 6.7 and 6.9 with a much higher probability than being pushed right (for P_fa^TPP < 0.5). For example, after l iterations, P(P̂^TPP(q, l) = 1), corresponding to l + 1 consecutive CA-CFAR false alarms, is (P_fa^CA-CFAR)^(l+1), which becomes negligible with increasing l. This is


Figure 6.11  P_D^TPP versus η^SNP after just 2 TPP iterations for various values of α_p.

theoretically shown in Figure 6.9, where the distributions have just started to mix. In the opposite manner, TPP output values corresponding to a CA-CFAR detection tend to be pushed to the right of the distribution with increasing probability as l increases. This effect creates a greater separation between the TPP noise-only and landmark-with-noise distributions, yielding P_D^TPP >> P_D^CA-CFAR with increased l.

6.5.3  Choice of α_p, T^TPP(α_p, l), and l

Numerical methods can be used to select values of α_p and l for a desired TPP performance. A useful goal could be to increase P_D^TPP to beyond P_D^CA-CFAR with minimum iteration number l. Clearly, from Equation 6.25 and Figure 6.11, this could be achieved


Figure 6.12  P_D^TPP versus η^SNP after 3 TPP iterations for various values of α_p.



Figure 6.13  P_D^TPP versus η^SNP for different α_p values and a fixed iteration number l = 6.

by selecting any value of α_p such that 0.5 < α_p < 0.707, resulting in P_D^TPP > P_D^CA-CFAR after only l = 2 iterations. It is also evident, however, that as l increases, P_D^TPP then increases at a lower rate than it would for higher α_p values, as verified in Figures 6.12 and 6.13. Hence if more time is available to run the TPP processor, a higher value of α_p is the more sensible choice, as confirmed in Figures 6.13 and 6.14, at the cost of l_crit being higher, meaning that more scans are initially required for P_D^TPP > P_D^CA-CFAR.

6.5.4  Numerical Method for Determining T^TPP(α_p, l) and P_D^TPP

The TPP value is a function of α_p and the sequence of l indicator function values. From Equation 6.21,


Figure 6.14  P_D^TPP versus η^SNP for different iteration numbers and α_p = 0.7.


P̂^TPP(q, l) = I(q, 0) α_p^l + (1 − α_p) Σ_{j=1}^{l} I(q, j) α_p^(l−j)    (6.26)

where I(q, j) is the jth value in the sequence of indicator function values, j ∈ {0, ..., l}, and I(q, 0) is the value used to initialize the TPP algorithm. The likelihood of a particular TPP value depends on P_fa^CA-CFAR (for the noise) or P_D^CA-CFAR (for a landmark) and on the number of zeros and ones in the sequence of indicator function values; that is, in the case of target absence,

P(P̂^TPP(q, l) | H_0^CA-CFAR) = (1 − P_fa^CA-CFAR)^(N_0) × (P_fa^CA-CFAR)^(N_1)    (6.27)

and for target presence

P(P̂^TPP(q, l) | H_1^CA-CFAR) = (1 − P_D^CA-CFAR)^(N_0) × (P_D^CA-CFAR)^(N_1)    (6.28)

where N_1 is the number of ones in the sequence of indicator function values and N_0 is the number of zeros. These values can be obtained from the following relations:

N_1 = Σ_{i=0}^{l} I(q, i)   and   N_0 = l − N_1 + 1    (6.29)
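The closed form of Equation 6.26 can be checked directly against the recursion of Equation 6.21 (a self-contained sketch; the indicator sequence here is arbitrary):

```python
def tpp_recursive(indicators, alpha_p):
    """Iterate Equation 6.21, initialized with I(q, 0) = indicators[0]."""
    p = float(indicators[0])
    for i in indicators[1:]:
        p = alpha_p * p + (1.0 - alpha_p) * i
    return p

def tpp_closed_form(indicators, alpha_p):
    """Equation 6.26: the TPP as an explicit function of the sequence."""
    l = len(indicators) - 1
    total = indicators[0] * alpha_p ** l
    for j in range(1, l + 1):
        total += (1.0 - alpha_p) * indicators[j] * alpha_p ** (l - j)
    return total

seq = [1, 0, 1, 1, 0, 1]
a, b = tpp_recursive(seq, 0.7), tpp_closed_form(seq, 0.7)
assert abs(a - b) < 1e-12
print(round(a, 4))  # prints 0.718
```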

To find a TPP threshold T^TPP(α_p, l) for a desired P_fa^TPP and l iterations, many TPP values are generated together with their associated likelihoods. These pairs [P̂^TPP(q, l), P(P̂^TPP(q, l))] are stored in a list ordered according to their TPP value P̂^TPP(q, l). The list is then checked from the highest TPP value (1) to the lowest (0). For each element in the list, the likelihood is added to an accumulator. The scan finishes when the accumulated probability for the first time exceeds the desired P_fa^TPP. The previously checked TPP value is then used as the TPP threshold T^TPP(α_p, l), corresponding to a P_fa^TPP ≤ P_fa^CA-CFAR. This method is summarized in Figure 6.15.


Figure 6.15  Flow diagram to find T^TPP(α_p, l) given l, α_p, and a desired P_fa^TPP.
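The procedure of Figure 6.15 can be sketched as follows. This illustration enumerates all 2^(l+1) indicator sequences exactly, rather than sampling them (function names are ours; Equations 6.26, 6.27, and 6.29 supply the TPP values and their noise-only likelihoods):

```python
from itertools import product

def tpp_threshold(alpha_p, l, pfa_cacfar, pfa_desired):
    """Find T_TPP(alpha_p, l) for a desired P_fa^TPP (Figure 6.15)."""
    pairs = []
    for seq in product((0, 1), repeat=l + 1):
        # TPP value of this indicator sequence (Equation 6.26).
        tpp = seq[0] * alpha_p ** l + (1.0 - alpha_p) * sum(
            seq[j] * alpha_p ** (l - j) for j in range(1, l + 1))
        n1 = sum(seq)            # ones in the sequence (Equation 6.29)
        n0 = l - n1 + 1          # zeros
        # Noise-only likelihood of the sequence (Equation 6.27).
        likelihood = (1.0 - pfa_cacfar) ** n0 * pfa_cacfar ** n1
        pairs.append((tpp, likelihood))
    pairs.sort(reverse=True)     # check from the highest TPP value down
    accum, threshold = 0.0, 1.0
    for tpp, likelihood in pairs:
        if accum + likelihood > pfa_desired:
            break                # previous TPP value is the threshold
        accum += likelihood
        threshold = tpp
    return threshold

t = tpp_threshold(alpha_p=0.7, l=6, pfa_cacfar=1e-5, pfa_desired=1e-5)
# For this alpha_p and l the threshold sits just above 1 - alpha_p = 0.3,
# consistent with the limiting value seen in Figure 6.10.
print(round(t, 4))
```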


Figure 6.16  A comparison of the noncoherent, binary, and TPP integrators based on the same number of scans L = 3, corresponding to two TPP iterations.

6.6  A Comparison of Scan Integration Methods

Figures 6.16, 6.17, and 6.18 show a comparison of the noncoherent, binary, and TPP integrators. Note that 1 TPP iteration corresponds to L = 2 in the noncoherent and binary integrators. Note that in all cases the noncoherent detector exhibits the highest efficiency in improving detection probability. When comparing it to the TPP processor, one iteration of the TPP algorithm yields the same results as the two-out-of-two binary


Figure 6.17  A comparison of the noncoherent, binary, and TPP integrators based on the same number of scans L = 4, corresponding to three TPP iterations.


Figure 6.18  A comparison of the noncoherent, binary, and TPP integrators based on the same number of scans L = 7, corresponding to six TPP iterations.

integrator, being the same as the theoretical CA-CFAR. Based on just two or three TPP iterations (Figures 6.16 and 6.17), both the binary and TPP integrators perform equally well. As the number of TPP iterations increases beyond four, the TPP algorithm results in higher P_D values than the equivalent M out of L binary integrator, with both methods maintaining P_fa ≤ P_fa^CA-CFAR. This can be seen in Figure 6.18 for the case of 6 TPP iterations (equivalent to L = 7). Sections 6.3.1 and 6.3.2 highlighted the complexities of implementing the noncoherent and binary integrators. The determination of either the received power threshold, in the case of the noncoherent integrator, or the optimization of the number of required detections (M_opt out of L) for the binary integrator, required numerical iterative methods. The TPP algorithm, on the other hand, bases its single-scan landmark presence/absence hypotheses on the standard CA-CFAR processor, requiring a single adaptive power threshold. The TPP threshold itself was shown to converge to a known limiting value (1 − α_p) for a large number of iterations. Due to its simplicity of implementation and relatively good performance, the TPP integrator will be further analyzed in this chapter.

6.7  A Note on Multi-Path Reflections

The concept of multi-path reflections was introduced in Section 2.7.1.5. Since the TPP detector bases its landmark presence decisions on η^SNP, it is unable to differentiate between true landmarks and multi-path reflections. Therefore, based on a single radar sweep, it considers them to be valid landmarks. The detection and removal of such multi-path power spikes can be achieved with extra processing at multiple frequencies, or often when the radar perceives the landmarks from other positions, at which such reflections will, in general, not be reinforced [7, 8].


6.8  TPP Integration of Radar in an Urban Environment

6.8.1  Qualitative Assessment of TPP Applied to A-Scope Information

Figure 6.19(a) shows somewhat more ambiguous A-scope power values S^log(q, l) from A-scope l = 43, recorded at the same position and bearing angle shown in Figure 6.1. This time the A-scope is accompanied by its adaptive CA-CFAR threshold S^CA-CFAR(q, l) (Equation 3.17) with P_fa^CA-CFAR = 10^−5 and W = 10 with two guard cells either side of the CUT. In comparison with Figure 6.3 (scan l = 1), a widening of the power peak corresponding to the hedge is evident, as well as a secondary peak beyond the tree at approximately 32 m. Figure 6.19(b) shows P_D^CA-CFAR(q, l = 43) calculated from Equation 3.19, and Figure 6.19(c) shows the indicator function I(q, l = 43) indicating the landmark presence/absence hypothesis. Note that the closer landmark (hedge at 18.75 m) is correctly detected both by I(q, l = 43) and by the TPP algorithm (Figure 6.19(d)) after the complete set of l = 66 iterations, P̂^TPP(q, l = 66), but the tree at a range of 27.75 m is accompanied by a low P_D^CA-CFAR value and incorrectly classified by I(q, l = 43) as being empty space. The recursive estimate P̂^TPP(q, l = 66) of Figure 6.19(d), however, correctly estimates the presence of both the hedge and the tree, since P̂^TPP(q, l = 66) > T^TPP(q, l = 66) at the correct distances. Figure 6.20 shows the values of Figures 6.19(a)–(d), at the particular range bin q = 111 (range = 27.75 m) corresponding to the tree, as a function of the A-scope number l. Note the fluctuations in P_D^CA-CFAR in Figure 6.20(b) and the corresponding falsely declared landmark absence hypotheses (Figure 6.20(c)) for some of the individual A-scopes. Figure 6.20(d) shows the TPP estimate versus A-scope number for different chosen parameter values α_p, and the corresponding thresholds T^TPP, based on P_fa^TPP = P_fa^CA-CFAR.
It can be seen that for α_p = 0.9 (and above), no matter at what A-scope number l the TPP recursion is stopped, P̂^TPP(q, l) > T^TPP(α_p, l), corresponding to P_fa^CA-CFAR = 10^−5, always yielding the correct hypothesis H_1^TPP. When α_p = 0.55, a faster response to changes in the indicator function can be seen, at the price that the resulting TPP drops below T^TPP(α_p, l) (incorrect landmark hypothesis) at iteration 43 only.

6.8.2  Quantitative Assessment of TPP Applied to Complete Scans

A full quantitative assessment of the effect on complete scans of scan integration with the TPP processor requires complete knowledge of the ground truth terrain (object location and number). In most environments, obtaining such ground truth knowledge is a challenge in itself. In the parking lot environment of Figure 6.1, multiple runs of a vehicle carrying a laser range finder, selected for its high angular resolution, were carried out to generate a ground truth estimate of the true location and number of objects. Figure 6.21 shows the superimposed laser data (left) and the manually extracted locations of objects from that data (right). For ease of quantifying the detection results, the point landmarks from the right-hand map are used as the ground truth standard in terms of both landmark number and location. The metric then used to compare landmark detections to this ground truth estimate must jointly consider errors in landmark location and number estimates. Such a metric, based on a second-order Wasserstein construction, was introduced


(a) A-scope display and corresponding CA-CFAR threshold for P_fa^CA-CFAR = 10^−5, W = 15 with four guard cells either side of the CUT.
(b) P_D^CA-CFAR at each range bin q.
(c) Indicator function (landmark present hypothesis) at each range bin q.
(d) TPP based on 65 prior radar scans, at each range bin q, and threshold T^TPP for P_fa^TPP = 10^−5.

Figure 6.19  S^log(q, l = 43), S^CA-CFAR(q, l = 43), P_D^CA-CFAR(q, l = 43), I(q, l = 43), and P̂^TPP(q, l = 66) versus range bin q.


(a) A-scope number l and corresponding CA-CFAR threshold.
(b) P_D^CA-CFAR in each A-scope l.
(c) Indicator function (landmark present hypothesis) in each A-scope l.

(d) TPP based on all prior radar scans, with I(q = 111, l) superimposed.

Figure 6.20  S^log(q = 111, l), S^CA-CFAR(q = 111, l), P_D^CA-CFAR(q = 111, l), and P̂^TPP(q = 111, l) versus A-scope number l, corresponding to the tree at range bin q = 111 (range 27.75 m).



Figure 6.21  (Left): Several laser scans superimposed from the section of the parking lot in Figure 6.1. (Right): Manually extracted ground truth location of point landmarks (trees, lamp posts, signs) as well as walls.

in Chapter 4, Section 4.8.3. Therefore, based on Equation 4.80, the quality of estimated maps can be compared in a quantitative manner. Figure 6.22 shows the results of plotting the indicator function I(q, l = 10) corresponding to scan number 10 from the same scene, which generated the received power scan of Figure 6.2. The plot shows a comparison of the hypothesized landmarks and the superimposed ground truth point landmarks (dashed vertical lines) taken from Figure 6.21. Note that the tree labeled in Figures 6.1 and 6.3 (labeled here as "3rd tree") is detected in this particular scan. As another example, Figure 6.23 shows the


Figure 6.22  The indicator function I(q, l = 10) (vertical axis), corresponding to the tenth scan. Note that the tree (labeled “3rd tree”) is detected along with several false alarms.



Figure 6.23  The indicator function I(q, l = 49) (vertical axis), corresponding to the forty-ninth scan. Note that both "3rd tree" and "4th tree" are now missed, and note the presence of other false alarms.

indicator function plot corresponding to scan number 49. Note that in this particular scan, the tree examined in the earlier figures ("3rd tree") is missed by the CA-CFAR detector, as is "4th tree." Note also the presence of the labeled false alarms in both Figures 6.22 and 6.23, which are typical in the CA-CFAR results from single scans. Figures 6.24 and 6.25 show the TPP estimates after 49 and 72 scans, respectively, using the chosen value of α_p = 0.9, with corresponding T^TPP(α_p, l) = 0.1 for P_fa^TPP = P_fa^CA-CFAR = 10^−5. Note that after 49 scans (Figure 6.24) the tree examined earlier ("3rd tree") has a TPP estimate of 0.67, and the tree to its right ("4th tree"), which is often undetected in the single scans, has a TPP value of 0.21, both of which are above T^TPP = 0.1. After 72 scans (Figure 6.25) the TPP estimate of the "3rd tree" is 0.84, and that of the difficult-to-detect tree ("4th tree") to its right is now increased to 0.62. Interestingly, in both TPP plots, the false alarms seen in the individual indicator plots of Figures 6.22 and 6.23 have extremely small TPP values (0.08 and 0.18, respectively), although theoretically, P_fa^TPP = P_fa^CA-CFAR, so that no improvement with respect to false alarms is expected. A more principled comparison of the indicator values for individual scans and the TPP estimates with the ground truth point landmarks can be based on the second-order Wasserstein metric, introduced in Section 4.8. Equation 4.80 can be applied with different values of the minimum cut-off parameter c, with higher values penalizing data sets more for dimensionality errors rather than spatial errors. Range bins with TPP values P̂^TPP(q, l) ≥ T^TPP(α_p, l) = 0.1, corresponding to P_fa^TPP = P_fa^CA-CFAR = 10^−5, were considered to be valid landmarks from the TPP estimator. If multiple landmarks existed in very close proximity to one another, the center of mass of the cluster was used to represent the spatial location of the estimated landmark.
The summation in Equation 4.80 is calculated for each of the ground truth landmarks,



Figure 6.24  TPP estimate (vertical axis) after 49 scans. Note that the false alarm TPP values are greatly reduced (the labeled one has P̂^TPP = 0.27). The "3rd tree" has P̂^TPP = 0.67, and the "4th tree," which is undetected in many of the individual scans, has P̂^TPP = 0.21.

together with each detected landmark, based on Equation 4.81, and the minimum value is selected. Figure 6.26 shows the value of Equation 4.80 as a function of the scan number l for I(q, l) and P̂^TPP. Note that the Wasserstein metric, applied with spatial and landmark number errors penalized equally (c = 1), shows that the error of the TPP


Figure 6.25  TPP estimate (vertical axis) after 72 scans. Note that the "3rd tree" now has an increased P̂^TPP = 0.84, and the "4th tree" now has P̂^TPP = 0.62.



Figure 6.26  Wasserstein metric versus A-scope number l for I(q, l) and the TPP estimate. This error corresponds to a user-set cut-off parameter c = 1.

estimate reduces with scan number, whereas that of the indicator function fluctuates (due to sporadic missed detections) and remains relatively large, as expected.

6.8.3  A Qualitative Assessment of an Entire Parking Lot Scene

A qualitative analysis of the TPP integration estimates for the near full-range scans (up to 100 m), based on photographic evidence, is now given. Figure 6.27 shows a wide-angle view of the parking lot environment analyzed earlier. Various labels have been added to the photograph for comparison with the TPP and reduced-noise plots. Figure 6.28 shows scanned A-scopes for the parking lot scene, at approximately the same orientation as the photo of Figure 6.27. Many of the objects in the photo are discernible in Figure 6.28; however, their exact spatial location is somewhat blurred, and power peaks corresponding to many missed detections and false alarms are evident.


Figure 6.27  A wider view of the parking lot scene, with items labeled for qualitative comparison with the TPP estimates. Note when the scans were recorded, no cars were present in the lot.


Figure 6.28  Received power S^log(q, l = 1) (vertical axis in dB) from the parking lot scene. Ranges of up to 100 m are recorded.


Figure 6.29  I(q, l = 49) (vertical axis) for the full 100 m range parking lot scan.




Figure 6.30  TPP estimate (vertical axis) for the full 100 m range parking lot scan.

Figure 6.29 shows the landmark hypotheses (indicator function I(q, l = 49)) for a single scan. As expected, the scan is extremely cluttered, showing many false alarms as well as missed detections (such as the tree considered earlier). This scan shows that the actual false alarm rate seems to far exceed the CFAR-derived value of P_fa^CA-CFAR = 10^−5. Figure 6.30 shows the TPP estimates after a large number of static scans (l = 72). Again, actual objects are labeled for comparison with Figure 6.27. Interestingly, in this case, note the suppression of the many false alarms produced by the single-scan hypotheses in Figure 6.29. As was demonstrated quantitatively for a subset of this scene in Figure 6.26, the objects with P̂^TPP > T^TPP(α_p, l) represent more of the true objects than those highlighted by the indicator function in Figure 6.29, and appear to be more precisely located than the received power peaks in Figure 6.28.

6.9  Recursive A-Scope Noise Reduction

A byproduct of the TPP integration method is that it can be used for recursive A-scope noise reduction. This is achieved by averaging past power values in a range bin-wise manner, using a smoothing parameter that is adjusted by the TPP value. Signals that are likely to result from landmarks therefore have minimal effect on the noise estimate. The results will demonstrate that noise reduction is achieved as a result of the technique, allowing objects to be identified with greater ease than in the raw A-scope.
Let the received power from the radar in range bin q and A-scope l be corrupted by noise n(q, l). A method to estimate its variance $S_n = E[|n(q, l)|^2]$ is to apply temporal, recursive smoothing to the noisy received power signals during times of landmark absence [9]. The current best landmark hypotheses $H_0^{TPP}(q, l)$ and $H_1^{TPP}(q, l)$ can be used to control the adaptation of the expected noise parameter as follows:

$$
\begin{aligned}
H_0^{TPP}(q, l):\quad \hat{S}_n(q, l) &= \alpha_d\,\hat{S}_n(q, l-1) + (1 - \alpha_d)\,S^{lin}(q, l)\\
H_1^{TPP}(q, l):\quad \hat{S}_n(q, l) &= \hat{S}_n(q, l-1)
\end{aligned}
\tag{6.30}
$$

where $\alpha_d$ is a smoothing parameter. $\alpha_d$ can be selected according to a desired response of $\hat{S}_n(q, l)$ to changes in $S^{lin}(q, l)$, which correspond to a landmark entering or leaving the CUT.
Since $\hat{P}^{TPP}(q, l)$ in Equation 6.21 provides a recursively derived probability value corresponding to $H_1^{TPP}(q, l)$, Equations 6.30 can be combined as

$$
\hat{S}_n(q, l) = \hat{S}_n(q, l-1)\,\hat{P}^{TPP}(q, l) + \left(1 - \hat{P}^{TPP}(q, l)\right)\left[\alpha_d\,\hat{S}_n(q, l-1) + (1 - \alpha_d)\,S^{lin}(q, l)\right]
\tag{6.31}
$$

Introducing

$$
\tilde{\alpha}_d(q, l) = \alpha_d + (1 - \alpha_d)\,\hat{P}^{TPP}(q, l)
\tag{6.32}
$$

Equation 6.31 can be rewritten as

$$
\hat{S}_n(q, l) = \tilde{\alpha}_d(q, l)\,\hat{S}_n(q, l-1) + \left(1 - \tilde{\alpha}_d(q, l)\right) S^{lin}(q, l)
\tag{6.33}
$$

Note that in Equation 6.32, $\tilde{\alpha}_d(q, l)$ is a time-varying smoothing parameter. Hence the noise power can be estimated using past A-scope power values, together with a smoothing parameter that itself varies according to the TPP. The adaptive noise variance estimate is useful because:

1. The decision to determine landmark presence or absence is based on the most recent, recursively estimated, TPP value $\hat{P}^{TPP}(q, l)$, which can correspond to a higher probability of detection than single-scan-based CA-CFAR estimates;
2. An increase in the estimated noise, as a result of still incorrectly deciding landmark absence/presence, has little effect;
3. It can be used to obtain a reduced noise A-scope, as shown in the following.

The adaptive noise estimator is summarized in the block diagrams of Figures 6.31 and 6.32. $\hat{S}_n(q, l)$ can be used to obtain a reduced noise A-scope, using the method of bin-wise power subtraction, referred to in the literature as spectral power subtraction [10]. Here it is again necessary to differentiate between reduced noise power estimates that are linear ($\hat{S}^{RN\_lin}(q, l)$) and logarithmically compressed ($\hat{S}^{RN\_log}(q, l)$). The noise reduction algorithm will yield reduced noise received power estimates $\hat{S}^{RN\_lin}(q, l)$; however, due to the large dynamic range of this estimate, graphical results exploit the logarithmically compressed version $\hat{S}^{RN\_log}(q, l) = 10\log_{10}[\hat{S}^{RN\_lin}(q, l)]$. Note that for FMCW radars, since the A-scope is directly derived from a received power versus beat frequency variable, it is in fact a power versus range spectrum.

Figure 6.31  Block diagram summarizing the time-varying parameter $\tilde{\alpha}_d$ (Equation 6.32).
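As a concrete illustration, the recursion of Equations 6.32 and 6.33 can be sketched in a few lines of Python. This is a minimal sketch, not the authors' implementation; the function and variable names are hypothetical, and power values are assumed to be in linear units.

```python
def update_noise_estimate(s_n_prev, s_lin, p_tpp, alpha_d=0.9):
    """One TPP-gated noise power recursion for a single range bin q.

    s_n_prev : previous noise estimate, S_n(q, l-1)
    s_lin    : current received power, S_lin(q, l)
    p_tpp    : current TPP estimate, P_TPP(q, l), in [0, 1]
    alpha_d  : base smoothing parameter
    """
    # Equation 6.32: time-varying smoothing parameter. A bin with a high
    # landmark probability drives alpha_tilde toward 1, which freezes the
    # noise estimate so that landmark power does not corrupt it.
    alpha_tilde = alpha_d + (1.0 - alpha_d) * p_tpp
    # Equation 6.33: bin-wise exponential smoothing of the received power.
    return alpha_tilde * s_n_prev + (1.0 - alpha_tilde) * s_lin
```

For example, a bin with `p_tpp = 1` leaves the noise estimate unchanged (the $H_1^{TPP}$ branch of Equation 6.30), while `p_tpp = 0` reduces to plain exponential smoothing (the $H_0^{TPP}$ branch).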

To apply the basic "spectral" subtraction algorithm, the average noise power $\hat{S}_n(q, l)$ must be subtracted from the noisy A-scope power values. In [11], a method that further reduces background noise for SNPs between -5 and 20 dB was devised, based on subtracting an overestimate of the noise power and preventing the resultant spectral components from falling below a preset minimum level, termed the "spectral floor." This method leads to a reduced noise power estimate $\hat{S}^{RN\_lin}(q, l)$ given by

$$
\hat{S}^{RN\_lin}(q, l) =
\begin{cases}
S^{lin}(q, l) - c^{over}\,\hat{S}_n(q, l) & \text{if } S^{lin}(q, l) > c^{over}\,\hat{S}_n(q, l)\\
d^{floor}\,\hat{S}_n(q, l) & \text{otherwise}
\end{cases}
\tag{6.34}
$$

where $c^{over}$ is an over-subtraction factor ($c^{over} \geq 1$) and $d^{floor}$ is the spectral floor parameter ($0 < d^{floor} < 1$). A value of $c^{over}$ larger than 1 reflects the fact that it is necessary to subtract more than the expected value of the noise (which is a smoothed estimate) to ensure that most of the noise peaks are removed. The spectral floor parameter $d^{floor}$, when greater than zero, ensures that the remnants of the noise peaks are masked by neighboring power values of comparable magnitude. This results in a reduction of broadband noise when compared with the original power spectral subtraction method. Results of applying this technique, based on an expected received SNP of 0 dB and using the corresponding suggested values $c^{over} = 4$ and $d^{floor} = 0.01$ from [11], are demonstrated in Section 6.9.1.

6.9.1  Single A-Scope Noise Subtraction

Figure 6.33 shows the results of the spectral subtraction algorithm of Equations 6.33 and 6.34.

Figure 6.32  Block diagram summarizing the A-scope noise reduction algorithm (Equation 6.33).
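A minimal sketch of the over-subtraction rule of Equation 6.34, applied bin-wise to one A-scope (hypothetical names; linear power units assumed):

```python
def spectral_subtract(s_lin, s_n_hat, c_over=4.0, d_floor=0.01):
    """Over-subtraction with spectral flooring (Equation 6.34).

    s_lin   : list of received power values, one per range bin
    s_n_hat : list of estimated noise powers, one per range bin
    """
    reduced = []
    for s, n in zip(s_lin, s_n_hat):
        if s > c_over * n:
            reduced.append(s - c_over * n)  # subtract the noise over-estimate
        else:
            reduced.append(d_floor * n)     # clamp remnants to the spectral floor
    return reduced
```

With the values suggested in [11] ($c^{over} = 4$, $d^{floor} = 0.01$), a bin whose power only marginally exceeds the noise estimate is pushed down to the floor, while strong landmark returns survive the subtraction.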


Figure 6.33  Original A-scope power values $S^{log}(q, l = 66)$, corresponding to the hedge at 18.75 m, the tree at 27.27 m, and now a wall at 43.00 m, and its reduced noise counterparts.


Figure 6.34  Mean squared noise versus scan number l for $c^{over} = 1$ and $c^{over} = 4$.


In Figure 6.33(a), the same A-scope power values $S^{log}(q, l)$ corresponding to Figure 6.3 are shown, but this time up to a distance of 125 m. Viewing the A-scope over a larger distance gives a better visualization of the noise. Figure 6.33(b) shows the effect of subtracting the estimated noise variance $\hat{S}_n(q, l)$ from the power (i.e., $c^{over} = 1$ and $d^{floor} = 0$ in Equation 6.34). In this A-scope, however, many noise "spikes" still remain, as predicted in the literature [11]. Applying Equation 6.34 with an over-subtraction factor $c^{over} = 4$ and a floor parameter $d^{floor} = 0.01$ yields Figure 6.33(c). In this A-scope, the noise reduction is far superior to that in Figure 6.33(b), while the power values corresponding to the landmarks (hedge, tree, and now the wall at 43.00 m) remain prominent.

6.9.2  Multiple A-Scope: Complete Scan Noise Subtraction

To quantitatively analyze the true noise reduction as a function of the number of A-scopes l, Figure 6.34 plots the mean squared error (MSE) noise value for each reduced noise plot as l increases, for over-subtraction factor values $c^{over} = 1$ and $c^{over} = 4$. This is based on power samples taken from the long-range scan of Figure 6.28, from which power values corresponding to landmarks were removed, leaving only noise samples. For the sake of performance quantification, the removal of the power values corresponding to landmarks can be carried out either manually or by trusting the TPP integration method and removing all power values for which $\hat{P}^{TPP} > T^{TPP}$. Choosing $c^{over} = 4$, it is evident that after only four scans the noise power has been substantially suppressed and has approximately reached its minimum value. Therefore, Figure 6.35 shows the reduced noise version of Figure 6.2, together with ground truth (dashed vertical lines), after the application of Equations 6.33 and 6.34 with only four scans. Comparison with the original received power plot (Figure 6.2) shows a much greater contrast in the power values corresponding to landmarks, with significantly less noise in the empty regions. Figure 6.36 shows the reduced noise received power plot, which was generated from Figure 6.28, after the application of Equations 6.33 and 6.34 and using only three consecutive iterations. The objects from the photo in Figure 6.27 are
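The quantification just described can be sketched as follows. This is a hypothetical helper, not the authors' code: landmark bins are assumed to have already been identified (manually, or via the TPP threshold) and are excluded, so that only noise samples contribute to the statistic.

```python
def noise_mse_per_scan(reduced_scans, landmark_bins):
    """Mean squared residual noise power for each reduced noise A-scope
    (cf. Figure 6.34).

    reduced_scans : list of scans; each scan is a list of bin powers
    landmark_bins : set of bin indices excluded from the noise statistics
    """
    mse = []
    for scan in reduced_scans:
        noise = [p for q, p in enumerate(scan) if q not in landmark_bins]
        mse.append(sum(p * p for p in noise) / len(noise))
    return mse
```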


Figure 6.35  Reduced noise received power data for l = 4 (vertical axis in dB) from the parking lot scene.



Figure 6.36  Reduced noise received power data $S^{RN\_log}(q, l = 3)$ (vertical axis in dB) from the parking lot scene.

also labeled in the reduced noise power plot of Figure 6.36. Note that the noise is now suppressed and the relatively large power values from the landmarks are more pronounced.

6.10  Summary

This chapter has analyzed various scan integration methods, which can increase the probability of detection of CFAR processors when applied to radar. Three integration methods, noncoherent, binary, and TPP, were derived and compared in terms of their efficiency in increasing detection probabilities while maintaining low probabilities of false alarm, and in terms of their ease of implementation. Although the noncoherent detector outperforms the others in increasing detection probabilities, the necessary adaptive power threshold needs to be iteratively re-estimated on every new scan of the environment. Due to its relative ease of implementation, the newly proposed TPP integration method was investigated. This integrator recursively smooths current and previous CA-CFAR decision hypotheses. This can take place while maintaining the same probability of false alarm as the CA-CFAR processor, by taking several scans of a scene. It was also shown experimentally that the spatial errors associated with detected landmarks are reduced, in comparison with those detected based on single scans. The estimator requires a single smoothing parameter $\alpha_p$ that can be calculated based on a required $P_D^{TPP}$ for a given number of iterations. Importantly, the parameter $\alpha_p$ is independent of the radar antenna's characteristics. Hence, the TPP estimator can be applied with any other sensor of this type and in any environment.


Although not experimentally tested here, it should be noted that, as with other integration techniques, the TPP estimator could be used to make $P_{fa}^{TPP} < P_{fa}^{CA\text{-}CFAR}$, at the cost of reducing the $P_D^{TPP}$ values shown in this chapter. However, the analysis here was designed to give $P_{fa}^{TPP} = P_{fa}^{CA\text{-}CFAR}$, with the ensuing MC analysis yielding $P_D^{TPP} > P_D^{CA\text{-}CFAR}$.
Initially, the TPP integrated scans were assessed qualitatively in terms of their ability to correctly detect and localize landmarks in an urban environment. It was noted, however, that estimation has very little meaning without a clear concept of estimation error. Therefore, for quantitative assessment, a second order Wasserstein metric was adopted, which jointly assesses the errors made by a landmark estimator, based on the spatial error of the landmark estimates and the number of detections compared to ground truth. A small, controlled area of the environment, measured independently by laser distance sensors, was used to assess the TPP estimator's performance. A decrease in error with increasing scan number was evident in the TPP estimates.
Finally, the TPP estimates were further used to estimate the noise power within the A-scopes. A smoothed estimate of the noise power was based on previous values and the currently sensed power value. The TPP estimate was integrated into the estimator so that received power values resulting from landmarks were highly unlikely to corrupt the noise estimate. Through the use of spectral subtraction, a decreasing noise power with increasing scan number was demonstrated, with great improvements after only three or four scans.

6.11  Bibliographical Remarks

Scan integration methods have been applied to successive radar scans for the purpose of improving detection probability since the 1960s. In this chapter two classical methods, noncoherent and binary integration, have been introduced, together with an analysis of a TPP integration method.
The concept of exploiting multiple scans for improved landmark detection is a field of research that has followed various paths. As early as 1986, Nitzberg suggested averaging the power values from prior scans to estimate the background power level, exploiting the temporal stationarity of the environment [12] and assuming Rayleigh background noise statistics. More recently, Schnidman [13] extended this approach based on a noncentral chi-squared (Gamma) distribution, which he demonstrated to be a suitable model for high-resolution sea radar clutter [13, 14]. In 2010, Xiangwei further analyzed the performance of this method under a Weibull received power noise distribution [15]. This has been applied both in high-resolution radar and in low grazing angle applications.
Lops et al. proposed a scan-by-scan averaging (SSA)-CFAR processor [16]. This article addressed an issue known as the "self-masking target," which occurs when an object enters a range bin and remains there for a certain number of scans. The detection threshold in CA-CFAR then increases, lowering the corresponding $P_D^{CA\text{-}CFAR}$. Based on Swerling II model assumptions [17] for the landmark echo, $P_D^{CA\text{-}CFAR}$ curves versus the number of scans for which the target persists were derived. With these curves, for a given SNP, the number of necessary scans could be chosen to achieve a desired


$P_D^{CA\text{-}CFAR}$. The curves are, however, still based on the restrictive assumptions of a well-behaved CFAR test statistic with a Gaussian PDF.
Track-before-detect (TBD) methods are also applied to the detection of weak signals buried in noise [18, 19]. In this technique, a track score is increased by a small value whenever a CFAR processor yields a landmark-present hypothesis, and is reduced by a small amount otherwise. A track is confirmed if this "track score" exceeds a certain threshold. These methods are often based on tracking models, which provide extra information about the object being sought, in terms of its motion or other attributes that separate it from its background noise or clutter. In the case of the TPP estimator, no such information is used in the determination of landmark presence probability. A similar technique based on multi-object tracking is also given in [20]. Scan averaging methods, which require particular radar transceiver designs, have also been used to improve target detection based on multiple received scans [21].

References

  [1] Skolnik, M. I., Introduction to Radar Systems, New York: McGraw-Hill, 1982.
  [2] El Mashade, M. B., "Detection Performance of the Trimmed-Mean CFAR Processor with Noncoherent Integration," Radar, Sonar and Navigation, IEE Proceedings, Vol. 142, No. 1, February 1995, pp. 18–24.
  [3] DiFranco, J. V., and W. L. Rubin, Radar Detection, Dedham, MA: Artech House, 1980.
  [4] Brunner, J. S., "A Recursive Method for Calculating Binary Integration Detection Probabilities," IEEE Transactions on Aerospace and Electronic Systems, Vol. 26, No. 6, November 1990, pp. 1034–1035.
  [5] Schnidman, D. A., "Binary Integration for Swerling Target Fluctuations," IEEE Transactions on Aerospace and Electronic Systems, Vol. 34, No. 3, July 1998, pp. 1043–1053.
  [6] Gray, R. M., and L. D. Davisson, Introduction to Statistical Signal Processing, Cambridge: Cambridge University Press, December 2004.
  [7] Sen, S., and A. Nehorai, "Slow-Time Multi-Frequency Radar for Target Detection in Multipath Scenarios," in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), March 2010, pp. 2582–2585.
  [8] Krolik, J. L., J. Farrell, and A. Steinhardt, "Exploiting Multipath Propagation for GMTI in Urban Environments," in IEEE Conference on Radar, 2006, pp. 65–68.
  [9] Cohen, I., and B. Berdugo, "Noise Estimation by Minima Controlled Recursive Averaging for Robust Speech Enhancement," IEEE Signal Processing Letters, Vol. 9, No. 1, 2002, pp. 12–15.
 [10] Boll, S. F., "Suppression of Acoustic Noise in Speech Using Spectral Subtraction," IEEE Transactions on Acoustics, Speech and Signal Processing, Vol. 27, No. 2, April 1979, pp. 113–120.
 [11] Berouti, M., R. Schwartz, and J. Makhoul, "Enhancement of Speech Corrupted by Acoustic Noise," in Proceedings of the IEEE ICASSP'79, 1979, pp. 208–211.
 [12] Nitzberg, R., "Clutter Map CFAR Analysis," IEEE Transactions on Aerospace and Electronic Systems, Vol. 22, No. 4, July 1986, pp. 419–421.
 [13] Schnidman, D. A., "Radar Detection in Clutter," IEEE Transactions on Aerospace and Electronic Systems, Vol. 41, No. 3, November 2005, pp. 1056–1067.


 [14] Schnidman, D. A., "Generalized Radar Clutter Model," IEEE Transactions on Aerospace and Electronic Systems, Vol. 35, No. 3, July 1999, pp. 857–865.
 [15] Xiangwei, M., "Performance Analysis of Nitzberg's Clutter Map for Weibull Distribution," Digital Signal Processing, Vol. 20, No. 3, May 2010.
 [16] Lops, M., and M. Orsini, "Scan-by-Scan Averaging CFAR," IEE Proceedings F: Radar and Signal Processing, Vol. 136, December 1989, pp. 249–254.
 [17] Swerling, P., "Probability of Detection for Fluctuating Targets," IRE Transactions on Information Theory, Vol. 6, No. 2, 1960, pp. 269–308.
 [18] Blackman, S., and R. Popoli, Design and Analysis of Modern Tracking Systems, Norwood, MA: Artech House, 1999.
 [19] Boers, Y., H. Driessen, J. Torstensson, M. Trieb, and R. Karlsson, "Track-Before-Detect Algorithm for Tracking Extended Targets," IEE Proceedings: Radar, Sonar and Navigation, Vol. 153, No. 4, August 2006, pp. 345–351.
 [20] Meng, L., W. Grimm, and J. Donne, "Radar Detection Improvement by Integration of Multi-Object Tracking," in Proc. 5th International Conf. on Information Fusion, Vol. 2, November 2002, pp. 1249–1255.
 [21] Silverstein, J. D., and G. H. Goldman, "Tests of Criteria for Averaging Polarimetric Radar," in Proc. SPIE: The International Society for Optical Engineering, Vol. 1317, 2003, pp. 137–147.

Part III

Robotic Mapping with Known Vehicle Location

Chapter 7

Grid-Based Robotic Mapping with Detection Likelihood Filtering

7.1  Introduction

The analysis in Chapter 6 focused on the reliable detection of objects based on either single or multiple radar scans, from a single vehicle position. The optimization of extracting information from a sensor located in a single position defines the detection problem. This chapter takes this analysis a stage further, applying detection theory at several vehicle (and hence sensor) locations and fusing the results into a "sensible" and consistent map. This is known as the autonomous mapping problem and is a vital research area in robotics, potentially enabling an autonomous system to gain a deeper understanding of its surroundings based on detections accumulated from prior positions. The challenge of mapping is therefore to generate algorithms that can accumulate hypothesized detections from multiple robot poses and use these to reinforce or contradict each other to form a useful map.
Mobile robot navigation is typically formulated as a dynamic state estimation process in which predicted vehicle and landmark locations are fused with sensor readings. Reliable landmark detection from noisy sensor data is critical to the successful convergence of any such estimation-based algorithm. Most robotic methods are concerned only with the locations of detected landmarks; thus, the noise in the sensor readings is typically assumed to have two components (i.e., in range, r, and bearing, θ). In robotics, commonly used range/bearing sensors are scanning laser range finders and, to a lesser extent, scanning sonars or fixed sonar arrays. With these sensors, landmark detection is performed internally, resulting in a single (r, θ) output corresponding to the first signal considered detected. No other information is returned about the world along the bearing angle θ, and yet it is typical, in the case of most sensor models, to assume empty space up to range r [1]. This signal may correspond to a landmark, or may be a false alarm or multipath effect, depending on the environmental properties.
As observed in Chapters 5 and 6, sensor noise in range/bearing measuring radars, and in fact many other range/bearing sensors such as ladars and sonars, has three components, since an added uncertainty exists in the detection process itself. Most robot mapping algorithms disregard this uncertainty and assume an ideal detector, where every detection is treated as a valid landmark and added to the map after passing some heuristic landmark initialization requirements. Using this assumption, the distribution of the landmark's spatial coordinates can be conveniently modeled with probability density functions (typically Gaussian), where the probabilistic sum under the distribution is unity. That is, complete certainty is


assumed that a landmark exists somewhere within that area, thus readily allowing numerous stochastic filtering techniques to be applied. For most occupancy grid maps, the occupancy is distributed in a Gaussian manner as a function of the returned range, and the power of the returned signal is rarely considered, resulting in discrete observations of occupancy in each cell. The discrete Bayes filter is then used as a solution, which is possible as it subtly assumes a completely known occupancy measurement model to update the posterior occupancy probability. This will be referred to throughout this chapter as grid-based robotic mapping (GBRM) based on spatial likelihoods (SL), or GBRM-SL.
In the case of radar (and certain underwater sonars [2]), as demonstrated throughout this book, the output A-scope is a complete signal power profile along the direction of beam projection, without any signal detection being performed. At each range bin, a power value is returned, thus giving information at multiple ranges for a single bearing angle. From an information theoretic point of view, all of these power values should contribute to the time-varying recursive estimate of the state of the map. Therefore, in this chapter the measurement space is redefined, so that state-dependent measurement likelihoods can be obtained and used in the propagation of the map occupancy random variable. The required measurement likelihoods for occupancy filtering are in fact those commonly encountered in landmark detection and hypothesis decisions, examples of which were given in Chapter 3, based on CFAR detection. These will be referred to throughout this chapter as measurement likelihoods (ML), and the concept of GBRM with ML filtering, referred to throughout as GBRM-ML, will be presented and compared with the classical GBRM-SL concept. The GBRM-ML concept was first presented by Mullane et al. in [3].
However, the required MLs are generally a priori unknown, as they are a highly nonlinear function of the landmark's SNP and the surrounding environment. The probabilistic occupancy mapping problem is therefore reformulated as a continuous joint estimation problem, where the MLs are treated as continuous random states that must be jointly estimated with the map. Whereas the previous chapters derived and utilized a sensor's detection and false alarm probabilities for detection purposes, this chapter explicitly incorporates these probabilities into the occupancy mapping formulation, thereby addressing the problem of signal detection within it. It will be shown that, contrary to many grid-based occupancy realizations, the discrete Bayes filter is no longer applicable to the propagation of the map occupancy variable, as the ML itself is not discrete. A particle solution is applied, which recursively estimates both the posterior on the map and the MLs. The ideas presented in this chapter are demonstrated with the MMW FMCW radar, and comparisons with GBRM-SL approaches, using constant discrete MLs, are shown. A manually constructed ground-truth map and satellite imagery are also provided for map validation.
The chapter is organized as follows. Section 7.2 outlines the general occupancy grid problem, showing how the exact occupancy variable measurement likelihood can be used when signal detection theory is considered. The problems with a discrete Bayes filter solution are also discussed. Section 7.3 introduces the concept


of ML estimation and discusses a hidden Markov model (HMM) used to predict the changes of a map cell's occupancy value. Section 7.4 discusses a particle filter solution to the occupancy variable estimation recursion. Section 7.5 then presents results of the GBRM-ML method with data collected from outdoor field experiments. Comparisons with GBRM-SL approaches, map validation, comparisons with images, and occupancy maps generated by laser range finders are given.

7.2  The Grid-Based Robotic Mapping (GBRM) Problem

Probabilistic robotic mapping comprises stochastic methods of estimating the posterior density on the map, when at each time instance the vehicle trajectory, $X^k = [X_0, \ldots, X_k]$, is assumed a priori known. The posterior density of interest at time k, given all measurements up to and including time k, for the robotic mapping problem is, therefore,

$$
p_{k|k}(M_k \mid Z^k, X^k)
\tag{7.1}
$$

This density encapsulates all the relevant statistical information of the map $M_k$, where $Z^k = [Z_0, \ldots, Z_k]$ denotes the history of all sets of sensor measurements (or scans) up to and including the measurement scan at time k. As introduced in Chapter 4, each measurement $Z_k = [z_k^1, \ldots, z_k^{z_k}]$, with $z_k$ being the number of detections at time k. The density can be recursively propagated, with the standard conditional independence assumptions [4], via the well-known Bayesian update,

$$
p_{k|k}(M_k \mid Z^k, X^k) = \frac{p(Z_k \mid M_k, X_k)\, p_{k|k-1}(M_k \mid Z^{k-1}, X^k)}{\int p(Z_k \mid M_k, X_k)\, p_{k|k-1}(M_k \mid Z^{k-1}, X^k)\, dM_k}
\tag{7.2}
$$

Since a static map is commonly assumed,

$$
p_{k|k-1}(M_k \mid Z^{k-1}, X^k) = p_{k-1|k-1}(M_{k-1} \mid Z^{k-1}, X^{k-1})
\tag{7.3}
$$

that is, the time update density in the Bayes recursion is simply the posterior density at time k - 1. As noted in Chapter 4, even under static map assumptions, Equation 7.3 may not be valid due to occlusions. To model this added uncertainty, an extended formulation is required, with vehicle state dependent Markov transition matrices, which will be examined in Section 7.3.4.
A grid-based map discretizes the naturally continuous Cartesian spatial state space into a fixed number of fixed-size cells and is the representation addressed in this chapter. The map is therefore represented by $M_k = [m_k^1, \ldots, m_k^m]$, where for GBRM, m is the number of a priori assigned grid cells at predefined discrete spatial Cartesian coordinates. (Note that, contrary to the feature-based methods introduced in Chapter 4, under GBRM, m corresponds to the fixed number of cell states for which the occupancy values should be estimated, and is therefore not time varying. Under the FBRM concept, the number of feature states is itself random and, in general, time varying, and is therefore denoted $m_k$.) As the Cartesian location of the ith cell, $m_k^i$, is a priori assigned, the grid-based map state comprises an estimate of the probability of a landmark


existing in that discrete cell, at time k. In this chapter, this is referred to as the occupancy state space and is the filtering state space of any GBRM algorithm. Here $m_k^i \in \mathcal{P}$, with the constraint

$$
\sum_{\pi \in \mathcal{P}} \pi = 1
\tag{7.4}
$$

The set $\mathcal{P}$ can consist of an arbitrary number of hypotheses, but usually contains $\{O, E\}$ in the case of a Bayesian approach [7] and $\{O, E, U\}$ in the case of a Dempster-Shafer approach [8], where O, E, and U represent "occupied," "empty," and "unknown," respectively. In this work, the classical Bayesian approach is examined, and $\hat{M}_k$ then represents the estimate of occupancy in each cell at time k. The emptiness estimate is denoted $\hat{\bar{M}}_k = [\bar{m}_k^1, \ldots, \bar{m}_k^m]$. The true state of the ith cell is denoted $m^i$ for occupied and $\bar{m}^i$ for empty.
The most popular method of evaluating the recursion of Equation 7.2 is to model the map $M_k$ as a zero order Markov random field, so that each occupancy state $m_k^i$ can be independently estimated; that is,

$$
p_{k|k}(M_k \mid Z^k, X^k) = \prod_{i=1}^{m} p_{k|k}(m_k^i \mid Z^k, X^k)
\tag{7.5}
$$

and the update becomes

$$
p_{k|k}(M_k \mid Z^k, X^k) = \prod_{j=1,\, i=1}^{j=z_k,\, i=m} \frac{p(z_k^j \mid m_k^i, X_k)\, p_{k|k-1}(m_k^i \mid Z^{k-1}(j), X^k)}{\int p(z_k^j \mid m_k^i, X_k)\, p_{k|k-1}(m_k^i \mid Z^{k-1}(j), X^k)\, dm_k^i}
\tag{7.6}
$$
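For the binary hypothesis set {O, E}, each per-cell factor of Equation 7.6 reduces to a scalar Bayes update. The following is a minimal sketch under that assumption (hypothetical names), with the forward likelihoods of the measurement under each hypothesis taken as given:

```python
def bayes_cell_update(p_occ_prior, lik_occ, lik_emp):
    """One per-cell Bayesian occupancy update (the i-th factor of Equation 7.6).

    p_occ_prior : predicted occupancy probability for cell m^i
    lik_occ     : forward likelihood p(z_k^j | m^i occupied, X_k)
    lik_emp     : forward likelihood p(z_k^j | m^i empty, X_k)
    """
    num = lik_occ * p_occ_prior
    # The normalizing integral collapses to a sum over the two hypotheses.
    den = num + lik_emp * (1.0 - p_occ_prior)
    return num / den
```

Note that equal likelihoods under both hypotheses leave the prior unchanged, as expected of a noninformative (dummy) measurement.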

Note that for the GBRM problem, the number of measurements, $z_k$, equals the number of map cells, m. This is because map cells that are not observed (i.e., that do not interact with the sensor beam) are assigned a dummy, noninformative measurement. (Note that in the commonly considered spatial state space feature-based approaches [9], the number of measurements typically does not equal the number of elements in the map state.) Furthermore, since the trajectory, $X^k$, is assumed known, the map cell-measurement correspondence is assumed known, and thus $i = j \;\forall\; i = [1, \ldots, m]$. Therefore, without loss of generality, under the GBRM framework, $z_k$ is not time varying, so that the index k can be dropped and $z_k = z = m$. The robotic mapping state space filtering problem can then be written as

$$
M_k = M_{k-1}
\tag{7.7}
$$

$$
Z_k = h(M_k, X_k)
\tag{7.8}
$$

which indicates a static time update, and where the measurement is related to the detection, rather than spatial, attributes through a function


$h_k^{detection}(\cdot)$ of the state, as explained in Section 4.5, with $v_k$ being sampled from an assumed a priori known noise distribution. Recall that the state in the GBRM problem is an estimate of the existence of a landmark at a given discrete location (i.e., filtering occurs in the occupancy state space).

7.2.1  GBRM Based on Range Measurements

This section details state-of-the-art methods of evaluating the recursion of Equation 7.6. For clarity of exposition, the case of a single map cell, $m^i$, is outlined, with the i, j cell and measurement indices being discarded. Since the trajectory, $X^k$, is assumed known, it is also discarded from the density functions. The measurement likelihood may therefore be written as $p(z_k \mid m_k)$ and, assuming it to be discrete, it can be shown that [7]

$$
\log\frac{P(m_k \mid z^k)}{1 - P(m_k \mid z^k)} = \log\frac{P(m_k \mid z_k)}{1 - P(m_k \mid z_k)} + \log\frac{P(m_{k-1} \mid z^{k-1})}{1 - P(m_{k-1} \mid z^{k-1})} + \log\frac{1 - P(m_0)}{P(m_0)}
\tag{7.9}
$$
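The recursion of Equation 7.9 is straightforward to implement per cell. The following is a minimal sketch (hypothetical names), converting between probabilities and log-odds; it is an illustration, not the chapter's implementation:

```python
import math

def logit(p):
    """Log-odds of a probability p in (0, 1)."""
    return math.log(p / (1.0 - p))

def logodds_cell_update(p_prev, p_inv, p0=0.5):
    """One log-odds occupancy update for a single grid cell (Equation 7.9).

    p_prev : previous posterior P(m_{k-1} | z^{k-1})
    p_inv  : inverse sensor model value P(m_k | z_k)
    p0     : initial occupancy estimate P(m_0)
    """
    # The three right-hand terms of Equation 7.9; the last term is -logit(p0).
    l = logit(p_inv) + logit(p_prev) - logit(p0)
    # Convert the log-odds back to a probability.
    return 1.0 / (1.0 + math.exp(-l))
```

With $P(m_0) = 0.5$ the prior term vanishes, and repeated occupied-leaning measurements drive the cell's probability toward 1.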

where $m_0$ is the initial estimate of landmark occupancy in the given cell and is typically set to 0.5 [5]. Note that $P(m_k \mid z_k)$ inversely maps from the measurement at time k to the occupancy state $m_k$, and is often referred to as an inverse sensor model [4]. These "inverse" models are also required by Dempster's update rule,

$$
M(\rho_3) = \frac{\displaystyle\sum_{\rho_1 \cap \rho_2 = \rho_3} M_z(\rho_1)\, M_m(\rho_2)}{1 - \displaystyle\sum_{\rho_1 \cap \rho_2 = \emptyset} M_z(\rho_1)\, M_m(\rho_2)}
\tag{7.10}
$$

Here Mz (·) and Mm(·) represent mass functions, respectively containing the sensor and prior map evidences in support of each hypothesis, {r1, r2, r3}; that is, a direct mapping from the sensor measurement to the evidence in support of each hypothesis. This approach, however, requires “intuitive” models that lack mathematical justification and are contrary to the way in which range measuring sensors operate. This may result in inconsistent maps as shown in [7]. Approaches using the “forward” sensor model, p(zk|mk), have also been proposed [7, 10]. These more theoretically founded approaches attempt to obtain the likelihood of a measurement, given the state mk. For GBRM-SL algorithms, the measurement zk, used for the evaluation of the likelihood, p(zk|mk), comprised a range reading reported by the exteroceptive range sensor, assuming a 1D reading. A range reading corrupted by a Gaussian distributed noise signal of variance σ 2 results in a measurement likelihood



\[
P(z_k \mid m_k) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(z_k - r)^2}{2\sigma^2}} \tag{7.11}
\]


Grid-Based Robotic Mapping with Detection Likelihood Filtering

where r is the true range to the cell. When considering the spatial state (i.e., the location of a given cell m), the measurement equation for the likelihood of Equation 7.11 becomes

\[
z_k = h^{spatial}(m, X_k) + w_k^{spatial} \tag{7.12}
\]



where w_k^spatial ~ N(0, σ²) and h^spatial() is a function relating the spatial state of m to the range reading, z_k, as explained in Section 4.5.1. However, for the GBRM problem, since the spatial state space is a priori discretized into cells of fixed location, filtering occurs in the occupancy state space. GBRM-SL approaches use a discrete interpretation of the likelihood of Equation 7.11 and use the evaluation of the likelihood at discrete locations as the occupancy measurement, as depicted in Figure 7.1. The figure demonstrates a sensor model that spreads uncertainty in the range space only, for a true target located at range r(q), which, in the case of imaging radar, corresponds to a target existing in the qth range bin of its A-scope display. Hence, Figure 7.1 demonstrates a GBRM a priori assumed spatial sensor model based on spatial likelihoods (SLs). If this model is then used for GBRM, taking the arbitrary case of a detection occurring in range bin q – 2, z_k = r(q – 2), the resulting occupancy measurement becomes

\[
\frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(r(q-2) - r)^2}{2\sigma^2}} \tag{7.13}
\]

when the target was predicted to occur at range r. This measurement is a function of the range reading, r(q – 2), and the range measurement noise, σ 2. However, with respect to the filtering state of interest, mk, it can be seen that such a measurement has no dependence on the occupancy state. Therefore, this shows that range-based approaches adopt a state-independent measurement for propagation of the occupancy state estimate. Furthermore, the occupancy measurement is discrete, which allows for the subsequent discrete Bayes filter implementation proposed in the literature [5, 7, 10].
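This indirect generation of occupancy measurements can be sketched in a few lines of Python. This is a minimal illustration only; the bin spacing, range values, and σ below are hypothetical, not values from the text:

```python
import math

def spatial_occupancy_measurements(z_k, bin_ranges, sigma):
    """Evaluate the Gaussian range likelihood (Equation 7.11) at each
    discrete cell range, as GBRM-SL approaches do (Figure 7.1).

    z_k        -- reported range reading, e.g. r(q-2)
    bin_ranges -- cell-centre ranges r(q-3), ..., r(q+3)
    sigma      -- range measurement standard deviation
    """
    norm = 1.0 / (math.sqrt(2.0 * math.pi) * sigma)
    return [norm * math.exp(-(z_k - r) ** 2 / (2.0 * sigma ** 2))
            for r in bin_ranges]

# Hypothetical 1-m bins: the likelihood peaks at the reported range bin,
# regardless of whether each surrounding cell is actually occupied,
# which is exactly the state-independence discussed above.
bins = [10.0, 11.0, 12.0, 13.0, 14.0]
vals = spatial_occupancy_measurements(12.0, bins, sigma=0.5)
print(max(range(len(vals)), key=vals.__getitem__))  # peaks at index 2
```

Note that the returned values depend only on the range reading and σ, never on the occupancy state of each cell, which is the point made in the paragraph above.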


Figure 7.1  The indirect generation of occupancy measurements from standard range-based algorithms. The evaluation of the Gaussian range likelihood in the surrounding discrete cells, with spatial states r(q – 3),..., r(q + 3), is used as the occupancy measurements.

7.2  The Grid-Based Robotic Mapping (GBRM) Problem


The range at which a sensor reports the presence of a landmark can be used in the filtering of its spatial estimate. However, while this may be correlated with the sensor’s ability to correctly detect the landmark, the reported range at which the landmark is hypothesized to exist does not provide a measurement of m (the occupancy state) but only a measurement of its location. To correctly formulate the GBRM problem from a Bayesian perspective, and to have a truly state-dependent measurement, z_k should be redefined as a binary variable, z_k → z_k^det ∈ {Detection, No Detection}. Therefore, by using a range reading as the measurement, GBRM-SL sensor models subtly assume complete knowledge of the sensor’s detection characteristics, namely P(z_k|m) (the probability of detection) and P(z_k|m̄) (the probability of false alarm), and the occupancy measurements become discrete. That is, P(z_k|m) (and P(z_k|m̄)) are assumed completely known. Note that this is typically the case for most likelihood calculations, including data association [11] and particle filter SLAM solutions [6], where landmark DLs are assumed known or ignored completely. Based on these issues, the following section outlines the reformulation of the discrete GBRM filter.

7.2.2  GBRM with Detection Measurements

Once the occupancy measurement, z_k, is defined in detection space, rather than range-bearing space, the measurement likelihoods (for both detection and non-detection) become real signal processing parameters. A simple expansion of the occupancy posterior, where the measurements are detections and non-detections, shows how the occupancy measurement likelihoods can be obtained when the signal processing stage is considered. Consider the probability of occupancy given a history of measurements,

\[
P(m_k \mid (z^{det})^k) \tag{7.14}
\]

The measurement history (z^det)^k is now considered to be a series of binary hypothesis decisions on the presence or absence of landmarks, derived through some function of the measured signal power, given by the measurement model. Methods to determine this binary hypothesis for radar were addressed in Chapter 3, where the single-scan CA-CFAR and OS-CFAR hypothesis tests, H_{0/1}^{CA-CFAR} and H_{0/1}^{OS-CFAR}, were presented (Section 3.5), and in Chapter 6, where multiple-scan detection methods were introduced, such as the TPP processor, yielding hypotheses H_{0/1}^{TPP} (Section 6.5). These were based on constant probabilities of false alarm. Thus, each GBRM measurement, z_k, in this chapter is the output of a likelihood ratio test and, based on a general detector, can be denoted H_1^{detector} if a detection was made or H_0^{detector} if no detection was made. For example, if a CA-CFAR processor is the chosen detector, then H_1^{CA-CFAR} is used in place of the general term H_1^{detector}, and so on. Expression 7.14 can be expanded about both measurement hypotheses, yielding



\[
P(m_k \mid z_k = H_1^{detector}, (z^{det})^{k-1}) = \gamma_{H_1^{detector}}^{-1}\, P(z_k = H_1^{detector} \mid m_k)\, P(m_k \mid (z^{det})^{k-1}) \tag{7.15}
\]

\[
\gamma_{H_1^{detector}} = P(z_k = H_1^{detector} \mid m_k)\, P(m_k \mid (z^{det})^{k-1}) + P(z_k = H_1^{detector} \mid \bar{m}_k)\, P(\bar{m}_k \mid (z^{det})^{k-1}) \tag{7.16}
\]


\[
P(m_k \mid z_k = H_0^{detector}, (z^{det})^{k-1}) = \gamma_{H_0^{detector}}^{-1}\, P(z_k = H_0^{detector} \mid m_k)\, P(m_k \mid (z^{det})^{k-1}) \tag{7.17}
\]

\[
\gamma_{H_0^{detector}} = P(z_k = H_0^{detector} \mid m_k)\, P(m_k \mid (z^{det})^{k-1}) + P(z_k = H_0^{detector} \mid \bar{m}_k)\, P(\bar{m}_k \mid (z^{det})^{k-1}) \tag{7.18}
\]

These equations calculate, in closed form, a statistically correct posterior of the occupancy random variable, where the measurement likelihoods P(z_k^det = H_1^detector|m_k), P(z_k^det = H_1^detector|m̄_k), P(z_k^det = H_0^detector|m_k), and P(z_k^det = H_0^detector|m̄_k) are those frequently encountered in landmark detection algorithms, such as CFAR processors. A graphical representation of the landmark detection hypothesis was already introduced in Chapter 3 (Figure 3.5), and a modified version, for GBRM, is shown in Figure 7.2. p(S^lin|m, Ω_m) and p(S^lin|m̄, Ω_m̄) represent the received signal fluctuation densities under landmark presence, m, and landmark absence, m̄, respectively, and are further discussed in Section 7.3.4. To keep the analysis general, the moments of the landmark presence and absence densities are referred to as Ω_m and Ω_m̄, respectively. A likelihood ratio is then defined by [12],

\[
L(S^{lin}) = \frac{p(S^{lin} \mid m, \Omega_m)}{p(S^{lin} \mid \bar{m}, \Omega_{\bar{m}})} \tag{7.19}
\]



and,

\[
z_k^{det} = \begin{cases} H_1^{detector} & \text{if } L(S^{lin}) \ge S^{detector} \\ H_0^{detector} & \text{otherwise} \end{cases} \tag{7.20}
\]
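The threshold test of Equations 7.19 and 7.20 can be sketched as follows, using the exponential noise and Rayleigh landmark fluctuation models introduced later in Section 7.3.5. The threshold value plays the role of S^detector; all numerical values here are illustrative:

```python
import math

def lr_test(s_lin, lam_ta, eta_snp, threshold):
    """Likelihood ratio test of Equations 7.19-7.20 for one power sample.

    Illustrative density choices (see Section 7.3.5): exponential noise
    with rate lam_ta, Rayleigh landmark power with mean SNP eta_snp.
    threshold stands in for the decision threshold S^detector.
    """
    p_noise = lam_ta * math.exp(-lam_ta * s_lin)
    p_landmark = (math.pi * s_lin / (2.0 * eta_snp ** 2)) * \
        math.exp(-math.pi * s_lin ** 2 / (4.0 * eta_snp ** 2))
    L = p_landmark / p_noise
    return "H1" if L >= threshold else "H0"

# A strong return relative to the noise level is declared a detection.
print(lr_test(s_lin=5.0, lam_ta=1.0, eta_snp=4.0, threshold=1.0))  # H1
print(lr_test(s_lin=0.1, lam_ta=1.0, eta_snp=4.0, threshold=1.0))  # H0
```

In a CFAR implementation the noise parameter would be locally estimated from the sliding window rather than assumed known, as described in Section 7.3.3.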



The four probabilities present in the detection hypothesis problem, which are also required by Equations 7.15 and 7.17, are typically referred to as:

P(z_k = H_1^detector | m) - landmark detection likelihood (DL) = P_D^detector
P(z_k = H_1^detector | m̄) - false alarm likelihood (FAL) = P_fa^detector
P(z_k = H_0^detector | m) - missed detection likelihood (MDL) = 1 - P_D^detector
P(z_k = H_0^detector | m̄) - “noise” detection likelihood (NDL) = 1 - P_fa^detector

Given a binary measurement space, discrete measurement likelihoods can be used to calculate the occupancy posterior through the update Equations 7.15 and 7.17, which in turn require completely known measurement likelihoods (target absence and presence probabilities). However, these likelihoods can generally only be calculated exactly when two a priori assumptions are made: a known mean landmark SNP, η^SNP, and a known landmark power fluctuation PDF, p(S^lin|m, Ω_m) [13].
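With the DL and FAL so defined, the update Equations 7.15 through 7.18 reduce to a simple binary Bayes step per cell, which can be sketched as follows (the PD and Pfa values in the example are illustrative):

```python
def occupancy_update(p_prev, detection, p_d, p_fa):
    """Binary Bayes occupancy update (Equations 7.15-7.18).

    p_prev    -- prior occupancy probability P(m_k | (z^det)^{k-1})
    detection -- True for hypothesis H1, False for H0
    p_d, p_fa -- detection and false alarm likelihoods of the detector
    """
    if detection:
        num = p_d * p_prev                           # Equation 7.15 numerator
        gamma = num + p_fa * (1.0 - p_prev)          # Equation 7.16
    else:
        num = (1.0 - p_d) * p_prev                   # Equation 7.17 numerator
        gamma = num + (1.0 - p_fa) * (1.0 - p_prev)  # Equation 7.18
    return num / gamma

# Repeated detections with PD = 0.8, Pfa = 0.1 drive the cell toward occupied.
p = 0.5
for _ in range(3):
    p = occupancy_update(p, True, 0.8, 0.1)
print(round(p, 3))  # 0.998
```

Each detection multiplies the occupancy odds by PD/Pfa, so confidence grows geometrically with consistent evidence.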

For example, for the particular cases of exponential noise and Gaussian target presence distributions, Ω_m = S_tp and Ω_m̄ = λ_ta, as defined in Section 3.4.1.


Figure 7.2  A graphical representation of the received signal classification problem in a given map cell at time k. S^detector represents the decision threshold, and Ω_m and Ω_m̄ represent the general moments of the landmark presence and absence distributions, respectively. The hypothesis decisions are H_0^detector: landmark absent, and H_1^detector: landmark present. The measurement likelihoods required to calculate the posterior probability of occupancy are also indicated.

Under the further assumption of IID noise power samples, a suitable power threshold can be calculated that will exactly obtain the theoretically derived measurement, and hence occupancy, likelihoods. In this case, the measurements required to calculate the posterior occupancy probability, P(m_k|z_{0:k}), are deterministic, and a discrete Bayes filter implementation is valid. However, when these assumptions are relaxed, the primary assumption being the known SNP, these measurement likelihoods become continuous, and the propagation of the occupancy random variable estimate must then be carried out using continuous filtering methods such as the EKF or the particle filter. As the measurement likelihoods form two complementary sets, {P(z_k = H_1^detector|m_k), P(z_k = H_0^detector|m_k)} and {P(z_k = H_1^detector|m̄_k), P(z_k = H_0^detector|m̄_k)}, only one likelihood from each set needs to be estimated. Furthermore, a discrete filtering approach that disregards the uncertainty of the measurement likelihood estimate will weigh the measurement and the prior equally, as they are considered to have equal covariances. Occupancy measurements are in fact highly correlated with the vehicle location and the structure of the environment, and should not be treated equally. For example, specular reflections are likely to occur at high angles of incidence, and clutter-free observations should be treated with greater confidence than those with interfering signals.

7.2.3  Detection versus Range Measurement Models

In this section, the occupancy posterior is propagated for a set of simulated data, illustrating that optimal performance, in terms of estimating the correct number of landmarks, can be achieved when the actual measurement likelihoods are used, as opposed to the a priori assigned values typically reported in autonomous mapping algorithms. Figure 7.3 shows a simulated data set with the resulting detection matrix after the application of a detection algorithm. A simulated landmark exists in range bin 11, and all other cells are empty. The fluctuating signal models the change in power from a landmark, which, for example, can result from the effects giving rise to the Swerling models (Section 2.7.1.3) or speckle (Section 2.7.1.4). The signal amplitude in empty cells fluctuates according to an assumed noise model, which in this case is the exponential model of Section 3.4. In this example, the false alarm likelihood is set high, at 0.1 (i.e., 1 in 10 measurements yields a false alarm), due to the small sample window (21 × 20 cells), to ensure that in the ensuing detection tests some false alarms fall within the surveillance region. Using Equations 7.15 and 7.17, the occupancy posterior at each time step is evaluated.

Both the range measurement approach and the detection measurement approach are compared in this simulated test. Both methods use the detection sequence depicted in Figure 7.3; however, the values of the measurement likelihoods differ. The true likelihoods for this sample set, obtained theoretically from the detection algorithm, are PD = 0.80663 and Pfa = 0.1. Using the range measurement model of Equation 7.11 [7, 10], the detection measurement likelihoods are essentially assumed a priori, and the statistics of the detection algorithm used are ignored. The results in Figure 7.4 show the estimated number of landmarks in the region after 20 measurement updates, where cells with an occupancy probability greater than 0.51 are deemed occupied and cells with values less than 0.49 are deemed empty. The figure on the left plots the results from the detection measurement model, using the theoretically true false alarm likelihood and varying the PD, while the figure on the right uses the true PD and varies the false alarm likelihood. Also plotted on both graphs is the estimated number of landmarks from the range measurement model. As the range measurement model does not consider the statistics of the detection algorithm, the estimated number of landmarks in the region remains at eight for each test, as can be seen in the figure. This is due to the overly pessimistic a priori assignment of the likelihoods from the Gaussian model of Equation 7.11, and therefore the estimated number of landmarks far exceeds the true number. Naturally, this value can be tuned or scaled to improve mapping performance for a particular case; however, with multiple landmarks having varying measurement likelihoods, optimal performance is almost impossible.
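The effect of mismatched likelihoods can be reproduced in miniature. The sketch below propagates one occupied and one empty cell through the binary Bayes update of Equations 7.15 through 7.18, drawing detections with the true likelihoods quoted above (PD = 0.80663, Pfa = 0.1) while updating with assumed values; the cell counts and update count are illustrative:

```python
import random

def bayes_update(p, det, p_d, p_fa):
    """Binary Bayes occupancy update (Equations 7.15-7.18)."""
    num = (p_d if det else 1.0 - p_d) * p
    den = num + (p_fa if det else 1.0 - p_fa) * (1.0 - p)
    return num / den

def run_cell(occupied, assumed_pd, assumed_pfa,
             true_pd=0.80663, true_pfa=0.1, n_updates=20):
    """Draw detections with the TRUE likelihoods, but update the cell
    with the ASSUMED likelihoods, as in the experiment of Figure 7.4."""
    p = 0.5
    for _ in range(n_updates):
        det = random.random() < (true_pd if occupied else true_pfa)
        p = bayes_update(p, det, assumed_pd, assumed_pfa)
    return p

random.seed(2)
# Updating with the true likelihoods classifies both cells correctly.
p_occ = run_cell(True, 0.80663, 0.1)
p_emp = run_cell(False, 0.80663, 0.1)
print(round(p_occ, 3), round(p_emp, 3))
```

Re-running with a badly mismatched assumed DL (e.g., close to the FAL) leaves the posteriors near 0.5, mirroring the sensitivity plotted in Figure 7.4.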


Figure 7.3  Sample data and the resulting detection matrix. Note numerous false alarms within the surveillance region as the false alarm likelihood is set to 0.1. For the simulated sample data, the darker the shade, the higher the measured power intensity in that range bin. The detection matrix output shows black for zkdet = H1detector in that cell and white for zkdet = H0detector.


Figure 7.4  Comparison of the estimated number of landmarks in the data set after 20 updates using the range measurement model and the detection measurement model, where the true number of landmarks is one. As the range measurement model ignores the statistics of the detection algorithm, a priori assignment may cause inaccuracies. The figure also plots the sensitivity of the occupancy propagation equations to incorrect measurement likelihood estimates.

For the detection measurement model, where the (range-independent) measurement likelihoods are used, the values of the likelihoods are varied from 0 → 1 for each test, as plotted along the horizontal axes of the graphs in Figure 7.4. Note that the error is large at low assumed detection and high assumed false alarm likelihoods; however, as the assumed likelihoods approach the true values, the estimated landmark number converges on the true number (i.e., one landmark present). This simple simulation thus shows that, since optimal performance can be achieved by using the actual measurement likelihoods, these state-dependent likelihoods should be jointly estimated with the occupancy posterior.

7.3  Mapping with Unknown Measurement Likelihoods

This section outlines a GBRM-ML algorithm to jointly estimate both the occupancy random variable and the corresponding measurement likelihoods.

7.3.1  Data Format

From a radar perspective, the environment can be considered to consist of an unknown number of spatially distributed signal PDFs, of both unknown distribution type and unknown moments. Figure 7.2 illustrated a noise signal PDF in a given cell, p(S^lin(x,y)|m̄, Ω_m̄), and a landmark signal PDF, p(S^lin(x,y)|m, Ω_m). A single radar scan therefore acquires samples from these underlying environmental PDFs and returns them in the form of the A-scope at different bearing angles. The data in a single A-scope contains Q samples at discrete range increments q along each bearing angle. The data is generally modeled by a Q-dimensional joint PDF,

\[
p_{S^{lin}(1), \ldots, S^{lin}(Q)}\left(S^{lin}(1), \ldots, S^{lin}(Q)\right) \tag{7.21}
\]


where [S^lin(1), ..., S^lin(Q)] are Q individual samples from assumed independent signal PDFs observed by the radar. Thus, p_{S^lin(1),...,S^lin(Q)} represents the received power intensity PDFs in each range bin. Many examples of such A-scope displays have been shown throughout this book. A complete scan therefore contains A-scope displays at bearing angles {θ1,...,θ|Θ|} ∈ Θ, and the complete set of measurement samples can be denoted S^lin(q: 1 → Q, Θ). As before, let p(S^lin|m, Ω_m) be the landmark signal PDF and p(S^lin|m̄, Ω_m̄) be the noise signal PDF in the qth range bin, where Ω_m and Ω_m̄ are the unknown distribution moments. The noise samples, S^lin(q) ~ p(S^lin|m̄, Ω_m̄), are assumed IID, ∀q ∈ [1,...,Q], and the moments of p(S^lin|m, Ω_m) are a function of the landmark’s mean SNP, η^SNP. The spatial distributions are typically modeled by point spread Gaussian functions using the sensor’s range and bearing covariances.

7.3.2  GBRM Algorithm Overview

Figure 7.5 shows a block diagram of the estimation problem under consideration. A single A-scope, S^lin(q: 1 → Q, θi), is considered here, which can then be easily generalized to model the complete 360° set of measurements, S^lin(q: 1 → Q, Θ). The system input is the true map state along bearing angle θi, M(θi) = [m1,..., mQ], which is a vector of Q binary numbers indicating the presence, mq = 1 (also denoted m), or absence, mq = 0 (also denoted m̄), of a landmark in each range bin. The sensor model can therefore be expressed as S^lin(M, θi), meaning that an A-scope, at scanning bearing angle θi, is constructed as a function of the underlying map. The corresponding mean SNP, η^SNP, for each landmark is also required; these are modeled in the vector of measurement likelihoods, Λ(θi) = [λ(1),...,λ(Q)]. The block labeled “Sensor model” in Figure 7.5 then uses the landmark’s range, expected mean power η^SNP, and distribution moments Ω_m to generate a noise-free A-scope, which is in turn corrupted by noise samples taken from p(S^lin|m̄, Ω_m̄) to construct S^lin(M, θi), the collection of power samples comprising an A-scope display at bearing angle θi. Note that, unlike most filtering formulations that assume an a priori known noise distribution and moments, here the moments remain unknown and must be locally estimated.


Figure 7.5  A block diagram of the GBRM-DL algorithm. The aim is to estimate the posterior on the occupancy vector, M. This, however, requires estimates of the measurement likelihoods, Λ̂. Note that the additive noise in radar signal processing is typically non-Gaussian. The outputs of the model are the map state M_k and sensor likelihoods Λ_k, at angle θi. For convenience of notation, the angle θi is dropped in the output terms M_k and Λ_k.


7.3.3  Constant False Alarm Rate (CFAR) Detector

The “CFAR detector” block of Figure 7.5 contains the signal detection algorithm, which should have a constant false alarm rate property (Chapter 3). Its input is S^lin(M, θi), and its output, Z_k^det = [z_k^det(1) ... z_k^det(Q)]^T, is a vector of Q binary hypotheses, with z_k^det(q) ∈ {H_1^detector, H_0^detector}. Using the assumed noise model, p(S^lin|m̄, Ω_m̄), the sliding window CFAR techniques of Section 3.5 are used to generate a local estimate of the noise distribution’s moments in each range bin. These are then used in the likelihood ratio test of Equation 7.19 to generate the H_1^detector or H_0^detector hypothesis decisions, which constitute the observations Z_k^det.

7.3.4  Map Occupancy and Detection Likelihood Estimator

The estimation problem is therefore to evaluate the joint posterior on the occupancy and measurement likelihood random variables at each time k,

\[
p(m_k, \lambda_k \mid (z^{power})^k) \tag{7.22}
\]

where (z^power)^k consists of the history of all raw power measurements, and λ_k = P(z_k^det) contains the measurement likelihoods (detection and non-detection). This can be expanded to

\[
p(m_k, \lambda_k \mid (z^{power})^k) = p(m_k \mid \lambda_k, (z^{power})^k)\, p(\lambda_k \mid (z^{power})^k) \tag{7.23}
\]

Since occupancy is a discrete binary random variable,

\[
p(m_k = 0 \mid \lambda_k, (z^{power})^k) = 1 - p(m_k = 1 \mid \lambda_k, (z^{power})^k) \tag{7.24}
\]

As previously stated, λ_k contains both the detection and non-detection likelihoods,

\[
P(z_k^{det} = H_1^{detector} \mid m = 1, (z^{power})^k) = P_D^{detector} \tag{7.25}
\]

\[
P(z_k^{det} = H_1^{detector} \mid m = 0, (z^{power})^k) = P_{fa}^{detector} \tag{7.26}
\]

\[
P(z_k^{det} = H_0^{detector} \mid m = 1, (z^{power})^k) = 1 - P_D^{detector} \tag{7.27}
\]

\[
P(z_k^{det} = H_0^{detector} \mid m = 0, (z^{power})^k) = 1 - P_{fa}^{detector} \tag{7.28}
\]

representing a complementary set of measurement likelihoods. Thus, it is only necessary to maintain an estimate of p(λ_k = H_1^detector|z_{0:k}), which will be denoted p(λ_k|z_{0:k}) unless explicitly stated otherwise, and whose true value is p(λ_k|z_{0:k}) ∈ {P_fa,...,1}. As the vehicle traverses the environment, landmarks may randomly appear and disappear in the data, due to occlusions as well as to falling in and out of the radar’s FoV. Therefore, contrary to standard occupancy grid mapping algorithms, a non-static time update is suggested and used here (i.e., p(m_k|m_{k-1}, z_{0:k-1}) ≠ p(m_{k-1}|z_{0:k-1})). That is, cells can randomly change from occupied to empty, or vice versa, during vehicle motion. The process is therefore modeled as a hidden Markov model (HMM), where the transition matrix is given by



\[
X = \begin{bmatrix} P_{oo} & P_{eo} \\ P_{oe} & P_{ee} \end{bmatrix} \tag{7.29}
\]

P_oo is the probability of an occupied cell remaining occupied (a stationary landmark remaining within the sensor’s field of view), P_eo is the probability of an empty cell becoming occupied, P_oe is the probability of an occupied cell becoming empty (possibly due to occlusion effects), and P_ee is the probability of an empty cell remaining empty. Applying Bayes rule to the first term of Equation 7.23 gives

\[
p(m_k \mid \lambda_k, (z^{power})^k) \propto p(z_k^{det}, z_k^{power} \mid m')\, p(m' \mid (z^{power})^{k-1}) \tag{7.30}
\]

where m′ is the predicted occupancy, obtained from the Markov transition matrix. Since λ_k is not dependent on the power measurements themselves, the occupancy posterior becomes

\[
p(m_k \mid \lambda_k, (z^{power})^k) \propto p(z_k^{det} \mid m')\, p(z_k^{power} \mid m')\, p(m' \mid (z^{power})^{k-1}) \tag{7.31}
\]
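The Markov prediction implied by the transition matrix of Equation 7.29 can be sketched in one line of Python; the transition probabilities below are illustrative placeholders, not values from the text:

```python
def predict_occupancy(p_occ, p_oo, p_eo):
    """Markov time update of the occupancy probability (Equation 7.29).

    p_occ -- posterior occupancy probability at time k-1
    p_oo  -- P(occupied -> occupied)
    p_eo  -- P(empty -> occupied)
    """
    return p_oo * p_occ + p_eo * (1.0 - p_occ)

# With a small chance of appearance/disappearance, a confident estimate
# relaxes slightly toward uncertainty at each prediction step.
print(round(predict_occupancy(0.95, p_oo=0.98, p_eo=0.02), 4))  # 0.932
```

This is the "non-static" time update discussed above: unlike classical occupancy grid mapping, the predicted probability is not simply the previous posterior.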

The second term of Equation 7.23 is a sensor-specific representation of the DL density, where a detection may mean a range reading, in the case of radar, laser, and sonar sensors, or a feature detection, if feature extraction methods are applied to the raw sensor data. Since the focus of this book is on radar, the density is obtained from an examination of CFAR detection theory (Chapter 3, Section 3.5). Hence,

\[
p(z_k^{det}) = f^{detection}(S^{lin}(M, \theta_i)) \tag{7.32}
\]

where f^detection() is a nonlinear function of the raw data incorporating CFAR signal detection theory. As discussed previously in Section 7.3.1, the measurement data S^lin(M, θi) is assumed to consist of a vector of Q consecutive, independent signal power measurements, S^lin(q), for each bearing angle. A priori signal distribution assumptions are made on both p(S^lin|m, Ω_m) and p(S^lin|m̄, Ω_m̄), ∀q ∈ [1...Q], where the distribution moments Ω_m and Ω_m̄ are generally assumed unknown and must be estimated using the signal power information. To make an estimate of λ_k, the mean SNP, η^SNP, must first be estimated. Taking the measured power in bin q, {S^lin(q)|m}, as the signal-plus-noise measurement (assuming the existence of a landmark), it is therefore necessary to estimate the local noise power at that bin to generate an SNP estimate. As shown in Chapter 3, Section 3.5, based on leading and lagging windows around the CUT, 2W power measurements are used to generate this estimate. It was shown that numerous adaptive CFAR techniques exist for generating local estimates of the unknown noise distribution parameters. While most signal processing literature considers the probability of detection for landmarks of known SNP, this chapter requires λ_k to be estimated from the measured power information, using an assumed distribution p(S^lin|Ω_m, m) but with an unknown mean SNP. Thus, given an estimate of the local noise power intensity in bin q, {S^lin(q)|m̄}, generated by the detector, η^SNP(q) can be estimated, and the detection likelihood evaluated as


\[
p(z_k^{det}) = P_D^{OS\text{-}CFAR}(q) = \prod_{i=0}^{k_{OS}-1} \frac{2W - i}{2W - i + \gamma / \left(2W\,(1 + \eta^{SNP}(q))\right)} \tag{7.33}
\]
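The detection likelihood product of Equation 7.33 can be evaluated directly. Note that the grouping of the denominator term follows the reconstruction used here and should be checked against the original; the window size, ordered-statistic rank, and threshold factor below are illustrative, not values from the text:

```python
def os_cfar_pd(eta_snp, W, k_os, gamma):
    """Evaluate the OS-CFAR detection likelihood product (Equation 7.33).

    eta_snp -- estimated mean signal-to-noise power in the cell
    W       -- half-width of the sliding window (2W reference cells)
    k_os    -- ordered-statistic rank used by the OS-CFAR detector
    gamma   -- threshold scaling factor (set by the desired Pfa)
    """
    pd = 1.0
    for i in range(k_os):
        denom_term = gamma / (2 * W * (1.0 + eta_snp))
        pd *= (2 * W - i) / (2 * W - i + denom_term)
    return pd

# The detection likelihood grows monotonically with the estimated SNP,
# which is what allows the DL to be inferred from the measured power.
print(os_cfar_pd(10.0, W=8, k_os=12, gamma=20.0) >
      os_cfar_pd(1.0, W=8, k_os=12, gamma=20.0))
```

Each factor in the product is strictly less than one, so the result is always a valid probability.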

7.3.5  Incorporation of the OS-CFAR Processor

An ordered-statistics (OS)-CFAR approach was shown in Chapter 3, Section 3.5.2.3, to be the most robust in situations of high clutter and multiple landmarks, as commonly encountered in a field robotics environment. Under certain distribution assumptions, using an OS-CFAR-based noise estimate, closed form solutions exist for η̂^SNP(q) [14],

\[
\hat{\eta}^{SNP}(q) = \frac{S^{lin}(q)}{S^{lin}_{k_{OS}}} \tag{7.34}
\]

with

\[
p(S^{lin} \mid \Omega_{\bar{m}} = \lambda_{ta}, \bar{m}) = \text{Exponential}(\lambda_{ta}) = \lambda_{ta} \exp\left(-\lambda_{ta} S^{lin}\right) \tag{7.35}
\]

\[
p(S^{lin} \mid \Omega_m = \eta^{SNP}, m) = \text{Rayleigh}(\eta^{SNP}) = \frac{\pi S^{lin}}{2(\eta^{SNP})^2} \exp\left(-\frac{\pi (S^{lin})^2}{4(\eta^{SNP})^2}\right) \tag{7.36}
\]

\[
p(\eta^{SNP} \mid \hat{\eta}^{SNP}) = \frac{1}{\hat{\eta}^{SNP}} \exp\left(-\frac{\eta^{SNP}}{\hat{\eta}^{SNP}}\right) \tag{7.37}
\]

where I_0 is a modified Bessel function of order 0 [15]. These equations describe the received power fluctuation models of the raw data, S^lin(M, θi), acquired by the imaging sensor. For the radar sensor adopted in this work, these well-established models describe an omnidirectional distribution of the signal power reflected from a landmark with multiple scatterers. As in Chapter 5, Section 5.3.1, an exponential distribution is assumed for the radar noise signal, with a Ricean fluctuation model, p(S^lin|Ω_m = η^SNP, m), adopted for the landmark signal. Since the intensity of the received power is dependent on the viewing aspect, the Swerling I model (Section 2.7.1.3), p(η^SNP|η̂^SNP), is adopted to model the omnidirectional density of the SNP from a given landmark [16]. The GBRM-ML framework can naturally be adapted to any given distribution assumptions from which closed-form likelihood equations, such as Equation 7.34, can be derived. Since the assumed densities for radar systems are typically non-Gaussian, and λ_k is a nonlinear function of the observed data S^lin(M, θi) (Equation 7.34), a particle approach is adopted to solve the recursion. Note that the measurement likelihood estimate is valid for the radar used in this work. By deriving a similar likelihood for other range-measuring exteroceptive sensors, such as underwater sonar and laser, the GBRM-ML algorithm can be generalized for use in multiple autonomous mapping applications.
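The fluctuation models of Equations 7.35 and 7.36 are straightforward to simulate, which is useful when testing a GBRM-ML implementation. The sketch below uses inverse-CDF sampling for the Rayleigh parameterization of Equation 7.36 (whose mean works out to η^SNP itself); the parameter values are illustrative:

```python
import math
import random

def sample_noise(lam_ta):
    """Draw a noise power sample from the exponential model (Equation 7.35)."""
    return random.expovariate(lam_ta)

def sample_landmark(eta_snp):
    """Draw a landmark power sample from the Rayleigh model (Equation 7.36).

    For this parameterization the CDF is F(s) = 1 - exp(-pi s^2 / (4 eta^2)),
    so inverse-CDF sampling gives s = 2 * eta * sqrt(-ln(u) / pi).
    """
    u = random.random()
    return 2.0 * eta_snp * math.sqrt(-math.log(u) / math.pi)

random.seed(0)
# The sample mean of many landmark draws should sit close to eta_snp.
samples = [sample_landmark(5.0) for _ in range(20000)]
print(sum(samples) / len(samples))
```

Feeding such samples through a CFAR detector allows the empirical DL to be compared against the closed-form value of Equation 7.33.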


7.4  GBRM-ML Particle Filter Implementation

The objective of this filter is to propagate the posterior of the joint density, p(m_k, λ_k|(z^power)^k). Assume that the prior, p(m_{k-1}, λ_{k-1}|(z^power)^{k-1}), can be represented by a set of weighted particles {ψ_{k-1}^{(i)}, w_{k-1}^{(i)}}_{i=1}^{N} such that



\[
p(\psi_{k-1} \mid (z^{power})^{k-1}) \approx \sum_{i=1}^{N} w_{k-1}^{(i)}\, \delta_{\psi_{k-1}^{(i)}}(\psi_{k-1}) \tag{7.38}
\]

where

\[
\psi_{k-1} = \begin{bmatrix} m_{k-1} \\ P(z_{k-1}^{det}) \end{bmatrix} \tag{7.39}
\]



is the joint state containing the estimate of the map and the corresponding measurement likelihoods. Note that the measurement likelihood P(z_{k-1}^det) exists for both m_k = 1 (where it will be the landmark DL) and m_k = 0 (where it will be the false alarm likelihood). To propagate the densities, the standard particle filter recursion with resampling is followed [17, 18]:

\[
\psi_k^{(i)} \sim v(\psi_k \mid \psi_{k-1}^{(i)}, z_k^{power}) \tag{7.40}
\]

\[
w_k^{(i)} = w_{k-1}^{(i)}\, \frac{p(z_k \mid \psi_k^{(i)})\, p(\psi_k^{(i)} \mid \psi_{k-1}^{(i)})}{v(\psi_k^{(i)} \mid \psi_{k-1}^{(i)}, z_k^{power})} \tag{7.41}
\]

where v(ψ_k|ψ_{k-1}^{(i)}, z_k^power) is the proposal density, indicating the occupancy and measurement likelihood of the state ψ_k, based on particle ψ_{k-1}^{(i)} and measurement z_k. The transition likelihood p(ψ_k^{(i)}|ψ_{k-1}^{(i)}) describes the predicted state values and consists of the Markov time update, described previously in Section 7.3.4, which propagates the occupancy random variable: a random particle set is sampled from the previous posterior, p(m′|(z^power)^{k-1}) ~ p(m_{k-1}|(z^power)^{k-1}, X), in which the binary value of the m component changes or remains fixed according to the probabilities set by the transition matrix, X (Equation 7.29). A static time update is used for the measurement likelihood estimate; that is,

\[
p(\lambda' \mid (z^{power})^{k-1}) = p(\lambda_{k-1} \mid (z^{power})^{k-1}) \tag{7.42}
\]

where, in a similar fashion to m′, λ′ is the predicted measurement likelihood. The proposal likelihood v(ψ_k|ψ_{k-1}^{(i)}, z_k^power) depends on whether the cell continues in its same state (i.e., during the Markov transition, the cell value remains unchanged) or whether it changes state. In the case of the state remaining the same,

\[
v(\psi_k^{(i)} \mid \psi_{k-1}^{(i)}, z_k^{power}) = p(\psi_k^{(i)} \mid \psi_{k-1}^{(i)}) \tag{7.43}
\]


In the case of the state changing, or of a new detection, the data at time k is used to initialize the estimate of P(z_k^det), according to the measurement likelihood estimation algorithm outlined in Section 7.3.4. That is, the DL is taken to be the estimated likelihood at that location, using the measurement data S^lin(M, θi) and Equation 7.34. Finally, the likelihood used to weight the particles is obtained from the likelihood of a landmark being present in the cell, which is the same likelihood used by the detection Equation 7.19, and is given by



\[
p(z_k^{power} \mid \psi_k^{(i)}) = \frac{p(S^{lin}(q) \mid m, \Omega_m)}{p(S^{lin}(q) \mid \bar{m}, \Omega_{\bar{m}})} \tag{7.44}
\]

where S^lin(q) is the received power intensity at the particle location ψ^{(i)}. The weighted particle set is then resampled. Estimates of the posterior occupancy and measurement likelihood can be extracted using the expected a posteriori estimate,



\[
\hat{\psi}_k = \sum_{i=1}^{N} w_k^{(i)}\, \psi_k^{(i)} \tag{7.45}
\]
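The recursion of Equations 7.38 through 7.45 can be sketched for a single cell as follows. This is a minimal sketch under simplified assumptions: each particle's DL component is held fixed (the static update of Equation 7.42) rather than re-initialized from Equation 7.34, the detection likelihood stands in for the weighting of Equations 7.40 and 7.41, and all numerical values are illustrative:

```python
import random

def pf_step(particles, weights, detection, p_fa, p_oo=0.95, p_eo=0.05):
    """One predict/update/resample step for the joint state (m, lambda).

    particles -- list of (m, lam) tuples; m in {0, 1}, lam = DL estimate
    detection -- binary measurement z_k^det (True = H1)
    p_fa      -- false alarm likelihood of the detector
    """
    new_particles, new_weights = [], []
    for (m, lam), w in zip(particles, weights):
        # Markov time update on the occupancy component (Equation 7.29)
        if random.random() < (1.0 - p_oo if m else p_eo):
            m = 1 - m
        # Weight by the detection likelihood of the (predicted) state
        if m:
            like = lam if detection else 1.0 - lam
        else:
            like = p_fa if detection else 1.0 - p_fa
        new_particles.append((m, lam))
        new_weights.append(w * like)
    total = sum(new_weights) or 1.0
    new_weights = [w / total for w in new_weights]
    # Multinomial resampling back to uniform weights
    resampled = random.choices(new_particles, new_weights, k=len(new_particles))
    n = len(resampled)
    return resampled, [1.0 / n] * n

random.seed(1)
N = 500
particles = [(random.randint(0, 1), random.uniform(0.5, 1.0)) for _ in range(N)]
weights = [1.0 / N] * N
for _ in range(10):  # ten consecutive detections
    particles, weights = pf_step(particles, weights, detection=True, p_fa=0.1)
p_occ = sum(m for m, _ in particles) / N  # EAP occupancy estimate (Eq. 7.45)
print(round(p_occ, 2))
```

With repeated detections, particles carrying m = 1 dominate the resampling, and the EAP occupancy estimate settles near one, limited only by the Markov flip probabilities.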

7.5  Comparisons of Detection and Spatial-Based GBRM

This section outlines experimental comparisons of the GBRM-ML concept with GBRM-SL approaches, using both real and synthetic data. The section demonstrates the superior mapping capabilities of GBRM-ML estimation, using metrics such as the summed squared error and quantitative comparisons of cell classification errors.

7.5.1  Dataset 1: Synthetic Data, Single Landmark

Simulated data was first generated, using idealized Gaussian fluctuating noise power values and a Gaussian fluctuating landmark of unknown mean power intensity. The synthetic data is the same as that used previously in Section 7.2.3. One hundred Monte Carlo trials were performed, in which the occupancy posterior of each cell was propagated using the GBRM-ML algorithm (Equation 7.45) and the GBRM-SL algorithm, in which the PD is assumed a priori. For each trial, the assumed PD is increased by 0.01, to cover all possible values. As in Figure 7.3, the FAL is set at 0.1. Again, cells in which the posterior occupancy estimate was greater than 0.51 were declared occupied, and those less than 0.49 were declared empty. Figure 7.6 shows the percentage of missed detections (occupied cells declared empty) and false alarms (empty cells declared occupied) for each trial. Note that when the assumed DL equals the false alarm likelihood, no information is gained by either a detection or a non-detection; thus, the map remains in its initial state of P(m_0) = 0.5. The percentage of false negatives remains high, as expected, when the DL is less than the FAL, but quickly drops to zero as the DL increases, thus demonstrating the superior performance of GBRM-ML.


Figure 7.6  Comparison of simulated mapping results using online PD estimation and the classical occupancy models in which the likelihood is assumed. At pessimistically assumed values of PD for the classic GBRM-SL approach, the numbers of false positives (left graph) and false negatives (right graph) are dramatically greater than those obtained using the GBRM-ML approach.

7.5.2  Dataset 2: Real Experiments in the Parking Lot Environment

This section outlines a real application of the GBRM-DL method in the outdoor parking lot environment used in Chapters 5 and 6. This environment contains numerous landmarks of varying size, with landmark-dependent DLs. Herein lies the difficulty: obtaining the DLs for each object off-line using training data is infeasible, given the range of potential object types and hence DLs. The mobile platform of Figure 1.6, carrying the FMCW radar and scanning laser range finders, was remotely driven around the lot. The laser range finders were used for two reasons. First, due to the lasers' relatively high angular resolution, matched laser scans provided a means of determining the ground truth trajectory of the vehicle. Second, estimated grid maps were obtained, based on the lasers as well as the radar, for comparison. Also on the mobile platform, four optical encoders recorded the input velocity to each wheel, and a single-axis fiber optic gyroscope, mounted over the rear axle, gave estimates of the vehicle heading. In the urban surroundings of the experiments, the GPS data acquired was not accurate enough to determine ground truth location, due to multipath reflections and satellite outages. As the laser scans frequently obtained returns from the ground, automatic matching techniques failed, and therefore consecutive laser scans were manually matched to determine the true location of the vehicle at each update. Laser data typically returns the range to the first landmark detected along the beam. As shown in Chapter 2, Section 2.4, in the case of the radar, a single A-scope can in fact contain information from multiple landmarks (see Figure 2.10, for example), mainly due to the wider beamwidth (~1.8° compared with ~0.1° for the laser) and the ability of the radar beam to penetrate some foliage. The radar beam can penetrate and propagate into the trees and return information about multiple landmarks, whereas the laser is limited to one return per beam.
This optimistic approach therefore assumes that every measurement, detection and non-detection, along the entire beam is valid. That is, for each time step k, each peak in the data, under both the null and alternate detection hypotheses, is treated as a potential landmark, and no model of the sensor beam propagation properties is assumed. Detections registered

Figure 7.7  Comparison of the detections reported by the radar (left) for the entire parking lot loop with those of the laser (right). Notice the high false alarm rate for the radar detections. False alarms (ground and grass detections) are also present in the laser data, particularly along the central region from (–45, 10) to (15, 10).

over the test by the laser sensor and the radar sensor are shown in Figure 7.7. In this experiment, the parameters for the radar OS-CFAR detection module (Section 7.3.3 and Equation 7.34) were chosen as W = 20, kos = 30, and Pfa = 1 × 10−6. The figures show all OS-CFAR detections for the radar (left) and all returned range values for the laser range finder (right), superimposed onto the ground truth vehicle trajectory obtained by scan matching the laser data. An overview of the parking lot environment is shown in Figure 7.8, with the vehicle's path and other artifacts labeled. It should be noted that during the experiment, only one car was present in the lot. Using the radar detection sequence depicted in Figure 7.7, both GBRM-ML and GBRM-SL approaches were implemented. Approximately 300 sensor scans were registered over the test, with the final posterior estimates of the map shown in Figure 7.9. This shows mapping results from the GBRM-ML algorithm compared with GBRM-SL approaches using assumed or range-based likelihoods [7]. Visual inspection shows a reduction in false landmark declarations and an improved posterior map estimate.

7.5.2.1  Quantified Comparisons of GBRM-SL and GBRM-ML

Using the manually constructed ground truth map of the environment in Figure 7.10 (a section of which was previously used for ground truth purposes in Section 6.8.2), quantitative comparisons with GBRM-SL approaches can be generated that

Entrance passage

Vehicle path

Short wall

Tree

Low lying bush

Figure 7.8  An overview of the testing ground from which the data presented in Figures 7.7 and 7.9 was acquired. Also indicated is the path traversed by the vehicle. Only one car was present at time of data acquisition.

Figure 7.9  The resulting radar-based occupancy maps produced by the GBRM-ML algorithm (left) and GBRM-SL approaches with assumed likelihoods (right). Note the larger presence of falsely declared occupied cells using GBRM-SL methods. The only car present in the lot during the time of the experiment is highlighted with an ellipse.

Figure 7.10  Ground truth binary representation of the lot testing environment. The map was manually constructed from observation of the testing ground shown in Figure 7.8.

highlight the advantages of the GBRM-ML concept. The error metric adopted for this work is a modification of the sum of squared errors [19] over the estimated grid map, Mˆk, and the true map, M. For outdoor mapping applications, where the majority of cells are empty, the sum of squared errors, which weighs each occupied and empty cell equally, results in an uninformative measure, especially in the presence of large landmark detection uncertainty. This is a consequence of the total number of empty cells far exceeding the number of occupied cells. An error metric that weighs occupied and empty cell occupancy estimates based on their true numbers is therefore introduced. This is defined as the normalized averaged sum of the squared error (NASSE) metric,

\[
\mathrm{NASSE} = 0.5\left(\frac{1}{m_O}\sum_{i=1}^{m_O}\Bigl(P\bigl(m_k^i \mid ((z^{\mathrm{det}})_k)^i, m^i = 1\bigr) - 1\Bigr)^2 + \frac{1}{m - m_O}\sum_{i=m_O+1}^{m}\Bigl(P\bigl(m_k^i \mid ((z^{\mathrm{det}})_k)^i, m^i = 0\bigr) - 0\Bigr)^2\right) \tag{7.46}
\]
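In code, the NASSE metric of Equation 7.46 can be sketched as follows. This is an illustrative implementation; the function name and the list-based interface are assumptions:

```python
def nasse(p_occ, truth):
    """Normalized averaged sum of squared error over a grid map.

    p_occ -- posterior occupancy estimate for each cell
    truth -- ground truth label for each cell (1 = occupied, 0 = empty)
    """
    occ_err = [(p - 1.0) ** 2 for p, t in zip(p_occ, truth) if t == 1]
    emp_err = [(p - 0.0) ** 2 for p, t in zip(p_occ, truth) if t == 0]
    # Each class of cell is averaged over its own count, so the many
    # empty cells cannot swamp the few occupied ones.
    return 0.5 * (sum(occ_err) / len(occ_err) + sum(emp_err) / len(emp_err))
```

A perfect map scores 0, while an uninformative map with every cell at P = 0.5 scores 0.25, regardless of how unbalanced the occupied/empty cell counts are.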

Figure 7.11  Comparison of the NASSE metric between the estimated map posterior and the ground truth map. The result shows the monotonic reduction in the map error. For a given number of updates, the GBRM-ML approach outperforms the GBRM-SL methods, which use range-based likelihoods.

where m is the total number of cells in the map, M, and mO is the number of occupied map cells, determined from the ground truth map of Figure 7.10. This metric presents a normalized error measure for use in environments with largely unequal numbers of occupied and empty cells. Figure 7.11 shows the sequential NASSE error for both approaches. It can be seen that the rate of monotonic error reduction of the GBRM-ML approach exceeds that of the GBRM-SL method, with a reduced mapping error achieved in the final posterior map estimate. Figure 7.12 shows an error metric comparison for the case of noisy vehicle control inputs. In this case, the standard sum of squared error (SSE) metric shows monotonic error reduction, whereas the NASSE clearly shows mapping inconsistency. This figure shows that the NASSE metric is a useful technique for quantifying the true map error.

7.5.2.2  Influence of Detection Module Parameters

To generate detection measurements from the A-scope data for both the range-likelihood and detection-likelihood methods implemented, the adaptive OS-CFAR processor introduced in Chapter 3, Section 3.5.2 was adopted. The parameters to be determined include the sliding window width, W, which samples local power readings and generates an estimate of the local noise in each range bin, q, as well as the "K-value," kos, and the desired false alarm rate, FAL. See Section 3.5.2 for more details. To synthesize an environment consisting primarily of landmarks with low detection probability, the FAL parameter of the detection algorithm can be set arbitrarily low. The NASSE error plots for excessively low and high FAL values are shown in Figure 7.13. At extremely low FAL, further advantages of the GBRM-DL

Figure 7.12  Comparison of the error metrics for a loop with noise injected into the location estimates. This figure shows the susceptibility of the classical sum of squared error (SSE) metric for maps containing unequal numbers of occupied and empty cells. The NASSE error metric clearly indicates a non-converging map due to multiple missed landmarks, whereas the standard metric incorrectly shows a monotonically decreasing error.

algorithm are evident, since it explicitly considers the detection statistics, specifically the missed detection likelihoods, in the mapping recursion. For range-based methods, if there is no range reading as the result of a missed detection, no range likelihood will exist. GBRM-SL approaches therefore typically assign intuitive occupancy measurements in regions of no range reading [7, 10, 20, 21]. At high rates of false alarm, an increased rate of error reduction is evident with the GBRM-ML approach, due to the formulation incorporating the detector's false alarm probability. This demonstrates the advantages of the GBRM-ML approach in the presence of both landmarks with low detection probability and high rates of false alarm. The OS-CFAR parameters W and kos set the upper limit on the expected environmental landmark density along a single A-scope in the environment. The estimate of the noise signal power is taken as the kos-th ordered sample, with W = 20 and kos = (3W/2) = 30. The range resolution of 25 cm per range bin allows distances of (30/4) m = 7.5 m to comprise empty space and at most (10/4) m = 2.5 m to comprise landmarks [22]. The OS-CFAR detection routine is quite robust to changing

Figure 7.13  Comparison of the NASSE metric in the parking lot environment for very low detection probability and various values of the OS-CFAR false alarm probability, ranging from 1 × 10–30 to 1 × 10–1. In the graph, this probability is labeled Pfa.

window size; however, increased error at excessively small or large window sizes is evident, as shown in Figure 7.14. While these parameters may influence the detections registered by the imaging radar, this influence applies to all applications adopting such a sensor. The results presented highlight that, given any set of detector parameters, the stochastic GBRM-ML approach outperforms classical GBRM-SL approaches, as it theoretically incorporates the statistics resulting from a given parameter choice into the mapping recursion.

Figure 7.14  NASSE for both approaches for varying detection sliding window width, W, with Pfa = 1 × 10–6 and kos = (3/2)W. For a given set of detector parameters, the GBRM-ML approach outperforms the range-likelihood based methods, as the detection statistics are integrated into the mapping algorithm.
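As an illustration of the roles of W and kos, an OS-CFAR detector over a single A-scope can be sketched as follows. This is a simplified sketch under the assumption that the window comprises W leading and W lagging samples around the cell under test, and the scale factor `alpha` stands in for the threshold that would normally be derived from the desired Pfa:

```python
import numpy as np

def os_cfar_detect(power, W=20, k_os=30, alpha=5.0):
    """OS-CFAR sketch over one A-scope (vector of received power values).

    For each range bin, the local noise power q is estimated as the
    k_os-th ordered sample of the 2W surrounding bins (W leading and
    W lagging, excluding the cell under test); a detection is declared
    when the bin's power exceeds alpha * q.  In a real detector, alpha
    is derived from the desired false alarm probability; here it is an
    illustrative constant.
    """
    power = np.asarray(power, dtype=float)
    detections = []
    for i in range(W, len(power) - W):
        window = np.concatenate((power[i - W:i], power[i + 1:i + 1 + W]))
        q = np.sort(window)[k_os - 1]  # k_os-th ordered sample: noise estimate
        if power[i] > alpha * q:
            detections.append(i)
    return detections
```

Because the threshold adapts to an order statistic rather than the window mean, a few strong neighboring returns inside the window do not mask a genuine target, which is why the routine is relatively robust to the window size.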

7.5.3  Dataset 3: A Campus Environment

This dataset comprises an outdoor semi-urban environment within a university campus. Due to the practical challenges of obtaining a ground truth binary map of such an extensive environment, a high-resolution, undistorted (1 pixel/m) plan-view satellite image, shown in Figure 7.15, was used to provide map validation. The approximately 500-m-long path traversed by the vehicle is also shown in white. Figure 7.16 shows the same image with superimposed hypothesized landmarks returned by the GBRM-ML algorithm. While quantification is challenging in such an environment, the results show good correlation with the satellite image, as numerous trees and buildings are clearly and accurately registered. From a comparison of the maps produced by the radar using the GBRM-ML algorithm in Figure 7.17 and the lasers in Figure 7.18, the merits of radar as an outdoor exteroceptive sensor are evident. Due to the multiple-landmark-per-bearing-angle detection capability of the radar sensor, far more detail is apparent in its map than in that of the single-landmark-per-bearing-angle laser. As an example, the Heritage building (labeled in Figure 7.15), which is partially occluded by the moat, is visible and mapped in detail in the radar map of Figure 7.17. In the laser-based map of Figure 7.18, large sections of this building remain unmapped due to partial occlusion by the moat. As expected, although the mapping quality is difficult to verify, the level of penetration into the foliage opposite the buildings can be seen in much greater detail in the radar-based map when compared with the laser-based map.

Foliage

Moat

Heritage building

Figure 7.15  A satellite image of the testing grounds within university campus. This provides some ground validation for the GBRM-ML algorithm. The estimated vehicle path, obtained from manually matching successive laser scans, is superimposed in white.

Figure 7.16  The satellite image of Figure 7.15 with radar-based landmark detections superimposed.

Figure 7.17  The final posterior map estimate from the radar of the outdoor campus environment. Note the large information content of the resulting map, due to the radar's ability to detect multiple landmarks at a single bearing angle.

Figure 7.18  The final posterior map estimates from the laser range finder corresponding to Figure 7.17.

7.6  Summary

This chapter presented a mapping algorithm for jointly estimating the occupancy variable along with its detection measurement likelihood. It showed that the measurement likelihood typically used in autonomous robotics occupancy grid algorithms should in fact be modeled as a density function as opposed to a deterministic function. By examining the measurement model and using signal detection theory, it was shown that the occupancy random variable can be calculated in closed form without the need for heuristic models. However, the measurement likelihood used in the occupancy posterior calculation is itself an estimated entity, thus requiring a joint estimation framework to propagate both the occupancy and measurement likelihood estimates. The standard discrete Bayes estimation framework therefore no longer applies to the occupancy grid problem, and a particle filter approach was demonstrated. Using Markov transitions, the effects of occlusions and the appearance of new landmarks in the region were integrated into the algorithm. Particle representations allow the propagation of the measurement likelihoods, which are derived through a nonlinear function of the radar's received power, subject to non-Gaussian noise. Weights for these particles were obtained from the likelihood ratio used by the detector. The resulting set of posterior particles was then resampled, and the recursion was established. This concept was demonstrated for the FMCW radar used in earlier chapters in an outdoor environment. Advantage was taken of the fact that the radar provides access to unprocessed range data, allowing custom-designed landmark detectors to be applied. The

framework then allowed for the accurate assignment of map occupancy probabilities, irrespective of the hypothesis chosen by the detector. Mapping results were presented for loops in an outdoor parking lot as well as a trajectory within a university campus. Comparisons with maps produced from GBRM-SL approaches, ground-truth maps, laser occupancy maps, as well as images showed the merits of the GBRM-ML concept. However, this chapter focused mainly on the online estimation of the DL and assumed the false alarm likelihood to be known. As introduced in Chapter 3, using detection theory, CFAR thresholds are set using an a priori assumption on the distribution of the noise signal, which is also assumed to be homogeneous. Significant work needs to be carried out to accurately estimate the false alarm likelihood in regions of the map that deviate from the homogeneous noise assumption, as such deviations can have drastic consequences for the occupancy posterior. Evidential modeling methods may be applied, as there would be large uncertainty as to the true false alarm probability. This could further improve the mapping accuracy and is a possible avenue for future research.
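The weight-and-resample step summarized above can be caricatured for a single cell as follows. This is only an illustrative sketch, not the chapter's implementation: each particle carries a hypothesized detection likelihood, the weights follow the detector's likelihood ratio under a known false alarm likelihood, and the set is then resampled:

```python
import random

def pf_update_dl(particles, detection, p_fa):
    """One weight-and-resample step for a set of detection-likelihood
    particles attached to an occupied cell.

    particles -- hypothesized P_D values
    detection -- did the detector fire on this cell at this scan?
    p_fa      -- assumed (known) false alarm likelihood
    """
    if detection:
        weights = [p / p_fa for p in particles]                   # detection
    else:
        weights = [(1.0 - p) / (1.0 - p_fa) for p in particles]   # non-detection
    total = sum(weights)
    weights = [w / total for w in weights]
    # Multinomial resampling (a systematic scheme would also do).
    return random.choices(particles, weights=weights, k=len(particles))
```

Repeated detections concentrate the particle set on high P_D values, while repeated non-detections concentrate it near zero.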

7.7  Bibliographical Remarks

Measurement likelihood filtering was first introduced into the GBRM problem by Mullane et al. in [3]. In cluttered outdoor or even underwater environments where numerous false alarms occur, "landmark management" techniques are often used to identify "unreliable" landmarks and delete them from the map, in order to reduce the possibility of false data association decisions. From the literature, two common methods of identifying true landmarks from noisy measurements are the discrete Bayes filter [7, 23], which propagates a landmark existence variable obtained from a sensor model, and the "geometric landmark track quality" measure [24, 25], which is a function of the innovation for that landmark. The discrete Bayes filter approach is more commonly used in an occupancy grid framework for map-building applications.

Signal processing problems are not new to the field of autonomous mapping and landmark detection, but they are generally treated in a simplified manner. In the underwater domain, sonars also return a power versus range vector, similar to the A-scope, which is difficult to interpret. In his thesis [26], S. Williams outlined a simple landmark-detection technique for autonomous navigation in a coral reef environment. The maximum SNP exceeding an a priori constant threshold was chosen as the point target. Clearly, this method of extraction results in a large loss of information, as the power information at all ranges, except that which is declared a landmark, is disregarded. This can compromise the quantity of information present in the map. In [27], S. Majumder attempted to overcome this loss by fitting a sum of Gaussian probability density functions to the raw sensor data; however, this represents a likelihood distribution in the range space of a single point landmark, which can be misleading, as the data can be the result of multiple landmarks, leading to the association of non-corresponding landmarks. A. Foessel [21] also demonstrated radar mapping capability through the use of a log odds approach using a heuristic scoring scheme. Impressive results were

produced; however, detection statistics were not considered, and mathematical justification for the model was also not provided. Moravec and Elfes demonstrated that a 2D occupancy grid could be generated from a 2D model for a sonar sensor after a single range/bearing reading [28]. At the time of publication (1989), the occupancy grid framework and the sensor modeling techniques summarized here presented a new method of fusing sensor readings over time in a stochastic manner, so as to deal with sensor and prior occupancy uncertainty [28].

However, the concept also raised many questions. One concern was the presence of redundant readings from a sensor. The assumption under Elfes's occupancy grid mapping technique is that all readings from a sensor provide independent information related to the occupancy value of a single cell in the occupancy grid. However, a single sensor reading actually provides information about the combined probability of occupancy of a set of cells, rather than a single cell. Further, for sensors with a wide beamwidth, such as sonar, all the cells at the leading edge of the beam are often assigned high probabilities of occupancy, when in fact usually only one cell is responsible for the sonar echo. Sonar, and in fact radar, sensors are also well known to suffer from specular reflections, in which the transmitted energy bounces specularly from smooth walls, resulting in either no detection of the received energy at all or large overestimates in the range to the detection. Modeling such sensor data as Gaussian in nature is not realistic, since it will tend to increase the occupancy confidence of cells far away from the true object itself. In short, such grid mapping algorithms often generate maps that are inconsistent with the sensor data, particularly in cluttered environments.
After its introduction by Elfes, much of the grid-based mapping literature distributed occupancy uncertainty in the spatial domain to model the uncertainty of the sensing and map estimation process [4, 28]. However, while this environmental representation deals with detection and spurious measurements to propagate the landmark existence estimate, such a representation in its mathematical structure does not inherently encapsulate and propagate the spatial uncertainty of sensor measurements [3].

References

[1] Leonard, J. J., and H. F. Durrant-Whyte, "Dynamic Map Building for an Autonomous Mobile Robot." In Proceedings of the IEEE International Workshop on Intelligent Robots and Systems, Ibaraki, Japan, July 1990, pp. 89–96.
[2] Ribas, D., P. Ridao, J. D. Tardos, and J. Niera, "Underwater SLAM in a Marina Environment." In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, October 2007.
[3] Mullane, J., M. Adams, and W. S. Wijesoma, "Robotic Mapping Using Measurement Likelihood Filtering," International Journal of Robotics Research, Vol. 28, No. 2, 2009, pp. 172–190.
[4] Thrun, S., W. Burgard, and D. Fox, Probabilistic Robotics, Cambridge, MA: MIT Press, 2005.
[5] Moravec, H., and A. E. Elfes, "High Resolution Maps from Wide Angle Sonar." In Proceedings of the IEEE International Conference on Robotics and Automation, New Jersey, March 1985, pp. 116–121.
[6] Grisetti, G., C. Stachniss, and W. Burgard, "Improved Techniques for Grid Mapping with Rao-Blackwellized Particle Filters," IEEE Transactions on Robotics, Vol. 23, No. 1, February 2007, pp. 34–45.
[7] Thrun, S., "Learning Occupancy Grids with Forward Models," Autonomous Robots, Vol. 15, No. 2, 2003, pp. 111–127.
[8] Mullane, J., M. D. Adams, and W. S. Wijesoma, "Evidential versus Bayesian Estimation for Radar Map Building." In Proceedings of the 9th IEEE International Conference on Control, Automation, Robotics and Vision (ICARCV 2006), Singapore, December 2006.
[9] Smith, R., M. Self, and P. Cheeseman, "A Stochastic Map for Uncertain Spatial Relationships." In The Fourth International Symposium of Robotics Research, 1987, pp. 467–474.
[10] Konolige, K., "Improved Occupancy Grids for Map Building," Autonomous Robots, Vol. 4, No. 4, 1997, pp. 351–367.
[11] Wijesoma, W. S., L. D. L. Perera, and M. D. Adams, "Toward Multidimensional Assignment Data Association in Robot Localization and Mapping," IEEE Transactions on Robotics, Vol. 22, No. 2, April 2006, pp. 350–365.
[12] Kay, S., Fundamentals of Statistical Signal Processing, Vol. II: Detection Theory, Englewood Cliffs, NJ: Prentice Hall, 1998.
[13] Gandhi, P. P., and S. A. Kassam, "An Adaptive Order Statistic Constant False Alarm Detector." In Proceedings of the IEEE International Conference on Systems Engineering, Ohio, August 1989, pp. 85–88.
[14] Rohling, H., and R. Mende, "OS CFAR Performance in a 77-GHz Radar Sensor for Car Applications." In Proceedings of the CIE International Conference of Radar, October 1996, pp. 109–114.
[15] Kreyszig, E., Advanced Engineering Mathematics, New York: John Wiley & Sons, 2006.
[16] Skolnik, M. I., Radar Handbook, 2nd Ed., New York: McGraw-Hill, 1990.
[17] Gordon, N. J., D. J. Salmond, and A. F. M. Smith, "Novel Approach to Nonlinear/Non-Gaussian Bayesian State Estimation," IEE Proceedings on Radar and Signal Processing, Vol. 140, No. 2, April 1993, pp. 107–113.
[18] Arulampalam, M. S., S. Maskell, N. Gordon, and T. Clapp, "A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking," IEEE Transactions on Signal Processing, Vol. 50, No. 2, February 2002, pp. 174–188.
[19] Martin, M. C., and H. P. Moravec, "Robot Evidence Grids," Technical Report CMU-RI-TR-96-06, Carnegie Mellon University, Pittsburgh, PA, March 1996.
[20] Elfes, A., Occupancy Grids: A Probabilistic Framework for Robot Perception and Navigation, PhD Thesis, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, May 1989.
[21] Foessel, A., J. Bares, and W. R. L. Whittaker, "Three-Dimensional Map Building with MMW RADAR." In Proceedings of the 3rd International Conference on Field and Service Robotics, Helsinki, Finland, June 2001.
[22] Rohling, H., "Radar CFAR Thresholding in Clutter and Multiple Target Situations," IEEE Transactions on Aerospace and Electronic Systems, Vol. 19, No. 4, 1983, pp. 608–621.
[23] Montemerlo, M., S. Thrun, D. Koller, and B. Wegbreit, "FastSLAM 2.0: An Improved Particle Filtering Algorithm for Simultaneous Localization and Mapping that Provably Converges." In Proceedings of the 18th International Joint Conference on Artificial Intelligence, CA, 2003, pp. 1151–1156.
[24] Dissanayake, M. W. M. G., P. Newman, S. Clark, H. F. Durrant-Whyte, and M. Csorba, "A Solution to the Simultaneous Localization and Map Building (SLAM) Problem," IEEE Transactions on Robotics and Automation, Vol. 17, No. 3, June 2001, pp. 229–241.
[25] Makarsov, D., and H. F. Durrant-Whyte, "Mobile Vehicle Navigation in Unknown Environments: A Multiple Hypothesis Approach," IEE Proceedings of Control Theory Applications, Vol. 142, No. 4, July 1995, pp. 385–400.
[26] Williams, S., Efficient Solutions to Autonomous Mapping and Navigation Problems, PhD Thesis, Australian Centre for Field Robotics, University of Sydney, 2001.
[27] Majumder, S., Sensor Fusion and Feature Based Navigation for Subsea Robots, PhD Thesis, University of Sydney, August 2001.
[28] Elfes, A., "Using Occupancy Grids for Mobile Robot Perception and Navigation," Computer, Vol. 22, No. 6, June 1989, pp. 46–57.

Chapter 8

Feature-Based Robotic Mapping with Random Finite Sets

8.1  Introduction

The occupancy grid approach introduced in Chapter 4 and expanded in Chapter 7 propagates estimates of landmark existence on a grid with a fixed, predetermined number of cells, with assumed known robot location. In environmental representations of this type, the number of map states is therefore predefined and constant, and only the cells' "contents," which correspond to the likelihood of an object's existence at that cell's coordinates, need to be updated. Hence, the grid, which fully represents the environment, can be represented mathematically by either an array or matrix of predefined, fixed dimensions. Grid-based approaches, however, suffer from many disadvantages. Standard occupancy grid maps do not maintain the dependencies of the occupancy probabilities of each cell. Also, a full map posterior is generally intractable, due to the huge combinatorial number of maps that can be represented by a grid. Further, the grid size and its resolution (cell size) must be determined once and for all prior to mapping, thus restricting the area that can be mathematically represented by the robot. Feature-based robotic mapping (FBRM) approaches utilize a compressed form of the sensor data, known as features (such as point clusters, circles, lines, and corners). The feature-based (FB) map representation has gained widespread popularity, particularly in the outdoor robotics domain. This is due to its ability to estimate large-scale maps and provide correctional information for simultaneous vehicle pose estimation, based only on this compressed representation of the sensor data. FBRM approaches can be computationally appealing, since few features need to be detected and updated per sensor scan, and feature-to-feature and feature-to-vehicle correlations can be maintained.
They fundamentally differ from their grid-based counterparts in that they are required to estimate the location of an initially unknown number of features. In this chapter, FB map-only estimation is addressed (i.e., the vehicle trajectory is assumed known). As introduced in Chapter 4, Section 4.7, estimating an FB map requires the joint propagation of the FB map density, encapsulating uncertainty in feature number and location. In Chapter 4, Section 4.7.1, the detrimental consequences of representing a stochastic map under a vector-based framework were discussed. This chapter therefore continues the general RFS approach introduced in Chapter 4, Section 4.7, and presents its implementation details for FBRM. Where necessary, references to the concepts introduced in Chapter 4, Section 4.7 are made throughout. The joint propagation of the FB map density is therefore addressed, which leads to an optimal map estimate in

the presence of unknown map size, spurious measurements, feature detection, and data association uncertainty. The implementation details of the first-order moment recursion, the probability hypothesis density (PHD) filter, are presented with land-based FMCW radar data. The feasibility of the PHD filter framework is demonstrated, particularly in situations of high clutter density and large data association ambiguity. This chapter establishes the tools for a more generalized representation of the FB map, which is a fundamental component of the more challenging SLAM problem to follow in Chapter 9. The chapter begins in Section 8.2 by substituting the unknown map state, conditioned on the known robot pose, into the generalized PHD filter predictor and corrector equations first introduced in Chapter 4, Section 4.7.5. The PHD filter is then implemented in the form of a Gaussian mixture (GM) function in Section 8.3. Referred to as the Gaussian mixture model (GMM)-PHD-FBRM filter, each Gaussian is conditioned on the known vehicle pose. This allows the weights of each Gaussian to be determined via the PHD filter and the parameters of each Gaussian to be updated via a standard extended Kalman filter (EKF). Also in this section, details of how new features can be integrated into the estimator, via a feature proposal strategy, are presented. Section 8.3 culminates with a pseudocode example of the GMM-PHD-FBRM filter. A note on computational complexity, with comparisons to that of EKF-FBRM, is given in Section 8.4. Radar-based results are demonstrated in Section 8.5, with comparisons to the classical NN-EKF approach introduced in Chapter 4, Section 4.6.1. Finally, after the Summary in Section 8.6, Section 8.7 offers some bibliographical comments on related robotic mapping work and summarizes the latest developments in the related field of multi-object filtering.

8.2  The Probability Hypothesis Density (PHD)-FBRM Filter

The state of interest in this chapter contains the feature positions m given the robot pose Xk, and its estimation is referred to as feature-based robotic mapping (FBRM). Due to the uncertainty in landmark existence as well as location resulting from radar-based detection algorithms (Chapter 3), the RFS framework of Chapter 4, Section 4.7 and its first-order moment approximation, the PHD filter, is to be applied. Hence, in the general PHD-based state predictor/corrector Equations 4.66 and 4.67,

Y_k → m | X_k    (8.1)

and from Equation 4.66 the prediction of the map intensity function υ_{k|k−1}(Y_k) becomes

υ_{k|k−1}(Y_k) → υ_{k|k−1}(m|X_k) = υ_{k−1|k−1}(m|X_{k−1}) + b_k(m|X_k)    (8.2)

where b_k(m|X_k) is now the PHD of the new feature RFS, B(X_k). Note that υ_{k−1|k−1}(m|X_{k−1}) corresponds to the estimate of the intensity function at time k − 1, given all observations up to and including time k − 1 and the robot's location. Again, for ease of notation, this will be referred to as υ_{k−1|k−1}(·) in future instances.


The PHD filter corrector Equation 4.67, υ_{k|k}(Y_k), then becomes



υ_{k|k}(Y_k) → υ_{k|k}(m|X_k) = υ_{k|k−1}(m|X_k) [ 1 − P_D(m|X_k) + Σ_{z∈Z_k} P_D(m|X_k) g(z|m, X_k) / ( c_k(z|X_k) + ∫ P_D(ξ_k) g(z|ξ_k) υ_{k|k−1}(ξ_k) dξ_k ) ]    (8.3)

where υ_{k|k−1}(m|X_k) is the predicted intensity function from Equation 8.2, ξ_k is a subset of M_k, and

P_D(m|X_k) = the probability of detecting a feature at m, from vehicle location X_k;
g(z|m, X_k) = the spatial measurement model of the sensor at time k;
c_k(z|X_k) = the intensity of the clutter RFS C_k(X_k) (in Equation 4.55) at time k.

Hence the FBRM filter estimates υ_{k|k}(m|X_k), which, under a Gaussian mixture (GM) implementation, has the form of the PHD (intensity function) shown in Figures 4.11 and 4.12. In the FBRM case, under this representation, each Gaussian is conditioned on the known vehicle state X_k.
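As a numerical illustration of the intensity-function property invoked throughout this chapter (Equation 4.64: the integral of the PHD over a region gives the expected number of features in that region), the following sketch evaluates a small one-dimensional GM intensity and integrates it. The component weights, means, and variances are arbitrary illustrative values, not from the text.

```python
import numpy as np

def gm_intensity(x, weights, means, variances):
    """Evaluate a 1-D GM intensity v(x) = sum_j w_j N(x; mu_j, P_j)."""
    x = np.atleast_1d(x)[:, None]
    norm = 1.0 / np.sqrt(2.0 * np.pi * variances)
    return (weights * norm * np.exp(-0.5 * (x - means) ** 2 / variances)).sum(axis=1)

# Arbitrary components; the weights need not sum to 1 -- this is an
# intensity (first-order moment), not a probability density.
w = np.array([1.0, 0.8, 1.2])
mu = np.array([-5.0, 0.0, 6.0])
P = np.array([0.5, 0.5, 0.5])

# Numerically integrate the intensity over (effectively) the whole space.
xs = np.linspace(-20.0, 20.0, 4001)
expected_n = np.trapz(gm_intensity(xs, w, mu, P), xs)
print(round(expected_n, 3))  # integral -> w.sum() = 3.0 expected features
```

Summing the Gaussian weights directly gives the same expected feature count; this is the sense in which the component masses of Section 8.3 represent feature numbers.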

8.3  PHD-FBRM Filter Implementation

This section outlines a suitable implementation of the PHD-FBRM framework. For accurate comparison with vector-based methods, a Kalman-based GM implementation is adopted, referred to as the Gaussian mixture model (GMM)-PHD-FBRM filter. The FBRM filter estimate υ_{k|k}(m|X_k), under a GM implementation, has the form of the PHD (intensity function) shown in Figures 4.11 and 4.12. In the FBRM case, under this representation, each Gaussian is conditioned on the known vehicle state X_k. Recall from Chapter 4, Section 4.7.2.2, that the map is modeled as static in feature location but monotonically increasing in size as new features are observed. The Gaussian components of the mixture represent the locations of features in the map, while their masses represent the number of features in a given region. In this case, let the prior map intensity, υ_{k−1|k−1}, be a GM of the form

υ_{k−1|k−1}(m|X_{k−1}) = Σ_{j=1}^{J_{k−1}} w_{k−1}^{(j)} N(m; μ_{k−1}^{(j)}, P_{k−1}^{(j)})    (8.5)

which is a mixture of J_{k−1} Gaussians, with w_{k−1}^{(j)}, μ_{k−1}^{(j)}, and P_{k−1}^{(j)} being their corresponding weights, means, and covariances, respectively. Let the new feature intensity b_k(m|X_k) in Equation 8.2 at time k also be a GM of the form



b_k(m|X_k) = Σ_{j=1}^{J_{b,k}} w_{b,k}^{(j)} N(m; μ_{b,k}^{(j)}, P_{b,k}^{(j)})    (8.6)

Feature-Based Robotic Mapping with Random Finite Sets

where J_{b,k} defines the number of Gaussians in the new feature intensity at time k, and w_{b,k}^{(j)}, μ_{b,k}^{(j)}, and P_{b,k}^{(j)} determine the shape of the new feature GM proposal density according to a chosen strategy. This is analogous to the proposal distribution in the particle filter of FastSLAM 2.0 [1] and provides an initial estimate of the new features entering the map. More details on the chosen strategy are provided in Section 8.3.1. Under the PHD predictor/corrector equations, the predicted intensity υ_{k|k−1} will also be a GM,

υ_{k|k−1}(m|X_k) = Σ_{j=1}^{J_{k|k−1}} w_{k|k−1}^{(j)} N(m; μ_{k|k−1}^{(j)}, P_{k|k−1}^{(j)})    (8.7)

which consists of J_{k|k−1} = J_{k−1} + J_{b,k} Gaussians representing the union of the prior map intensity, υ_{k−1|k−1}(m|X_{k−1}), and the proposed new feature intensity according to Equation 8.2, where

w_{k|k−1}^{(j)} = w_{k−1}^{(j)},  μ_{k|k−1}^{(j)} = μ_{k−1}^{(j)},  P_{k|k−1}^{(j)} = P_{k−1}^{(j)}   for j ∈ {1, …, J_{k−1}}  (previously observed features)

w_{k|k−1}^{(j)} = w_{b,k}^{(j−J_{k−1})},  μ_{k|k−1}^{(j)} = μ_{b,k}^{(j−J_{k−1})},  P_{k|k−1}^{(j)} = P_{b,k}^{(j−J_{k−1})}   for j ∈ {J_{k−1}+1, …, J_{k|k−1}}  (newly observed features)

Since the measurement likelihood is also of Gaussian form, it can be seen from Equation 8.3 that the posterior intensity, υ_{k|k}, is then also a GM, given by

υ_{k|k}(m|X_k) = υ_{k|k−1}(m|X_k) [1 − P_D(m|X_k)] + Σ_{z∈Z_k} Σ_{j=1}^{J_{k|k−1}} υ_{G,k}^{(j)}(z, m|X_k)    (8.8)

From the general PHD filter update Equation 8.3, the components of the above equation are given by



υ_{G,k}^{(j)}(z, m|X_k) = w_k^{(j)}(z|X_k) N(m; μ_{k|k}^{(j)}, P_{k|k}^{(j)})    (8.9)

w_k^{(j)}(z|X_k) = P_D(m|X_k) w_{k|k−1}^{(j)} q^{(j)}(z, X_k) / ( c(z) + Σ_{i=1}^{J_{k|k−1}} P_D(m|X_k) w_{k|k−1}^{(i)} q^{(i)}(z, X_k) )    (8.10)

where q^{(i)}(z, X_k) = N(z; h_spatial(μ_{k|k−1}^{(i)}), E_k^{(i)}), with h_spatial(·) being the general, nonlinear, spatial measurement model introduced in Equation 4.28, Chapter 4, Section 4.5.1, and E_k^{(i)} defined next. The components of the jth Gaussian, μ_{k|k}^{(j)} and P_{k|k}^{(j)}, can be obtained from the standard EKF update equations

E_k^{(j)} = R_k + ∇_X h_spatial P_{k|k−1}^{(j)} (∇_X h_spatial)^T    (8.11)

K_k^{(j)} = P_{k|k−1}^{(j)} (∇_X h_spatial)^T [E_k^{(j)}]^{−1}    (8.12)

μ_{k|k}^{(j)} = μ_{k|k−1}^{(j)} + K_k^{(j)} (z − h_spatial(μ_{k|k−1}^{(j)}))    (8.13)

P_{k|k}^{(j)} = [I − K_k^{(j)} ∇_X h_spatial] P_{k|k−1}^{(j)}    (8.14)

with ∇_X h_spatial being the Jacobian of the measurement equation with respect to the landmark's estimated location, numerically evaluated at the current best estimate X̂_{k−1|k−1}. R is the spatial measurement noise matrix, a block diagonal matrix consisting of the submatrices for each individual feature, according to Equation 4.24, and K_k^{(j)} is the Kalman gain for the jth Gaussian (Section 4.6.1). As stated previously, the clutter RFS, C_k, is assumed Poisson distributed in number and uniformly spaced over the mapping region [2, 3]. Therefore the clutter intensity is given by

c_k(z) = n_c V U(z)    (8.15)

where n_c is the clutter rate per scan, V is the volume of the surveillance region, and U(z) denotes a uniform distribution on the measurement space.

8.3.1  The FBRM New Feature Proposal Strategy

While any arbitrary strategy can be used for the new feature intensity b_k(m|Z_{k−1}, X_{k−1}), an intuitive strategy closely related to vector-based implementations is used in this work. As seen in Equation 8.6, given a GM intensity representation, the mean, covariance, and weight of each Gaussian must be chosen. The GM component means and covariances determine the spatial distribution on the likely location of new features entering the map, with the sum of the weights, Σ_{j=1}^{J_{b,k}} w_{b,k}^{(j)}, then providing an estimate of the expected number of new features to appear at time k. The new feature intensity at time k, b_k(m|Z_{k−1}, X_{k−1}), is adaptively determined by the previous measurements Z_{k−1} and the previous vehicle state X_{k−1}. The components of the GM of Equation 8.6 are then determined according to

w_{b,k}^{(j)} = 0.01,   μ_{b,k}^{(j)} = (h_spatial)^{−1}(z_{k−1}^{(j)}, X_{k−1})    (8.16)

P_{b,k}^{(j)} = ∇_X h_spatial(μ^{(j)}, X_{k−1}) R_k [∇_X h_spatial(μ^{(j)}, X_{k−1})]^T    (8.17)

Table 8.1  GMM-PHD-FBRM-Birth Algorithm

GMM-PHD-FBRM-Birth (Z_{k−1}, X_{k−1})
// Implementation of Section 8.3.1 (any strategy is valid)
// For each measurement
1. for i = 1 to z_{k−1} do
//   Initialize the mean
2.   μ_{b,k}^{(i)} = (h_spatial)^{−1}(z_{k−1}^{(i)}, X_{k−1})
//   Initialize the covariance (Equation 8.17)
3.   P_{b,k}^{(i)} = ∇_X h_spatial(μ_{b,k}^{(i)}, X_{k−1}) R_k [∇_X h_spatial(μ_{b,k}^{(i)}, X_{k−1})]^T
//   Initialize the weight
4.   w_{b,k}^{(i)} = α
5. end for
// Set the number of birth components
6. J_{b,k} = z_{k−1}
// Construct the birth PHD
7. b_k(m|Z_{k−1}, X_{k−1}) = {μ_{b,k}^{(i)}, P_{b,k}^{(i)}, w_{b,k}^{(i)}}_{i=1}^{J_{b,k}}
8. return (b_k(m|Z_{k−1}, X_{k−1}))
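As a concrete instance of Table 8.1, the sketch below builds birth components for a hypothetical 2-D range-bearing sensor. The inverse measurement model and the use of the Jacobian of that inverse with respect to z (one common way to realize the covariance mapping of Equation 8.17) are assumptions for illustration; `alpha` plays the role of the fixed birth weight (0.01 in Equation 8.16).

```python
import numpy as np

def birth_proposal(z_prev, pose_prev, R, alpha=0.01):
    """Hypothetical GM birth proposal for a 2-D range-bearing sensor.

    Each previous detection z = (range, bearing) spawns one Gaussian:
    mean from the inverse measurement model, covariance from the noise R
    mapped through the Jacobian of that inverse model, fixed weight alpha.
    """
    x, y, phi = pose_prev
    components = []
    for r, theta in z_prev:
        a = phi + theta
        mu = np.array([x + r * np.cos(a), y + r * np.sin(a)])
        # Jacobian of the inverse measurement model w.r.t. z = (r, theta)
        G = np.array([[np.cos(a), -r * np.sin(a)],
                      [np.sin(a),  r * np.cos(a)]])
        P = G @ R @ G.T
        components.append((alpha, mu, P))
    return components

R = np.diag([0.1 ** 2, np.deg2rad(1.0) ** 2])   # assumed range/bearing noise
birth = birth_proposal([(10.0, 0.0)], (0.0, 0.0, 0.0), R)
w, mu, P = birth[0]
print(mu)  # a feature proposed 10 m directly ahead of the robot
```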

Therefore, the implementation initially considers all detections at time k − 1 to be potential new features at time k. This allows features of low detection probability, which perhaps only become detectable at close range, to be reliably estimated.

8.3.2  Gaussian Management and State Estimation

Note from Equation 8.8 that every measurement is combined with every Gaussian component, resulting in an explosive growth in the number of Gaussians in the posterior intensity. Therefore, as with other GM implementations [4], pruning and merging operations are required. Gaussians that are determined to be sufficiently close to each other, based on a Mahalanobis distance threshold, are merged into a single Gaussian, as in [5]. Recall from Equation 4.64 that the integral of the intensity function over a region of the space of features equals the number of features

Table 8.2  GMM-PHD-FBRM-Predict Algorithm

GMM-PHD-FBRM-Predict (Z_{k−1}, X_{k−1}, υ_{k−1|k−1}(m|X_{k−1}))
// To obtain Equation 8.7
// PHD prediction: existing components
1. for i = 1 to J_{k−1} do
//   Static map assumption
2.   μ_{k|k−1}^{(i)} = μ_{k−1|k−1}^{(i)},  P_{k|k−1}^{(i)} = P_{k−1|k−1}^{(i)},  w_{k|k−1}^{(i)} = w_{k−1|k−1}^{(i)}
3. end for
// Generate map birth components
4. b_k(m|Z_{k−1}, X_{k−1}) = GMM-PHD-FBRM-Birth (Z_{k−1}, X_{k−1})
// PHD prediction: new components
5. for i = 1 to J_{b,k} do
//   Components from the birth proposal b_k(m|Z_{k−1}, X_{k−1})
6.   μ_{k|k−1}^{(J_{k−1}+i)} = μ_{b,k}^{(i)},  P_{k|k−1}^{(J_{k−1}+i)} = P_{b,k}^{(i)},  w_{k|k−1}^{(J_{k−1}+i)} = w_{b,k}^{(i)}
7. end for
// Total number of Gaussian components in υ_{k|k−1}(m|X_k)
8. J_{k|k−1} = J_{k−1} + J_{b,k}
// The predicted map PHD
9. υ_{k|k−1}(m|X_k) = {μ_{k|k−1}^{(i)}, P_{k|k−1}^{(i)}, w_{k|k−1}^{(i)}}_{i=1}^{J_{k|k−1}}
10. return (υ_{k|k−1}(m|X_k))
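Because the map is static, the prediction of Table 8.2 reduces to concatenating the prior components (carried over unchanged) with the birth components. A minimal sketch, where each component is a (w, mu, P) triple and all names are illustrative:

```python
def gmphd_predict(prior, birth):
    """Predicted map PHD (Equation 8.7): prior components carried over
    unchanged under the static-map assumption, plus the birth components."""
    return list(prior) + list(birth)

prior = [(0.9, (0.0, 0.0), ((1.0, 0.0), (0.0, 1.0)))]
birth = [(0.01, (4.0, 4.0), ((1.0, 0.0), (0.0, 1.0)))]
pred = gmphd_predict(prior, birth)
print(len(pred))  # J_k|k-1 = J_k-1 + J_b,k = 2
```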


Table 8.3  GMM-PHD-FBRM-Update Algorithm

GMM-PHD-FBRM-Update (Z_k, X_k, υ_{k|k−1}(m|X_k))
// Initialize number of Gaussian components
1. L = 0
// Missed detections and update terms
2. for i = 1 to J_{k|k−1} do
//   Increment component counter
3.   L = L + 1
//   Missed-detection component update of (8.8)
4.   μ_k^{(L)} = μ_{k|k−1}^{(i)},  P_k^{(L)} = P_{k|k−1}^{(i)}
//   Missed-detection weight update of (8.8)
5.   w_k^{(L)} = (1 − P_D) w_{k|k−1}^{(i)}
//   Measurement prediction
6.   ẑ_{k|k−1}^{(i)} = h_spatial(μ_{k|k−1}^{(i)}, X_k)
//   Calculate Jacobian
7.   H = ∇_X h_spatial(μ_{k|k−1}^{(i)}, X_k)
//   Innovation covariance of (8.11)
8.   S_k^{(i)} = H P_{k|k−1}^{(i)} H^T + R
//   Kalman gain of (8.12)
9.   K_k^{(i)} = P_{k|k−1}^{(i)} H^T [S_k^{(i)}]^{−1}
//   Updated covariance of (8.14)
10.  P_{U,k}^{(i)} = [I − K_k^{(i)} H] P_{k|k−1}^{(i)}
11. end for
// For each measurement
12. for i = 1 to z_k do
//   For each map PHD component
13.  for j = 1 to J_{k|k−1} do
//     Updated component mean of (8.13)
14.    μ_k^{(L+j)} = μ_{k|k−1}^{(j)} + K_k^{(j)} (z_k^{(i)} − ẑ_{k|k−1}^{(j)})
//     Updated GMM component covariance
15.    P_k^{(L+j)} = P_{U,k}^{(j)}
//     Numerator of (8.10)
16.    τ^{(j)} = P_D w_{k|k−1}^{(j)} |2π S_k^{(j)}|^{−1/2} exp(−½ (z_k^{(i)} − ẑ_{k|k−1}^{(j)})^T [S_k^{(j)}]^{−1} (z_k^{(i)} − ẑ_{k|k−1}^{(j)}))
17.  end for
//   For each map PHD component
18.  for j = 1 to J_{k|k−1} do
//     Denominator of (8.10)
19.    w_k^{(L+j)} = τ^{(j)} / (c(z) + Σ_{l=1}^{J_{k|k−1}} τ^{(l)})
20.  end for
21.  L = L + J_{k|k−1}
22. end for
// Number of components in updated GMM
23. J_k = L
// The updated map PHD
24. υ_{k|k}(m|X_k) = {μ_k^{(i)}, P_k^{(i)}, w_k^{(i)}}_{i=1}^{J_k}
25. return (υ_{k|k}(m|X_k))
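The corrector of Table 8.3 can be sketched for the special case of a linear position-measurement model (h_spatial(m) = m, so H = I and the EKF steps reduce to linear Kalman updates). The function name, the constant clutter intensity, and the toy numbers are illustrative assumptions, not the text's radar models.

```python
import numpy as np

def gmphd_update(components, Z, P_D, clutter, R):
    """Sketch of the GM-PHD corrector for a linear position sensor.

    `components` is a list of (w, mu, P) triples; `clutter` is a constant
    clutter intensity c(z).  Returns the updated component list.
    """
    dim = len(components[0][1])
    H = np.eye(dim)
    updated = []
    # Missed-detection terms of Equation 8.8
    for w, mu, P in components:
        updated.append(((1.0 - P_D) * w, mu.copy(), P.copy()))
    # Per-component Kalman quantities (innovation cov., gain, updated cov.)
    S = [H @ P @ H.T + R for _, _, P in components]
    K = [components[j][2] @ H.T @ np.linalg.inv(S[j]) for j in range(len(components))]
    Pu = [(np.eye(dim) - K[j] @ H) @ components[j][2] for j in range(len(components))]
    for z in Z:
        tau, means = [], []
        for j, (w, mu, P) in enumerate(components):
            nu = z - H @ mu                        # innovation
            lik = np.exp(-0.5 * nu @ np.linalg.inv(S[j]) @ nu) \
                  / np.sqrt(np.linalg.det(2.0 * np.pi * S[j]))
            tau.append(P_D * w * lik)              # numerator of (8.10)
            means.append(mu + K[j] @ nu)           # mean update (8.13)
        denom = clutter + sum(tau)                 # denominator of (8.10)
        for j in range(len(components)):
            updated.append((tau[j] / denom, means[j], Pu[j]))
    return updated

prior = [(1.0, np.array([0.0, 0.0]), np.eye(2))]
post = gmphd_update(prior, [np.array([0.1, -0.1])],
                    P_D=0.9, clutter=1e-3, R=0.5 * np.eye(2))
print(len(post))  # one missed-detection term plus one term per measurement: 2
```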


present in that region. Therefore, in some instances (particularly for closely lying features and/or high measurement noise), more than one feature can be represented by a single Gaussian component. A multilevel threshold is thus used to estimate the dimensionality according to e_L, with L being the number of features modeled by a single Gaussian component. State estimation occurs by extracting the means and covariances of the Gaussian components whose associated weights exceed the given thresholds.

8.3.3  GMM-PHD-FBRM Pseudo Code

In this section an example of a pseudo-code implementation of the GMM-PHD-FBRM filter is given. Table 8.1 outlines the generation of the feature birth density, while Table 8.2 details the construction of the predicted GMM-PHD. Table 8.3 presents its update given the current measurement, Table 8.4 describes the Gaussian

Table 8.4  GMM-PHD-FBRM-Prune Algorithm

GMM-PHD-FBRM-Prune (υ_{k|k}(m|X_k), D_min, T_min, J_max)
// Initialize number of Gaussian components
1. L = 0
// Remove components with weight below T_min
2. I = {i = 1, …, J_k | w_k^{(i)} > T_min}
// Gaussian merging
3. do while I ≠ ∅
//   Increment component counter
4.   L = L + 1
//   Get index of max weight component
5.   j = argmax_{i∈I} w_k^{(i)}
//   Cluster those within distance D_min
6.   K = {i ∈ I | (μ_k^{(i)} − μ_k^{(j)})^T [P_k^{(i)}]^{−1} (μ_k^{(i)} − μ_k^{(j)}) ≤ D_min}
//   Combine component weights
7.   w̃_k^{(L)} = Σ_{i∈K} w_k^{(i)}
//   Weighted average mean
8.   μ̃_k^{(L)} = (1/w̃_k^{(L)}) Σ_{i∈K} w_k^{(i)} μ_k^{(i)}
//   Combined covariance
9.   P̃_k^{(L)} = (1/w̃_k^{(L)}) Σ_{i∈K} w_k^{(i)} (P_k^{(i)} + (μ̃_k^{(L)} − μ_k^{(i)})(μ̃_k^{(L)} − μ_k^{(i)})^T)
//   Remove K from I and repeat
10.  I = I − K
11. end while
12. J_k = L
// If max component number exceeded
13. if J_k ≥ J_max
14.  replace {w̃_k^{(i)}, μ̃_k^{(i)}, P̃_k^{(i)}}_{i=1}^{J_k} with those of the J_max largest weights
15. end if
// The pruned map PHD
16. υ_{k|k}(m|X_k) = {w̃_k^{(i)}, μ̃_k^{(i)}, P̃_k^{(i)}}_{i=1}^{J_k}
17. return (υ_{k|k}(m|X_k))
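The pruning and merging of Table 8.4 might be sketched as follows; `prune_and_merge` is a hypothetical helper, and the thresholds in the example are arbitrary.

```python
import numpy as np

def prune_and_merge(components, T_min, D_min, J_max):
    """Discard weak components, merge those within a Mahalanobis gate
    D_min of the strongest remaining one, and cap the mixture at J_max.
    `components` is a list of (w, mu, P) triples.
    """
    I = [c for c in components if c[0] > T_min]
    merged = []
    while I:
        # Seed the cluster with the highest-weight remaining component
        j = max(range(len(I)), key=lambda i: I[i][0])
        _, muj, _ = I[j]
        cluster, rest = [], []
        for w, mu, P in I:
            d = (mu - muj) @ np.linalg.inv(P) @ (mu - muj)
            (cluster if d <= D_min else rest).append((w, mu, P))
        w_sum = sum(w for w, _, _ in cluster)
        mu_bar = sum(w * mu for w, mu, _ in cluster) / w_sum
        P_bar = sum(w * (P + np.outer(mu_bar - mu, mu_bar - mu))
                    for w, mu, P in cluster) / w_sum
        merged.append((w_sum, mu_bar, P_bar))
        I = rest
    # Keep only the J_max largest-weight components
    merged.sort(key=lambda c: -c[0])
    return merged[:J_max]

gm = [(0.6, np.array([0.0, 0.0]), 0.1 * np.eye(2)),
      (0.5, np.array([0.05, 0.0]), 0.1 * np.eye(2)),   # close -> merged
      (0.004, np.array([9.0, 9.0]), 0.1 * np.eye(2))]  # weak -> pruned
out = prune_and_merge(gm, T_min=0.01, D_min=4.0, J_max=10)
print(len(out))  # 1
```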


Table 8.5  GMM-PHD-FBRM-Estimate Algorithm

GMM-PHD-FBRM-Estimate (υ_{k|k}(m|X_k), T_feature)
// Initialize the map estimate
1. M̂_k = ∅
2. for i = 1 to J_k do
3.   if w_k^{(i)} > T_feature
//     Concatenate estimate
4.     M̂_k = [M̂_k, μ_k^{(i)}]
5.   end if
6. end for
7. return (M̂_k)

management algorithm, and, finally, Table 8.5 shows the process used to extract feature map estimates at each time step.
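Table 8.5's extraction step, combined with the multilevel-threshold idea of Section 8.3.2 (a weight near an integer L representing L features), might be sketched as below. The simple rounding rule is an illustrative interpretation, not the text's exact thresholds e_L.

```python
import numpy as np

def extract_map(components, T_feature):
    """Report component means whose weights exceed T_feature; a weight near
    an integer L > 1 is reported L times (multilevel-threshold sketch)."""
    estimate = []
    for w, mu, _ in components:
        if w > T_feature:
            estimate.extend([mu] * max(1, int(round(w))))
    return estimate

gm = [(0.97, np.array([1.0, 2.0]), np.eye(2)),
      (1.96, np.array([5.0, 5.0]), np.eye(2)),   # likely two coincident features
      (0.05, np.array([9.0, 9.0]), np.eye(2))]   # below threshold -> discarded
print(len(extract_map(gm, T_feature=0.5)))  # 3
```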

8.4  PHD-FBRM Computational Complexity

At time k, given z_k measurements and m_k map states, the computational complexity of a naive implementation of an NN-EKF-FBRM solution is O(z_k m_k), due to conditional feature measurement independencies given an assumed known vehicle trajectory and the evaluation of the measurement-map state associations. According to update Equation 8.3, the complexity of the PHD-FBRM solution is also O(z_k m_k). Simulations of the PHD-FBRM filter in [6] demonstrate a linear increase in computation time for the GMM-PHD-FBRM filter, indicating that real-time implementation is possible.

8.5  Analysis of the PHD-FBRM Filter

This section presents a practical implementation of the FBRM finite-set-based framework for ground-based autonomous vehicles to demonstrate its applicability to real-world scenarios. The testing environment is a section of a university campus that contains numerous point features (trees, lamp posts, fire hydrants) that comprise the point-feature map. Figure 8.1 gives an overview of the testing area, with the inset vehicle-centric perspective showing the typical point features present. As with any practical implementation of an estimation algorithm, although difficult to obtain, ground truth is essential for error evaluation. For an FBRM evaluation, both the true number and locations of all point features must be determined. For this reason, realistic simulations can be beneficial, in which noisy sensor data is simulated, together with clutter. In an FBRM simulation, the ground truth feature map, in terms of feature number and location, is known, together with the vehicle's trajectory. Such a simulation was already shown in Section 4.8.3.1, in which the NN-EKF-FBRM algorithm of Section 4.6.2.1 and the GMM-PHD-FBRM algorithm of this chapter were compared, to highlight the mapping algorithm evaluation capabilities of the second-order Wasserstein metric. This metric will again be adopted in an actual radar-based FBRM experiment in this chapter.


Figure 8.1  An overview of the testing ground within the university campus. The path and raw laser measurements (white) are superimposed on a satellite image. The inset of a vehicle-centric view shows the environment mainly comprising point features such as trees and lamp posts. Laser range points (white) are projected into the image plane for ground truth verification.

For an actual radar-based comparison of the GMM-PHD-FBRM and NN-EKF-FBRM concepts, feature map ground truth was achieved as well as possible, through observation of the synchronized video stream and successive manual scan matching of all the corresponding laser data. The manually extracted centroids of all laser point features identified on the video stream then provided the ground truth locations. The goal of an FBRM algorithm is therefore to estimate the number and location of all the point features within the mapped region, based on a single vehicle run, using the radar data. The path length was just over 300 m. A subset of the superimposed raw radar data from a section of the vehicle's trajectory is shown in Figure 8.2. Note the many high-powered point detections in close proximity to the vehicle's trajectory, corresponding mainly to trees and lamp posts. Each radar scan is processed with the OS-CFAR processor (Section 3.5.2), and the resulting hypothesized detections are used as point features. Laser range finders are used for ground truth verification since they have a high angular resolution, and detected point feature centroids can therefore be located with precision. However, multiple vehicle runs were required to ensure that the laser range finders detected all of the point features, for reasons highlighted in Section 1.2. Furthermore, clear atmospheric conditions and low light levels were necessary, conditions that are not necessary for general radar operation. For the purpose of comparison, a nearest-neighbor EKF (NN-EKF) vector-based FBRM implementation is adopted (introduced in Sections 4.6.1 and 4.6.2.1),



Figure 8.2  A subset of superimposed radar scans (maximum radar range set to 35 m) from part of the trajectory shown in Figure 8.1. High-intensity received power values are shown as lighter areas, and the robot's trajectory is shown in black.

coupled with the "log-odds" map-management method. The map-management approach "propagates" the map size by assigning an intuitive log-odds score to each associated or unassociated feature. An estimate of the probability of a feature existing can then be readily recovered from the log-odds representation. For this analysis, a 95 percent χ² association confidence window was chosen. Taking the true map at time k, M_k, to be the subset of the entire map that has entered the sensor's field of view (FoV), the quality of the map estimate can be examined over the course of the trial. Feature number estimates, provided by the PHD-FBRM filter and the NN-EKF algorithm, are compared in Figure 8.3 at each time step, with the true number also provided. The true feature number, as a function of the measurement update number k, was determined from the repeated trials using only the laser range finder. Based on this, the true number of features that have entered the FoV of the radar, as the vehicle moves, is determined. It can be seen that feature number estimation error is introduced into the NN-EKF, vector-based approach, as the feature initialization and termination routines generally rely on accurate data association decisions. These can be prone to failure in environments of fluctuating detection probabilities and frequent spurious measurements. A graphical comparison of the final posterior map estimates from both filters is depicted in Figure 8.4, highlighting some mapping errors from both filters. It is evident that the GMM-PHD approach reports fewer false features, but it also missed some features of poor detectability and/or feature extraction reliability. Figure 8.5 plots the FBRM joint estimation error (Equation 4.80) after each update. The ideal error is the mapping error (as a function of the entire map), given that each feature that enters the sensor FoV is instantaneously detected and assigned an error-free

Figure 8.3  The posterior feature map dimensionality estimate following each measurement update, compared to ground truth. (Curves: NN-EKF, true feature number, and GMM-PHD; axes: map dimensionality versus measurement update number.)

(Figure 8.4 annotations, axes in meters: false GMM-PHD features (container corners), NN-EKF missed feature, false NN-EKF feature declarations, GMM-PHD missed feature, and ground truth.)

Figure 8.4  Final posterior feature map estimate comparison. The GMM-PHD method demonstrates improved feature mapping capabilities; however, some false features are evident where container corners were misinterpreted as point features.

Figure 8.5  FBRM estimation error (m), based on the Wasserstein metric of Equation 4.80, at each measurement update, in comparison to the total error given perfect detection and estimation once a feature enters the sensor's FoV. (Curves: NN-EKF estimation error, GMM-PHD estimation error, and ideal error, versus measurement update number.)

localization estimate. The GMM-PHD method closely follows the ideal error, with temporary glitches as some dimensionality estimation errors occur (also evident in Figure 8.3). The final posterior estimation error is also seen to be significantly less than that of the NN-EKF approach. It should be noted that this example serves to demonstrate that a first-order PHD approximation to the RFS-based FBRM problem generates sensible results for the feature map estimation problem. This PHD approximation will be extended to the full SLAM problem in Chapters 9 and 10, where experiments are shown in cluttered outdoor and marine environments. There, comparisons will be made with the more recent EKF-JCBB method (introduced in Section 4.6.2.2) and the FastSLAM algorithm with multi-hypothesis (MH) data association (introduced in Section 4.6.5).

8.6  Summary

This chapter presented a random set theoretic framework for feature-based map estimation in the context of autonomous navigation problems. Estimating an FB map encompasses estimating the location of an unknown number of features. This book has shown that, with radar, feature existence uncertainty modeling is at least as important as spatial uncertainty modeling. By adopting an RFS map model and measurement model, the uncertainty in the map size and feature locations can be jointly propagated and estimated. This approach therefore generalizes the notion of Bayes optimality to the realistic situation of an a priori unknown number of features. To demonstrate the applicability of the RFS FBRM framework to real FB autonomous navigation problems, it was shown how its first-order approximation, the PHD filter, could be implemented. The PHD filter alleviates the need for data association and map dimensionality estimation routines, as it incorporates these sources of uncertainty into the Bayesian recursion. Given its nonreliance on data association, the RFS framework is more suited to applications with high clutter and is therefore essential in radar-based mapping algorithms.


The PHD filter was applied to the FBRM problem, in which a vehicle's pose is assumed known at all times. A GM representation of the PHD was applied to the problem, in which the means and covariances of each Gaussian component could be updated under the standard EKF concept. Comparisons of the PHD-FBRM filter and a vector-based NN-EKF, based on the same vehicle input commands and radar-based detections, showed the superior performance of the PHD-FBRM method in terms of estimating the true number of features. The FBRM error metric of Section 4.8.3, which penalizes mapping algorithms based on both estimated feature dimensionality and spatial errors, was also used to demonstrate the advantages of the PHD-FBRM approach.

8.7  Bibliographical Remarks

The GMM-PHD-FBRM filter was first introduced by Mullane et al., with a comprehensive analysis based on simulations and radar data given in [6]. In the robotics literature, the most common types of exteroceptive sensors deployed for FBRM applications are 2D range-bearing measuring devices, such as laser or sonar range finders [7, 8], or stereo vision systems from which this information can be calculated [9]. Such sensors' measurements are subject to often-assumed Gaussian range and bearing measurement noise, and are typically used to update the time-predicted map state through an EKF framework [2, 10]. Two common methods of identifying true landmarks from noisy measurements are the discrete Bayes filter [1, 11], which propagates a landmark existence variable obtained from a sensor model, and the "geometric landmark track quality" measure [3, 12], which is a function of the innovation for that landmark. This is in contrast to the radar literature, in which the identification of true landmarks is often based on the CFAR detection process introduced in Chapter 3 and expanded in Chapter 6 for repeated scanning. In the underwater domain, sonars also return a power versus range array, similar to that of the radars examined in this book. In his thesis [13], S. Williams outlined a simple landmark detection technique for autonomous navigation in a coral reef environment: the maximum signal-to-noise ratio exceeding an a priori constant threshold was chosen as the point target. In [14], S. Majumder attempted to overcome this loss of information by fitting a sum of Gaussian probability density functions to the raw sensor data (in the form of an A-scope); however, this represented a likelihood distribution in range space of only a single point landmark.
In general, the FB autonomous framework is closely related to the multisensor, multitarget filtering problem, where the objective is to jointly estimate the time-varying number of targets and their states from sensor measurements in the presence of data association uncertainty, detection uncertainty, clutter, and noise. The first systematic treatment of this problem using random set theory was conceived by Mahler in 1994 [15], which later developed into finite set statistics (FISST), under which the PHD filter was conceived. Moreover, this treatment was developed as part of a unified framework for data fusion using random sets. Reference [16] provides an excellent overview of FISST.


References

[1] Montemerlo, M., S. Thrun, D. Koller, and B. Wegbreit, "FastSLAM 2.0: An Improved Particle Filtering Algorithm for Simultaneous Localization and Mapping that Provably Converges." In Proceedings of the 18th International Joint Conference on Artificial Intelligence, CA, 2003, pp. 1151–1156.
[2] Wijesoma, W. S., L. D. L. Perera, and M. D. Adams, "Toward Multidimensional Assignment Data Association in Robot Localization and Mapping," IEEE Transactions on Robotics, Vol. 22, No. 2, April 2006, pp. 350–365.
[3] Makarsov, D., and H. F. Durrant-Whyte, "Mobile Vehicle Navigation in Unknown Environments: A Multiple Hypothesis Approach," IEE Proceedings of Control Theory Applications, Vol. 142, No. 4, July 1995, pp. 385–400.
[4] Durrant-Whyte, H. F., S. Majumder, M. de Battista, and S. Scheding, "A Bayesian Algorithm for Simultaneous Localisation and Map Building." In The Tenth International Symposium of Robotics Research (ISRR), Victoria, Australia, 2001.
[5] Vo, B.-N., and W. K. Ma, "The Gaussian Mixture Probability Hypothesis Density Filter," IEEE Transactions on Signal Processing, Vol. 54, No. 11, November 2006, pp. 4091–4104.
[6] Mullane, J., B.-N. Vo, M. Adams, and B.-T. Vo, Random Finite Sets for Robot Mapping and SLAM, Springer Tracts in Advanced Robotics, Vol. 72, Berlin: Springer, 2011.
[7] Chong, K. S., and L. Kleeman, "Feature-Based Mapping in Real, Large Scale Environments Using an Ultrasonic Array," International Journal of Robotics Research, Vol. 18, No. 1, 1999, pp. 3–19.
[8] Leonard, J., and H. F. Durrant-Whyte, Directed Sonar Sensing for Mobile Robot Navigation, Boston: Kluwer Academic Publishers, 1992.
[9] Davison, A. J., Y. González Cid, and N. Kita, "Real-Time 3D SLAM with Wide-Angle Vision." In Proc. IFAC Symposium on Intelligent Autonomous Vehicles, Lisbon, Portugal, July 2004.
[10] Guivant, J., E. Nebot, and S. Baiker, "Autonomous Navigation and Map Building Using Laser Range Sensors in Outdoor Applications," Journal of Robotic Systems, Vol. 17, No. 10, October 2000, pp. 565–583.
[11] Thrun, S., "Learning Occupancy Grids with Forward Models," Autonomous Robots, Vol. 15, No. 2, 2003, pp. 111–127.
[12] Dissanayake, M. W. M. G., P. Newman, S. Clark, H. F. Durrant-Whyte, and M. Csorba, "A Solution to the Simultaneous Localization and Map Building (SLAM) Problem," IEEE Trans. Robotics and Automation, Vol. 17, No. 3, June 2001, pp. 229–241.
[13] Williams, S., Efficient Solutions to Autonomous Mapping and Navigation Problems. PhD Thesis, Australian Centre for Field Robotics, University of Sydney, 2001.
[14] Majumder, S., Sensor Fusion and Feature Based Navigation for Subsea Robots. PhD Thesis, University of Sydney, August 2001.
[15] Mahler, R., "Global Integrated Data Fusion," Proc. 7th Nat. Symp. on Sensor Fusion, Vol. 1, 1994, pp. 187–199.
[16] Mahler, R., "An Introduction to Multisource-Multitarget Statistics and Applications," Lockheed Martin Technical Monograph, March 2000.
Thrun, S., “Learning Occupancy Grids with Forward Models,” Autonomous Robots, Vol. 15, No. 2, 2003, pp. 111–127. Dissanayake, M. W. M. G., P. Newman, S. Clark, H. F. Durrant-Whyte, and M. Csorba, “A Solution to the Simultaneous Localization and Map Building (SLAM) Problem,” IEEE Trans. Robotics and Automation, Vol. 17, No. 3, June 2001, pp. 229–241. Williams, S., Efficient Solutions to Autonomou.t Mapping and Navigation Problems. PhD Thesis, Australian Centre for Field Robotics, University of Sydney, 2001. Majumder, S., Sensor Fusion and Feature Based Navigation for Subsea Robots. PhD Thesis, University of Sydney, August 2001. Mahler, R., “Global Integrated Data Fusion,” Proc. 7th Nat. Symp. on Sensor Fusion, Vol. 1, 1994, pp. 187–199. Mahler, R., “An Introduction to Multisource-Multitarget Statistics and Applications,” Lockheed Martin Technical Monograph, March 2000.

Part IV
Simultaneous Localization and Mapping

Chapter 9

Radar-Based SLAM with Random Finite Sets

9.1  Introduction

This chapter compares Bayesian frameworks for feature-based SLAM, again in the general case of uncertain feature number and data association. In this chapter a Rao-Blackwellized (RB) implementation of the PHD-SLAM filter is derived, referred to throughout as the RB-PHD-SLAM filter, which has similarities to, and subtle differences from, the vector-based FastSLAM approach [1]. The RB-PHD-SLAM filter is based on a Gaussian mixture (GM)-PHD filter for the map and a particle filter for the vehicle trajectory. As with Chapter 8, this chapter is based on the concepts introduced in Section 4.7 and makes references to these concepts throughout. A tractable PHD approximation to the SLAM problem is derived, which propagates the posterior PHDs of multiple trajectory-conditioned maps and the posterior distribution of the trajectory of the vehicle. Furthermore, this approach to SLAM admits the concept of an "expected" map via the PHD construct, which is not available in vector-based SLAM approaches. The chapter is organized as follows. Section 9.2 discusses the factorized RFS SLAM recursion, in which the posterior density of the map conditioned on the trajectory, and the trajectory itself, can be jointly propagated. The RFS framework is then applied to this factorized solution, where it is demonstrated that subtle differences regarding the use of sets make a direct, naive application of FastSLAM to the RFS problem inappropriate. In particular, the likelihood of the measurement, conditioned on the trajectory, which is necessary for the calculation of the particle weights, cannot be approximated under an EKF framework, as in FastSLAM [1]. Solutions that give a closed-form solution to this problem are presented in this section. Section 9.3 outlines an implementation of the RB-PHD-SLAM filter.
As in Chapter 8, a Gaussian mixture model (GMM) technique is used to represent the map PHD; however, in this chapter, each Gaussian component is conditioned on a hypothesis of the vehicle's entire trajectory. The vehicle trajectory hypotheses are represented as samples under a particle filter framework. As in Chapter 8, Section 9.3 culminates in a pseudo code example to further explain its implementation. In Section 9.4, the computational complexity of the filter is given, together with comparisons with that of FastSLAM. To compare the RB-PHD-SLAM approach with vector-based methods, comparisons are made with classical EKF SLAM, which utilizes the joint compatibility branch and bound (JCBB) [2] data association method, and FastSLAM with multiple hypothesis (MH) data association [3], introduced in Sections 4.6.2.2 and 4.6.5. Section 9.5 shows these comparisons with the MMW-FMCW radar data and mobile robot platform of Figure 1.6, in the parking lot environment analyzed in Chapters 5,


6, and 7. Data sets from the radar were recorded, along with odometry and single axis gyro rate data. For algorithm performance evaluation in this case, the ground truth trajectory and feature map were estimated by independent means.

9.2  SLAM with the PHD Filter

In this chapter, an RFS-based solution to the simultaneous estimation of the FB map and the robot's location is presented. Since the full RFS-SLAM Bayes filter of Equations 4.59 and 4.60 is numerically intractable, it is necessary to look for tractable but principled approximations. As in Chapter 8, a first-order approximation based on the PHD filter is examined. A tractable PHD approximation to the SLAM problem is derived, which propagates the posterior PHDs of multiple trajectory-conditioned maps and the posterior distribution of the trajectory of the vehicle. Furthermore, this approach to SLAM admits the concept of an "expected" map via the PHD construct, which is not available in vector-based SLAM approaches. This section derives a recursion that jointly propagates the posterior PHD of the map and the posterior density of the vehicle trajectory. Analogous to FastSLAM (Section 4.6.5), the RFS-SLAM recursion can be factorized as shown in Section 9.2.1. Section 9.2.2 addresses the PHD representation of the map component only, while Section 9.2.3 extends this algorithm to perform SLAM.

9.2.1  The Factorized RFS-SLAM Recursion

Using standard conditional probability, the joint posterior RFS-SLAM density of Equation 4.60 can be decomposed as

$p_{k|k}(M_k, X^k \mid Z^k, U^{k-1}, X_0) = p_{k|k}(X^k \mid Z^k, U^{k-1}, X_0)\, p_k(M_k \mid Z^k, X^k)$  (9.1)

Thus, the recursion for the joint RFS map-trajectory posterior density according to Equation 4.60 is equivalent to jointly propagating the posterior density of the map conditioned on the trajectory and the posterior density of the trajectory. In this section, for compactness,

$p_{k|k-1}(M_k \mid X^k) = p_{k|k-1}(M_k \mid Z^{k-1}, X^k)$  (9.2)

$p_{k|k}(M_k \mid X^k) = p_k(M_k \mid Z^k, X^k)$  (9.3)

$p_{k|k}(X^k) = p_k(X^k \mid Z^k, U^{k-1}, X_0)$  (9.4)

The prediction of the map, conditioned on the trajectory, is then

$p_{k|k-1}(M_k \mid X^k) = \int f_M(M_k \mid M_{k-1}, X_k)\, p_{k-1|k-1}(M_{k-1} \mid X^{k-1})\, \delta M_{k-1}$  (9.5)

and it follows that


$p_{k|k}(M_k \mid X^k) = \dfrac{g_k(Z_k \mid M_k, X_k)\, p_{k|k-1}(M_k \mid X^k)}{g(Z_k \mid Z^{k-1}, X^k)}$  (9.6)

$p_{k|k}(X^k) = \dfrac{g(Z_k \mid Z^{k-1}, X^k)\, f_{veh}(X_k \mid X_{k-1}, U_{k-1})\, p_{k-1|k-1}(X^{k-1})}{g_k(Z_k \mid Z^{k-1})}$  (9.7)

Apart from adopting RFS likelihoods for the measurement and map, the recursion defined by Equations 9.5, 9.6, and 9.7 is similar to that of FastSLAM. However, the use of RFS likelihoods has important consequences in the evaluation of Equation 9.7, to be seen later in Section 9.2.3. In FastSLAM, it should be noted that the map recursion of Equation 9.6 is further decomposed into the product of K independent densities for each of the K features assumed to exist in the map. Furthermore, FastSLAM is conditioned on the inherently unknown data association assignments. In contrast, RFS-SLAM is not conditioned on any data association hypotheses to determine the number of features to estimate, and the recursion of Equation 9.6 is that of an RFS feature map. Consequently, the propagation of the map involves probability densities of random finite sets, and marginalization over the map involves set integrals. Similar to FastSLAM, the effect of the trajectory conditioning on RFS-SLAM is to render each feature estimate conditionally independent; thus, the map correlations, critical to EKF-SLAM, are not required.

9.2.2  PHD Mapping—Rao-Blackwellization

This section details the trajectory-conditioned PHD mapping recursion of Equation 9.6, first proposed in [4]. The PHD filter for the SLAM problem can be implemented in RB form. Again, referring to the PHD predictor-corrector of Equations 4.66 and 4.67, substituting

$\Psi_k \rightarrow m \mid X^k$  (9.8)

the Rao-Blackwellized prediction of the map intensity function $\upsilon_{k|k-1}(\Psi_k)$ becomes

$\upsilon_{k|k-1}(\Psi_k) \rightarrow \upsilon_{k|k-1}(m \mid X^k) = \upsilon_{k-1}(m \mid X^{k-1}) + b(m \mid X_k)$  (9.9)

where again, $b(m \mid X_k)$ is the PHD of the new feature RFS, $B(X_k)$, initially introduced in Section 4.7.5. Note that $\upsilon_{k|k-1}(m \mid X^k)$ is now conditioned on a vehicle trajectory $X^k$. The corresponding RB-PHD corrector, Equation 4.67, $\upsilon_{k|k}(\Psi_k)$, then becomes

$\upsilon_{k|k}(\Psi_k) \rightarrow \upsilon_{k|k}(m \mid X^k) = \upsilon_{k|k-1}(m \mid X^k)\left[1 - P_D(m \mid X_k) + \sum_{z \in Z_k} \frac{P_D(m \mid X_k)\, g(z \mid m, X_k)}{c_k(z \mid X_k) + \int P_D(\xi \mid X_k)\, g(z \mid \xi, X_k)\, \upsilon_{k|k-1}(\xi \mid X^k)\, d\xi}\right]$  (9.10)

where again,

$P_D(m \mid X_k)$ = the probability of detecting a feature at $m$, from vehicle pose $X_k$;
$g(z \mid m, X_k)$ = the spatial measurement model of the sensor at time $k$; and
$c_k(z \mid X_k)$ = the PHD of the clutter RFS $C_k(X_k)$ (in Equation 4.55) at time $k$.  (9.11)
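To make the predictor-corrector pair concrete, Equations 9.9 and 9.10 can be evaluated numerically for a one-dimensional map discretized on a grid. The following sketch is purely illustrative (it is not code from this book): the Gaussian range likelihood, detection probability, clutter PHD, and birth PHD are all assumed values.

```python
import numpy as np

# Illustrative sketch of the trajectory-conditioned PHD recursion:
# Equation 9.9 (predictor) and Equation 9.10 (corrector) for a 1-D map,
# discretized on a grid. All models below are assumed for illustration.

grid = np.linspace(0.0, 100.0, 1001)   # 1-D map space m (meters)
dm = grid[1] - grid[0]

def predict(v_prev, birth):
    """Equation 9.9: v_{k|k-1}(m|X^k) = v_{k-1}(m|X^{k-1}) + b(m|X_k)."""
    return v_prev + birth

def correct(v_pred, Z, P_D, clutter_phd, sigma_z):
    """Equation 9.10, with an assumed Gaussian likelihood g(z|m, X_k)."""
    v_post = (1.0 - P_D) * v_pred                     # missed-detection term
    for z in Z:
        g = np.exp(-0.5 * ((z - grid) / sigma_z) ** 2) \
            / (np.sqrt(2.0 * np.pi) * sigma_z)
        denom = clutter_phd + np.sum(P_D * g * v_pred) * dm
        v_post += P_D * g * v_pred / denom            # detection terms
    return v_post

# One predict-correct cycle: a birth PHD near a prior detection at m = 40,
# then an update with one consistent detection and one clutter-like one.
v = predict(np.zeros_like(grid),
            birth=0.1 * np.exp(-0.5 * ((grid - 40.0) / 2.0) ** 2))
v = correct(v, Z=[40.5, 77.0], P_D=0.95, clutter_phd=1e-3, sigma_z=1.0)

# Integrating the PHD gives the expected number of features in the map.
print(np.sum(v) * dm)
```

In this toy run, the posterior PHD mass rises toward one feature: the detection at 40.5 m reinforces the birth component, while the unsupported detection at 77.0 m is explained by the clutter PHD.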

In contrast to the FBRM approach of Chapter 8, the RB-PHD-SLAM filter estimates multiple, conditionally independent PHDs (intensity functions). Each independent map PHD is conditioned on one of the N hypothesized vehicle trajectories, represented as particles. Referring again to the GM example representations of PHDs in Figures 4.11 and 4.12, in any particular map PHD, each Gaussian, representing one or more possible features, is conditioned on one of the N hypothesized vehicle trajectories. N such conditionally independent PHDs, one per hypothesized trajectory, are then estimated.

9.2.3  PHD-SLAM

This section extends the trajectory-conditioned PHD mapping recursion to the SLAM problem. With the hindsight of FastSLAM, the most obvious extension of PHD mapping to SLAM is to exploit the factorization of Equations 9.5, 9.6, and 9.7 (e.g., PHD filtering for mapping and particle filtering for localization). This technique requires the computation of the posterior density of the vehicle trajectory in Equation 9.7, in particular the term $g(Z_k \mid Z^{k-1}, X^k)$, which requires set integration,

$g(Z_k \mid Z^{k-1}, X^k) = \int p(Z_k, M_k \mid Z^{k-1}, X^k)\, \delta M_k$  (9.12)

This set integral is numerically intractable, and a naive approach is to directly apply the EKF approximation used in FastSLAM [5]. However, an EKF approximation cannot be used, since the likelihood of Equation 9.12, defined on the space of finite sets, and its FastSLAM counterpart, defined on a Euclidean space, are two fundamentally different quantities, and it is not known how they are even related. Therefore, in this case, it is fundamentally incorrect to use the EKF approximation of [1], as it will not result in a valid density, and thus its product with Equation 9.6 cannot give the joint posterior of the map and pose. An EKF approximation requires a hypothesized data association assignment. Since there is no concept of data association in the RFS-SLAM framework (there is no fixed ordering of features or measurements in the state), alternative methods of evaluating Equation 9.12 are required. Fortunately, by rearranging Equation 9.6, it can be seen that $g(Z_k \mid Z^{k-1}, X^k)$ is merely the normalizing constant,

$g(Z_k \mid Z^{k-1}, X^k) = \dfrac{g(Z_k \mid M_k, X_k)\, p_{k|k-1}(M_k \mid X^k)}{p_{k|k}(M_k \mid X^k)}$  (9.13)

Note here that the LHS does not contain the variable $M_k$, while the RHS has $M_k$ in both the denominator and numerator. In essence, $M_k$ in Equation 9.13 is a dummy variable, and thus Equation 9.13 holds for any arbitrary choice of $M_k$. This allows the substitution of any choice of $M_k$ to evaluate $g(Z_k \mid Z^{k-1}, X^k)$. This is an important result, as it allows the likelihood of the measurement, conditioned on the trajectory (but not the map), to be calculated in closed form. The following considers two simple choices, with derivations given in Appendix B [6, 7].

9.2.3.1  The Empty Strategy

It was mentioned in Section 4.7.4.3 that if the RFS $M_k$ is Poisson distributed in its number and the points within $M_k$ are IID distributed, then the probability density of $M_k$ can be recovered exactly from the PHD intensity function according to Equation 4.65. Similarly, the predicted and posterior RFS maps can be approximated by Poisson RFSs with PHDs $\upsilon_{k|k-1}(m \mid X^k)$ and $\upsilon_{k|k}(m \mid X^k)$, respectively, giving

$p_{k|k-1}(M_k \mid X^k) \approx \dfrac{\prod_{m \in M_k} \upsilon_{k|k-1}(m \mid X^k)}{\exp\left(\int \upsilon_{k|k-1}(m \mid X^k)\, dm\right)}$  (9.14)

$p_{k|k}(M_k \mid X^k) \approx \dfrac{\prod_{m \in M_k} \upsilon_{k|k}(m \mid X^k)}{\exp\left(\int \upsilon_{k|k}(m \mid X^k)\, dm\right)}$  (9.15)

Setting $M_k = \emptyset$, and using the Poisson RFS approximations, Equations 9.14 and 9.15, as well as the RFS measurement likelihood, Equation 4.57, it follows that (see Appendix B)

$g(Z_k \mid Z^{k-1}, X^k) \approx \kappa_k^{Z_k} \exp\left(\hat{m}_{k|k} - \hat{m}_{k|k-1} - \int c_k(z \mid X_k)\, dz\right)$  (9.16)

where $\kappa_k^{Z_k} = \prod_{z \in Z_k} c_k(z \mid X_k)$, with $c_k(z \mid X_k)$ being the PHD of the measurement clutter RFS $C_k(X_k)$. In addition, $\hat{m}_{k|k} = \int \upsilon_{k|k}(m \mid X^k)\, dm$ and $\hat{m}_{k|k-1} = \int \upsilon_{k|k-1}(m \mid X^k)\, dm$. Equation 9.16 gives the closed-form likelihood of the measurement, conditioned on the trajectory and not on the map.

9.2.3.2  The Single Feature Strategy

In a similar manner, to evaluate the normalizing constant for the case of $M_k = \{m_{\mathrm{chosen}}\}$, again using Equations 9.14 and 9.15, as well as Equation 4.57,

$g(Z_k \mid Z^{k-1}, X^k) \approx \dfrac{1}{\beta}\left[\left(1 - P_D(m_{\mathrm{chosen}} \mid X_k)\right) \kappa_k^{Z_k} + P_D(m_{\mathrm{chosen}} \mid X_k) \sum_{z \in Z_k} \kappa_k^{Z_k - \{z\}}\, g_k(z \mid m_{\mathrm{chosen}}, X_k)\right] \upsilon_{k|k-1}(m_{\mathrm{chosen}} \mid X^k)$  (9.17)

with

$\beta = \exp\left(\hat{m}_{k|k-1} - \hat{m}_{k|k} + \int c_k(z)\, dz\right) \upsilon_{k|k}(m_{\mathrm{chosen}} \mid X^k)$  (9.18)

For this choice of $M_k$, $m_{\mathrm{chosen}}$ can be, for example, the feature with the least uncertainty or the maximum measurement likelihood. Due to the presence of the measurement likelihood term $g(z \mid m_{\mathrm{chosen}}, X_k)$, it is expected that, in general, the single feature map update will outperform that of the empty map update. Note that in Equation 9.13, every choice of $M_k$ should give the same result; however, Equations 9.16 and 9.17 use different approximations of $p_{k|k}(M_k \mid X^k)$, yielding slightly different results. In principle, any map strategy can be used, including more features; however, the computation required to evaluate the trajectory-conditioned measurement likelihood would also increase. The following section presents an RB implementation of the PHD-SLAM filter, referred to as the RB-PHD-SLAM filter.
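For a GM map PHD, both strategies reduce to simple arithmetic on the mixture weights and the clutter model. The sketch below is an assumed illustration, not the book's code: a uniform clutter PHD $c_k(z) = \lambda_c / V$ over a field of view of volume V is assumed, the PHD masses, weights, and measurement likelihoods are toy values, and the PHD values at $m_{\mathrm{chosen}}$ are approximated by the chosen component's predicted and updated weights.

```python
import math

# Illustrative evaluation (assumed, not the book's code) of the two
# closed-form trajectory likelihood strategies. A uniform clutter PHD
# c_k(z) = lambda_c / V over a field of view of volume V is assumed.

def empty_map_likelihood(m_pred, m_post, n_meas, lambda_c, V):
    """Equation 9.16, M_k = {}: kappa_k^{Z_k} exp(m_post - m_pred - lambda_c)."""
    kappa = (lambda_c / V) ** n_meas       # product of the clutter PHD over Z_k
    return kappa * math.exp(m_post - m_pred - lambda_c)

def single_feature_likelihood(m_pred, m_post, meas_lik, w_pred, w_post,
                              P_D, lambda_c, V):
    """Equation 9.17, M_k = {m_chosen}; meas_lik[i] = g(z_i | m_chosen, X_k)."""
    c = lambda_c / V
    n = len(meas_lik)
    numerator = ((1.0 - P_D) * c ** n
                 + P_D * sum(c ** (n - 1) * g for g in meas_lik)) * w_pred
    beta = math.exp(m_pred - m_post + lambda_c) * w_post    # Equation 9.18
    return numerator / beta

# Toy values: PHD masses before/after the map update, and three detections,
# one of which strongly supports the chosen feature.
g_empty = empty_map_likelihood(m_pred=2.0, m_post=2.8, n_meas=3,
                               lambda_c=2.0, V=100.0)
g_single = single_feature_likelihood(m_pred=2.0, m_post=2.8,
                                     meas_lik=[0.5, 1e-4, 1e-4],
                                     w_pred=0.8, w_post=0.95,
                                     P_D=0.95, lambda_c=2.0, V=100.0)
print(g_empty, g_single)
```

With a detection that strongly supports the chosen feature, the single feature strategy returns a larger weight than the empty strategy for these toy values, consistent with the discussion above.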

9.3  Implementing the RB-PHD-SLAM Filter

Following the description of the PHD-SLAM filter in the previous section, an RB implementation is detailed in this section. A particle filter is used to propagate the vehicle trajectory (Equation 9.7), and a GM-PHD filter is used to propagate each trajectory-conditioned map PHD (Equation 9.6). As such, let the PHD-SLAM density at time k - 1 be represented by a set of N particles, each accompanied by its own intensity function $\upsilon_{k-1|k-1}^{(i)}(m \mid (X^{k-1})^{(i)})$, representing its belief of the map,

$\left\{ \eta_{k-1}^{(i)},\, (X^{k-1})^{(i)},\, \upsilon_{k-1|k-1}^{(i)}(m \mid (X^{k-1})^{(i)}) \right\}_{i=1}^{N}$  (9.19)

where $\eta_{k-1}^{(i)}$ is the weight of the ith particle, $(X^{k-1})^{(i)} = [X_0\; X_1^{(i)}\; X_2^{(i)} \ldots X_{k-1}^{(i)}]$ is the ith hypothesized vehicle trajectory, and $\upsilon_{k-1|k-1}^{(i)}(m \mid (X^{k-1})^{(i)})$ is its map PHD. The filter then proceeds to approximate the posterior density by a new set of weighted particles,

$\left\{ \eta_{k}^{(i)},\, (X^{k})^{(i)},\, \upsilon_{k|k}^{(i)}(m \mid (X^{k})^{(i)}) \right\}_{i=1}^{N}$  (9.20)

as follows.

9.3.1  PHD Mapping—Implementation

As with the PHD-FBRM filter implementation of Chapter 8, in which a known vehicle pose was assumed, a GM representation for the prior map intensity is used; however, in this case, since the vehicle's pose is not given, the GM map must be defined conditioned on each vehicle trajectory hypothesis, or particle. Therefore, let the prior map PHD for the ith particle, $\upsilon_{k-1|k-1}^{(i)}(m \mid X_{k-1}^{(i)})$, be a GM of the form

$\upsilon_{k-1|k-1}^{(i)}(m \mid X_{k-1}^{(i)}) = \sum_{j=1}^{J_{k-1}^{(i)}} w_{k-1}^{(i,j)}\, \mathcal{N}(m;\, \mu_{k-1}^{(i,j)},\, P_{k-1}^{(i,j)})$  (9.21)

which is a mixture of $J_{k-1}^{(i)}$ Gaussians, with $w_{k-1}^{(i,j)}$, $\mu_{k-1}^{(i,j)}$, and $P_{k-1}^{(i,j)}$ being the corresponding weights, means, and covariances, respectively, for the jth Gaussian component of the map PHD of the ith trajectory. In a similar fashion to Equation 8.6, let the new feature intensity for the particle, $b(m \mid Z_{k-1}, X_k^{(i)})$, from the sampled pose $X_k^{(i)}$ at time k, also be a GM of the form

$b(m \mid Z_{k-1}, X_k^{(i)}) = \sum_{j=1}^{J_{b,k}^{(i)}} w_{b,k}^{(i,j)}\, \mathcal{N}(m;\, \mu_{b,k}^{(i,j)},\, P_{b,k}^{(i,j)})$  (9.22)

where $J_{b,k}^{(i)}$ defines the number of Gaussians in the new (birth) feature intensity at time k, and $w_{b,k}^{(i,j)}$, $\mu_{b,k}^{(i,j)}$, and $P_{b,k}^{(i,j)}$ are the corresponding components. This is analogous to the proposal distribution in the particle filter and provides an initial estimate of the new features entering the map. The predicted intensity is therefore also a GM,

$\upsilon_{k|k-1}^{(i)}(m \mid X_k^{(i)}) = \sum_{j=1}^{J_{k|k-1}^{(i)}} w_{k|k-1}^{(i,j)}\, \mathcal{N}(m;\, \mu_{k|k-1}^{(i,j)},\, P_{k|k-1}^{(i,j)})$  (9.23)

which consists of $J_{k|k-1}^{(i)} = J_{k-1}^{(i)} + J_{b,k}^{(i)}$ Gaussians representing the union of the prior map intensity, $\upsilon_{k-1|k-1}^{(i)}(m \mid X_{k-1}^{(i)})$, and the proposed new feature intensity, according to Equation 9.9. Since the measurement likelihood is also of Gaussian form, it follows from Equation 9.10 that the posterior map PHD, $\upsilon_{k|k}^{(i)}(m \mid X_k^{(i)})$, is then also a GM, given by

$\upsilon_{k|k}^{(i)}(m \mid X_k^{(i)}) = \upsilon_{k|k-1}^{(i)}(m \mid X_k^{(i)})\left[1 - P_D(m \mid X_k^{(i)})\right] + \sum_{z \in Z_k} \sum_{j=1}^{J_{k|k-1}^{(i)}} \upsilon_{G,k}^{(i,j)}(z, m \mid X_k^{(i)})$  (9.24)

The components of the previous equation are given by

$\upsilon_{G,k}^{(i,j)}(z, m \mid X_k^{(i)}) = \varphi_k^{(i,j)}(z \mid X_k^{(i)})\, \mathcal{N}(m;\, \mu_{k|k}^{(i,j)},\, P_{k|k}^{(i,j)})$  (9.25)

$\varphi_k^{(i,j)}(z \mid X_k^{(i)}) = \dfrac{P_D(m \mid X_k^{(i)})\, w_{k|k-1}^{(i,j)}\, \mathcal{N}(z;\, h_{\mathrm{spatial}}(\mu_{k|k-1}^{(i,j)}),\, S_k^{(i,j)})}{c(z) + \sum_{\ell=1}^{J_{k|k-1}^{(i)}} P_D(m \mid X_k^{(i)})\, w_{k|k-1}^{(i,\ell)}\, \mathcal{N}(z;\, h_{\mathrm{spatial}}(\mu_{k|k-1}^{(i,\ell)}),\, S_k^{(i,\ell)})}$  (9.26)
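As an illustration of this corrector, the following sketch (assumed here, not the book's implementation) performs the GM-PHD map update of Equations 9.24 through 9.26 for one particle, using 2-D point features and a linear observation model z = m + noise, so that $h_{\mathrm{spatial}}$ and its Jacobian are trivial.

```python
import numpy as np

# Illustrative GM-PHD map update for a single particle (Equations 9.24-9.26).
# Assumed simplification: linear observation model z = m + noise, so
# h_spatial(mu) = mu and the EKF Jacobian is the identity.

def gm_phd_map_update(w, mu, P, Z, R, P_D, clutter):
    """w: (J,) weights; mu: (J,2) means; P: (J,2,2) covariances;
    Z: list of 2-D measurements; clutter: clutter PHD value c(z)."""
    w_out, mu_out, P_out = [(1.0 - P_D) * w], [mu], [P]   # missed detections
    for z in Z:
        S = P + R                            # innovation covariances S_k^(i,j)
        K = P @ np.linalg.inv(S)             # Kalman gains
        innov = z - mu
        q = np.array([np.exp(-0.5 * innov[j] @ np.linalg.inv(S[j]) @ innov[j])
                      / (2.0 * np.pi * np.sqrt(np.linalg.det(S[j])))
                      for j in range(len(w))])           # N(z; mu_j, S_j)
        num = P_D * w * q
        w_out.append(num / (clutter + num.sum()))        # Equation 9.26
        mu_out.append(mu + np.einsum('jab,jb->ja', K, innov))
        P_out.append(np.einsum('jab,jbc->jac', np.eye(2) - K, P))
    return np.concatenate(w_out), np.concatenate(mu_out), np.concatenate(P_out)

# One prior component near (10, 5) and one supporting detection.
w2, mu2, P2 = gm_phd_map_update(np.array([0.9]),
                                np.array([[10.0, 5.0]]),
                                np.array([np.eye(2)]),
                                Z=[np.array([10.2, 5.1])],
                                R=0.5 * np.eye(2), P_D=0.95, clutter=1e-3)
print(w2)
```

The updated mixture carries one low-weight missed-detection component and one strongly reinforced component; thresholding such weights (as in Table 9.3) yields the feature estimates.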

As in Section 8.3 (Equations 8.11 through 8.14), the terms $\mu_{k|k}$, $P_{k|k}$, and $S_k$ can be obtained using any standard filtering technique, such as the EKF or UKF [8], and $h_{\mathrm{spatial}}(\cdot)$ represents the spatially interpreted sensor model introduced in Section 4.5.1. In this chapter, the EKF updates of Section 8.3 are adopted. The clutter intensity c(z) is the same as that defined in Section 8.3, Equation 8.15.

9.3.2  The Vehicle Trajectory—Implementation

The PHD-SLAM filter adopts a particle approximation of the posterior vehicle trajectory, $p_{k|k}(X^k)$, which is sampled/resampled as follows:

Step 1: Sampling Step

·  For i = 1, …, N, sample $\tilde{X}_k^{(i)} \sim f_{veh}(\tilde{X}_k^{(i)} \mid X_{k-1}^{(i)}, U_{k-1})$ and set

$\tilde{\eta}_k^{(i)} = \dfrac{g(Z_k \mid Z^{k-1}, (\tilde{X}^k)^{(i)})\, f_{veh}(\tilde{X}_k^{(i)} \mid X_{k-1}^{(i)}, U_{k-1})}{f_{veh}(\tilde{X}_k^{(i)} \mid X_{k-1}^{(i)}, U_{k-1})}\, \eta_{k-1}^{(i)}$  (9.27)

·  Normalize the weights: $\sum_{i=1}^{N} \tilde{\eta}_k^{(i)} = 1$

Step 2: Resampling Step

·  Resample $\left\{ \tilde{\eta}_k^{(i)}, (\tilde{X}^k)^{(i)} \right\}_{i=1}^{N}$ to get $\left\{ \eta_k^{(i)}, (X^k)^{(i)} \right\}_{i=1}^{N}$

Since the vehicle transition density is chosen as the proposal density, as with FastSLAM,

$\tilde{\eta}_k^{(i)} = g_k(Z_k \mid Z^{k-1}, (\tilde{X}^k)^{(i)})\, \eta_{k-1}^{(i)}$  (9.28)

and $g(Z_k \mid Z^{k-1}, (\tilde{X}^k)^{(i)})$ can be evaluated in closed form according to $M_k$ being the empty map (Equation 9.16) or a single feature map (Equation 9.17), with

$\hat{m}_{k|k-1}^{(i)} = \sum_{j=1}^{J_{k|k-1}^{(i)}} w_{k|k-1}^{(i,j)} \qquad \text{and} \qquad \hat{m}_{k|k}^{(i)} = \sum_{j=1}^{J_k^{(i)}} w_k^{(i,j)}$  (9.29)
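The sampling, weighting, and resampling steps of this section can be sketched as follows. This is an assumed illustration (not the book's code): particles are plain dictionaries, g_traj stands in for the trajectory-conditioned likelihood of Equation 9.16 or 9.17, and systematic resampling is one assumed choice of scheme.

```python
import math
import random

# Sketch (assumed) of Section 9.3.2: sample each particle's pose from the
# vehicle transition density, weight by the trajectory-conditioned measurement
# likelihood (Equation 9.28), normalize, then resample.

def propagate_and_weight(particles, u, g_traj, f_veh_sample):
    for p in particles:
        p["pose"] = f_veh_sample(p["pose"], u)   # X_k ~ f_veh(.|X_{k-1}, U_{k-1})
        p["trajectory"].append(p["pose"])
        p["weight"] *= g_traj(p)                 # eta_k = g(Z_k|Z^{k-1}, X^k) eta_{k-1}
    total = sum(p["weight"] for p in particles)
    for p in particles:
        p["weight"] /= total                     # normalize the weights
    return particles

def systematic_resample(particles):
    """Systematic (low-variance) resampling with a single random offset."""
    N = len(particles)
    offset = random.random() / N
    cum, c = [], 0.0
    for p in particles:
        c += p["weight"]
        cum.append(c)
    out, j = [], 0
    for i in range(N):
        pos = offset + i / N
        while j < N - 1 and cum[j] < pos:
            j += 1
        out.append({**particles[j], "weight": 1.0 / N})
    return out

random.seed(0)
parts = [{"pose": 0.0, "trajectory": [0.0], "weight": 0.25} for _ in range(4)]
parts = propagate_and_weight(
    parts, u=1.0,
    g_traj=lambda p: math.exp(-abs(p["pose"] - 1.0)),   # toy likelihood
    f_veh_sample=lambda x, u: x + u + random.gauss(0.0, 0.1))
parts = systematic_resample(parts)
print(len(parts), sum(p["weight"] for p in parts))
```

After resampling, all particles carry the uniform weight 1/N, as in Step 2 above.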

Note that the incorporation of the measurement-conditioned proposal of FastSLAM 2.0 [9] could also be accommodated in this framework. This direction of research focuses on more efficient SMC approximations and is an avenue for further research.

9.3.3  Estimating the Map

In contrast to vector-based SLAM algorithms, the PHD map representation allows for a natural ability to average feature maps. That is, independent map estimates from N independent trajectory particles can be readily averaged into an expected map, even with map estimates of different size, and without having to resolve the intra-map feature associations. Consequently, in the case of the RB-PHD-SLAM filter, both the expected vehicle trajectory and feature map can be determined as follows. Given the posterior set of weighted particles and corresponding map PHDs,

$\left\{ \eta_k^{(i)},\, (X^k)^{(i)},\, \upsilon_{k|k}^{(i)}(m \mid (X^k)^{(i)}) \right\}_{i=1}^{N}$  (9.30)

and $\eta = \sum_{i=1}^{N} \eta_k^{(i)}$, then

$\hat{X}^k = \frac{1}{\eta} \sum_{i=1}^{N} \eta_k^{(i)} (X^k)^{(i)}$  (9.31)

The posterior PHD of the map is then the expectation of the trajectory-conditioned PHDs, and thus

$\upsilon_{k|k}(m \mid X^k) = \frac{1}{\eta} \sum_{i=1}^{N} \eta_k^{(i)}\, \upsilon_{k|k}^{(i)}(m \mid (X^k)^{(i)})$  (9.32)

If $\hat{m}_{k|k} = \int \upsilon_{k|k}(m \mid X^k)\, dm$ is the mass of the posterior map PHD, the expected map estimate can then be extracted by choosing the $\hat{m}_{k|k}$ highest local maxima.

Table 9.1  RB-PHD-SLAM Predict Algorithm

RB-PHD-SLAM-Predict
( { η_{k−1}^{(i)}, (X^{k−1})^{(i)}, υ_{k−1|k−1}^{(i)}(m | Z^{k−1}, X_{k−1}^{(i)}) }_{i=1}^{N}, Z_{k−1}, U_{k−1} )
    // Construct Equation 9.23
1.  for i = 1 to N do
      // Sample pose from the vehicle state transition equation
2.    X_k^{(i)} ∼ f_veh( X_k^{(i)} | X_{k−1}^{(i)}, U_{k−1} )
      // Predict the map (same as Table 8.2)
3.    GMM-PHD-FBRM-Predict( Z_{k−1}, υ_{k−1|k−1}^{(i)}, υ_{k|k−1}^{(i)}(m | Z^{k−1}, X_{k−1}^{(i)}) )
4.  end for
5.  return { η_{k−1}^{(i)}, X_k^{(i)}, υ_{k|k−1}^{(i)}(m | Z^{k−1}, X_{k−1}^{(i)}) }_{i=1}^{N}

9.3.4  GMM-PHD-SLAM Pseudo Code

An example pseudo-code implementation of the RB-PHD-SLAM filter is provided in Tables 9.1 and 9.2, with a maximum a posteriori (MAP) estimator for the trajectory and its corresponding map being given in Table 9.3. Note that an estimator for the expected a posteriori (EAP) trajectory and its map can also be constructed, as shown in [6, 7].
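Because the averaging in Equation 9.32 acts on intensity functions rather than on feature vectors, maps of different sizes can be fused without any feature association. The sketch below, an assumed illustration with PHDs discretized on a common 1-D grid (not the book's code), averages two particles' map PHDs and reads off the expected number of features from the PHD mass.

```python
import numpy as np

# Illustrative sketch of Equation 9.32: the expected map PHD as the
# weighted average of trajectory-conditioned map PHDs, discretized on a grid.

grid = np.linspace(0.0, 50.0, 501)
dm = grid[1] - grid[0]

def gm_phd_on_grid(weights, means, sigma=0.5):
    v = np.zeros_like(grid)
    for w, mu in zip(weights, means):
        v += w * np.exp(-0.5 * ((grid - mu) / sigma) ** 2) \
             / (np.sqrt(2.0 * np.pi) * sigma)
    return v

# Two trajectory particles with map hypotheses of different sizes and
# normalized weights eta; all numbers are assumed toy values.
eta = [0.7, 0.3]
phds = [gm_phd_on_grid([1.0, 0.9], [10.0, 30.0]),   # two-feature hypothesis
        gm_phd_on_grid([1.0], [10.2])]              # one-feature hypothesis

v_avg = eta[0] * phds[0] + eta[1] * phds[1]          # Equation 9.32
mass = np.sum(v_avg) * dm                            # the PHD mass m_hat
n_features = int(round(mass))
print(n_features)
```

The rounded mass (here about 1.63, rounding to 2) gives the number of highest local maxima to extract as the expected map estimate.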

9.4  RB-PHD-SLAM Computational Complexity

The computational complexity of RB-PHD-SLAM is O(m_k z_k N); that is, linear in the number of features (in the FoV), linear in the number of measurements, and linear in the number of trajectory particles.

Table 9.2  RB-PHD-SLAM Update Algorithm

RB-PHD-SLAM-Update
( { η_{k−1}^{(i)}, X_k^{(i)}, υ_{k|k−1}^{(i)}(m | Z^{k−1}, X_{k−1}^{(i)}) }_{i=1}^{N}, Z_k )
1.  for i = 1 to N do
      // Update the map PHD (same as Table 8.3)
2.    GMM-PHD-FBRM-Update( Z_k, X_k^{(i)}, υ_{k|k−1}^{(i)}(m | Z^{k−1}, X_{k−1}^{(i)}) )
      // Predicted PHD mass
3.    m̂_{k|k−1} = Σ_{j=1..J_{k|k−1}^{(i)}} w_{k|k−1}^{(i,j)}
      // Updated PHD mass
4.    m̂_{k|k} = Σ_{j=1..J_k^{(i)}} w_k^{(i,j)}
5.    if (Empty Strategy TRUE) do
        // Updated trajectory weight of (9.16), with clutter parameter λ_c
6.      η_k^{(i)} = c(z)^{|Z_k|} exp( m̂_{k|k} − m̂_{k|k−1} − λ_c ) η_{k−1}^{(i)}
7.    end if
8.    if (Single Feature Strategy TRUE) do
        // Select a given map state m_chosen
9.      select j ∈ {1, …, J_k^{(i)}} such that μ^{(i,j)} = m_chosen
10.     a = [ (1 − P_D) c(z)^{|Z_k|} + P_D w_{k|k−1}^{(i,j)} Σ_{z∈Z_k} c(z)^{|Z_k|−1} N( z; ẑ_{k|k−1}^{(i,j)}, S_k^{(i,j)} ) ]
11.     b = exp( m̂_{k|k−1} − m̂_{k|k} + λ_c ) w_k^{(i,j)}
        // Update trajectory weight of (9.17)
12.     η_k^{(i)} = (a / b) η_{k−1}^{(i)}
13.   end if
14. end for
15. return { η_k^{(i)}, X_k^{(i)}, υ_{k|k}^{(i)}(m | Z^k, X_k^{(i)}) }_{i=1}^{N}

Table 9.3  RB-PHD-SLAM-MAP Estimate Algorithm

RB-PHD-SLAM-MAP-Estimate
( { η_k^{(i)}, X_k^{(i)}, υ_{k|k}^{(i)}(m | Z^k, X_k^{(i)}) }_{i=1}^{N}, Z_k, T_feature )
    // Initialize the map estimate
1.  M̂_k = ∅
2.  I = {1, …, N}
    // Get the index of the maximum-weight particle
3.  j = arg max_{i∈I} η_k^{(i)}
    // Estimated trajectory
4.  X̂^k = (X^k)^{(j)}
    // Estimate the map from the corresponding map PHD
5.  for i = 1 to J_k^{(j)} do
6.    if w_k^{(j,i)} > T_feature
        // Concatenate the estimate
7.      M̂_k = [ M̂_k  μ_k^{(j,i)} ]
8.    end if
9.  end for
    // RB-PHD-SLAM-MAP estimate
10. return ( X̂^k, M̂_k )
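The predict, update, and estimate steps of Tables 9.1 through 9.3 can be organized as one cycle. The following skeleton is an assumed structural sketch, not the book's code: sample_pose, predict_map_phd, update_map_phd, trajectory_likelihood, and resample are hypothetical stand-ins for the corresponding subroutines (f_veh, the GMM-PHD-FBRM routines of Tables 8.2 and 8.3, Equations 9.16/9.17, and a resampling scheme).

```python
import math
from dataclasses import dataclass, field

# Structural sketch (assumed) of one RB-PHD-SLAM cycle, as organized in
# Tables 9.1-9.3. All injected callables are hypothetical stand-ins.

@dataclass
class Particle:
    weight: float
    trajectory: list                     # hypothesized poses X_0 ... X_k
    gm_weights: list                     # map PHD GM weights w^(i,j)
    gm_means: list                       # map PHD GM means mu^(i,j)
    gm_covs: list = field(default_factory=list)

def rb_phd_slam_cycle(particles, u, Z, sample_pose, predict_map_phd,
                      update_map_phd, trajectory_likelihood, resample):
    # Predict (Table 9.1): sample a pose and predict each map PHD.
    for p in particles:
        p.trajectory.append(sample_pose(p.trajectory[-1], u))
        predict_map_phd(p)
    # Update (Table 9.2): map PHD update, then the trajectory weight.
    for p in particles:
        m_pred = sum(p.gm_weights)               # predicted PHD mass
        update_map_phd(p, Z)
        m_post = sum(p.gm_weights)               # updated PHD mass
        p.weight *= trajectory_likelihood(m_pred, m_post, p, Z)
    total = sum(p.weight for p in particles)
    for p in particles:
        p.weight /= total
    return resample(particles)

def map_estimate(particles, t_feature=0.5):
    # MAP estimate (Table 9.3): best particle; features above the threshold.
    best = max(particles, key=lambda p: p.weight)
    feats = [m for w, m in zip(best.gm_weights, best.gm_means) if w > t_feature]
    return best.trajectory, feats

# Toy run with two particles and trivial stand-in subroutines.
parts = [Particle(0.5, [0.0], [0.9, 0.3], [5.0, 20.0]),
         Particle(0.5, [0.0], [0.8], [5.1])]
parts = rb_phd_slam_cycle(
    parts, u=1.0, Z=[5.2],
    sample_pose=lambda x, u: x + u,
    predict_map_phd=lambda p: None,
    update_map_phd=lambda p, Z: None,
    trajectory_likelihood=lambda m0, m1, p, Z: math.exp(-abs(p.gm_means[0] - Z[0])),
    resample=lambda ps: ps)
traj, feats = map_estimate(parts)
print(len(traj), feats)
```

Note how feature management is implicit: the map estimate falls out of thresholding the GM weights of the best particle, with no separate data association step.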


RB-PHD-SLAM simulations in [6] have shown that the computational time is comparable with that of the multiple hypothesis (MH)-FastSLAM algorithm. Note that due to the RB structure of RB-PHD-SLAM, binary tree-based enhancements, such as those applied to traditional FastSLAM [1], can be readily developed to further reduce the complexity to O(z_k N log(m_k)). Furthermore, in contrast to data association-based methods, the RB-PHD-SLAM approach admits numerous other computational enhancements, since the map PHD update can be segmented, executed in parallel, and subsequently fused for state estimation. This is in contrast to vector-based approaches, which do not scale gracefully with the number of features and measurements.

9.5  Radar-Based Comparisons of RFS and Vector-Based SLAM

This section presents comparisons and analyses of vector-based NN-EKF-SLAM and MH-FastSLAM with set-based RB-PHD-SLAM using FMCW radar data sets in a built-up university campus environment. The parking lot environment of earlier chapters was again used, so that the vehicle could execute looped trajectories. Such trajectories are considered challenging for state-of-the-art SLAM algorithms [10]. A much larger scenario is tested in Chapter 10, where accurate SLAM performance in the presence of high clutter is essential, at sea, using an "autonomous kayak" as the vehicle and a commercially available X-band radar. The results of superimposing the single scan-based OS-CFAR detections, relative to odometry, as well as the odometry pose estimates themselves, are depicted in Figure 9.1 (left).

Figure 9.1  Left: Odometry and extracted clusters from the radar data representing the raw inputs to the SLAM algorithms. Right: The ground truth trajectory (central oval path) obtained by matching laser data due to a lack of GPS data.

The detections are based on 700 scans recorded during the

three-loop trajectory in the parking lot. The information displayed in this figure can be thought of as the information input to the SLAM algorithms, which must be processed to yield the best estimated trajectory and map. Given the tree coverage and surrounding buildings in the area, GPS was generally not available. Ground truth was thus obtained by manually matching successive scans from a laser range finder that was also mounted on the vehicle, with graphical verification also provided in Figure 9.1 (right). The vehicle was driven at approximately 1.5 m/s around the three loops, with a control input frequency of 10 Hz and a radar scan frequency of 2.5 Hz.

The estimated ground truth vehicle trajectory of Figure 9.1 (right) allows the OS-CFAR detections to be superimposed onto this path for manual verification purposes. This is shown, together with a photo of the parking lot scene, in Figure 9.2. This figure is shown merely for reference purposes, so that visual comparison between the now ground-truth-referenced clusters of radar data can be made with objects in the photograph, some of which are labeled. It must be noted, however, that the information available to the vehicle's onboard computer is only that of Figure 9.1 (left).

Figure 9.2  OS-CFAR radar detections, superimposed onto the vehicle's multiloop trajectory, within the parking lot environment.

The parameters used in the RB-PHD-SLAM filter are shown in Table 9.4.

Table 9.4  Filter Parameters Used for the Parking Lot Trial

Filter Parameter | Value
Velocity input standard deviation (std.) | 1.5 m/s
Steering input std. | 5.0°
Range measurement std. | 1.00 m
Bearing measurement std. | 2.0°
Probability of detection P_D | 0.95
Clutter rate ν_c | 20 m⁻²
Sensor maximum range | 10 m
Sensor field-of-view (FoV) | 360°
Feature existence threshold | 0.5

In these experiments, the NN-EKF using joint compatibility branch and bound (JCBB) data association (Chapter 4, Section 4.6.2.2) and FastSLAM with multiple hypothesis (MH) data association (MH-FastSLAM, Chapter 4, Section 4.6.5), both using a mutual exclusion gate and a 95 percent χ² confidence gate, were used as the benchmark comparisons. For each SLAM filter, identical sequences of control inputs and measurements were provided. The RB-PHD-SLAM filter used 50 trajectory particles, while for MH-FastSLAM a maximum limit of 2,000 particles (the number of hypotheses considered prior to resampling) was used. The "single feature map" trajectory weighting of Equation 9.17 is adopted for the RB-PHD-SLAM filter. An implementation using the "empty map update" of Equation 9.16 was presented in [11]. While any feature can theoretically be elected to generate the trajectory weighting, in this implementation, that which generates the maximum likelihood among all measurements is chosen. A comprehensive study as to the best-suited feature-selection strategies is an interesting concept for future work.

Vector-based SLAM filters deal with clutter through "feature management" routines, such as the landmark's quality [12] or a binary Bayes filter [1]. These operations are typically independent of the joint SLAM filter update, whereas the RB-PHD-SLAM approach unifies feature management, data association, and state filtering into a single Bayesian update. While these methods have been used successfully, they generally discard the sensor's, or feature extraction algorithm's, detection (P_D) and false alarm (P_fa) probabilities and thus can be erroneous when subject to large clutter rates and/or measurement noise. Since the RB-PHD-SLAM approach assumes knowledge of these probabilities, as seen in Equation 9.10, a modified feature management routine, coined the "feature existence filter" (see Appendix C), which incorporates both P_D and P_fa, is used with the benchmark algorithms in an attempt to be "fairer" to them in the comparisons. Given the small-sized loop, the maximum range of the radar was set at 30 m, and the NN-EKF, MH-FastSLAM, and RB-PHD-SLAM algorithms were executed on the dataset.
Figure 9.3 depicts the posterior estimated trajectory and map using the NN-EKF algorithm (left) and that from MH-FastSLAM (right), given the same control input samples. Finally, Figure 9.4 shows the SLAM estimates based on the RB-PHD-SLAM algorithm. For both the MH-FastSLAM and RB-PHD-SLAM algorithms, the map of the highest weighted trajectory estimate is used as the map estimate. In all the figures, the ground truth and estimated trajectories are labeled, and the circles represent the ground truth feature locations. The crosses represent the estimated map. In the case of FastSLAM, this is derived with respect to the MAP FastSLAM trajectory estimate (the particle (trajectory) with the final maximum weight). In the case of MH-FastSLAM, the trajectory "MH-FastSLAM path estimate" indicates the MAP trajectory estimate, which corresponds to the particle with the maximum weight at each time.

Figure 9.3  Left: The posterior estimate from NN-EKF-SLAM using the radar-based parking lot dataset. Right: The posterior estimate from MH-FastSLAM using the same dataset. Both algorithms result in a diverged vehicle trajectory estimate as well as incorrect map hypotheses.

Figure 9.4  The posterior estimate from RB-PHD-SLAM using the same dataset. The integrated Bayesian framework for SLAM, incorporating data association and feature management, enhances the robustness of the SLAM algorithm given noisy measurements.

Given that the RB-PHD-SLAM filter jointly incorporates data association and feature number uncertainty into its Bayesian recursion, it is more robust to large sensing error, as it does not rely on hard measurement-feature assignment decisions. Furthermore, it jointly estimates the number of features and their locations, alleviating the need for feature-management methods [1, 12]. In terms of the estimated trajectory, the first-order RB-PHD-SLAM algorithm can be seen to outperform both the vector-based NN-EKF and FastSLAM algorithms, both of which were implemented with data association methods considered to be robust. Given the noisy measurements from the radar sensor, the merits of the RB-PHD-SLAM approach are demonstrated. It should be noted that, as is the case with any experimental dataset, the ground truth feature map is extremely difficult to obtain, making it challenging to evaluate the feature map estimation error.

9.6  Summary

This chapter presented a tractable solution for the feature-based full SLAM problem. The finite-set representation of the map admits the notion of an expected map in the form of a PHD, or intensity function. An RB implementation of the filter was derived, in which the PHD of the map was propagated using a GM-PHD filter and a particle filter propagated the vehicle trajectory density. A closed-form solution for the trajectory weighting was also presented, alleviating the need for the approximations commonly used in vector-based methods. Analyses were carried out based on an outdoor FMCW radar data set. Results demonstrated the robustness of the RB-PHD-SLAM filter, particularly in the presence of large data association uncertainty and clutter, illustrating the merits of adopting an RFS approach to SLAM. In terms of its computational complexity, the RB SLAM filter was shown to be linear in the number of estimated features, measurements, and trajectory particles. It should be noted that computational enhancements are possible, in terms of parallelizable operations, which are not possible with vector-based approaches that require data association.

9.7  Bibliographical Remarks

SLAM based on RFS techniques was first suggested in [13], with preliminary studies using a "brute force" implementation. The approach modeled the joint vehicle trajectory and map as a single RFS and recursively propagated its first-order moment. Initial results of RB-PHD-SLAM were presented in [11], with further refinements in [7], and these techniques are reviewed in the single monograph [6]. A factorized approach to SLAM was established in the (now well-known) FastSLAM concept [1]. However, this chapter has shown that the same factorization method applied to vectors in FastSLAM cannot be applied to sets, since it results in invalid densities in the feature space. Therefore, one of the main contributions of the RB-PHD-SLAM approach is that it allows such a factorization to be applied to sets in a principled manner.

References

[1] Montemerlo, M., S. Thrun, and B. Siciliano, FastSLAM: A Scalable Method for the Simultaneous Localization and Mapping Problem in Robotics, Berlin: Springer, 2007.
[2] Neira, J., and J. D. Tardos, "Data Association in Stochastic Mapping Using the Joint Compatibility Test," IEEE Transactions on Robotics and Automation, Vol. 17, December 2001, pp. 890–897.
[3] Nieto, J., J. Guivant, E. Nebot, and S. Thrun, "Real Time Data Association for FastSLAM," in IEEE International Conference on Robotics and Automation, Vol. 1, September 2003, pp. 412–418.
[4] Mullane, J., B.-N. Vo, M. Adams, and W. S. Wijesoma, "A Random Set Approach to SLAM," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) Workshop on Visual Mapping and Navigation in Outdoor Environments, Japan, May 2009.
[5] Kalyan, B., K. W. Lee, and W. S. Wijesoma, "FISST-SLAM: Finite Set Statistical Approach to Simultaneous Localization and Mapping," International Journal of Robotics Research, Vol. 29, No. 10, September 2010, pp. 1251–1262.
[6] Mullane, J., B.-N. Vo, M. Adams, and B.-T. Vo, Random Finite Sets for Robot Mapping and SLAM, Springer Tracts in Advanced Robotics, Vol. 72, Berlin: Springer, 2011.
[7] Mullane, J., B.-N. Vo, M. D. Adams, and B.-T. Vo, "A Random-Finite-Set Approach to Bayesian SLAM," IEEE Transactions on Robotics, Vol. 27, No. 2, April 2011, pp. 268–282.
[8] Julier, S. J., and J. K. Uhlmann, "A New Extension of the Kalman Filter to Nonlinear Systems," in Int. Symp. Aerospace/Defense Sensing, Simulation and Controls, Orlando, FL, 1997, pp. 35–45.
[9] Montemerlo, M., S. Thrun, D. Koller, and B. Wegbreit, "FastSLAM 2.0: An Improved Particle Filtering Algorithm for Simultaneous Localization and Mapping that Provably Converges," in Proceedings of the 18th International Joint Conference on Artificial Intelligence, CA, 2003, pp. 1151–1156.
[10] Adams, M., "Simultaneous Localisation and Map Building," Robotics and Autonomous Systems, Vol. 55, No. 1, January 2006.
[11] Mullane, J., B.-N. Vo, and M. Adams, "Rao-Blackwellised PHD SLAM," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Alaska, May 2010.
[12] Dissanayake, M. W. M. G., P. Newman, S. Clark, H. F. Durrant-Whyte, and M. Csorba, "A Solution to the Simultaneous Localization and Map Building (SLAM) Problem," IEEE Transactions on Robotics and Automation, Vol. 17, No. 3, June 2001, pp. 229–241.
[13] Mullane, J., B.-N. Vo, M. Adams, and W. S. Wijesoma, "A Random Set Formulation for Bayesian SLAM," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, France, September 2008.

Chapter 10

X-Band Radar-Based SLAM in an Off-Shore Environment

10.1  Introduction

This chapter presents a simultaneous localization and mapping (SLAM) algorithm implemented on an autonomous sea kayak with a commercially available X-band marine radar. The autonomous surface craft (ASC) was remote controlled in an offshore test site in Singapore's southern Selat Puah marine environment. Data from the radar, GPS, and an inexpensive single-axis gyro were logged using an onboard processing unit as the ASC traversed the environment, which comprised geographical and sea-surface vessel landmarks. An automated feature extraction routine is presented, based on a probabilistic landmark detector, followed by a clustering and centroid approximation approach. With restrictive feature modeling and a lack of vehicle control input information, it is demonstrated that, via the Rao-Blackwellized probability hypothesis density SLAM (RB-PHD-SLAM) filter presented in Chapter 9, useful localization and mapping results can be obtained, despite an actively rolling and pitching ASC on the sea surface. As in Chapter 9, vector-based SLAM algorithms based on the extended Kalman filter (EKF) with nearest neighbor (NN) data association (NN-EKF-SLAM), and the factorized solution to SLAM (FastSLAM) with multiple hypothesis (MH) data association, are also implemented and compared. The fundamental limitations of these techniques, in terms of their inability to jointly estimate feature number and to encapsulate target and clutter probability density information, are clearly demonstrated in this highly cluttered, yet realistic, scenario. In addition, the merits of investigating ASC SLAM are demonstrated, particularly with respect to map estimation, obstacle avoidance, and target tracking. Despite the presence of GPS and gyro data, heading information on such small ASCs is greatly compromised, which induces large sensing errors, further accentuated by the large range of the radar.
The analysis presented in this chapter can be considered a step toward realizing an ASC capable of performing environmental or security surveillance and reporting a real-time, active awareness of an above-water scene. While the vast majority of SLAM systems operate on grounded land vehicles [1, 2], the operating environment examined in this chapter is a marine scenario. Figure 10.1 depicts this scene as a sea chart fused with satellite imagery, where the scale of the image is approximately 10 km by 10 km. The test site within the scene where trials were carried out is also highlighted. Despite the availability of positional information via GPS with minimal occlusion, performing SLAM with an ASC gives rise to a number of challenging issues, especially when the map estimation component

Figure 10.1  The experimental testing ground at Singapore's Selat Puah: A fused satellite/sea-chart overview (© 2010 IEEE. Reprinted with permission from J. Mullane, et al., "X-Band Radar-Based SLAM in Singapore's Off-Shore Environment," 11th International Conf. on Control, Automation, Robotics, and Vision, Singapore, 2010).

of SLAM is considered. In general, for a given sea state, as the mass of water displaced by a surface vessel decreases, the vessel will roll, pitch, and yaw more than a larger vessel. This means that the uncertainty of its heading will increase. Thus, for the 3 m long, 100 kg (fully laden) ASC used in the experiments of this chapter, heading uncertainty poses a serious concern, particularly when registering the relative measurements to a global frame. Moreover, the smaller size allows for increased maneuverability, introducing frequent sharp turns into the ASC's trajectory, adding further challenges to the task of map building and target tracking. The chapter proceeds as follows. The ASC and the coastal environment, in terms of expected landmarks, are described in Section 10.2. Section 10.3 then describes the application of the OS-CFAR of Chapter 3, Section 3.5.2 to the extraction of landmarks from marine radar, in a coastal environment. Since the OS-CFAR processor yields many false alarms in such an environment, the vector-based SLAM algorithms, which rely on separate data association routines, either fail completely or yield very poor results. Therefore, in this section, radar PPI displays are treated as images, allowing image-processing noise reduction methods to be applied. This results in a greatly reduced feature set, with which NN-EKF-SLAM can operate with some success. In Section 10.4, a brief review and implementation details of the RB-PHD-SLAM, NN-EKF-SLAM, and MH-FastSLAM concepts are given. Comparisons of all three SLAM concepts during multiple loop trajectories in the coastal environment are given in Section 10.5, with conclusions presented in Section 10.6. A brief review of other work that examines the navigation of robotic vehicles on and below the sea surface is given in Section 10.7.

10.2  The ASC and the Coastal Environment

This section details the hardware deployed. The ASC is a robotic sea kayak, a low-cost, high-load-bearing platform that is highly maneuverable and capable of operating in shallow waters. It was originally developed at the Department of Mechanical and Ocean Engineering at MIT for experiments in autonomous navigation in rivers and coastal environments [3]. For stabilization in the choppy waters common to Singapore's Selat Puah, lateral buoyancy aids were added to the platform, under the Singapore–MIT Alliance collaborative venture (Center for Environmental Sensing and Monitoring, CENSAM), as depicted in Figure 10.2. In Figure 10.2(a), the underwater, steerable, rear-mounted, remote control electric thruster is also visible. Figure 10.2(b) shows the ASC at sea, with the X-band radar mounted on a 1.5 m pole above the sea surface. The X-band radar used was the M-1832 BlackBox Radar from Furuno, powered by an onboard battery. The mechanically scanned beam has a width of 3.9° in azimuth and 20° in elevation. The large elevation beamwidth makes the sensor robust to the sometimes severe pitch and roll of the ASC. A GPS receiver (Crescent Hemisphere 110), as well as a KVH Industries DSP5000 single-axis gyroscope for 3-DOF pose (x_k, y_k, φ_k) measurements, were also used in the experiments. An onboard processing unit logged the GPS and gyro data at a rate of 1 Hz, with the radar data being sampled and logged at a scan rate of 0.5 Hz (i.e., one full 360° sweep of the environment required 2 seconds). Given that the distance traversed by the ASC over a single radar scan is negligible compared to the radar's maximum range capability of 36 nautical miles, and compared to its low resolution, the issues of distortion with mechanically scanned sensors [4] were considered insignificant in this work. An automatic identification system (AIS) receiver was also used in some of the experiments.
AIS is a short-range coastal tracking system used on ships to identify and locate other vessels by electronically exchanging data. This enabled the system to receive position and speed estimates from a large number of vessels present in the area. Since those vessels were used as features in the SLAM algorithms, this source of information served as ground truth: the features extracted from the radar data could be verified against, and compared with, the positions delivered by the AIS.
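As an illustration of how the AIS reports might be compared against the radar-extracted features, a simple nearest-neighbor association within a distance gate can be used. This is a hedged sketch only; the function name, the 100 m gate, and the greedy matching strategy are illustrative assumptions, not the authors' procedure.

```python
import math

def match_features_to_ais(features, ais_positions, gate=100.0):
    """Greedily associate each radar feature (x, y) with the nearest
    unused AIS vessel report (x, y) within the gating distance (meters).
    Returns (feature_index, ais_index) pairs; unmatched features are skipped."""
    pairs, used = [], set()
    for i, (fx, fy) in enumerate(features):
        best, best_d = None, gate
        for j, (ax, ay) in enumerate(ais_positions):
            if j in used:
                continue
            d = math.hypot(fx - ax, fy - ay)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            pairs.append((i, best))
            used.add(best)
    return pairs
```

Each matched pair then allows a radar feature position to be checked against the AIS-reported vessel position.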

(a) The lateral stabilizers and thruster of the ASC.

(b) The ASC at sea, carrying the X-band radar.

Figure 10.2  Pictures of the adapted-kayak autonomous surface craft (ASC).

(a) The coastal environment and landmarks.

(b) A buoy providing a fixed landmark.

Figure 10.3  The environment and landmarks used for coastal SLAM (© 2010 IEEE. Reprinted with permission from J. Mullane, et al., "X-Band Radar-Based SLAM in Singapore's Off-Shore Environment," 11th International Conf. on Control, Automation, Robotics, and Vision, Singapore, 2010).

Figure 10.4  A sample scan from the X-band radar mounted on the ASC.

Figure 10.3 shows the coastal environment and the type of landmarks to be detected and used in the sea-based SLAM experiments. While the unusually low mounting height of the marine radar undoubtedly increases the sea clutter interference in the logged data, by adopting suitable processing algorithms the signal can be readily used in recursive localization and map estimation filters, as demonstrated later in Section 10.5. For the trials carried out in this work, the radar range bin resolution, δr(q), was set to 7.5 m, with a maximum range of 7.68 km. An example of a logged radar scan, superimposed from the measured pose of the ASC at that time onto the ground truth image of Figure 10.1, is shown in Figure 10.4. The measured GPS data of the ASC trajectory is also just visible, shown as the small black path within the ellipse. In this example, note the slight angular misalignment due to the poor heading measurement. Valid reflections from landmarks at distances greater than 5 km from the ASC are evident in the data, despite the low mounting height of the sensor. The following section describes the extraction of features from this data for the purpose of performing SLAM.

10.3  Marine Radar Feature Extraction

This section describes the extraction of suitable features from the data depicted in Figure 10.4. Noise in the radar A-scopes in coastal environments largely consists of sea clutter: reflections from the sea surface due to the rolling and pitching of the small ASC. Prior to performing feature selection, the data is first classified, via a CFAR detector, under the target hypothesis, H1, or, in this chapter, the no-target or clutter hypothesis, H0. As introduced in Chapter 3, Section 3.5, such probabilistic detection methods are based on an underlying assumption on the noise amplitude statistics. In a coastal environment, noise in the A-scopes can be reasonably assumed to be primarily comprised of sea clutter. The amplitude distribution of the sea clutter is obtained empirically by Monte Carlo (MC) analysis over a large number of sample scans, using manually selected windows containing only radar returns from the sea. The adaptive OS-CFAR detector, introduced in Section 3.5.2, is then applied to determine a set of features in the data. This set of features contains sea vessels and buoys, as well as many sea clutter returns due to violations of the IID and exponential sea clutter assumptions used in the derivation of the CFAR processor. Nevertheless, the final SLAM test in this chapter, based on RFS methods, uses this set, containing both correct and many false detected features, directly, to demonstrate its robustness in situations of high clutter. For comparison purposes, however, it is desirable to test the data set with state-of-the-art SLAM techniques from the robotics literature. Standard NN-EKF-SLAM with standard map-management techniques diverges almost immediately with such high clutter levels, due to many data misassociations. Therefore, in an initial experiment, the features extracted by the CFAR detector are smoothed, thresholded, and finally clustered to yield a much reduced feature set. This of course

1. At a radar height of just over 1.5 m above sea level, landmarks of height greater than 3 m above sea level (e.g., ships) will fall into the line of sight of the radar only at distances up to approximately 10 km anyway, due to the curvature of the earth. This neglects atmospheric refraction effects, which can actually increase the line of sight of a radar [5].

results in a loss of information, as valid targets will inevitably be accidentally removed by such techniques. However, this manageable, reduced-size feature set allows an NN-EKF-SLAM implementation for comparison purposes.

10.3.1  Adaptive Coastal Feature Detection—OS-CFAR

If S^lin is once again the linearized received radar signal amplitude with any range compensation removed then, in a similar fashion to the noise distribution used for MMW radar in Chapters 2 and 6, the empirical sea clutter amplitude distribution, p(S^lin | H_0), can be approximated by an exponential distribution,

p(S^{lin} \mid H_0) = \begin{cases} \lambda_{clutter} \exp\left(-\lambda_{clutter} S^{lin}\right) & \text{if } S^{lin} > 0 \\ 0 & \text{otherwise} \end{cases}    (10.1)

where λ_clutter is the clutter exponential distribution parameter. For the coastal experiments, this distribution is demonstrated in Figure 10.5. While for this trial the moment λ_clutter of the sea clutter amplitude distribution may be empirically estimated, constituting a priori information for the feature detection module, in practice the moment may change depending on the sea state or the roll/pitch of the ASC. Due to the closely lying point targets, and to avoid the potential target masking issues discussed in Chapter 3, Sections 3.5.1.3 and 3.5.2.3, an ordered statistics (OS)-CFAR detection method is applied to locally estimate the moment, λ_clutter, in each range bin, q, and derive an adaptive threshold value, S_OS-CFAR(q). Figure 10.6 shows a sample A-scope, recorded at a particular radar bearing angle, comprising sea clutter as well as both point surface craft and extended land landmarks. The theoretical probability of false alarm P_fa^OS-CFAR of the OS-CFAR processor was derived in Equation 3.28 (Section 3.5.2) as

P_{fa}^{OS\text{-}CFAR} = k_{os} \binom{2W}{k_{os}} \frac{(k_{os} - 1)!\,(\gamma + 2W - k_{os})!}{(\gamma + 2W)!}    (10.2)

where W is the CFAR window width on either side of the CUT, k_os is the OS-CFAR k-factor, and γ represents the scaling constant that determines the decision threshold.

Figure 10.5  The empirical sea-clutter amplitude distribution (© 2010 IEEE. Reprinted with permission from J. Mullane, et al., "X-Band Radar-Based SLAM in Singapore's Off-Shore Environment," 11th International Conf. on Control, Automation, Robotics, and Vision, Singapore, 2010).

Figure 10.6  Left: A sample A-scope showing sea clutter, point, and extended targets. Right: The adaptive OS-CFAR detection threshold (© 2010 IEEE. Reprinted with permission from J. Mullane, et al., "X-Band Radar-Based SLAM in Singapore's Off-Shore Environment," 11th International Conf. on Control, Automation, Robotics, and Vision, Singapore, 2010).
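The OS-CFAR thresholding just described can be sketched in a few lines. The following is an illustrative pure-Python sketch, not the authors' implementation (the function names are hypothetical): it first solves Equation 10.2 numerically for the scaling constant γ by bisection, exploiting the fact that P_fa decreases monotonically with γ, and then applies the resulting adaptive ordered-statistic threshold of Equation 10.3 to each range bin.

```python
import math

def os_cfar_gamma(pfa, W, k_os, lo=0.0, hi=100.0, iters=60):
    """Solve Equation 10.2 for the scaling constant gamma by bisection,
    given the desired probability of false alarm."""
    n = 2 * W
    def pfa_of(g):
        # k_os * C(2W, k_os) * (k_os - 1)! * (g + 2W - k_os)! / (g + 2W)!
        # evaluated with log-gamma, since g is real-valued
        return math.exp(math.log(k_os)
                        + math.lgamma(n + 1) - math.lgamma(k_os + 1)
                        - math.lgamma(n - k_os + 1)
                        + math.lgamma(k_os)
                        + math.lgamma(g + n - k_os + 1)
                        - math.lgamma(g + n + 1))
    for _ in range(iters):  # Pfa decreases monotonically as gamma grows
        mid = 0.5 * (lo + hi)
        if pfa_of(mid) > pfa:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def os_cfar_detect(s_lin, W, k_os, gamma):
    """Flag range bins whose linearized power exceeds gamma times the
    k_os-th ranked value of the 2W neighboring cells (the CUT excluded)."""
    detections = []
    for q in range(len(s_lin)):
        window = s_lin[max(0, q - W):q] + s_lin[q + 1:q + 1 + W]
        if len(window) < 2 * W:       # skip edge cells with partial windows
            detections.append(False)
            continue
        t = sorted(window)[k_os - 1]  # ordered-statistic test statistic
        detections.append(s_lin[q] >= gamma * t)
    return detections
```

With the parameters used later in this section (W = 20, k_os = 30, P_fa^OS-CFAR = 0.05), the bisection yields a γ between 2 and 3.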

The decision threshold is scaled to achieve a fixed rate of false alarm. As explained in Chapter 3, Section 3.5.2.1, the value of γ can then be obtained from Equation 10.2 by nonlinear, numerical zero-finding routines, according to the desired P_fa^OS-CFAR. A target is considered detected in range bin q if the received power S^lin(q) obeys

S^{lin}(q) \geq S_{OS\text{-}CFAR}(q) = \gamma\, T(S^{lin}(q))    (10.3)

where T(S^lin(q)) is the test statistic determined from the cells neighboring the CUT q according to the OS-CFAR criterion. It can be recalled from Chapter 3, Section 3.5.2, that the OS-CFAR criterion is that the test statistic T(S^lin(q)) is chosen as the ranked power value S^lin_{k_os}, and that k_os is normally chosen to be 3W/2. The resulting threshold across all the range bins for the sample A-scope is shown in the right-hand graph of Figure 10.6, using the parameters W = 20, P_fa^OS-CFAR = 0.05, and k_os = 3W/2 = 30. As can be seen, the point targets are detected, while most of the land reflections are suppressed, since land reflections have the appearance of clutter measurements. This is useful for SLAM applications, since extended targets are more difficult to reliably parameterize as stable features. The resulting map, however, then reflects only point-like objects. A 360° scan of the coastal environment is shown in Figure 10.7, and the corresponding power values that have exceeded the applied OS-CFAR threshold are shown in Figure 10.8.

Figure 10.7  Raw received power S^{log_rc} 360° scan from the coastal environment.

Figure 10.8  Received power values that have exceeded the OS-CFAR threshold. Power values below the adaptive threshold are replaced by "0" power.

In Figure 10.8, all received power values S^lin(q) < S_OS-CFAR(q) have been reduced to zero, and therefore removed from the scan as assumed noise or sea clutter values. Each of the remaining (nonzero) power values S^lin(q) ≥ S_OS-CFAR(q) was considered a valid point feature in the RB-PHD-SLAM and MH-FastSLAM experiments. For the NN-EKF-SLAM experiment, this feature set, which still contains many false alarms, had to be further reduced, based on image-smoothing techniques.

10.3.2  Image-Based Smoothing—Gaussian Filtering

Based on the point target detections from the OS-CFAR threshold, the regions of the measurement data are further examined to assess their likelihood of representing stable features. As can be deduced from the measurement data of Figure 10.4, targets rarely occupy a single range bin. Therefore, to suppress the high-frequency signal fluctuations, which are primarily noise or sea clutter readings, methods from the machine vision research community are adopted. The received power data can be converted from its natural polar form into a Cartesian representation, using the weighted, four-point transformation method introduced in Chapter 2, Section 2.9. This form of the scan constitutes a Cartesian grid with power values S^log(x, y), represented as color values, which can be treated as an image, allowing image-processing techniques to be applied. The vision-based smoothing methods of the following sections were then applied. The smoothed features were then clustered based on nearby pixels and pruned according to minimum and maximum area constraints. To reduce noise introduced by spurious reflections from the sea and the radar receiver itself, a Gaussian filter was applied to the raw data. The 2D Gaussian low-pass filter was convolved with the regions of the scan identified as landmarks by the OS-CFAR threshold. The Gaussian smoothing operation, often used in computer vision, blurs the image and removes details and noise. An isotropic Gaussian in 2D has the form

f(x, y) = \frac{1}{2\pi\sigma^2} \exp\left( -\frac{x^2 + y^2}{2\sigma^2} \right)    (10.4)

with standard deviation σ and mean (0, 0). Figure 10.9 shows this distribution. The Gaussian smoothing operation is a 2D convolution that uses a Gaussian-shaped

Figure 10.9  2D Gaussian distribution with mean (0, 0) and σ = 1.

kernel. The following discrete kernel K, with a size of 5 × 5 pixels, approximates a Gaussian with mean (0, 0) and σ = 1:

K = \frac{1}{273} \begin{bmatrix} 1 & 4 & 7 & 4 & 1 \\ 4 & 16 & 26 & 16 & 4 \\ 7 & 26 & 41 & 26 & 7 \\ 4 & 16 & 26 & 16 & 4 \\ 1 & 4 & 7 & 4 & 1 \end{bmatrix}    (10.5)

Using this kernel, smoothing was performed by convolution with the image pixels. Since convolution is equivalent to multiplication in the frequency domain, and the Fourier transform of a Gaussian yields another Gaussian, this smoothing operation can be interpreted as a low-pass filter, suppressing high-frequency noise. The parameters of this filter are the width of the quadratic kernel and the standard deviation of the Gaussian function. Figure 10.10 shows the result of applying this filter to the linearized radar image S^lin(x, y) corresponding to the image shown in Figure 10.8. Notice that many, but not all, of the relatively high-power returns, which occupy a small area and mainly represent sea clutter, are suppressed.
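As a minimal sketch of this smoothing step (pure Python, hypothetical function names, not the authors' implementation), the kernel of Equation 10.5 can be generated by sampling Equation 10.4 on a 5 × 5 grid and normalizing, after which it is applied by direct 2D convolution:

```python
import math

def gaussian_kernel(size=5, sigma=1.0):
    """Sample the 2D Gaussian of Equation 10.4 on a size-by-size grid
    centered at (0, 0), then normalize the weights to sum to one."""
    half = size // 2
    k = [[math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
          for x in range(-half, half + 1)]
         for y in range(-half, half + 1)]
    total = sum(sum(row) for row in k)
    return [[v / total for v in row] for row in k]

def convolve2d(image, kernel):
    """Direct 2D convolution; border cells use a truncated window."""
    h, w = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    ch, cw = kh // 2, kw // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for a in range(kh):
                for b in range(kw):
                    ii, jj = i + a - ch, j + b - cw
                    if 0 <= ii < h and 0 <= jj < w:
                        acc += kernel[a][b] * image[ii][jj]
            out[i][j] = acc
    return out
```

Note that the sampled, normalized kernel only approximates the integer kernel of Equation 10.5 (its center weight is roughly 0.16 versus 41/273 ≈ 0.15), since the book's kernel is a rounded integer approximation.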

Figure 10.10  The result of applying the Gaussian filter to the Cartesian image, based on the received power values of Figure 10.8.

10.3.3  Image-Based Thresholding

To further reduce noise in the image, a simple threshold was applied, and all values below the threshold were set to zero. The threshold value was selected based on a histogram of the radar measurements over the whole sequence, as seen in Figure 10.11, beyond which the distribution showed little change. A suitable threshold for S^lin(x, y) was then selected, below which most of the returns were assumed to still constitute noise or sea clutter. In Figure 10.12, the threshold S^lin(x, y) = 2000 was applied to the image of Figure 10.10. A series of objects with high radar reflectivity, corresponding to ships or islands, is visible and clearly separated from the dark background, corresponding to the sea.

10.3.4  Image-Based Clustering


Clustering is the term used to identify parts of the radar image that belong to the same feature. This is achieved by combining all connected pixels that have nonzero value into one cluster. Figure 10.13 shows the result of clustering the pixels of Figure 10.12. Different shades are used to identify different clusters. Clearly, with such image-processing-based techniques, many parameters can be set to achieve a set of clusters. While the OS-CFAR detector is based on a principled P_fa^OS-CFAR value, clear violations of the CFAR assumptions cause many more false alarms than those statistically expected. The image-smoothing methods highlighted here are therefore implemented as a simple means to reduce the number

Figure 10.11  Histogram of the Gaussian smoothed power values of Figure 10.10, with the selected threshold indicated.

Figure 10.12  Simple thresholding applied to the Gaussian smoothed radar image of Figure 10.10.

Figure 10.13  The result of clustering the image of Figure 10.12.

of false alarms. In the SLAM results, it will be shown that state-of-the-art, vector-based methods are unable to yield consistent estimates without such techniques being applied a priori. The presented RFS-based SLAM estimator will, however, be shown to be capable of operating on the results of the OS-CFAR processor (Figure 10.8) only, without the need to resort to such ad hoc clutter/noise reduction techniques.

10.3.5  Feature Labeling

Two characteristics of each cluster are extracted in this step. First, the position of the center of the area in Cartesian coordinates is determined; second, the area of the cluster is determined in terms of pixel count and converted to square meters. The large clusters that represent parts of islands or the mainland were not used in the SLAM experiments. This is because the radar can detect their outline only partially, which leads to unreliable features of changing size and position. Also, a minimum size for clusters determined to be features can be set, to remove some of the smaller noisy power values. In Figure 10.14, the extracted features are processed as described earlier and superimposed onto the cluster image. The areas of the gray circles correspond to the area of each cluster, and their centers are located at the centroid of each cluster. In this example, clusters smaller than 200 m² and larger than 20,000 m², as well as all features within a radius of 300 meters of the radar, were deleted.

Figure 10.14  Features extracted from the clusters of Figure 10.13.
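The thresholding, clustering, and labeling steps of Sections 10.3.3 through 10.3.5 can be sketched as follows. This is a simplified pure-Python illustration under stated assumptions (4-connectivity, a fixed per-cell area, and hypothetical function names), not the authors' implementation:

```python
def extract_features(grid, threshold, cell_area, min_area, max_area):
    """Threshold a Cartesian power grid, cluster connected above-threshold
    cells (4-connectivity flood fill), and return one (centroid, area_m2)
    pair per cluster, pruned by the minimum/maximum area limits."""
    h, w = len(grid), len(grid[0])
    mask = [[grid[i][j] >= threshold for j in range(w)] for i in range(h)]
    seen = [[False] * w for _ in range(h)]
    features = []
    for i in range(h):
        for j in range(w):
            if not mask[i][j] or seen[i][j]:
                continue
            stack, cells = [(i, j)], []
            seen[i][j] = True
            while stack:              # flood fill one connected cluster
                y, x = stack.pop()
                cells.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            area = len(cells) * cell_area        # pixel count -> square meters
            if min_area <= area <= max_area:     # prune small/large clusters
                cy = sum(c[0] for c in cells) / len(cells)
                cx = sum(c[1] for c in cells) / len(cells)
                features.append(((cy, cx), area))
    return features
```

For the trial in this section, the pruning limits would correspond to min_area = 200 m² and max_area = 20,000 m², with an additional 300 m blanking radius around the radar applied separately.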

10.4  The Marine-Based SLAM Algorithms

10.4.1  The ASC Process Model

Unlike ground-based vehicles, which are generally restricted to forward-facing motion dynamics, a sea-based ASC is subject to numerous uncertain disturbances, such as currents and wind, which can move the ASC in any arbitrary direction. To account for these differences, the ASC vehicle kinematic model is slightly different from that applied to the wheeled land vehicle in Chapter 5 (Section 5.4.2), and the following nonlinear process model is adopted:

x_k = x_{k-1} + V_{k-1} \Delta T_k \cos(\phi_{k-1} + \delta\phi_{k-1}) + \upsilon_{k-1}^x
y_k = y_{k-1} + V_{k-1} \Delta T_k \sin(\phi_{k-1} + \delta\phi_{k-1}) + \upsilon_{k-1}^y    (10.6)
\phi_k = \phi_{k-1} + \delta\phi_{k-1} + \upsilon_{k-1}^\phi

where x_k, y_k, and φ_k represent the Easting, Northing, and ASC heading angle with respect to north at time k. This can be expressed in vector form using the vehicle's state vector X_k = [x_k y_k φ_k]^T:

X_k = f^{veh}(X_{k-1}, U_{k-1}, \upsilon_{k-1})    (10.7)

where f^veh(·) corresponds to a vehicle motion vector function encapsulating the three Equations 10.6, and U_{k−1} represents a vector comprising the input velocity signal and the measured angular change; that is, U_{k−1} = [V_{k−1} δφ_{k−1}]^T, recorded by an onboard single-axis gyroscope. υ_{k−1}^x, υ_{k−1}^y, and υ_{k−1}^φ represent random perturbations in the ASC motion due to external sea forces and are modeled by white Gaussian signals, encapsulated in the noise vector υ_{k−1} = [υ_{k−1}^x υ_{k−1}^y υ_{k−1}^φ]^T. ΔT_k = t_k − t_{k−1} is determined from the measurement rate of the gyro. In this experiment, for simplicity, V_k = V_{k−1} and is chosen a priori due to the lack of suitable Doppler velocity log (DVL) sensors. A constant velocity model could also be assumed, accompanied by the recursive estimation of V_k, integrated into the SLAM algorithm. This vehicle process model will be used in all of the SLAM algorithms developed for comparison purposes in this chapter.

10.4.2  RFS SLAM with the PHD Filter

This section describes the feature-based SLAM algorithm of choice, used in this chapter. Stemming from the seminal developments in the tracking community [6, 7], recent SLAM investigations, introduced in Chapter 4, suggested that a feature map is more appropriately represented as a set, rather than a vector, of features, requiring the tools of RFS theory introduced in Chapter 4 and implemented as the RB-PHD-SLAM filter with FMCW radar and a land-based robotic vehicle in Chapter

2. Note that although the gyroscopic measurements are recorded at time k, they are used to provide an estimate of the desired input change in heading at time k − 1. Hence the input value corresponding to the gyro information is δφ_{k−1}.

9 [8–10]. A pseudo-code implementation example was given in Tables 9.1 through 9.3. This approach is also adopted in this chapter, with comparisons against the classical vector-based NN-EKF-SLAM and MH-FastSLAM methods.

10.4.2.1  The RFS Measurement Model

The primary exteroceptive measurement sensor is the X-band radar. Such a sensor is prone to missed detections, false alarms, measurement noise, and data association uncertainty. To encapsulate such sources of uncertainty, the RFS measurement model of Chapter 4, Section 4.7.2.3 is adopted,

Z_k = \bigcup_{m \in M_k} D_k(m, X_k) \cup C_k(X_k)    (10.8)

which incorporates the feature detections D_k(m, X_k) and the spurious measurements C_k(X_k). Features within the set map M_k are referred to as m. The individual feature detections z_k^i = [r_k^i θ_k^i]^T comprise relative range and bearing measurements from the ASC pose at time k and are described by equations similar to those that comprise the spatial measurement function h^spatial in Chapter 4, Section 4.5.1, in this case given by

r_k^i = \sqrt{(x_k^i - x_k^{radar})^2 + (y_k^i - y_k^{radar})^2} + w_k^r    (10.9)

\theta_k^i = \arctan\left[ \frac{y_k^i - y_k^{radar}}{x_k^i - x_k^{radar}} \right] - \phi_{k-1|k-1} + w_k^\theta    (10.10)

where (x_k^i, y_k^i) are the Cartesian coordinates of the ith feature, (x_k^radar, y_k^radar) represents the coordinates of the radar location on the ASC, and w_k^r and w_k^θ represent the radar range and bearing noise at time k, respectively. In the case of an offset between the reference location of the vehicle (x_k, y_k) and the location of the radar, this can be compensated for by using Equations 4.27, corresponding to the offsets in Figure 4.1.

10.4.2.2  The PHD SLAM Filter

As shown in Chapter 9, Section 9.2.1, the RFS-SLAM joint posterior can be factorized when the map is represented as a conditional PDF, conditioned on an entire vehicle trajectory X^k; that is,

p_{k|k}(X^k, M_k \mid Z^k, U^{k-1}, X_0) = p_{k|k}(X^k \mid Z^k, U^{k-1}, X_0)\, p_{k|k}(M_k \mid Z^k, X^k)    (10.11)

where a Rao-Blackwellized implementation implies the mapping recursion is approximated by a Gaussian mixture (GM)-PHD filter, and the trajectory recursion by a particle filter [9, 11]. Further, Z0:k represents the set of all measurements from

time 0 to k, U^{k−1} all inputs from time 0 to k − 1, and X_0 the initial pose of the ASC. The calculation of the particle weighting likelihood, however, requires the evaluation of

g(Z_k \mid Z^{k-1}, X^k) = \int p_{k|k-1}(Z_k, M_k \mid Z^{k-1}, X^k)\, \delta M_k    (10.12)

which involves a set integral over all possible maps. Note that this likelihood is simply the normalizing constant of the Bayes recursion for propagating the RFS map density, p_{k|k}(M_k | Z^k, X^k), in Equation 10.11. The weighting likelihood can then be written

g(Z_k \mid Z^{k-1}, X^k) = \frac{g(Z_k \mid M_k, X_k)\, p_{k|k-1}(M_k \mid X^k)}{p_{k|k}(M_k \mid X^k)}    (10.13)

By approximating the predicted and updated RFS map densities as Poisson RFSs according to

p_{k|k-1}(M_k \mid X^k) \approx \frac{\prod_{m \in M_k} \upsilon_{k|k-1}(m \mid X^k)}{\exp\left( \int \upsilon_{k|k-1}(m \mid X^k)\, dm \right)}    (10.14)

p_{k|k}(M_k \mid X^k) \approx \frac{\prod_{m \in M_k} \upsilon_{k|k}(m \mid X^k)}{\exp\left( \int \upsilon_{k|k}(m \mid X^k)\, dm \right)}    (10.15)

and setting the dummy variable M_k = ∅, a solution was presented and analyzed in [9, 11]. In this chapter, as in Chapter 9, an alternative choice of map is used, M_k = {m_chosen}, where m_chosen is a single feature chosen according to a given strategy. In this case, the weighting likelihood becomes

g(Z_k \mid Z^{k-1}, X^k) \approx \frac{1}{\beta} \left[ \left(1 - P_D^{OS\text{-}CFAR}(m_{chosen} \mid X_k)\right) \kappa_k^{Z_k} + P_D^{OS\text{-}CFAR}(m_{chosen} \mid X_k) \sum_{z \in Z_k} \kappa_k^{Z_k - \{z\}}\, g(z \mid m_{chosen}, X_k) \right] \upsilon_{k|k-1}(m_{chosen} \mid X^k)    (10.16)

with

\beta = \exp\left( \hat{m}_{k|k} - \hat{m}_{k|k-1} - \int c_k(z)\, dz \right) \upsilon_k(m_{chosen} \mid X^k)    (10.17)

where \kappa_k^{Z_k} = \prod_{z \in Z_k} c_k(z \mid X_k), with c_k(z | X_k) being the PHD of the measurement clutter RFS C_k(X_k). m̂_{k|k} and m̂_{k|k−1} are the estimated and predicted number of features in the explored map M_k, while υ_{k|k} and υ_{k|k−1} are the updated and predicted

PHDs of the map. The map is estimated with a GM implementation of the PHD predictor,

\upsilon_{k|k-1}(m \mid X^k) = \upsilon_{k-1|k-1}(m \mid X^{k-1}) + b(m \mid X_k)    (10.18)

where b(m | X_k) is the PHD of the new feature RFS, B(X_k), and corrector,

\upsilon_{k|k}(m \mid X^k) = \upsilon_{k|k-1}(m \mid X^k) \left[ 1 - P_D^{OS\text{-}CFAR}(m \mid X_k) + \sum_{z \in Z_k} \frac{L(m \mid X_k)}{c_k(z \mid X_k) + \int L(\xi \mid X_k)\, \upsilon_{k|k-1}(\xi \mid X^k)\, d\xi} \right]    (10.19)

where L(m | X_k) = P_D^{OS-CFAR}(m | X_k) g_k(z | m, X_k), P_D^{OS-CFAR}(m | X_k) is the probability of detecting a feature at m from ASC pose X_k, and c_k(z | X_k) is the PHD of the clutter RFS C_k in (10.8) at time k.
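The corrector of Equation 10.19 can be sketched for a discretized map, where the PHD υ is stored as a scalar intensity per feature cell. This is an illustrative sketch only, not the book's Gaussian mixture implementation; all function and variable names are hypothetical, and the detection probability, likelihoods, and clutter PHD are supplied as plain dictionaries:

```python
def phd_update(v_pred, Pd, likelihood, clutter, measurements):
    """Discrete sketch of the PHD corrector of Equation 10.19.
    v_pred:       dict feature-cell -> predicted intensity v_{k|k-1}(m)
    Pd:           dict feature-cell -> detection probability P_D(m)
    likelihood:   dict z -> {feature-cell: g_k(z | m)}
    clutter:      dict z -> clutter PHD c_k(z)
    measurements: list of detections z received at time k"""
    # denominator for each z: c_k(z) + sum_m P_D(m) g_k(z|m) v_pred(m),
    # the discrete analogue of the integral in Equation 10.19
    denom = {z: clutter[z] + sum(Pd[m] * likelihood[z][m] * v_pred[m]
                                 for m in v_pred)
             for z in measurements}
    v_upd = {}
    for m, v in v_pred.items():
        missed = (1.0 - Pd[m]) * v               # missed-detection term
        detected = sum(Pd[m] * likelihood[z][m] * v / denom[z]
                       for z in measurements)    # one term per measurement
        v_upd[m] = missed + detected
    return v_upd
```

In the GM implementation referred to in the text, each feature cell would instead be a weighted Gaussian component, with the same missed-detection and detection terms applied to its weight.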



Further implementation details of this algorithm, including pseudo-code examples, are given in Chapter 9 and in [9, 10].

10.4.3  NN-EKF-SLAM Implementation

For comparison purposes, an NN-EKF-SLAM implementation was carried out, based on the features extracted in Section 10.3.5. As noted, NN-EKF-SLAM could only be implemented on this greatly reduced feature set, since the numerous features extracted by the OS-CFAR processor of Section 10.3.1 contained too many clutter measurements, causing immediate EKF filter divergence. In contrast to RFS-SLAM, where the state to be estimated consists of a vector representation of the vehicle’s trajectory Xk and a set representation of the map Mk, vector-based SLAM methods proceed to estimate a single, joint vector state zk = [Xk Mk]T and hence provide an estimate of

$$
p_{k|k}(\zeta_k \mid Z^k, U^{k-1}, X_0) \tag{10.20}
$$

as explained in Chapter 4, Section 4.6.1. Feature association, based on the nearest neighbor standard filter, was then carried out as described in Chapter 4, Section 4.6.2.1.

10.4.4  Multi-Hypothesis (MH) FastSLAM Implementation

The MH-FastSLAM implementation was based exactly on the details of FastSLAM in Section 4.6.4, with its associated MH data association method given in Section 4.6.5 and [12].


X-Band Radar-Based SLAM in an Off-Shore Environment

10.5  Comparisons of SLAM Concepts at Sea

10.5.1  SLAM Trial 1—Comparing PHD and NN-EKF-SLAM

This section details the analysis of the SLAM algorithm on the extensive dataset recorded from the test site shown in Figure 10.1. The ASC was remote-controlled to execute a curved trajectory of over 1.8 km, logging more than 650 consecutive radar scans at a rate of 0.5 Hz in a trial run lasting more than 20 minutes. Multiple loops were traversed. The analysis focuses first on the location estimation errors from the RB-PHD-SLAM and NN-EKF-SLAM filters, followed by qualitative examinations of the estimated maps. In this first set of experiments, the ground truth locations of all of the actual sea vessels in the area were not available, and therefore the maps could only be examined in a qualitative manner, based on the known configuration of sea vessels and the nearby island's coastline. Further experiments in which sea vessel (map feature) ground truth was known will be demonstrated in Section 10.5.2. For the RB-PHD-SLAM filter, MC analysis is presented based on 50 sample runs using 100 trajectory particles in each trial.

10.5.1.1  Positional Estimation Analysis

The ground truth ASC position estimates, based on the GPS data, are shown as the labeled trajectories in Figures 10.15 and 10.16. Note that although GPS cannot be relied upon to provide useful, on-the-fly directional inputs to the SLAM algorithm, its long-term positional information is useful for reconstructing ground truth trajectory estimates.

Figure 10.15  The expected trajectories from each of the 50 MC trials, a sample trajectory from the assumed ASC motion model, and the GPS trajectory (© 2010 IEEE. Reprinted with permission from J. Mullane, et al., "X-Band Radar-Based SLAM in Singapore's Off-Shore Environment," 11th International Conf. on Control, Automation, Robotics, and Vision, Singapore, 2010).


Figure 10.16  The NN-EKF-SLAM estimated path, the predicted ASC trajectory from the assumed ASC motion model, and the GPS trajectory.

Figure 10.15 depicts the estimated ASC trajectories from each of the MC runs in comparison with the GPS estimated path. A sample trajectory $(X^k)^{(i)}$ (labeled) from the assumed ASC motion model, using the measured gyroscope data, is also provided. The results demonstrate that the RB-PHD-SLAM approach yields trajectory estimates that accurately reconstruct the traversed path, despite the sensing and vehicle modeling difficulties. Quantification of the positional error is provided in Figure 10.17, indicating a maximum error of 45 m. While this result demonstrates that ASC-based SLAM is realizable, further investigations have to be carried out to achieve positional errors comparable to those of GPS.
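The positional error quantified in Figure 10.17 is the per-scan Euclidean distance between the SLAM estimate and the time-aligned GPS fix in UTM coordinates. A minimal sketch of this computation (array names are our own):

```python
import numpy as np

def position_errors(est_en, gps_en):
    """Per-scan Euclidean position error between an estimated trajectory
    and time-aligned GPS ground truth, both given as (N, 2) arrays of UTM
    easting/northing in meters. Returns (errors, mean error, max error)."""
    err = np.linalg.norm(np.asarray(est_en) - np.asarray(gps_en), axis=1)
    return err, err.mean(), err.max()
```

Averaging such error curves over the 50 MC runs yields the mean curve and, with the per-scan spread, the one-sigma bounds plotted in Figure 10.17.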

Figure 10.17  Quantification of the mean error in the estimated position (middle curve) of the ASC by the RB-PHD-SLAM system. One sigma standard deviation bounds are also shown (lower and upper curves) (© 2010 IEEE. Reprinted with permission from J. Mullane, et al., "X-Band Radar-Based SLAM in Singapore's Off-Shore Environment," 11th International Conf. on Control, Automation, Robotics, and Vision, Singapore, 2010).


Figure 10.18  NN-EKF-SLAM errors plotted with the 1σ and 2σ error bounds for the estimated $\hat{x}_{k|k}$, $\hat{y}_{k|k}$, and $\hat{\phi}_{k|k}$ states of the ASC.

Figure 10.16 shows the estimated NN-EKF-SLAM path (labeled) in comparison with the GPS and predicted, motion model–based trajectories. Figure 10.18 plots the pose errors, relative to GPS, in all dimensions against their 1σ and 2σ bounds. The graphs show that the pose errors are often larger than the 2σ limits and, in this particular experiment, are comparable with the RB-PHD-SLAM-based trajectory errors. In particular, maximum displacement errors of approximately 45 m, over the 1.8 km trajectory, were noted for each algorithm. It should be remembered, however, that NN-EKF-SLAM was only possible due to the heavily restricted feature set necessary for reliable data association.

10.5.1.2  Map Estimation Analysis

RB-PHD-SLAM Map Analysis

Given that the ground truth heading information is unavailable, the quality of the resulting map estimate can be used to gauge the quality of the estimated ASC heading. This is especially suitable given the large sensing range of the radar. Since most of the point targets are stationary during the trial, and all of the extended targets (land masses) are stationary, for a given map estimation routine, the quality of the posterior map estimate from the temporal fusion of the measurement data provides an indication of the quality of the pose estimates. Using a simplistic linear function from signal power to log-odds occupancy (Equation 7.9) [13], the posterior occupancy grid can be propagated as each X-band radar measurement arrives. Zoomed views of the posterior map estimates from a sample predicted trajectory and from the estimated trajectory are provided in Figures 10.19 and 10.20, respectively. The fused map from the RB-PHD-SLAM estimates (Figure 10.20) is seen to be far more informative than that from the predicted trajectory (Figure 10.19), with the island coastline and various surface objects clearly evident. Some of the successfully mapped objects that could be clearly identified during the experiments, including supply ships, buoys, and protruding rocks, are labeled in Figure 10.20.

Figure 10.19  The estimated map from the predicted ASC trajectory, in comparison to satellite and sea chart imagery. An uninformative map is evident given the poor pose estimates (© 2010 IEEE. Reprinted with permission from J. Mullane, et al., "X-Band Radar-Based SLAM in Singapore's Off-Shore Environment," 11th International Conf. on Control, Automation, Robotics, and Vision, Singapore, 2010).

Figure 10.20  The estimated map from the RB-PHD-SLAM algorithm, in comparison to satellite and sea chart imagery. The map can be seen to coincide well with the islands present as well as various surface objects. Unfortunately, the GPS information, supplied from the ships (evident in Figure 10.1) in the area, was unavailable during this experiment.

To provide a comparison between the SLAM estimates from the RFS-based PHD filter and the vector-based NN-EKF approach, Figure 10.21 shows the estimated NN-EKF-SLAM state during the same trial, superimposed on a satellite image of the area. The map features shown are the final estimates at the end of the run. The features in the vicinity of the islands (lower part of the figure) have large covariance values and are likely to result from clutter measurements, as they are sporadically introduced and deleted by the map-management algorithms. Most of the mapped features in the upper part of the figure correspond well with those shown in the extracted grid map of Figure 10.20. All of the labeled features, marked as supply ships, buoys, and protruding rocks, were independently identified during the experiments and correspond to those labeled in the RB-PHD-SLAM results of Figure 10.20.

Figure 10.21  NN-EKF-SLAM results, superimposed on a satellite image of the area. The points correspond to estimated feature locations, and their corresponding ellipses correspond to the features' "3σ" uncertainty regions. The lines emanating from the ASC correspond to the associated features from the final estimated ASC location. As in Figure 10.16, the GPS, estimated, and predicted ASC trajectories are shown.

10.5.2  SLAM Trial 2—Comparing RB-PHD-SLAM and MH-FastSLAM

To demonstrate the true power of the RFS-based SLAM approach, a second experiment was performed at a time when AIS information was available, providing a quantifiable ground truth map in terms of feature number and location as time progressed. In this experiment, the detected features were based only on power values that exceeded the OS-CFAR threshold of Section 10.3.1, with none of the image-smoothing methods of Sections 10.3.2 to 10.3.4 applied. Figure 10.22 shows a GPS trajectory, superimposed on a satellite image of the area, together with the entire history of all collected point radar-received power values that exceeded the OS-CFAR threshold (Equation 10.3, i.e., $S^{lin}(q) \ge S^{OS\text{-}CFAR}(q)$), shown as black points. It can be seen that many of the estimated targets appear close to the sea vessels detected by the independent AIS (shown as grey dots), which were taken as ground truth. It is also evident that many false alarms are present due to the sea clutter. Because the NN-EKF-based SLAM approach was unable to cope with this high clutter, it was not used for comparison in this trial.

Figure 10.23 compares the posterior SLAM estimates from MH-FastSLAM (top graph) and RB-PHD-SLAM (bottom graph), where it can be seen that MH-FastSLAM overestimates the number of features and exhibits larger spatial errors for those features that truly correspond to actual landmarks. The number of features estimated over time by each approach is compared in Figure 10.24: the MH-FastSLAM estimate is shown by the upper curve and the RB-PHD-SLAM estimate by the lower curve. The labeled "true feature count" finally settles at 21, when all of the ships detected by the AIS should have entered the FoV of the ASC's radar. Note that the RB-PHD-SLAM feature number estimate closely resembles, and on average tends to, the true feature number according to the AIS-based ground truth. Erroneous associations for the MH-FastSLAM approach result in excessive feature declarations. Since the MH-FastSLAM vector-based feature management routines are typically dependent on the data association decisions, this dramatically influences the map estimation error. The RB-PHD-SLAM approach can be seen to generate more accurate localization and feature number estimates; however, it can also be seen that some feature estimates are misplaced in comparison to the ground truth feature map.
The framework is still demonstrated to be useful for high-clutter feature-based SLAM applications.
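Since raw OS-CFAR detections drive both filters in this trial, a compact sketch of order-statistics CFAR thresholding may be helpful: the returns in a window around each cell under test are ranked, and a chosen order statistic, scaled by a factor, forms the adaptive threshold $S^{OS\text{-}CFAR}(q)$. The window size, rank, and scaling factor below are illustrative placeholders, not the values tuned for the X-band data in Section 10.3.1:

```python
import numpy as np

def os_cfar(power, guard=2, train=8, rank=None, alpha=3.0):
    """Return a boolean detection mask over a 1D array of linear received
    power values S_lin(q). For each cell, the threshold is alpha times the
    rank-th order statistic of the surrounding training cells."""
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    if rank is None:
        rank = int(0.75 * 2 * train)            # 3/4 rank, a common choice
    for q in range(n):
        lo = max(0, q - guard - train)
        hi = min(n, q + guard + train + 1)
        # training cells: window around q, excluding the guard cells and q
        window = np.concatenate([power[lo:max(lo, q - guard)],
                                 power[min(hi, q + guard + 1):hi]])
        if len(window) <= rank:
            continue                            # not enough training cells
        threshold = alpha * np.sort(window)[rank]
        detections[q] = power[q] >= threshold   # S_lin(q) >= S_OS-CFAR(q)
    return detections
```

Unlike cell-averaging CFAR, the ranked threshold is robust to a few strong interfering returns inside the training window, which is why it copes better with closely spaced vessels.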

Figure 10.22  Overview of the test site (1°13' N, 103°43' E), showing the GPS trajectory (grey line) and GPS coordinates (grey dots) of the point feature map. The point feature measurement history is also provided (black dots) (© 2010 IEEE. Reprinted with permission from J. Mullane, et al., "A Random-Finite-Set Approach to Bayesian SLAM," IEEE Transactions on Robotics, Vol. 27, No. 2, 2011).


Figure 10.23  Upper graph: The posterior SLAM estimate from MH-FastSLAM. Lower graph: The posterior SLAM estimate from RB-PHD-SLAM, in comparison with the ground truth. Crosses represent the estimated landmark locations in each case, and the circles represent their ground truth (AIS) locations (© 2010 IEEE. Reprinted with permission from J. Mullane, et al., "A Random-Finite-Set Approach to Bayesian SLAM," IEEE Transactions on Robotics, Vol. 27, No. 2, 2011).

10.6  Summary

This chapter examined the applicability of SLAM using an autonomous surface craft (ASC) in a marine environment. Adopting a commercially available X-band radar as the main exteroceptive sensor, the investigation demonstrated that, despite the widespread presence of GPS information at sea, heading measurements can still be prone to large error. In the experiments, heading information was therefore provided by an inexpensive single-axis digital gyroscope. However, despite this heading information, the large range of the radar meant that even the small gyro errors resulted in uninformative map estimates, limiting the applicability of the


Figure 10.24  Comparison of the number of estimated features for each approach. The number estimated by the MH-FastSLAM algorithm is shown in the upper curve, and that estimated by the RB-PHD-SLAM filter is shown in the lower curve. The true feature count (based on the AIS) is labeled and settles at a constant value of 21. The noisy estimates are likely due to deviations from the Poisson clutter assumption in places.

craft to tasks such as autonomous mapping, obstacle avoidance, and surface craft tracking. Based on an automatic OS-CFAR point target detector, an RFS feature-based SLAM algorithm was developed to recursively estimate the ASC location and heading, as well as build the map. The point targets exploited were anchored supply ships and buoys present in the test site. The algorithm demonstrated that useful results are realizable, even with difficult-to-model vehicle dynamics and a lack of any input control measurements. Comparisons of the estimated maps demonstrate the merits of SLAM for an ASC, given uncertain heading and exteroceptive sensor measurement information. Comparative experiments were carried out with the state-of-the-art SLAM methods NN-EKF-SLAM and MH-FastSLAM. To successfully execute NN-EKF-SLAM, the OS-CFAR radar detections had to be post-processed using image-processing techniques to greatly reduce the feature set, in an attempt to reduce clutter measurements to a minimum. Only then could any qualitative comparison of NN-EKF-SLAM and the RB-PHD-SLAM filter take place. The true advantages of the RB-PHD-SLAM filter were demonstrated in Section 10.5.2, where successful map estimates were shown to be possible when compared to the available ground truth number and position of sea surface vessels in the vicinity of the ASC's trajectory. Based on unprocessed OS-CFAR detections, which contained many false alarms, the superiority of RB-PHD-SLAM over the state-of-the-art MH-FastSLAM approach was demonstrated: RB-PHD-SLAM jointly estimates both detected target number and location, and it provided superior estimates of both as time progressed. Future work in marine environments should incorporate the extended land targets and should examine the possibilities of joint mapping and target tracking from an ASC.


10.7  Bibliographical Remarks

An initial version of this chapter was published in [14]. SLAM investigations in a marine environment have received widespread interest over the past few years, though almost exclusively in the underwater domain. For surface vehicles capable of measurements above the surface of the water, although laser-based systems have proven useful for obstacle avoidance [15], a marine radar represents a natural choice of exteroceptive sensor for SLAM investigations. To the authors' knowledge, the experiments described in this chapter are the first time that feature-based SLAM has been investigated using a commercially available, pulsed X-band radar mounted on an ASC. In [16], a delayed-state SLAM approach was presented and implemented on an underwater vehicle with mounted vision sensors. The approach emulated a structure-from-motion type algorithm from the vision community, using images registered from the estimated vehicle poses to estimate the scene. Fixed sector scan sonar is also a popular sensor choice for underwater robotics applications and has been adopted in a number of influential works. A motion estimation and map-building algorithm based on a fusion of vision and sector scan sonar data is described in [17]. The approach relied on correlated matches between successive sonar scans and point matches between images to achieve the most likely pose of the vehicle. Correlation-based methods are commonly adopted in the presence of difficult-to-model data, such as that from an unknown and unstructured underwater terrain. Extending the directionality of sector scan sonar to the full 360° FoV, the mechanically scanned sonar has also been a popular choice. SLAM implementations in a swimming pool using a line-feature approach and scanning sonar were presented in [4]. The study was extended to a semistructured underwater scenario in [18].
Marine radar has been adopted as part of a project on autonomous sea craft development for deep sea rescue operations [19]. A test vehicle based on a rigid inflatable pleasure craft used a commercially available radar primarily for obstacle detection.

References

  [1] Dissanayake, M. W. M. G., P. Newman, S. Clark, H. F. Durrant-Whyte, and M. Csorba, "A Solution to the Simultaneous Localization and Map Building (SLAM) Problem," IEEE Trans. Robotics and Automation, Vol. 17, No. 3, June 2001, pp. 229–241.
  [2] Thrun, S., "Robotic Mapping: A Survey," Technical Report CMU-CS-02-111, Carnegie Mellon University, Pittsburgh, PA, February 2002.
  [3] Manley, J. E., "Unmanned Surface Vehicles: Fifteen Years of Development," in OCEANS 2008, September 2008, pp. 1–4.
  [4] Ribas, D., P. Ridao, J. Neira, and J. D. Tardos, "SLAM Using an Imaging Sonar for Partially Structured Underwater Environments," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2006.
  [5] Skolnik, M. I., Introduction to Radar Systems, New York: McGraw-Hill, 1982.
  [6] Vo, B.-N., S. Singh, and A. Doucet, "Sequential Monte Carlo Methods for Multi-Target Filtering with Random Finite Sets," IEEE Transactions on Aerospace and Electronic Systems, in press.


  [7] Vo, B.-N., and W.-K. Ma, "The Gaussian Mixture Probability Hypothesis Density Filter," IEEE Transactions on Signal Processing, Vol. 54, No. 11, November 2006, pp. 4091–4104.
  [8] Mullane, J., B.-N. Vo, M. Adams, and W. S. Wijesoma, "A Random Set Approach to SLAM," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) Workshop on Visual Mapping and Navigation in Outdoor Environments, Japan, May 2009.
  [9] Mullane, J., B.-N. Vo, M. D. Adams, and B.-T. Vo, "A Random-Finite-Set Approach to Bayesian SLAM," IEEE Transactions on Robotics, Vol. 27, No. 2, April 2011, pp. 268–282.
 [10] Mullane, J., B.-N. Vo, M. Adams, and B.-T. Vo, Random Finite Sets for Robot Mapping and SLAM, Springer Tracts in Advanced Robotics, Vol. 72, Berlin: Springer, 2011.
 [11] Mullane, J., B.-N. Vo, and M. Adams, "Rao-Blackwellised PHD SLAM," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Alaska, May 2010.
 [12] Nieto, J., J. Guivant, E. Nebot, and S. Thrun, "Real Time Data Association for FastSLAM," in IEEE International Conference on Robotics and Automation, Vol. 1, September 2003, pp. 412–418.
 [13] Foessel, A., J. Bares, and W. R. L. Whittaker, "Three-Dimensional Map Building with MMW RADAR," in Proceedings of the 3rd International Conference on Field and Service Robotics, Helsinki, Finland, June 2001.
 [14] Mullane, J., S. Keller, A. Rao, M. Adams, A. Yeo, et al., "X-Band Radar Based SLAM in Singapore's Off-Shore Environment," in The Eleventh International Conference on Control, Automation, Robotics and Vision (ICARCV 2010), Singapore, December 2010.
 [15] Bandyopadhyay, T., L. Sarcione, and F. Hover, "A Simple Reactive Obstacle Avoidance Algorithm and Its Application in Singapore Harbor," in Proceedings of the 7th International Conference on Field and Service Robotics, MA, July 2009.
 [16] Eustice, R. M., H. Singh, and J. J. Leonard, "Exactly Sparse Delayed-State Filters for View-Based SLAM," IEEE Transactions on Robotics, Vol. 22, No. 6, December 2006, pp. 1100–1114.
 [17] Kalyan, B., A. Balasuriya, H. Kondo, T. Maki, and T. Ura, "Motion Estimation and Mapping by Autonomous Underwater Vehicles in Sea Environments," in Proceedings of IEEE Oceans, 2005, pp. 436–441.
 [18] Ribas, D., P. Ridao, J. D. Tardos, and J. Neira, "Underwater SLAM in a Marina Environment," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Diego, CA, October 2007.
 [19] Onunka, C., and C. Bright, "Autonomous Marine Craft Navigation on the Study of Radar Obstacle Detection," in Proceedings of the 11th IEEE International Conference on Control, Automation, Robotics and Vision (ICARCV 2010), Singapore, December 2010.

Appendix A

The Navtech FMCW MMW Radar Specifications

Figure A.1 shows the radar used for the experimental analyses in Chapters 5 through 9. The lower section of the radar contains an upward-facing antenna horn, which directs the electromagnetic energy up into the central cylindrical section. This cylindrical section contains a steerable reflector "swash" plate, which reflects the radio energy as desired. This is shown schematically in the right-hand diagram. The top section of the radar contains the scanning motors and gears.

Figure A.1  Photograph of the 77 GHz Navtech MMW radar used in many of the experiments throughout the book. The antenna (located at the bottom of the unit) and swash plate (located in the cylindrical central section) are contained in a perspex housing.


The specifications of this particular radar are given in Table A.1.

Table A.1  Specifications of the Navtech Scanning FMCW Radar

Transmitted frequency:      76 to 77 GHz
Chirp bandwidth:            600 MHz
Transmitted power:          15 dBm
Maximum range:              200 to 800 m (configurable)
Range accuracy:             up to ±0.03 m
Beam width:                 1.8°
Scanner resolution:         0.09°
Scanner field of view:      360°
Scan speed:                 2.5 Hz (other speeds optional)
Travel in elevation:        ±15° (not used in this book)
Elevation resolution:       0.02°
Elevation scan rate:        5°/sec
Interfaces:                 Ethernet / CAN / RS-232
Supply voltage:             +24 V nominal (18–36 V)
Dimensions:                 321 × 321 × 741 mm (radar 321 × 321 × 171 mm + optional 3D radome and scanner 295 × 570 mm)
Environmental protection:   IP 66, NEMA-4X
Temperature range:          −20°C to +70°C
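For reference, the theoretical two-target range resolution implied by the chirp bandwidth in Table A.1 follows from the standard FMCW relation (a textbook result, not an additional Navtech specification):

```latex
\Delta R \;=\; \frac{c}{2\,\Delta f} \;=\; \frac{3\times 10^{8}\,\mathrm{m/s}}{2 \times 600\,\mathrm{MHz}} \;=\; 0.25\ \mathrm{m}
```

The quoted range accuracy of up to ±0.03 m is finer than this resolution; the accuracy of a single, well-separated return can exceed the two-target resolution limit, for example by interpolating the peak of the beat-frequency spectrum.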

Appendix B

Derivation of $g(Z_k \mid Z^{k-1}, X^k)$ for the RB-PHD-SLAM Filter

Recall Equation 9.6,

$$
p_{k|k}(M_k \mid X^k) = \frac{g(Z_k \mid M_k, X_k)\, p_{k|k-1}(M_k \mid X^k)}{g(Z_k \mid Z^{k-1}, X^k)} \tag{B.1}
$$

and the Poisson RFS approximations,

$$
p_{k|k-1}(M_k \mid X^k) \approx \frac{\prod_{m \in M_k} \upsilon_{k|k-1}(m \mid X^k)}{\exp\!\big(\int \upsilon_{k|k-1}(m \mid X^k)\,dm\big)} \tag{B.2}
$$

$$
p_{k|k}(M_k \mid X^k) \approx \frac{\prod_{m \in M_k} \upsilon_{k|k}(m \mid X^k)}{\exp\!\big(\int \upsilon_{k|k}(m \mid X^k)\,dm\big)} \tag{B.3}
$$

B.1  The Empty Strategy

Substituting Equations B.2 and B.3 into Equation B.1, assigning $M_k = \emptyset$, and rearranging Equation B.1 gives

$$
g(Z_k \mid Z^{k-1}, X^k) = g(Z_k \mid \emptyset, X_k) \times \frac{\prod_{m \in M_k} \upsilon_{k|k-1}(m \mid X^k)}{\prod_{m \in M_k} \upsilon_{k|k}(m \mid X^k)} \times \frac{\exp\!\big(\int \upsilon_{k|k}(m \mid X^k)\,dm\big)}{\exp\!\big(\int \upsilon_{k|k-1}(m \mid X^k)\,dm\big)}
$$

Since $M_k = \emptyset$, the empty-set measurement likelihood is that of the clutter RFS (Poisson),

$$
g(Z_k \mid \emptyset, X_k) = \frac{\prod_{z \in Z_k} c_k(z \mid X_k)}{\exp\!\big(\int c_k(z \mid X_k)\,dz\big)}
$$

The products of the PHDs $\upsilon_{k|k-1}$ and $\upsilon_{k|k}$ are taken over the empty set, implying that each product is 1. Writing $\hat{m}_{k|k-1} = \int \upsilon_{k|k-1}(m \mid X^k)\,dm$ and $\hat{m}_{k|k} = \int \upsilon_{k|k}(m \mid X^k)\,dm$ gives

$$
g(Z_k \mid Z^{k-1}, X^k) = \prod_{z \in Z_k} c_k(z \mid X_k)\, \exp\!\Big(\hat{m}_{k|k} - \hat{m}_{k|k-1} - \int c_k(z \mid X_k)\,dz\Big)
$$

Note that while, for the empty map choice, the likelihood $g(Z_k \mid Z^{k-1}, X^k)$ does not contain a measurement likelihood term $g(Z_k \mid M_k, X_k)$, the history of measurements and trajectories is incorporated into the predicted and updated intensity terms, whose integrals appear as $\hat{m}_{k|k-1}$ and $\hat{m}_{k|k}$, respectively.
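Given a GM-PHD map representation, the terms in the empty-strategy likelihood are cheap to obtain: $\hat{m}_{k|k-1}$ and $\hat{m}_{k|k}$ are summed Gaussian weights and the clutter integral is the expected number of clutter points per scan. A minimal numeric sketch (function and argument names are our own, not the book's):

```python
import numpy as np

def empty_strategy_likelihood(Z, kappa, m_hat_upd, m_hat_pred, clutter_mass):
    """g(Z_k | Z^{k-1}, X^k) for the empty map choice M_k = {}:
    prod_{z in Z_k} c_k(z|X_k) * exp(m_hat_{k|k} - m_hat_{k|k-1} - int c_k dz).

    kappa(z)     : clutter PHD c_k(z | X_k) evaluated at measurement z
    clutter_mass : integral of the clutter PHD (expected clutter count)
    """
    log_prod = sum(np.log(kappa(z)) for z in Z)   # log-domain for stability
    return np.exp(log_prod + m_hat_upd - m_hat_pred - clutter_mass)
```

In a Rao-Blackwellized particle filter, this value is evaluated once per trajectory particle and, after normalization across particles, serves as the trajectory weight.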


B.2  The Single Feature Strategy

Assigning $M_k = \{m_{chosen}\}$, with $m_{chosen}$ being chosen according to a given strategy,

$$
g(Z_k \mid Z^{k-1}, X^k) = g(Z_k \mid m_{chosen}, X_k) \times \frac{\upsilon_{k|k-1}(m_{chosen} \mid X^k)}{\upsilon_{k|k}(m_{chosen} \mid X^k)} \times \frac{\exp(\hat{m}_{k|k})}{\exp(\hat{m}_{k|k-1})}
$$

$$
= g(Z_k \mid m_{chosen}, X_k) \times \frac{\upsilon_{k|k-1}(m_{chosen} \mid X^k)}{\upsilon_{k|k}(m_{chosen} \mid X^k)} \times \exp(\hat{m}_{k|k} - \hat{m}_{k|k-1})
$$

If $M_k = \{m_{chosen}\}$, then from (4.57),

$$
g(Z_k \mid m_{chosen}, X_k) = \frac{\big(1 - P_D(m_{chosen} \mid X_k)\big) \prod_{z \in Z_k} c_k(z \mid X_k)}{\exp\!\big(\int c_k(z \mid X_k)\,dz\big)} + \frac{P_D(m_{chosen} \mid X_k) \sum_{z \in Z_k} \Big( \prod_{z' \in Z_k - \{z\}} c_k(z' \mid X_k) \Big)\, g(z \mid m_{chosen}, X_k)}{\exp\!\big(\int c_k(z \mid X_k)\,dz\big)}
$$

then,

$$
g(Z_k \mid Z^{k-1}, X^k) = \left( \frac{\big(1 - P_D(m_{chosen} \mid X_k)\big) \prod_{z \in Z_k} c_k(z \mid X_k) + P_D(m_{chosen} \mid X_k) \sum_{z \in Z_k} \Big( \prod_{z' \in Z_k - \{z\}} c_k(z' \mid X_k) \Big)\, g(z \mid m_{chosen}, X_k)}{\exp\!\big(\int c_k(z \mid X_k)\,dz\big)} \right) \times \frac{\upsilon_{k|k-1}(m_{chosen} \mid X^k)}{\upsilon_{k|k}(m_{chosen} \mid X^k)\, \exp(\hat{m}_{k|k-1} - \hat{m}_{k|k})}
$$
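The single-feature likelihood of Appendix B.2 can be evaluated term by term once the PHD values at $m_{chosen}$ are available. A sketch evaluating the final expression directly, assuming a uniform clutter density (the function and argument names are our own, not the book's):

```python
import numpy as np

def single_feature_likelihood(Z, g_z, kappa, P_D,
                              v_pred, v_upd, m_hat_pred, m_hat_upd,
                              clutter_mass):
    """g(Z_k | Z^{k-1}, X^k) for M_k = {m_chosen} (Appendix B.2).

    g_z(z)        : measurement likelihood g(z | m_chosen, X_k)
    kappa(z)      : clutter PHD c_k(z | X_k); clutter_mass is its integral
    v_pred, v_upd : predicted/updated map PHD evaluated at m_chosen
    """
    c = np.array([kappa(z) for z in Z])
    kappa_Z = np.prod(c)
    # sum over z of the clutter product with z left out, times g(z|m_chosen)
    sum_term = sum((kappa_Z / c[i]) * g_z(z) for i, z in enumerate(Z))
    bracket = (1.0 - P_D) * kappa_Z + P_D * sum_term
    return (bracket / np.exp(clutter_mass)
            * v_pred / (v_upd * np.exp(m_hat_pred - m_hat_upd)))
```

This is the quantity that, per particle, weights the hypothesized trajectory in the RB-PHD-SLAM filter under the single-feature map choice.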

Appendix C

NN-EKF and FastSLAM Feature Management

This appendix details the feature-management routine developed for both NN-EKF and FastSLAM in a cluttered environment, providing them with knowledge of the detection and false alarm probabilities for a fair comparison with the RFS approach. As with standard approaches [1], tentative new features are declared for unassociated measurements. The "existence probability" of each feature, $P_{E,k}^{(j)}$, given a 95 percent confidence gate and prior existence probability $P_{E,k-1}^{(j)}$, then evolves through a binary Bayes filter according to

Step 1: (Obtain association details within FoV)

$$
\bar{J} = \{ j \in M_k \mid m_j \in \mathrm{FoV}\ \&\ m_j\ \text{not associated} \}
$$

$$
J = \{ j \in M_k \mid m_j \in \mathrm{FoV}\ \&\ m_j\ \text{associated} \}
$$

Step 2: (Calculate hit, miss, and association probabilities)

$$
P_{fa} = \lambda_{clutter} / (r_{max}^2 \times \pi)
$$

$$
P_{miss}^{(\bar{J})} = (1 - P_D) \times P_{E,k-1}^{(\bar{J})} + P_D \times 0.05 \times P_{E,k-1}^{(\bar{J})}
$$

$$
P_{hit}^{(J)} = P_D \times P_{assoc}^{(J)}\, P_{E,k-1}^{(J)}
$$

$$
P_{assoc}^{(J)} = \frac{1}{2\pi}\,|E|^{-1/2} \exp\!\big(-0.5\,\nu E^{-1} \nu^{T}\big)
$$

where $\nu$ is the measurement innovation and $E$ its covariance.

Step 3: (Update existence probabilities)

$$
P_{E,k}^{(\bar{J})} = \frac{P_{miss}^{(\bar{J})}}{P_{miss}^{(\bar{J})} + (1 - P_{fa})(1 - P_{E,k-1}^{(\bar{J})})}
$$

$$
P_{E,k}^{(J)} = \frac{P_{hit}^{(J)}}{P_{hit}^{(J)} + P_{fa}(1 - P_{E,k-1}^{(J)})}
$$

This ad hoc but effective routine enhances the robustness of standard SLAM feature management, as shown in Figure 9.3, when exposed to high clutter rates. Thus, both the benchmark and proposed approaches receive the same information for each filter loop. However, one of the fundamental merits of the proposed RFS framework is that feature management and data association are jointly incorporated into a single SLAM Bayesian update.
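The three steps can be collected into a small per-feature update routine. The sketch below follows the equations above for a single feature; the 0.05 occlusion factor is taken from the text, the innovation $\nu$ and its covariance $E$ are assumed to come from the host EKF, and the function name is our own.

```python
import numpy as np

def update_existence(P_E_prev, associated, in_fov, P_D,
                     lambda_clutter, r_max, nu=None, E=None):
    """Binary-Bayes update of a feature's existence probability (Appendix C).

    P_E_prev       : prior existence probability P_E,k-1
    associated     : was a measurement associated with this feature?
    in_fov         : does the feature lie inside the current FoV?
    lambda_clutter : expected number of clutter detections per scan
    r_max          : maximum radar range (FoV area = pi * r_max**2)
    nu, E          : EKF innovation and its covariance (if associated)
    """
    if not in_fov:
        return P_E_prev                       # no information this scan
    P_fa = lambda_clutter / (np.pi * r_max**2)
    if associated:
        # association probability: Gaussian innovation likelihood
        P_assoc = (np.exp(-0.5 * nu @ np.linalg.inv(E) @ nu)
                   / (2.0 * np.pi * np.sqrt(np.linalg.det(E))))
        P_hit = P_D * P_assoc * P_E_prev
        return P_hit / (P_hit + P_fa * (1.0 - P_E_prev))
    # missed detection: not detected, or detected but occluded (5 percent)
    P_miss = (1.0 - P_D) * P_E_prev + P_D * 0.05 * P_E_prev
    return P_miss / (P_miss + (1.0 - P_fa) * (1.0 - P_E_prev))
```

Features whose existence probability falls below a deletion threshold are pruned from the state, mirroring the map-management behavior described in the chapter.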


Index A Adaptive noise variance estimate, 224 Adaptive power thresholding, 100 Advanced Mining Technology Center (AMTC), 20, 21 Amplitude modulated continuous wave (AMCW) range estimation, 51 Approximate-Gaussian SLAM, 124 Approximate particle solutions, 128 A-scope with constant receive power thresholds, 84 defined, 36 delays, 47 detection theory and, 85 displays, 47, 49 displays, at discrete bearing angles, 67 with Gaussian mixture approximation, 85 linearized version, 49, 50 multiple, 227–28 noise power, 46, 229 normalized, 86 outputs, 37 qualitative assessment of TPP applied to, 215 radial, 67 with range compensation filter effect reversed, 48, 50 recursive noise reduction, 223–28 saturation, 181 theoretical, 120 A-scope prediction based on expected target RCS and range, 173–74 based on robot motion, 174–76 from multiple features, 177–78 power prediction, 175–76

results, 177–88 single bearing, 177 spatial prediction, 175 360-degree multiple, 179–88 from two co-aligned targets, 178 vehicle motion, 174 Australian Centre for Field Robotics (ACFR), 20, 21 Automatic identification system (AIS), 303 Automotive radar systems, 21–24 Autonomous navigation based only on inertial guidance, 3–4 dependent on artificial sources, 8–11 dependent on natural sources, 4–8 perspective, 11–12 research, 12 Autonomous surface craft (ASC), 301–24 coastal environment and, 303–5 process model, 314 SLAM, 301 trajectory, 305 X-band radar mounted on, 304 B Bang pulse, 68 Bayesian SLAM, 124–26 defined, 107 map representation, 112–15 vehicle state representation, 109–12 Binary integration, 195 comparison, 213–14 defined, 201–2 detection, 201–4 method, 201 performance of, 204 probability of detection, 203–4 probability of false alarm, 203 B-scope, 72 335

336

C Campus environment experiment defined, 259 final posterior map estimates, 260, 261 GBRM-DL algorithm, 259 satellite image, 259, 260 Cardinalized PHD (C-PHD) filter, 138, 156 Cartesian coordinates defined, 72 nearest neighbor polar conversion to, 73 power value, 75 transforming polar power values to, 73 weighted polar conversion to, 73–76 Cartesian spatial state space, 237 Cell averaging, constant false alarm rate (CA-CFAR) processors, 66–68, 89, 90–94 applied to scan, 181 calculations, 90–93 decision hypotheses, 228 false alarms, 93 IID assumption, 90 illustrated, 90 landmark detections, 93 landmark detection with, 198 OS-CFAR comparison, 99 performance of, 93–94 received power detection threshold, 92 summary, 100–101 target masking effect, 94 Censored greatest-of (CGO), 102 Center for Environmental Sensing and Monitoring (CENSAM) project, 25–27 Chaff, 62 Chapman-Kolmogorov equation, 108 Clustering defined, 311 feature extraction and, 313 image-based, 311–13 Clutter causes, 62 defined, 62 transitions, 101 See also Interference

Index

Coastal marine applications, 25–27 Coastal SLAM environment, 304 landmarks, 304 Coherent integration, 198–201 Constant false alarm rate (CFAR) processors adaptive power thresholding, 82 cell-averaging (CA). see cell averaging, constant false alarm rate (CA-CFAR) processors defined, 81 GBRM-DL, detector, 247 noise sample assumption, 86 ordered statistics (OS). see ordered statistics CFAR (OS-CFAR) processors phase noise and, 72 target detection methods based on, 86 Constant fraction timing discriminators (CFTDs), 70 Cross talk, 62 D Delaunay triangulation, 76 Detected feature state, 136–37 Detection approaches to, 84–86 binary integration, 201–4 error reduction, 195–230 landmark, 83–84 matrix, 244 measurement interpretation, 122–3 measurement model, 244–45 non adaptive, 84–85 post, 199 in presence of noise, 166–67 probability of, 143–44 range measurement models versus, 243–45 stochastic based adaptive, 86 Detection likelihood estimator, 247–49 Detection theory, 81–102 concepts of, 82–84 defined, 81 elements of, 83–84

Index

hypothesis free modeling, 85–86 introduction to, 81–82 with know noise statistics, 87–89 summary, 100–101 with unknown noise statistics, 89–100 Doppler FMCW radar, 56 frequency shift, 56, 57 measurements, 56–58 Dynamic map state, RFS, 135–36 E EKF-FBRM comparisons, 268 concepts, 276 nearest neighbor (NN), 275–277 Electronic countermeasures, 62 Empty strategy PHD-SLAM filter, 289 RB-PHD-SLAM filter, 331 Estimated vehicle path based on GPS, 5, 10 based on IMU-aided GPS, 5, 11 Exactly SEIF SLAM (ESEIF-SLAM), 154 Extended Kalman filter (EKF), 150–51 approximation, 288 update equations, 271 See also EKF-FBRM F Factorized RFS-SLAM recursion, 286–87 False alarms cell averaging, constant false alarm rate (CA-CFAR) processors, 93 PHD filter, 144 FastSLAM defined, 106, 129 feature management, 333 illustrated, 129 MH implementation, 317 multi-hypothesis (MH), 130 FB-SLAM approximate Gaussian solutions, 124–26 approximate particle solutions, 128 Bayesian, 124–26, 128, 152

337

  feature association, 126–28
  non-parametric approximations, 152
  solutions, 124–33
  state description by PDF, 137
  vector-based, 130–33
Feature association, 155–56
  JCBB, 127–28
  ML, 126–27
Feature-based (FB) maps, 114–15
  comments on, 115
  defined, 114
  illustrated, 114
  posterior, 147
  use of, 114–15
  variable dimensionality, 119
Feature-based robotic mapping (FBRM), 118–20
  algorithms, 118, 119
  approaches, 267
  defined, 106, 115
  evaluation metrics, 156–57
  evaluation with second order Wasserstein metric, 146–48
  implementation details, 267
  introduction to, 267–68
  map estimate evaluation, 145–46, 157
  new feature proposal strategy, 271–72
  performance metrics, 145–48
  with random finite sets, 133–45
  RFS, 133–45, 279
  vector-based feature mapping, 131
  vehicle state estimate evaluation, 145, 156–57
  See also PHD-FBRM filter
Feature labeling, 313
Feature locations, predefined number of, 133
Feature mapping
  RFS-based, 132
  vector-based, 131
Finite set statistics (FISST), 24, 156
FMCW radar, 21–24
  chirp under Doppler frequency shift, 57
  detection in presence of noise, 166–67
  Doppler, 56
  FFT section, 58
  frequency shift versus time, 56
  ideal frequency chirp definitions, 55
  less-than-ideal performance causes, 192
  measurements, 165–93
  mixing process, 167
  MMW, 36
  modeling, 192–93
  phase noise, 70–72
  power versus range, 166
  range and beat frequency relationship, 54–56
  range compensation, 44–45
  range estimation, 35
  range measurement, 53–56
  range noise estimation, 169–73
  received power noise estimation, 168–69
  selected mixer output, 167
  transceiver, 53
  uncertainty, 70–72
Frequency modulated continuous wave. See FMCW radar
G
Gaussian filtering, 308–10
Gaussian mixture (GM)
  defined, 85
  implementations, 151–52
Gaussian mixture model (GMM)
  defined, 85
  PHD filter implementation, 148
  See also GMM-PHD-FBRM filter
GBRM-DL
  advantage of, 255
  algorithm block diagram, 246
  algorithm overview, 246
  campus environment, 259–61
  CFAR detector, 247
  data format, 245–46
  defined, 236
  experiments in parking lot environment, 252–59
  GBRM-SL comparisons, 251–61
  map occupancy and detection likelihood estimator, 247–49
  occupancy maps produced by, 254
  OS-CFAR processor, 249
  particle filter implementation, 250–51
  quantified comparisons of, 253–56
  summary, 261–62
  synthetic data, single landmark, 251–52
  See also Grid-based robotic mapping (GBRM)
GBRM-SL
  campus environment, 259–61
  concept, 236
  defined, 236
  discrete interpretation of likelihood, 240
  experiments in parking lot environment, 252–59
  GBRM-DL comparisons, 251–61
  occupancy maps produced by, 254
  quantified comparisons of, 253–56
  sensor models, 241
  synthetic data, single landmark, 251–52
  See also Grid-based robotic mapping (GBRM)
Ghost in the machine problem, 62
Global positioning systems (GPSs), 2, 4
  artificial information source in, 8–11
  in autonomous robotic applications, 9
  estimated vehicle path based on, 5
  outages, 10
GMM-PHD-FBRM filter
  birth, 272
  computation time, 275
  defined, 268, 269
  Estimate, 275
  false features, 277
  introduction of, 280
  posterior feature map estimate, 278
  Predict, 272
  Prune, 273, 274
  pseudo code, 274–75
GMM-PHD-SLAM filter, 293
Go-onto-location-in-space (GOLIS) systems
  categories, 3
  defined, 2
Go-onto-target (GOT) systems, 2
Graph-based SLAM, 152–53
GraphSLAM, 152–53
Grid-based (GB) maps, 112–14
  comments on, 115
  defined, 112
  estimating, 114
  occupancy representation, 112–13
Grid-based robotic mapping (GBRM), 117–18, 149–50
  based on range measurements, 239–41
  defined, 106
  detection likelihood. See GBRM-DL
  with detection measurements, 241–43
  introduction to, 235–37
  occupancy state, 121
  problem, 237–45
  research, 117, 118
  spatial likelihood. See GBRM-SL
H
Hausdorff distance (HD) metric, 7
Hidden Markov model (HMM)
  defined, 237
  transition matrix, 248
Hypothesis free modeling, 85–86
I
Image-based clustering, 311–13
Image-based smoothing, 308–10
Image-based thresholding, 311
Incremental SAM (iSAM), 154–55
Independent and identically distributed (IID) signals, 86
Individual compatibility nearest neighbor (ICNN) algorithm, 127
Inertial measurement units (IMUs)
  defined, 3
  strap-down, 4
  use of, 4
Inertial navigation systems (INSs), 3
Intelligent transportation system applications, 21–24
Interference, 61–63
J
Joint compatibility branch and bound (JCBB) algorithm, 128
  data association, 127–28
  feature association, 107
Joint problem, 107–15
K
Known noise statistics
  constant, 87–88
  detection theory with, 87–89
  probabilities of missed detection and noise, 89
  probability of detection, 88–89
L
Land-based SLAM applications, 24–25
Landmark-dependent DLs, 252
Landmark detection
  CA-CFAR processors, 93
  hypothesized, 83–84
  with single scan CA-CFAR, 198
Landmarks
  coastal SLAM, 304
  estimated number of, 245
  power fluctuation, 242
  single point, 262, 280
  TPP response to, 209–10
Laser measurements
  occupancy grid map with, 19
  projected onto camera image plane, 14
Linear interpolation, 74–75
Logarithmic compression, 43–51
  defined, 44
  electronics, 44
  of noise, 60–61
  during target absence, 45–47
  during target presence, 47–51
M
Map estimation analysis, 320–22
Map occupancy, 247–49
Mapping
  experiment, 17
  PHD, 287–88, 290–92
  road, 21–23
  with unknown measurement likelihoods, 245–49
Maps
  Bayesian SLAM, 112–15
  feature-based (FB), 114–15
  grid-based (GB), 112–14
  occupancy, 17, 18, 19, 254
  recovering from PHD intensity function, 141–42
Marine-based SLAM
  algorithms, 314–17
  ASC process model, 314
  multi-hypothesis (MH) FastSLAM implementation, 317
  NN-EKF-SLAM implementation, 317
  RFS SLAM with PHD filter, 314–17
Marine environment, ASC and, 303–5
Marine radar
  adaptive coastal feature detection, 306–8
  adoption, 326
  feature extraction, 305–13
  feature labeling, 313
  image-based clustering, 311–13
  image-based smoothing, 308–10
  image-based thresholding, 311
  sea-clutter amplitude distribution, 306
  SLAM investigations, 326
Mass functions, 239
Maximum likelihood (ML) estimation
  defined, 126
  feature association, 126–27
Maximum radar range, 58
Mean squared error (MSE), 227
Millimeter wave (MMW) radar, 35
  Navtech FMCW specifications, 329–30
  transceiver block diagram, 53
Minimum detectable signal concept, 58
Mining applications, 19–21
Missile/aircraft guidance technologies, 2–11
Monocular cameras, scene capture, 13, 14
Multi-hypothesis (MH) FastSLAM
  defined, 130
  implementation, 317
  RB-PHD-SLAM comparison, 322–23
Multi-path effects
  detection and removal, 68
  reflections, 66
Multipath fading, 68
Multi-path reflections, 214
Multiple A-scope, 227–28
Multiple hypothesis (MH)
  feature association, 107
  tracking (MHT), 130
Multiple line-of-sight targets, 58
Multiple radar scans, 195–230
  defined, 195
  integration based on TPP estimation, 204–5
  integration methods, 198–204
  integration methods comparison, 213–14
  introduction to, 195–96
  multi-path reflections, 214
  recursive A-scope noise reduction, 223–28
  summary, 228–29
  TPP estimator false alarm and detection probabilities, 206–13
  TPP integration in urban environment, 215–23
  in urban environment, 196–98
Multi-target multi-Bernoulli (MeMBer) filter, 138
N
Navtech FMCW MMW radar specifications, 329–30
Nearest neighbor polar coordinates, 73
Nearest neighbor standard filter (NNSF), 127
NN-EKF-FBRM algorithm, 275
NN-EKF feature management, 333
NN-EKF-SLAM
  estimated path, 319
  implementation, 317
  PHD comparison, 318–22
  plotted errors, 320
  results, 322
Noise
  complete scan subtraction, 227–28
  distributions, 168–73
  logarithmic compression of, 60–61
  mean squared, 226
  MSE value, 227
  positional, 86
  range, 169–73
  thermal, 173
  TPP response to, 206–9
Noise figure, 59, 60
Noise power
  estimation of, 168–69
  output, 59
  during target absence, 168
  during target presence, 169
  thresholding, 151
Noise reduction
  with multiple scans, 195–230
  received power, 227, 228
  received power spectra algorithm, 225
  recursive A-scope, 223–28
Noise statistics
  known, 87–89
  unknown, 89–100
Non adaptive detection, 84–85
Noncoherent integrators
  comparison, 213–14
  concept, 195
  performance of, 202
  probability of detection, 199–200
  probability of false alarm, 200–201
Normalized A-scope, 86
Normalized averaged sum of the squared error (NASSE) metric
  both approaches, 258
  comparison, 256, 258
  defined, 255
  error plots, 256, 257
O
Occupancy maps
  GBRM-DL, 254
  GBRM-SL, 254
  illustrated, 17, 18, 19
  with laser, 19
  with radar, 18
Occupancy measurements, 240
Occupancy state space, 238
Ordered statistics CFAR (OS-CFAR) processors, 89, 94–100
  adaptive, detection threshold, 307
  adaptive coastal feature detection, 306–8
  CA-CFAR processor comparison, 99
  calculations, 95–97
  defined, 94
  detections, 253
  illustrated, 97
  landmarks in close proximity, 99
  in marine-based mapping, 95
  performance of, 97–100
  radar detections, 296
  A-scope, 98
  summary, 100–101
  target masking effect, 99
  test statistic, 95
Ordered statistics greatest-of (OSGO), 102
Organization, this book, 27–28
Outages, 10
P
Parking lot experiment, 252–58
  defined, 252–53
  detection module parameters, 256–58
  ground truth binary representation, 255
  NASSE metric, 255, 256, 257
  NASSE metric comparison, 258
  occupancy maps, 254
  OS-CFAR detection routine, 257–58
  quantified comparisons of GBRM-SL and GBRM-DL, 253–56
  testing ground overview, 254
Particle filter implementation, 250–51
Phase noise, 70–72, 78–79
  CFAR processors and, 72
  distribution, 171
  estimation, 169
  generation of, 70
PHD-FBRM filter, 268–69
  analysis of, 275–79
  computational complexity, 275
  corrector, 269
  defined, 268
  framework, 269
  implementation, 269–75
  summary, 279–80
PHD-SLAM filter, 286–90, 315–17
  defined, 285
  empty strategy, 289
  NN-EKF-SLAM comparison, 318–22
  posterior vehicle trajectory, 292
  single feature strategy, 289–90
  weighting likelihood, 316
Plan position indicator (PPI), 5–6
  defined, 38
  radar reflector, 41
  sector, representation, 15
  sector display, 14
  superimposed scans, 15, 16
Polar coordinates
  defined, 72
  nearest neighbor, to Cartesian data conversion, 73
  weighted, to Cartesian data conversion, 73–76
Positional estimation analysis, 318–20
Positional noise, 86
Post detection, 199
Predetection integration, 198–99
Prior information, 144–45
Probabilistic data association filter (PDAF), 155
Probability density function (PDF)
  Gaussian, 262
  joint, 108
Probability distribution
  binomial, 203
  in spatial space, 121
Probability hypothesis density (PHD)
  FBRM filter. See PHD-FBRM filter
  mapping, 287–88, 290–92
  of multiple trajectory-conditioned maps, 285
  predictor/corrector equations, 270
  state predictor-corrector, 268
Probability hypothesis density (PHD) estimator, 138–42
  defined, 138
  interpretation of, 138–40
  as limit of occupancy probability, 140–41
  recovering map from, 141–42
  sample, 140
Probability hypothesis density (PHD) filter, 24, 138
  cardinalized (C-PHD), 138, 156
  corrector equation, 142
  defining, 142
  false alarms, 144
  Kalman filter analogy, 141
  predictor-corrector form, 142
  prior information, 144–45
  probability of detection, 143–44
Probability of detection
  binary integration, 203–4
  as function of received power, 81
  with known noise statistics, 88–89
  noncoherent integrator, 199–200
  scan integration and, 195
  TPP estimator, 206–13
Probability of false alarm
  binary integration, 203
  noncoherent integrator, 200–201
  TPP estimator, 206–13
Q
Qualitative analyses, 18
Qualitative assessment
  of entire parking lot scene, 221–23
  TPP applied to A-scope information, 215
Quantified performance analyses, 18
Quantitative assessment, TPP applied to complete scans, 215–21
R
Radar
  FMCW. See FMCW radar
  fundamentals, 35–79
  general application of, 12
  marine, 305–13
  measurement, 15
  MMW, 35, 53, 329–30
  motivation in robotics, 12–19
  object penetration by, 42
  occupancy grid map with, 18
  penetrating ability, 76
  range compensation, 43–51
  reflector, 40–43
  A-scope outputs, 37
  signal attenuation, 40–43
  summary, 76
  TOF. See TOF radar
  uncertainty sources, 58–68
  waves, 42
Radar-based robotics research, 19–27
  coastal marine applications, 25–27
  direction of, 19–27
  intelligent transportation system applications, 21–24
  land-based SLAM applications, 24–25
  mining applications, 19
Radar equation
  alternate form, 40
  defined, 38
  derivation, 39
  extensions to, 76–77
  importance, 39
  simple form, 40
Radar measurements, 36–38
  FMCW, 165–93
  power compression, 43–51
  predicting, 173–76
  quantitative comparison, 176
  range, 51–58
  scanning bearing angle, 37
  summary, 188–91
Random finite sets (RFS), 156
  detected feature state, 136–37
  dynamic map state, 135–36
  FBRM framework, 279
  FBRM with, 133–45
  framework, 134
  map model, 137–38
  motivation, 133–35
  radar-based SLAM with, 285–300
  representations, 135–37
  robotic vehicle state, 135
  SLAM with, 133–45
Range
  bins, 36
  dependence of, 46
  histograms, 172
  maximum, 58
  reading, 239
  A-scope prediction based on, 173–74
  spectrum, 44
Range compensation, 43–51
  defined, 44
  filter effect reversed, 47
  FMCW radar, 44–45
  during target absence, 45–47
  during target presence, 47–51
Range measurements
  detection models versus, 243–45
  Doppler, 56–58
  FMCW radar, 53–56
  GBRM based on, 239–41
  methods, 78
  multiple line-of-sight targets, 58
  state-independent, 240
  techniques, 51–58
  TOF pulsed radar, 51–53
Range noise
  estimation, 169–73
  histograms, 172
Range walk
  defined, 69
  errors in TOF systems, 69
Rao-Blackwellized (RB) strategy, 142, 287–88
RB-PHD-SLAM filter
  comparison, 285–86
  computational complexity, 293–95
  defined, 285
  derivation for, 331–32
  empty strategy, 331
  Estimate, 294
  estimated map, 321
  fused map, 321
  implementing, 290–93
  map analysis, 320–22
  map estimation, 292–93
  MH-FastSLAM comparison, 322–24
  parameters, 296
  PHD mapping implementation, 290–92
  Predict, 293
  set-based, 295
  simulations, 295
  single feature strategy, 332
  Update, 294
  vehicle trajectory implementation, 292
Received power fluctuations, 63–64
Receiver noise
  figure, 59, 60
  logarithmic compression of, 60–61
  power, 59
Recursive A-scope noise reduction, 223–28
  multiple A-scope, 227–28
  single A-scope noise subtraction, 225
RFS-SLAM
  factorized recursion, 286–87
  PHD filter, 314–17
  PHD SLAM filter, 315–17
  radar-based comparisons, 295–99
  RFS measurement model, 315
  trajectory conditioning on, 287
Robotic localization
  probabilistic, 116
  solving, 115–16
Robotic mapping
  detection measurement interpretation and, 122–23
  feature-based (FBRM), 106, 115, 118–20
  grid-based (GBRM), 117–18
  probabilistic, 116, 237
  sensor measurements relationship to, 120–23
  solutions, evaluating, 117–20
  solving, 115–16
  spatial measurement interpretation and, 121–22
  state space filtering problem, 238
  summary, 148–49
Robotic vehicles
  autonomous navigation, perspective, 11–12
  missile/aircraft guidance technologies to, 2
  remote control, 13
  state, 135
Ruppert’s algorithm, 76
S
Scan averaging methods, 230
Scan-by-scan averaging (SSA-CFAR) processor, 229–30
Scan integration, 195–230
  binary, 195
  binary detection, 201–4
  coherent, 198–201
  defined, 195
  landmark detection with single scan CA-CFAR, 198
  multi-path reflections, 214
  noncoherent, 195, 198–201
  probability of detection and, 195
  TPP, 196
  in urban environment, 196–98
Scan matching, 8
Scanning, bearing angle, 37–38
Second-order Wasserstein metric, 146–47
Self-screening jammers, 62
Sensor measurements, robotic mapping relationship, 120–23
Sensor suites, ACFR, 20, 21
Sequential compatibility nearest neighbor (SCNN), 127–28
Set-based measurement, 121
Side lobes, 62
Signal processing problems, 262
Signal propagation/attenuation, 76–77
Signal-to-clutter ratio (SCR), 62
Simultaneous localization and mapping. See SLAM
Single point landmarks, 280
SLAM, 24–25
  algorithms, 110
  approximate Gaussian, 124
  concepts of, 106
  defined, 105
  delayed-state, 326
  detection measurement interpretation and, 122–23
  EKF-based, 126, 130
  evaluation metrics, 156–57
  evaluation with second order Wasserstein metric, 146–48
  factorized approach, 299–300
  FastSLAM, 106, 129–30
  full Bayesian recursion solution to, 106
  general Bayesian, 107–15
  graph-based, 152–53
  map estimate evaluation, 145–46, 157
  performance metrics, 145–48
  problem, 106
  with random finite sets, 133–45
  scan-based, 150
  scenario, 107–8
  sensor measurements relationship to, 120–23
  sparse extended information filter (SEIF), 153–55
  spatial measurement interpretation and, 121–22
  vector-based feature mapping, 131
  vehicle state estimate evaluation, 145, 156–57
SLAM trial 1
  defined, 318
  map estimation analysis, 320–22
  positional estimation analysis, 318–20
SLAM trial 2, 322–24
Smoothing and mapping (SAM), 154
Sparse extended information filter (SEIF) SLAM, 153–55
Spatial state space, 237
Speckle
  defined, 64
  effect illustration, 65
  minimizing effect of, 64
  reduction based on multilook imaging, 64–66
Standoff jammers, 62
State-independent measurements, 240
Stochastic based adaptive detection, 86
Surface reconstruction algorithms, 76
Swerling models, 63–64, 78
T
Target masking
  CA-CFAR processor, 94
  ordered statistics CFAR (OS-CFAR) processors, 99
Target presence probability (TPP)
  algorithms, 205
  defined, 195–96
  flow diagram, 212
  histograms, 206, 208
  iterations, 210–11
  processor, 208
  response to landmark and noise, 209
  response to noise, 206–9
  technique, 204
  threshold, 204, 212
  value as function, 211
  See also TPP estimation; TPP integration
Template matching, 7
Thermal noise, 173
360-degree multiple A-scope prediction, 179–88
  correlation coefficient, 184, 187, 190
  correlation plots, 186
  multiple A-scope displays, 180
  vehicle location, 182, 185, 186
  zoomed view, 184, 185, 187, 188, 191
Time of flight (TOF), 35
TOF radar
  constant fraction timing discriminators (CFTDs), 70
  pulses in the presence of noise, 69
  range estimation, 52
  range measurement, 51–53
  range resolution, 52
  range walk errors, 69
  uncertainty, 68–70
TPP estimation, 204–5
  after 49 scans, 220
  after 72 scans, 220
  deriving, 205
  noise power within A-scopes, 229
  for parking lot scan, 223
  points to note, 205
  probability of detection, 206–13
  probability of false alarm, 206–13
TPP integration
  comparison, 213–14
  defined, 196
  qualitative assessment applied to A-scope integration, 215
  quantitative assessment applied to complete scans, 215–21
  in urban environment, 215–23
Track before detect (TBD) methods, 230
Track splitting filter, 155
2D vehicle motion model, 110–12
U
Uncertainty sources
  all radar types, 59
  bibliographical remarks, 78
  CFTDs, 70
  FMCW radar, 70–72
  interference and clutter, 61–63
  multi-path effects, 66–68
  phase noise, 70–72
  received power fluctuations, 63–64
  receiver noise, 59–61
  speckle, 64–66
  TOF radar, 68–70
  See also Radar
Unknown noise statistics, 89–100
V
Validation gates, 127
Vector-based FB-SLAM, 130–31
Vector-based SLAM filters
  feature management, 297
  RFS-SLAM versus, 295–99
Vehicle state representation, 109–12
  degrees of freedom (DOF), 109–10
  2D vehicle motion model, 110–12
W
Wasserstein metric
  FBRM estimation error based on, 279
  A-scope number versus, 221
  second-order, 146–47
Weighted polar coordinates, conversion to Cartesian data, 73–76
X
X-band SLAM, 301–24