Adjustment Models in 3D Geomatics and Computational Geophysics: With MATLAB Examples [4] 9780128175880



English, 415 pages, 2019




Table of contents :
1. Statistical Introduction
2. Propagation of Errors
3. Least Squares Adjustment Procedures
4. Common Observation Models and Their Adjustment in Geomatics
5. Adjustment Using General Observation Model
6. Adjustment with Constraints
7. Unified Approach of Least Squares
8. Fitting Geometric Primitives with Least Squares
9. 3D Transformation and Co-Registration
10. Kalman Filter
11. Introduction to Adjustment with Levenberg-Marquardt Method
12. Post Analysis in Adjustment Computations

Appendix: MATLAB Code for Generic 2D Geodetic Networks Adjustment
References


ADJUSTMENT MODELS IN 3D GEOMATICS AND COMPUTATIONAL GEOPHYSICS
With MATLAB Examples
VOLUME 4

BASHAR ALSADIK
Faculty member at Baghdad University, College of Engineering, Iraq (1999–2014)
Research assistant at Twente University, ITC faculty, The Netherlands (2010–2014)
Member of the International Society for Photogrammetry and Remote Sensing (ISPRS)

Elsevier
Radarweg 29, PO Box 211, 1000 AE Amsterdam, Netherlands
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States

© 2019 Elsevier Inc. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher's permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions. This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility. To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.
Library of Congress Cataloging-in-Publication Data A catalog record for this book is available from the Library of Congress British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library ISBN: 978-0-12-817588-0 For information on all Elsevier publications visit our website at https://www.elsevier.com/books-and-journals

Publisher: Candice Janco Acquisitions Editor: Amy Shapiro Editorial project manager: Hilary Carr Production project manager: Paul Prasad Chandramohan Cover Designer: Greg Harris Typeset by SPi Global, India

Preface

The rapid development in different areas of science and technology in recent decades has moved engineering applications into a new era, in which the major areas of design, analysis, prototyping, and implementation are carried out in computerized or automated ways. This means that many computations that were difficult or computationally expensive to perform in the past are today easily handled and largely automated or programmed. An important division of engineering, namely traditional surveying and/or geodetic engineering, has been positively affected by developments in satellite navigation, digital mapping, information technology, robotics, remote sensing, computer vision, sensor fusion, and other fields related to Geo-wise applications. Accordingly:

– Paper-based cartographic plotting has changed to soft-copy or digital mapping, web mapping, and geographic information systems (GIS).
– Traditional photogrammetry is affected by computer or machine vision techniques and advances in digital image processing. Further, satellite remote sensing with different image resolutions and/or spectral bands is replacing traditional aerial photography in many applications.
– Field surveying equipment is developing away from traditional theodolites and electronic distance measurement (EDM) toward robotic total stations and global navigation satellite system (GNSS)-based positioning, as an example.

Hence, the terminology of surveying is changing in many educational, academic, and governmental institutions to geomatics, geoinformatics, or geospatial engineering, where three-dimensional observations are more widely applied. Consequently, the author came to the idea of writing this book, which focuses on the adjustment of 3D geomatical observations; it draws on traditional techniques and puts them in a modern form supported by solved examples. Further, the book presents both theoretical and practical context to ensure a clear understanding of adjustment computations. Many numerical examples, as mentioned, are introduced and supported by MATLAB codes to fully explain the topics. The author should mention that he is not a professional programmer, and he developed the codes mainly for educational purposes. The book is intended for students, lecturers, and researchers in the field of geomatics and its divisions. Readers can find the full list of the code examples at https://nl.mathworks.com/matlabcentral/fileexchange/70550-adjustment-models-in3d-geomatics. The book is structured in 12 chapters to cover essential preliminary and advanced adjustment topics. Chapter 1 introduces statistical definitions, concepts, and derivations. The most important method of adjustment, using the least squares

principle and its mathematical derivation is also introduced. Then, in Chapter 2, the error propagation technique is presented, and the variance-covariance matrices of observations and unknowns are described; the preanalysis procedure is also presented. In Chapter 3, adjustment using condition equations and observation equations is introduced and, at the end, the concept of the homogeneous least squares method is presented. For a better understanding, different adjustment models are shown in Chapter 4, such as intersection and resection in 2D or 3D using observed distances, angles, azimuths, height differences, or computations from images. At the end of Chapter 4, an important computational geophysics application is shown: earthquake location determination. The general adjustment approach is presented in Chapter 5, for cases where there is more than one observation in the mathematical model related to several unknowns. The important topic of adjustment with constraints is introduced in Chapter 6. Three main topics are presented in this chapter: adjustment with constraints, adjustment with additional parameters, and adjustment with inner constraints (free nets).

More advanced topics are presented in the second half of the book, such as the unified approach of least squares adjustment in Chapter 7. This is an advanced adjustment technique in which the unknowns have uncertainties and are then processed as observations in the model. Chapters 8 and 9 present further applications in geomatics, namely fitting 3D geometric primitives and 3D transformation computations, respectively. The book then introduces other advanced topics: the Kalman filter in Chapter 10 and nonlinear least squares using the Levenberg-Marquardt method in Chapter 11. Finally, in Chapter 12, the detection, identification, and adaptation (DIA) postadjustment technique is introduced. The chapter presents the blunder detection methods of data snooping, robust estimation, and random sample consensus (RANSAC). In the book's Appendix, a MATLAB code is given for the adjustment of horizontal geodetic networks.

Bashar Alsadik
The Netherlands

CHAPTER 1

Statistical Introduction

1.1 INTRODUCTION

In geomatics, different kinds of measurements (termed observations in this book) are applied. Imperfections in instrumentation, weather conditions, and limitations of the operator's skill produce field observations that contain different kinds of errors. Accordingly, understanding the adjustment of observations and the theory of errors is essential to process and solve geomatical problems. Because we need confidence in an applied engineering or scientific project, statistical measures of accuracy, precision, and reliability should be computed and/or standardized. It should be noted that fulfilling tighter accuracy standards requires more time, more professional labor, more advanced instrumentation, and more computing power. To give real-life examples of the importance of this chapter's topics related to the adjustment of observations, we list two examples in Fig. 1.1. The first example, shown in Fig. 1.1A, concerns an engineering construction project. Several questions come to mind: Which end of the bridge is the correctly positioned/aligned one? How accurate is it? And what type of error might have occurred in the calculations and execution? The second example, shown in Fig. 1.1B, shows racing athletes at the finish line, where the time difference between them is within fractions of a second. So which of the two competitors is the winner? How accurate is this image-based measuring system? Is the camera calibrated and well positioned?

Adjustment Models in 3D Geomatics and Computational Geophysics https://doi.org/10.1016/B978-0-12-817588-0.00001-5




FIG. 1.1 (A) Which end of the bridge is the correctly aligned one? (B) Which athlete is the gold medal winner?

All of the questions raised by Fig. 1.1 can be answered once we understand the different concepts and indices in the adjustment of observations and theory of errors, such as: accuracy, precision, reliability, calibration, standard deviation, weighted mean, residual error, most probable value (MPV), redundancy, etc.

1.2 STATISTICAL DEFINITIONS AND TERMINOLOGIES

• Errors: the differences between observed values and their true values. An error is what causes values to differ when a measurement is repeated and none of the results can be preferred over the others. Although it is not possible to entirely eliminate error in a measurement, it can be controlled and characterized. The terminologies related to errors are summarized as follows:

• Gross errors: large errors (blunders, mistakes, or outliers) that can be avoided in the observations. They don't follow a mathematical or physical model and may be large or small, positive or negative. With developments in instrumentation and automated procedures, the main source of gross errors is human-related, for example, recording and reading errors, or observing a different target than the intended one. Careful reading and recording of the data can significantly reduce gross errors. It should be noted that some references don't count mistakes as an error type [1].

• Systematic errors: systematic errors (or bias, when there are many observations) follow some physical model and can therefore be checked. They are either positive or negative; proper measuring procedures can eliminate some of them, whereas others are corrected by mathematical methods. Systematic error sources are recognizable and can be reduced to a great extent by careful design of the observation system and the selection of its components. In practice, the purpose of calibrating instruments, such as cameras in photogrammetry, is to detect and remove systematic errors. Fig. 1.2 illustrates a systematic-error-free observation (red curve) and the same observation biased in a certain direction and by a certain amount (blue curve).


FIG. 1.2 Systematic error (bias) illustration.

• Random errors: random or accidental errors are unavoidable errors that represent the residuals after removing all other types of errors. This type of error occurs in observations because of limitations in the measuring instruments or limitations related to the operator, among other undetermined factors. Random errors can be positive or negative, and they don't follow a physical model. Therefore, they are processed statistically using probability theory, because the majority follow a normal distribution. In repetitive observations, the average or mean can be used as the MPV. As shown in Fig. 1.3, greater random errors cause a greater dispersion of normally distributed observations around the mean. The dispersion is measured by the standard deviation at a certain probability, as will be shown in Eq. (1.6).

FIG. 1.3 Observations having a normal probability distribution, with and without random errors.

• Uncertainty: all observations have uncertainties. An uncertainty is a range of values within which the true observation could lie. An uncertainty estimate should address both systematic and random errors, and it is therefore considered the most proper measure for expressing accuracy. However, in many geomatics problems the systematic error is disregarded, and only the random error is included in the uncertainty of the observation. When only random error is counted in the uncertainty evaluation, it is an expression of the precision of the observation.


• Accuracy and Precision: accuracy is the closeness between an observed value and its true value (trueness). When the observations follow a normal probability distribution, accuracy can be illustrated as shown in Fig. 1.4. Therefore, removing systematic errors improves accuracy. Precision is the closeness between independent observations of a quantity taken under the same conditions (Fig. 1.4). It is a measure of how well a value is observed without reference to a theoretical or true value. The number of divisions on the scale of the measuring device generally affects the consistency of repeated observations and, therefore, the precision. Because precision is not based on a true value, it contains no bias or systematic error; it depends solely on the distribution of random errors. The precision of a measurement is usually indicated by the uncertainty or the fractional relative uncertainty of a value. Another useful illustration of the concepts of precision and accuracy is shown in Fig. 1.5 using an archery target in three cases.

FIG. 1.4 Accuracy and precision: accuracy (related to systematic error) relates the observed value to the true value, whereas precision relates to the spread of random errors.

FIG. 1.5 Relation between accuracy and precision: (left) accurate but less precise because of random errors; (middle) precise and accurate; (right) precise but inaccurate because of systematic bias.


Therefore:
– Reducing systematic errors improves accuracy.
– Reducing random errors improves precision.

It is incorrect to say that you can reduce random errors by choosing a more accurate measuring device; the correct term is a more precise measuring device. For example, smaller scale divisions mean a smaller spread, which leads to higher precision, whereas a more accurate device is one that reads true values.

• Reliability: reliability is an important term in adjustment computations. It refers to the degree of consistency, or reproducibility, of the observations; in other words, reliability expresses how closely the adjusted observations must agree with reality. It should be noted that errors in the observations can result in poor reliability. Fig. 1.6 shows examples of reliable and unreliable observations: the three lines AB, CD, and EF intersect at point P; the left plot shows less reliable observations, whereas in the right plot the three lines intersect almost exactly at P.

FIG. 1.6 (Left) less reliable observations; (right) reliable observations.

• Residual error: the residual error v is the difference between the true value and its observed value. However, because the true value is impossible to reach, it is statistically replaced by the MPV. Therefore:

v = \mathrm{MPV} - \text{observed value} \quad (1.1)

It should be noted that the MPV of repetitive observations is the arithmetic mean, whereas for other observations it is the optimal value that minimizes the sum of the squared residual errors. This concept will be explained in the least squares principle of Section 1.7. It is important to note that the MPV in the least squares sense represents the adjusted values of the observations. For nonlinear geomatical problems, the adjusted values are computed by running the solution with initial values. Accordingly:

\mathrm{MPV} = \text{adjusted values } \hat{X} = \text{approximate values} + \text{corrections} \quad (1.2)


1.3 STATISTICAL INDEXES

• Arithmetic mean x̄: when a quantity x is observed n times under the same conditions, the mean x̄ is computed as follows:

\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n} \quad (n \to \infty) \quad (1.3)

• Variance σ²: the theoretical mean of the squared residual errors, computed as follows:

\sigma^2 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n-1} = \frac{\sum_{i=1}^{n} v_i^2}{n-1} \quad (1.4)

The denominator n − 1 is called the redundancy r, or the degrees of freedom. The minimum number of observations needed to determine the quantity is one, termed n_o; the rest of the observations are redundant, and r can be formulated as:

r = n - n_o \quad (1.5)

• Standard deviation σ: the square root of the variance, expressing by how much the observations differ from the MPV or the mean value; it is a measure of how spread out the observations are. Therefore, the standard deviation is an expression of precision, computed as:

\sigma = \sqrt{\frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n-1}} = \sqrt{\frac{\sum_{i=1}^{n} v_i^2}{n-1}} \quad (1.6)

When reference values are available, the standard deviation is termed the Root Mean Squared Error (RMSE), which indicates by how much the observed or derived quantities deviate from the reference (true) values:

\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n} (x_i - x_{\mathrm{REFERENCE}})^2}{n}} \quad (1.7)

• Standard error σ_x̄: represents the standard deviation of the mean, computed as:

\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}} = \sqrt{\frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n(n-1)}} = \sqrt{\frac{\sum_{i=1}^{n} v_i^2}{n(n-1)}} \quad (1.8)


EXAMPLE 1.1

Given
A base line AB is observed in meters 10 times as follows:

26.342  26.349  26.351  26.345  26.348  26.350  26.348  26.352  26.345  26.348

Required
Find the MPV, the standard deviation, and the standard error of line AB to the nearest mm.

Solution
The MPV for repetitive observations is the mean:

\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n} = 26.348\ \mathrm{m}

The standard deviation is computed as:

\sigma = \sqrt{\frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n-1}} = 3\ \mathrm{mm}

The standard error is computed as:

\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}} = 1\ \mathrm{mm}

Then AB = 26.348 m ± 1 mm.
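Example 1.1 can also be checked numerically. The following short MATLAB sketch (not part of the book's published code set; the variable names are illustrative) evaluates Eqs. (1.3), (1.6), and (1.8) for the 10 baseline observations:

```matlab
% Example 1.1: MPV, standard deviation, and standard error of base line AB
x = [26.342 26.349 26.351 26.345 26.348 26.350 26.348 26.352 26.345 26.348]; % meters
n     = numel(x);
MPV   = mean(x);                        % most probable value (arithmetic mean), Eq. (1.3)
sigma = sqrt(sum((x - MPV).^2)/(n-1));  % standard deviation, Eq. (1.6)
se    = sigma/sqrt(n);                  % standard error of the mean, Eq. (1.8)
fprintf('MPV = %.3f m, sigma = %.1f mm, std. error = %.1f mm\n', MPV, sigma*1e3, se*1e3)
% prints: MPV = 26.348 m, sigma = 3.0 mm, std. error = 1.0 mm
```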

1.4 NORMAL DISTRIBUTION CURVE

In the literature related to geomatics, it is presented that all observations taken by surveyors, such as observed distances and angles, or observations made by photogrammetrists and geodesists, comply with probability laws. Further, the random errors that exist in the observations are normally distributed (Example 1.2). Accordingly, statistical techniques can be applied in the postprocessing and analysis of the observations.


The normal distribution (Gaussian) curve can be calculated and then plotted using Eq. (1.9):

y = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{v^2}{2\sigma^2}} = K e^{-h^2 v^2} \quad (1.9)

where
y: the normal probability,
K = \frac{nI}{\sigma\sqrt{2\pi}},
h^2 = \frac{1}{2\sigma^2}, where h is called the accuracy index,
v: the residual error,
σ: the standard deviation,
I: any selected interval of the residual errors,
e: the exponential function.

EXAMPLE 1.2

Given
An angle is observed 50 times, in degrees, as illustrated in Table 1.1:

TABLE 1.1 Measured Angle

40.3429370  40.3414908  40.3373390  40.3363269  40.3400577
40.3441110  40.3414915  40.3373237  40.3352349  40.3398773
40.3416594  40.3419747  40.3366620  40.3385154  40.3412115
40.3433579  40.3452412  40.3384449  40.3365439  40.3381179
40.3401685  40.3397429  40.3377490  40.3466756  40.3424222
40.3434787  40.3401892  40.3456270  40.3394161  40.3384703
40.3382263  40.3422242  40.3374289  40.3401997  40.3402152
40.3411796  40.3432062  40.3405390  40.3419665  40.3393154
40.3409316  40.3441452  40.3368495  40.3435766  40.3413856
40.3364398  40.3411947  40.3443333  40.3371976  40.3395228

Required
– Compute the MPV of the angle and its standard deviation.
– Compute and plot the normal probability distribution curve of the residual errors.

Solution
The MPV of the angle is simply the arithmetic mean, therefore:

MPV = mean of angles = 40.3404387 degrees

whereas the standard deviation is computed as:

\sigma = \sqrt{\frac{\sum v^2}{n-1}} = 9.835''

To compute the probability y of the normal distribution curve, the following elements are computed:

K = \frac{nI}{\sigma\sqrt{2\pi}} = 4.553

h^2 = \frac{1}{2\sigma^2} = 0.0051688

The computed residuals are grouped between maximum/minimum limits of ±22.45″. Then, to obtain 20 regular intervals for plotting, the interval I is computed to be 2.245″, as illustrated in the following MATLAB code. Table 1.2 illustrates the details of the computations that relate the residual errors in seconds to the computed probability y of the normal distribution curve.

TABLE 1.2 Computed Probability of the Measured Angle

v (″)      v²       h²v²    e^(h²v²)   y = K/e^(h²v²)
−22.45    504.00    2.61    13.60      0.33
−20.20    408.04    2.11     8.25      0.55
−17.96    322.56    1.67     5.31      0.86
−15.72    247.12    1.28     3.60      1.26
−13.47    181.44    0.94     2.56      1.78
−11.22    125.89    0.65     1.92      2.37
 −8.98     80.64    0.42     1.52      3.00
 −6.74     45.43    0.23     1.26      3.61
 −4.49     20.16    0.10     1.11      4.10
 −2.24      5.02    0.03     1.03      4.42
  0.00      0.00    0.00     1.00      4.55
  2.24      5.02    0.03     1.03      4.42
  4.49     20.16    0.10     1.11      4.10
  6.74     45.43    0.23     1.26      3.61
  8.98     80.64    0.42     1.52      3.00
 11.22    125.89    0.65     1.92      2.37
 13.47    181.44    0.94     2.56      1.78
 15.72    247.12    1.28     3.60      1.26
 17.96    322.56    1.67     5.31      0.86
 20.20    408.04    2.11     8.25      0.55
 22.45    504.00    2.61    13.60      0.33

Then the normal distribution curve and histogram can be plotted as shown in Fig. 1.7.

FIG. 1.7 The normal distribution of residuals in seconds.

MATLAB code
%% chapter 1 - example 1.2
clc, clear, close all
% create normally distributed observed angles
ang = 40.34 + 10*(randn(50,1)/3600);   % adding random normal noise of 10 sec.
% compute residuals
resid = 3600*(ang - mean(ang));
Resid = unique(round(resid*100)/100);
m = max(abs(Resid));
% make equal intervals
I = (2*m)/20;                  % 20 intervals; can be changed by the user
v = -m:I:m;
v = round(v'*100)/100;         % sample the residuals equally in seconds
v2 = round(v.^2*100)/100;      % squared residuals
sigma = std(ang)*3600;         % standard deviation in seconds
n = size(ang,1);               % total number of observations
k1 = (n*I)/(sigma*sqrt(2*pi)); % K of the normal probability distribution
k2 = 1/(2*sigma^2);            % h^2 of the normal probability distribution
k2v2 = round(k2*v2*100)/100;
exp_k2v2 = round(exp(k2v2)*100)/100;
y = round((k1./exp_k2v2)*100)/100;   % y probability
plot(v,y,'-','linewidth',3); hold on % plot the normal curve
xlabel('RESIDUALS sec.')
ylabel('PROBABILITY')
grid on
disp('SUMMARIZED CALCULATIONS')
T = table(v, v2, k2v2, exp_k2v2, y)
bar(v,y); alpha(.5);           % plot the bars

1.5 CUMULATIVE DISTRIBUTION FUNCTION

A set of observations that are normally distributed can be represented as a histogram or distribution curve, as shown in Example 1.2. The Cumulative Distribution Function (CDF), on the other hand, shows the percentage or relative count of the sorted observation values over the observations themselves [2]; it is, in fact, the integral of the normal distribution histogram. Let's assume we have 100 observations of an angle with a mean of 50 degrees and a precision of 10 degrees. We can represent the observed values either in a histogram or in a CDF plot as follows:

MATLAB code
% angle x biased randomly
x = randn(100,1)*10 + 50;
subplot(1,2,1)
hist(x)
subplot(1,2,2)
cdfplot(x)

The CDF plot in Fig. 1.8B explains that 90% of the observed angles are less than or equal to 60 degrees. This property is easy to interpret even for a nonprofessional. Therefore, key values such as minimum, maximum, median, percentiles, etc. can be directly read from the diagram. Accordingly, CDF is an efficient description for random variable uncertainty and is a useful tool for comparing the distribution of different sets of data. Let’s assume that three observers A, B, and C measured the same angle 300 times each. To compare the skill of the three observers in observing the angle, we can use the CDF plot as shown in Fig. 1.9.


FIG. 1.8 (A) Normal distribution histogram, (B) The corresponding CDF plot.

FIG. 1.9 The CDF plots of the angle observed by three surveyors.

MATLAB code
clear; clc; close all
% angle biased randomly
A = randn(300,1)*10 + 50;
hA = cdfplot(A); hold on
set(hA(:,1), 'Linewidth', 3);
B = randn(300,1)*10 + 48;
hB = cdfplot(B); hold on
set(hB(:,1), 'Linewidth', 3);
C = randn(300,1)*10 + 53;
hC = cdfplot(C); hold on
set(hC(:,1), 'Linewidth', 3);
lgd = legend('observer A', 'observer B', 'observer C');
lgd.FontSize = 12;

1.6 THE PROBABLE ERROR AND LEVELS OF REJECTION

The probable error Pe_50% is defined as the error magnitude that a true error is equally likely to exceed or fall short of: there is a 50% probability that |v_i| > Pe_50% and a 50% probability that |v_i| < Pe_50%, as shown in Fig. 1.10.

FIG. 1.10 The 50% probable error.

For a group of observations, the 50% probable error is the median of the residual errors after sorting them in ascending or descending order; it is therefore sometimes termed the median absolute deviation (MAD) (Chapter 12). By using this approach, however, the probable error is not sensitive to gross errors, as shown in the following illustration, where the probable error is 5″ for both error sets despite the existence of a gross error of 94″ in the second set:

1st set: 1″ 1″ 3″ 5″ 6″ 7″ 11″
2nd set: 1″ 1″ 3″ 5″ 6″ 7″ 94″

Statistically, to relate the probable error Pe_50% to the standard deviation, we can apply the following derivation by integrating the normal distribution curve equation:

\mathrm{Area} = \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-P_e}^{+P_e} e^{-\frac{v^2}{2\sigma^2}}\, dv = 0.5 \quad (1.10)

Accordingly, the probable error is evaluated as:

P_{e\_50\%} = 0.6745\,\sigma \approx \frac{2}{3}\sigma \quad (1.11)


where σ is the standard deviation and v is the residual error. Applying the same derivation procedure, we can compute the probability for any given multiple of the standard deviation. Hence, the probability for the range ±σ is found to be:

P(-\sigma \le \varepsilon \le +\sigma) = 0.683 \quad (1.12)

Therefore, the probability that an error lies between −σ and +σ corresponds to 68% of the total area under the normal curve, as shown in Fig. 1.11. As an illustration, when a line length is observed with a precision of ±0.06 m, there is a probability of 68% that the true error of the observation is equal to or less than 0.06 m.

FIG. 1.11 Probable error in the range of ±σ (68% of the area, bounded by the inflection points).

In the same manner, the probabilities for the intervals ±2σ and ±3σ are computed as shown in Eq. (1.13) and Fig. 1.12:

P(-2\sigma \le \varepsilon \le +2\sigma) = 0.955
P(-3\sigma \le \varepsilon \le +3\sigma) = 0.997 \quad (1.13)

FIG. 1.12 Different standard deviation limits of the normal distribution curve.


Usually, the threshold of 3σ is used for blunder detection. Accordingly, residuals falling at either of the two ends of the normal curve, in the 1% marginal area outside the 99% confidence region, are assumed to be blunders. Table 1.3 illustrates different probability percentages and the associated standard deviation thresholds that can be adopted for blunder detection or other postanalysis applications.

TABLE 1.3 The Relation Between Probable Errors and Standard Deviations

Probability  68.3%  86.64%  90%    95%    95.45%  99%    99.9%
Threshold    1.0σ   1.5σ    1.64σ  1.96σ  2.0σ    2.58σ  3.29σ
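The percentages in Table 1.3 follow directly from the normal distribution: the probability that an error falls within ±kσ is erf(k/√2). The following MATLAB sketch (illustrative, not from the book's code set) reproduces the table:

```matlab
% Probability that a normally distributed error lies within +/- k*sigma
k = [1.0 1.5 1.64 1.96 2.0 2.58 3.29];
P = erf(k/sqrt(2));   % cumulative probability of the interval [-k*sigma, +k*sigma]
fprintf('%5.2f*sigma -> %6.2f%%\n', [k; 100*P])
% e.g. 1.00*sigma -> 68.27%, 1.96*sigma -> 95.00%, 3.29*sigma -> 99.90%
```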

1.7 PRINCIPLE OF LEAST SQUARES ADJUSTMENT

The normal distribution curve, as mentioned in Eq. (1.9), is:

y = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{v^2}{2\sigma^2}} = K e^{-h^2 v^2}

The probability of occurrence of a residual v_i is mathematically expressed as a rectangular slice of area under the normal distribution curve, as shown in Fig. 1.13, with a value of y_i Δv. Accordingly, the probabilities of having the residual errors v_1, v_2, …, v_n in the observations are:

P_{v_1} = y_1\,\Delta v = K e^{-h^2 v_1^2}\,\Delta v
P_{v_2} = y_2\,\Delta v = K e^{-h^2 v_2^2}\,\Delta v
\vdots
P_{v_n} = y_n\,\Delta v = K e^{-h^2 v_n^2}\,\Delta v \quad (1.14)

FIG. 1.13 The probability of error v_i occurrence.

Hence, the probability of the occurrence of all the errors together is the product of the individual probabilities:

P = \left(K e^{-h^2 v_1^2}\,\Delta v\right)\left(K e^{-h^2 v_2^2}\,\Delta v\right)\cdots\left(K e^{-h^2 v_n^2}\,\Delta v\right) \quad (1.15)

or, simplified:

P = K^n (\Delta v)^n\, e^{-h^2\left(v_1^2 + v_2^2 + \cdots + v_n^2\right)} \quad (1.16)

FIG. 1.14 The inverse relation between x and e^{-x}.

The inverse relation between the exponential function e^{-x} and the variable x is illustrated in Fig. 1.14, where the minimal value of x is attained when e^{-x} is at its maximum. Accordingly, referring back to Eq. (1.16), the maximum probability is attained when the sum of the squared errors (v_1^2 + v_2^2 + \cdots + v_n^2) is at its minimum, which is the principle of the least squares adjustment method:

\sum v^2 = v_1^2 + v_2^2 + \cdots + v_n^2 = \text{minimum} \quad (1.17)

or in matrix form:

v^T v = \text{minimum} \quad (1.18)

In summary, the MPV (adjusted value) is the value for which the sum of squared residual errors is at a minimum. The least squares method is considered the most common and robust statistically based method for the adjustment of observations.
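The least squares principle of Eq. (1.17) can also be illustrated numerically: sweeping candidate values and evaluating Σv² shows that the minimum occurs at the arithmetic mean. A small MATLAB sketch (the observations and variable names are illustrative, not from the book):

```matlab
% Numerical illustration of Eq. (1.17): sum of squared residuals is minimal at the mean
obs  = [26.342 26.349 26.351 26.345 26.348];    % sample observations (m)
cand = 26.330:0.0001:26.360;                    % candidate values for the MPV
SSR  = arrayfun(@(c) sum((c - obs).^2), cand);  % sum of v^2 for each candidate
[~, idx] = min(SSR);
fprintf('minimizer = %.4f, mean = %.4f\n', cand(idx), mean(obs))
% the two values agree, as proved analytically in Example 1.3
```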

EXAMPLE 1.3
Prove that the arithmetic mean x̄ represents the MPV of repetitive observations of a quantity x, using the principle of least squares.

Solution
The MPV in the least squares sense satisfies the minimum of the squared residual errors. Accordingly, if a quantity x is observed n times with the same precision, we can put forth the following formulation:

x̄ − x₁ = v₁
x̄ − x₂ = v₂
⋮
x̄ − xₙ = vₙ    (1.19)

where the v's are the residual errors. Substituting Eq. (1.19) in Eq. (1.17) of the least squares method gives:

Σv² = (x̄ − x₁)² + (x̄ − x₂)² + … + (x̄ − xₙ)² = min.    (1.20)

Hence, differentiating Σv² with respect to the mean x̄:

∂Σv²/∂x̄ = 2(x̄ − x₁) + 2(x̄ − x₂) + … + 2(x̄ − xₙ) = 0    (1.21)

After simplification, we get:

(x̄ − x₁) + (x̄ − x₂) + … + (x̄ − xₙ) = 0    (1.22)

or

n·x̄ = x₁ + x₂ + … + xₙ    (1.23)

Then

x̄ = (x₁ + x₂ + … + xₙ)/n    (1.24)

1.8 WEIGHTED OBSERVATIONS

In practice, observations are not acquired with the same precision because of different observers, instruments, dates, dissimilar quantities, etc. Therefore, the difference in weights w should be considered to give realistic adjustment results. The higher the required precision, the higher the observation weight. On this basis:

• The weight is inversely proportional to the variance of the observations:
  w ∝ 1/σ_x²
• The weight is inversely proportional to the observed length (as in leveling networks):
  w ∝ 1/length
• The weight is directly proportional to the number of observations n:
  w ∝ n

The least squares principle of Eq. (1.17) can be updated by considering the weights as:

Σwv² = w₁v₁² + w₂v₂² + … + wₙvₙ² = minimum    (1.25)

Further, the arithmetic mean x̄ with the consideration of weights is computed as follows:

x̄ = Σ(wᵢxᵢ)/Σwᵢ = (w₁x₁ + w₂x₂ + … + wₙxₙ)/(w₁ + w₂ + … + wₙ)    (1.26)

The standard deviation of the weighted mean is also updated compared to Eq. (1.8) to the following form:

σ_x̄ = √(σ₀²/Σwᵢ),  where σ₀² = Σwᵢvᵢ²/(n − 1)    (1.27)

To compute the standard deviation of one observation xᵢ that has a weight wᵢ, we use the following equation:

σ_xᵢ = √(σ₀²/wᵢ)    (1.28)

It is worth mentioning that Eqs. (1.27) and (1.28) can be proven by applying the propagation of errors law of Chapter 2.
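Eqs. (1.26) and (1.27) translate directly into code; a Python sketch for cross-checking outside MATLAB (function names are ours), verified here on the leveling data of Example 1.4 below:

```python
import math

def weighted_mean(values, weights):
    """Eq. (1.26): weighted arithmetic mean."""
    return sum(w * x for w, x in zip(weights, values)) / sum(weights)

def std_of_weighted_mean(values, weights):
    """Eq. (1.27): sigma0^2 = sum(w*v^2)/(n-1); sigma_mean = sqrt(sigma0^2/sum(w))."""
    mean = weighted_mean(values, weights)
    n = len(values)
    sigma0_sq = sum(w * (mean - x) ** 2 for w, x in zip(weights, values)) / (n - 1)
    return math.sqrt(sigma0_sq / sum(weights))

# heights of D from each leveling line; weights inversely proportional to lengths
heights = [14.00, 14.01, 14.05]
weights = [1 / 1.1, 1 / 0.75, 1 / 0.5]
print(weighted_mean(heights, weights))         # ~14.027 -> 14.03 m
print(std_of_weighted_mean(heights, weights))  # ~0.016 m -> 16 mm
```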

EXAMPLE 1.4
Given
• Three height benchmarks A, B, and C (Fig. 1.15) with: H_A = 10.00 m, H_B = 11.01 m, H_C = 12.05 m.
• The observed differences in elevation and the leveling line lengths:

ΔH_A = 4.00 m, L_AD = 1.10 km
ΔH_B = 3.00 m, L_BD = 0.75 km
ΔH_C = 2.00 m, L_CD = 0.50 km

FIG. 1.15 Leveling network of Example 1.4.

Required
Compute the adjusted height of station D and its standard deviation to the nearest cm.

Solution
The weighted mean represents the MPV of the height; accordingly, weights can be computed for the observed height differences from the three benchmarks A, B, and C to the unknown station D. As mentioned, weights are inversely proportional to the leveling line lengths, and therefore the weights are computed as:

w₁ = 1/1.1 = 0.91,  w₂ = 1/0.75 = 1.33,  w₃ = 1/0.5 = 2

The estimated height of D from every leveling line is evaluated as follows:

H_D1 = H_A + ΔH_A = 14.00 m
H_D2 = H_B + ΔH_B = 14.01 m
H_D3 = H_C + ΔH_C = 14.05 m

Then

x̄ = (w₁H_D1 + w₂H_D2 + w₃H_D3)/(w₁ + w₂ + w₃) = (14.00 × 0.91 + 14.01 × 1.33 + 14.05 × 2)/(0.91 + 1.33 + 2) = 14.03 m

To compute the standard deviation of the weighted mean, we first compute the residuals and the variance of unit weight:

σ₀² = Σwᵢvᵢ²/(n − 1) = (0.91 × 26.71² + 1.33 × 16.71² + 2.0 × 23.29²)/(3 − 1) ≈ 1053 mm²

Hence, the standard deviation of the weighted mean is computed as:

σ_x̄ = √(σ₀²/Σw) = √(1053/4.24) ≈ 16 mm

The final adjusted height of station D is therefore 14.03 m ± 16 mm.

MATLAB code
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%% chapter 1 - example 1.4 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clc, clear, close all
% Given
dha=4; dhb=3; dhc=2;          % differences in height (m)
La=1.1; Lb=.75; Lc=.5;        % leveling line lengths (km)
Ha=10; Hb=11.01; Hc=12.05;    % benchmark heights (m)
% weights (inversely proportional to line lengths)
wa=1/La; wb=1/Lb; wc=1/Lc;
% height of D from each leveling line
Hd1=Ha+dha; Hd2=Hb+dhb; Hd3=Hc+dhc;
% weighted mean, rounded to cm
M=round(100*(Hd1*wa+Hd2*wb+Hd3*wc)/(wa+wb+wc))/100;
% residuals (mm) and the variance of unit weight
va=(Hd1-M)*1000; vb=(Hd2-M)*1000; vc=(Hd3-M)*1000;
sigma=(wa*va^2+wb*vb^2+wc*vc^2)/(3-1);
% standard deviation of the weighted mean
s_M=round(sqrt(sigma/(wa+wb+wc)))
disp(' Result')
[num2str(M),'m',char(177),num2str(s_M),'mm']

EXAMPLE 1.5
Given
Three angles A, B, and C are observed in a triangular traverse by two observers as follows (Table 1.4):

TABLE 1.4 The Measured Angles by the Two Observers

      1st observer                  2nd observer
      Mean angles   Standard dev.   Mean angles   Standard dev.
A     52.8033°      4.0″            52.8058°      2.0″
B     63.3933°      4.0″            63.3433°      2.0″
C     63.8503°      4.0″            63.8467°      2.0″

Required
Compute the adjusted angles A, B, and C using the weighted mean computations.

Solution
The angles should first be adjusted to the geometric condition that the sum of the triangle angles equals 180°, as illustrated in Table 1.5.

TABLE 1.5 Closure Error Adjustment of the Measured Angles

      1st observer                                 2nd observer
      Before adj.   Correction   After adj.        Before adj.   Correction   After adj.
A     52°48′12″     −56″         52°47′16″         52°48′21″     +5″          52°48′26″
B     63°23′36″     −56″         63°22′40″         63°20′36″     +5″          63°20′41″
C     63°51′01″     −57″         63°50′04″         63°50′48″     +5″          63°50′53″
Sum   180°02′49″                 180°00′00″        179°59′45″                 180°00′00″

The weight is inversely proportional to the squared standard deviations:
• Weight of the angles of the 1st observer = w₁ = 1/16
• Weight of the angles of the 2nd observer = w₂ = 1/4

Fractional weights can be scaled to integers for simplicity as w₁ = 1 and w₂ = 4. The adjusted angles using the weighted mean are computed as follows:

adjusted A = (52°47′16″ × 1 + 52°48′26″ × 4)/5 = 52°48′12″
adjusted B = (63°22′40″ × 1 + 63°20′41″ × 4)/5 = 63°21′05″
adjusted C = (63°50′04″ × 1 + 63°50′53″ × 4)/5 = 63°50′43″

MATLAB code
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%% chapter 1 - example 1.5 %%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clc; clear; close all
std_1=4; std_2=2;
observer_1=[52 48 12; 63 23 36; 63 51 01];
observer_1=dms2degrees(observer_1);
observer_2=[52 48 21; 63 20 36; 63 50 48];
observer_2=dms2degrees(observer_2);
% closure errors
closure_1=180-sum(observer_1);
closure_2=180-sum(observer_2);
% adjusted angles and residuals - 1st observer
v1(1)=closure_1/3; A1=observer_1(1)+v1(1);
v1(2)=closure_1/3; B1=observer_1(2)+v1(2);
v1(3)=closure_1/3; C1=observer_1(3)+v1(3);
A1+B1+C1   % check: 180
% adjusted angles and residuals - 2nd observer
v2(1)=closure_2/3; A2=observer_2(1)+v2(1);
v2(2)=closure_2/3; B2=observer_2(2)+v2(2);
v2(3)=closure_2/3; C2=observer_2(3)+v2(3);
A2+B2+C2   % check: 180
% weights (inverse of the squared standard deviations)
w1=1/std_1^2; w2=1/std_2^2;
% weighted mean of the angles
A=(A1*w1+A2*w2)/(w1+w2)
B=(B1*w1+B2*w2)/(w1+w2)
C=(C1*w1+C2*w2)/(w1+w2)
check=A+B+C
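For readers without the Mapping Toolbox, the same computation in Python (the `dms2deg` helper is ours):

```python
def dms2deg(d, m, s):
    """Convert degrees-minutes-seconds to decimal degrees."""
    return d + m / 60 + s / 3600

obs1 = [dms2deg(52, 48, 12), dms2deg(63, 23, 36), dms2deg(63, 51, 1)]
obs2 = [dms2deg(52, 48, 21), dms2deg(63, 20, 36), dms2deg(63, 50, 48)]

# distribute the closure error equally over the three angles
adj1 = [a + (180 - sum(obs1)) / 3 for a in obs1]
adj2 = [a + (180 - sum(obs2)) / 3 for a in obs2]

# weights inversely proportional to the squared standard deviations (4" and 2")
w1, w2 = 1 / 4**2, 1 / 2**2
adjusted = [(a1 * w1 + a2 * w2) / (w1 + w2) for a1, a2 in zip(adj1, adj2)]
print(adjusted)   # A ~ 52.8033 deg = 52d 48' 12"
```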

C H A P T E R   2

Propagation of Errors

Adjustment Models in 3D Geomatics and Computational Geophysics, https://doi.org/10.1016/B978-0-12-817588-0.00002-7

2.1 INTRODUCTION

Propagation of errors can be defined as the process of estimating the errors of unknown variables x using a mathematical model that relates x with observed quantities l, which have uncertainties or errors. On the other hand, when the error limits of an unknown or a group of unknowns are defined, the error propagation principle can be inverted to estimate the errors in the observations made by a certain instrument; this is termed preanalysis. Both ideas related to propagation of random errors are illustrated in Fig. 2.1.

FIG. 2.1 Error propagation flow diagram.

Normally, estimated errors are derived from what is termed in statistics the variance-covariance matrix. The errors of the observations and of the unknowns are both presented in variance-covariance matrices, termed in this chapter Σll and Σxx, respectively.

2.2 PROPAGATION OF ERRORS LAW

Let us assume a mathematical model that relates an unknown x and n observations l as x = F(l₁, l₂, …, lₙ), where l₁, l₂, …, lₙ are independent observations that have random errors ±σ_l1, ±σ_l2, …, ±σ_ln. Taylor's theorem can be used; neglecting the higher-order terms and assuming small errors, we get the law of random error propagation:

σ_x² = (∂F/∂l₁)²σ_l1² + (∂F/∂l₂)²σ_l2² + … + (∂F/∂lₙ)²σ_ln²    (2.1)

where ∂F/∂lᵢ is the partial derivative of the model with respect to observation lᵢ, and σ_x is the estimated error of the unknown x. It should be noted that random errors can be represented by standard deviations because they are normally distributed, as presented in Chapter 1.

EXAMPLE 2.1
It is required to prove, using the propagation of errors of Eq. (2.1), that the standard error σ_x̄ for a repeated observation of a quantity is σ_x̄ = σ/√n.

Solution
The standard error of repeated observations is, in fact, the standard deviation of the mean, and therefore the mathematical model for the problem is the mean equation:

x̄ = Σxᵢ/n = x₁/n + x₂/n + ⋯ + xₙ/n

∴ σ_x̄² = (1/n)²σ_x1² + (1/n)²σ_x2² + ⋯ + (1/n)²σ_xn²
        = (1/n²)(σ_x1² + σ_x2² + ⋯ + σ_xn²)

Since all observations have the same precision σ:

σ_x̄² = (1/n²)(nσ²) = σ²/n

∴ σ_x̄ = σ/√n

EXAMPLE 2.2
Find the estimated standard deviation of the volume of a box-shaped (parallelepiped) tank, as shown in Fig. 2.2, where its three dimensions height H, width W, and length L are measured as follows:

L = 5.52 ± 0.05 m
W = 3.41 ± 0.03 m
H = 2.51 ± 0.02 m

FIG. 2.2 Propagation of errors in the volume of the parallelepiped.

Solution
The mathematical model that relates the observed dimensions of the tank to its volume is:

Volume = L × W × H = 47.25 m³

Applying the propagation of errors as in Eq. (2.1), we get:

σ_volume² = (∂volume/∂L)²σ_L² + (∂volume/∂W)²σ_W² + (∂volume/∂H)²σ_H²

where

∂volume/∂L = W × H = 8.559
∂volume/∂W = L × H = 13.855
∂volume/∂H = L × W = 18.823

σ_volume = 0.71 m³
∴ volume = 47.25 ± 0.71 m³
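Eq. (2.1) can also be checked numerically without deriving any partials, by approximating ∂F/∂lᵢ with central finite differences; a Python sketch (our own helper, not from the book) applied to the tank volume above:

```python
import math

def propagate(f, values, sigmas, h=1e-6):
    """Numerical version of Eq. (2.1): sigma_x^2 = sum((dF/dl_i)^2 * sigma_i^2)."""
    var = 0.0
    for i, s in enumerate(sigmas):
        hi = list(values); lo = list(values)
        hi[i] += h; lo[i] -= h
        dF = (f(hi) - f(lo)) / (2 * h)   # central-difference partial derivative
        var += (dF * s) ** 2
    return math.sqrt(var)

volume = lambda d: d[0] * d[1] * d[2]   # V = L * W * H
sigma_v = propagate(volume, [5.52, 3.41, 2.51], [0.05, 0.03, 0.02])
print(sigma_v)   # ~0.71 m^3, matching Example 2.2
```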

2.3 USING MATRIX FORM FOR PROPAGATION OF ERRORS

Matrices can be used efficiently for the error propagation of Eq. (2.1), where a special matrix called the Jacobian matrix J is used. Error propagation can then be formed as:

Σxx = J Σll Jᵗ    (2.2)

where Σxx is the variance-covariance matrix of the m unknowns:

        | σ²x1       σx1x2   ⋯   σx1xm |
Σxx  =  |            σ²x2    ⋯   σx2xm |    (2.3)
        | symmetric              σ²xm  |  (m × m)

σ²x1, …, σ²xm are the variances on the main diagonal of Σxx, and σx1x2, …, σx(m−1)xm are the covariance elements. Σll is the variance-covariance matrix of the n independent (uncorrelated) observations:

        | σ²l1   0       0    |
Σll  =  | 0      σ²l2    0    |    (2.4)
        | 0      0   ⋯  σ²ln  |  (n × n)

J is the Jacobian matrix computed from the partial derivatives as shown in Eq. (2.1):

          | ∂F1/∂l1   ∂F1/∂l2   ⋯   ∂F1/∂ln |
J(m×n) =  | ⋮                       ⋮       |    (2.5)
          | ∂Fm/∂l1   ∂Fm/∂l2   ⋯   ∂Fm/∂ln |

Therefore, for the same problem of Example 2.2, the propagation of errors in matrix form is:

                               | σ²L   0     0   |   | W·H |
σ²volume = [W·H  L·H  L·W]  ×  | 0     σ²W   0   | × | L·H |
                               | 0     0     σ²H |   | L·W |

However, it should be noted that the variance-covariance matrix can be a full matrix when there is correlation between the variables, which is reflected in the covariance elements. As an example, consider two points measured using kinematic GPS positioning within a few seconds' interval, where both points are computed from the same navigation satellites. The relation between the correlation and the covariance between two variables x1 and x2 is simply expressed as:

covariance = correlation × σx1 × σx2    (2.6)

where σx1, σx2 represent the standard deviations of the variables x1 and x2.

EXAMPLE 2.3
To determine the slope of a road, the surveyor measured two points A and B as follows (Fig. 2.3):

Given
xA = 11561.19 ± 0.07, yA = 401528.71 ± 0.07, zA = 397.83 ± 0.10
xB = 11554.08 ± 0.07, yB = 401526.34 ± 0.07, zB = 398.78 ± 0.10

Required
What is the slope of the road and its estimated standard deviation?
(1) Assuming no correlation between points A and B.
(2) With a high correlation of 0.9 between A and B.

Solution
The slope angle is computed as:

slope = tan⁻¹(Δz/√(Δx² + Δy²))

where Δx = xB − xA, Δy = yB − yA, Δz = zB − zA, giving slope = 7.224 degrees.

To further compute the standard deviation of the computed slope, we apply the error propagation law:

Σslope = J Σxyz Jᵗ

With d² = Δx² + Δy² and the common factor k = 1/(1 + Δz²/d²), the Jacobian matrix elements are:

jxa = (Δx·Δz/d³)·k,  jxb = −jxa
jya = (Δy·Δz/d³)·k,  jyb = −jya
jza = −(1/d)·k,      jzb = −jza

J = [jxa  jya  jza  jxb  jyb  jzb]
J = [−0.0157915  −0.0052638  −0.1313194  0.0157915  0.0052638  0.1313194]

(1) With no correlation
The variance-covariance matrix is diagonal:

Σ = diag(0.07²  0.07²  0.10²  0.07²  0.07²  0.10²) = diag(0.0049  0.0049  0.01  0.0049  0.0049  0.01)

σslope = √(J Σ Jᵗ) = 0.0186 rad = 1.068 degrees

FIG. 2.3 Slope angle error estimation.

(2) With high correlation
The variance-covariance matrix is a full matrix after computing the covariances between the variables. Assuming a correlation factor of 0.9 between the corresponding coordinates of A and B, the covariances follow from Eq. (2.6), and:

σslope = √(J Σ Jᵗ) × 180/π = 0.337 degrees

The results show that when points A and B are highly correlated, the estimated error of the slope angle is reduced. This is logical because the slope is a relative quantity, and the errors of the B coordinates are correlated with the errors of A. For further investigation of correlated observations, the reader is advised to consult advanced references on statistical regression and correlation analysis.
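Both cases of this example can be reproduced with the matrix form of Eq. (2.2); a Python sketch with small matrix helpers (we assume, as above, that the 0.9 correlation acts between the like coordinates of A and B):

```python
import math

A = (11561.19, 401528.71, 397.83)
B = (11554.08, 401526.34, 398.78)
sig = [0.07, 0.07, 0.10, 0.07, 0.07, 0.10]   # sigmas of xA,yA,zA, xB,yB,zB

dx, dy, dz = B[0] - A[0], B[1] - A[1], B[2] - A[2]
d = math.hypot(dx, dy)
k = 1.0 / (1.0 + (dz / d) ** 2)

# Jacobian of slope = atan(dz/d) w.r.t. (xA, yA, zA, xB, yB, zB)
jxa = dx * dz / d**3 * k
jya = dy * dz / d**3 * k
jza = -k / d
J = [jxa, jya, jza, -jxa, -jya, -jza]

def slope_sigma(corr):
    """Propagated slope error (deg); correlation only between like coords of A, B."""
    S = [[0.0] * 6 for _ in range(6)]
    for i in range(6):
        S[i][i] = sig[i] ** 2
    for i in range(3):
        S[i][i + 3] = S[i + 3][i] = corr * sig[i] * sig[i + 3]   # Eq. (2.6)
    var = sum(J[i] * S[i][j] * J[j] for i in range(6) for j in range(6))
    return math.degrees(math.sqrt(var))

print(slope_sigma(0.0))   # ~1.068 degrees
print(slope_sigma(0.9))   # ~0.338 degrees
```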

2.4 APPLICATIONS

Different applications are presented in this section with MATLAB codes.

2.4.1 Application in Laser Scanning (Lidar Sensors)

Laser scanners or Lidar sensors collect large numbers of points in 3D space and can be operated from the air or from the ground (Fig. 2.4). Generally, the acquired point clouds are of high positioning accuracy, which can reach survey-grade millimeter accuracy. Further, point clouds can be extremely dense, with more than 1 point per centimeter.

FIG. 2.4 Terrestrial laser scanning.

It should be noted that there are different Lidar sensors on the market with different specifications in terms of point density and accuracy. For a calibrated Lidar, the relative accuracy of the scanned points is affected by the sensor errors in range and angles, the laser beam divergence, the calibration, and the surface material. The propagation of errors is useful to estimate the relative accuracy component due to the sensor errors, by transforming the measured range and angles to 3D coordinates Xi, Yi, Zi as follows:

Xi = X1 + R·cos(Az)·cos(V)
Yi = Y1 + R·sin(Az)·cos(V)    (2.7)
Zi = Z1 + R·sin(V)

where
R is the measured range distance from the sensor to the object point.
Az is the measured azimuth angle of the laser beam.
V is the measured vertical angle of the laser beam, measured from the horizontal plane.
X1, Y1, and Z1 are the coordinates of the Lidar.

It should be noted that the errors of the Lidar coordinates X1, Y1, Z1 are normally predicted from an integrated satellite navigation system such as GPS. When the Lidar coordinates are used in the error propagation model (Eq. 2.8), the estimated errors of the scanned point represent the absolute accuracy. On the other hand, when the Lidar coordinates are not included in the model, the estimated errors of the scanned point represent the relative accuracy (noise in the point cloud).

Error propagation can be applied to Eq. (2.7) using the Jacobian matrix J to estimate the errors of the scanned points in a point cloud as follows:

    | ∂Xi/∂Az  ∂Xi/∂V  ∂Xi/∂R  ∂Xi/∂X1  0        0       |
J = | ∂Yi/∂Az  ∂Yi/∂V  ∂Yi/∂R  0        ∂Yi/∂Y1  0       |
    | ∂Zi/∂Az  ∂Zi/∂V  ∂Zi/∂R  0        0        ∂Zi/∂Z1 |

    | −R·sin(Az)·cos(V)  −R·cos(Az)·sin(V)  cos(Az)·cos(V)  1  0  0 |
  = |  R·cos(Az)·cos(V)  −R·sin(Az)·sin(V)  sin(Az)·cos(V)  0  1  0 |
    |  0                  R·cos(V)           sin(V)          0  0  1 |

ΣXYZ = J ΣR,Az,V Jᵗ, whose main diagonal holds σXi², σYi², σZi²    (2.8)

where
ΣR,Az,V is the variance-covariance matrix of the observed ranges and angles of the point cloud.
ΣXYZ is the variance-covariance matrix of the derived coordinates of the point cloud.

As mentioned, there are different types of Lidar sensors on the market; one of the popular brands used widely is the rotating multibeam Velodyne HDL-32. Eq. (2.7) is used to estimate the relative accuracy of Velodyne scanning through the variance-covariance matrix of the scanned points under sensor errors in ranges, azimuth angles, and vertical angles. For simplicity, the effects of the lever-arm offset and the boresight angles are ignored. According to the manufacturer's specifications, we assumed the following:
• Error in scanning range = ±2 cm
• Error in azimuth angle = ±0.05 degrees
• Error in vertical angle = ±0.05 degrees

Accordingly, a simulated scan by a Velodyne-32 Lidar is applied, assuming the following:
– Laser beams at different ranges from 2 to 72 m.
– Laser beams at different azimuth angles (0 < Az < 180 degrees) at an interval of 20 degrees.
– Laser beams at different vertical angles (−30 degrees < V < +15 degrees), taking every fourth beam at 1.3 degrees per beam.

Fig. 2.5 summarizes the estimated sensor errors using propagation of errors.

FIG. 2.5 The estimated propagated errors (m) in position along different scanning ranges (2–72 m).

Fig. 2.6 visualizes the estimated sensor errors (ellipsoid of errors, Chapter 3) with respect to different ranges and angular orientations.

EXAMPLE 2.4
Given
A point scanned by a Velodyne-32 with the following:
– Scanning range R = 20 m ± 2 cm.
– Azimuth angle Az = 45 ± 0.05 degrees.
– Vertical angle V = 12.83 ± 0.05 degrees.

Required
Estimate the relative error of the scanned point coordinates.

Solution
The coordinates of the scanned point are computed as:

Xi = X1 + R·cos(Az)·cos(V) = 13.788 m
Yi = Y1 + R·sin(Az)·cos(V) = 13.788 m
Zi = Z1 + R·sin(V) = 4.441 m

FIG. 2.6 Exaggerated ellipsoid of errors with a factor of 40.

Apply the propagation of errors using the Jacobian matrix:

ΣXYZ = J ΣR,Az,V Jᵗ

with

    | −13.789   −3.141   0.689 |              | 7.615 × 10⁻⁷   0              0      |
J = |  13.789   −3.141   0.689 |,  ΣR,Az,V =  | 0              7.615 × 10⁻⁷   0      |
    |  0        19.500   0.222 |              | 0              0              0.0004 |

giving

                | 0.000342   0.000053   0.000015 |
ΣXYZ = J Σ Jᵗ = | 0.000053   0.000342   0.000015 |
                | 0.000015   0.000015   0.000309 |

Then the standard deviations of the scanned point coordinates are the square roots of the main diagonal elements:

σXi = 1.9 cm, σYi = 1.9 cm, σZi = 1.8 cm

The predicted errors match the error bars of Fig. 2.5.

MATLAB code
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%% Chapter 2 - Example 2.4 %%%%%%%%%
%%% Error propagation - Lidar accuracy %%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clc, clear
% given: observations and their accuracy
%   Scanning range  = 20 m  ± 2 cm
%   Azimuth angle   = 45    ± 0.05 deg
%   Vertical angle  = 12.83 ± 0.05 deg
A=45*pi/180; V=12.83*pi/180; R=20;
% variance-covariance matrix of the observations (Az, V, R)
Sa=diag([(.05*pi/180)^2; (.05*pi/180)^2; .02^2]);
X=R*cos(A)*cos(V);
Y=R*sin(A)*cos(V);
Z=R*sin(V);
% Jacobian of Eq. (2.7) w.r.t. Az, V, R
J=[-R*sin(A)*cos(V), -R*cos(A)*sin(V), cos(A)*cos(V)
    R*cos(A)*cos(V), -R*sin(A)*sin(V), sin(A)*cos(V)
    0              ,  R*cos(V)       , sin(V)       ];
% propagated variance-covariance and standard deviations
Vcov=J*Sa*J';
std_XYZ=sqrt(diag(Vcov))
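The same computation rendered in Python, handy for checking the numbers outside MATLAB:

```python
import math

R, Az, V = 20.0, math.radians(45.0), math.radians(12.83)
s_ang = math.radians(0.05)                  # 0.05 deg angular error
sigmas2 = [s_ang**2, s_ang**2, 0.02**2]     # variances of Az, V, R

# Jacobian of Eq. (2.7) w.r.t. (Az, V, R)
J = [
    [-R*math.sin(Az)*math.cos(V), -R*math.cos(Az)*math.sin(V), math.cos(Az)*math.cos(V)],
    [ R*math.cos(Az)*math.cos(V), -R*math.sin(Az)*math.sin(V), math.sin(Az)*math.cos(V)],
    [ 0.0,                         R*math.cos(V),              math.sin(V)],
]

# diagonal of J * Sigma * J^t (Sigma is diagonal here)
std_xyz = [math.sqrt(sum(Jr[i]**2 * sigmas2[i] for i in range(3))) for Jr in J]
print([round(100 * s, 1) for s in std_xyz])   # cm: [1.9, 1.9, 1.8]
```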

2.4.2 Application in Computing the Area of a Polygon in 3D Space

There are different applications for computing the area of polygons in 3D space, based on vector geometry using cross and dot products. One application is to estimate the accuracy of a façade area of a building, as shown in Fig. 2.7, where the corner coordinates can be measured by land surveying instruments or from images in photogrammetry.

FIG. 2.7 Measuring the area of a building façade.

However, the estimation of the accuracy of the measured area is mathematically complicated because of the difficulty of computing the partial derivatives necessary for the error propagation, and it depends on the number of corners (vertices) of the polygon in question. As an example, the derivatives of a quadrilateral polygon are simpler than those of a pentagon, and so forth.

For a triangular polygon, the area can be computed using the following vector method. Let us assume there are three observed corners P1(x1, y1, z1), P2(x2, y2, z2), and P3(x3, y3, z3). Then the area of the triangle can be calculated as follows (Fig. 2.8):

– Create two vectors a and b:

a = (x2 − x1)·i + (y2 − y1)·j + (z2 − z1)·k = dx1·i + dy1·j + dz1·k
b = (x3 − x1)·i + (y3 − y1)·j + (z3 − z1)·k = dx2·i + dy2·j + dz2·k

– Calculate their normal vector using the cross product:

          | i    j    k   |
u = a×b = | dx1  dy1  dz1 |
          | dx2  dy2  dz2 |

or

u1 = dy1·dz2 − dy2·dz1
u2 = dx1·dz2 − dx2·dz1
u3 = dx1·dy2 − dx2·dy1

FIG. 2.8 Vectors cross product.

– The area of the triangle is computed as:

2·Area = √(u1² + u2² + u3²)    (2.9)
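Eq. (2.9) in code, tested on the roof corners of Example 2.5 below (the function name is ours):

```python
import math

def triangle_area_3d(p1, p2, p3):
    """Half the norm of the cross product of the two edge vectors (Eq. 2.9)."""
    ax, ay, az = (p2[i] - p1[i] for i in range(3))
    bx, by, bz = (p3[i] - p1[i] for i in range(3))
    u1 = ay * bz - by * az
    u2 = ax * bz - bx * az   # sign convention of the text; it is squared anyway
    u3 = ax * by - bx * ay
    return 0.5 * math.sqrt(u1**2 + u2**2 + u3**2)

area = triangle_area_3d((50256.59, 16018.28, 9.21),
                        (50260.91, 16019.18, 14.34),
                        (50263.71, 16015.68, 9.18))
print(area)   # ~21.40 m^2
```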

Based on Eq. (2.5), we can compute the Jacobian matrix as follows:

J = [∂/∂x1  ∂/∂y1  ∂/∂z1  ∂/∂x2  ∂/∂y2  ∂/∂z2  ∂/∂x3  ∂/∂y3  ∂/∂z3]    (2.10)

where

∂/∂x1 =  (A4·A13 + A1·A12)/A10    ∂/∂y1 = −(A7·A13 − A1·A11)/A10    ∂/∂z1 = −(A12·A7 + A11·A4)/A10
∂/∂x2 = −(A13·A5 + A2·A12)/A10    ∂/∂y2 =  (A13·A8 − A2·A11)/A10    ∂/∂z2 =  (A12·A8 + A11·A5)/A10
∂/∂x3 =  (A13·A6 + A3·A12)/A10    ∂/∂y3 = −(A13·A9 − A3·A11)/A10    ∂/∂z3 = −(A12·A9 + A11·A6)/A10

with

A1 = 2(z2 − z3)    A2 = 2(z1 − z3)    A3 = 2(z1 − z2)
A4 = 2(y2 − y3)    A5 = 2(y1 − y3)    A6 = 2(y1 − y2)
A7 = 2(x2 − x3)    A8 = 2(x1 − x3)    A9 = 2(x1 − x2)
A11 = (y1 − y2)(z1 − z3) − (y1 − y3)(z1 − z2)
A12 = (x1 − x2)(z1 − z3) − (x1 − x3)(z1 − z2)
A13 = (x1 − x2)(y1 − y3) − (x1 − x3)(y1 − y2)
A10 = 4√(A11² + A12² + A13²)

EXAMPLE 2.5
Given
A triangular house roof face is shown in Fig. 2.9, where the three corner coordinates are measured as follows (Table 2.1):

TABLE 2.1 Measured Corner Coordinates of the Roof

     X [m]      Y [m]      Z [m]   σx [m]   σy [m]   σz [m]
1    50256.59   16018.28    9.21   0.05     0.13     0.07
2    50260.91   16019.18   14.34   0.05     0.07     0.08
3    50263.71   16015.68    9.18   0.06     0.05     0.04

Required
Compute the triangular area and its standard deviation using propagation of errors.
(1) With no correlation between the points.
(2) With a correlation of 0.85 between the points.

FIG. 2.9 Area of a triangular polygon.

Solution
The presented approach of Eqs. (2.9) and (2.10) is used:

A1 = 10.32     A7 = −5.60
A2 = 0.06      A8 = −14.24
A3 = −10.26    A9 = −8.64
A4 = 7.00      A10 = 171.21
A5 = 5.20      A11 = 13.31
A6 = −1.80     A12 = −36.66
               A13 = −17.64

The Jacobian matrix is then:

J = [−2.93, 0.23, −1.74, 0.55, 1.46, 3.45, 2.38, −1.69, −1.71]

(1) Assuming no correlation
The variance-covariance matrix is diagonal:

Σ = diag(0.0025  0.0169  0.0049  0.0025  0.0049  0.0064  0.0036  0.0025  0.0016)

Then

σarea = 0.40 m²
Area = 21.40 ± 0.40 m²

38

2. PROPAGATION OF ERRORS

MATLAB code %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%% Chapter 2-Example 2.5 %%%%%%%%%%% %%% Error propagation of a 3D triangle %%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% clc,clear % given: point coordinates and accuracy P=[50256.59,16018.28,9.21 0.05,0.13,0.07; 50260.91,16019.18,14.34 0.05,0.07,0.08; 50263.71,16015.68,9.18 0.06,0.05,0.04] %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% x=P(:,1); y=P(:,2); z=P(:,3); x1=x(1,1);y1=y(1,1);z1=z(1,1); x2=x(2,1);y2=y(2,1);z2=z(2,1); x3=x(3,1);y3=y(3,1);z3=z(3,1); dx1=x2-x1; dy1=y2-y1; dz1=z2-z1; dx2=x3-x1; dy2=y3-y1; dz2=z3-z1; a=dy1*dz2-dy2*dz1; b=dx1*dz2-dx2*dz1; c=dx1*dy2-dx2*dy1; % area of triangular fac¸ ade plarea=.5*sqrt(a^2+b^2+c^2) % error propagation calculations A1 = 2* z2 - 2* z3; A2 = 2* z1 - 2* z3; A3 = 2* z1 - 2* z2; A4 = 2* y2 - 2* y3; A5 = 2* y1 - 2* y3; A6 = 2* y1 - 2* y2; A7 = 2* x2 - 2* x3; A8 = 2* x1 - 2* x3; A9 = 2* x1 - 2* x2; A11 = (y1 - y2)* (z1 - z3) - (y1 - y3)* (z1 - z2); A12 = (x1 - x2)* (z1 - z3) - (x1 - x3)* (z1 - z2); A13 = (x1 - x2)* (y1 - y3) - (x1 - x3)* (y1 - y2); A10=4*sqrt((A11)^2+(A12)^2+(A13)^2); ax1=(A4*A13+A1*A12)/A10; ay1= -(A7*A13-A1*A11)/A10; az1=-(A12*A7+A11*A4)/A10;

39

2.4 APPLICATIONS

ax2=-(A13*A5+A2*A12)/A10; ay2 =(A13*A8-A2*A11)/A10; az2 =(A12*A8+A11*A5)/A10; ax3=(A13*A6+A3*A12)/A10; ay3= -(A13*A9-A3*A11)/A10; az3=-(A12*A9+A11*A6)/A10; % Jacobian matrix J=[ax1 ay1 az1 ax2 ay2 az2 ax3 ay3 az3]; % variance-covariance of observations S=diag([P(1,4) P(1,5) P(1,6) P(2,4) P(2,5) P(2,6) P(3,4) P(3,5) P(3,6)]); % variance-covariance of area Vcov=J*S^2*J’ sa2= sqrt(Vcov)
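The analytic partials of Eq. (2.10) can be verified with finite differences; a Python sketch (our own check, not from the book) reproducing the ±0.40 m² of Example 2.5:

```python
import math

def triangle_area(c):
    """Area from the 9-vector (x1,y1,z1,x2,y2,z2,x3,y3,z3) via Eq. (2.9)."""
    a = [c[3 + i] - c[i] for i in range(3)]
    b = [c[6 + i] - c[i] for i in range(3)]
    u = (a[1]*b[2] - b[1]*a[2], a[0]*b[2] - b[0]*a[2], a[0]*b[1] - b[0]*a[1])
    return 0.5 * math.sqrt(sum(x * x for x in u))

coords = [50256.59, 16018.28, 9.21, 50260.91, 16019.18, 14.34, 50263.71, 16015.68, 9.18]
sigmas = [0.05, 0.13, 0.07, 0.05, 0.07, 0.08, 0.06, 0.05, 0.04]

var = 0.0
for i, s in enumerate(sigmas):
    hi = coords[:]; lo = coords[:]
    hi[i] += 1e-5; lo[i] -= 1e-5
    dA = (triangle_area(hi) - triangle_area(lo)) / 2e-5   # numerical partial
    var += (dA * s) ** 2
print(math.sqrt(var))   # ~0.40 m^2
```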

To compute the area of a quadrilateral polygon in 3D, we can either divide it into two triangles and apply Eqs. (2.9) and (2.10), or use vector geometry as before for the triangle area:

2·Area = n̂ · [(V3 − V1) × (V4 − V2)]    (2.11)

where × refers to the cross product and (·) refers to the dot product, n̂ is the unit normal vector of the polygon plane, and V1, V2, V3, and V4 are the four observed corners of the façade plane.

An alternative expanded formula for the area of a quadrilateral polygon is:

2·Area = |a4·b1|/a1 + |a3·b2|/a1 + |a2·b3|/a1    (2.12)

where | | refers to the absolute value and

a1 = √(a4² + a3² + a2²)
a2 = (y1 − y2)(z1 − z3) − (y1 − y3)(z1 − z2)
a3 = (x1 − x2)(z1 − z3) − (x1 − x3)(z1 − z2)
a4 = (x1 − x2)(y1 − y3) − (x1 − x3)(y1 − y2)

b1 = det[x1 x2; y1 y2] − det[x1 x4; y1 y4] + det[x2 x3; y2 y3] + det[x3 x4; y3 y4]
b2 = det[x1 x2; z1 z2] − det[x1 x4; z1 z4] + det[x2 x3; z2 z3] + det[x3 x4; z3 z4]
b3 = det[y1 y2; z1 z2] − det[y1 y4; z1 z4] + det[y2 y3; z2 z3] + det[y3 y4; z3 z4]

and det refers to the matrix determinant. The error propagation in matrix form will be:

σArea² = J · diag(σX1²  σY1²  σZ1²  …  σY4²  σZ4²) · Jᵗ    (2.13)

where

J = [∂/∂X1  ∂/∂Y1  ∂/∂Z1  ∂/∂X2  ∂/∂Y2  ∂/∂Z2  ∂/∂X3  ∂/∂Y3  ∂/∂Z3  ∂/∂X4  ∂/∂Y4  ∂/∂Z4]

The partial derivatives of the Jacobian matrix are:

∂/∂X1 =  h1·( h16 + ((y2 − y3)·h26 + (z2 − z3)·h25)/√h27 + h12 − (h3·h30·h26 + h3·h29·h25 + h3·h28·h24)/h2 )/2
∂/∂Y1 =  h1·( h13 − ((x2 − x3)·h26 + (z2 − z3)·h24)/√h27 − h20 + (h6·h30·h26 + h6·h29·h25 + h6·h28·h24)/h2 )/2
∂/∂Z1 = −h1·( h21 + ((x2 − x3)·h25 + (y2 − y3)·h24)/√h27 + h17 − (h9·h28·h24 + h9·h30·h26 + h9·h29·h25)/h2 )/2
∂/∂X2 = −h1·( h18 + ((y1 − y3)·h26 + (z1 − z3)·h25)/√h27 + h14 − (h4·h30·h26 + h4·h29·h25 + h4·h28·h24)/h2 )/2
∂/∂Y2 = −h1·( h15 − ((x1 − x3)·h26 + (z1 − z3)·h24)/√h27 − h22 + (h7·h30·h26 + h7·h29·h25 + h7·h28·h24)/h2 )/2
∂/∂Z2 =  h1·( h23 + ((x1 − x3)·h25 + (y1 − y3)·h24)/√h27 + h19 − (h10·h28·h24 + h10·h30·h26 + h10·h29·h25)/h2 )/2
∂/∂X3 = −h1·( h16 − ((y1 − y2)·h26 + (z1 − z2)·h25)/√h27 + h12 + (h5·h30·h26 + h5·h29·h25 + h5·h28·h24)/h2 )/2
∂/∂Y3 =  h1·( h20 − ((x1 − x2)·h26 + (z1 − z2)·h24)/√h27 − h13 + (h8·h30·h26 + h8·h29·h25 + h8·h28·h24)/h2 )/2
∂/∂Z3 =  h1·( h21 + h17 − ((x1 − x2)·h25 + (y1 − y2)·h24)/√h27 − (h11·h28·h24 + h11·h30·h26 + h11·h29·h25)/h2 )/2
∂/∂X4 =  h1·(h18 + h14)/2
∂/∂Y4 =  h1·(h22 − h15)/2
∂/∂Z4 =  h1·(h23 + h19)/2

where

h1 = sign( (h30·h26 + h29·h25 + h28·h24)/√h27 )
h2 = 2·h27^(3/2)
h3 = 2(y2 − y3)·h30 + 2(z2 − z3)·h29
h4 = 2(y1 − y3)·h30 + 2(z1 − z3)·h29
h5 = 2(y1 − y2)·h30 + 2(z1 − z2)·h29
h6 = 2(x2 − x3)·h30 − 2(z2 − z3)·h28
h7 = 2(x1 − x3)·h30 − 2(z1 − z3)·h28
h8 = 2(x1 − x2)·h30 − 2(z1 − z2)·h28
h9 = 2(x2 − x3)·h29 + 2(y2 − y3)·h28
h10 = 2(x1 − x3)·h29 + 2(y1 − y3)·h28
h11 = 2(x1 − x2)·h29 + 2(y1 − y2)·h28
h12 = −(z2 − z4)·h29/√h27,  h13 = (z2 − z4)·h28/√h27,  h14 = (z1 − z3)·h29/√h27,  h15 = (z1 − z3)·h28/√h27
h16 = (y1 − y4)·h30/√h27,   h17 = (y2 − y4)·h28/√h27,  h18 = (y1 − y3)·h30/√h27,  h19 = (y1 − y3)·h28/√h27
h20 = (x2 − x4)·h30/√h27,   h21 = (x2 − x4)·h29/√h27,  h22 = (x1 − x3)·h30/√h27,  h23 = (x1 − x3)·h29/√h27
h24 = y1·z2 − y2·z1 − y1·z4 + y2·z3 − y3·z2 + y4·z1 + y3·z4 − y4·z3
h25 = x1·z2 − x2·z1 − x1·z4 + x2·z3 − x3·z2 + x4·z1 + x3·z4 − x4·z3
h26 = x1·y2 − x2·y1 − x1·y4 + x2·y3 − x3·y2 + x4·y1 + x3·y4 − x4·y3
h27 = h30² + h29² + h28²
h28 = (y1 − y2)(z1 − z3) − (y1 − y3)(z1 − z2)
h29 = (x1 − x2)(z1 − z3) − (x1 − x3)(z1 − z2)
h30 = (x1 − x2)(y1 − y3) − (x1 − x3)(y1 − y2)

EXAMPLE 2.6
Given
The four corners of a building façade in Fig. 2.10 are observed, and their coordinates with standard deviations are as follows (Table 2.2):

TABLE 2.2 Measured Corners of the Building Façade

     X [m]     Y [m]     Z [m]   σx [m]   σy [m]   σz [m]
A    2634.13   5375.53   25.10   0.03     0.19     0.07
B    2634.14   5375.40    6.28   0.04     0.08     0.07
C    2646.51   5309.54    6.27   0.04     0.05     0.08
D    2646.51   5309.57   25.34   0.06     0.15     0.07

FIG. 2.10 Rectangular façade area.

Required
Estimate the propagated standard deviation of the façade area.

Solution
The Jacobian matrix is:

J = [1.750  9.368  33.491  1.746  9.252  33.571  1.733  9.371  33.491  1.763  9.249  33.571]

σarea = √29.60 = 5.44 m²

The façade area = 1270.49 ± 5.44 m²

The same propagated standard deviation is obtained if we divide the façade into two triangles. It should be noted that the area accuracy is more sensitive to the accuracy of the corners' Z-coordinates than to the XY-coordinates because the façade polygon is almost a vertical plane.
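As the text notes, splitting the façade into two triangles gives the same propagated value; a Python check of Example 2.6 using the A-C diagonal split and numerical partials (our own verification, not the book's code):

```python
import math

def tri_area(p, q, r):
    """Triangle area via Eq. (2.9)."""
    a = [q[i] - p[i] for i in range(3)]
    b = [r[i] - p[i] for i in range(3)]
    u = (a[1]*b[2] - b[1]*a[2], a[0]*b[2] - b[0]*a[2], a[0]*b[1] - b[0]*a[1])
    return 0.5 * math.sqrt(sum(x * x for x in u))

def quad_area(c):
    """Facade split along the A-C diagonal; c is the flat 12-vector A,B,C,D."""
    A, B, C, D = (c[3*i:3*i+3] for i in range(4))
    return tri_area(A, B, C) + tri_area(A, C, D)

coords = [2634.13, 5375.53, 25.10, 2634.14, 5375.40, 6.28,
          2646.51, 5309.54, 6.27, 2646.51, 5309.57, 25.34]
sigmas = [0.03, 0.19, 0.07, 0.04, 0.08, 0.07,
          0.04, 0.05, 0.08, 0.06, 0.15, 0.07]

var = 0.0
for i, s in enumerate(sigmas):
    hi = coords[:]; lo = coords[:]
    hi[i] += 1e-5; lo[i] -= 1e-5
    var += (((quad_area(hi) - quad_area(lo)) / 2e-5) * s) ** 2

print(quad_area(coords))   # ~1270.49 m^2
print(math.sqrt(var))      # ~5.44 m^2
```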

2.5 PREANALYSIS OF ERRORS

43

2.5 PREANALYSIS OF ERRORS The community expects a professional geomatics engineer to provide appropriate accuracy at a sensible price. Because engineering projects are not always similar, operators will not constantly employ the same instruments and processes. Accordingly, analytical thinking and proficient judgment are always required. The challenge is in selecting the methods that satisfy the accuracy requirements. Engineers must analyze some expected procedures to assure that they satisfy the precision requirements of the task before practically starting a project. If the predicted precision is acceptable, the procedure can be adopted; if precision is too low, the operation or instruments must be improved or replaced. Several tries may be necessary before deriving a plan for instrument usage that provides the desired solution and accuracy [5]. It should be noted that preanalysis deals with random errors and assumes the observations are clean from systematic errors or biases. Mathematically, in this case, we know σ x , and we would like to estimate the accuracy in the measurements σ l1,…, σ ln (Eq. 2.1):  2  2  2 ∂F ∂F ∂F σx2 ¼ σ l1 2 + σ l2 2 + … + σ ln 2 ∂l1 ∂l2 ∂ln

EXAMPLE 2.7
Given
For the same problem of Example 2.3 of the slope determination, the surveyor measured the slope AB of 7.224 ± 1.068 degrees between points A and B, which have the following coordinates:

xA = 11561.19, yA = 401528.71, zA = 397.83
xB = 11554.08, yB = 401526.34, zB = 398.78

Required
To what accuracy should the coordinates of A and B be determined to achieve the stated slope angle accuracy?

Solution

Solution
We need to apply the error propagation law Σ_slope = J Σ_ll Jᵗ. The propagation formula is:

σ_slope² = (∂slope/∂XA)² σ_XA² + (∂slope/∂YA)² σ_YA² + (∂slope/∂ZA)² σ_ZA² + (∂slope/∂XB)² σ_XB² + (∂slope/∂YB)² σ_YB² + (∂slope/∂ZB)² σ_ZB²

Now, we have to make the following logical assumptions to simplify the problem and reduce the number of unknowns.


2. PROPAGATION OF ERRORS

(1) First, assume the accuracy of both points A and B is the same in XY, so: σ_XA = σ_YA = σ_XB = σ_YB = σ_XY
(2) Second, assume the accuracy of both points A and B is the same in Z, so: σ_ZA = σ_ZB = σ_Z
(3) If we assume σ_Z = 1.5 σ_XY, then we get the following simplified formula:

σ_slope² = 2.25 σ_XY² [ (∂slope/∂XA)² + (∂slope/∂YA)² + (∂slope/∂ZA)² + (∂slope/∂XB)² + (∂slope/∂YB)² + (∂slope/∂ZB)² ]

Or:

σ_XY² = σ_slope² / (2.25 (J · Jᵗ))

The Jacobian matrix elements were already computed in Example 2.3 as:

J = [0.0157915  −0.005263  −0.1313194  −0.0157915  0.0052638  0.13131]

Accordingly, the required accuracy of the XY coordinates of points A and B should be σ_XY = 0.07 m, while σ_Z = 0.10 m.
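This preanalysis result can be checked in a few lines of pure Python (a sketch, not from the book), using the simplified formula σ_XY² = σ_slope² / (2.25 · J · Jᵗ) with the Jacobian elements quoted from Example 2.3:

```python
import math

# Jacobian of the slope w.r.t. (XA, YA, ZA, XB, YB, ZB), from Example 2.3
J = [0.0157915, -0.005263, -0.1313194, -0.0157915, 0.0052638, 0.13131]
JJt = sum(j * j for j in J)                 # J * J^T (signs drop out)

sigma_slope = 1.068 * math.pi / 180.0       # +/- 1.068 degrees, in radians
sigma_xy = math.sqrt(sigma_slope ** 2 / (2.25 * JJt))
sigma_z = 1.5 * sigma_xy                    # assumption (3): sigma_Z = 1.5 sigma_XY
# sigma_xy ~ 0.07 m, sigma_z ~ 0.10 m
```

Note that the slope accuracy must be converted to radians before propagation, since the Jacobian elements were derived with angles in radians.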

2.5.1 Preanalysis Using the Kronecker Product

In matrix algebra, we can use the Kronecker tensor product ⊗ between the multiplied matrices. If C is an m × n matrix and D is a p × q matrix, then the Kronecker product is a large matrix formed by multiplying D by each element of C as:

C ⊗ D = [c11·D  c12·D  ⋯  c1n·D;  c21·D  c22·D  ⋯  c2n·D;  ⋮  ⋮  ⋱  ⋮;  cm1·D  cm2·D  ⋯  cmn·D]   (2.14)

Further, when three matrices B, C, and D are multiplied, respectively, then the following Kronecker product arrangement can be used:

vec(BCD) = (Dᵗ ⊗ B) vec(C)   (2.15)

where vec refers to stacking the elements of a matrix in a column vector. Back to the error propagation law Σxx = J Σll Jᵗ, Eq. (2.15) can be formulated as:

vec(Σxx) = (J ⊗ J) vec(Σll)  →  vec(Σll) = (J ⊗ J)⁻¹ vec(Σxx)

(2.16)
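The vec identity of Eq. (2.15) can be verified numerically for the propagation law with a small toy Jacobian; the following pure-Python sketch (an illustration of my own, not from the book) uses a column-major vec, matching MATLAB's A(:) convention:

```python
def kron(A, B):
    """Kronecker product of two matrices given as lists of lists."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def vec(A):  # stack the columns of A into one list (column-major, like MATLAB A(:))
    return [A[i][j] for j in range(len(A[0])) for i in range(len(A))]

J   = [[2.0, 1.0], [0.5, 3.0]]          # toy Jacobian
Sll = [[0.04, 0.01], [0.01, 0.09]]      # toy covariance of the observations
Jt  = [[J[j][i] for j in range(2)] for i in range(2)]
Sxx = matmul(matmul(J, Sll), Jt)        # J * Sll * J^T

# Eq. (2.15) with B = J, C = Sll, D = J^T: vec(J Sll J^T) = (J kron J) vec(Sll)
K = kron(J, J)
lhs = vec(Sxx)
rhs = [sum(K[i][k] * vec(Sll)[k] for k in range(4)) for i in range(4)]
ok = all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```

Both sides agree element by element, which is exactly the rearrangement used in Eq. (2.16).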


EXAMPLE 2.8
Given
For the same given data in Example 2.4, point P is observed with the following:
– Point coordinates (13.788 m ± 19 cm, 13.788 m ± 19 cm, 4.442 m ± 18 cm)
– Observed scanning range R = 20 m
– Observed azimuth angle Az = 45 degrees
– Observed vertical angle V = 12.83 degrees

Required
Apply the preanalysis computations to estimate the needed accuracy of the observations that satisfies the given standard deviations of point P.

Solution
The mathematical model of the computations is:

XP = R·cos(Az)·cos(V)
YP = R·sin(Az)·cos(V)
ZP = R·sin(V)

The propagation of errors using the Jacobian matrix is prepared as:

diag(σ_Xp², σ_Yp², σ_Zp²) [given] = J · diag(σ_Az², σ_V², σ_R²) [required] · Jᵗ

Back to the preanalysis problem, the system is arranged with the Kronecker product as:

vec(Σ_PP) = (J ⊗ J) vec(Σ_ll)  →  vec(Σ_ll) = (J ⊗ J)⁻¹ vec(Σ_PP)

where vec(Σ_PP) = [σ_Xp², 0, 0, 0, σ_Yp², 0, 0, 0, σ_Zp²]ᵗ and vec(Σ_ll) = [σ_Az², 0, 0, 0, σ_V², 0, 0, 0, σ_R²]ᵗ; the zeros reflect the assumption of no covariances.
(2.17)


where the Kronecker product matrix (J ⊗ J) is a 9 × 9 matrix computed from J. Accordingly, the preanalysis of the standard deviations of the observations Az, V, and R that satisfy the given standard deviations of point P is computed as:
– Observed scanning range: σ_R ≈ 2 cm
– Observed azimuth angle: σ_Az ≈ 0.05 degrees
– Observed vertical angle: σ_V ≈ 0.05 degrees
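The same preanalysis can be cross-checked without MATLAB. The pure-Python sketch below (mine, not the book's) rebuilds J, forms J ⊗ J, and solves the 9 × 9 system with a small Gaussian elimination; it reproduces σ_Az ≈ σ_V ≈ 0.05 degrees and σ_R ≈ 2 cm up to rounding:

```python
import math

A, V, R = math.radians(45.0), math.radians(12.83), 20.0
# Jacobian of (X, Y, Z) w.r.t. (Az, V, R)
J = [[-R*math.sin(A)*math.cos(V), -R*math.cos(A)*math.sin(V), math.cos(A)*math.cos(V)],
     [ R*math.cos(A)*math.cos(V), -R*math.sin(A)*math.sin(V), math.sin(A)*math.cos(V)],
     [ 0.0,                        R*math.cos(V),             math.sin(V)]]

def kron(M, N):
    return [[a * b for a in rm for b in rn] for rm in M for rn in N]

def solve(M, b):
    """Gaussian elimination with partial pivoting for M x = b."""
    n = len(M)
    M = [row[:] + [bi] for row, bi in zip(M, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# vec of the required point covariance diag(0.019^2, 0.019^2, 0.018^2)
vcov = [0.0] * 9
vcov[0], vcov[4], vcov[8] = 0.019**2, 0.019**2, 0.018**2

S = solve(kron(J, J), vcov)             # vec of the observation covariance
std_az = math.degrees(math.sqrt(S[0]))  # azimuth std. dev. [deg]
std_v  = math.degrees(math.sqrt(S[4]))  # vertical-angle std. dev. [deg]
std_r  = math.sqrt(S[8])                # range std. dev. [m]
```

Positions 1, 5, and 9 of the solution vector hold the variances of Az, V, and R, mirroring S(1,1), S(5,1), S(9,1) in the MATLAB listing that follows.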

MATLAB code
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%% Chapter 2-Example 2.8 %%%%%%%%%%%
%%%%%%%%%%%%% Pre-analysis %%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clc;clear;format short g
% Point coordinates (13.788m±19cm, 13.788m±19cm, 4.442m±18cm)
% Observed scanning range R=20m
% Observed azimuth angle Az=45deg
% Observed vertical angle V=12.83deg
A=45*pi/180; V=12.83*pi/180; R=20;
% variance - covariance matrix of observations
X=R*cos(A)*cos(V); Y=R*sin(A)*cos(V); Z=R*sin(V);
% Jacobian of (X,Y,Z) w.r.t. (Az,V,R)
J=[-R*sin(A)*cos(V), -R*cos(A)*sin(V), cos(A)*cos(V);
    R*cos(A)*cos(V), -R*sin(A)*sin(V), sin(A)*cos(V);
    0,                R*cos(V),        sin(V)];
%%% required accuracy of P
Vcov=diag([.019^2,.019^2,.018^2]);
%%%%%%%%%%%%%%% Kronecker %%%%%%%%%%%%%%%
K=kron(J,J);
S=inv(K)*Vcov(:);


%%% final estimated std. dev. of observations
std_observations=sqrt([S(1,1);S(5,1);S(9,1)]);
std_Az=std_observations(1)*180/pi
std_V=std_observations(2)*180/pi
std_R=std_observations(3)

2.5.2 Preanalysis Using the Generalized Inverse

In linear algebra, matrix inversion is crucial to solving a linear or nonlinear system of equations (Ax = b). The involved coefficient matrix A should be a square full-rank matrix. However, in many mathematical and engineering applications, a rectangular matrix A (or one with a rank defect) is composed, which requires a different mathematical technique for inversion. In such cases, the term pseudoinverse A⁺ or generalized inverse is used. Accordingly, when matrix A is rectangular, or is square and singular, then A⁻¹ does not exist. In such cases, the pseudoinverse B = A⁺ has some of the properties of A⁻¹, such as the following:

ABA = A and BAB = B
(AB)* = AB (AB Hermitian) and (BA)* = BA (BA Hermitian)

A computationally simple and accurate way to compute the pseudoinverse is by using the singular value decomposition (SVD). A given m × n matrix A can be decomposed into the product of three matrices:

A = U S Vᵗ

(2.18)

where U (m × m) and V (n × n) have columns that are mutually orthogonal unit vectors. The columns of U are the eigenvectors of AAᵗ, and the columns of V are the eigenvectors of AᵗA. S (m × n) is a diagonal matrix; its diagonal elements si are called singular values, with s1 ≥ s2 ≥ … ≥ sn ≥ 0. In compact form:

A(m×n) = U(m×p) S(p×p) Vᵗ(p×n),  p = min(m, n)

(2.19)

The equation system using SVD can be solved as follows:

Ax = b
U S Vᵗ x = b
(V S⁻¹ Uᵗ)(U S Vᵗ) x = V S⁻¹ Uᵗ b
V S⁻¹ (Uᵗ U) S Vᵗ x = V S⁻¹ Uᵗ b
V (S⁻¹ S) Vᵗ x = V S⁻¹ Uᵗ b
V Vᵗ x = V S⁻¹ Uᵗ b  →  x = V S⁻¹ Uᵗ b = A⁺ b

(2.20)


Hence, for a singular (rank-defect) matrix A, the pseudoinverse is computed from the nonzero singular values only:

A = [U1 ⋯ Un] [S1 0; 0 0] [V1ᵗ; ⋮; Vnᵗ]  →  A⁺ = V1 S1⁻¹ U1ᵗ

(2.21)
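The defining Moore–Penrose properties listed above can be checked directly for a rank-deficient matrix. For a rank-1 matrix A = u vᵗ, the pseudoinverse has the closed form A⁺ = v uᵗ / (|u|² |v|²); a pure-Python sketch (an illustration of my own, not the book's):

```python
# Rank-1 example: A = u v^T with u = (1, 2), v = (1, 2) -> A is 2x2, singular
u, v = [1.0, 2.0], [1.0, 2.0]
A  = [[ui * vj for vj in v] for ui in u]              # [[1,2],[2,4]], rank 1
nu = sum(x * x for x in u)
nv = sum(x * x for x in v)
Ap = [[vi * uj / (nu * nv) for uj in u] for vi in v]  # pseudoinverse A+

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

AApA  = matmul(matmul(A, Ap), A)    # property A A+ A = A
ApAAp = matmul(matmul(Ap, A), Ap)   # property A+ A A+ = A+
ok1 = all(abs(AApA[i][j] - A[i][j]) < 1e-12 for i in range(2) for j in range(2))
ok2 = all(abs(ApAAp[i][j] - Ap[i][j]) < 1e-12 for i in range(2) for j in range(2))
```

An ordinary inverse of A does not exist here (its determinant is zero), yet the pseudoinverse satisfies both defining identities.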

EXAMPLE 2.9
Given
Stereo images are intersected at point P (see Section 4.7) as shown in Fig. 2.11.

FIG. 2.11 Preanalysis of image intersection (triangulation).

The two images' orientation is given in Table 2.3, assuming a camera focal length of 18 mm and a pixel size of 5 μm.

TABLE 2.3 Two Images Orientation and the Intersection Point
Image                  X [m]   Y [m]   Z [m]   ω°   φ°    κ°
Left                   −7.5    −10     1.7     90   −20   0
Right                  7.5     −10     1.7     90   20    0
Intersection point P   0.05    0.0     3.1


Required
Compute the estimated standard deviations of point P coordinates using the preanalysis technique if the required standard deviations of the image coordinates are 3 pixels.

Solution
Using collinearity Eq. (4.30), the problem can be formulated as follows:

x = −f·r/q
y = −f·s/q

(2.22)

where

q = m31(X − Xo) + m32(Y − Yo) + m33(Z − Zo)
r = m11(X − Xo) + m12(Y − Yo) + m13(Z − Zo)
s = m21(X − Xo) + m22(Y − Yo) + m23(Z − Zo)

(2.23)

x, y = observed image coordinates [mm]
f = focal length [mm]
m's = rotation matrix elements
Xo, Yo, Zo = camera coordinates
X, Y, Z = object point P coordinates

The propagation of errors is applied using the Jacobian matrix as follows, assuming no covariances exist between the variables:

diag(σ_xp1², σ_yp1², σ_xp2², σ_yp2²) [given] = J · diag(σ_Xp², σ_Yp², σ_Zp²) [required] · Jᵗ

The Jacobian matrix elements are derived as follows:

∂x/∂Xp = f·m31·r/q² − f·m11/q,  ∂x/∂Yp = f·m32·r/q² − f·m12/q,  ∂x/∂Zp = f·m33·r/q² − f·m13/q
∂y/∂Xp = f·m31·s/q² − f·m21/q,  ∂y/∂Yp = f·m32·s/q² − f·m22/q,  ∂y/∂Zp = f·m33·s/q² − f·m23/q

or numerically:

J = [1.2543     0.06006    0.94703;
     5.80E−17   0.16502    1.5026;
     1.2615     0.060406   0.93985;
     5.75E−17   0.16596    1.5069]

(2.24)


Accordingly, as we used the Kronecker product in Example 2.8, the following system can be attained:

vec(Σxx) = (J ⊗ J) vec(Σ_PP)  →  vec(Σ_PP) = (J ⊗ J)⁺ vec(Σxx)

where vec(Σxx) is the 16 × 1 stacked vector of the image-coordinate covariance matrix (variances on the diagonal positions, zeros elsewhere) and vec(Σ_PP) is the 9 × 1 stacked vector of the point covariance matrix. (J ⊗ J)⁺ refers to the pseudoinverse, because the produced 16 × 9 matrix is not square. Based on 3 pixels of uncertainty in the observed image coordinates, the estimated standard deviations of the intersection point P will be:

σX (mm)   σY (mm)   σZ (mm)
8         11        7

MATLAB code
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%% Chapter 2-Example 2.9 %%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%% Pre-analysis %%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clc;clear; format short g
% input: 2-images orientation
wpk=[-7.5 -10 1.7 90 -20 0;
      7.5 -10 1.7 90  20 0];
omega=wpk(:,4)*pi/180;phi=wpk(:,5)*pi/180;kappa=wpk(:,6)*pi/180;
xo=wpk(:,1); yo=wpk(:,2); zo=wpk(:,3);
% required std. in image observations [mm]
SS=3*[0.005;0.005;0.005;0.005]; SS=SS.^2;
%%%%%%%%%%%%%%% variance - covariance matrix of observations
f=18;x=.05;y=0;z=3.1;
for i=1:2
  mw=[1 0 0;0 cos(omega(i)) sin(omega(i));0 -sin(omega(i)) cos(omega(i))];
  mp=[cos(phi(i)) 0 -sin(phi(i));0 1 0;sin(phi(i)) 0 cos(phi(i))];
  mk=[cos(kappa(i)) sin(kappa(i)) 0;-sin(kappa(i)) cos(kappa(i)) 0;0 0 1];
  m=mk*mp*mw;
  m11(i)=m(1,1);m12(i)=m(1,2);m13(i)=m(1,3);
  m21(i)=m(2,1);m22(i)=m(2,2);m23(i)=m(2,3);
  m31(i)=m(3,1);m32(i)=m(3,2);m33(i)=m(3,3);
  dx(i)=x-xo(i); dy(i)=y-yo(i); dz(i)=z-zo(i);
  q(i)=m31(i)*dx(i)+m32(i)*dy(i)+m33(i)*dz(i);
  r(i)=m11(i)*dx(i)+m12(i)*dy(i)+m13(i)*dz(i);
  s(i)=m21(i)*dx(i)+m22(i)*dy(i)+m23(i)*dz(i);
end
%%%%%%%%%%%%%%%%%%%%%%% partial derivatives %%%%%%%%%%%%%%%%%%%%%%%%%
for j=1:2
  b1(j,1)=(f*m31(j)*r(j)/q(j)^2)-f*m11(j)/q(j);
  b2(j,1)=(f*m32(j)*r(j)/q(j)^2)-f*m12(j)/q(j);
  b3(j,1)=(f*m33(j)*r(j)/q(j)^2)-f*m13(j)/q(j);
  b4(j,1)=(f*m31(j)*s(j)/q(j)^2)-f*m21(j)/q(j);
  b5(j,1)=(f*m32(j)*s(j)/q(j)^2)-f*m22(j)/q(j);
  b6(j,1)=(f*m33(j)*s(j)/q(j)^2)-f*m23(j)/q(j);
end
% Jacobian matrix
J=[b1(1,1) b2(1,1) b3(1,1);
   b4(1,1) b5(1,1) b6(1,1);
   b1(2,1) b2(2,1) b3(2,1);
   b4(2,1) b5(2,1) b6(2,1)];
vcov=zeros(4)+diag(SS); Vcov=vcov(:);
K=kron(J,J);               % Kronecker
S=pinv(K)*Vcov(:);         % pseudoinverse for std. estimation
% final estimated std. dev.
std_observations=sqrt([S(1,1);S(5,1);S(9,1)]);
std_x=round(std_observations(1)*1000)/1000
std_y=round(std_observations(2)*1000)/1000
std_z=round(std_observations(3)*1000)/1000
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


C H A P T E R  3

Least Squares Adjustment Procedures

3.1 INTRODUCTION

Adjustment can be defined statistically as the procedure of estimating the variables of a problem by making proper measurements (observations). The least squares method is considered one of the best and most common methods of adjustment computation when we have redundant observations, that is, an overdetermined system of equations. The term least squares means that the global solution minimizes the sum of the squares of the residuals made on the results of every single equation. Historically, in 1805, Legendre described least squares as an algebraic procedure for fitting linear equations to data. However, Gauss had started in 1795 and went further than Legendre, succeeding in connecting the method of least squares with the principles of probability and the normal distribution, as published in 1809. Since that time, least squares has been used widely in geodesy and other scientific and engineering applications, positively influenced by the advent of computers, where a huge number of equations can be solved in a few seconds. As mentioned, the least squares method is used to handle the redundancy r of n observations, also termed the degree of freedom:

r = n − no

(3.1)

where no is the minimum number of observations needed to solve the problem. So, if we measure the same angle three times, we have n = 3, no = 1, and r = 2. In theory, any problem in geomatics can be solved by measuring specific quantities or variables related to the problem model. The observations should be sufficient to determine the other parameters or unknowns u in the mathematical model of the problem, for example, measuring distances from fixed reference stations to a point whose coordinates are unknown. Accordingly, we face two mathematical challenges:
– A linear model can represent some geomatics problems, but in many cases a nonlinear model is used. This implies linearizing the nonlinear models by applying Taylor series expansion

Adjustment Models in 3D Geomatics and Computational Geophysics https://doi.org/10.1016/B978-0-12-817588-0.00003-9


# 2019 Elsevier Inc. All rights reserved.


where only the zero- and first-order terms are considered, neglecting the higher-order terms.
– There are often more observations than the minimum no needed to solve the problem. Therefore, the least squares adjustment procedure is applied to handle the redundancy and to compute the most probable values of the unknowns.

The equation that relates the observations and unknown parameters is prepared as:

F(l, X) = 0

where X is the vector of unknowns and l is the vector of observations. However, in practice it is impossible to make perfect observations; therefore we add residual errors v to the model. Furthermore, when we substitute the proper initial values of the unknowns with their corrections Δ, we get the following formula:

F(lo + v, Xo + Δ) = 0

(3.2)

When the model is nonlinear, we apply Taylor expansion by differentiating F with respect to the observations and the unknowns:

A = ∂F/∂l |(lo, Xo)
B = ∂F/∂X |(lo, Xo)

Accordingly, the linearized general observation equation in matrix form will be:

Av + BΔ + F(lo, Xo) = 0

(3.3)

where – F (lo, Xo): the function vector evaluated from the substitution of the observations lo and the initial unknowns Xo in the model. – v: the vector of residual errors. – Δ: the vector of corrections that will be added algebraically to the initial values of unknowns. – A: the matrix of partial derivatives of F with respect to the observations. – B: the matrix of partial derivatives of F with respect to the unknowns. If we assumed that we have c observation equations with n observations and u unknowns, we can rewrite Eq. (3.3) with the sizes of the involved matrices as follows: + F ¼0 A v + B |{z} |{z} |{z} |{z}u, 1 |{z} c, n n , 1 c, u c, 1

(3.4)

In the following chapters of the book, Eq. (3.4) will be called the general form of observation equations.


It should be noted that, in many geomatics adjustment problems, we can use a simplified observation equation rather than the general form, as follows:
– Adjustment with observation equations (indirect method): when each equation includes one observed value and multiple unknowns, for example, the model relating one observed distance D to unknown point coordinates (X, Y):

D = √((X1 − X2)² + (Y1 − Y2)²)

The observation equation (3.3) is then reduced to: v + BΔ = F

(3.5)

– Adjustment with condition equations (direct method): when the model relates only the observations in a geometrical or physical condition without unknown quantities, for example, the condition model of the sum of observed angles in a triangle:

φ1 + φ2 + φ3 = 180°

The observation equation (3.4) is then reduced to: Av = F

(3.6)

Fig. 3.1 illustrates the two adjustment techniques:

FIG. 3.1 Adjustment techniques.

The properties of least squares adjustment compared to other techniques of solving an overdetermined system of equations are:
– The adjusted observations comply with the statistical theory of probability.
– Least squares provides the best unbiased estimates (BUE) for a group of observations.
– Adjustment can be applied to dissimilar observations by assigning weights.
– Adjustment can be applied to fulfill geometrical conditions and constraints.
– Postanalysis of the adjustment quantities can be applied, and the quality of the adjusted quantities can be evaluated.
– The method is well suited to software and computer implementation.


3.2 ADJUSTMENT BY OBSERVATION EQUATIONS METHOD v + BΔ = F

There are different mathematical ways to derive the least squares adjustment with the observation equations method. In this book, we will use the matrix formulation for simplicity and efficiency. The derivation is based on the least squares principle that the sum of the squared residuals is minimum:

φ = vᵗ w v → minimum

(3.7)

v = −BΔ + F

(3.8)

Back to observation in Eq. (3.5):

We apply the Lagrange multipliers k as in mathematical optimization:

φ' = vᵗ w v − 2kᵗ(v + BΔ − F)

(3.9)

Knowing that the derivative at a minimum is zero, we take the derivatives of φ' with respect to the residuals v and the corrections Δ:

∂φ'/∂v = 2vᵗ w − 2kᵗ I = 0ᵗ

(3.10)

∂φ'/∂Δ = −2kᵗ B = 0ᵗ

(3.11)

After rearranging, and knowing that the diagonal weight matrix satisfies w = wᵗ, we get the simplified equations:

w v − k = 0

(3.12)

Bᵗ k = 0

(3.13)

From Eq. (3.10) and by knowing that the cofactor matrix Q is the inverse of the weight matrix (w ¼ Q1), we can get: v ¼ w1 k ¼ Qk

(3.14)

Substituting v into the observation equation (3.5):

Qk + BΔ = F (3.15)

Eq. (3.15) can be rearranged for k as:

k = Q⁻¹(−BΔ + F) = w(−BΔ + F) (3.16)

Now if we substitute Eq. (3.16) into Eq. (3.13), we compose what is called the normal equation system:

(Bᵗ w B) Δ = (Bᵗ w F) (3.17)

The normal equations can be further simplified:

N Δ = t

(3.18)


Δ = N⁻¹ t

(3.19)

N = Bᵗ w B

(3.20)

t = Bᵗ w F

(3.21)

where

It should be noted that all the derivations are based on assuming that the normal equations matrix N is a full-rank, positive definite matrix. When the corrections are solved in Eq. (3.19), we can compute the residuals v using Eq. (3.5) and then evaluate the variance of unit weight σo²:

σo² = vᵗ w v / r (3.22)

Accordingly, the variance-covariance matrix of the unknowns ΣΔΔ is computed as:

ΣΔΔ = σo² N⁻¹ or ΣΔΔ = σo² QΔΔ (3.23)

Further, the variance-covariance matrix of the observations Σll is computed as:

Σll = σo² B N⁻¹ Bᵗ = σo² Qll (3.24)

Finally, the variance-covariance matrix of the residuals can be computed as:

Σvv = σo² (Q − Qll)

(3.25)

It is worth mentioning that the variance-covariance matrices of Eqs. (3.23), (3.24), and (3.25) are very useful for judging the quality of the adjustment and for checking for blundered observations, as will be presented in Chapter 12.
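To make the pipeline of Eqs. (3.17)–(3.23) concrete, here is a minimal pure-Python sketch (not from the book) that adjusts the simplest possible model: n repeated weighted measurements of a single unknown, where B is a column of ones and the normal equations collapse to the weighted mean:

```python
# Observations of one unknown quantity, with assumed weights (illustrative values)
l = [100.02, 100.05, 99.98, 100.03]
w = [1.0, 2.0, 1.0, 2.0]

# Normal equations (3.17): (B^T w B) d = (B^T w F), with B = [1,1,1,1]^T and F = l
N = sum(w)                                  # B^T w B (here a 1x1 "matrix")
t = sum(wi * li for wi, li in zip(w, l))    # B^T w F
x_hat = t / N                               # d = N^-1 t (3.19): the weighted mean

# Residuals (sign convention v = B d - F, as used in Example 3.2)
v = [x_hat - li for li in l]
r = len(l) - 1                              # redundancy: n - u = 4 - 1
sigma0_sq = sum(wi * vi * vi for wi, vi in zip(w, v)) / r   # Eq. (3.22)
sigma_x = (sigma0_sq / N) ** 0.5            # from Sigma_dd = sigma0^2 N^-1 (3.23)
```

The same four formulas apply unchanged when B has many columns; only the scalar divisions become matrix solves.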

EXAMPLE 3.1
Derive the variance-covariance matrices of the unknowns ΣΔΔ, of the observations Σll, and of the residuals Σvv using Eqs. (3.19) and (3.8).

(1) Derivation of ΣΔΔ
The derivation starts from Eq. (3.19) of the normal equations:

Δ = N⁻¹ t = (Bᵗ w B)⁻¹ (Bᵗ w F)

If l is the observations vector and Fx is the function evaluated from the approximate unknowns, then F = (l − Fx):

Δ = (Bᵗ w B)⁻¹ Bᵗ w (l − Fx)
Δ = (Bᵗ w B)⁻¹ Bᵗ w l − (Bᵗ w B)⁻¹ Bᵗ w Fx = K1 l − K2

where K1 = N⁻¹ Bᵗ w and K2 are constant and l is the variable. Using the propagation of errors ΣΔΔ = K1 Σll K1ᵗ with Σll = σo² w⁻¹:

ΣΔΔ = (N⁻¹ Bᵗ w)(σo² w⁻¹)(w B N⁻¹)
ΣΔΔ = σo² N⁻¹ (Bᵗ w B) N⁻¹ = σo² N⁻¹ N N⁻¹
∴ ΣΔΔ = σo² N⁻¹

(2) Derivation of Σll
Using Eq. (3.8) in the form F = BΔ − v, and because F = (l − Fx) and t = Bᵗ w F:

F = B(N⁻¹ t) − v = B N⁻¹ Bᵗ w (l − Fx) − v
F = (B N⁻¹ Bᵗ w) l − (B N⁻¹ Bᵗ w Fx) − v = K l − C − v

where K = B N⁻¹ Bᵗ w and C are constant and l is the variable. Using the propagation of errors Σ̂ll = K Σll Kᵗ, with the prior Σll = σo² w⁻¹ and Σ̂ll the posterior:

Σ̂ll = (B N⁻¹ Bᵗ w)(σo² w⁻¹)(w B N⁻¹ Bᵗ)
Σ̂ll = σo² B N⁻¹ (Bᵗ w B) N⁻¹ Bᵗ
Σ̂ll = σo² B N⁻¹ Bᵗ = σo² Qll

(3) Derivation of Σvv

v = F − BΔ = F − B N⁻¹ t

Because F = (l − Fx) and t = Bᵗ w F:

v = (l − Fx) − B N⁻¹ Bᵗ w l + B N⁻¹ Bᵗ w Fx
v = (I − B N⁻¹ Bᵗ w) l − C1 + C2 = K l − C1 + C2

where C1 and C2 are constants. Using the propagation of errors Σvv = K Σll Kᵗ:

Σvv = (I − B N⁻¹ Bᵗ w)(σo² w⁻¹)(I − w B N⁻¹ Bᵗ)
Σvv = σo² (w⁻¹ − w⁻¹ w B N⁻¹ Bᵗ − B N⁻¹ Bᵗ + B N⁻¹ Bᵗ w B N⁻¹ Bᵗ)
∴ Σvv = σo² (w⁻¹ − B N⁻¹ Bᵗ) = σo² (Q − Qll)

3.3 ADJUSTMENT BY CONDITION EQUATIONS METHOD

The necessary number of condition equations to apply the least squares adjustment is equal to the redundancy r in the observations, as presented in Eq. (3.1). The mathematical derivation of the adjustment with condition equations is based on Eq. (3.7) of the least squares principle (φ = vᵗwv → minimum). Referring back to Eq. (3.6), Av = F, and using the same derivation technique with Lagrange multipliers:

φ' = vᵗ w v − 2kᵗ(Av − F)

(3.26)

To get the minimum value of φ, we apply the partial derivative with respect to the residuals v and equate it to zero:

∂φ'/∂v = 2vᵗ w − 2kᵗ A = 0ᵗ

(3.27)

By transposing and rearranging the equation, it yields the following:

w v − Aᵗ k = 0

(3.28)

v = w⁻¹ Aᵗ k = Q Aᵗ k

(3.29)


Substituting the residual vector v into the condition equation model (3.6), we get:

A Q Aᵗ k = F (3.30)

Accordingly, if we assume A Q Aᵗ = Qe = we⁻¹, then:

k = (A Q Aᵗ)⁻¹ F = we F

(3.31)


Eq. (3.31) is called the normal equations in the least squares adjustment using the condition equations method. If we substitute the Lagrange vector k into Eq. (3.29), we get:

v = Q Aᵗ (A Q Aᵗ)⁻¹ F (3.32)

Because the condition equations model deals with observations without unknowns, the postanalysis computations can only provide the variance-covariance matrix of the observations Σll:

Qll = Q − Qvv = Q − Q Aᵗ (A Q Aᵗ)⁻¹ A Q (3.33)

Or:

Σll = σo² Qll (3.34)

This also indicates the relation between the weight matrix and the variance-covariance matrix:

w = σo² Σll⁻¹ (3.35)

It should be noted that the variance-covariance matrices are a valid measure for all the statistical analysis after adjustment, such as blunder detection (Chapter 12), the computation of the ellipse of errors, or the reliability evaluation of the adjustment. The following Example 3.2 illustrates the use of both adjustment techniques: the observation method and the condition method.
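As a minimal illustration of Eq. (3.32) (an example of my own, not the book's): adjusting the three measured angles of a plane triangle under the condition φ1 + φ2 + φ3 = 180°, with A = [1 1 1] and equal weights Q = I, the residual formula reduces to splitting the misclosure equally:

```python
# Measured triangle angles in degrees (illustrative values, equal weights Q = I)
phi = [60.012, 59.989, 60.008]

# Condition Av = F with A = [1 1 1] and misclosure F = 180 - sum(phi)
F = 180.0 - sum(phi)

# Eq. (3.32): v = Q A^T (A Q A^T)^-1 F; here A Q A^T = 3, so each residual is F/3
v = [F / 3.0] * 3
phi_adj = [p + r for p, r in zip(phi, v)]   # adjusted angles now sum to 180
```

With unequal weights, Eq. (3.32) would instead distribute the misclosure in proportion to the cofactors Q of the individual angles.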

EXAMPLE 3.2
A total station instrument is used to measure the distances shown in Fig. 3.2 along a straight baseline. It is assumed that the observed distances have equal weight [10] (Table 3.1).

TABLE 3.1 The Measured Distances Along the Baseline AD
Side   Distance (m)
AB     11.152
BC     13.499
CD     12.052
AC     24.684
BD     25.539
AD     36.711

FIG. 3.2 The observed baseline distances.

To adjust measurements AB, BC, and CD, additional measurements AC, BD, and AD were made. Distances AB, BC, and CD were designated as X1, X2, and X3, respectively.

Required
Apply the least squares adjustment using both the observation and condition equations techniques.


Solution
(1) Adjustment by observation equations method
A quick look at the problem shows that n = 6, no = 3, and the redundancy r = 3. So, we need to form six observation equations based on the total number of field observations n, in the matrix form v = BΔ − F. Accordingly:

AC = AB + BC = X1 + X2
BD = BC + CD = X2 + X3
AD = AB + BC + CD = X1 + X2 + X3

Therefore, the six observation equations can be prepared as:

v1 = X1 − 11.152
v2 = X2 − 13.499
v3 = X3 − 12.052
v4 = X1 + X2 − 24.684
v5 = X2 + X3 − 25.539
v6 = X1 + X2 + X3 − 36.711

In matrix form:

[v1; v2; v3; v4; v5; v6] = [1 0 0; 0 1 0; 0 0 1; 1 1 0; 0 1 1; 1 1 1] [X1; X2; X3] − [11.152; 13.499; 12.052; 24.684; 25.539; 36.711]

Using the least squares adjustment with equal weights, (BᵗB)Δ = (BᵗF), we can compute the adjusted distances as:

X̂1 = 11.165 m, X̂2 = 13.504 m, X̂3 = 12.043 m

The residuals v can be evaluated as:

v = BΔ − F = [0.013; 0.005; −0.009; −0.015; 0.008; 0.001]

and the standard deviation of unit weight is σo = 0.014 m.


It should be noted in this example that the observation equations are linear, and therefore there is no need to iterate the solution. In practice, this kind of linear equation can be formulated for leveling networks and GPS networks as well.
(2) Adjustment by condition equations method
The number of condition equations must equal the redundancy r in the observations, which is 3 in this example. Therefore, denoting the six observed distances by l1, …, l6, the condition equations can be formulated as:

v1 + v2 − v4 = l4 − (l1 + l2)
v2 + v3 − v5 = l5 − (l2 + l3)
v1 + v2 + v3 − v6 = l6 − (l1 + l2 + l3)

In matrix form Av = F:

A = [1 1 0 −1 0 0; 0 1 1 0 −1 0; 1 1 1 0 0 −1]
F = [24.684 − (11.152 + 13.499); 25.539 − (13.499 + 12.052); 36.711 − (11.152 + 13.499 + 12.052)] = [0.033; −0.012; 0.008] m

For least squares adjustment using the condition equations method with equal weight (Q = I), v = Aᵗ(AAᵗ)⁻¹F:

v = [0.013; 0.005; −0.009; −0.015; 0.008; 0.001] → adjusted observations l̂ = l + v = [11.165; 13.504; 12.043; 24.670; 25.547; 36.712]

which is identical to the results we got using the observation equations method.
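The observation-method half of Example 3.2 can be replicated in a few lines of pure Python; the sketch below (not the book's code) solves the normal equations (BᵗB)Δ = BᵗF with a small Gaussian elimination and recovers the adjusted distances quoted in the text:

```python
def solve(M, b):
    """Gaussian elimination with partial pivoting for M x = b."""
    n = len(M)
    M = [row[:] + [bi] for row, bi in zip(M, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

B = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0], [0, 1, 1], [1, 1, 1]]
F = [11.152, 13.499, 12.052, 24.684, 25.539, 36.711]

# Normal equations (B^T B) d = (B^T F)
BtB = [[sum(B[k][i] * B[k][j] for k in range(6)) for j in range(3)] for i in range(3)]
BtF = [sum(B[k][i] * F[k] for k in range(6)) for i in range(3)]
X = solve(BtB, BtF)          # adjusted X1, X2, X3 (AB, BC, CD)

v = [sum(B[k][i] * X[i] for i in range(3)) - F[k] for k in range(6)]
sigma0 = (sum(vi * vi for vi in v) / 3) ** 0.5
```

Rounded to the millimeter, the solution is 11.165, 13.504, 12.043 m with σo ≈ 0.014 m, matching both adjustment techniques above.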

EXAMPLE 3.3
In a GPS survey, the following baselines are observed between the reference stations A and D [11] (Table 3.2):

TABLE 3.2 GPS Observed Baselines
          ΔX [m]           ΔY [m]           ΔZ [m]
Fixed A   −2205949.0762    −4884126.7921    3447135.1550
ΔAB       +3777.9104       −6006.8201       −6231.5468
ΔBC       +7859.4707       −3319.1092       +400.1902
ΔCD       +5886.8716       −4288.9638       −2350.2230
Fixed D   −2188424.3707    −4897740.6844    3438952.8159

(The Fixed A and Fixed D rows hold the known station coordinates X, Y, Z.)


Required
Apply the least squares adjustment using both the observation and condition equations techniques.

Solution
(1) Adjustment by observation equations method
To compute the initial coordinates of station B, we add the observed baseline components to the fixed coordinates of A:

[XB; YB; ZB] = [XA; YA; ZA] + [ΔXAB; ΔYAB; ΔZAB] = [−2202171.1658; −4890133.6122; 3440903.6082]

where ΔXAB, ΔYAB, ΔZAB are the observed baseline components. For point C, we add the observed baseline from station B to station C:

[XC; YC; ZC] = [XB; YB; ZB] + [ΔXBC; ΔYBC; ΔZBC] = [−2194311.6951; −4893452.7214; 3441303.7984]

Therefore, we can write the observation equations v = BΔ − F for the whole network, with the corrections vector Δ = [δXB; δYB; δZB; δXC; δYC; δZC]. The rows for baseline AB carry +I3 on the B corrections; the rows for BC carry −I3 on B and +I3 on C; the rows for CD carry −I3 on C, with I3 the 3 × 3 identity matrix:

B = [ I3  0;  −I3  I3;  0  −I3 ]  (9 × 6)

F = [XB − (XA + ΔXAB); YB − (YA + ΔYAB); ZB − (ZA + ΔZAB); XC − (XB + ΔXBC); YC − (YB + ΔYBC); ZC − (ZB + ΔZBC); XD − (XC + ΔXCD); YD − (YC + ΔYCD); ZD − (ZC + ΔZCD)]
  = [0; 0; 0; 0; 0; 0; +0.4528; +1.0008; −0.7595]


Using least squares adjustment with equal weights, (BᵗB)Δ = (BᵗF), we get the corrections (the sign of F is inverted in forming the normal equations):

Δ = [δXB; δYB; δZB; δXC; δYC; δZC] = [+0.1509; +0.3336; −0.2532; +0.3019; +0.6672; −0.5063]

Now adding the corrections to the initial coordinates of B and C, we get:

[XB; YB; ZB] = [−2202171.0149; −4890133.2786; 3440903.3551] and [XC; YC; ZC] = [−2194311.3932; −4893452.0542; 3441303.2921]

The residual errors are computed as:

v = BΔ − F = [+0.1509; +0.3336; −0.2532; +0.1509; +0.3336; −0.2532; −0.7546; −1.6680; +1.2658] → σo² = 1.783

(2) Adjustment by condition equations method
The number of condition equations needed equals the redundancy r:

r = n − no = 9 − 6 = 3

Accordingly, we prepare the following three condition equations (Av = F):

vXAB + vXBC + vXCD = XD − (XA + ΔXAB + ΔXBC + ΔXCD)
vYAB + vYBC + vYCD = YD − (YA + ΔYAB + ΔYBC + ΔYCD)
vZAB + vZBC + vZCD = ZD − (ZA + ΔZAB + ΔZBC + ΔZCD)


Or in matrix form, with the residual vector ordered as [vXAB; vXBC; vXCD; vYAB; vYBC; vYCD; vZAB; vZBC; vZCD]:

A = [1 1 1 0 0 0 0 0 0; 0 0 0 1 1 1 0 0 0; 0 0 0 0 0 0 1 1 1]
F = [+0.4528; +1.0008; −0.7595] m

v = Aᵗ(AAᵗ)⁻¹F = [+0.1509; +0.1509; +0.1509; +0.3336; +0.3336; +0.3336; −0.2532; −0.2532; −0.2532]

The adjusted baselines are then l̂ = l + v:

ΔX̂AB = 3778.0613,  ΔX̂BC = 7859.6216,  ΔX̂CD = 5887.0225
ΔŶAB = −6006.4865, ΔŶBC = −3318.7756, ΔŶCD = −4288.6302
ΔẐAB = −6231.7999, ΔẐBC = +399.9370,  ΔẐCD = −2350.4761

For checking, after adding the residuals the closure elements of F become zero, because the closure errors are distributed through the baseline components. Finally, the adjusted coordinates of stations B and C are:

[XB; YB; ZB] = [XA; YA; ZA] + [ΔX̂AB; ΔŶAB; ΔẐAB] = [−2202171.0149; −4890133.2786; 3440903.3551]
[XC; YC; ZC] = [XB; YB; ZB] + [ΔX̂BC; ΔŶBC; ΔẐBC] = [−2194311.3932; −4893452.0542; 3441303.2921]

These results are identical to those obtained using the observation equation method. It should be mentioned that in the next chapters we will focus more on the nonlinear applications of the observation equations technique, because it is more efficient than the condition equations technique in terms of postanalysis and blunder detection.
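The condition-method numbers above can be cross-checked numerically. The book's listings are in MATLAB; the following is an equivalent NumPy sketch using the same baseline data:

```python
import numpy as np

# Baseline data of the example: GPS vectors A->B, B->C, C->D; A and D are fixed
A = np.array([-2205949.0762, -4884126.7921, 3447135.1550])
d = np.array([[+3777.9104, -6006.8201, -6231.5468],
              [+7859.4707, -3319.1092,  +400.1902],
              [+5886.8716, -4288.9638, -2350.2230]])
D = np.array([-2188424.3707, -4897740.6844, 3438952.8159])

# Condition equations A1 v = F: per coordinate, the three baseline residuals
# must absorb the misclosure at the fixed station D
A1 = np.kron(np.eye(3), np.ones(3))          # 3 x 9
F = D - (A + d.sum(axis=0))                  # closure errors in x, y, z
v = A1.T @ np.linalg.solve(A1 @ A1.T, F)     # v = A'(AA')^-1 F

# Adjusted coordinates of B: initial baseline plus its residuals
B = A + d[0] + v[[0, 3, 6]]
```

Each coordinate's misclosure (+0.4528, +1.0008, -0.7595 m) is split equally over the three baselines, reproducing the residuals +0.1509, +0.3336, -0.2532 m.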


3. LEAST SQUARES ADJUSTMENT PROCEDURES

MATLAB code
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%% example 3.3 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clc,clear
% input
A=[-2205949.0762 -4884126.7921 3447135.1550];
d=[+3777.9104 -6006.8201 -6231.5468;
   +7859.4707 -3319.1092 +400.1902;
   +5886.8716 -4288.9638 -2350.2230];
D=[-2188424.3707 -4897740.6844 3438952.8159];
%%%%%%%%%%%%%%%%%% observation adjustment method %%%%%%%%%%%%%%%%%%%%%%%%%%
B=A+d(1,:); C=B+d(2,:); Dc=C+d(3,:);
a=zeros(9,6);
a(1,1)=1;a(2,2)=1;a(3,3)=1;a(4,4)=1;a(5,5)=1;a(6,6)=1;
a(4,1)=-1;a(5,2)=-1;a(6,3)=-1;a(7,4)=-1;a(8,5)=-1;a(9,6)=-1;
F=[0;0;0;0;0;0;D(1,1)-Dc(1,1);D(1,2)-Dc(1,2);D(1,3)-Dc(1,3)];
N=inv(a'*a);
t=a'*-F;
delta=N*t;
Adj=[B';C']+delta;
v=a*delta-F;
sigma=(v'*v)/3;
vcov=sqrt(diag(sigma*N));
B=Adj(1:3,1)
C=Adj(4:6,1)
%%%%%%%%%%%%%%%%%% condition adjustment method %%%%%%%%%%%%%%%%%%%%%%%%%%%%
A1=[1 1 1 0 0 0 0 0 0;0 0 0 1 1 1 0 0 0;0 0 0 0 0 0 1 1 1];
F=[D(1,1)-(A(1,1)+sum(d(:,1))); D(1,2)-(A(1,2)+sum(d(:,2)));...
   D(1,3)-(A(1,3)+sum(d(:,3)))];
v= A1'* inv(A1*A1')*F ;
sig=(v'*v)/3;
d(:,1)=v(1:3)+d(:,1);
d(:,2)=v(4:6)+d(:,2);
d(:,3)=v(7:9)+d(:,3);
B=A+d(1,:)
C=B+d(2,:)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


3.4 GRAPHICAL REPRESENTATIONS OF THE POSITIONAL ERROR

After applying the least squares adjustment of the unknown coordinates of points, it is possible to graphically visualize the confidence limits of these adjusted points. This can be done by deriving and then plotting an ellipse in the case of 2D problems, or an ellipsoid in the case of 3D problems. The derivation is based on analyzing the variance-covariance matrix of the adjusted points, as will be presented in the next sections. It should be noted that the ellipse and ellipsoid of errors are useful to evaluate the strength of geomatic observations and to spot weaknesses as well. Accordingly, optimizing the observation plan efficiently produces a more robust observation network, for example by adding extra measurements at the detected weak parts of a geodetic network. Fig. 3.3 illustrates three types of geodetic networks: trilateration, triangulation, and a mixed (hybrid) network. The reader can directly conclude the strength of the three networks based on the size of the ellipses of errors, plotted with the same exaggeration factor.

FIG. 3.3 Three kinds of measured networks with the same exaggeration factor: (A) trilateration network, (B) triangulation network, (C) mixed triangulation-trilateration network.


3.4.1 Ellipse of Errors

After the adjustment of a 2D geodetic problem, we can use the variance-covariance matrix elements σx², σy², and σxy of the adjusted points to compute the elements of the ellipse of errors (semimajor axis σmax, semiminor axis σmin, and the rotation angle φ) as shown in Fig. 3.4.

FIG. 3.4 Ellipse of errors.

Different mathematical solutions can be followed to find the elements of the ellipse of errors, where three variables quu, qvv (related to the eigenvalues λ1, λ2) and K are computed based on the covariance matrix Q = N⁻¹ of the adjusted points:

Q = [QXX QXY; QXY QYY]  (3.36)

K = √((QYY - QXX)² + 4QXY²)  (3.37)

quu = (QYY + QXX + K)/2  (3.38)

qvv = (QYY + QXX - K)/2  (3.39)

Then we can compute the semimajor and semiminor axes of the ellipse as:

σmax = σo √quu  (3.40)

σmin = σo √qvv  (3.41)

where σo represents the standard deviation of unit weight. It should be noted that we can directly use the variance-covariance matrix to compute the lengths of the axes of the ellipse of errors, by computing the eigenvalues λ1, λ2 and then deriving the major and minor axes as:

σmax = √λ1 and σmin = √λ2  (3.42)


Subsequently, the rotation angle φ of the ellipse can be computed using the following equation:

tan(2φ) = 2QXY / (QYY - QXX)  (3.43)

Eq. (3.43) is used to compute the angle 2φ, but to determine φ we need to place 2φ in the correct quadrant by checking the algebraic signs of the numerator and denominator before dividing by 2. Another method is to compute two angles, α1 of the semimajor axis and α2 of the semiminor axis, as follows:

tan α1 = QXY / (λ1 - QXX) = (λ1 - QYY) / QXY  (3.44)

tan α2 = QXY / (λ2 - QXX) = (λ2 - QYY) / QXY  (3.45)

It should be noted that these computations refer to the standard ellipse of errors, whereas the ellipse of errors at a probability of 95% is computed as:

σmax_95% = 1.96 σmax  (3.46)

σmin_95% = 1.96 σmin  (3.47)
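The ellipse elements of Eqs. (3.42)–(3.43) can be coded compactly. A NumPy sketch (the book's own listings are in MATLAB), using atan2 so that the quadrant of 2φ is handled automatically; the covariance of station A from Example 3.4 is used, with the off-diagonal sign implied by the tabulated α1:

```python
import numpy as np

def error_ellipse(Q):
    """Standard error ellipse elements from a 2x2 covariance matrix."""
    lam = np.linalg.eigvalsh(Q)            # ascending: lam[0] <= lam[1]
    smax, smin = np.sqrt(lam[1]), np.sqrt(lam[0])   # Eq. (3.42)
    # atan2 keeps 2*phi in the correct quadrant before halving (Eq. 3.43)
    phi = 0.5 * np.degrees(np.arctan2(2 * Q[0, 1], Q[1, 1] - Q[0, 0]))
    return smax, smin, phi % 180.0

QA = np.array([[0.001555, 0.001503],
               [0.001503, 0.003408]])
smax, smin, phi = error_ellipse(QA)        # ~0.065 m, ~0.027 m, ~29.18 deg
```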

EXAMPLE 3.4
Given
– Four stations A, B, C, and D with the following coordinates (Table 3.3):

TABLE 3.3 Four Stations Coordinates
Station     X [m]   Y [m]
A           6276    91543
B           7398    88102
C (fixed)   500     90500
D (fixed)   2320    87381

The variance-covariance matrices of the two stations A and B are as follows:

ΣA = [0.001555 0.001503; 0.001503 0.003408]
ΣB = [0.000756 -0.000596; -0.000596 0.005132]

Required
Compute and plot the ellipse of errors at the two adjusted points A and B.


Solution
Using Eqs. (3.42), (3.44), and (3.45), we compute the ellipse of errors elements as follows (Table 3.4 and Fig. 3.5):

TABLE 3.4 Ellipse of Errors Elements
Station   σmin      σmax      α1        α2
A         0.027 m   0.065 m   29.18°    119.18°
B         0.026 m   0.072 m   172.38°   82.38°


FIG. 3.5 Exaggerated ellipses of errors of adjusted stations A and B.

3.4.2 Relative Ellipse of Errors

In many geomatics applications, researchers are interested in the relative accuracy rather than the absolute accuracy. Relative accuracy as a 2D problem can be visualized through the computation of the relative ellipse of errors, which considers the differences in coordinates between points as follows:

dx = x2 - x1  (3.48)

dy = y2 - y1  (3.49)

The variance-covariance matrix Σ1,2 of the two points (x1, y1) and (x2, y2) is:

Σ1,2 = [σx1²  σx1y1  σx1x2  σx1y2
        ·     σy1²   σy1x2  σy1y2
        ·     ·      σx2²   σx2y2
        ·     ·      ·      σy2² ]  (symmetric)  (3.50)

Writing Eqs. (3.48) and (3.49) in matrix form:

d = [dx; dy] = [-1 0 1 0; 0 -1 0 1] · [x1; y1; x2; y2] = J P1,2  (3.51)

Now using the propagation of errors (Chapter 2), Σdd = J Σ1,2 Jᵗ:

[σdx²   σdxdy
 σdxdy  σdy² ] = J Σ1,2 Jᵗ

Accordingly, we get the following variances and covariance:

σdx² = σx1² - 2σx1x2 + σx2²  (3.52)

σdy² = σy1² - 2σy1y2 + σy2²  (3.53)

σdxdy = σx1y1 - σx1y2 - σx2y1 + σx2y2  (3.54)

Therefore, by computing the variances σdx², σdy² and the covariance σdxdy of Eqs. (3.52), (3.53), and (3.54), and by using the same formulas as for the ellipse of errors in Eq. (3.42), we can compute the relative ellipse of errors elements a and b and the orientation angle. Computing the relative ellipse or ellipsoid of errors is very useful to visualize the relative accuracy and represents a reliability check on the quality of the observations.
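The propagation in Eqs. (3.51)–(3.54) is one matrix product. A NumPy sketch (the book uses MATLAB), with an assumed 4×4 covariance of two points ordered (x1, y1, x2, y2):

```python
import numpy as np

# Jacobian of Eq. (3.51): d = J p, so Sigma_dd = J Sigma J^T
J = np.array([[-1.0, 0.0, 1.0, 0.0],
              [ 0.0,-1.0, 0.0, 1.0]])

# Assumed full covariance of the two points (x1, y1, x2, y2), in m^2
Sigma = np.array([[ 0.00115, -0.00025,  0.00080,  5e-05  ],
                  [-0.00025,  0.00275, -0.00145,  0.00105],
                  [ 0.00080, -0.00145,  0.00275, -0.00060],
                  [ 5e-05,    0.00105, -0.00060,  0.00130]])

Sdd = J @ Sigma @ J.T                 # 2x2 covariance of (dx, dy)
sdx, sdy = np.sqrt(np.diag(Sdd))      # relative standard deviations
```

The diagonal of Sdd reproduces Eqs. (3.52)–(3.53) term by term, and the off-diagonal reproduces Eq. (3.54).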


EXAMPLE 3.5
Two adjusted points C and D have the following full variance-covariance matrix after adjustment:

[ΣCC ΣCD; ΣDC ΣDD] = [ 0.00115  -0.00025   0.00080   5e-05
                       -0.00025   0.00275  -0.00145   0.00105
                        0.00080  -0.00145   0.00275  -0.00060
                        5e-05     0.00105  -0.00060   0.00130 ] m²

Required
Compute: (1) the standard ellipse of errors of the points; (2) the 90% ellipse of errors of the points; (3) the 95% relative ellipse of errors of the line CD.

Solution
(1) The elements of the standard ellipse of errors are computed for each point by extracting the variance-covariance matrix of that point:

ΣCC = [0.00115 -0.00025; -0.00025 0.00275] m²
ΣDD = [0.00275 -0.00060; -0.00060 0.00130] m²

Then, we compute the eigenvalues λ1C, λ2C and λ1D, λ2D:

σmax C = √λ1C = 0.053 m
σmin C = √λ2C = 0.033 m
σmax D = √λ1D = 0.054 m
σmin D = √λ2D = 0.033 m

The rotation angles of the two ellipses of errors are computed as:
point C: φ = 171°
point D: φ = 110°

(2) The 90% ellipses of errors are computed from the standard-ellipse elements of the first requirement after multiplying by 1.64:

σ'max C = 1.64 σmax C = 0.087 m
σ'min C = 1.64 σmin C = 0.054 m
σ'max D = 1.64 σmax D = 0.089 m
σ'min D = 1.64 σmin D = 0.054 m

The ellipses become larger because the confidence level is raised from that of the standard ellipse (about 39%) to 90%.


(3) To compute the relative ellipse of errors of the side CD, we use Eqs. (3.52), (3.53), and (3.54) as follows:

σdx² = σx1² - 2σx1x2 + σx2² → σdx = 0.048 m
σdy² = σy1² - 2σy1y2 + σy2² → σdy = 0.045 m
σdxdy = σx1y1 - σx1y2 - σx2y1 + σx2y2 → σdxdy = 0.0005

Accordingly, the elements of the relative ellipse of errors are computed as follows (Fig. 3.6):

a = √λ1 = 0.048 m → a95% = 1.96a = 0.094 m
b = √λ2 = 0.044 m → b95% = 1.96b = 0.087 m


FIG. 3.6 Relative ellipse at 100% and 95% confidence limits.

MATLAB code
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%% Example 3.5 relative ellipse %%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clc; clear; close all
format short g
% adjusted points
x=[6276; 7398];
y=[91543; 88102];
% covariance matrices
vcov=[ 0.00115 -0.00025  0.0008    5e-05
      -0.00025  0.00275 -0.00145   0.00105
       0.0008  -0.00145  0.00275  -0.0006
       5e-05    0.00105 -0.0006    0.0013];
k=0; unk=2; exag=10000;
% ellipse axes of each point from the eigenvalues of its 2x2 block
for i=1:2:2*unk
    k=k+1;
    bn=eig(vcov(i:i+1,i:i+1));
    eg=sort(sqrt(bn));
    smin1(k,1)=eg(1,1); smax1(k,1)=eg(2,1); % ellipse axes
end
% rotation angle phi of each ellipse, Eq. (3.43)
for i=1:unk
    kk=2*i;
    bast=2*vcov(kk,kk-1);                  % numerator 2*Qxy
    maqam=vcov(kk,kk)-vcov(kk-1,kk-1);     % denominator Qyy-Qxx
    phi2=(180/pi)*atan(bast/maqam);
    % quadrant check of 2*phi before halving
    % (branches reconstructed; the printed listing is garbled here)
    if bast>=0 && maqam>0
        phi=phi2/2;
    elseif bast>=0 && maqam<0
        phi=(phi2+180)/2;
    elseif bast<0 && maqam<0
        phi=(phi2+180)/2;
    else
        phi=(phi2+360)/2;
    end
    phi1(i,1)=phi;
end

For the relative ellipsoid of errors, the variances and covariances of the coordinate differences are:

σdx² = σx1² + σx2² - 2σx1x2
σdxdy = σx1y1 - σx1y2 + σx2y2 - σy1x2
σdxdz = σx1z1 - σx1z2 + σx2z2 - σz1x2
σdy² = σy1² + σy2² - 2σy1y2
σdydz = σy1z1 - σy1z2 + σy2z2 - σz1y2
σdz² = σz1² + σz2² - 2σz1z2  (3.56)


where

Σdd = [ σdx²   σdxdy  σdxdz
        ·      σdy²   σdydz
        ·      ·      σdz²  ]  (symmetric)  (3.57)
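As in the 2D case, Eqs. (3.56)–(3.57) are a single propagation Σdd = J Σ Jᵗ with J = [-I₃ I₃]. A NumPy sketch (the book uses MATLAB) with an assumed, randomly generated 6×6 covariance of the two endpoints ordered (x1, y1, z1, x2, y2, z2):

```python
import numpy as np

# 3D analogue of Eq. (3.51): d = [x2-x1, y2-y1, z2-z1] = J p
J = np.hstack([-np.eye(3), np.eye(3)])   # 3 x 6

# Assumed symmetric positive-definite 6x6 covariance of the two endpoints
rng = np.random.default_rng(0)
M = rng.normal(size=(6, 6))
Sigma = M @ M.T

Sdd = J @ Sigma @ J.T                    # Eq. (3.57); entries match Eq. (3.56)
```

The eigenvalues and eigenvectors of Sdd then give the axis lengths and directions of the relative ellipsoid.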

EXAMPLE 3.7
Given
– Line AB is measured where the coordinates of the endpoints are as follows (Table 3.8):

TABLE 3.8 Line AB Endpoints Coordinates
Point   X [m]   Y [m]   Z [m]
A       6.33    5.28    9.40
B       1.33    5.24    9.38

– Variance-covariance matrix of AB is:

Required
Calculate the elements of the relative ellipsoid of errors.

Solution
Using Eq. (3.56), we compute the variance-covariance matrix elements of the relative ellipsoid as follows:

Σdd = [0.346 0.004 0.006; 0.004 0.276 0.010; 0.006 0.010 0.237]

Then we compute the ellipsoid of errors elements as follows (Fig. 3.8; Table 3.9):

TABLE 3.9 Computed Ellipsoid of Errors Elements
Axis length [m]   Azimuth°   Elev.°
0.234             190.5      76.3
0.278             355.3      13.3
0.347             266.1      -3.5


FIG. 3.8 Relative ellipsoid of errors.

3.5 HOMOGENOUS LEAST SQUARES ADJUSTMENT

The previous adjustment using the least squares method of Section 3.2 was presented using a linearized nonhomogenous system of normal equations. Typically, the system is overdetermined with redundant observations, and the existence of the solution is ensured by the least squares method. However, solving a homogeneous linear system (Ax = 0) is also a typical problem in geomatics and photogrammetry in different applications [73]. Some examples of using the homogenous least squares adjustment method are:
• The determination of the camera pose parameters by the direct linear transformation (DLT).
• The determination of the relative orientation using the essential or fundamental matrix from the observed coordinates of corresponding points in two images.
• Projective transformation using the homography matrix.
• Generalized inverse (pseudoinverse A⁺) in rank-deficient free-network adjustment.
• 3D primitive fitting of lines, planes, etc.
A homogeneous system Ax = 0 is satisfied either by x = 0 (the trivial solution) or, if x ≠ 0, by any kx, where k is an arbitrary scalar. However, only the ratios of the unknowns can be determined, and an additional condition is thus needed to fix the length of the solution vector. A simple way is to fix some component of the solution vector and to solve for the rest of the unknowns x from the nonhomogeneous system [73]. In this book, one solution method for the homogeneous least squares problem is presented: the singular value decomposition (SVD) introduced in Chapter 2. The SVD of a matrix is a very useful tool in the context of least squares problems, and it is also a helpful tool for analyzing the properties of a matrix [74]. SVD guarantees that the entries of a numerically estimated matrix satisfy given constraints (e.g., orthogonality); for example, a computed rotation matrix should be an orthogonal matrix.
For the homogenous least squares, it is required to solve an overdetermined system of the form Ax = 0 of m linear equations and n unknowns, where m ≥ n - 1 and rank(A) = n - 1, and then we can apply Eq. (2.18). Accordingly, the unknown parameter vector x is the column of V corresponding to the zero singular value of A. Because the columns are ordered, this is the rightmost column of V. The SVD-based least squares solution can be derived as follows:

Ax = b → x = V S⁻¹ Uᵗ b = A⁻¹ b  (3.58)


In computer vision and photogrammetry, the estimation of projective transformation either in 2D or 3D can be applied using the homogenous least squares method through the estimation of the homography matrix. Another application is to estimate the pose of a camera in space, which includes the camera rotation and translation through the direct linear transformation (DLT) method. The third example is the estimation of relative orientation between two images in space through what is called the fundamental matrix.
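A minimal sketch of the SVD route for Ax = 0 (NumPy here, mirroring the book's MATLAB usage): the solution is the right singular vector belonging to the smallest singular value. The matrix below is constructed with a known (up to scale) null vector so the recovery can be checked:

```python
import numpy as np

# Build a matrix whose rows are all orthogonal to a known unit vector x_true
rng = np.random.default_rng(1)
x_true = np.array([2.0, -1.0, 0.5])
x_true /= np.linalg.norm(x_true)
basis = np.linalg.svd(x_true.reshape(1, 3))[2][1:]   # orthonormal complement
A = rng.normal(size=(6, 2)) @ basis                  # 6x3, rank 2, A @ x_true = 0

_, _, Vt = np.linalg.svd(A)
x = Vt[-1]            # right singular vector of the smallest singular value
```

Up to an arbitrary sign, x recovers x_true; in practice the zero singular value is only approximately zero because of noise, and the same rightmost column of V remains the least squares minimizer of ‖Ax‖ subject to ‖x‖ = 1.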

3.5.1 Application in Image Rectification Using Homography

Image rectification is aimed at aligning two or more images of the same scene to a reference (known as registration) or to a common plane [76]. From an algebraic viewpoint, the rectification is achieved by applying 2D projective transformations (or homographies) to both images. The author intentionally selected this application to demonstrate the possibility of solving some adjustment problems either with nonhomogeneous or with homogenous least squares.
• The homogenous system of equations: This is applied by computing the homography matrix H between the two images based on knowing a minimum of four corresponding points P1 and P2 in both images:

P1 = H · P2  (3.59)

or

[x; y; 1] = [h11 h12 h13; h21 h22 h23; h31 h32 h33] · [x'; y'; 1]  (3.60)

Because the homography transform is written using homogeneous coordinates, the homography H is defined by eight parameters plus a free ninth homogeneous scaling factor. Therefore, at least four corresponding points providing eight equations are required to compute the homography. Practically, a larger number of correspondences is employed so that an overdetermined linear system is obtained. The projective transformation, with the constraint ‖h‖ = 1, can be written as:

x' = (h11 x + h12 y + h13) / (h31 x + h32 y + h33)  (3.61)

y' = (h21 x + h22 y + h23) / (h31 x + h32 y + h33)  (3.62)

Multiplying and rearranging yields:

h11 x + h12 y + h13 - h31 xx' - h32 yx' - h33 x' = 0  (3.63)

h21 x + h22 y + h23 - h31 xy' - h32 yy' - h33 y' = 0  (3.64)


For n pairs of point-correspondences, a 2n × 9 linear system can be constructed as follows:

[ x1 y1 1  0  0  0  -x1x1'  -y1x1'  -x1'
  0  0  0  x1 y1 1  -x1y1'  -y1y1'  -y1'
  :  :  :  :  :  :  :       :       :
  xn yn 1  0  0  0  -xnxn'  -ynxn'  -xn'
  0  0  0  xn yn 1  -xnyn'  -ynyn'  -yn' ]2n×9 · [h11; h12; h13; h21; h22; h23; h31; h32; h33]9×1 = [0]2n×1  (3.65)

Solving this linear system involves the calculation of an SVD, where the solution corresponds to the last column of the matrix V. To avoid numerical instabilities, the coordinates of the point-correspondences should be normalized. In practice, RANSAC (Chapter 12) is used to remove outliers and keep the inlier points that fall on a common plane.
• The nonhomogenous system of equations: We rewrite the projective transformation (the homography H) in vector form as h = [h11, h12, h13, h21, h22, h23, h31, h32]ᵀ, considering h33 = 1:

x' = (h11 x + h12 y + h13) / (h31 x + h32 y + 1)  (3.66)

y' = (h21 x + h22 y + h23) / (h31 x + h32 y + 1)  (3.67)

Multiplying and rearranging yields:

h11 x + h12 y + h13 - h31 xx' - h32 yx' = x'  (3.68)

h21 x + h22 y + h23 - h31 xy' - h32 yy' = y'  (3.69)

In matrix form, the observation equation system is:

[ x1 y1 1  0  0  0  -x1x1'  -y1x1'
  0  0  0  x1 y1 1  -x1y1'  -y1y1'
  :  :  :  :  :  :  :       :
  xn yn 1  0  0  0  -xnxn'  -ynxn'
  0  0  0  xn yn 1  -xnyn'  -ynyn' ]2n×8 · [h11; h12; h13; h21; h22; h23; h31; h32]8×1 = [x1'; y1'; … ; xn'; yn']2n×1  (3.70)

Then, a least squares adjustment using Eq. (3.17) is applied. In Fig. 3.9A [77], the shape of the building is significantly distorted because the camera’s optical axis was not normal to the plane of the building, and we see an indication of perspective distortion or keystone distortion (Fig. 3.9B). Observing a minimum of four corners manually or digitally that form a large rectangle on the planar facade of the building will ensure the computation of the H matrix. Then


FIG. 3.9 Rectification of the perspective distortion using homography: (A) real camera view and virtual camera geometry; (B) the rectified result.

use H to transform the image into a synthetic front-parallel (rectified) view. This homography transformation is also considered as one of the image warping techniques.

EXAMPLE 3.8
Given
The observed corner points (in pixels) of the nonrectified façade are selected in a clockwise order:

p1 = [x1 x2 x3 x4; y1 y2 y3 y4] = [66.111 70.556 806.11 813.89; 658.33 298.33 150.56 689.44]

Required
Compute the homography matrix H using both the homogenous and the nonhomogenous solution methods.


Solution
Although no redundancy is given in this example, we will demonstrate both possible solutions as presented in Eqs. (3.65) and (3.70). The set of destination corresponding points p2 can be prepared, where they are arranged to represent a rectangular area of the rectified façade as:

p2 = [min[x] min[x] max[x] max[x]; max[y] min[y] min[y] max[y]] → p2 = [66.111 66.111 658.33 658.33; 298.33 70.556 70.556 298.33]

(1) Using the homogeneous system of Eq. (3.65), we get:

[ 66.111 658.33 1   0      0     0  -4370.7    -43523     -66.111
   0      0     0  66.111 658.33 1  -19723     -1.96E+05  -298.33
  70.556 298.33 1   0      0     0  -4664.5    -19723     -66.111
   0      0     0  70.556 298.33 1  -4978.1    -21049     -70.556
 806.11  150.56 1   0      0     0  -5.31E+05  -99118     -658.33
   0      0     0 806.11  150.56 1  -56876     -10623     -70.556
 813.89  689.44 1   0      0     0  -5.36E+05  -4.54E+05  -658.33
   0      0     0 813.89  689.44 1  -2.43E+05  -2.06E+05  -298.33 ]
· [h11; h12; h13; h21; h22; h23; h31; h32; h33] = 0

Using SVD and taking the last column of V divided by its ninth element, H is computed as:

H = [ 1.334     0.019028  -29.354
      0.18884   0.68803   -143.4
      0.000727  4.77E-05   1 ]

(2) Using the nonhomogenous system of Eq. (3.70), we get the same H matrix:

[ 66.111 658.33 1   0      0     0  -4370.7    -43523
   0      0     0  66.111 658.33 1  -19723     -1.96E+05
  70.556 298.33 1   0      0     0  -4664.5    -19723
   0      0     0  70.556 298.33 1  -4978.1    -21049
 806.11  150.56 1   0      0     0  -5.31E+05  -99118
   0      0     0 806.11  150.56 1  -56876     -10623
 813.89  689.44 1   0      0     0  -5.36E+05  -4.54E+05
   0      0     0 813.89  689.44 1  -2.43E+05  -2.06E+05 ]
· [h11; h12; h13; h21; h22; h23; h31; h32] = [66.111; 298.33; 66.111; 70.556; 658.33; 70.556; 658.33; 298.33]


Solving the system of equations yields the same homography matrix, as illustrated in the MATLAB code.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%% chapter 3 - example 3.8 - Homography %%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clc;clear;close all
format short g
%%% source points
p1=[ 66.111  70.556 806.11 813.89
    658.33  298.33 150.56 689.44];
pp1=p1;
mn = min(p1); mx = max(p1);
%% destination points
p2 = [mn(1) mx(2); mn(1) mn(2); mx(1) mn(2); mx(1) mx(2)]'; pp2=p2;
%% convert Euclidean to homogenous coordinates
%%% Apply homogenous least squares with SVD
A=zeros(1,9);
for i=1:4
  a1=[ p1(1,i) p1(2,i) 1 0 0 0 -p1(1,i)*p2(1,i) -p1(2,i)*p2(1,i) -p2(1,i)];
  a2=[ 0 0 0 p1(1,i) p1(2,i) 1 -p1(1,i)*p2(2,i) -p1(2,i)*p2(2,i) -p2(2,i)];
  A=[A;[a1;a2]];
end
A(1,:)=[];
[U,S,V]=svd(A);
hh = V(:,9); %% the rightmost column of V
hh = hh/hh(9);
H= reshape(hh,[3,3]); H=H';
H
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% The solution with non-homogenous system
A1=zeros(1,8); F=0;
for i=1:4
  a1=[ p1(1,i) p1(2,i) 1 0 0 0 -p1(1,i)*p2(1,i) -p1(2,i)*p2(1,i)]; f1=p2(1,i);
  a2=[ 0 0 0 p1(1,i) p1(2,i) 1 -p1(1,i)*p2(2,i) -p1(2,i)*p2(2,i)]; f2=p2(2,i);
  A1=[A1;[a1;a2]]; F=[F;f1;f2];
end
A1(1,:)=[]; F(1,:)=[];
N=inv(A1'*A1); t=A1'*F; % in case of redundancy
X=N*t; X=[X;1]; % adding the last element of 1
H1= reshape(X,[3,3]); H1=H1';
H1
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
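As a cross-check of the DLT solution of Example 3.8, the same SVD route can be sketched in NumPy (the book's listing is in MATLAB); the recovered H must map every source corner exactly onto its destination corner, since four correspondences determine the homography uniquely:

```python
import numpy as np

# Corner correspondences from Example 3.8 (source p1 -> destination p2), pixels
p1 = np.array([[66.111, 70.556, 806.11, 813.89],
               [658.33, 298.33, 150.56, 689.44]])
p2 = np.array([[66.111, 66.111, 658.33, 658.33],
               [298.33, 70.556, 70.556, 298.33]])

# Stack the 2n x 9 system of Eq. (3.65) and solve A h = 0 by SVD
rows = []
for (x, y), (xp, yp) in zip(p1.T, p2.T):
    rows.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp, -xp])
    rows.append([0, 0, 0, x, y, 1, -x * yp, -y * yp, -yp])
A = np.array(rows)
h = np.linalg.svd(A)[2][-1]          # rightmost column of V
H = (h / h[-1]).reshape(3, 3)        # normalize so h33 = 1

# Map the source corners through H and dehomogenize
ph = H @ np.vstack([p1, np.ones(4)])
mapped = ph[:2] / ph[2]
```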

C H A P T E R 4

Observation Models and Least Squares Adjustment

4.1 INTRODUCTION

The least squares adjustment using the method of observations (the indirect method) is the most popular and efficient technique of adjustment computation in geomatics. In many geomatic models, there is only one observed quantity, such as a distance, an angle, or an azimuth direction, and this leads to the composition of one observation equation per observation. The least squares adjustment technique of observations is powerful in the sense of the postanalysis and the accuracy measures that can be estimated after adjustment. Further, it is suitable for programming and can then be productized or added to other software tools. It should be noted that the observation equations necessary for adjustment can be linear or nonlinear, based on the problem to be solved. For example, geodetic leveling network adjustment is based on linear observation equations, whereas image space resection or distance intersection in 2D or 3D is a nonlinear adjustment problem, as will be presented in the following sections of this chapter. In this chapter, we present the observation equation models of the 2D and 3D distance, the azimuth and angle models, the space intersection model, the angular resection models, and image resection and intersection using the collinearity model, and we end with earthquake localization in geophysical computations. The 2D distance, azimuth, and angle observation models are grouped in what is called the variation of coordinates method. This method is considered the most common method in conventional 2D geodetic network adjustment of triangulation and trilateration (see Appendix). Further, the method is used in engineering applications such as the horizontal deformation monitoring of large structures: dams, towers, and bridges.
The reason to use this method is its suitability for programming, as mentioned earlier, because it takes a definite form based on the observation type and its relation to the fixed and unknown points in the network. The nonlinear observation equations depend on assuming

Adjustment Models in 3D Geomatics and Computational Geophysics https://doi.org/10.1016/B978-0-12-817588-0.00004-0


reasonable approximate coordinates of the unknown stations that are observed by distances, azimuth directions, and angles. In the literature, the approximate coordinates should not deviate from their correct values by more than about 1 minute of arc for azimuth directions and 1/4000 for distances; otherwise, the solution could diverge to incorrect results during the adjustment iterations. Other techniques, such as the 3D intersection and 3D resection using oblique angles proposed by the book author, are also presented in this chapter; they are robust in converging to optimal solutions. Finally, we should emphasize the shortcoming of this adjustment technique: the inability to model the a priori accuracy (weights) of the known parameters (like GCP coordinates), which are therefore assumed fixed, or error-free. This is not practical in many applications, and therefore a more advanced adjustment technique can be adopted, as will be presented in the next chapters.

4.2 OBSERVATION MODELS IN 2D

Although the book is intended for 3D adjustment computations, there is also a need to know the observation models of 2D problems such as observed horizontal distances and azimuth angles.

4.2.1 Observation Model of a 2D Distance

Given (Fig. 4.1):
– Two stations: I(Xi, Yi) and J(Xj, Yj)
– Observed distance Sij

FIG. 4.1 2D distance observation model derivation.

For the observation model of a 2D distance Sij:

Sij² = (Xj - Xi)² + (Yj - Yi)²  (4.1)


Apply the derivation:

2Sij dSij = 2(Xj - Xi)(dXj - dXi) + 2(Yj - Yi)(dYj - dYi)  (4.2)

dSij = (Xj - Xi)(dXj - dXi)/Sij + (Yj - Yi)(dYj - dYi)/Sij  (4.3)

Assuming a small error in the coordinates and the observed distance:

dS ≈ δS = So - Sc = observed - computed  (4.4)

Knowing that sin φ = ΔX/S and cos φ = ΔY/S, the equation simplifies to:

δSij = (δXj - δXi) sin φij + (δYj - δYi) cos φij  (4.5)

Then the observation model for a 2D distance is:

vij = -Kij δXi - Lij δYi + Kij δXj + Lij δYj - δSij  (4.6)

Eq. (4.6) represents the generic observation model of distances, where:
Kij = sin φij = ΔXij/Sij
Lij = cos φij = ΔYij/Sij
δSij: the difference between the observed and computed distance.
vij: the residual error.
δXi, δYi, δXj, δYj: the corrections to the approximate coordinates.
The summarized adjustment procedure steps using the variation of coordinates are [9]:
– Start the adjustment with proper initial coordinates of the unknown points.
– Compute the distances, azimuth directions, or angles from the initial coordinates.
– Prepare the suitable observation equation model (4.12), (4.11), or (4.6) based on the adjustment problem.
– Prepare the least squares adjustment to compute the correction values for every unknown station point.
– Compute the adjusted coordinates by adding the corrections to the initial coordinates.
– Check the correctness of the adjustment by inspecting the variance-covariance matrix and the ellipses of errors.

EXAMPLE 4.1
Given
In an intersection problem, three distances are observed from three control (fixed) stations A, B, and C to an unknown point P as shown in Fig. 4.2.


FIG. 4.2 2D intersection problem with distances.

Required Set up the observation equation model necessary to apply the least squares adjustment of this problem.

Solution
Because A, B, and C are fixed (they play the role of point i in the generic distance observation model of Eq. 4.6), we have the following reduced observation equation:

vij = Kij δXj + Lij δYj - δSij

where Kij = sin φij = ΔXij/Sij and Lij = cos φij = ΔYij/Sij. Therefore, for the three observed distances:

vAP = KAP δXP + LAP δYP - δSAP
vBP = KBP δXP + LBP δYP - δSBP
vCP = KCP δXP + LCP δYP - δSCP

Or in matrix form, as discussed in Chapter 3 (v = BΔ - F):

[vSAP; vSBP; vSCP] = [ (xP0 - xA)/SAPc  (yP0 - yA)/SAPc
                       (xP0 - xB)/SBPc  (yP0 - yB)/SBPc
                       (xP0 - xC)/SCPc  (yP0 - yC)/SCPc ] · [dxP; dyP] - [SAP - SAPc; SBP - SBPc; SCP - SCPc]

where SAP, SBP, SCP are observed distances. SAPc, SBPc, SCPc are the computed distances.
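The iterative solution of this intersection can be sketched as a short Gauss–Newton loop. The coordinates below are hypothetical (the book's own examples use MATLAB); the observed distances are simulated from a known true position so convergence can be checked:

```python
import numpy as np

# Hypothetical fixed stations A, B, C and unknown point P
stations = np.array([[0.0, 0.0], [1000.0, 0.0], [400.0, 800.0]])
P_true = np.array([450.0, 350.0])
S_obs = np.linalg.norm(stations - P_true, axis=1)    # simulated observations

P = np.array([500.0, 300.0])                         # rough initial coordinates
for _ in range(10):
    diff = P - stations                              # rows: P - station i
    S_c = np.linalg.norm(diff, axis=1)               # computed distances
    B = diff / S_c[:, None]                          # partials w.r.t. (dxP, dyP)
    F = S_obs - S_c                                  # observed - computed
    delta = np.linalg.lstsq(B, F, rcond=None)[0]     # least squares corrections
    P = P + delta
```

Each iteration rebuilds B and F from the current approximate coordinates, exactly as in the variation-of-coordinates procedure listed above.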

4.2 OBSERVATION MODELS IN 2D

93

EXAMPLE 4.2
Given
A trilateration 2D geodetic network ABCD is shown in Fig. 4.3, where the following information is given:
– Observed distances: AB, BC, CD, AC, and BD.
– Fixed points: A and D.

FIG. 4.3 2D trilateration net.

Required Set up the observation equation model for the least squares adjustment.

Solution
– The first step is to prepare the initial coordinates of points B and C.
– Apply the LS adjustment as explained in Section 4.2.1. The observation equation is v = BΔ - F, or in more detail:

[v1; v2; v3; v4; v5] =
[ (XB0 - XA)/ABc      (YB0 - YA)/ABc      0                  0
 -(XC0 - XB0)/BCc    -(YC0 - YB0)/BCc    (XC0 - XB0)/BCc    (YC0 - YB0)/BCc
  0                   0                 -(XD - XC0)/CDc    -(YD - YC0)/CDc
  0                   0                  (XC0 - XA)/ACc     (YC0 - YA)/ACc
 -(XD - XB0)/BDc     -(YD - YB0)/BDc     0                  0 ]
· [δXB; δYB; δXC; δYC] - [ABo - ABc; BCo - BCc; CDo - CDc; ACo - ACc; BDo - BDc]

where c refers to computed distances, o refers to observed distances, and 0 refers to initial coordinates.

94

4. OBSERVATION MODELS AND LEAST SQUARES ADJUSTMENT

4.2.2 Observation Model of Azimuth Direction

Given, as shown in Fig. 4.4:
– Two stations: I(Xi, Yi) and J(Xj, Yj)
– Observed azimuth direction φij
In this case of an observed azimuth (full-circle direction), the derivation is based on the following known equation:

tan φij = (Xj - Xi) / (Yj - Yi)  (4.7)

FIG. 4.4 Azimuth observation model derivation.

The derivation is applied to Eq. (4.7) as follows:

sec²φij dφij = [(Yj - Yi)(dXj - dXi) - (Xj - Xi)(dYj - dYi)] / (Yj - Yi)²  (4.8)

δφij = (Yj - Yi)(δXj - δXi)/Sij² - (Xj - Xi)(δYj - δYi)/Sij²  (4.9)

then

δφij = [(Yj - Yi)/Sij²](δXj - δXi) - [(Xj - Xi)/Sij²](δYj - δYi)  (4.10)

Accordingly, the general observation model of azimuth is:

vij = -Pij δXi + Qij δYi + Pij δXj - Qij δYj - δφij  (4.11)

where

Pij = ρ″ cos φij / Sij = ρ″ (Yj - Yi)/Sij²
Qij = ρ″ sin φij / Sij = ρ″ (Xj - Xi)/Sij²


δφij = (φo - φc)ij = observed - computed
vij: the residual error.
δXi, δYi, δXj, δYj: the corrections to the approximate coordinates.
ρ″ = 1/sin(1″) ≅ 206265″

EXAMPLE 4.3
For the same problem as Example 4.1, we assume the azimuth directions α are observed instead of the distances.

Required What will be the observation equation model for adjustment?

Solution
Using observation equation model (4.11), and realizing that i is the fixed point and j is the unknown point, we get the following:

[vαAP; vαBP; vαCP] = ρ″ [ (yP0 - yA)/(APc)²   -(xP0 - xA)/(APc)²
                          (yP0 - yB)/(BPc)²   -(xP0 - xB)/(BPc)²
                          (yP0 - yC)/(CPc)²   -(xP0 - xC)/(CPc)² ] · [dxP; dyP] - [(αAPo - αAPc); (αBPo - αBPc); (αCPo - αCPc)]

where c refers to computed observations, o refers to observed azimuths, and 0 refers to initial coordinates.
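The coefficients and signs of the azimuth model in Eq. (4.11) can be verified numerically against a finite-difference derivative. A NumPy sketch (the book uses MATLAB) with hypothetical station coordinates:

```python
import numpy as np

RHO = 206265.0  # rho'' = 1 / sin(1''), arc-seconds per radian

def azimuth(i, j):
    """Azimuth of the line i->j, measured clockwise from the Y (north) axis."""
    return np.arctan2(j[0] - i[0], j[1] - i[1])

# Hypothetical fixed station i and approximate unknown station j
sta_i = np.array([1000.0, 2000.0])
sta_j = np.array([1450.0, 2600.0])
dX, dY = sta_j - sta_i
S2 = dX**2 + dY**2

Pij = RHO * dY / S2   # coefficient of delta-Xj in Eq. (4.11)
Qij = RHO * dX / S2   # coefficient of delta-Yj in Eq. (4.11)

# Finite-difference check of the linearization (results in arc-seconds)
eps = 1e-4
num_P = (azimuth(sta_i, sta_j + [eps, 0]) - azimuth(sta_i, sta_j)) / eps * RHO
num_Q = -(azimuth(sta_i, sta_j + [0, eps]) - azimuth(sta_i, sta_j)) / eps * RHO
```

The numerical derivatives match Pij and Qij, confirming that a shift of the unknown station in X changes the azimuth by +Pij arc-seconds per meter and a shift in Y by -Qij.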

4.2.3 Observation Model of an Angle

In the case of an observed angle, we can use two azimuth observation equations of the form of Eq. (4.11), because the angle θi(jk) results from the subtraction of two azimuth directions (Fig. 4.5).

FIG. 4.5 Derivation of observation model of angle arranged in a clockwise direction.


v_{ij} = -P_{ij}\,\delta X_i + Q_{ij}\,\delta Y_i + P_{ij}\,\delta X_j - Q_{ij}\,\delta Y_j - \delta\varphi_{ij}
v_{ik} = -P_{ik}\,\delta X_i + Q_{ik}\,\delta Y_i + P_{ik}\,\delta X_k - Q_{ik}\,\delta Y_k - \delta\varphi_{ik}

Then, the observation model of an angle θi(jk) will be:

v_{i(jk)} = (P_{ij} - P_{ik})\,\delta X_i + (Q_{ik} - Q_{ij})\,\delta Y_i - P_{ij}\,\delta X_j + Q_{ij}\,\delta Y_j + P_{ik}\,\delta X_k - Q_{ik}\,\delta Y_k - (\theta_o - \theta_c)_{i(jk)}   (4.12)
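Since θi(jk) is the difference of two azimuths, the δXi coefficient of Eq. (4.12), (Pij − Pik), can be checked against a numerical derivative of the angle. A small Python sketch with made-up coordinates (an illustration only):

```python
import math

RHO = 206265.0  # rho'' = 1/sin(1'')

def az(xi, yi, xj, yj):
    """Azimuth of line i->j, clockwise from north (+Y), radians."""
    return math.atan2(xj - xi, yj - yi) % (2 * math.pi)

def angle(xi, yi, xj, yj, xk, yk):
    """Clockwise angle at i from direction i->j to direction i->k."""
    return (az(xi, yi, xk, yk) - az(xi, yi, xj, yj)) % (2 * math.pi)

def p_q(xi, yi, xj, yj):
    s2 = (xj - xi) ** 2 + (yj - yi) ** 2
    return RHO * (yj - yi) / s2, RHO * (xj - xi) / s2

# made-up geometry: occupied point i, targets j and k
xi, yi, xj, yj, xk, yk = 50.0, 80.0, 300.0, 400.0, 500.0, 120.0
pij, qij = p_q(xi, yi, xj, yj)
pik, qik = p_q(xi, yi, xk, yk)
d = 0.001
num = (angle(xi + d, yi, xj, yj, xk, yk) - angle(xi, yi, xj, yj, xk, yk)) * RHO / d
```

The finite-difference slope `num` (seconds per metre of δXi) agrees with (Pij − Pik), confirming the linearization.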

EXAMPLE 4.4 Given Three stations A, B, and C are intersected by four observed angles at point P as shown in Fig. 4.6.

FIG. 4.6 2D intersection problem with four observed angles.

Required Prepare the observation equations necessary to solve the LS adjustment problem.

Solution
Based on Eq. (4.12), with the occupied stations fixed, the observation model is simplified as follows:

v_{i(jk)} = -P_{ij}\,\delta X_j + Q_{ij}\,\delta Y_j + P_{ik}\,\delta X_k - Q_{ik}\,\delta Y_k - (\theta_o - \theta_c)_{i(jk)}

The observed angles are arranged (clockwise direction) as follows, where i represents the occupied point during angle observation (Table 4.1):

TABLE 4.1 The Observed Angles Arrangement

Angle   j   i   k
θ1      P   A   B
θ2      A   B   P
θ3      P   B   C
θ4      B   C   P

The observation equations model will be as follows:

\begin{bmatrix} v_{\theta A(PB)} \\ v_{\theta B(AP)} \\ v_{\theta B(PC)} \\ v_{\theta C(BP)} \end{bmatrix} =
\begin{bmatrix}
-\rho''\dfrac{y_P^o - y_A}{(AP^o)^2} & \rho''\dfrac{x_P^o - x_A}{(AP^o)^2} \\
\rho''\dfrac{y_P^o - y_B}{(BP^o)^2} & -\rho''\dfrac{x_P^o - x_B}{(BP^o)^2} \\
-\rho''\dfrac{y_P^o - y_B}{(BP^o)^2} & \rho''\dfrac{x_P^o - x_B}{(BP^o)^2} \\
\rho''\dfrac{y_P^o - y_C}{(CP^o)^2} & -\rho''\dfrac{x_P^o - x_C}{(CP^o)^2}
\end{bmatrix}
\begin{bmatrix} dx_P \\ dy_P \end{bmatrix}
-
\begin{bmatrix} (\theta_1^o - \theta_1^c)_{A(PB)} \\ (\theta_2^o - \theta_2^c)_{B(AP)} \\ (\theta_3^o - \theta_3^c)_{B(PC)} \\ (\theta_4^o - \theta_4^c)_{C(BP)} \end{bmatrix}

where P^o represents the initial coordinates of P.

EXAMPLE 4.5
Given
Three stations A, B, and C are used to observe point U with two observed angles θ2 and θ3, one distance S_CU, and one azimuth α_AU, as shown in Fig. 4.7.

FIG. 4.7 Mixed 2D observations.

Required Prepare the observation equations necessary to solve the LS adjustment problem.

Solution
Using the observation models of Eqs. (4.6), (4.11), and (4.12), we compose the following observation equations:

\begin{bmatrix} v_{S_{CU}} \\ v_{\theta B(AU)} \\ v_{\theta B(UC)} \\ v_{\alpha AU} \end{bmatrix} =
\begin{bmatrix}
\dfrac{x_U^o - x_C}{S_{CU}^c} & \dfrac{y_U^o - y_C}{S_{CU}^c} \\
\rho''\dfrac{y_U^o - y_B}{(S_{BU}^c)^2} & -\rho''\dfrac{x_U^o - x_B}{(S_{BU}^c)^2} \\
-\rho''\dfrac{y_U^o - y_B}{(S_{BU}^c)^2} & \rho''\dfrac{x_U^o - x_B}{(S_{BU}^c)^2} \\
\rho''\dfrac{y_U^o - y_A}{(S_{AU}^c)^2} & -\rho''\dfrac{x_U^o - x_A}{(S_{AU}^c)^2}
\end{bmatrix}
\begin{bmatrix} dx_U \\ dy_U \end{bmatrix}
-
\begin{bmatrix} S_{CU}^o - S_{CU}^c \\ (\theta_2^o - \theta_2^c)_{B(AU)} \\ (\theta_3^o - \theta_3^c)_{B(UC)} \\ \alpha_{AU}^o - \alpha_{AU}^c \end{bmatrix}

It should be noted that, with this kind of problem of dissimilar quantities, we need proper a priori weights.


EXAMPLE 4.6
Given
A 2D triangulation geodetic network ABCD is shown in Fig. 4.8. Given:
– Eight observed angles
– Fixed point coordinates of A and B

Required
Set up the observation equations for LS adjustment.

FIG. 4.8 2D triangulation net.

Solution
The least squares adjustment is applied by following observation Eq. (4.12):

v_{i(jk)} = (P_{ij} - P_{ik})\,\delta X_i + (Q_{ik} - Q_{ij})\,\delta Y_i - P_{ij}\,\delta X_j + Q_{ij}\,\delta Y_j + P_{ik}\,\delta X_k - Q_{ik}\,\delta Y_k - (\theta_o - \theta_c)_{i(jk)}

The following Table 4.2 illustrates the derivative parameters for every observed angle:

TABLE 4.2 The Observed Angles Arrangement

Angle   j   i   k
1       B   A   C
2       D   B   A
3       C   B   D
4       A   C   B
5       D   C   A
6       B   D   C
7       A   D   B
8       C   A   D


The observation equations will be as follows:

\begin{bmatrix} v_1 \\ v_2 \\ v_3 \\ v_4 \\ v_5 \\ v_6 \\ v_7 \\ v_8 \end{bmatrix} =
\begin{bmatrix}
\rho''\dfrac{Y_C^o - Y_A}{CA^2} & -\rho''\dfrac{X_C^o - X_A}{CA^2} & 0 & 0 \\
0 & 0 & -\rho''\dfrac{Y_D^o - Y_B}{DB^2} & \rho''\dfrac{X_D^o - X_B}{DB^2} \\
-\rho''\dfrac{Y_C^o - Y_B}{CB^2} & \rho''\dfrac{X_C^o - X_B}{CB^2} & \rho''\dfrac{Y_D^o - Y_B}{DB^2} & -\rho''\dfrac{X_D^o - X_B}{DB^2} \\
\rho''\left(\dfrac{Y_A - Y_C^o}{CA^2} - \dfrac{Y_B - Y_C^o}{CB^2}\right) & \rho''\left(\dfrac{X_B - X_C^o}{CB^2} - \dfrac{X_A - X_C^o}{CA^2}\right) & 0 & 0 \\
\rho''\left(\dfrac{Y_D^o - Y_C^o}{CD^2} - \dfrac{Y_A - Y_C^o}{CA^2}\right) & \rho''\left(\dfrac{X_A - X_C^o}{CA^2} - \dfrac{X_D^o - X_C^o}{CD^2}\right) & -\rho''\dfrac{Y_D^o - Y_C^o}{CD^2} & \rho''\dfrac{X_D^o - X_C^o}{CD^2} \\
\rho''\dfrac{Y_C^o - Y_D^o}{CD^2} & -\rho''\dfrac{X_C^o - X_D^o}{CD^2} & \rho''\left(\dfrac{Y_B - Y_D^o}{DB^2} - \dfrac{Y_C^o - Y_D^o}{CD^2}\right) & \rho''\left(\dfrac{X_C^o - X_D^o}{CD^2} - \dfrac{X_B - X_D^o}{DB^2}\right) \\
0 & 0 & \rho''\left(\dfrac{Y_A - Y_D^o}{DA^2} - \dfrac{Y_B - Y_D^o}{DB^2}\right) & \rho''\left(\dfrac{X_B - X_D^o}{DB^2} - \dfrac{X_A - X_D^o}{DA^2}\right) \\
-\rho''\dfrac{Y_C^o - Y_A}{CA^2} & \rho''\dfrac{X_C^o - X_A}{CA^2} & \rho''\dfrac{Y_D^o - Y_A}{DA^2} & -\rho''\dfrac{X_D^o - X_A}{DA^2}
\end{bmatrix}
\begin{bmatrix} \delta X_C \\ \delta Y_C \\ \delta X_D \\ \delta Y_D \end{bmatrix}
-
\begin{bmatrix} \theta_1^o - \theta_1^c \\ \theta_2^o - \theta_2^c \\ \theta_3^o - \theta_3^c \\ \theta_4^o - \theta_4^c \\ \theta_5^o - \theta_5^c \\ \theta_6^o - \theta_6^c \\ \theta_7^o - \theta_7^c \\ \theta_8^o - \theta_8^c \end{bmatrix}

where CA, CB, DB, CD, and DA represent the computed distances.

EXAMPLE 4.7
Given
In a 2D angular resection example, three angles are observed between fixed points A, B, C, and D to determine the coordinates of the occupied point P (Fig. 4.9). The observation equation model 4.12 is used, where points j and k are fixed in the resection case. The observation model is reduced to:

v_{i(jk)} = \delta X_i\,(P_{ij} - P_{ik}) + \delta Y_i\,(Q_{ik} - Q_{ij}) - (\theta_o - \theta_c)_{i(jk)}   (4.13)

where P and Q are previously defined in Section 4.2.2.


FIG. 4.9 2D resection problem.

Required Set up the observation equations for LS adjustment.

Solution The observation equations system in matrix form is: 2 2 3 2 00 3 00 3 a1 b1   δθ1 vθ1 4 vθ2 00 5 ¼ 4 a2 b2 5 δX  4 δθ2 00 5 δY 00 00 a3 b3 vθ3 δθ3 The partial derivatives are as follows where α is the azimuth direction:   cos αPB cosαPA sinαPA sin αPB a1 ¼ ρ00 b1 ¼ ρ00   SPB SPA SPA SPB   cos αPA cos αPC sinαPC sin αPA a2 ¼ ρ00 b2 ¼ ρ00   SPA SPC SPC SPA   cos αPD cos αPB sin αPB sin αPD b3 ¼ ρ00   a3 ¼ ρ00 SPD SPB SPB SPD It should be noted that the ordering of i, j, and k in a clockwise direction should be carefully followed to ensure proper computations.

The author would like to point out that in Appendix A a full MATLAB code for 2D geodetic networks adjustment is given.

4.3 OBSERVATION MODEL OF 3D DISTANCES

When a distance between point i and point j is measured in 3D space, the mathematical model is simply an extension of the 2D distance model (Eq. 4.6) in terms of the Cartesian coordinates as:

S_{ij}^2 = (X_i - X_j)^2 + (Y_i - Y_j)^2 + (Z_i - Z_j)^2   (4.14)

Similar to the derivation presented in Section 4.2, the observation model of the 3D distance is:

v_{ij} = \dfrac{X_i - X_j}{S_{ij}}\,\delta X_i + \dfrac{Y_i - Y_j}{S_{ij}}\,\delta Y_i + \dfrac{Z_i - Z_j}{S_{ij}}\,\delta Z_i - \dfrac{X_i - X_j}{S_{ij}}\,\delta X_j - \dfrac{Y_i - Y_j}{S_{ij}}\,\delta Y_j - \dfrac{Z_i - Z_j}{S_{ij}}\,\delta Z_j - \delta S_{ij}   (4.15)

where
S_{ij}: the computed distance based on the approximate coordinates of the unknown points.
\delta S_{ij}: the difference between the observed and the computed distance.
\delta X, \delta Y, \delta Z: the corrections to unknown points computed from the LS adjustment.

EXAMPLE 4.8 Given Five distances S1, S2, S3, S4, and S5 are measured from fixed points A, B, C, D, and E to an unknown point P (Fig. 4.10 and Table 4.3). – The approximate coordinates XYZ of P are: 2746, 91874, 512, respectively.

FIG. 4.10 3D intersection by distances.

TABLE 4.3 Five Distances Measured in Space from Five GCPs to a Target Point P

GCP   X[m]      Y[m]       Z[m]     Distances [m]
A     2742.87   91874.15   510.56   3.71
B     2751.59   91872.49   513.76   6.17
C     2738.08   91886.95   518.11   15.90
D     2715.23   91852.36   515.39   37.82
E     2724.96   91882.74   516.28   22.90


Required Compute the intersection coordinates of P and its standard deviations.

Solution
The observation equations system is developed as:

\begin{bmatrix} v_{S_{AP}} \\ v_{S_{BP}} \\ v_{S_{CP}} \\ v_{S_{DP}} \\ v_{S_{EP}} \end{bmatrix} =
\begin{bmatrix}
\dfrac{X_p^o - X_A}{S_{AP}} & \dfrac{Y_p^o - Y_A}{S_{AP}} & \dfrac{Z_p^o - Z_A}{S_{AP}} \\
\dfrac{X_p^o - X_B}{S_{BP}} & \dfrac{Y_p^o - Y_B}{S_{BP}} & \dfrac{Z_p^o - Z_B}{S_{BP}} \\
\dfrac{X_p^o - X_C}{S_{CP}} & \dfrac{Y_p^o - Y_C}{S_{CP}} & \dfrac{Z_p^o - Z_C}{S_{CP}} \\
\dfrac{X_p^o - X_D}{S_{DP}} & \dfrac{Y_p^o - Y_D}{S_{DP}} & \dfrac{Z_p^o - Z_D}{S_{DP}} \\
\dfrac{X_p^o - X_E}{S_{EP}} & \dfrac{Y_p^o - Y_E}{S_{EP}} & \dfrac{Z_p^o - Z_E}{S_{EP}}
\end{bmatrix}
\begin{bmatrix} \Delta X_P \\ \Delta Y_P \\ \Delta Z_P \end{bmatrix}
-
\begin{bmatrix} S_{AP}^o - S_{AP} \\ S_{BP}^o - S_{BP} \\ S_{CP}^o - S_{CP} \\ S_{DP}^o - S_{DP} \\ S_{EP}^o - S_{EP} \end{bmatrix}

Accordingly, the least squares adjustment of observations is applied as follows, where the observation matrices of the first and the fifth iterations are listed for clarity.

B1 =
[  0.9076   −0.0435    0.4176
  −0.9237    0.2495   −0.2908
   0.4840   −0.7914   −0.3734
   0.8147    0.5729   −0.0898
   0.9076   −0.3770   −0.1846 ]

F1 = [ 0.2579   0.1161   −0.4650   0.0490   −0.2858 ]ᵀ

B5 =
[  0.8304    0.0403    0.5558
  −0.9352    0.2971   −0.1925
   0.4930   −0.7974   −0.3479
   0.8111    0.5801   −0.0741
   0.9153   −0.3690   −0.1613 ]

F5 = [ 0.0550   0.0862   0.0315   0.0056   0.0162 ]ᵀ

Similarly, the normal equation matrices are also shown as:

N1 =
[  0.3462    0.2124   −0.2853
   0.2124    1.0845   −0.6580
  −0.2853   −0.6580    2.7497 ]

t1 = [ −0.3178   0.5216   0.2959 ]ᵀ

N5 =
[  0.3599    0.2169   −0.3021
   0.2169    1.0708   −0.6709
  −0.3021   −0.6709    2.5139 ]

t5 = [ 2.32E−06   2.48E−06   5.30E−06 ]ᵀ

The adjustment is iterated and stopped at the fifth iteration, where the corrections become negligible, as shown in Fig. 4.11.

FIG. 4.11 Convergence of corrections in Example 4.8.

The variance of unit weight is computed as 0.006 m², and the variances of the adjusted coordinates are then computed as follows:

Xp = 2745.90 ± 0.05 m
Yp = 91874.30 ± 0.08 m
Zp = 512.59 ± 0.12 m

MATLAB code

clc,clear,close all
%%% input GCP coordinates
point=[2742.87 91874.15 510.56
       2751.59 91872.49 513.76
       2738.08 91886.95 518.11
       2715.23 91852.36 515.39
       2724.96 91882.74 516.28];
% approximate P coordinates
P=[2746, 91874, 512];
% measured distances
Dis=[3.71;6.17;15.90;37.82;22.90];
format bank
%%%%%%%%%%%%%%%%%%%%%%%%%%%
[P,point,DD,Adjusted_point,vcov]=threeD_intersection_by_dist(point,P,Dis);
%%%%%%%%%%%%%%%%%%%%%%%%%%%
DD=DD';
figure
plot((1:size(DD,1)),DD(:,1),'-ro','Markerfacecolor','r','LineWidth',3),hold on
plot((1:size(DD,1)),DD(:,2),'-b^','Markerfacecolor','b','LineWidth',3),hold on
plot((1:size(DD,1)),DD(:,3),'--ms','Markerfacecolor','k','LineWidth',3),hold on
title('CONVERGENCE OF CORRECTIONS DX,DY,DZ')
xlabel('NUMBER OF ITERATIONS')
ylabel('CORRECTIONS')
grid on
legend('DX','DY','DZ')
%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [P,point,DD,result,vcov]=threeD_intersection_by_dist(point,P,Dis)
DD=zeros(3,1);
D=ones(3,1);
while max(abs(D))>.0001
    for i=1:size(point,1)
        dx1(i,1)=-(point(i,1)-P(1,1));
        dy1(i,1)=-(point(i,2)-P(1,2));
        dz1(i,1)=-(point(i,3)-P(1,3));
        lp1(i,1)=((point(i,1)-P(1,1))^2+(point(i,2)-P(1,2))^2+(point(i,3)-P(1,3))^2)^(1/2);
    end
    for i=1:size(point,1)
        bx1=dx1(i,1)/lp1(i,1);
        by1=dy1(i,1)/lp1(i,1);
        bz1=dz1(i,1)/lp1(i,1);
        F(i,1)=Dis(i,1)-lp1(i,1);
        B(i,1)=bx1; B(i,2)=by1; B(i,3)=bz1;
    end
    N=inv(B'*B);  % inverse of the normal equation matrix
    T=B'*F;       % normal equation vector
    D=N*T;        % corrections
    DD=[DD,D];
    P(1,1)=P(1,1)+D(1,1);
    P(1,2)=P(1,2)+D(2,1);
    P(1,3)=P(1,3)+D(3,1);
end
V=B*D-F;                            % residuals
sigma=(V'*V)/(size(B,1)-size(B,2)); % variance of unit weight
DD(:,1)=[];
vcov=sigma*(N);                     % covariance matrix
result=P'
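The same Gauss-Newton loop can be cross-checked outside MATLAB; below is a minimal Python/NumPy sketch (an illustration, not the book's code) that reproduces the adjusted coordinates of Example 4.8:

```python
import numpy as np

# Ground control points (X, Y, Z) and the distances measured to P (Table 4.3)
gcp = np.array([
    [2742.87, 91874.15, 510.56],
    [2751.59, 91872.49, 513.76],
    [2738.08, 91886.95, 518.11],
    [2715.23, 91852.36, 515.39],
    [2724.96, 91882.74, 516.28],
])
dist = np.array([3.71, 6.17, 15.90, 37.82, 22.90])

def intersect_3d(gcp, dist, p0, tol=1e-4):
    """Iterative least squares, v = B*delta - F, for the 3D distance model (Eq. 4.15)."""
    p = np.asarray(p0, float)
    while True:
        diff = p - gcp                     # (Xp-Xi, Yp-Yi, Zp-Zi) per GCP
        s = np.linalg.norm(diff, axis=1)   # computed distances
        B = diff / s[:, None]              # design matrix (partials of S wrt P)
        F = dist - s                       # observed - computed
        delta, *_ = np.linalg.lstsq(B, F, rcond=None)
        p += delta
        if np.max(np.abs(delta)) < tol:
            return p

p = intersect_3d(gcp, dist, [2746.0, 91874.0, 512.0])
print(p)  # close to the book's adjusted values 2745.90, 91874.30, 512.59
```

The least-squares step is written with `lstsq` rather than the explicit normal-equation inverse; both give the same corrections.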

4.4 OBSERVATION MODEL OF VERTICAL ANGLES

The mathematical model to compute a vertical or zenith angle is based on the trigonometric function as illustrated in Fig. 4.12:

\alpha_{1p} = \tan^{-1}\dfrac{Z_p - Z_1}{\left[(X_p - X_1)^2 + (Y_p - Y_1)^2\right]^{1/2}}   (4.16)

FIG. 4.12 Vertical angle observation model.

Therefore, the derivatives of Eq. (4.16) with respect to the unknown point P are [9]:

\dfrac{\partial\alpha_{1p}}{\partial X_p} = -\rho''\,\dfrac{(X_1 - X_P)(Z_1 - Z_P)}{\left[(X_1 - X_P)^2 + (Y_1 - Y_P)^2\right]^{1/2}\,S_{1p}^2} = -\rho''\,\dfrac{\sin\alpha_{1p}\,\sin Az_{1p}}{S_{1p}}   (4.17)

\dfrac{\partial\alpha_{1p}}{\partial Y_p} = -\rho''\,\dfrac{(Y_1 - Y_P)(Z_1 - Z_P)}{\left[(X_1 - X_P)^2 + (Y_1 - Y_P)^2\right]^{1/2}\,S_{1p}^2} = -\rho''\,\dfrac{\sin\alpha_{1p}\,\cos Az_{1p}}{S_{1p}}   (4.18)

\dfrac{\partial\alpha_{1p}}{\partial Z_P} = \rho''\,\dfrac{\left[(X_1 - X_P)^2 + (Y_1 - Y_P)^2\right]^{1/2}}{S_{1p}^2} = \rho''\,\dfrac{\cos\alpha_{1p}}{S_{1p}}   (4.19)

For point 1, the partial derivatives are simply calculated as:

\dfrac{\partial\alpha_{1p}}{\partial X_1} = -\dfrac{\partial\alpha_{1p}}{\partial X_P}, \quad \dfrac{\partial\alpha_{1p}}{\partial Y_1} = -\dfrac{\partial\alpha_{1p}}{\partial Y_p}, \quad \dfrac{\partial\alpha_{1p}}{\partial Z_1} = -\dfrac{\partial\alpha_{1p}}{\partial Z_P}
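The closed forms of Eqs. (4.17)–(4.19) can be verified against a numerical derivative; a small Python sketch with made-up coordinates (an illustration only, working in radians, i.e. without ρ''):

```python
import math

def vertical_angle(p1, p2):
    """Vertical angle from point 1 to point 2 (Eq. 4.16)."""
    dx, dy, dz = (p2[i] - p1[i] for i in range(3))
    return math.atan2(dz, math.hypot(dx, dy))

# made-up station 1 and target P
p1 = (100.0, 200.0, 50.0)
pp = (400.0, 500.0, 120.0)
dx, dy, dz = (pp[i] - p1[i] for i in range(3))
s = math.sqrt(dx * dx + dy * dy + dz * dz)   # slant distance S_1p
alpha = vertical_angle(p1, pp)
azim = math.atan2(dx, dy)                    # azimuth Az_1p from 1 to P

# Eq. (4.17) without rho'': d(alpha)/d(Xp) = -sin(alpha) sin(Az) / S
analytic = -math.sin(alpha) * math.sin(azim) / s
d = 1e-4
numeric = (vertical_angle(p1, (pp[0] + d, pp[1], pp[2])) - alpha) / d
```

The finite-difference slope agrees with the analytic partial, including the sign: moving P east (with Az in the first quadrant) lowers the vertical angle.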

EXAMPLE 4.9
Given (Table 4.4)
– Observed horizontal angles θ1, θ2, θ3, θ4, θ5, θ6, θ7, θ8 (Fig. 4.13)
– Observed zenith angles α1, α2, α3, α4, α5

TABLE 4.4 Observed Horizontal and Zenith Angles

Zenith angles        Horizontal angles
α1    A-P            θ2    PAB
α2    B-P            θ3    PCA
α3    C-P            θ4    CAP
α4    D-P            θ5    CDP
α5    E-P            θ6    ECP
                     θ7    PEC

FIG. 4.13 Combined 3D intersection problem.

The observation equation model is developed for the horizontal angles as in Section 4.2.3 and then followed by the vertical angles as shown:

The system in matrix form is v = B Δ − F:

\begin{bmatrix} v_{\theta_1} \\ \vdots \\ v_{\theta_7} \\ v_{\alpha_1} \\ \vdots \\ v_{\alpha_5} \end{bmatrix} =
B \begin{bmatrix} \delta X_P \\ \delta Y_P \\ \delta Z_P \end{bmatrix} -
\begin{bmatrix} \theta_1^o - \theta_1^c \\ \vdots \\ \theta_7^o - \theta_7^c \\ \alpha_1^o - \alpha_1^c \\ \vdots \\ \alpha_5^o - \alpha_5^c \end{bmatrix}

Each horizontal-angle row of B follows Eq. (4.12) with a zero δZ_P coefficient. For an angle occupied at station i (i = B, A, C, A, D, C, E for θ1, …, θ7), the row has the form

\left[\ \pm\rho''\dfrac{Y_i - Y_P^o}{S_{iP}^2} \quad \mp\rho''\dfrac{X_i - X_P^o}{S_{iP}^2} \quad 0\ \right]

with the sign taken according to whether P plays the role of j or k in the observed angle. Each vertical-angle row, for station k = A, B, C, D, E, follows Eqs. (4.17)–(4.19):

\left[\ -\rho''\dfrac{\sin\alpha_k \sin Az_k}{S_{kP}} \quad -\rho''\dfrac{\sin\alpha_k \cos Az_k}{S_{kP}} \quad \rho''\dfrac{\cos\alpha_k}{S_{kP}}\ \right]

EXAMPLE 4.10
Four GCPs are occupied by a total station where eight distances and four enclosed angles are observed to two unknown points P1 and P2, as given by the following table:

From   at   to   θ°      side   distance [m]   side   distance [m]
P2     A    P1   6.17    A_P1   6.08           A_P2   9.44
P1     B    P2   12.80   B_P1   15.86          B_P2   15.61
P2     C    P1   3.44    C_P1   37.81          C_P2   35.53
P1     D    P2   3.90    D_P1   22.88          D_P2   20.67

Height difference: Δh_P1_P2 = 5.5 m


The approximate coordinates of P1 and P2 are as follows:

      X[m]   Y[m]    Z[m]
P1    2746   91873   516
P2    2742   91873   510

For weighting these dissimilar observations, we also need to consider weights based on their standard deviations:
– standard deviation of the measured angles is 25″
– standard deviation of the measured distances is 5 mm
– standard deviation of the measured height difference is 10 mm

Required Compute the adjusted coordinates of points P1 and P2 using the adjustment model with observations. Note: the data are synthetic and only prepared for illustration purposes.

Solution
The observation equations are prepared as presented in this chapter, where we have three different kinds of mathematical models: the distance model (Eq. 4.6), the angle model (Eq. 4.12), and the height difference model, which is a simple linear model (h_P1 − h_P2 = 5.5 m). The structure of the observation equations in matrix form will be as follows.


The observation equation matrices B and F for the first two iterations are listed as follows:

The normal equation matrices N and t of the first two iterations with the corrections vector Δ are also listed for verification:

The least squares adjustment is iterated and stopped at the sixth iteration where the corrections became negligible as shown in Fig. 4.14.
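With dissimilar observation types, the a priori weights enter the normal equations as W = diag(1/σᵢ²), so that Δ = (BᵀWB)⁻¹BᵀWF. A generic Python/NumPy sketch of this weighting (synthetic numbers, not this example's matrices):

```python
import numpy as np

# synthetic mixed observations: design matrix B, misclosure vector F, and
# per-observation standard deviations (units differ per row, as with
# angles + distances + height differences)
B = np.array([[1.0, 0.5], [0.3, 2.0], [1.2, 0.1], [0.7, 0.9]])
F = np.array([0.12, -0.08, 0.05, 0.02])
sigma = np.array([25.0, 0.005, 0.005, 0.010])

W = np.diag(1.0 / sigma**2)                       # weight matrix
delta = np.linalg.solve(B.T @ W @ B, B.T @ W @ F)  # weighted normal equations

# equivalent formulation: scale each row by 1/sigma and solve ordinary LS
delta2, *_ = np.linalg.lstsq(B / sigma[:, None], F / sigma, rcond=None)
```

The two formulations are algebraically identical; the row-scaling form is often numerically preferable when the weights span many orders of magnitude.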

FIG. 4.14 Iterations and convergence of solution for Example 4.10.


EXAMPLE 4.11
Given
Three stations P1, P2, and P3 are used to observe four corner points 1, 2, 3, and 4 on a façade, as follows in Table 4.5:

TABLE 4.5 Three Stations and Four Corner Points on a Façade

Fixed   X[m]         Y[m]          Z[m]      Corner   Approx. X[m]   Approx. Y[m]   Approx. Z[m]
P1      356265.967   5787646.936   87.432    1        356268         5787660        87
P2      356262.534   5787647.786   87.386    2        356268         5787660        96
P3      356252.905   5787649.079   87.289    3        356262         5787662        96
                                             4        356262         5787662        87

Observed distances, azimuth angles, and height differences are:

Side    Distance [m]   Azimuth [deg.]   Height difference [m]
P1-1    13.232         9.22789          0.43
P1-2    15.811         8.95359          –
P1-3    17.867         345.45717        –
P2-1    13.390         24.49167         –
P2-2    15.970         24.21737         –
P2-3    16.717         358.05866        –
P2-4    14.239         357.90458        –
P3-3    18.046         35.52366         –
P3-4    15.860         35.24937         0.29

The standard deviations of the observations are: σD = 0.01 m; σAz = 0.0005 rad; σh = 0.01 m.

Required Use LS adjustment to find the adjusted coordinates of the four corners.


Solution The observation equation matrix B is constructed as follows:

The observation equation matrices B and F for the first iteration are computed as follows:


The corrections of the successive adjustment iterations are:

       Iter. 1   Iter. 2   Iter. 3   Iter. 4     Iter. 5     Iter. 6     Iter. 7     Iter. 8
DX1    0.086     0.000     0.000     7.09E−11    1.27E−11    1.27E−11    1.27E−11    1.27E−11
DY1    0.022     0.000     0.000     2.28E−10    2.28E−10    2.28E−10    2.28E−10    2.28E−10
DZ1    0.001     0.000     0.000     6.48E−13    6.33E−15    6.33E−15    6.33E−15    6.33E−15
DX2    0.034     0.014     0.006     2.61E−03    1.12E−03    4.79E−04    2.05E−04    8.77E−05
DY2    0.012     0.004     0.002     5.48E−04    1.86E−04    5.92E−05    1.67E−05    3.51E−06
DZ2    0.106     0.001     0.000     3.33E−04    2.14E−04    1.21E−04    6.44E−05    3.28E−05
DX3    0.069     0.021     0.007     2.06E−03    6.47E−04    2.04E−04    6.45E−05    2.04E−05
DY3    0.085     0.022     0.006     1.48E−03    3.56E−04    7.69E−05    1.31E−05    6.16E−07
DZ3    0.249     0.035     0.008     2.00E−03    4.52E−04    8.72E−05    9.85E−06    2.49E−06
DX4    0.031     0.000     0.000     6.85E−12    6.85E−12    6.85E−12    6.85E−12    6.85E−12
DY4    0.017     0.000     0.000     2.91E−10    2.91E−10    2.91E−10    2.91E−10    2.91E−10
DZ4    0.001     0.000     0.000     2.55E−12    3.26E−16    3.26E−16    3.26E−16    3.26E−16

For better illustration, Fig. 4.15 shows the corrections convergence of the four points.

FIG. 4.15 Convergence of corrections for Example 4.11.


As shown, the iterations continue until the solution converges to zero values, and the final adjusted coordinates are listed in Table 4.6 (see Fig. 4.16):

TABLE 4.6 The Final Adjusted Coordinates of Example 4.11

Point   X[m]         Y[m]          Z[m]     σx[m]   σy[m]   σz[m]
1       356268.086   5787659.977   87.001   0.021   0.029   0.042
2       356268.024   5787659.991   96.105   0.066   0.203   0.324
3       356262.053   5787661.933   96.220   0.025   0.069   0.120
4       356262.031   5787662.017   86.999   0.023   0.028   0.042

FIG. 4.16 Ellipsoid of errors representation for Example 4.11.

4.5 OBSERVATION MODEL OF 3D LINE INTERSECTION

The intersection of lines in 3D space is an interesting problem that can be solved by different least squares techniques. One technique is to intersect angles; another is to intersect three or more lines, each defined by:
– its cosine directions or unit vector n
– its origin point a


The most probable value of the intersection point p has a minimum distance from all the intersecting lines. Therefore, the following mathematical model is used for every line:

d^2 = (p - a)^T (p - a) - \left[(p - a)^T n\right]^2   (4.20)

For multiple intersecting lines we have:

\sum d_i^2 = \sum \left\{ (p - a_i)^T (p - a_i) - \left[(p - a_i)^T n_i\right]^2 \right\}   (4.21)

To minimize the previous equation, we differentiate with respect to p as:

\sum \left[ (p - a_i) - \left( (p - a_i)^T n_i \right) n_i \right] = 0   (4.22)

Or

\sum (p - a_i) - \sum n_i n_i^T (p - a_i) = 0   (4.23)

\sum \left( n_i n_i^T - I \right) p = \sum \left( n_i n_i^T - I \right) a_i   (4.24)

A line can be defined by the start and end point coordinates, from which the cosine direction vectors can be computed. The following steps can be applied to solve the least squares adjustment problem of Eq. (4.24).

Given:
– origin points: Aorigin, Borigin, and Corigin
– end points: Aend, Bend, and Cend
Compute the intersection point p.

Solution
Based on the MATLAB code presented by [78], the solution steps are:

(1) Compute the unit vector n_i of each line i as:

S_i = \begin{bmatrix} dx_i \\ dy_i \\ dz_i \end{bmatrix}, where dx_i = X_{end} - X_{origin}, dy_i = Y_{end} - Y_{origin}, dz_i = Z_{end} - Z_{origin}

nx_i = dx_i / \sqrt{dx_i^2 + dy_i^2 + dz_i^2}
ny_i = dy_i / \sqrt{dx_i^2 + dy_i^2 + dz_i^2}
nz_i = dz_i / \sqrt{dx_i^2 + dy_i^2 + dz_i^2}

(2) Evaluate the normal equation system for each 3D line as:

F_i = \begin{bmatrix} F_{x_i} \\ F_{y_i} \\ F_{z_i} \end{bmatrix} =
\begin{bmatrix}
nx_i^2 - 1 & nx_i\,ny_i & nx_i\,nz_i \\
nx_i\,ny_i & ny_i^2 - 1 & ny_i\,nz_i \\
nx_i\,nz_i & ny_i\,nz_i & nz_i^2 - 1
\end{bmatrix}
\begin{bmatrix} X_{origin} \\ Y_{origin} \\ Z_{origin} \end{bmatrix}

Then, the normal equation system is formed in matrices as:

\begin{bmatrix}
\sum (nx_i^2 - 1) & \sum nx_i\,ny_i & \sum nx_i\,nz_i \\
\sum nx_i\,ny_i & \sum (ny_i^2 - 1) & \sum ny_i\,nz_i \\
\sum nx_i\,nz_i & \sum ny_i\,nz_i & \sum (nz_i^2 - 1)
\end{bmatrix}
\begin{bmatrix} X_p \\ Y_p \\ Z_p \end{bmatrix}
=
\begin{bmatrix} \sum F_x \\ \sum F_y \\ \sum F_z \end{bmatrix}

(3) Solve the 3-by-3 system of equations of Step 2 to get the 3D coordinates of the intersection point p:

S_P = \begin{bmatrix} dx_p \\ dy_p \\ dz_p \end{bmatrix}, where dx_p = X_p - X_{origin}, dy_p = Y_p - Y_{origin}, dz_p = Z_p - Z_{origin}

(4) To evaluate the solution and its reliability, we can evaluate the distance between the computed intersection point and all the intersecting 3D lines. For each 3D line i:

u_i = \dfrac{S_P \cdot S_i}{S_i \cdot S_i}

distance_i = \lVert S_P - u_i S_i \rVert, where \lVert\cdot\rVert refers to the norm.

Or we can compute the distances by using the cross product form as:

distance_i = \dfrac{\lVert (P - P_{origin}) \times (P - P_{end}) \rVert}{\lVert P_{end} - P_{origin} \rVert}
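The steps above can be sketched in Python/NumPy (an illustration, not the referenced MATLAB code); the origin and end points of Example 4.12 below are used as a check:

```python
import numpy as np

# line origins (Table 4.7) and end points (Table 4.8) of Example 4.12
origins = np.array([
    [92882.54, 437078.53, 6.11],
    [92891.66, 437083.13, 6.14],
    [92888.53, 437092.60, 5.92],
    [92893.34, 437078.41, 6.19],
])
ends = np.array([
    [92843.082, 437085.124, 117.418],
    [92851.709, 437081.098, 84.583],
    [92851.634, 437077.147, 88.735],
    [92853.542, 437082.407, 79.738],
])

def intersect_lines_3d(origins, ends):
    """Least squares intersection point of 3D lines: sum(n n^T - I) p = sum(n n^T - I) a."""
    d = ends - origins
    n = d / np.linalg.norm(d, axis=1)[:, None]   # unit direction vectors
    M = np.zeros((3, 3))
    rhs = np.zeros(3)
    for ni, ai in zip(n, origins):
        A = np.outer(ni, ni) - np.eye(3)
        M += A
        rhs += A @ ai
    p = np.linalg.solve(M, rhs)
    # perpendicular distance of p from each line (cross-product form)
    dists = np.linalg.norm(np.cross(p - origins, n), axis=1)
    return p, dists

p, dists = intersect_lines_3d(origins, ends)
```

With these data the solution lands at the book's intersection point, and `dists` reproduces the line-to-point distances reported in the example.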


EXAMPLE 4.12
Given
Four stations A, B, C, and D, where four azimuth directions (θ) and vertical angles (β) are observed to point P (Table 4.7).

TABLE 4.7 Given Data of Example 4.12

Origin   X [m]      Y [m]       Z [m]   θ°         β°
A        92882.54   437078.53   6.11    80.5119    70.2342
B        92891.66   437083.13   6.14    92.9116    62.9809
C        92888.53   437092.60   5.92    112.7310   64.2183
D        92893.34   437078.41   6.19    84.2589    61.4602

Required Compute the unknown intersection point P coordinates by: (1) 3D line intersection technique (2) Using the horizontal and vertical angles intersection of Section 3.6

Solution
– Using the 3D lines intersection method:
We need first to compute the end point coordinates of each line. This is simply applied for every line by assuming any distance D as:

X_{end} = X_{origin} + D\sin\theta
Y_{end} = Y_{origin} + D\cos\theta
Z_{end} = Z_{origin} + \sqrt{(X_{end} - X_{origin})^2 + (Y_{end} - Y_{origin})^2}\,\tan\beta

Using any value such as D = 40, we get the following end points (Table 4.8):

TABLE 4.8 The End Points Coordinates of Each Line

     X[m]        Y[m]         Z[m]
A    92843.082   437085.124   117.418
B    92851.709   437081.098   84.583
C    92851.634   437077.147   88.735
D    92853.542   437082.407   79.738


Then computing the normalized cosine directions as:

     nx          ny          nz
A    −0.333551    0.055746   0.941083
B    −0.453701   −0.023075   0.890855
C    −0.401161   −0.168065   0.900458
D    −0.475373    0.047793   0.878486

Finally, the normal equation system is computed as:

\begin{bmatrix} -3.29599 & 0.03658 & -1.49692 \\ 0.03658 & -3.96583 & -0.07744 \\ -1.49692 & -0.07744 & -0.73818 \end{bmatrix}
\begin{bmatrix} X_p \\ Y_p \\ Z_p \end{bmatrix}
=
\begin{bmatrix} -290181.765 \\ -1730000.140 \\ -172904.010 \end{bmatrix}

Then:

\begin{bmatrix} X_p \\ Y_p \\ Z_p \end{bmatrix} = \begin{bmatrix} 92862.24 \\ 437081.72 \\ 63.96 \end{bmatrix}

The distances between the computed intersection coordinates and the lines are also computed:

\begin{bmatrix} d_{Line1} \\ d_{Line2} \\ d_{Line3} \\ d_{Line4} \end{bmatrix} = \begin{bmatrix} 0.31 \\ 0.09 \\ 0.38 \\ 0.22 \end{bmatrix}

The distances indicate that the measured horizontal and vertical angles need to be remeasured accurately to get a more reliable intersection point.

– Using the angles intersection technique:
The approximate coordinates are calculated as: 92863.64, 437081.65, and 59.56. Then, observation equations are prepared using the equations listed previously, in matrix form, assuming equal weights:

B1 =
[  1754.703    10626.638   0
   −387.823    7341.650    0
  −3055.763    6943.192    0
    749.573    6862.846    0
   3374.099    −557.142    1225.201
   3022.294     159.653    1589.604
   2800.196    1232.392    1550.952
   2925.177    −319.493    1647.283 ]

F1 = [ 48.462   27.496   3929.548   1853.594   11.782   2679.354   3721.161   2771.916 ]ᵀ

The normal equation matrices for the first iteration are solved for Δ as:

N1 =
[ 50045501.960    845814.296      18099763.685
    845814.296    264088602.321   956259.880
  18099763.685    956259.880      9146951.614 ]

t1 = [ 13114823.799   17980471.337   14582149.995 ]ᵀ

Δ1 = [ −1.102   0.058   3.768 ]ᵀ

The corrections are added to the approximate coordinates, and the iteration is stopped after four iterations, where the corrections become negligible. The final adjusted coordinates estimated with this technique are 92862.59, 437081.74, and 63.25.

4.6 OBSERVATION MODEL OF 3D RESECTION WITH OBLIQUE ANGLES

The proposed method of 3D angular resection is based on deriving oblique angles from the observed horizontal and vertical angles at the occupied point. A spherical trigonometry law is used to apply the oblique angle derivation. Fig. 4.17 shows a spherical triangle ABC with two vertical angles (β1, β2), one horizontal angle (θ), and one oblique angle γ.

FIG. 4.17 Spherical triangle ABC and oblique angle γ.


The cosine rule in Eq. (4.25) can be used to solve the spherical triangle ABC and to compute the oblique angle γ [71]:

\cos\gamma = \cos\theta\,\cos\beta_1\,\cos\beta_2 + \sin\beta_1\,\sin\beta_2   (4.25)
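Eq. (4.25) is easy to verify numerically; a small Python sketch (an illustration only, with angle values taken from Example 4.13 below, where the vertical angle to station A is negative):

```python
import math

def oblique_angle(theta, beta1, beta2):
    """Eq. (4.25): oblique angle from one horizontal angle theta and two
    vertical angles beta1, beta2 (all in degrees)."""
    t, b1, b2 = map(math.radians, (theta, beta1, beta2))
    return math.degrees(math.acos(
        math.cos(t) * math.cos(b1) * math.cos(b2) + math.sin(b1) * math.sin(b2)))

# angle APB of Example 4.13: theta = 38.175, beta1 = -10.875, beta2 = 74.025
gamma = oblique_angle(38.175, -10.875, 74.025)   # about 88.22 degrees
```

Running the same check on angle BPC (θ = 166.650°, β1 = 74.025°, β2 = 13.575°) reproduces the tabulated γ ≈ 91.985° as well.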

It should be noted that we can get a high redundancy in the observed angles at the occupied point P based on the number of reference points n. As shown in Fig. 4.18, three control points produce three observed oblique angles, and the addition of a fourth point will result in six observed oblique angles and so on.

FIG. 4.18 Oblique angle combinations for three and four observed points.

Therefore, the possible number of oblique angles can be calculated based on the number n of observed GCPs using the following Eq. (4.26):

Max\ no.\ of\ oblique\ angles = \dfrac{n^2 - n}{2}   (4.26)

The high redundancy of observed oblique angles is expected to strengthen the stability of the solution and the convergence to an optimal minimum, as will be shown in the example. On the other hand, the computed oblique angle is based on the approximate occupied point P coordinates, as in Eq. (4.27), using the vector dot product:

\cos\gamma = \dfrac{dx_i\,dx_j + dy_i\,dy_j + dz_i\,dz_j}{L_i\,L_j}   (4.27)

Or

F = L_i\,L_j\,\cos\gamma - \left( dx_i\,dx_j + dy_i\,dy_j + dz_i\,dz_j \right)   (4.28)

where
dx_i, dy_i, dz_i, dx_j, dy_j, dz_j: the differences in coordinates between point P and the control points i and j.
L_i, L_j: the spatial distances between the camera and the observed points i, j, respectively.

Further, the partial derivatives (∂F/∂X_P, ∂F/∂Y_P, ∂F/∂Z_P) of the unknown coordinates of station P(X_P, Y_P, Z_P) are derived as follows between reference points i and j:

\dfrac{\partial F}{\partial X_P} = X_i - 2X_P + X_j + \dfrac{\cos\gamma\,(2X_P - 2X_i)\,L_j}{2L_i} + \dfrac{\cos\gamma\,(2X_P - 2X_j)\,L_i}{2L_j}

\dfrac{\partial F}{\partial Y_P} = Y_i - 2Y_P + Y_j + \dfrac{\cos\gamma\,(2Y_P - 2Y_i)\,L_j}{2L_i} + \dfrac{\cos\gamma\,(2Y_P - 2Y_j)\,L_i}{2L_j}

\dfrac{\partial F}{\partial Z_P} = Z_i - 2Z_P + Z_j + \dfrac{\cos\gamma\,(2Z_P - 2Z_i)\,L_j}{2L_i} + \dfrac{\cos\gamma\,(2Z_P - 2Z_j)\,L_i}{2L_j}   (4.29)

The partial derivatives of Eq. (4.29) are necessary to solve the nonlinearity and redundancy of the observation equations by applying the least squares adjustment.
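The partial derivatives of Eq. (4.29) can be checked against a numerical derivative of F; a small Python sketch with made-up coordinates (an illustration only):

```python
import math

def F_oblique(P, pi, pj, cos_gamma):
    """Eq. (4.28): F = Li*Lj*cos(gamma) - (dxi*dxj + dyi*dyj + dzi*dzj)."""
    di = [P[k] - pi[k] for k in range(3)]
    dj = [P[k] - pj[k] for k in range(3)]
    li = math.sqrt(sum(v * v for v in di))
    lj = math.sqrt(sum(v * v for v in dj))
    return li * lj * cos_gamma - sum(a * b for a, b in zip(di, dj))

# made-up station P, control points i and j, and a fixed cos(gamma)
P, pi, pj, cg = [1.0, 2.0, 3.0], [10.0, -5.0, 0.0], [-4.0, 8.0, 6.0], 0.3
li = math.dist(P, pi)
lj = math.dist(P, pj)

# analytic dF/dXp per Eq. (4.29)
analytic = (pi[0] - 2 * P[0] + pj[0]
            + cg * (2 * P[0] - 2 * pi[0]) * lj / (2 * li)
            + cg * (2 * P[0] - 2 * pj[0]) * li / (2 * lj))

d = 1e-6
numeric = (F_oblique([P[0] + d, P[1], P[2]], pi, pj, cg) - F_oblique(P, pi, pj, cg)) / d
```

The finite-difference quotient matches the analytic expression, which is the same form implemented in the `ax`, `ay`, `aaz` terms of the MATLAB code in Example 4.13.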

EXAMPLE 4.13
Given
– The coordinates of four GCPs A, B, C, and D (Table 4.9):

TABLE 4.9 Four GCPs A, B, C, and D

Point   X [m]    Y [m]     Z [m]
A       6.924    −14.972   0.000
B       5.813    −7.945    19.884
C       54.687   45.363    18.271
D       29.085   −76.020   21.878

– The derived oblique angles γ from the occupied point P to A, B, C, and D are based on the observed horizontal angle θ and vertical angles β1 and β2, as listed below using Eq. (4.25):

Angle   γ[Deg.]   θ[Deg.]   β1[Deg.]   β2[Deg.]
APB     88.218    38.175    −10.875    74.025
APC     155.599   155.175   −10.875    13.575
APD     40.848    31.725    −10.875    15.225
BPC     91.985    166.650   74.025     13.575
BPD     69.896    69.900    74.025     15.225
CPD     117.088   123.450   13.575     15.225

Required Determine the adjusted coordinates of P using the approximate coordinates of [0 0 0].


Solution The least squares adjustment matrices of the first two iterations are listed as:

The iterations continue until the corrections reach negligible values, as illustrated in Fig. 4.19. The final adjusted coordinates of point P are: 10.012 ± 0.056, −4.974 ± 0.029, 1.995 ± 0.085 m.

FIG. 4.19 Convergence of the adjusted coordinates in Example 4.13.

MATLAB code

%%%%%%%%%%%%%%%%%%%%%%%%% chapter 4 - example 4.13 %%%%%%%%%%%%%%%%%%%%%%%%%
clc,clear,close all
%%% given GCPs
point=[ 6.924 -14.972  0.000;
        5.813  -7.945 19.884;
       54.687  45.363 18.271;
       29.085 -76.020 21.878];
%%% initial coordinates
Tx=0;Ty=0;Tz=0;
%%% oblique angles
C=[88.218;155.599;40.848;91.985;69.896;117.088]*pi/180;
%%% resection of oblique angles
[Tx1,Ty1,Tz1]=threeD_resection_oblique(point,C,Tx,Ty,Tz);
%%% adjusted coordinates
T=[Tx1 Ty1 Tz1]
figure
for i=1:size(point,1)
    plot3([Tx point(i,1)],[Ty point(i,2)],[Tz point(i,3)],'k','linewidth',3);hold on
    plot3(point(i,1),point(i,2),point(i,3),'^r','markersize',8,'markerfacecolor','r');hold on
    text(point(i,1),point(i,2),point(i,3),num2str(i),'fontsize',12);
end
axis image
grid on

function [Tx,Ty,Tz]=threeD_resection_oblique(point,Cc,Tx,Ty,Tz)
ii=0;D=[1;1;1];
X=point(:,1);Y=point(:,2);Z=point(:,3);DD=[0;0;0];
while max(abs(D))>.00001
    ii=ii+1;
    for i=1:size(X,1)
        dx(i,1)=(Tx-X(i,1));
        dy(i,1)=(Ty-Y(i,1));
        dz(i,1)=(Tz-Z(i,1));
        lp(i,1)=((Tx-X(i,1))^2+(Ty-Y(i,1))^2+(Tz-Z(i,1))^2)^(1/2);
    end
    kk=0;
    for i=1:size(X,1)-1
        for j=i+1:size(X,1)
            kk=kk+1;
            C(kk,1)=cos(Cc(kk));
            ax(kk,1)=X(i,1)-2*Tx+X(j,1)+((C(kk,1)*(2*Tx-2*X(i,1))*lp(j,1))/(2*lp(i,1)))+((C(kk,1)*(2*Tx-2*X(j,1))*lp(i,1))/(2*lp(j,1)));
            ay(kk,1)=Y(i,1)-2*Ty+Y(j,1)+((C(kk,1)*(2*Ty-2*Y(i,1))*lp(j,1))/(2*lp(i,1)))+((C(kk,1)*(2*Ty-2*Y(j,1))*lp(i,1))/(2*lp(j,1)));
            aaz(kk,1)=Z(i,1)-2*Tz+Z(j,1)+((C(kk,1)*(2*Tz-2*Z(i,1))*lp(j,1))/(2*lp(i,1)))+((C(kk,1)*(2*Tz-2*Z(j,1))*lp(i,1))/(2*lp(j,1)));
            F(kk,1)=lp(i,1)*lp(j,1)*C(kk,1)-((dx(i,1)*dx(j,1))+(dy(i,1)*dy(j,1))+(dz(i,1)*dz(j,1)));
        end
    end
    %% LS adjustment
    b=[ax ay aaz];
    N=inv(b'*b);
    T=b'*-F;
    D=N*T;
    % update coordinates
    Tx=Tx+D(1,1); Ty=Ty+D(2,1); Tz=Tz+D(3,1);
    DD=[DD,D];
end
V=b*D-F;                            % residuals
sigma=(V'*V)/(size(b,1)-size(b,2)); % variance of unit weight
vcov=diag(sigma*N);
% disp('std. dev.')
% SS=sqrt(vcov)
DD(:,1)=[];D=DD;
figure(10)
subplot(1,3,1)
plot((1:size(D,2)),D(1,:),'-ro','Markerfacecolor','r','LineWidth',2),hold on
title('CONVERGENCE OF X-COORDINATES'),xlabel('NUMBER OF ITERATIONS'),ylabel('CORRECTIONS')
grid on,legend('X'),set(gca,'xtick',1:1:ii),axis tight
subplot(1,3,2)
plot((1:size(D,2)),D(2,:),'-b^','Markerfacecolor','b','LineWidth',2),hold on
title('CONVERGENCE OF Y-COORDINATES'),xlabel('NUMBER OF ITERATIONS'),ylabel('CORRECTIONS')
grid on,legend('Y'),set(gca,'xtick',1:1:ii),axis tight
subplot(1,3,3)
plot((1:size(D,2)),D(3,:),'-ms','Markerfacecolor','k','LineWidth',2),hold on
title('CONVERGENCE OF Z-COORDINATES'),xlabel('NUMBER OF ITERATIONS'),ylabel('CORRECTIONS')
legend('Z'),set(gca,'xtick',1:1:ii),grid on,axis tight

4.7 OBSERVATION MODELS IN PHOTOGRAMMETRY

The main mathematical model in photogrammetry is the collinearity equations, which are based on the geometric condition that the object point, its image, and the camera lens are all collinear (Fig. 4.20):

FIG. 4.20 Collinearity condition (camera station S at Xo, Yo, Zo with rotations ω, φ, κ; image point a at xA, yA, -f; object point A at XA, YA, ZA).


\[
F_{x_A} = -f\,\frac{m_{11}(X_A - X_o) + m_{12}(Y_A - Y_o) + m_{13}(Z_A - Z_o)}{m_{31}(X_A - X_o) + m_{32}(Y_A - Y_o) + m_{33}(Z_A - Z_o)} - x_A,
\qquad
F_{y_A} = -f\,\frac{m_{21}(X_A - X_o) + m_{22}(Y_A - Y_o) + m_{23}(Z_A - Z_o)}{m_{31}(X_A - X_o) + m_{32}(Y_A - Y_o) + m_{33}(Z_A - Z_o)} - y_A
\tag{4.30}
\]

where
Xo, Yo, Zo: camera coordinates.
m's: rotation matrix elements, with m = Mκ Mφ Mω, or:

\[
m = \begin{bmatrix}
\cos\varphi\cos\kappa & \cos\omega\sin\kappa + \sin\omega\sin\varphi\cos\kappa & \sin\omega\sin\kappa - \cos\omega\sin\varphi\cos\kappa \\
-\cos\varphi\sin\kappa & \cos\omega\cos\kappa - \sin\omega\sin\varphi\sin\kappa & \sin\omega\cos\kappa + \cos\omega\sin\varphi\sin\kappa \\
\sin\varphi & -\sin\omega\cos\varphi & \cos\omega\cos\varphi
\end{bmatrix}
\tag{4.31}
\]

It should be noted that the rotations are based on a right-handed system.
f: focal length in mm.
xA, yA: image coordinates in mm.
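To make the collinearity condition concrete, the rotation matrix of Eq. (4.31) and the projection of Eq. (4.30) can be evaluated numerically. The following sketch is not from the book; it is a Python/NumPy illustration with made-up angles and coordinates:

```python
import numpy as np

# Rotation matrix m = Mk @ Mp @ Mw of Eq. (4.31), right-handed system.
def rotation(w, p, k):
    Mw = np.array([[1, 0, 0], [0, np.cos(w), np.sin(w)], [0, -np.sin(w), np.cos(w)]])
    Mp = np.array([[np.cos(p), 0, -np.sin(p)], [0, 1, 0], [np.sin(p), 0, np.cos(p)]])
    Mk = np.array([[np.cos(k), np.sin(k), 0], [-np.sin(k), np.cos(k), 0], [0, 0, 1]])
    return Mk @ Mp @ Mw

def collinearity(A, O, m, f):
    """Image coordinates (xA, yA) of object point A for a camera at O."""
    d = m @ (A - O)                  # the r, s (numerators) and q (denominator) terms
    return -f * d[0] / d[2], -f * d[1] / d[2]

m = rotation(0.02, -0.05, 0.10)      # made-up rotation angles [rad]
print(np.allclose(m @ m.T, np.eye(3)))   # a rotation matrix is orthonormal

O = np.array([90.0, 40.0, 500.0])    # made-up camera position
# a point on the optical axis, 10 units in front of the lens, images at the centre
A = O + m.T @ np.array([0.0, 0.0, -10.0])
print(collinearity(A, O, m, f=0.153))
```

The orthonormality check is a quick safeguard against sign mistakes when transcribing Eq. (4.31).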

4.7.1 Image Space Resection (Camera Pose)

The determination of image orientation is mainly to compute six exterior parameters, namely the camera coordinates Xo, Yo, Zo and the camera rotations ω, φ, κ (Fig. 4.20). The process is called image space resection, where a minimum of three control points should be observed in the image using collinearity Eq. (4.30). When more than three points are observed, a LS adjustment is applied, v = BΔ - F, where

\[
B = \begin{bmatrix}
\frac{\partial F_x}{\partial(\omega,\varphi,\kappa,X_o,Y_o,Z_o)} \\
\frac{\partial F_y}{\partial(\omega,\varphi,\kappa,X_o,Y_o,Z_o)}
\end{bmatrix}
= \begin{bmatrix}
\frac{\partial F_x}{\partial\omega} & \frac{\partial F_x}{\partial\varphi} & \frac{\partial F_x}{\partial\kappa} & \frac{\partial F_x}{\partial X_o} & \frac{\partial F_x}{\partial Y_o} & \frac{\partial F_x}{\partial Z_o} \\
\frac{\partial F_y}{\partial\omega} & \frac{\partial F_y}{\partial\varphi} & \frac{\partial F_y}{\partial\kappa} & \frac{\partial F_y}{\partial X_o} & \frac{\partial F_y}{\partial Y_o} & \frac{\partial F_y}{\partial Z_o}
\end{bmatrix} \text{ per image point}
\]

\[
\Delta = [\delta\omega,\ \delta\varphi,\ \delta\kappa,\ \delta X_o,\ \delta Y_o,\ \delta Z_o]^t,
\qquad
F = \begin{bmatrix} F_x \\ F_y \end{bmatrix},
\qquad
v = \begin{bmatrix} v_x \\ v_y \end{bmatrix}
\]

Least squares adjustment (BᵗWB)Δ = (BᵗWF) can be used for handling the redundancy. The variance of unit weight is then computed as σo² = vᵗwv / (2n - 6), where n is the total number of points. The partial derivatives of the B matrix can be calculated using different forms. In this book we will follow the derivatives presented in [20] as follows:


\[
F_x = qx + rf = 0, \qquad F_y = qy + sf = 0 \tag{4.32}
\]

where

\[
\begin{aligned}
q &= m_{31}(X - X_o) + m_{32}(Y - Y_o) + m_{33}(Z - Z_o) \\
r &= m_{11}(X - X_o) + m_{12}(Y - Y_o) + m_{13}(Z - Z_o) \\
s &= m_{21}(X - X_o) + m_{22}(Y - Y_o) + m_{23}(Z - Z_o)
\end{aligned} \tag{4.33}
\]

\[
\Delta X = (X - X_o), \qquad \Delta Y = (Y - Y_o), \qquad \Delta Z = (Z - Z_o) \tag{4.34}
\]

\[
\frac{\partial F_x}{\partial\omega} = b_{11} = \frac{x}{q}(-m_{33}\Delta Y + m_{32}\Delta Z) + \frac{f}{q}(-m_{13}\Delta Y + m_{12}\Delta Z) \tag{4.35}
\]

\[
\frac{\partial F_x}{\partial\varphi} = b_{12} = \frac{x}{q}(\Delta X\cos\varphi + \Delta Y\sin\omega\sin\varphi - \Delta Z\cos\omega\sin\varphi) + \frac{f}{q}(-\Delta X\sin\varphi\cos\kappa + \Delta Y\sin\omega\cos\varphi\cos\kappa - \Delta Z\cos\omega\cos\varphi\cos\kappa) \tag{4.36}
\]

\[
\frac{\partial F_x}{\partial\kappa} = b_{13} = \frac{f}{q}(m_{21}\Delta X + m_{22}\Delta Y + m_{23}\Delta Z) \tag{4.37}
\]

\[
\frac{\partial F_x}{\partial X_o} = b_{14} = -\left(\frac{x}{q}m_{31} + \frac{f}{q}m_{11}\right) \tag{4.38}
\]

\[
\frac{\partial F_x}{\partial Y_o} = b_{15} = -\left(\frac{x}{q}m_{32} + \frac{f}{q}m_{12}\right) \tag{4.39}
\]

\[
\frac{\partial F_x}{\partial Z_o} = b_{16} = -\left(\frac{x}{q}m_{33} + \frac{f}{q}m_{13}\right) \tag{4.40}
\]

\[
F_x = -\frac{(qx + rf)}{q} \tag{4.41}
\]

\[
\frac{\partial F_y}{\partial\omega} = b_{21} = \frac{y}{q}(-m_{33}\Delta Y + m_{32}\Delta Z) + \frac{f}{q}(-m_{23}\Delta Y + m_{22}\Delta Z) \tag{4.42}
\]

\[
\frac{\partial F_y}{\partial\varphi} = b_{22} = \frac{y}{q}(\Delta X\cos\varphi + \Delta Y\sin\omega\sin\varphi - \Delta Z\cos\omega\sin\varphi) + \frac{f}{q}(\Delta X\sin\varphi\sin\kappa - \Delta Y\sin\omega\cos\varphi\sin\kappa + \Delta Z\cos\omega\cos\varphi\sin\kappa) \tag{4.43}
\]

\[
\frac{\partial F_y}{\partial\kappa} = b_{23} = \frac{f}{q}(-m_{11}\Delta X - m_{12}\Delta Y - m_{13}\Delta Z) \tag{4.44}
\]

\[
\frac{\partial F_y}{\partial X_o} = b_{24} = -\left(\frac{y}{q}m_{31} + \frac{f}{q}m_{21}\right) \tag{4.45}
\]

\[
\frac{\partial F_y}{\partial Y_o} = b_{25} = -\left(\frac{y}{q}m_{32} + \frac{f}{q}m_{22}\right) \tag{4.46}
\]

\[
\frac{\partial F_y}{\partial Z_o} = b_{26} = -\left(\frac{y}{q}m_{33} + \frac{f}{q}m_{23}\right) \tag{4.47}
\]

\[
F_y = -\frac{(qy + sf)}{q} \tag{4.48}
\]
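The signs in these coefficients are easy to get wrong, so it is worth validating them against a numerical derivative. The sketch below is a Python/NumPy illustration (not from the book): it checks b11 of Eq. (4.35) by finite differences of Fx = (qx + rf)/q, using a synthetic observation where x = -f·r/q so that the two expressions agree exactly:

```python
import numpy as np

def rot(w, p, k):
    # m = Mk @ Mp @ Mw, Eq. (4.31)
    Mw = np.array([[1, 0, 0], [0, np.cos(w), np.sin(w)], [0, -np.sin(w), np.cos(w)]])
    Mp = np.array([[np.cos(p), 0, -np.sin(p)], [0, 1, 0], [np.sin(p), 0, np.cos(p)]])
    Mk = np.array([[np.cos(k), np.sin(k), 0], [-np.sin(k), np.cos(k), 0], [0, 0, 1]])
    return Mk @ Mp @ Mw

def qrs(w, p, k, d):
    m = rot(w, p, k)
    return m[2] @ d, m[0] @ d, m[1] @ d          # q, r, s of Eq. (4.33)

def b11(w, p, k, d, x, f):
    m = rot(w, p, k)
    q = m[2] @ d
    dq = -m[2, 2]*d[1] + m[2, 1]*d[2]            # dq/dw = -m33*dY + m32*dZ
    dr = -m[0, 2]*d[1] + m[0, 1]*d[2]            # dr/dw = -m13*dY + m12*dZ
    return (x/q)*dq + (f/q)*dr

w, p, k, f = 0.3, -0.2, 0.1, 4.15                # made-up angles and focal length
d = np.array([5.0, -3.0, -20.0])                 # (dX, dY, dZ), made up
q, r, s = qrs(w, p, k, d)
x = -f*r/q                                       # consistent image coordinate
Fx = lambda wi: (qrs(wi, p, k, d)[0]*x + qrs(wi, p, k, d)[1]*f) / qrs(wi, p, k, d)[0]
eps = 1e-7
numeric = (Fx(w + eps) - Fx(w - eps)) / (2*eps)  # central finite difference
print(abs(b11(w, p, k, d, x, f) - numeric) < 1e-5)
```

The same pattern applies to b12 through b26: each analytic coefficient should match the corresponding finite difference at a point satisfying the collinearity condition.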

EXAMPLE 4.14
Given
Four observed control points 1, 2, 3, and 4:
– X, Y, Z: coordinates of the control points.
– x, y: image coordinates of the control points.
– Camera parameters (focal length, pixel size, lens distortion).

Required
Set up the observation equation of the resection problem to estimate the camera exterior orientation parameters.

Solution
The observation equation is developed as v + BΔ = F:

\[
\begin{bmatrix} v_{x1} \\ v_{y1} \\ v_{x2} \\ v_{y2} \\ v_{x3} \\ v_{y3} \\ v_{x4} \\ v_{y4} \end{bmatrix}
+
\begin{bmatrix}
\frac{\partial F_{x1}}{\partial\omega} & \frac{\partial F_{x1}}{\partial\varphi} & \frac{\partial F_{x1}}{\partial\kappa} & \frac{\partial F_{x1}}{\partial X_o} & \frac{\partial F_{x1}}{\partial Y_o} & \frac{\partial F_{x1}}{\partial Z_o} \\
\frac{\partial F_{y1}}{\partial\omega} & \frac{\partial F_{y1}}{\partial\varphi} & \frac{\partial F_{y1}}{\partial\kappa} & \frac{\partial F_{y1}}{\partial X_o} & \frac{\partial F_{y1}}{\partial Y_o} & \frac{\partial F_{y1}}{\partial Z_o} \\
\vdots & & & & & \vdots \\
\frac{\partial F_{y4}}{\partial\omega} & \frac{\partial F_{y4}}{\partial\varphi} & \frac{\partial F_{y4}}{\partial\kappa} & \frac{\partial F_{y4}}{\partial X_o} & \frac{\partial F_{y4}}{\partial Y_o} & \frac{\partial F_{y4}}{\partial Z_o}
\end{bmatrix}
\begin{bmatrix} \delta\omega \\ \delta\varphi \\ \delta\kappa \\ \delta X_o \\ \delta Y_o \\ \delta Z_o \end{bmatrix}
=
\begin{bmatrix} F_{x1o} \\ F_{y1o} \\ F_{x2o} \\ F_{y2o} \\ F_{x3o} \\ F_{y3o} \\ F_{x4o} \\ F_{y4o} \end{bmatrix}
\]


Now let's run the following numerical example for a better understanding of the adjustment procedure of image resection.

Given
– Four GCPs' world coordinates and their measured image pixel coordinates are as follows in Table 4.10:

TABLE 4.10  Four GCPs and Their Measured Image Pixel Coordinates

GCP  X[m]      Y[m]      Z[m]   Ln[pixels]  Sn[pixels]
A    45873.70  24116.07  11.59  2328        94
B    45865.69  24117.26  8.32   1251        594
C    45868.34  24115.22  4.33   1605        1095
D    45871.69  24112.91  6.71   2127        714

– Approximate image orientation is given in Table 4.11:

TABLE 4.11  Approximate Image Orientation

ω[rad]  φ[rad]  κ[rad]  Xo[m]  Yo[m]  Zo[m]
1.57    -1.44   0.07    46000  24000  7

– Camera parameters:
Image format [pixels] = 2448 × 3264
pixel size [mm] = 0.0015
focal length [mm] = 4.15

Required
The image orientation parameters using least squares adjustment.

Solution
Using the formulation of Section 4.7.1, we apply the computations of the adjustment as follows:
– Convert the image coordinates from pixels to mm in the p.p. system (Table 4.12); it should be noted that we can run the computations in pixels as well.

TABLE 4.12  Converted Image Coordinates

       A     B     C     D
x[mm]  1.66  0.04  0.57  1.35
y[mm]  2.31  1.56  0.80  1.38
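The conversion in Table 4.12 can be reproduced under the assumption (consistent with Tables 4.10 and 4.12) that the principal point sits at the image centre, with x growing with the column index Ln and y growing against the row index Sn. A short Python sketch, not from the book:

```python
W, H = 2448, 3264   # image format [pixels]
ps = 0.0015         # pixel size [mm]

def pix_to_mm(ln, sn):
    # assumed convention: origin at the image centre, y positive upward
    return round((ln - W/2) * ps, 2), round((H/2 - sn) * ps, 2)

print(pix_to_mm(2328, 94))    # GCP A of Table 4.10
print(pix_to_mm(2127, 714))   # GCP D
```

GCPs A, B, and D reproduce Table 4.12 exactly under this convention; GCP C differs by one unit in the last decimal of y, presumably from rounding in the book.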

– The observation equation matrices B and F are evaluated for the first iteration as follows:


B1 (8 × 6)                                    F1 (8 × 1)
0.490  6.118  0.523  0.010  0.039  0.003      3.279
4.365  2.466  4.935  0.021  0.000  0.038      2.829
0.338  4.185  0.379  0.004  0.035  0.002      4.655
4.118  1.476  4.695  0.013  0.001  0.035      1.936
0.196  4.789  0.238  0.000  0.036  0.003      4.145
4.134  0.625  4.716  0.007  0.002  0.036      1.043
0.284  5.688  0.325  0.007  0.038  0.003      3.382
4.156  1.284  4.736  0.012  0.001  0.037      1.701

Accordingly, the normal equation matrices N and t will be composed to solve the corrections Δ of the six orientation elements of the image as follows:

N1 (6 × 6)                                                        t1        Δ1
1625.10   118.10   1554.00   9337.30    15476.00   18100.00       26.74     5.07
118.10    47.30    116.23    4739.50    6179.30    1905.70        65.97     0.85
1554.00   116.23   1506.30   9369.60    15239.00   19946.00       30.46     4.67
9337.30   4739.50  9369.60   484870.00  618820.00  176390.00      0.08      60.21
15476.00  6179.30  15239.00  618820.00  807530.00  250780.00      0.57      227.28
18100.00  1905.70  19946.00  176390.00  250780.00  546500.00      0.23      38.62

The adjustment is iterated until the corrections become negligible, as illustrated in Fig. 4.21.

FIG. 4.21 Convergence of the adjusted angles and camera coordinates of Example 4.14 (left panel: corrections of the orientation angles omega, phi, kappa [rad]; right panel: corrections of the camera coordinates Xo, Yo, Zo; both against the number of iterations).


The final adjusted orientation parameters are (Table 4.13):

TABLE 4.13  Final Adjusted Orientation Parameters

ω°       φ°      κ°      Xo[m]     Yo[m]     Zo[m]
79.7361  2.7630  0.1043  45866.54  24093.43  4.04

The author published the following Matlab codes of the book at: https://nl.mathworks.com/matlabcentral/fileexchange/70550-adjustment-models-in-3d-geomatics?s_tid=prof_contriblnk

MATLAB code
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% space resection by collinearity equations
%% chapter 4 - example 4.14
%% input:
%%  - XYZ: reference point coordinates n*3
%%  - image coordinates [mm]: xx,yy [in p.p. system] n*2
%%  - f: focal length [mm]
%%  - wpk: initial exterior orientation parameters
%% output:
%%  - adjusted exterior orientation parameters Tx,Ty,Tz,w2,p2,k2
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [Tx,Ty,Tz,w2,p2,k2]=Imageresection(XYZ,xp,yp,wpk,f)
omega=wpk(1,1); phi=wpk(1,2); kappa=wpk(1,3);
xo=wpk(1,4); yo=wpk(1,5); zo=wpk(1,6);
delta=[1 1 1 1 1 1 1];
ng=size(XYZ,1);
x=XYZ(:,1); y=XYZ(:,2); z=XYZ(:,3);
ii=0;
while max(abs(delta)) > .00001
ii=ii+1;
%%%%%%%%%%%%%%%%%%%%% rotation matrix %%%%%%%%%%%%%%%%%%%%%
mw=[1 0 0;0 cos(omega) sin(omega);0 -sin(omega) cos(omega)];
mp=[cos(phi) 0 -sin(phi);0 1 0;sin(phi) 0 cos(phi)];
mk=[cos(kappa) sin(kappa) 0;-sin(kappa) cos(kappa) 0;0 0 1];
m=mk*mp*mw;
%%%%%%%%%%%% partial derivatives %%%%%%%%%%%%%%%%%%%%%%%
gg=ng*2;
for k=1:ng
dx(k,1)=x(k,1)-xo;


dy(k,1)=yo-y(k,1);
dz(k,1)=z(k,1)-zo;
q(k,1)=m(3,1)*(x(k,1)-xo)+m(3,2)*(z(k,1)-zo)+m(3,3)*(yo-y(k,1));
r(k,1)=m(1,1)*(x(k,1)-xo)+m(1,2)*(z(k,1)-zo)+m(1,3)*(yo-y(k,1));
s(k,1)=m(2,1)*(x(k,1)-xo)+m(2,2)*(z(k,1)-zo)+m(2,3)*(yo-y(k,1));
end
j=0;
for k=1:2:gg
j=j+1;
ff(k,1)=-(q(j,1)*xp(j,1)+r(j,1)*f)/q(j,1);
ff(k+1,1)=-(q(j,1)*yp(j,1)+s(j,1)*f)/q(j,1);
end
j=0;
for k=1:2:gg
j=j+1;
b(k,1)=(xp(j,1)/q(j,1))*(-m(3,3)*dz(j,1)+m(3,2)*dy(j,1))+(f/q(j,1))*(-m(1,3)*dz(j,1)+m(1,2)*dy(j,1));
b(k,2)=(xp(j,1)/q(j,1))*(dx(j,1)*cos(phi)+dz(j,1)*(sin(omega)*sin(phi))+dy(j,1)*(-sin(phi)*cos(omega)))+(f/q(j,1))*(dx(j,1)*(-sin(phi)*cos(kappa))+dz(j,1)*(sin(omega)*cos(phi)*cos(kappa))+dy(j,1)*(-cos(omega)*cos(phi)*cos(kappa)));
b(k,3)=(f/q(j,1))*(m(2,1)*dx(j,1)+m(2,2)*dz(j,1)+m(2,3)*dy(j,1));
b(k,4)=-((xp(j,1)/q(j,1))*m(3,1)+(f/q(j,1))*m(1,1));
b(k,5)=-((xp(j,1)/q(j,1))*m(3,2)+(f/q(j,1))*m(1,2));
b(k,6)= ((xp(j,1)/q(j,1))*m(3,3)+(f/q(j,1))*m(1,3));
b(k+1,1)=(yp(j,1)/q(j,1))*(-m(3,3)*dz(j,1)+m(3,2)*dy(j,1))+(f/q(j,1))*(-m(2,3)*dz(j,1)+m(2,2)*dy(j,1));
b(k+1,2)=(yp(j,1)/q(j,1))*(dx(j,1)*cos(phi)+dz(j,1)*(sin(omega)*sin(phi))+dy(j,1)*(-sin(phi)*cos(omega)))+(f/q(j,1))*(dx(j,1)*(sin(phi)*sin(kappa))+dz(j,1)*(-sin(omega)*cos(phi)*sin(kappa))+dy(j,1)*(cos(omega)*cos(phi)*sin(kappa)));
b(k+1,3)=(f/q(j,1))*(-m(1,1)*dx(j,1)-m(1,2)*dz(j,1)-m(1,3)*dy(j,1));
b(k+1,4)=-((yp(j,1)/q(j,1))*m(3,1)+(f/q(j,1))*m(2,1));
b(k+1,5)=-((yp(j,1)/q(j,1))*m(3,2)+(f/q(j,1))*m(2,2));
b(k+1,6)= ((yp(j,1)/q(j,1))*m(3,3)+(f/q(j,1))*m(2,3));
end
format short g
%%%%%%%%%%%%%%%%%%%%%%%%%%%% least squares %%%%%%%%%%%%%%%%%%%%
btb=inv(b'*b);
btf=b'*ff;
delta=btb*btf; v=b*delta-ff; D(:,ii)=delta;
omega=omega+delta(1,1);
phi=phi+delta(2,1);
kappa=kappa+delta(3,1);


xo=xo+delta(4,1);
yo=yo+delta(6,1);
zo=zo+delta(5,1);
end
sigm=sqrt((v'*v)/(size(b,1)-size(b,2)));
figure
subplot(1,2,1)
plot((1:size(D,2)),D(1,:),'-ro','Markerfacecolor','r','LineWidth',2), hold on
plot((1:size(D,2)),D(2,:),'-b^','Markerfacecolor','b','LineWidth',2), hold on
plot((1:size(D,2)),D(3,:),'-ms','Markerfacecolor','k','LineWidth',2), hold on
title('LEAST SQUARE ADJUSTMENT - ANGLES CONVERGENCE')
xlabel('NUMBER OF ITERATIONS')
ylabel('CORRECTIONS OF ORIENTATION ANGLES - RAD.')
legend('omega','phi','kappa')
grid on
axis tight
subplot(1,2,2)
plot((1:size(D,2)),D(4,:),'--m+','Markerfacecolor','m','LineWidth',2), hold on
plot((1:size(D,2)),D(5,:),'-ko','Markerfacecolor','c','LineWidth',2), hold on
plot((1:size(D,2)),D(6,:),'-b^','Markerfacecolor','r','LineWidth',2), hold on
title('LEAST SQUARE ADJUSTMENT - CAMERA COORDINATES CONVERGENCE')
xlabel('NUMBER OF ITERATIONS')
ylabel('CORRECTIONS OF CAMERA COORDINATES')
legend('Xo','Yo','Zo')
grid on
axis tight
w2=omega; p2=phi; k2=kappa; Tx=xo; Ty=yo; Tz=zo;
% disp('corrections for last iteration is:-');
disp(' ');
disp('EXTERIOR ORIENTATION PARAMETERS are:- ')
disp('*******************************************************');
disp(['adjusted omega (DEG.)=',num2str((180/pi)*omega)]);
disp(['adjusted phi (DEG.)=',num2str((180/pi)*phi)]);
disp(['adjusted kappa (DEG.)=',num2str((180/pi)*kappa)]);
disp(['adjusted xo(m)=',num2str(xo)]);
disp(['adjusted yo(m)=',num2str(yo)]);
disp(['adjusted zo(m)=',num2str(zo)]);
disp('*******************************************************');


EXAMPLE 4.15
Three photos (photo#1, photo#2, photo#3) are taken of a building façade with a camera of 18 mm focal length, as part of a cultural heritage documentation project, as shown in Fig. 4.22.

FIG. 4.22  Three overlapped images of a façade.

Given
The image orientations are already determined by the resection method (Section 4.7.1) as follows in Table 4.14:

TABLE 4.14  The Image Orientations

         Xo[m]       Yo[m]        Zo[m]   omega[deg.]  phi[deg.]  kappa[deg.]
Photo 1  356265.967  5787646.936  87.432  103.365914   -18.51947  5.342079
Photo 2  356262.534  5787647.786  87.386  102.482806   -12.45407  2.710865
Photo 3  356252.905  5787649.079  87.289  104.753832   -13.53309  3.146542

Image coordinates in mm are observed as follows in Table 4.15:

TABLE 4.15  Image Coordinates

         Point  x[mm]   y[mm]
Photo 1  1      -3.48   -4.68
Photo 1  2      -2.57   6.10
Photo 1  3      -10.00  6.24
Photo 2  1      3.52    -4.52
Photo 2  2      3.35    6.21
Photo 2  3      -4.02   5.88
Photo 2  4      -4.31   -4.46
Photo 3  3      6.35    4.72
Photo 3  4      7.56    -4.86


Required
Find the adjusted coordinates of the four façade points.

Solution
The observation matrices are developed as follows.

The observation equation matrices are computed as follows:


The normal equation matrices are:

The variance of unit weight σo² is 1.12, and the final adjusted coordinates with their standard deviations are (Table 4.16):

TABLE 4.16  Final Adjusted Coordinates with Their Standard Deviations

   X[m]       Y[m]        Z[m]   σx[m]  σy[m]  σz[m]
1  356268.02  5787660.41  86.94  0.10   0.31   0.04
2  356268.25  5787660.41  95.61  0.10   0.31   0.20
3  356262.35  5787662.43  95.68  0.03   0.09   0.06
4  356262.44  5787662.23  87.00  0.06   0.12   0.04

Ellipsoid of errors is evaluated for the observed points and plotted in Fig. 4.23.

FIG. 4.23  Exaggerated ellipsoid of errors at the façade points.

It should be noted that we assumed the images are free of errors, and therefore fully trusted their coordinates and rotation angles in the adjustment. The question now is: if we have uncertainty limits in the image orientations, represented by standard deviations, how can we run the adjustment taking image weights into account? In this case, we should apply a more advanced adjustment technique such as the unified approach of least squares (Chapter 7) or the general least squares adjustment (Chapter 5).


4.8.1 Image Triangulation (Space Intersection)

When the orientations of two or more intersecting images are known or computed using the space resection of Section 4.7.1, we can apply image triangulation. Image triangulation, or intersection, is a technique to compute the world coordinates of object points observed on overlapped images, as illustrated in Fig. 4.24.

FIG. 4.24  Image triangulation from two stereo images (camera stations O1 and O2, image points p1 and p2, ground point P).

The mathematical model of image triangulation (intersection) is linear, in contrast to image resection. The matrix form of the observation model v + BΔ = F for a point p in one image is:

\[
\begin{bmatrix} v_x \\ v_y \end{bmatrix}
+
\begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \end{bmatrix}
\begin{bmatrix} X_p \\ Y_p \\ Z_p \end{bmatrix}
=
\begin{bmatrix} F_x \\ F_y \end{bmatrix}
\tag{4.49}
\]

where

b11 = f m11 + m31 xp,  b12 = f m12 + m32 xp,  b13 = f m13 + m33 xp
b21 = f m21 + m31 yp,  b22 = f m22 + m32 yp,  b23 = f m23 + m33 yp
Fx = Xo(f m11 + m31 xp) + Yo(f m12 + m32 xp) + Zo(f m13 + m33 xp)
Fy = Xo(f m21 + m31 yp) + Yo(f m22 + m32 yp) + Zo(f m23 + m33 yp)

Xp, Yp, Zp: unknown point coordinates in space.
vx, vy: residual errors.
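As a quick numerical illustration (not from the book), the linear system of Eq. (4.49) can be assembled for two synthetic cameras and solved in a single least-squares step. A Python/NumPy sketch with made-up poses and a made-up object point:

```python
import numpy as np

def rot(w, p, k):
    # m = Mk @ Mp @ Mw, Eq. (4.31)
    Mw = np.array([[1, 0, 0], [0, np.cos(w), np.sin(w)], [0, -np.sin(w), np.cos(w)]])
    Mp = np.array([[np.cos(p), 0, -np.sin(p)], [0, 1, 0], [np.sin(p), 0, np.cos(p)]])
    Mk = np.array([[np.cos(k), np.sin(k), 0], [-np.sin(k), np.cos(k), 0], [0, 0, 1]])
    return Mk @ Mp @ Mw

def obs_rows(m, O, xp, yp, f):
    """B (2x3) and F (2,) of Eq. (4.49) for one observation of one point."""
    B = np.array([[f*m[0, 0] + m[2, 0]*xp, f*m[0, 1] + m[2, 1]*xp, f*m[0, 2] + m[2, 2]*xp],
                  [f*m[1, 0] + m[2, 0]*yp, f*m[1, 1] + m[2, 1]*yp, f*m[1, 2] + m[2, 2]*yp]])
    return B, B @ O          # Fx, Fy apply the same coefficients to (Xo, Yo, Zo)

f = 18.0
P_true = np.array([10.0, 40.0, 5.0])                   # object point, made up
cams = [(np.array([0.0, 0.0, 1.5]),  rot(0.10, -0.05, 0.02)),
        (np.array([8.0, -1.0, 1.6]), rot(0.12, 0.08, -0.03))]
B_all, F_all = [], []
for O, m in cams:
    d = m @ (P_true - O)                               # (r, s, q) terms
    xp, yp = -f*d[0]/d[2], -f*d[1]/d[2]                # simulated image coordinates
    B, F = obs_rows(m, O, xp, yp, f)
    B_all.append(B); F_all.append(F)
P_est = np.linalg.lstsq(np.vstack(B_all), np.hstack(F_all), rcond=None)[0]
print(np.allclose(P_est, P_true, atol=1e-6))
```

Each image adds two rows, so n images of the same point give a 2n × 3 system solved in one step; no iteration is needed because the model is linear in (Xp, Yp, Zp).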


MATLAB code
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%% chapter 4 - example 4.15 %%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%% Image triangulation %%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clc,clear,close
% image exterior orientation [photo id, Xo, Yo, Zo, omega, phi, kappa]
wpk=[45 356265.967,5787646.936,87.432,103.3659146,-18.51947088,5.34207989
     46 356262.534,5787647.786,87.386,102.4828061,-12.45407888,2.71086527
     47 356252.905,5787649.079,87.289,104.753832,-13.53309291,3.146542426]
f=18; % focal length [mm]
% image rotations
omega=wpk(:,5)*pi/180; phi=wpk(:,6)*pi/180; kappa=wpk(:,7)*pi/180;
% image coordinates
xo=wpk(:,2); yo=wpk(:,3); zo=wpk(:,4);
%%%%%%%%%%%%%%%%%%%%%%%%%%% photo coordinates pp %%%%%%%%%%%%%%%%%%%%%%%%%%
%photo 45
xy1=[-3.48 -4.68
     -2.57 6.10
     -10.00 6.24];
%photo 46
xy2=[3.52 -4.52
     3.35 6.21
     -4.02 5.88
     -4.31 -4.46];
%photo 47
xy3=[6.35 4.72
     7.56 -4.86];
%% loop over the 3 images
for i=1:3
if i==1 % 1st image
xy=xy1; B1=0;
[F1,B1]=triangulatei(f,xy,xo(i,1),yo(i,1),zo(i,1),omega(i,1),phi(i,1),kappa(i,1),size(xy,1));
B1(1,:)=[]; B1(:,1)=[]; F1(1,:)=[];
elseif i==2 % 2nd image
xy=xy2; B2=0;
[F2,B2]=triangulatei(f,xy,xo(i,1),yo(i,1),zo(i,1),omega(i,1),phi(i,1),kappa(i,1),size(xy,1));
B2(1,:)=[]; B2(:,1)=[]; F2(1,:)=[];
elseif i==3 % 3rd image
xy=xy3; B3=0;


[F3,B3]=triangulatei(f,xy,xo(i,1),yo(i,1),zo(i,1),omega(i,1),phi(i,1),kappa(i,1),size(xy,1));
B3(1,:)=[]; B3(:,1)=[]; F3(1,:)=[];
end
end
F=[F1;F2;F3];
b=[[B1 zeros(6,3)];B2;[zeros(4,6) B3]];
%%%%%%%%%%%%%%%%%%%%% least squares method %%%%%%%%%%%%%%%%%%%%%%%%%%%%
btb=inv(b'*b);
btf=b'*F;
X=btb*btf;
res=b*X-F;
sigm=(res'*res)/(size(b,1)-size(b,2));
vcov=sigm*(btb);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
format bank
disp('ADJUSTED COORDINATES [m]')
k=0;
for i=1:3:size(X,1)
k=k+1;
x(k)=X(i,1); y(k)=X(i+1,1); z(k)=X(i+2,1);
end
[x' y' z']
disp('STANDARD DEVIATIONS OF POINTS [m]')
k=0;
for i=1:3:size(vcov,1)
k=k+1;
sx(k,1)=sqrt(vcov(i,i));
sy(k,1)=sqrt(vcov(i+1,i+1));
sz(k,1)=sqrt(vcov(i+2,i+2));
end
[sx sy sz]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [F,B]=triangulatei(f,xy,xo,yo,zo,omega,phi,kappa,n)
mw=[1 0 0;0 cos(omega) sin(omega);0 -sin(omega) cos(omega)];
mp=[cos(phi) 0 -sin(phi);0 1 0;sin(phi) 0 cos(phi)];
mk=[cos(kappa) sin(kappa) 0;-sin(kappa) cos(kappa) 0;0 0 1];
m=mk*mp*mw;
m11=m(1,1); m12=m(1,2); m13=m(1,3);
m21=m(2,1); m22=m(2,2); m23=m(2,3);
m31=m(3,1); m32=m(3,2); m33=m(3,3);


B=0; F=0;
for i=1:n
b11(i,1)=(xy(i,1)*m31+(f*m11));
b12(i,1)=(xy(i,1)*m32+(f*m12));
b13(i,1)=(xy(i,1)*m33+(f*m13));
b21(i,1)=(xy(i,2)*m31+(f*m21));
b22(i,1)=(xy(i,2)*m32+(f*m22));
b23(i,1)=(xy(i,2)*m33+(f*m23));
fx1(i,1)=-((-f*m11-xy(i,1)*m31)*xo(1,1)-(f*m13+xy(i,1)*m33)*zo(1,1)-(f*m12+xy(i,1)*m32)*yo(1,1));
fy1(i,1)=-((-f*m21-xy(i,2)*m31)*xo(1,1)-(f*m23+xy(i,2)*m33)*zo(1,1)-(f*m22+xy(i,2)*m32)*yo(1,1));
F=[F;fx1(i,1);fy1(i,1)];
B=blkdiag(B,[b11(i,1) b12(i,1) b13(i,1);b21(i,1) b22(i,1) b23(i,1)]);
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

EXAMPLE 4.16
Given
Three images are captured by an iPhone6 (Fig. 4.25). The iPhone6 camera has the following settings (assuming a distortion-free lens):
• Image format (pixels): width = 3264, height = 2448
• Pixel size [mm]: 0.0015
• Focal length [mm]: 4.15

The image orientation data and the measured pixel coordinates of P are listed in Table 4.17:

TABLE 4.17  The Image Orientation Data and the Measured Pixel Coordinates of P

Image   X[m]      Y[m]      Z[m]  ω°        φ°       κ°        Ln[pixel]  Sn[pixel]
IMG_29  32301.00  56047.52  3.32  -62.5823  86.5986  152.7170  1181       2296
IMG_30  32301.14  56042.99  3.43  85.9303   59.5973  3.9900    1386       2175
IMG_31  32300.95  56050.86  3.51  -85.5236  51.6750  176.7642  1513       2177

FIG. 4.25  Three overlapped images of Example 4.16.

Required
Compute the coordinates of the intersection point P at the bottom of the pole.

Solution
The first step is to convert the pixel coordinates into p.p. coordinates in mm:

        IMG_29  IMG_30  IMG_31
xp[mm]  -0.065  0.243   0.434
yp[mm]  -0.996  -0.815  -0.818

The observation equation matrices are computed as:

B (6 × 3)                      F (6 × 1)
0.28322  4.1408   0.005        1.85E+06
1.1071   0.06519  4.1212       1.76E+05
2.3047   3.4595   0.04336      1.88E+06
0.84864  0.45654  4.1179       95940
2.2293   3.5269   0.04115      1.31E+06
0.7866   0.64558  4.1055       3.98E+05

Then, solving the normal equation matrices as follows to finally compute P:

N (3 × 3)                        t           P[m]
0.096186  0.00235   0.021391     1.31E+06    32294.99
0.00235   0.02377   0.00086      1.91E+07    56047.11
0.021391  0.00086   0.024447     1.82E+06    1.73


Further, we compute the standard deviations of the adjusted intersection point through the computation of the variance-covariance matrix:
– The variance of unit weight σo² = 0.02
– The variance-covariance matrix is computed as σo²·N⁻¹ and results in the following standard deviations: 32294.99 ± 0.04, 56047.11 ± 0.02, 1.73 ± 0.02 m

4.9 LEAST SQUARES ADJUSTMENT IN COMPUTATIONAL GEOPHYSICS

There is a wide range of least squares adjustment applications in computational geophysics. Geophysics applies physical theories to understand processes that occur in the Earth's interior, such as the geodynamo, mantle convection, earthquakes, or fluid flow in porous media, and then estimates physical properties that cannot be directly measured in situ (i.e., density, velocity, conductivity, porosity, etc.). Accordingly, geophysics differs from geomatics in the following [79]:

• Physical properties in the Earth's interior are retrieved from indirect observations. These indirect observations are converted to material properties by a brute-force approach, inverse theory, or a mixture of signal processing and inversion techniques.
• Physical properties are continuously spread in a 3D volume, while observations are sparsely scattered on the surface of the Earth.

In computational geophysics, we are interested in the relationships between physical model parameters m and a set of data d. This is based on assuming a good knowledge of the laws governing the considered phenomena (fundamental physics) in the form of a function G. In this section, we will limit our presentation to two main well-known problems in geophysics: forward and inverse problems. The relation between the observations and the unknown parameters is described by Eq. (4.50) as:

G m = d    (4.50)

where
G: mathematical description of the physical process under study, described by a linear or nonlinear system of equations (denoted B in this book).
m: model parameters (denoted Δ in this book).
d: observational data (denoted F in this book).

(A) Forward Problem
In the forward problem, m and G are given, and we want to compute d at discrete locations. In general, m(x) is a continuous distribution of some physical property, whereas d is a discrete vector of observations di, i = 1 : n.


One example is the two-way vertical travel time t of a seismic wave through M layers of thickness di and velocity vi. Then t is given by:

\[
t = 2 \sum_{i=1}^{M} \frac{d_i}{v_i}
\]

The forward problem involves the travel time as predicted data based on the mathematical model of how the seismic waves travel. When the thickness is known for each layer (such as from drilling), then only the M velocities would be considered as the model parameters. Accordingly, a particular travel time t is computed for each selected set of model parameters [81].

(B) Inverse Problem
In the inverse problem, G and d are given, and we want to estimate the model's unknown parameters m, which is the typical adjustment problem when we have redundant observations (an overdetermined system). For the previous example, the travel times can be inverted to determine the layer velocities. It should be noted that the mathematical model relating the travel time to layer thickness and velocity information should be known. In summary:

Forward problem: Model parameters → Forward modeling → Predicted data
Inverse problem: Observed data → Inverse modeling → Parameter estimation

Therefore, in the context of adjustment techniques, we are more interested in solving the inverse problems of geophysical computations. Fig. 4.26 illustrates schematically the relation between observations and parameters in the forward and inverse problems.
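The two-way travel-time formula is a direct sum; for instance, with made-up layer thicknesses and velocities (a Python illustration, not from the book):

```python
# Two-way vertical travel time through M horizontal layers: t = 2 * sum(d_i / v_i).
# Thicknesses [km] and velocities [km/s] below are illustrative values only.
thickness = [0.5, 1.2, 2.0]
velocity  = [1.8, 2.5, 3.2]

t = 2 * sum(d / v for d, v in zip(thickness, velocity))
print(round(t, 4))   # two-way time in seconds
```

Given observed travel times for many source-receiver pairs, the inverse problem treats the velocities (or thicknesses) as the unknowns m in Gm = d.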

FIG. 4.26  Forward and inverse problem.

Some examples of the inverse problem in geophysics are:
• Travel time inverse problems
• Seismic tomography
• Location of earthquakes
• Global electromagnetics
• Reflection seismology

In the following sections, we will discuss the problem of earthquake location with an illustrative example.


4.9.1 Seismic Waves and Earth's Interior

Seismic waves are propagating vibrations that carry energy from the source of the shaking outward in all directions (Fig. 4.27). This concept is more complicated than, but similar to, the circular waves that spread over the surface of a pond when a stone is thrown into the water [85]. There are many different seismic waves, but here we list only two types:
• Compressional or P waves (for primary).
• Transverse or S waves (for secondary).

FIG. 4.27 An earthquake radiates P and S waves in all directions [85].

An earthquake radiates P and S waves, which are also called body waves, in all directions, and the interaction of these waves with Earth's surface produces surface waves. Accordingly, if a seismometer is placed on the Earth's surface at (Xi, Yi, Zi), the observed travel time from the earthquake location (X, Y, Z) to the seismometer is calculated as:

travel time = (distance from earthquake to seismometer) / (seismic wave speed)    (4.51)

Travel time is the relative time that the wave consumes to complete its journey. Primary P waves are the first waves to arrive at the surface because they travel the fastest, at speeds between 1 and 14 km/s. Secondary, or S, waves travel slower than P waves, and they vibrate the ground in the "transverse" direction, perpendicular to the direction that the wave is traveling [85]. The fact that P and S waves travel at different speeds is used in the earthquake location computation. When an earthquake occurs, the P and S waves travel outward from the region of the fault that cracked, and the P waves arrive at the seismometer first, followed by the S wave. Once the S wave arrives, we can measure the time interval between the P-wave and the S-wave shaking. Hence:

The travel time of the P wave = distance from earthquake / (P wave speed)
The travel time of the S wave = distance from earthquake / (S wave speed)
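Subtracting these two relations gives the classic single-station distance estimate: Δt = d/Vs - d/Vp, so d = Δt / (1/Vs - 1/Vp). A small Python sketch with illustrative velocities (not a worked example from the book):

```python
Vp, Vs = 6.5, 3.6517      # illustrative P and S wave speeds [km/s]

def distance_from_sp(dt):
    # S-P arrival-time difference dt [s] -> source-to-station distance [km]
    return dt / (1.0/Vs - 1.0/Vp)

d = distance_from_sp(10.0)
# round-trip consistency: the S-P delay recomputed from d equals the input
print(abs((d/Vs - d/Vp) - 10.0) < 1e-9)
```

One S-P interval constrains only the distance (a sphere around the station); locating the hypocenter requires several stations, which leads to the least squares formulation of the next section.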


4.9.2 Earthquake Localization and Least Squares Adjustment

The earthquake location problem is a nonlinear relationship between the observed arrival times and the desired spatial and temporal coordinates of the source. This nonlinearity arises because the model-predicted travel time is a function of distance and depth. Geiger's method, developed at the beginning of the 20th century [86], is the most classical and widely used earthquake source location method [83]. The method is mathematically presented by defining the coordinates of the earthquake (X, Y, Z, to), where X, Y, and Z are the spatial coordinates that define the hypocenter, while the two surface coordinates X, Y define the epicenter (Fig. 4.28).

FIG. 4.28  Hypocenter and epicenter of an earthquake [87].

to: origin time.

If a receiver is placed on the Earth's surface at point i, then the coordinates of the ith station are (Xi, Yi, Zi) and the corresponding observed arrival time of a P or S wave is ti. Applying Geiger's method is based on finding arrival time functions. Arrival times are affected by many factors, such as the relative orientation of the source and sensors, the structure of the media where stress waves spread, and the geometry of the structure under study. A simplified travel time model is adopted for practical and theoretical reasons, formulated in Eq. (4.52), which represents an arrival time function for a homogeneous velocity model:

\[
F_i = F_i(X, Y, Z, to) = to + \frac{1}{V_i}\sqrt{(X_i - X)^2 + (Y_i - Y)^2 + (Z_i - Z)^2}
\tag{4.52}
\]

where the unknowns X, Y, and Z are the coordinates of the earthquake event, to is the origin time of this event, Xi, Yi, and Zi are the coordinates of the ith sensor, and Vi is the velocity of the stress wave. Geiger's method puts no restrictions on the velocity model to be utilized, as far as arrival time functions can be observed and their first-order derivatives can be evaluated. When more


than four travel times are observed, the least squares adjustment v + BΔ = F can be applied as follows:

\[
\begin{bmatrix} v_{t1} \\ v_{t2} \\ \vdots \\ v_{tn} \end{bmatrix}
+
\begin{bmatrix}
\frac{\partial F_1}{\partial to} & \frac{\partial F_1}{\partial X} & \frac{\partial F_1}{\partial Y} & \frac{\partial F_1}{\partial Z} \\
\vdots & & & \vdots \\
\frac{\partial F_n}{\partial to} & \frac{\partial F_n}{\partial X} & \frac{\partial F_n}{\partial Y} & \frac{\partial F_n}{\partial Z}
\end{bmatrix}
\begin{bmatrix} \delta to \\ \delta X \\ \delta Y \\ \delta Z \end{bmatrix}
=
\begin{bmatrix} (t_{observed} - t_{predicted})_1 \\ \vdots \\ (t_{observed} - t_{predicted})_n \end{bmatrix}
\tag{4.53}
\]

where

\[
\frac{\partial F_i}{\partial X} = -\frac{X_i - X_o}{V_i R}, \quad
\frac{\partial F_i}{\partial Y} = -\frac{Y_i - Y_o}{V_i R}, \quad
\frac{\partial F_i}{\partial Z} = -\frac{Z_i - Z_o}{V_i R}, \quad
\frac{\partial F_i}{\partial to} = 1
\tag{4.54}
\]

\[
R = \sqrt{(X_i - X_o)^2 + (Y_i - Y_o)^2 + (Z_i - Z_o)^2}, \qquad t_{predicted} = R / V
\]
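The iteration implied by Eqs. (4.52)-(4.54) can be sketched as a Gauss-Newton loop on synthetic, noise-free data. This Python/NumPy illustration is not the book's code; the station layout, velocity, and source are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
stations = rng.uniform(-30.0, 30.0, size=(8, 3))
stations[:, 2] = 0.0                         # receivers on the ground surface
V = 6.5                                      # homogeneous wave speed [km/s]
src = np.array([5.0, -4.0, -12.0])           # true hypocenter (made up)
t0_true = 1.5                                # true origin time [s]
t_obs = t0_true + np.linalg.norm(stations - src, axis=1) / V

x = np.array([0.0, 0.0, -5.0]); t0 = 0.0     # initial estimates
for _ in range(20):
    R = np.linalg.norm(stations - x, axis=1)
    F = t_obs - (t0 + R / V)                 # observed minus predicted times
    B = np.hstack([np.ones((len(R), 1)),     # dF/dto = 1
                   -(stations - x) / (V * R[:, None])])  # dF/dX, dF/dY, dF/dZ
    corr = np.linalg.lstsq(B, F, rcond=None)[0]
    t0 += corr[0]; x += corr[1:]

R = np.linalg.norm(stations - x, axis=1)
print(abs(t_obs - (t0 + R / V)).max() < 1e-6)   # residuals vanish at a solution
```

With all receivers at the surface, the depth has a mirror ambiguity (±Z fit the data equally), which is why a sensible initial depth estimate matters in practice.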

EXAMPLE 4.17
Given
An earthquake is simulated where its initial estimated hypocenter and origin time are (Table 4.18):

TABLE 4.18  Initial Estimated Hypocenter and Origin Time

X      Y      Z      to
40.00  30.00  20.00  0.00

Fifteen simulated seismometer stations are located as given in Table 4.19, with the observed P- and S-wave arrival times at each station; the P- and S-wave velocities are 6.5 km/s and 3.6517 km/s, respectively (Fig. 4.29).

TABLE 4.19  Fifteen Simulated Seismometer Stations

    X[m]   Y[m]   Z[m]  P wave   S wave
1   30.00  30.00  0.58  12.6359  22.2787
2   10.00  30.00  0.01  9.8547   17.1955
3   0.00   30.00  0.92  8.5148   14.8503
4   10.00  10.00  0.25  8.3826   14.7813
5   30.00  10.00  0.45  6.7526   12.0118
6   30.00  10.00  0.34  13.1957  23.3353
7   10.00  0.00   0.39  11.4967  20.0248
8   0.00   0.00   0.30  10.2457  18.2507
9   10.00  0.00   0.03  9.4101   16.4388
10  30.00  10.00  0.54  9.1059   16.2373
11  30.00  10.00  0.92  14.6861  25.7746
12  10.00  10.00  0.28  12.1925  21.7446
13  0.00   30.00  0.91  13.5886  23.9638
14  10.00  30.00  0.60  12.7827  22.7309
15  30.00  30.00  0.13  11.8174  21.0473

FIG. 4.29  Earthquake location problem (ground surface and predicted earthquake location).

Required
Assuming a point source and a constant-velocity medium, determine the hypocenter location and the origin time to of the earthquake.

Solution
The least squares adjustment of observations is applied as follows, where the observation matrices of the first two iterations are listed for clarity.


4.9 LEAST SQUARES ADJUSTMENT IN COMPUTATIONAL GEOPHYSICS

B1 (30 × 4; rows 1–15 are the P-wave equations, rows 16–30 the S-wave equations; columns ∂F/∂X, ∂F/∂Y, ∂F/∂Z, ∂F/∂to) and the misclosure vector F1:

B1                                          F1
 0.1599    0        -0.047    1     |     0.4756
 0.1547    0        -0.0619   1     |     0.8786
 0.1477    0        -0.0772   1     |     0.9914
 0.1209   -0.0806   -0.0816   1     |     1.4901
 0.055    -0.11     -0.1125   1     |     1.7021
 0.1543   -0.0441   -0.0449   1     |     0.5975
 0.1349   -0.0809   -0.055    1     |     1.2013
 0.1235   -0.0927   -0.0627   1     |     1.2518
 0.1066   -0.1066   -0.0712   1     |     1.5905
 0.0362   -0.1447   -0.0743   1     |     1.4286
 0.1401   -0.08     -0.0419   1     |     0.8042
 0.1241   -0.0993   -0.0503   1     |     0.9979
 0.0888   -0.1332   -0.0464   1     |     1.0749
 0.0713   -0.1425   -0.0489   1     |     1.087
 0.026    -0.1561   -0.0524   1     |     1.1387
 0.2878    0        -0.0846   1     |     0.3902
 0.2785    0        -0.1115   1     |     1.0385
 0.2658    0        -0.139    1     |     1.3082
 0.2176   -0.1451   -0.1469   1     |     2.3747
 0.099    -0.198    -0.2025   1     |     2.9209
 0.2778   -0.0794   -0.0807   1     |     0.6585
 0.2428   -0.1457   -0.099    1     |     1.4931
 0.2224   -0.1668   -0.1129   1     |     2.0616
 0.1918   -0.1918   -0.1281   1     |     2.3635
 0.0651   -0.2605   -0.1338   1     |     2.4182
 0.2521   -0.1441   -0.0753   1     |     0.7872
 0.2233   -0.1787   -0.0906   1     |     1.5943
 0.1598   -0.2397   -0.0836   1     |     1.4392
 0.1283   -0.2565   -0.0881   1     |     1.6786
 0.0468   -0.2809   -0.0943   1     |     1.8256

B2 and F2 at the second iteration (same column arrangement):

B2                                          F2
 0.15     -0.0056   -0.0725   1     |    -0.27
 0.1381   -0.0073   -0.093    1     |    -0.1085
 0.1253   -0.0084   -0.1096   1     |    -0.2182
 0.0961   -0.0776   -0.1119   1     |    -0.0116
 0.0326   -0.0927   -0.1346   1     |    -0.3213
 0.144    -0.0478   -0.0691   1     |    -0.2357
 0.1201   -0.0815   -0.0818   1     |     0.0902
 0.106    -0.0909   -0.091    1     |    -0.0183
 0.0867   -0.101    -0.1002   1     |     0.1405
 0.0244   -0.1306   -0.1007   1     |    -0.2664
 0.1305   -0.0817   -0.0637   1     |    -0.0948
 0.1112   -0.0986   -0.0754   1     |    -0.1099
 0.0787   -0.1298   -0.0688   1     |    -0.12
 0.0613   -0.1372   -0.0721   1     |    -0.2016
 0.0187   -0.1471   -0.0762   1     |    -0.3179
 0.27     -0.0101   -0.1304   1     |    -0.6925
 0.2486   -0.0132   -0.1674   1     |    -0.4789
 0.2256   -0.0151   -0.1972   1     |    -0.6097
 0.1731   -0.1396   -0.2014   1     |    -0.0689
 0.0588   -0.1669   -0.2422   1     |    -0.4618
 0.2591   -0.086    -0.1243   1     |    -0.5818
 0.2163   -0.1468   -0.1473   1     |    -0.2476
 0.1908   -0.1637   -0.1637   1     |     0.0349
 0.1561   -0.1819   -0.1804   1     |     0.0129
 0.0438   -0.2351   -0.1812   1     |    -0.3734
 0.2349   -0.1471   -0.1147   1     |    -0.5717
 0.2001   -0.1776   -0.1358   1     |    -0.1403
 0.1417   -0.2336   -0.1239   1     |    -0.4523
 0.1103   -0.247    -0.1297   1     |    -0.3814
 0.0336   -0.2647   -0.1371   1     |    -0.5368

Similarly, the normal equation matrices N, the vectors T, and the corrections Δ of the first two iterations are:

N1                                      T1          Δ1
 7.973    3.813    6.019    1.154   |    5.957   |   0.773
 3.813    7.360    6.731    0.875   |    5.869   |   0.183
 6.019    6.731   32.440    1.089   |    4.079   |   1.155
 1.154    0.875    1.089    0.409   |   41.062   |   0.012

N2                                      T2          Δ2
 8.609    3.974    4.930    0.995   |    1.147   |   0.017
 3.974    8.481    5.309    0.854   |    0.896   |   0.011
 4.930    5.309   20.510    1.259   |    1.031   |   0.004
 0.995    0.854    1.259    0.419   |    7.613   |   0.005

The adjustment is iterated and stopped at the fifth iteration where the corrections become negligible as shown in Fig. 4.30.

FIG. 4.30 Adjustment iterations of the hypocenter and origin time.

The final adjusted earthquake hypocenter and origin time are listed in Table 4.20:

TABLE 4.20 The Final Adjusted Earthquake Hypocenter and Origin Time

X         Y         Z        to
0.3315    32.363    31.1     0.3315

MATLAB code

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%  Example 4.17 Earthquake location [88]  %%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clc; clear; close all;
%%%%%%%%%%%%%%%%%%%%%%% seismometer stations coordinates %%%%%%%%%%%%%%%%%%
sxyz=[-30.0000  -30.0000   0.5769
      -10.0000  -30.0000   0.0134
        0.0000  -30.0000   0.9204
       10.0000  -10.0000   0.2548
       30.0000  -10.0000   0.4519
      -30.0000  -10.0000   0.3413
      -10.0000    0.0000   0.3921
        0.0000    0.0000   0.2996
       10.0000    0.0000   0.0313
       30.0000   10.0000   0.5396
      -30.0000   10.0000   0.9158
      -10.0000   10.0000   0.2848
        0.0000   30.0000   0.9119
       10.0000   30.0000   0.6015
       30.0000   30.0000   0.1315];
sx=sxyz(:,1); sy=sxyz(:,2); sz=sxyz(:,3);
ns = length(sx); % number of stations
ne = 1;          % number of earthquakes
M  = 4*ne;       % 4 model parameters, x, y, z, and t0, per earthquake
Nr = ne*ns;      % number of rays, that is, earthquake-station pairs
N  = 2*ne*ns;    % total data; ne*ns for P and S each
% observed data: arrival times (P, S) for each station-source pair
wave=[12.6359  22.2787
       9.8547  17.1955
       8.5148  14.8503
       8.3826  14.7813
       6.7526  12.0118
      13.1957  23.3353
      11.4967  20.0248
      10.2457  18.2507
       9.4101  16.4388
       9.1059  16.2373
      14.6861  25.7746
      12.1925  21.7446
      13.5886  23.9638
      12.7827  22.7309
      11.8174  21.0473];
p_wave=wave(:,1); s_wave=wave(:,2);
tobs=[p_wave; s_wave];
%% Defining the predicted arrival times using Geiger's method
% initial guess of the earthquake location
m_ini =[40; -30; -20; 0];
MM=zeros(4,5);
% velocity parameters (vs = 10/3 km/s reproduces the tabulated matrices above)
vp=6; vs=10/3;


%The iterative LS adjustment
for iter=1:5 % (termination criterion)
    j=1;
    tpre = zeros(N,1); % allocate space for the predicted data
    for i = 1:ns % loop over stations
        dx = m_ini(1)-sx(i); % x-component of displacement (from the current guess)
        dy = m_ini(2)-sy(i); % y-component of displacement
        dz = m_ini(3)-sz(i); % z-component of displacement
        R = sqrt( dx^2 + dy^2 + dz^2 ); % source-receiver distance
        k=(i-1)*ne+j; % index for each ray
        tpre(k)=R/vp+m_ini(4);    % predicted P wave arrival time
        tpre(Nr+k)=R/vs+m_ini(4); % predicted S wave arrival time
        % First half of the data coefficient matrix, corresponding to the P wave
        B(k,j)      = dx/(R*vp); % first column of the data kernel matrix
        B(k,ne+j)   = dy/(R*vp); % second column
        B(k,2*ne+j) = dz/(R*vp); % third column
        B(k,3*ne+j) = 1;         % fourth column
        % Second half of the data coefficient matrix, corresponding to the S wave
        B(Nr+k,j)      = dx/(R*vs);
        B(Nr+k,ne+j)   = dy/(R*vs);
        B(Nr+k,2*ne+j) = dz/(R*vs);
        B(Nr+k,3*ne+j) = 1;
    end
    % solve with least squares
    F = tobs-tpre;
    btb = inv(B'*B);
    del = btb*B'*F;
    m_ini = m_ini+del; % updated model parameters
    MM(:,iter) = del;
end
MM(:,1)=[];
% Final adjusted parameters
xpre  = m_ini(1); % x0
ypre  = m_ini(2); % y0
zpre  = m_ini(3); % z0
t0pre = m_ini(4); % t0
% Generating the final predicted data
tpre=zeros(N,1);
for i = 1:ns % loop over stations
    dx = m_ini(1)-sx(i);
    dy = m_ini(2)-sy(i);
    dz = m_ini(3)-sz(i);
    r = sqrt( dx^2 + dy^2 + dz^2 );


    k=(i-1)*ne+j;
    tpre(k)=r/vp+m_ini(4);    % P-wave arrival time
    tpre(Nr+k)=r/vs+m_ini(4); % S-wave arrival time
end
%%%%%%%%%%%%%%%%%%%%%%%%%%% PLOTTING %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
plot3(xpre, ypre, zpre,'o'); hold on
text(xpre, ypre, zpre,'EARTHQUAKE PREDICTED LOCATION','FontWeight','bold')
text(mean(sx),mean(sy), mean(sz)+3,'GROUND SURFACE','FontWeight','bold')
plot3(m_ini(1),m_ini(2),m_ini(3),'ro'); hold on
plot3(sx,sy,sz,'r^','MarkerFaceColor','b'); hold on
% PLOT GROUND SURFACE
[xq,yq] = meshgrid(-30:1:30, -30:1:30);
vq = griddata(sx,sy,sz,xq,yq);
surfl(xq,yq,vq)
shading interp; alpha(.8)
hold on
%%%%%%%%%%%% PLOT RAYS FROM RECEIVERS TO EARTHQUAKE LOCATION %%%%%%%%%%%%%
for i=1:length(sx)
    plot3([xpre;sx(i)],[ypre;sy(i)],[zpre;sz(i)],'b--'); hold on
end
zlim([zpre-5,10])
grid on
axis equal
view(78,12)
%%%%%%%%%%%%%%%% PREDICTED EARTHQUAKE LOCATION
[xpre, ypre, zpre, t0pre]
figure(2)
plot((1:size(MM,2)), MM(1,:),'-ro','Markerfacecolor','r','LineWidth',2), hold on
plot((1:size(MM,2)), MM(2,:),'-b^','Markerfacecolor','b','LineWidth',2), hold on
plot((1:size(MM,2)), MM(3,:),'-ms','Markerfacecolor','k','LineWidth',2), hold on
plot((1:size(MM,2)), MM(4,:),'-gs','Markerfacecolor','k','LineWidth',2), hold on
title('LEAST SQUARES ADJUSTMENT - CORRECTIONS CONVERGENCE')
xlabel('NUMBER OF ITERATIONS')
ylabel('CORRECTIONS TO INITIAL COORDINATES')
legend('dX','dY','dZ','dt')
xticks([0:1:size(MM,2)])
grid on
axis tight
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

C H A P T E R

5

Adjustment Using General Observation Model (Av + BΔ = F)

5.1 INTRODUCTION

The general adjustment model, sometimes called the mixed adjustment model, Av + BΔ = F, can be used to solve different problems in geomatics. It can be applied whenever the observation model combines several unknowns with more than one observed quantity (Fig. 5.1), such as cases of coordinate transformation, fitting problems, and image orientation problems in photogrammetry (Section 4.9). The critical advantage of this general model is that it lets us include, and investigate the quality of, what previous techniques assumed to be fixed (such as control points) in the adjustment (see Example 5.2).

5.2 DERIVATION OF LS ADJUSTMENT BY THE GENERIC MODEL

We previously presented the mathematical derivation of adjustment using observation and condition equations in detail in Chapter 3. In this section, the derivation of the general model is given, starting from the least squares principle:

$$\varphi = v^t w v \rightarrow \text{minimum} \tag{5.1}$$

With respect to the general observation Eq. (3.3), Av + BΔ = F, it is possible to solve the problem using the Lagrange multiplier k and mathematical optimization:

$$\varphi' = v^t w v - 2k^t \left(Av + B\Delta - F\right) \tag{5.2}$$

Adjustment Models in 3D Geomatics and Computational Geophysics, https://doi.org/10.1016/B978-0-12-817588-0.00005-2 © 2019 Elsevier Inc. All rights reserved.


FIG. 5.1 The flow diagram concept of the general least squares model [15]: a mathematical model relating m unknown parameters (x1 … xm) to n observations (l1 … ln) leads to the general model Av + BΔ = F; with one observation per equation, it reduces to v + BΔ = F.

To find the minimum of the function φ′, its partial derivatives with respect to v and Δ are equated to zero:

$$\frac{\partial \varphi'}{\partial v} = 2v^t w - 2k^t A = 0^t \tag{5.3}$$

$$\frac{\partial \varphi'}{\partial \Delta} = -2k^t B = 0^t \tag{5.4}$$

After transposing, simplifying, and rearranging, and noting that w = wᵗ is a diagonal matrix, we get:

$$wv - A^t k = 0 \tag{5.5}$$

$$B^t k = 0 \tag{5.6}$$

From Eq. (5.5), we get:

$$v = w^{-1} A^t k = Q A^t k \tag{5.7}$$

Substituting the residuals v into the general observation equation Av + BΔ = F, we get:

$$\left(AQA^t\right) k + B\Delta = F \tag{5.8}$$


Recognizing that Q = w⁻¹, Eq. (5.8) is simplified with:

$$Q_e = AQA^t \tag{5.9}$$

$$k = Q_e^{-1}\left(F - B\Delta\right) = w_e \left(F - B\Delta\right) \tag{5.10}$$

Therefore, the residuals vector v can be computed from Eq. (5.7) as:

$$v = w^{-1} A^t w_e \left(F - B\Delta\right) \tag{5.11}$$

Further, substituting the Lagrange multipliers k of Eq. (5.10) into Eq. (5.6), the following normal equations are attained:

$$\left(B^t w_e B\right)\Delta = B^t w_e F \tag{5.12}$$

Or simply:

$$N\Delta = t \;\rightarrow\; \Delta = N^{-1} t, \quad \text{where } N = B^t w_e B \tag{5.13}$$

$$t = B^t w_e F \tag{5.14}$$

The variance of unit weight is computed as:

$$\sigma_o^2 = \frac{v^t w v}{n - u} \tag{5.15}$$

Then the covariance matrix of the unknowns is evaluated as:

$$\Sigma_{xx} = \sigma_o^2 N^{-1} \tag{5.16}$$

And the covariance matrix of the observations is computed as:

$$\Sigma_{ll} = \sigma_o^2 \left( Q - Q A^t \left[ w_e - w_e B \left( B^t w_e B \right)^{-1} B^t w_e \right] A Q \right) \tag{5.17}$$

It should be noted that when applying the adjustment with the observation model v + BΔ = F, we assume the given variables (such as control points) are free of errors and trust them fully in the adjustment. In practice, control points are themselves derived by measurements that carry uncertainty, and this uncertainty should be taken into account. The general least squares adjustment makes that possible, as will be shown in the examples.
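As a compact illustration (a generic numpy sketch, not the book's code), one iteration of the general model can be wrapped in a helper that follows Eqs. (5.9)–(5.13) directly; A, B, and F are assumed to be already linearized at the current parameter values:

```python
import numpy as np

def general_ls_step(A, B, F, w):
    """One step of the general (mixed) model A v + B d = F, Eqs. (5.9)-(5.13)."""
    Q = np.linalg.inv(w)              # cofactor matrix of the observations
    we = np.linalg.inv(A @ Q @ A.T)   # equivalent weight w_e = (A Q A^t)^-1, Eq. (5.9)
    N = B.T @ we @ B                  # normal matrix, Eq. (5.13)
    t = B.T @ we @ F                  # Eq. (5.14)
    d = np.linalg.solve(N, t)         # corrections to the unknowns
    k = we @ (F - B @ d)              # Lagrange multipliers, Eq. (5.10)
    v = Q @ A.T @ k                   # residuals, Eq. (5.7)
    return d, v

# With A = I and w = I the model reduces to ordinary least squares:
A = np.eye(4)
w = np.eye(4)
B = np.column_stack([np.ones(4), np.arange(4.0)])
F = np.array([1.0, 3.0, 5.0, 7.0])   # exact line y = 1 + 2x
d, v = general_ls_step(A, B, F, w)
```

For a nonlinear model, this step would be repeated with re-linearized A, B, F until the corrections d become negligible.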

EXAMPLE 5.1. FITTING A SPHERE

As will be presented in Chapter 8, there are two methods to solve the sphere best-fit problem, nonlinear and linear, with a minimum of four observed points. In this example we apply the nonlinear method of best fitting. The sphere equation can be formulated as:

$$\left(x - x_c^o\right)^2 + \left(y - y_c^o\right)^2 + \left(z - z_c^o\right)^2 - R^2 = 0$$


where
• xc°, yc°, and zc° are the initial values of the best-fit coordinates of the sphere center.
• R is the radius of the best-fit sphere.
• x, y, and z are the fitting points.

In the form of the generic adjustment model:

$$\left((x_i + v_{x_i}) - x_c^o\right)^2 + \left((y_i + v_{y_i}) - y_c^o\right)^2 + \left((z_i + v_{z_i}) - z_c^o\right)^2 - R_o^2 = 0$$

The general adjustment observation model is Av + BΔ = F, and the matrices of partial derivatives are:

$$
\begin{bmatrix}
\frac{\partial F_1}{\partial x_1} & \frac{\partial F_1}{\partial y_1} & \frac{\partial F_1}{\partial z_1} & & & & \\
& & & \ddots & & & \\
& & & & \frac{\partial F_n}{\partial x_n} & \frac{\partial F_n}{\partial y_n} & \frac{\partial F_n}{\partial z_n}
\end{bmatrix}
\begin{bmatrix} v_{x_1} \\ v_{y_1} \\ v_{z_1} \\ \vdots \\ v_{z_n} \end{bmatrix}
+
\begin{bmatrix}
\frac{\partial F_1}{\partial x_c} & \frac{\partial F_1}{\partial y_c} & \frac{\partial F_1}{\partial z_c} & \frac{\partial F_1}{\partial R} \\
\vdots & \vdots & \vdots & \vdots \\
\frac{\partial F_n}{\partial x_c} & \frac{\partial F_n}{\partial y_c} & \frac{\partial F_n}{\partial z_c} & \frac{\partial F_n}{\partial R}
\end{bmatrix}
\begin{bmatrix} \delta x_c \\ \delta y_c \\ \delta z_c \\ \delta R \end{bmatrix}
=
\begin{bmatrix}
R^2 - (x_1 - x_c^o)^2 - (y_1 - y_c^o)^2 - (z_1 - z_c^o)^2 \\
\vdots \\
R^2 - (x_n - x_c^o)^2 - (y_n - y_c^o)^2 - (z_n - z_c^o)^2
\end{bmatrix}
$$

where

∂F1/∂x1 = 2(x1 − xc°)      ∂F1/∂xc = −2(x1 − xc°)
∂F1/∂y1 = 2(y1 − yc°)      ∂F1/∂yc = −2(y1 − yc°)
∂F1/∂z1 = 2(z1 − zc°)      ∂F1/∂zc = −2(z1 − zc°)
etc.                       ∂F1/∂R = −2R°
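These partial derivatives map directly to code. The helper below (an illustrative numpy sketch, not the book's MATLAB routine) builds the A-row, B-row, and misclosure F for one fitting point:

```python
import numpy as np

def sphere_rows(p, center, R):
    """One equation of the sphere model (x-xc)^2 + (y-yc)^2 + (z-zc)^2 - R^2 = 0:
    returns the A-row (w.r.t. the observed x, y, z), the B-row
    (w.r.t. xc, yc, zc, R), and the misclosure F."""
    d = np.asarray(p, float) - np.asarray(center, float)
    A_row = 2.0 * d                           # dF/dx, dF/dy, dF/dz
    B_row = np.append(-2.0 * d, -2.0 * R)     # dF/dxc, dF/dyc, dF/dzc, dF/dR
    F = R**2 - d @ d                          # right-hand side
    return A_row, B_row, F

# a point lying exactly on a sphere of center (1, 2, 3) and radius 5:
A_row, B_row, F = sphere_rows([6.0, 2.0, 3.0], [1.0, 2.0, 3.0], 5.0)
```

Stacking one such row triple per point yields the full A (block-diagonal), B, and F matrices used in the adjustment.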

It should be noted that the adjustment with observations method assumes only one observation per equation, and therefore the A matrix reduces to the identity matrix:

v + BΔ = F

Given
• 3D points as follows (Fig. 5.2, Table 5.1).
• All the measured points have the following standard deviations: σX = 4 cm, σY = 6 cm, σZ = 10 cm.
• Initial values of the sphere parameters: xc° = 0, yc° = 0, zc° = 0, R = 5.

Required
• Find the best-fit sphere using the general least squares adjustment method.


FIG. 5.2 Best fitting sphere of the given points in Table 5.1.

Solution
For every observed point, we compute the 3 × 3 weight matrix Wi as:

$$W_i = \begin{bmatrix} 1/0.04^2 & & \\ & 1/0.06^2 & \\ & & 1/0.10^2 \end{bmatrix}$$


TABLE 5.1 The Sphere Fitting Points Coordinates

Point    X        Y        Z
1       -5.00     5.54     1.13
2       -5.93    -0.08     2.78
3        7.66    -0.77     2.61
4       -5.91     2.04     1.09
5       -1.86    -3.35    -0.83
6        7.89     3.98     2.03
7        5.68     3.34    -2.37
8       -0.08    -4.85     1.11
9       -6.10     1.24     2.42
10      -6.17     1.24     2.88

The total weight matrix for the 10 fitting points is arranged as:

W = diagonal[ W1 W2 W3 W4 W5 W6 W7 W8 W9 W10 ]

For clarity, the general observation equations system is represented as follows:

In the following, we list the first two iterations of the A matrices as follows: A matrix at first iteration:


A matrix at second iteration:

The B and F matrices for the first two iterations are listed as follows (one row per fitting point; B columns: ∂F/∂xc, ∂F/∂yc, ∂F/∂zc, ∂F/∂R):

B1                                    F1          B2                                    F2
 10.00   -11.08   -2.26   -10.00  |  -31.97  |   12.05    -7.03    3.79   -12.57  |  -12.73
 11.86     0.16   -5.56   -10.00  |  -17.90  |   13.91     4.21    0.49   -12.57  |  -13.36
-15.32     1.54   -5.22   -10.00  |  -41.08  |  -13.27     5.59    0.83   -12.57  |  -12.53
 11.82    -4.08   -2.18   -10.00  |  -15.28  |   13.87    -0.03    3.87   -12.57  |  -12.33
  3.72     6.70    1.66   -10.00  |    9.63  |    5.77    10.75    7.71   -12.57  |  -12.58
-15.78    -7.96   -4.06   -10.00  |  -57.21  |  -13.73    -3.91    1.99   -12.57  |  -12.45
-11.36    -6.68    4.74   -10.00  |  -24.03  |   -9.31    -2.63   10.79   -12.57  |  -13.00
  0.16     9.70   -2.22   -10.00  |    0.24  |    2.21    13.75    3.83   -12.57  |  -12.67
 12.20    -2.48   -4.84   -10.00  |  -19.60  |   14.25     1.57    1.21   -12.57  |  -12.24
 12.34    -2.48   -5.76   -10.00  |  -22.90  |   14.39     1.57    0.29   -12.57  |  -12.90

The general least squares adjustment is then applied using Eq. (5.12), (BᵗweB)Δ = BᵗweF, where we = (AQAᵗ)⁻¹. The iterations of the converged corrections are illustrated in Fig. 5.3, and the variance of unit weight is σo² = 0.134. The final adjusted parameters with their estimated standard deviations are:

xc = 1.02 m ± 0.7 cm, yc = 2.02 m ± 1.6 cm, zc = 3.04 m ± 3.3 cm, R = 7.22 m ± 0.9 cm

It should be noted that whenever the accuracy of the fitting points is degraded, the accuracy of the best-fit sphere parameters is degraded as well. Fig. 5.4 illustrates the relation between the fitting-point accuracy and the quality of the fitting parameters.


FIG. 5.3 Convergence of corrections of sphere parameters in Example 5.1.

FIG. 5.4 The relation between the fitting points accuracy (x-axis: Std. Dev. of the fitting points) and the quality of the fitting parameters (y-axis: Std. Dev. of the estimated sphere parameters) in the sphere fitting example.


MATLAB code

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%% chapter 5 - example 5.1 %%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clc,clear,close all
%%%%%%%%%%%%%%%%% initial values
xo=0; yo=0; zo=0; r=5;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% observed points
P=[ -5.00   5.54   1.13
    -5.93  -0.08   2.78
     7.66  -0.77   2.61
    -5.91   2.04   1.09
    -1.86  -3.35  -0.83
     7.89   3.98   2.03
     5.68   3.34  -2.37
    -0.08  -4.85   1.11
    -6.10   1.24   2.42
    -6.17   1.24   2.88];
%%%%%%%%%%%%%%%%%%%%%%%%%%%%% LS method %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
[Center,Radius,DD] = mixed_mode_sphereFit(P,.04,.06,.10);
disp(' adjusted values')
Center,Radius
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
figure
plot3(P(:,1),P(:,2),P(:,3),'ro','markersize',8,'markerfacecolor','k')
hold on;
[Base_X,Base_Y,Base_Z] = sphere(20);
surf(Radius*Base_X+Center(1), Radius*Base_Y+Center(2),...
     Radius*Base_Z+Center(3),'faceAlpha',0.3,'Facecolor','c')
view([45,28])
grid on
axis image
legend({'Data Points','Best Fit Sphere'},'location','W')
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
figure
d1=DD(1:4,:);
plot((1:size(d1,2)), d1(1,:),'-ro','Markerfacecolor','r','LineWidth',2), hold on
plot((1:size(d1,2)), d1(2,:),'-b^','Markerfacecolor','b','LineWidth',2), hold on
plot((1:size(d1,2)), d1(3,:),'-ms','Markerfacecolor','k','LineWidth',2), hold on
plot((1:size(d1,2)), d1(4,:),'-gs','Markerfacecolor','m','LineWidth',2), hold on


xlabel('Iterations')
ylabel('Corrections [m]')
grid on
axis tight
legend('xo correction','yo correction','zo correction','Radius correction')
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [Center,Radius,DD] = mixed_mode_sphereFit(X,s1,s2,s3)
%% initial values
xo=0; yo=0; zo=0; R=5;
%% weight matrix
w= ([1/s1^2,1/s2^2,1/s3^2]);
w= diag([w,w,w,w,w,w,w,w,w,w]);
D=[1;1;1;1]; vv=0; DD=[0;0;0;0];
% NOTE: the body of this loop was lost in the source; the lines below are a
% reconstruction that follows Eqs. (5.9)-(5.13) and the matrices of Example 5.1.
while (max(abs(D)))>.00001 && vv<100
    vv=vv+1;
    n=size(X,1);
    A=zeros(n,3*n); B=zeros(n,4); F=zeros(n,1);
    for i=1:n
        dx=X(i,1)-xo; dy=X(i,2)-yo; dz=X(i,3)-zo;
        A(i,3*i-2:3*i)=[2*dx, 2*dy, 2*dz];  % dF/dx, dF/dy, dF/dz
        B(i,:)=[-2*dx, -2*dy, -2*dz, -2*R]; % dF/dxc, dF/dyc, dF/dzc, dF/dR
        F(i)=R^2-(dx^2+dy^2+dz^2);          % misclosure
    end
    we=inv(A*inv(w)*A');       % equivalent weight, Eq. (5.9)
    D=(B'*we*B)\(B'*we*F);     % corrections, Eq. (5.13)
    xo=xo+D(1); yo=yo+D(2); zo=zo+D(3); R=R+D(4);
    DD=[DD,D];
end
DD(:,1)=[];
Center=[xo,yo,zo]; Radius=R;
end

C H A P T E R

6

Adjustment with Constraints

… if the number of unknown parameters exceeds the minimum number required (u > u′), then there should be s constraint equations, s = u − u′. It should be noted that s must be less than u to avoid having an inhomogeneous system of observation equations that cannot be solved (s < u). Finally, the previous discussion leads us to conclude that the number of equations c plus the number of constraints s equals the redundancy r plus the total number of unknowns u:

$$c + s = r + u \;\rightarrow\; r = c - u + s \tag{6.1}$$

6.2 MATHEMATICAL DERIVATION

In this section, we present the derivation of the least squares adjustment with constraints to give a better understanding of the theory behind it. There are two systems of equations: c observation equations (6.2) and s constraint equations (6.3):

$$A_{c,n}\, v_{n,1} + B_{c,u}\, \Delta_{u,1} = f_{c,1} \tag{6.2}$$

$$C_{s,u}\, \Delta_{u,1} = g_{s,1} \tag{6.3}$$

These two systems of equations can be solved using the Lagrange multipliers k (for the observation equations) and kc (for the constraints), respectively. The mathematical derivation follows the least squares principle φ = vᵗwv → minimum:

$$\varphi' = v^t w v - 2k^t \left(Av + B\Delta - f\right) - 2k_c^t \left(C\Delta - g\right) = \text{minimum} \tag{6.4}$$

To find the minimum of φ′, its partial derivatives with respect to v and Δ are computed and equated to zero:

$$\frac{\partial \varphi'}{\partial v} = 2v^t w - 2k^t A = 0^t \tag{6.5}$$

$$\frac{\partial \varphi'}{\partial \Delta} = -2k^t B - 2k_c^t C = 0^t \tag{6.6}$$


After simplification and rearrangement, we get a linear system of (n + u) equations:

$$wv - A^t k = 0 \tag{6.7}$$

$$B^t k + C^t k_c = 0 \tag{6.8}$$

From Eq. (6.7), we get:

$$v = w^{-1} A^t k = Q A^t k \tag{6.9}$$

Substituting Eq. (6.9) into the general observation Eq. (3.3), we get:

$$k = \left(AQA^t\right)^{-1} \left(f - B\Delta\right) \tag{6.10}$$

Let Qe = AQAᵗ; then:

$$k = Q_e^{-1} \left(f - B\Delta\right) = w_e \left(f - B\Delta\right) \tag{6.11}$$

Substituting the value of k above into Eq. (6.8) yields the normal equations:

$$-\left(B^t w_e B\right)\Delta + B^t w_e f + C^t k_c = 0 \tag{6.12}$$

Or:

$$N\Delta - C^t k_c = t \tag{6.13}$$

Therefore, the correction vector can be computed as:

$$\Delta = N^{-1}\left(t + C^t k_c\right) = N^{-1} t + N^{-1} C^t k_c = \Delta_o + \delta\Delta \tag{6.14}$$

To compute the Lagrange multipliers kc, Δ can be substituted into Eq. (6.3):

$$k_c = \left(C N^{-1} C^t\right)^{-1} \left(g - C N^{-1} t\right) = M^{-1}\left(g - C\Delta_o\right) \tag{6.15}$$

where

$$M = C N^{-1} C^t, \quad \Delta_o = N^{-1} t, \quad \delta\Delta = N^{-1} C^t k_c \tag{6.16, 6.17}$$

Accordingly, the correction vector can be computed as:

$$\Delta = \Delta_o + N^{-1} C^t \left(C N^{-1} C^t\right)^{-1}\left(g - C\Delta_o\right) = \Delta_o + N^{-1} C^t M^{-1}\left(g - C\Delta_o\right) \tag{6.18}$$

This adjustment method is called the direct method, and the variance-covariance matrix of the unknowns can be computed as:

$$Q_{\Delta\Delta} = N^{-1}\left(I - C^t M^{-1} C N^{-1}\right) \tag{6.19}$$

Or, in terms of N⁻¹ and C:

$$Q_{\Delta\Delta} = N^{-1}\left[I - C^t\left(C N^{-1} C^t\right)^{-1} C N^{-1}\right] \tag{6.20}$$

Another efficient method to solve the constrained least squares adjustment problem is to use the Helmert method. This depends on building a system of equations using both Eqs. 6.3 and 6.13 as:


$$\begin{bmatrix} N & -C^t \\ C & 0 \end{bmatrix} \begin{bmatrix} \Delta \\ k_c \end{bmatrix} = \begin{bmatrix} t \\ g \end{bmatrix} \tag{6.21}$$

This approach is efficient for complicated constrained adjustment problems where the normal equation matrix N is rank-deficient, that is, the determinant of N is zero. This will be discussed further in Section 6.4 on free adjustment. Another constrained adjustment method is over-weighting, which simply adds auxiliary observation equations with large weights for the constrained unknowns, as will be illustrated in the examples. An elimination method can also be used, but it is unsuitable for programming.
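The direct formulas and the bordered Helmert system can be cross-checked numerically. The sketch below uses a toy linear problem with made-up numbers (not from the book); both routes return the same corrections and satisfy the constraint exactly:

```python
import numpy as np

# Toy constrained least squares: B d ~ F subject to C d = g (made-up numbers).
B = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
F = np.array([1.0, 2.0, 2.5])
C = np.array([[1.0, -1.0]])    # constraint: d1 - d2 = g
g = np.array([0.5])

N = B.T @ B
t = B.T @ F
d0 = np.linalg.solve(N, t)                     # unconstrained solution
M = C @ np.linalg.solve(N, C.T)                # Eq. (6.16)
kc = np.linalg.solve(M, g - C @ d0)            # Eq. (6.15)
d_direct = d0 + np.linalg.solve(N, C.T @ kc)   # Eq. (6.18)

# Bordered (Helmert-type) system, cf. Eq. (6.21)
K = np.block([[N, -C.T], [C, np.zeros((1, 1))]])
rhs = np.concatenate([t, g])
d_helmert = np.linalg.solve(K, rhs)[:2]
```

The Helmert route is preferable when N is singular, since the bordered matrix K can remain invertible even then.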

EXAMPLE 6.1 Given Four distances measured from the fixed stations A, B, C, and D to point P with a standard deviation of 1 cm as shown in Fig. 6.2 and listed in the following Table 6.1 with the approximate coordinates of P.

TABLE 6.1 Observation Data of Example 6.1

Side    Distance (m)
AP      4.117
BP      6.474
CP      16.589
DP      38.576

Approximate P
Xp (m)    472744.872
Yp (m)    5191873.294
Zp (m)    513.68

FIG. 6.2 Intersection of distances in Example 6.1.

Required Determine the adjusted position of P and consider keeping distance AP fixed after adjustment.

Solution
The least squares adjustment is applied with one constraint equation fixing the distance AP. The solution shows the use of the direct, Helmert, and over-weighting methods, respectively. In Chapter 4, the observation equation of a distance in space Sij was given in Eq. (4.15) as:

$$v_{ij} = \frac{X_i - X_j}{S_{ij}}\,\delta X_i + \frac{Y_i - Y_j}{S_{ij}}\,\delta Y_i + \frac{Z_i - Z_j}{S_{ij}}\,\delta Z_i + \delta S_{ij}$$

We should notice that the fourth unknown parameter is the LaGrange multiplier kc , which can be considered as a byproduct of this method.


The adjusted coordinates and the residuals are listed at the end of this example, because the three methods give the same results. The variance-covariance matrix is computed from Eq. (6.20) as σo²N⁻¹[I − Cᵗ(CN⁻¹Cᵗ)⁻¹CN⁻¹].

(2) Direct method
Eqs. (6.16)–(6.18) are used at every iteration: M = CN⁻¹Cᵗ, Δo = N⁻¹t, Δ = Δo + N⁻¹CᵗM⁻¹(g − CΔo). Accordingly, the solution converges over five iterations.

(3) Over-weighting method
In this method, the distance AP is fixed by assigning it a very high weight (such as 10⁶), so that it receives effectively no correction. The weight matrix takes the following form:

       AP         BP     CP     DP
AP     1000000    0      0      0
BP     0          100    0      0
CP     0          0      100    0
DP     0          0      0      100

We list here the adjustment matrices of the first three iterations:


The following Fig. 6.3 and Table 6.2 illustrate the convergence of the corrections to zero after six iterations.

FIG. 6.3 Convergence of corrections to zero of Example 6.1.


TABLE 6.2 The Corrections of P Coordinates at Six Iterations

Iteration    ΔX        ΔY        ΔZ
1            0.9751    0.8141    -0.5220
2            0.0403    0.1603    -0.4163
3            0.0124    0.0161    -0.0504
4            0.0017    0.0015    -0.0031
5            0.0001    0.0001    -0.0002
6            0.0000    0.0000     0.0000

All three presented methods resulted in the same adjusted P, as follows:
– The residual errors (m) of the observed distances after adjustment are:

0.000    -0.106    -0.058    -0.025

which illustrates that the distance AP did not receive any correction, in contrast to the other observed distances. In over-weighting, the variance-covariance matrix is computed in the same way as in the unconstrained adjustment with observations, σo²N⁻¹, whereas in the Helmert and direct methods we used Eq. (6.20).
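The over-weighting idea is easy to demonstrate on a toy problem (made-up numbers, not this network): appending the constraint as a pseudo-observation with a huge weight reproduces the rigorously constrained solution to within rounding:

```python
import numpy as np

# Toy problem: B d ~ F subject to C d = g (illustrative numbers).
B = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
F = np.array([1.0, 2.0, 2.5])
C = np.array([[1.0, -1.0]])
g = np.array([0.5])

# Rigorous solution via the direct method, Eqs. (6.15)-(6.18)
N = B.T @ B
t = B.T @ F
d0 = np.linalg.solve(N, t)
M = C @ np.linalg.solve(N, C.T)
kc = np.linalg.solve(M, g - C @ d0)
d_exact = d0 + np.linalg.solve(N, C.T @ kc)

# Over-weighting: append the constraint as an observation with weight 1e6
Bo = np.vstack([B, C])
Fo = np.concatenate([F, g])
w = np.diag([1.0, 1.0, 1.0, 1e6])
d_ow = np.linalg.solve(Bo.T @ w @ Bo, Bo.T @ w @ Fo)
```

The over-weighted answer deviates from the rigorous one by roughly the inverse of the assigned weight, which is why a weight such as 10⁶ effectively freezes the constrained quantity.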

– The adjusted P coordinates are shown in Table 6.3:

TABLE 6.3 The Final Adjusted P Coordinates

Adjusted point P:              X = 472745.902 m,  Y = 5191874.286 m,  Z = 512.688 m
Std. deviations:               σX = 0.085 m,  σY = 0.125 m,  σZ = 0.124 m
Ellipsoid of errors elements:  a = 0.001 m,  b = 0.110 m,  c = 0.162 m


It should be noted that the estimated ellipsoid of errors a of the adjusted point P in the direction of the constrained distance AP will be much smaller compared to the other two errors b and c as represented by the ellipsoid of errors in Fig. 6.4.

FIG. 6.4 Ellipsoid of errors with an exaggeration factor of 10.

EXAMPLE 6.2

In a construction site, two images (Table 6.4) are intersected at target points P1 and P2 (Table 6.5), which are fixed on a cylindrical tower as shown in Fig. 6.5.

TABLE 6.4 Exterior Orientations of the Two Images

Image   Xo (m)        Yo (m)         Zo (m)    ω [deg.]     φ [deg.]    κ [deg.]
I1      645558.757    5774127.471    46.379    87.3442      13.5267     -90.2668
I2      645562.622    5774131.802    46.549    -109.5174    103.6263    110.1886

TABLE 6.5 Observed Image Coordinates

XP1 =        XP2 =
YP1 =        YP2 =
ZP1 =        ZP2 =


FIG. 6.5 Two-image triangulation with a constraint (the target points P1 and P2 on the tower are 2 m apart).

Given – Exterior orientation of the two images (camera coordinates + camera rotations). The camera focal length is 6.99 mm. – Observed image coordinates of P1 and P2 in the image center system p.p. and their initial space coordinates.

Required – Find the adjusted position of P1 and P2 while keeping the distance between them to be exactly 2 meters.

Solution The least squares constrained adjustment should be applied to this problem of image triangulation (Section 4.7.2). The nonlinear triangulation equations are used as the mathematical model with the addition of the distance constraint observation Eq. (4.15) between P1 and P2.

$$v_{P_1P_2} = \frac{X_{P_2} - X_{P_1}}{S_{P_1P_2}}\,\delta X_{P_1} + \frac{Y_{P_2} - Y_{P_1}}{S_{P_1P_2}}\,\delta Y_{P_1} + \frac{Z_{P_2} - Z_{P_1}}{S_{P_1P_2}}\,\delta Z_{P_1} - \frac{X_{P_2} - X_{P_1}}{S_{P_1P_2}}\,\delta X_{P_2} - \frac{Y_{P_2} - Y_{P_1}}{S_{P_1P_2}}\,\delta Y_{P_2} - \frac{Z_{P_2} - Z_{P_1}}{S_{P_1P_2}}\,\delta Z_{P_2} + \left(2 - S_{P_1P_2}\right)$$

For every observed image point, the following observation equation v = BΔ − F is formed:

$$\begin{bmatrix} v_x \\ v_y \end{bmatrix} =
\begin{bmatrix}
\dfrac{\partial F_x}{\partial X_p} & \dfrac{\partial F_x}{\partial Y_p} & \dfrac{\partial F_x}{\partial Z_p} \\
\dfrac{\partial F_y}{\partial X_p} & \dfrac{\partial F_y}{\partial Y_p} & \dfrac{\partial F_y}{\partial Z_p}
\end{bmatrix}
\begin{bmatrix} dX \\ dY \\ dZ \end{bmatrix} - \begin{bmatrix} F_x \\ F_y \end{bmatrix}$$

The partial derivatives ∂Fx/∂Xp, ∂Fy/∂Yp, etc., can be symbolized for simplicity as b's [20]:

b14 = (x/q)m31 + (f/q)m11      b24 = (y/q)m31 + (f/q)m21
b15 = (x/q)m32 + (f/q)m12      b25 = (y/q)m32 + (f/q)m22
b16 = (x/q)m33 + (f/q)m13      b26 = (y/q)m33 + (f/q)m23

q = m31(X − Xo) + m32(Y − Yo) + m33(Z − Zo)
r = m11(X − Xo) + m12(Y − Yo) + m13(Z − Zo)
s = m21(X − Xo) + m22(Y − Yo) + m23(Z − Zo)

Fx = (qx + rf)/q,   Fy = (qy + sf)/q
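The q, r, s quantities and the b-coefficients can be evaluated with a small helper; the sketch below follows the expressions above, and the identity rotation and point used to exercise it are made-up illustration values:

```python
import numpy as np

def collinearity_terms(m, P, Po, x, y, f):
    """Auxiliary terms of the linearized collinearity equations for one point.
    m: 3x3 rotation matrix; P: object point; Po: camera position;
    x, y: observed image coordinates; f: focal length."""
    d = np.asarray(P, float) - np.asarray(Po, float)
    r, s, q = m @ d                       # r uses row 1, s row 2, q row 3 of m
    Fx = (q * x + r * f) / q
    Fy = (q * y + s * f) / q
    b1 = (x / q) * m[2] + (f / q) * m[0]  # b14, b15, b16
    b2 = (y / q) * m[2] + (f / q) * m[1]  # b24, b25, b26
    return Fx, Fy, b1, b2

# sanity check with the identity rotation and a point 10 units in front:
Fx, Fy, b1, b2 = collinearity_terms(np.eye(3), [1.0, 2.0, 10.0],
                                    [0.0, 0.0, 0.0], -0.1, -0.2, 1.0)
```

With m = I, q = 10, r = 1, s = 2, so the image coordinates (−0.1, −0.2) make Fx and Fy vanish, which is what the iteration drives toward.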

In this example problem, we used the Helmert solution method for constrained adjustment. The observation matrix of the partial derivatives B will be a block diagonal matrix as:

The solution iterations for the nonlinear least squares adjustment will be as follows:


The Helmert system of equations for the first iteration is constructed as:

And the least squares solution of the corrections will be:

ΔXp1 = 1.011     ΔXp2 = 7.750
ΔYp1 = -2.427    ΔYp2 = 0.249
ΔZp1 = -0.397    ΔZp2 = 3.239
kc   = 0.527

The iterations continue to the sixth iteration where the corrections become negligible as shown:

The final adjusted P1 and P2 are shown in Table 6.6:

TABLE 6.6 The Final Adjusted Coordinates of P1 and P2

     Adjusted P1    Std. dev.   Ellipsoid    Adjusted P2    Std. dev.   Ellipsoid
X    645557.719     0.008       a = 0.009    645557.724     0.007       a = 0.008
Y    5774132.410    0.008       b = 0.007    5774132.413    0.007       b = 0.006
Z    47.629         0.004       c = 0.004    49.629         0.004       c = 0.004


It is interesting to note that the distance between the adjusted points is constrained to exactly 2 m (ZP2 − ZP1 = 49.629 − 47.629). Fig. 6.6 shows the exaggerated ellipsoids of errors of the adjusted points.

FIG. 6.6 Exaggerated ellipsoid of errors at P1 and P2.

Fig. 6.7 illustrates the convergence of the least squares adjustment of the two target points P1 and P2.

FIG. 6.7 Convergence of corrections in the coordinates of P1 (left) and P2 (right).


EXAMPLE 6.3

Three window corners A, B, and C are observed from three control stations P1, P2, and P3, which is the minimum number of controls, as shown in Fig. 6.8. Table 6.7 summarizes the data and observations.

FIG. 6.8 The intersection of distances with perpendicularity constraint (window sides AB and AC).

Given (Table 6.7)
– The approximate coordinates of the window corners A, B, and C
– The GCP coordinates of P1, P2, and P3
– Measured distances

Required
Find the adjusted coordinates of A, B, and C constrained so that the window sides are perpendicular to each other, AC ⊥ AB. (Note: point C should not be confused with the C matrix of constraints.)


Solution
The constrained adjustment using the direct method is applied, where the constraint equation is developed starting from the dot product of two vectors:

$$\vec{v}_1 \cdot \vec{v}_2 = |v_1|\,|v_2|\cos\gamma \tag{6.22}$$

where |v1| and |v2| represent the vector lengths and γ is the angle enclosed between them. On the basis of the perpendicularity constraint (cos 90° = 0), Eq. (6.22) simplifies to the constraint equation [21]:

$$g = (X_B - X_A)(X_C - X_A) + (Y_B - Y_A)(Y_C - Y_A) + (Z_B - Z_A)(Z_C - Z_A) \tag{6.23}$$

Then, the C matrix of the partial derivatives of the nonlinear Eq. (6.23) takes the following form for the window corner points A, B, and C:

$$C = \begin{bmatrix} \dfrac{\partial g}{\partial X_A} & \dfrac{\partial g}{\partial Y_A} & \dfrac{\partial g}{\partial Z_A} & \dfrac{\partial g}{\partial X_B} & \dfrac{\partial g}{\partial Y_B} & \dfrac{\partial g}{\partial Z_B} & \dfrac{\partial g}{\partial X_C} & \dfrac{\partial g}{\partial Y_C} & \dfrac{\partial g}{\partial Z_C} \end{bmatrix} \tag{6.24}$$

C = [2XA − XB − XC,  2YA − YB − YC,  2ZA − ZB − ZC,  XC − XA,  YC − YA,  ZC − ZA,  XB − XA,  YB − YA,  ZB − ZA]
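Eqs. (6.23)–(6.24) are easy to check numerically. The sketch below (illustrative, not the book's code) evaluates g and its gradient row, here tested with the approximate corner coordinates of this example; note that the g values tabulated in the iterations are misclosures, with opposite sign to the raw function value:

```python
import numpy as np

def perp_constraint(A, B, C):
    """g = (B-A).(C-A) and its 9-element gradient row, Eqs. (6.23)-(6.24)."""
    g = np.dot(B - A, C - A)
    dA = 2 * A - B - C          # dg/dXA, dg/dYA, dg/dZA
    dB = C - A                  # dg/dXB, dg/dYB, dg/dZB
    dC = B - A                  # dg/dXC, dg/dYC, dg/dZC
    return g, np.concatenate([dA, dB, dC])

# approximate corner coordinates of Example 6.3:
g, C_row = perp_constraint(np.array([10.27, 16.01, 1.34]),
                           np.array([11.23, 16.26, 2.19]),
                           np.array([11.42, 16.11, 1.80]))
```

Evaluated at the approximate coordinates, g = 1.52 and the gradient row matches the tabulated C1 of the first iteration.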

Matrix B of the partial derivatives of the function with respect to the observed distances is developed similarly to Example 6.1. The iterative nonlinear least squares adjustment is illustrated as follows:

Iteration 1:

C1 = [−2.11  −0.35  −1.31  1.15  0.1  0.46  0.96  0.25  0.85],  g1 = −1.52

The coordinates of A, B, and C are updated for the next iteration.

Iteration 2:

where

C2 = [−0.946  −0.039  −1.229  0.982  0.027  0.118  −0.036  0.012  1.111  −0.014  1.008],  g2 = −0.095

and the final fifth iteration:

C5 = [−0.998  0.004  −1.009  0.998  0.010  0.001  −0.001],  g5 = 0.000

The value of g reaching zero expresses that the perpendicularity constraint is fulfilled. Fig. 6.9 illustrates the convergence of the corrections in the coordinates of the three corner points A, B, and C toward the optimal minimum.

FIG. 6.9 The convergence of the corrections of the three window corner coordinates A, B, and C, respectively.

The final adjusted coordinates of the corner points are shown in Table 6.8:

TABLE 6.8 Final Adjusted Coordinates of the Corner Points

Corner   X (m)    Y (m)    Z (m)
A        10.24    15.90    0.91
B        10.24    15.88    1.92
C        11.24    15.91    0.91

MATLAB code
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%% chapter 6 - example 6.3 %%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clc,clear,close all
%% GCP coordinates
point=[10.45, 0.0, 0.7;
       8.45,  0.5, 0.5;
       12.45, 0.1, 1.0];
% approximate coordinates
P1=[10.27 16.01 1.34;
    11.23 16.26 2.19;
    11.42 16.11 1.80];
% measured distances
Dis1=[15.902;15.509;15.953];
Dis2=[15.933;15.554;15.965];
Dis3=[15.931;15.666;15.856];
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% LS adjustment
format bank
[X,Y,Z,DD,vcov,P]=example_6_3(point,P1,Dis1,Dis2,Dis3);
disp('ADJUSTED COORDINATES [m]')
P
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
figure
DD=DD';
subplot(1,3,1)
plot((1:size(DD,1)), DD(:,1),'-ro','Markerfacecolor','k','LineWidth',3), hold on
plot((1:size(DD,1)), DD(:,2),'-b^','Markerfacecolor','r','LineWidth',3), hold on
plot((1:size(DD,1)), DD(:,3),'--ks','Markerfacecolor','r','LineWidth',2), hold on
grid on
title('CONVERGENCE OF CORRECTIONS DX,DY,DZ')
xlabel('NUMBER OF ITERATIONS')
ylabel('CORRECTIONS')
grid on


legend('DX','DY','DZ')
set(gca,'Xtick',0:1:7)
axis tight
subplot(1,3,2)
plot((1:size(DD,1)), DD(:,4),'-ro','Markerfacecolor','k','LineWidth',3), hold on
plot((1:size(DD,1)), DD(:,5),'-b^','Markerfacecolor','r','LineWidth',3), hold on
plot((1:size(DD,1)), DD(:,6),'--ks','Markerfacecolor','r','LineWidth',2), hold on
grid on
axis tight
title('CONVERGENCE OF CORRECTIONS DX,DY,DZ')
xlabel('NUMBER OF ITERATIONS')
ylabel('CORRECTIONS')
set(gca,'Xtick',0:1:7)
grid on
legend('DX','DY','DZ')
subplot(1,3,3)
plot((1:size(DD,1)), DD(:,7),'-ro','Markerfacecolor','k','LineWidth',3), hold on
plot((1:size(DD,1)), DD(:,8),'-b^','Markerfacecolor','r','LineWidth',3), hold on
plot((1:size(DD,1)), DD(:,9),'--ks','Markerfacecolor','r','LineWidth',2), hold on
grid on
axis tight
title('CONVERGENCE OF CORRECTIONS DX,DY,DZ')
xlabel('NUMBER OF ITERATIONS')
ylabel('CORRECTIONS')
set(gca,'Xtick',0:1:7)
grid on
legend('DX','DY','DZ')
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [X,Y,Z,DD,vcov,P]=example_6_3(point,P,Dis1,Dis2,Dis3)
DD=zeros(1,9)';
X=point(:,1); Y=point(:,2); Z=point(:,3);
ii=0;
while ii

When the constraint equations contain additional unknown parameters Δ′ that do not appear in the observation equations, the number of constraint equations s′ must exceed the number of additional parameters q (s′ > q). The observation equations system will be:

$$A_{c,n}\, v_{n,1} + B_{c,u}\, \Delta_{u,1} = f_{c,1} \quad (6.25)$$

$$D_{1\,(s',u)}\, \Delta_{u,1} + D_{2\,(s',q)}\, \Delta'_{q,1} = h_{s',1} \quad (6.26)$$

Notice that Δ is the correction vector of the main unknowns, while Δ′ is the vector of corrections of the additional parameters, which appear only in the constraint equations.

Derivation of the Mathematical Model of the Constrained Adjustment with Additional Parameters [14]

Using the same technique of derivation with Lagrange multipliers, the target function is:

$$\phi = v^t w v - 2k^t\left(Av + B\Delta - f\right) - 2k_c^t\left(D_1\Delta + D_2\Delta' - h\right) \quad (6.27)$$

The solution is obtained by taking the partial derivatives with respect to the variables v, Δ, and Δ′ and equating them to zero. Considering Eqs. (6.25) and (6.26), the following system of equations is attained:

$$\begin{bmatrix} w & -A^t & 0 & 0 & 0 \\ A & 0 & B & 0 & 0 \\ 0 & B^t & 0 & D_1^t & 0 \\ 0 & 0 & D_1 & 0 & D_2 \\ 0 & 0 & 0 & D_2^t & 0 \end{bmatrix} \begin{bmatrix} v \\ k \\ \Delta \\ k_c \\ \Delta' \end{bmatrix} = \begin{bmatrix} 0 \\ f \\ 0 \\ h \\ 0 \end{bmatrix} \quad (6.28)$$

with row dimensions n, c, u, s′, and q, respectively. Solving this system of Eq. (6.28) using subpartitioning, we get:

$$v = w^{-1} A^t k = Q A^t k$$
$$k = w_e\left(f - B\Delta\right)$$
$$N = B^t w_e B,\quad t = B^t w_e f,\quad \Delta_o = N^{-1} t$$
$$\Delta = N^{-1}\left(t + D_1^t k_c\right) = \Delta_o + N^{-1} D_1^t k_c \quad (6.29)$$

6.3 CONSTRAINED ADJUSTMENT WITH ADDITIONAL PARAMETERS

Then, after summarizing and substituting the value of Δ, we get:

$$\begin{bmatrix} D_1 N^{-1} D_1^t & D_2 \\ D_2^t & 0 \end{bmatrix} \begin{bmatrix} k_c \\ \Delta' \end{bmatrix} = \begin{bmatrix} h - D_1 \Delta_o \\ 0 \end{bmatrix} \quad (6.30)$$

Solving this system of Eq. (6.30) to compute kc and Δ′:

$$\underbrace{k_c}_{s' \times 1} = P^{-1}\left(h - D_1\Delta_o - D_2\Delta'\right) \quad (6.31)$$

$$\underbrace{\Delta'}_{q \times 1} = R^{-1} D_2^t P^{-1}\left(h - D_1\Delta_o\right) \quad (6.32)$$

$$\underbrace{\Delta}_{u \times 1} = \Delta_o + N^{-1} D_1^t P^{-1}\left(h - D_1\Delta_o - D_2\Delta'\right) \quad (6.33)$$

where P and R are computed as:

$$P = D_1 N^{-1} D_1^t \quad (6.34)$$

$$R = D_2^t P^{-1} D_2 \quad (6.35)$$

The variance-covariance matrices of the unknown parameters are then computed as follows:

$$Q_{\Delta'\Delta'} = R^{-1} \quad (6.36)$$

$$Q_{\Delta\Delta} = N^{-1}\left(I - D_1^t P^{-1} D_1 N^{-1} + D_1^t P^{-1} D_2 R^{-1} D_2^t P^{-1} D_1 N^{-1}\right) \quad (6.37)$$
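The solution sequence of Eqs. (6.31) through (6.33) can be exercised on a toy system. The matrices below are illustrative (u = 2 unknowns, s′ = 2 constraints, q = 1 additional parameter, D1 = I, N diagonal so the inverses are obvious) and are not from any example in the book:

```python
# Tiny numeric check of Eqs. (6.31)-(6.33) with illustrative matrices.
def matvec(M, v):
    return [sum(m*x for m, x in zip(row, v)) for row in M]

Ninv = [[0.5, 0.0], [0.0, 1.0/3.0]]       # N^-1 with N = diag(2, 3)
t    = [1.0, 2.0]
Do   = matvec(Ninv, t)                    # Δo = N^-1 t
D1   = [[1.0, 0.0], [0.0, 1.0]]           # D1 = I, so P = N^-1 and P^-1 = N
D2   = [1.0, 1.0]                         # s' x 1 column, stored as a vector
h    = [2.0, 1.0]
Pinv = [[2.0, 0.0], [0.0, 3.0]]

mis  = [h[i] - matvec(D1, Do)[i] for i in range(2)]            # h - D1 Δo
R    = sum(D2[i]*matvec(Pinv, D2)[i] for i in range(2))        # Eq. (6.35)
dpr  = sum(D2[i]*matvec(Pinv, mis)[i] for i in range(2)) / R   # Δ'  (Eq. 6.32)
kc   = matvec(Pinv, [mis[i] - D2[i]*dpr for i in range(2)])    # k_c (Eq. 6.31)
Dfin = [Do[i] + matvec(Ninv, kc)[i] for i in range(2)]         # Δ   (Eq. 6.33)

print(round(sum(D2[i]*kc[i] for i in range(2)), 9))        # D2^t k_c vanishes
print([round(Dfin[i] + D2[i]*dpr, 9) for i in range(2)])   # D1 Δ + D2 Δ' = h
```

The two printed checks confirm the last row of the system (6.28), D2ᵗ k_c = 0, and that the adjusted Δ, Δ′ satisfy the constraint equation (6.26) exactly.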

EXAMPLE 6.4

Given
Three stations P1, P2, and P3 are used to observe four corner points 1, 2, 3, and 4 on a façade, as listed in Table 6.9:

TABLE 6.9 Three Stations Fixed Coordinates and the Corners Initial Coordinates

Observed distances, height differences, and azimuth angles are shown in Table 6.10:

TABLE 6.10 Observed Values of Example 6.4

The standard deviations of the observations are: σ_D = 0.01 m, σ_Az = 0.0005 rad, σ_h = 0.01 m

Required
Use constrained adjustment with additional parameters to find the adjusted coordinates of the four corners that belong to the same façade plane.
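The weight matrix is built from these standard deviations as w = diag(1/σᵢ²). A pure-Python sketch; the grouping below (9 distances, then 9 azimuths, then 2 height differences) is illustrative, while the book's MATLAB code interleaves distance and azimuth weights:

```python
# Diagonal weight matrix from the stated standard deviations.
sigma_D, sigma_Az, sigma_h = 0.01, 0.0005, 0.01
# Illustrative observation grouping (counts are ours, not the book's):
sigmas = [sigma_D]*9 + [sigma_Az]*9 + [sigma_h]*2
w = [[0.0]*len(sigmas) for _ in sigmas]
for i, s in enumerate(sigmas):
    w[i][i] = 1.0 / s**2          # weight = 1 / variance
print(w[0][0], w[9][9], w[19][19])
```

Distance and height weights come out near 10 000 and azimuth weights near 4 000 000, so the azimuths dominate the adjustment unless the units are scaled consistently.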

Solution
We should use the technique described in Section 6.3 as follows:
– The plane equation, as discussed in Chapter 8, is formed as:

$$\begin{bmatrix} x_1 & y_1 & 1 \\ \vdots & \ddots & \vdots \\ x_4 & y_4 & 1 \end{bmatrix} \begin{bmatrix} da \\ db \\ dc \end{bmatrix} = \begin{bmatrix} z_1 - a_o x_1 - b_o y_1 - c_o \\ \vdots \\ z_4 - a_o x_4 - b_o y_4 - c_o \end{bmatrix}$$

The starting values of the plane parameters are a_o = −1, b_o = −100, and, because the façade is an almost vertical plane, we select c_o = 0.
– The observation equation matrices necessary to run the least squares adjustment and the proper weight matrix w are as follows:


LS adjustment – first iteration:

Δoᵗ = [0.086  −0.022  0.001  0.034  −0.012  0.106  0.069  −0.085  0.249  0.031  0.017  −0.001]

P (4 × 4, diagonal elements) = diag(0.4704473, 24.0863, 2.79911, 0.459184)

R = [  42.3197   −14.1066   −1.1036
      −14.1066     4.7022    0.3679
       −1.1036     0.3679    4.7022 ]

Δ′ = [36.042  −124.048  −2.835]ᵗ

The correction vector with the contribution of the additional parameters in the first iteration will be:

Δᵗ = [−0.112  −1.278  −0.049  −0.300  −1.166  1.95  0.175  1.313  −2.055  0.128  1.244  0.025]

The adjustment continues until the corrections converge to negligible values, as illustrated below and in Fig. 6.10. For checking, all four corner points reproduce their z-coordinates when substituted into the plane equation (z = ax + by + c).

Final computed plane parameters are: a = 45.539, b = 135.447, and c = 0.007.

The final adjusted corner coordinates are shown in Table 6.11:

TABLE 6.11 Adjusted Corners Coordinates

Error ellipsoids are evaluated for the observed points and plotted with exaggeration in Fig. 6.11.

FIG. 6.10 Iterations of the least squares adjustment with additional parameters of the four façade corners.

FIG. 6.11 Adjustment with additional parameters example.

MATLAB code
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% chapter 6 - example 6.4 %%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clc,clear,close all
%%%%%%%%% fixed coordinates %%%%%%%%%
GCP=[56265.967 787646.936 87.432
     56262.534 787647.786 87.386
     56252.905 787649.079 87.289];
xo=GCP(:,1); yo=GCP(:,2); zo=GCP(:,3);
%%%%%%%%% initial coordinates %%%%%%%%%
P=round([56268 787660 87
         56268 787660 96
         56262 787662 96
         56262 787662 87]);
DD=zeros(12,1);
DDD=[0;0;0];
x=P(:,1); y=P(:,2); z=P(:,3);
%%%%% observed azimuths %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
AZ=[9.22789;8.95359;345.45717;24.49167;24.21737;358.05866;357.90458;...
    35.52366;35.24937]*pi/180;
%%%%%% observed distances %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Dis=[13.232;15.811;17.867;13.39;15.97;16.717;14.239;18.046;15.86];
az1=AZ(1:3); az2=AZ(4:7); az3=AZ(8:end);
d1=Dis(1:3); d2=Dis(4:7); d3=Dis(8:end);



%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% weight matrices
sd=.01; sa=.0005;
ww=[1/sd^2,1/sa^2,1/sd^2,1/sa^2,1/sd^2,1/sa^2,1/sd^2,1/sa^2,...
    1/sd^2,1/sa^2,1/sd^2,1/sa^2,1/sd^2,1/sa^2,1/sd^2,1/sa^2,...
    1/sd^2,1/sa^2,1/.01^2,1/.01^2];
w=diag(ww);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
A= -1; B1= -100; C= 0;   %%% plane initial parameters
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% observation equations and LS
vv=0;
while vv<30 && max(abs(D))>.0001   % stop if iterations exceed 30
    for i=1:size(point,1)
        dx1(i,1) = -(point(i,1) - P(1,1));
        dy1(i,1) = -(point(i,2) - P(1,2));
        dz1(i,1) = -(point(i,3) - P(1,3));
        lp1(i,1) = ((point(i,1)-P(1,1))^2 + (point(i,2)-P(1,2))^2 + ...
                    (point(i,3)-P(1,3))^2)^(1/2);
    end
    Bs=0;
    for i=1:size(point,1)
        k=3*i;

7.3 UNIFIED APPROACH OF LEAST SQUARES WITH CONSTRAINTS

        bx1 = dx1(i,1)/lp1(i,1);
        by1 = dy1(i,1)/lp1(i,1);
        bz1 = dz1(i,1)/lp1(i,1);
        Fd1(i,1) = Dis(i,1)-lp1(i,1);
        B1(i,k-2)=-bx1; B1(i,k-1)=-by1; B1(i,k)=-bz1;
        b1(i,1)=bx1; b1(i,2)=by1; b1(i,3)=bz1;
        Bs=blkdiag(Bs,B1);
    end
    Bs(1,:)=[]; Bs(:,1)=[];
    B=[B1,b1];
    ro=206265;
    %% for azimuth observations
    for i=1:size(point,1)
        dx=point(i,1)-P(1,1);
        dy=point(i,2)-P(1,2);
        az(i,1)=computeaz_2016(dx,dy);
    end
    Ba=0; B1=[];
    for i=1:size(point,1)
        k=3*i;
        bx1 = (dy1(i,1)/(lp1(i,1))^2);
        by1 = (-dx1(i,1)/(lp1(i,1))^2);
        bz1 = 0;
        Fa1(i,1)=Az(i,1)-az(i,1);
        B1(i,k-2)=-bx1; B1(i,k-1)=-by1; B1(i,k)=0;
        ba(i,1)=bx1; ba(i,2)=by1; ba(i,3)=0;
        Ba=blkdiag(Ba,B1);
    end
    Ba(1,:)=[]; Ba(:,1)=[];
    B2=[B1,ba];
    B=[B;B2];
    F=[Fd1; Fa1];
    %%%%%%%%%%% adding the constraint equation
    dx=point(1,1)-P(1,1);
    dy=point(1,2)-P(1,2);
    dz=point(1,3)-P(1,3);
    S=sqrt(dx^2+dy^2+dz^2);
    bet=atan(dz/sqrt(dx^2+dy^2));
    th=atan2(dx,dy);
    f2=ro*(v-bet);
    bx2=-ro*(sin(th)*sin(bet)/S);
    by2=-ro*(cos(th)*sin(bet)/S);
    bz2= ro*(cos(bet))/S;
    A2=[bx2 by2 bz2 0 0 0 0 0 0 0 0 0 -bx2 -by2 -bz2];
    B=[B;A2];


7. UNIFIED APPROACH OF LEAST SQUARES

    F=[F;f2];
    %%% extra observation dH1=2.12
    B=[B; 0 0 1 0 0 0 0 0 0 0 0 0 0 0 -1];
    F=[F;-(point(1,3)-P(1,3)+2.11)];
    %%% adding eye for weighting
    B=[B;-eye(15)];
    F=[F;Fs];
    %%% LS adjustment
    N=inv(B'*w*B);
    T=B'*w*F;
    D=N*T;
    Fs=Fs+D;
    DD=[DD,D];
    k=0;
    for i=1:3:3*size(point,1)
        k=k+1;
        point(k,1)=point(k,1)+D(i,1);
        point(k,2)=point(k,2)+D(i+1,1);
        point(k,3)=point(k,3)+D(i+2,1);
    end
    P(1,1)=P(1,1)+D(end-2,1);
    P(1,2)=P(1,2)+D(end-1,1);
    P(1,3)=P(1,3)+D(end,1);
    V=B*D-F;                               % residuals
    sigma=V'*w*V/((size(B,1)-size(B,2)));  % unit variance
end
DD(:,1)=[];
vcov=sigma*N;

CHAPTER 8

Fitting Geometric Primitives With Least Squares

8.1 INTRODUCTION

Fitting points in 3D space to geometric primitives such as spheres, cylinders, planes, cones, and lines has a wide range of applications. Currently, point clouds acquired by laser scanners, range cameras, or from photographs are used to detect objects in outdoor or indoor environments, for example in road furniture extraction (poles, buildings, cars, trees, etc.), robotics navigation, autonomous driving, and other scene-understanding applications, as shown in Fig. 8.1.

There are different mathematical techniques to solve least squares fitting problems, either by conventional least squares adjustment or by homogeneous least squares (Section 3.5) using the SVD technique. Further, least squares adjustment methods can be linear or nonlinear, depending on how the mathematical model is handled, as will be shown. It is worth mentioning that direct linear solutions are preferred over nonlinear solutions, which need proper initial values to converge to the optimal minimum.

Adjustment Models in 3D Geomatics and Computational Geophysics
https://doi.org/10.1016/B978-0-12-817588-0.00008-8
© 2019 Elsevier Inc. All rights reserved.

FIG. 8.1 Fitting primitives from point clouds.

8.2 FITTING A PLANE

Two techniques for fitting a plane are presented as follows:

• Linear least squares approach

A linear model of the plane is presented in Eq. (8.1):

$$z = ax + by + c \quad (8.1)$$

where a, b, and c are the unknown plane fitting parameters, which are in fact the parameters of the plane normal vector, or its direction cosines (Fig. 8.2). The system of equations can be solved if a minimum of three points are observed, and a least squares adjustment is used when redundant observations exist, as will be shown in Example 8.1.


FIG. 8.2 Plane fitting.

In matrix form, the observation equation is:

$$\begin{bmatrix} x_1 & y_1 & 1 \\ \vdots & & \vdots \\ x_n & y_n & 1 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} z_1 \\ \vdots \\ z_n \end{bmatrix},\quad n \ge 3$$

$$\underbrace{B}_{n\times 3}\; \underbrace{\Delta}_{3\times 1} = \underbrace{F}_{n\times 1} \quad (8.2)$$

The least squares solution with redundant observations follows as: Δ = (BᵗB)⁻¹BᵗF

• Homogeneous least squares using SVD

The plane equation can be formulated as:

$$ax + by + cz + d = 0 \quad (8.3)$$

where a, b, c are the direction cosines of the plane normal vector and d is related to the distance of the plane from the origin. Here the plane can be solved by defining the following (Fig. 8.2):
– An origin point (xo, yo, zo) on the plane, computed as the mean point.
– The direction cosines of the normal (a, b, c) with unit length (a² + b² + c² = 1).

A point on the plane should satisfy the following condition:

$$a(x - x_o) + b(y - y_o) + c(z - z_o) = 0 \quad (8.4)$$

Then decompose the coefficient matrix BᵗB using SVD, as will be shown in Example 8.1, where:

$$\underbrace{B}_{n\times 3} = \begin{bmatrix} x_1 - x_o & y_1 - y_o & z_1 - z_o \\ \vdots & \vdots & \vdots \\ x_n - x_o & y_n - y_o & z_n - z_o \end{bmatrix} \quad (8.5)$$
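The linear plane fit of Eqs. (8.1) and (8.2) reduces to a 3 × 3 normal-equations solve. A pure-Python sketch on synthetic, noise-free data (function and variable names are ours, not the book's):

```python
# Linear plane fit z = ax + by + c via the normal equations (B'B) X = B'F.
def fit_plane(pts):
    N = [[0.0]*3 for _ in range(3)]   # N = B'B
    u = [0.0]*3                       # u = B'F
    for x, y, z in pts:
        row = (x, y, 1.0)
        for i in range(3):
            u[i] += row[i]*z
            for j in range(3):
                N[i][j] += row[i]*row[j]
    # Solve the 3x3 system by Cramer's rule
    def det3(M):
        return (M[0][0]*(M[1][1]*M[2][2]-M[1][2]*M[2][1])
              - M[0][1]*(M[1][0]*M[2][2]-M[1][2]*M[2][0])
              + M[0][2]*(M[1][0]*M[2][1]-M[1][1]*M[2][0]))
    d = det3(N)
    sol = []
    for k in range(3):
        Mk = [r[:] for r in N]
        for i in range(3):
            Mk[i][k] = u[i]
        sol.append(det3(Mk)/d)
    return sol                        # [a, b, c]

# 25 grid points lying exactly on z = 0.5x - 0.2y + 1
pts = [(x, y, 0.5*x - 0.2*y + 1.0) for x in range(-2, 3) for y in range(-2, 3)]
a, b, c = fit_plane(pts)
print(round(a, 6), round(b, 6), round(c, 6))   # 0.5 -0.2 1.0
```

With noisy observations the same normal-equations solve returns the least squares plane instead of the exact one.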

EXAMPLE 8.1

Given the 25 points listed in Table 8.1, it is required to compute their best fit plane using both the linear and the homogeneous (SVD-based) least squares techniques.


TABLE 8.1 Plane fitting points

(1) Solution by homogeneous least squares using SVD (Section 3.5)

The mean point is: xo = 0.00104, yo = 0.0058, zo = 1.0018

The coefficient matrix BᵗB is computed as in Eq. (8.5):

BᵗB = [ 12.4230   0.1620   0.1539
         0.1620  12.7380   0.0186
         0.1539   0.0186   0.0116 ]

Applying the SVD to BᵗB yields the eigenvector matrix V; the normal vector parameters are the eigenvector associated with the smallest singular value.

(2) Solution by linear least squares

Prepare the matrix of coefficients B and then compute the normal equation matrices:

BᵗB = [ 12.423   0.162    0.026
         0.162  12.739    0.145
         0.026   0.145   25.000 ]

BᵗF = [ 0.1799
        0.1638
       25.0440 ]

Δ = (BᵗB)⁻¹BᵗF = [ 0.012408
                   0.001617
                   1.0017 ]

The results of both techniques are identical, and the best fit plane is plotted in Fig. 8.3. For clarification, the last column of Table 8.1 shows the predicted Z values of the plane.

FIG. 8.3 Best fit plane of Example 8.1.

8.3 FITTING A 3D LINE

There are different possible formulations for the description of a 3D line [23,25,26]; a popular method is a six-parameter representation, as illustrated in Fig. 8.4.


FIG. 8.4 Best fit 3D line.

– Three of the parameters represent a single point s(xo, yo, zo) on the line.
– Three are the components of the unit direction-cosine vector d(a, b, c) along the line.

Vector P describes an arbitrary point P(x, y, z) on the line, and point P is fixed on the line by an arbitrary scalar factor t. Therefore, the parametric formulation of the line is given in Eq. (8.6) as:

$$P = s + t\,d \quad (8.6)$$

or

x = xo + t · d(1,1)
y = yo + t · d(1,2)
z = zo + t · d(1,3)

The solution is based on applying the SVD to the matrix whose ith row is (x_i − x_o, y_i − y_o, z_i − z_o), as will be presented in Example 8.2.
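The SVD step can be imitated with power iteration on BᵗB, since the line direction is the dominant eigenvector of the scatter matrix of the centered points. A pure-Python sketch on exact synthetic data (names are ours):

```python
# 3D line fit (Eq. 8.6): centroid s plus the dominant eigenvector d of B'B,
# where the rows of B are (x - xo, y - yo, z - zo).
def fit_line(pts):
    n = len(pts)
    s = [sum(p[i] for p in pts)/n for i in range(3)]   # centroid = point on line
    B = [[p[i] - s[i] for i in range(3)] for p in pts]
    M = [[sum(r[i]*r[j] for r in B) for j in range(3)] for i in range(3)]
    d = [1.0, 1.0, 1.0]
    for _ in range(100):                               # power iteration on M
        d = [sum(M[i][j]*d[j] for j in range(3)) for i in range(3)]
        norm = sum(c*c for c in d) ** 0.5
        d = [c/norm for c in d]
    return s, d                                        # point and unit direction

# Exact points on the line P = (1, 2, 3) + t * (2, 1, 2)/3
pts = [(1 + 2*t/3, 2 + t/3, 3 + 2*t/3) for t in range(-5, 6)]
s, d = fit_line(pts)
print([round(c, 6) for c in d])   # direction parallel to (2, 1, 2)/3
```

The eigenvector is defined only up to sign, so either d or −d is an acceptable direction for the fitted line.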

EXAMPLE 8.2

Given 20 points in space in a linear form, it is required to fit the best 3D line to them. After computing the mean of the point coordinates (x0, y0, z0), the difference between every point coordinate and the mean value is computed, as shown in the last three columns of Table 8.2. Then the SVD is applied to these differences, and the eigenvector matrix V is computed; the line direction d is the eigenvector associated with the largest singular value.

249

8.3 FITTING A 3D LINE

TABLE 8.2 3D Line fitting points

Point    X        Y        Z        x − x0    y − y0    z − z0
1        0.248    1.160    0.555    −4.898    −8.063    −3.538
2        1.260    2.321    1.203    −3.886    −6.902    −2.890
3        0.728    3.513    0.525    −4.418    −5.710    −3.567
4        1.116    3.944    1.139    −4.030    −5.279    −2.953
5        1.550    4.309    2.835    −3.595    −4.914    −1.257
6        3.449    5.544    1.761    −1.697    −3.679    −2.332
7        3.096    6.525    3.343    −2.050    −2.698    −0.749
8        4.558    7.207    2.550    −0.588    −2.016    −1.542
9        4.407    8.412    4.152    −0.738    −0.811     0.060
10       5.202    8.717    4.003     0.057    −0.506    −0.090
11       5.266    9.959    4.327     0.120     0.737     0.235
12       6.241   10.323    4.757     1.095     1.100     0.665
13       6.195   12.071    4.841     1.050     2.848     0.748
14       7.360   11.153    5.862     2.214     1.930     1.769
15       7.322   13.307    5.532     2.176     4.084     1.440
16       8.348   13.595    5.913     3.202     4.372     1.820
17       8.022   13.975    6.724     2.876     4.752     2.631
18       9.560   15.673    6.563     4.414     6.450     2.471
19       8.930   15.969    8.167     3.784     6.746     4.075
20      10.054   16.783    7.096     4.909     8.063     3.004
Mean     5.1456   9.2228   4.0925

The best fit 3D line is plotted as shown in Fig. 8.5.

FIG. 8.5 Best fit 3D line.

8.4 FITTING A SPHERE

There are two methods to solve the sphere best fit problem, nonlinear and linear, with a minimum of four observed points.

• Linear least squares adjustment method of sphere fitting

$$\underbrace{B}_{n\times 4} = \begin{bmatrix} 1 & x_1 & y_1 & z_1 \\ 1 & x_2 & y_2 & z_2 \\ \vdots & \vdots & \vdots & \vdots \\ 1 & x_n & y_n & z_n \end{bmatrix},\quad \underbrace{F}_{n\times 1} = \begin{bmatrix} x_1^2 + y_1^2 + z_1^2 \\ x_2^2 + y_2^2 + z_2^2 \\ \vdots \\ x_n^2 + y_n^2 + z_n^2 \end{bmatrix},\quad \underbrace{X}_{4\times 1} = \begin{bmatrix} r^2 - x_c^2 - y_c^2 - z_c^2 \\ 2x_c \\ 2y_c \\ 2z_c \end{bmatrix}$$

$$X = \left(B^t B\right)^{-1} B^t F$$

$$x_c = X_2/2,\quad y_c = X_3/2,\quad z_c = X_4/2,\quad r = \sqrt{X_1 + x_c^2 + y_c^2 + z_c^2} \quad (8.7)$$
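The linear sphere fit of Eq. (8.7) can be sketched in pure Python; the Gaussian-elimination helper and the synthetic sphere below are ours, not from the example:

```python
# Linear sphere fit: solve (B'B) X = B'F, then unpack center and radius.
import math

def solve(A, b):
    # Gaussian elimination with partial pivoting on an n x n system
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k]*x[k] for k in range(r + 1, n))) / M[r][r]
    return x

cx, cy, cz, R = 1.0, -2.0, 3.0, 7.0                     # known sphere
pts = [(cx + R*math.cos(a)*math.cos(b),
        cy + R*math.sin(a)*math.cos(b),
        cz + R*math.sin(b))
       for a in (0.1, 0.9, 1.7, 2.5, 3.3) for b in (-1.0, -0.2, 0.6, 1.4)]
B = [[1.0, x, y, z] for x, y, z in pts]
F = [x*x + y*y + z*z for x, y, z in pts]
BtB = [[sum(B[r][i]*B[r][j] for r in range(len(B))) for j in range(4)] for i in range(4)]
BtF = [sum(B[r][i]*F[r] for r in range(len(B))) for i in range(4)]
X = solve(BtB, BtF)
xc, yc, zc = X[1]/2, X[2]/2, X[3]/2
r = math.sqrt(X[0] + xc*xc + yc*yc + zc*zc)
print(round(xc, 6), round(yc, 6), round(zc, 6), round(r, 6))  # recovers the sphere
```

Because the substitution in X linearizes the problem, no initial values are needed, unlike the nonlinear method that follows.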

• Nonlinear least squares solution method of sphere fitting

The sphere equation can be formulated as:

$$\left(x - x_c^o\right)^2 + \left(y - y_c^o\right)^2 + \left(z - z_c^o\right)^2 - R^2 = 0 \quad (8.8)$$

where
– xc, yc, and zc are the best fit coordinates of the sphere center,
– R is the radius of the best fit sphere,
– x, y, and z are the fitting points.

The adjustment observation model is v + BΔ = F, and the matrices of the observation equations are:

$$\begin{bmatrix} v_1 \\ \vdots \\ v_n \end{bmatrix} + \begin{bmatrix} \dfrac{\partial F_1}{\partial x_c} & \dfrac{\partial F_1}{\partial y_c} & \dfrac{\partial F_1}{\partial z_c} & \dfrac{\partial F_1}{\partial R} \\ \vdots & \vdots & \vdots & \vdots \\ \dfrac{\partial F_n}{\partial x_c} & \dfrac{\partial F_n}{\partial y_c} & \dfrac{\partial F_n}{\partial z_c} & \dfrac{\partial F_n}{\partial R} \end{bmatrix} \begin{bmatrix} \delta x_c \\ \delta y_c \\ \delta z_c \\ \delta R \end{bmatrix} = \begin{bmatrix} R^2 - \left(x_1 - x_c^o\right)^2 - \left(y_1 - y_c^o\right)^2 - \left(z_1 - z_c^o\right)^2 \\ \vdots \\ R^2 - \left(x_n - x_c^o\right)^2 - \left(y_n - y_c^o\right)^2 - \left(z_n - z_c^o\right)^2 \end{bmatrix}$$

where the partial derivatives, using the first fitting point, are:

$$\frac{\partial F_1}{\partial x_c} = -2\left(x_1 - x_c^o\right),\quad \frac{\partial F_1}{\partial y_c} = -2\left(y_1 - y_c^o\right),\quad \frac{\partial F_1}{\partial z_c} = -2\left(z_1 - z_c^o\right),\quad \frac{\partial F_1}{\partial R} = -2R^o$$

EXAMPLE 8.3

Given
• 3D points as listed in Table 8.3 and plotted in Fig. 8.6.
• Initial values of the sphere parameters: xc° = 0, yc° = 2, zc° = 5, R = 10.

Required
• Find the best fit sphere using both the linear and nonlinear least squares adjustment methods with the given fitting points.


FIG. 8.6 Sphere fit [27].

TABLE 8.3 Sphere fitting points


(1) Nonlinear LS adjustment

The nonlinear solution converges after three iterations (Iterations 1-3). The final adjusted best fit sphere parameters are: Center = (0.95657, 2.0361, 3.0057) and R = 7.2193.


(2) Linear LS solution

Matrix B stacks rows [1, xi, yi, zi] of the ten fitting points (the x, y, z columns listed below), and F holds xi² + yi² + zi²:

x = [8.050  5.845  7.785  7.154  8.065  6.765  7.943  6.595  6.425  6.097]
y = [0.789  6.655  2.277  5.225  2.670  3.943  0.122  6.550  6.149  −2.719]
z = [2.663  0.336  0.734  5.001  1.987  6.783  3.425  3.112  0.751  1.251]
F = [72.515  78.564  66.326  103.490  76.115  107.320  74.840  96.076  79.654  46.124]

N = BᵗB =
[ 10.000    70.723    31.660    26.043
  70.723   506.590   215.660   186.980
  31.660   215.660   188.180    86.204
  26.043   186.980    86.204   106.250 ]

T = BᵗF = [801.02  5660.50  2900.90  2337.70]ᵗ

X = N⁻¹T = [38.024  1.9131  4.0721  6.0114]ᵗ

Adjusted parameters: xc = 0.95657, yc = 2.0361, zc = 3.0057, R = 7.2193.

MATLAB code
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%% chapter 8 - example 8.3 %%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clc,clear
close all
%%%%%%%%%%%%%%%%% initial values
xo=0; yo=2; zo=5; r=10;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% observed points
P=[8.0498   0.7892  2.6632
   5.8447   6.6551  0.3359
   7.7848   2.2768  0.7343
   7.1542   5.2246  5.0008
   8.0647   2.6698  1.9869
   6.7650   3.9429  6.7827
   7.9432   0.1222  3.4250
   6.5949   6.5498  3.1118
   6.4254   6.1485  0.7514
   6.0966  -2.7186  1.2508];
%%%%%%%%%%%%%%%%%%%%%%%%%%%%% LS method %%%%%%%%%%%%%%%%%%%%%%%%%%%%
[Center,Radius,DD] = sphereFit(P, xo, yo, zo, r);
disp(' adjusted values')
Center,Radius
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
figure
plot3(P(:,1),P(:,2),P(:,3),'ro','markersize',8,'markerfacecolor','k')
hold on;
[Base_X,Base_Y,Base_Z] = sphere(20);
surf(Radius*Base_X+Center(1), Radius*Base_Y+Center(2),...
     Radius*Base_Z+Center(3),'faceAlpha',0.3,'Facecolor','c')
view([45,28])
grid on
axis image
legend({'Data Points','Best Fit Sphere','LSE Sphere'},'location','W')
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
figure
d1=DD(1:4,:);
plot((1:size(d1,2)), d1(1,:),'-ro','Markerfacecolor','r','LineWidth',2), hold on
plot((1:size(d1,2)), d1(2,:),'-b^','Markerfacecolor','b','LineWidth',2), hold on
plot((1:size(d1,2)), d1(3,:),'-ms','Markerfacecolor','k','LineWidth',2), hold on
plot((1:size(d1,2)), d1(4,:),'-gs','Markerfacecolor','m','LineWidth',2), hold on
xlabel('Iterations')
ylabel('Corrections [m]')
grid on
axis tight
legend('xo correction','yo correction','zo correction','Radius correction')
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [Center,Radius,DD] = sphereFit(X,xo,yo,zo,R)
DD=[0;0;0;0];
D=[1;1;1;1]; vv=0;
while (max(abs(D)))>.00001
    vv=vv+1;
    for i=1:size(X,1)
        S=sqrt((X(i,1)-xo)^2+(X(i,2)-yo)^2+(X(i,3)-zo)^2);
        B(i,1)=(X(i,1)-xo)/S; B(i,2)=(X(i,2)-yo)/S;
        B(i,3)=(X(i,3)-zo)/S; B(i,4)=1;
        f(i,1)=-R+S;
    end


    %%%%%%%%%%% least squares
    D=(B'*B)\(B'*f);
    xo=xo+D(1,1);
    yo=yo+D(2,1);
    zo=zo+D(3,1);
    Center=[xo,yo,zo];
    R=R+D(4,1);
    DD=[DD,D];
end
DD(:,1)=[];
Radius=R;

8.5 FITTING A CIRCLE IN 3D SPACE

Fitting a circle in 3D space (Fig. 8.7) needs a multistep solution approach as follows [28]:
(1) Compute the best fit least squares plane of the data, as explained previously.
(2) Compute the rotation that takes the least squares plane to the XY plane.


FIG. 8.7 3D circle fit.

(3) Rotate the data points onto the XY plane.
(4) Compute the best fit 2D circle in the XY plane.
(5) Rotate back to the original orientation in 3D space for plotting.

8.5.1 Fitting a 2D Circle

Because plane fitting is explained in Section 8.2, we start with the fourth step: the least squares adjustment of a best fit circle in 2D space (Fig. 8.8).

Given
– XY points, i = 1 : n


FIG. 8.8 2D circle fit.

Required
– Compute the adjusted radius r and circle center (xc, yc).

The circle equation can be formulated as:

$$(x - x_c)^2 + (y - y_c)^2 - r^2 = 0 \quad (8.9)$$

(A) Nonlinear LS adjustment approach

The observation equation, in the form v + BΔ = F, is:

$$\begin{bmatrix} v_1 \\ \vdots \\ v_n \end{bmatrix} + \begin{bmatrix} \dfrac{\partial F_1}{\partial x_c} & \dfrac{\partial F_1}{\partial y_c} & \dfrac{\partial F_1}{\partial r} \\ \vdots & \vdots & \vdots \\ \dfrac{\partial F_n}{\partial x_c} & \dfrac{\partial F_n}{\partial y_c} & \dfrac{\partial F_n}{\partial r} \end{bmatrix} \begin{bmatrix} \delta x_c \\ \delta y_c \\ \delta r \end{bmatrix} = \begin{bmatrix} r^2 - \left(x_1 - x_c^o\right)^2 - \left(y_1 - y_c^o\right)^2 \\ \vdots \\ r^2 - \left(x_n - x_c^o\right)^2 - \left(y_n - y_c^o\right)^2 \end{bmatrix}$$

The partial derivatives of the function with respect to the unknowns are:

$$\frac{\partial F_1}{\partial x_c} = -2\left(x_1 - x_c^o\right),\quad \frac{\partial F_1}{\partial y_c} = -2\left(y_1 - y_c^o\right),\quad \frac{\partial F_1}{\partial r} = -2r^o$$

(B) Linear LS adjustment approach

The circle equation (8.9) is repeated here for the derivation:


$$(x - x_c)^2 + (y - y_c)^2 - r^2 = 0$$

$$(2x_c)\,x + (2y_c)\,y + \left(r^2 - x_c^2 - y_c^2\right) = x^2 + y^2 \quad (8.10)$$

or

$$c_0 x + c_1 y + c_2 = x^2 + y^2 \quad (8.11)$$

where c0, c1, and c2 are the unknown parameters. Then we have a linear observation system of the fitting equation, v + BΔ = F:

$$\underbrace{\begin{bmatrix} v_1 \\ \vdots \\ v_n \end{bmatrix}}_{V_{n\times 1}} = \underbrace{\begin{bmatrix} x_1 & y_1 & 1 \\ \vdots & \vdots & \vdots \\ x_n & y_n & 1 \end{bmatrix}}_{B_{n\times 3}} \underbrace{\begin{bmatrix} c_0 \\ c_1 \\ c_2 \end{bmatrix}}_{\Delta_{3\times 1}} - \underbrace{\begin{bmatrix} x_1^2 + y_1^2 \\ \vdots \\ x_n^2 + y_n^2 \end{bmatrix}}_{F_{n\times 1}} \quad (8.12)$$

Then xc = c0/2, yc = c1/2, and r = √(c2 + xc² + yc²).
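The linear 2D circle fit of Eqs. (8.10) through (8.12) in a pure-Python sketch; the data are synthetic, noise-free circle points, and the function and variable names are ours:

```python
# Linear 2D circle fit: solve for c0, c1, c2, then recover xc, yc, r.
import math

def fit_circle_2d(pts):
    # Normal equations N c = u for rows [x, y, 1] against x^2 + y^2
    N = [[0.0]*3 for _ in range(3)]
    u = [0.0]*3
    for x, y in pts:
        row = (x, y, 1.0)
        t = x*x + y*y
        for i in range(3):
            u[i] += row[i]*t
            for j in range(3):
                N[i][j] += row[i]*row[j]
    def det3(M):
        return (M[0][0]*(M[1][1]*M[2][2]-M[1][2]*M[2][1])
              - M[0][1]*(M[1][0]*M[2][2]-M[1][2]*M[2][0])
              + M[0][2]*(M[1][0]*M[2][1]-M[1][1]*M[2][0]))
    d = det3(N)
    c = []
    for k in range(3):
        Mk = [r[:] for r in N]
        for i in range(3):
            Mk[i][k] = u[i]
        c.append(det3(Mk)/d)
    xc, yc = c[0]/2, c[1]/2
    r = math.sqrt(c[2] + xc*xc + yc*yc)
    return xc, yc, r

# Points exactly on a circle of center (3, -1) and radius 2
pts = [(3 + 2*math.cos(a), -1 + 2*math.sin(a)) for a in (0.2, 1.1, 2.0, 2.9, 3.8, 4.7)]
xc, yc, r = fit_circle_2d(pts)
print(round(xc, 6), round(yc, 6), round(r, 6))   # center (3, -1), radius 2
```

As with the sphere, the substitution in Eq. (8.10) trades the geometric residual for an algebraic one, which is what makes the system linear.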

8.5.2 Rodrigues Rotation Formula

To apply the second step of rotating the points to the best fit plane, we can use the Rodrigues rotation formula to rotate the 3D points and obtain their XY coordinates in the coordinate system of the plane. This is simply the problem of rotating from normal vector n1 to normal vector n2, as shown in Fig. 8.9. To compute the rotation matrix, we apply the following steps:
(1) Find the axis and angle of rotation using the cross product and dot product, respectively. The axis of rotation k is the cross product (×) between the plane normal n1 and the normal of the new XY

FIG. 8.9 Rotating from normal vector n1 to normal n2.

coordinate system. Thus, n2 = (0, 0, 1)ᵗ and k = n1 × n2. Further,

$$\theta = \cos^{-1}\!\left(\frac{\vec{n}_1 \cdot \vec{n}_2}{\lVert \vec{n}_1 \rVert\,\lVert \vec{n}_2 \rVert}\right)$$

(2) Find the rotation matrix M using the exponential map:

$$M = I_{3\times 3} + \hat{k}\sin\theta + \hat{k}^2\left(1 - \cos\theta\right) \quad (8.13)$$

where
– the skew-symmetric matrix is

$$\hat{k} = \begin{bmatrix} 0 & -k(3) & k(2) \\ k(3) & 0 & -k(1) \\ -k(2) & k(1) & 0 \end{bmatrix}$$

– k = (n1 × n2), then normalized.

The third step after the rotation of the points is to find the best fit circle C_points in 2D space (xc, yc, r). Finally, rotate back to 3D space by taking n2 as the plane normal and n1 = (0, 0, 1)ᵗ:

$$C = C_{points} - \left[x_c,\, y_c,\, 0\right] \quad (8.14)$$

$$C = C\,M^t + Points\_mean \quad (8.15)$$

A python code to solve this problem is also published in [28].
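The two steps above can also be sketched directly; a pure-Python implementation of Eq. (8.13) on an illustrative pair of unit normals (function and variable names are ours):

```python
# Rodrigues rotation (Eq. 8.13): build M that rotates unit vector n1 onto n2.
import math

def rotation_from_vectors(n1, n2):
    # axis k = n1 x n2 (normalized), angle theta from the dot product
    k = [n1[1]*n2[2]-n1[2]*n2[1], n1[2]*n2[0]-n1[0]*n2[2], n1[0]*n2[1]-n1[1]*n2[0]]
    s = math.sqrt(sum(c*c for c in k))
    k = [c/s for c in k]
    th = math.acos(sum(a*b for a, b in zip(n1, n2)))
    K = [[0.0, -k[2], k[1]], [k[2], 0.0, -k[0]], [-k[1], k[0], 0.0]]
    K2 = [[sum(K[i][m]*K[m][j] for m in range(3)) for j in range(3)] for i in range(3)]
    I = [[float(i == j) for j in range(3)] for i in range(3)]
    return [[I[i][j] + K[i][j]*math.sin(th) + K2[i][j]*(1 - math.cos(th))
             for j in range(3)] for i in range(3)]

n1 = [0.6, 0.0, 0.8]          # illustrative unit plane normal
n2 = [0.0, 0.0, 1.0]          # target XY-plane normal
M = rotation_from_vectors(n1, n2)
# M applied to n1 reproduces n2 (up to floating-point noise)
print([round(sum(M[i][j]*n1[j] for j in range(3)), 6) for i in range(3)])
```

Note that the cross product vanishes when n1 and n2 are parallel; in that degenerate case M is simply the identity (or a half-turn), so a guard is needed in production code.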

EXAMPLE 8.4 Given the following set of 3D points (P) in Table 8.4, it is required to find the best fit circle in 3D space.

Solution
(1) Find the best fit plane using the methods described in Section 8.2. The direction cosines of the normal are: A = 0.000, B = 0.414, C = 1.082.
(2) Rotate the plane normal vector into the XY plane, which has the normal vector [0 0 1]ᵗ.
– Compute the cross-product vector k = (n1 × n2):

k = [0.41433  −0.00012  0]ᵗ

– Then normalize as k = k / norm(k):

k = [1  −0.00028  0]ᵗ

– Skew-symmetric matrix:


$$\hat{k} = \begin{bmatrix} 0 & 0 & -0.00028 \\ 0 & 0 & -1 \\ 0.00028 & 1 & 0 \end{bmatrix}$$

– The dot product of n1 and n2 produces θ = 0.36564 rad.
– Using Rodrigues' formula, the rotation to the XY plane is:

M = [  1           −1.87E−05   −0.000101
      −1.87E−05     0.9339     −0.35755
       0.000101     0.35755     0.9339  ]

– Apply the rotation to the points in Table 8.4 as P_rot = P * M, as illustrated by the red dots in Fig. 8.10.
– Apply the LS adjustment to find the best fit parameters of the XY circle, as shown in Fig. 8.10.

TABLE 8.4 3D circle fitting points

FIG. 8.10 Least squares fitting of a 3D circle.

The best fit parameters of the XY circle are: xc = 0.98506, yc = 0.96453, and r = 1.5095.

Then apply the transformation back to 3D space by:

$$P_{circle} = \left(P_{circle\,2D} - \left(x_c,\, y_c,\, 0\right)^t\right) M^t + P_{mean}$$

where Pmean = [0.99047, 0.5426, 1.3071].

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%% section 8.5 - fitting a 3D circle %%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clc; clear; close all
P=[ 0.963   1.904  1.871
    1.276   1.901  1.870
    1.600   1.819  1.836
    2.057   1.491  1.700
    2.392   1.266  1.607
    2.435   0.937  1.471
    2.627   0.380  1.240
    2.388   0.056  1.106
    2.129  -0.102  1.040
    1.897  -0.622  0.825
    1.281  -0.851  0.730
    0.955  -0.920  0.701
    0.734  -0.771  0.763
    0.466  -0.761  0.767
   -0.165  -0.396  0.918
   -0.394  -0.067  1.054
   -0.377   0.166  1.151
   -0.496   0.659  1.355
   -0.519   0.987  1.491
   -0.291   1.250  1.600
   -0.041   1.475  1.694
    0.087   1.648  1.765
    0.630   1.962  1.895];

xo=mean(P(:,1)); yo=mean(P(:,2)); zo=mean(P(:,3));
x=P(:,1); y=P(:,2); z=P(:,3);
format short g
for i=1:size(x,1)
    F(i,1)=z(i,1);
    bb(i,3)=1; bb(i,1)=x(i,1); bb(i,2)=y(i,1);
end
Delta=inv(bb'*bb)*bb'*F;
a=Delta(1); b=Delta(2); c=Delta(3);
v1=[a;b;c]; v2=[0;0;1];
% rotation vector
w=cross(v1,v2);
w=w/norm(w);
w_hat=[0 -w(3) w(2); w(3) 0 -w(1); -w(2) w(1) 0];
cos_tht=v1'*v2/norm(v1)/norm(v2);
tht=acos(cos_tht);
%% rotation matrix using Rodrigues formula
M=eye(size(v1,1))+w_hat*sin(tht)+w_hat^2*(1-cos(tht));
Prot=P*M;
X=Prot(:,1); Y=Prot(:,2);
for i=1:size(x,1)
    F(i,1)=X(i,1)^2+Y(i,1)^2;
    B(i,1)=X(i,1); B(i,2)=Y(i,1); B(i,3)=1;
end
u=inv(B'*B)*B'*F;
xo=u(1,1)/2
yo=u(2,1)/2
r=sqrt(u(3,1)+xo^2+yo^2)
angle=linspace(0,2*pi,500);
xx=xo+r*cos(angle);
yy=yo+r*sin(angle);
R_comp=sqrt((X(:,1)-xo).^2+(Y(:,1)-yo).^2);
resid=R_comp-r;
figure(1)


bar(resid)
ylabel('RESIDUALS OF THE OBSERVED CIRCUMFERENCE POINTS [m]')
xlabel('No. OF OBSERVED CIRCUMFERENCE POINTS')
grid on
axis square
%%%%%%%%%%%%%% back rotation
figure(2)
Pc=[xx'-xo, yy'-yo, zeros(size(xx,2),1)];
Pm=mean(P);
M1=fcn_RotationFromTwoVectors(v1, v2);
T=Pm-[xo,yo,0];
for i=1:size(Pc,1)
    PP(i,:)=Pc(i,:)*M1' + Pm;
end
plot3(PP(:,1),PP(:,2),PP(:,3),'-','Linewidth',2); hold on
plot3(P(:,1),P(:,2),P(:,3),'r.','markersize',12); hold on
axis image
grid on
%%%%%%%%%%%%%%%%% max resid, min resid, mean resid %%%%%%%%%%%%%%%%%%%%%%%
result=[max(resid),min(resid),mean(resid)]
MSE=sqrt(sum(resid.^2)/size(resid,1))   % mean squared error

8.6 FITTING A CYLINDER

There are different methods for the least squares fitting of a cylindrical shape, based on either linear or nonlinear solutions [29,30,31]. Normally, seven parameters are used to define a cylinder (Fig. 8.11):
• A point of origin xo, yo, zo lying on the cylinder axis.
• The direction cosines a, b, c of the cylinder axis.
• The radius of the cylinder R.

In this section, we introduce the author-modified approach to computing the unknown parameters xo, yo, zo, R and the direction cosines [a, b, c] of the cylinder axis using least squares adjustment, in keeping with the context of this book, as follows:
– The first derivation step is to apply the cross product between the vector of coordinate differences and the direction cosines of the axis:

E = [a; b; c] × [x − xo; y − yo; z − zo] → d = ‖E‖    (8.16)


8. FITTING GEOMETRIC PRIMITIVES WITH LEAST SQUARES

where × refers to the cross product and ‖ ‖ refers to the vector norm. Therefore, the objective function is to minimize the difference between the distances di from the fitted points to the cylinder axis and the radius R, as illustrated in Fig. 8.11.
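As a quick numerical check of Eq. (8.16), the point-to-axis distance can be evaluated in a few lines of Python/NumPy (a sketch, not the book's code; the function name is illustrative, and the direction is normalized so that d = ‖E‖ is a true distance):

```python
import numpy as np

def axis_distances(P, direction, origin):
    """Distance of each point in P to the line through `origin` with
    direction cosines `direction` (Eq. 8.16): d = ||E||, E = dir x (p - po)."""
    direction = np.asarray(direction, float)
    direction = direction / np.linalg.norm(direction)   # enforce a^2+b^2+c^2 = 1
    diff = np.asarray(P, float) - np.asarray(origin, float)
    E = np.cross(direction, diff)
    return np.linalg.norm(E, axis=1)

# a point 1.5 m from a vertical axis through the origin, anywhere along z
d = axis_distances([[1.5, 0.0, 7.0]], [0, 0, 1], [0, 0, 0])
print(d[0])   # 1.5
```

Note that the z coordinate of the point does not affect the result: the distance is measured perpendicular to the axis, which is exactly what the cylinder misclosure di − R uses.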

FIG. 8.11 Fitting of a cylinder defined by seven parameters.

Or:

F = min Σ (di − R)²

The observation equation can be formed as:

V = B Δ − F    (8.17)

where V (n×1) is the vector of residuals, Δ = [Δa; Δb; Δc; ΔR; Δxo; Δyo; Δzo] (7×1) is the vector of corrections to the unknowns, F (n×1) is the vector of misclosures, and B (n×7) is the Jacobian matrix whose ith row is

[∂Fi/∂a  ∂Fi/∂b  ∂Fi/∂c  ∂Fi/∂R  ∂Fi/∂xo  ∂Fi/∂yo  ∂Fi/∂zo]

The partial derivatives are given as:

∂F/∂a  = [|v| sign(v) (z − zo) − |w| sign(w) (y − yo)] / √(w² + v² + u²)
∂F/∂b  = [|w| sign(w) (x − xo) − |u| sign(u) (z − zo)] / √(w² + v² + u²)
∂F/∂c  = [|u| sign(u) (y − yo) − |v| sign(v) (x − xo)] / √(w² + v² + u²)
∂F/∂R  = −1
∂F/∂xo = [c |v| sign(v) − b |w| sign(w)] / √(w² + v² + u²)
∂F/∂yo = [a |w| sign(w) − c |u| sign(u)] / √(w² + v² + u²)
∂F/∂zo = [b |u| sign(u) − a |v| sign(v)] / √(w² + v² + u²)    (8.18)
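Partial derivatives like these are easy to get wrong by a sign, and a finite-difference comparison is a cheap safeguard. The following Python/NumPy sketch (not from the book; the test point and parameter values are arbitrary) evaluates the analytic gradient of F = d − R, written in the simplified form |v| sign(v) = v, and checks it against central differences:

```python
import numpy as np

def wvu(p, prm):
    # auxiliary quantities of Eq. (8.19)
    a, b, c, R, xo, yo, zo = prm
    x, y, z = p
    w = b * (x - xo) - a * (y - yo)
    v = a * (z - zo) - c * (x - xo)
    u = c * (y - yo) - b * (z - zo)
    return w, v, u

def F(p, prm):
    # misclosure d - R for a single point (Eqs. 8.16 and 8.19)
    w, v, u = wvu(p, prm)
    return np.sqrt(w * w + v * v + u * u) - prm[3]

def grad_analytic(p, prm):
    a, b, c, R, xo, yo, zo = prm
    x, y, z = p
    w, v, u = wvu(p, prm)
    d = np.sqrt(w * w + v * v + u * u)
    return np.array([
        (v * (z - zo) - w * (y - yo)) / d,   # dF/da
        (w * (x - xo) - u * (z - zo)) / d,   # dF/db
        (u * (y - yo) - v * (x - xo)) / d,   # dF/dc
        -1.0,                                # dF/dR
        (c * v - b * w) / d,                 # dF/dxo
        (a * w - c * u) / d,                 # dF/dyo
        (b * u - a * v) / d,                 # dF/dzo
    ])

p = (1.0, 2.0, 3.0)                          # arbitrary test point
prm = np.array([0.1, 0.2, 0.97, 1.5, 0.3, -0.4, 0.2])
num = np.array([(F(p, prm + h) - F(p, prm - h)) / 2e-6
                for h in 1e-6 * np.eye(7)])
ok = np.allclose(num, grad_analytic(p, prm), atol=1e-6)
print(ok)   # True
```

Each row of the B matrix in Eq. (8.17) is exactly such a gradient evaluated at one observed point and the current parameter estimates.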


and

w = b (x − xo) − a (y − yo)
v = a (z − zo) − c (x − xo)    (8.19)
u = c (y − yo) − b (z − zo)

where | | refers to the absolute value and sign( ) returns 1 if its argument is > 0 and −1 if it is < 0. Further, to avoid rank deficiency in the B matrix, two constraint equations (Chapter 6) are added to the system as follows:

First constraint
The direction cosine vector [a, b, c] should be normalized, such that a² + b² + c² = 1. Therefore, the constraint equation C1 will be:

C1 = [2a  2b  2c  0  0  0  0],  g1 = 1 − a² − b² − c²    (8.20)

Second constraint
The second constraint equation requires the dot product between the direction cosines [a, b, c] of the cylinder axis and the origin point on the axis to be zero, or:

a·xo + b·yo + c·zo = 0    (8.21)

Therefore:

C2 = [xo  yo  zo  0  a  b  c],  g2 = −(a·xo + b·yo + c·zo)    (8.22)

Accordingly, the Helmert system is used to solve the constrained problem, as presented in Eq. (6.21) of Chapter 6:

[ N  Cᵗ ] [ Δ  ]   [ t ]
[ C  0  ] [ kc ] = [ g ]

where C = [C1; C2], g = [g1; g2], N = BᵗB, and t = BᵗF.
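The bordered (Helmert) system can be sketched generically in Python/NumPy. The matrices below are random stand-ins for the cylinder Jacobian and constraints (an illustrative assumption, not the book's data); the point is only the block structure of the system and the fact that the solved corrections satisfy C Δ = g:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 10, 7, 2                      # observations, unknowns, constraints
B = rng.standard_normal((n, m))         # stand-in Jacobian
F = rng.standard_normal(n)              # stand-in misclosure vector
C = rng.standard_normal((k, m))         # stand-in constraint matrix
g = rng.standard_normal(k)              # stand-in constraint misclosures

N, t = B.T @ B, B.T @ F                 # normal equations
H = np.block([[N, C.T],
              [C, np.zeros((k, k))]])   # bordered (Helmert) system matrix
sol = np.linalg.solve(H, np.concatenate([t, g]))
delta, kc = sol[:m], sol[m:]            # corrections and Lagrange multipliers
print(np.allclose(C @ delta, g))        # True: the constraints are satisfied
```

In the cylinder adjustment, this solve is repeated iteratively: after each step the parameters are updated by Δ, B, F, C, and g are re-evaluated, and the process stops when the corrections become negligible.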


EXAMPLE 8.5

Given
The following cylindrically distributed points are given in [m] in Table 8.5:

TABLE 8.5 Cylinder fitting points

X [m]   Y [m]   Z [m]
1.73    6.41    -6.04
3.38    4.98    -5.94
1.93    3.64    -5.88
0.25    5.69    -6.52
1.26    6.84    -2.19
3.56    4.73    -2.55
1.57    3.88    -1.85
0.07    5.37    -1.89
2.30    6.54     2.19
3.40    4.68     1.74
1.82    3.63     2.06
0.59    5.37     2.34
2.31    6.65     6.71
3.49    4.73     6.13
2.02    3.40     5.58
0.85    5.64     5.94

• The initial values of the axis direction vector and the radius are chosen arbitrarily as: a = 1, b = 1, c = 1, R = 1.
• xo, yo, and zo are computed from the average of the point coordinates as: 1.9081, 5.1363, −0.0106.

Required
Find the seven parameters of the best-fitting cylinder.

Solution
The constrained adjustment can be solved by composing the Helmert system of equations as discussed in Chapter 6.

First iteration
Accordingly, the Helmert system of equations is prepared as:

Second iteration

And directly to the sixth iteration:

The cylinder can then be plotted using the computed adjusted seven parameters:

a = 0.0299
b = -0.0070
c = 0.9995
R = 1.5371
xo = 1.9358
yo = 5.1271
zo = -0.0218
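As an independent cross-check of Example 8.5 (not the book's constrained adjustment), the axis can be approximated by the principal component of the points and the radius by an algebraic circle fit in the plane orthogonal to that axis. With the Table 8.5 coordinates, this simpler Python/NumPy sketch yields a nearly vertical axis and a radius close to the adjusted R = 1.5371:

```python
import numpy as np

# Point coordinates of Table 8.5 [m]
P = np.array([
    [1.73, 6.41, -6.04], [3.38, 4.98, -5.94], [1.93, 3.64, -5.88],
    [0.25, 5.69, -6.52], [1.26, 6.84, -2.19], [3.56, 4.73, -2.55],
    [1.57, 3.88, -1.85], [0.07, 5.37, -1.89], [2.30, 6.54,  2.19],
    [3.40, 4.68,  1.74], [1.82, 3.63,  2.06], [0.59, 5.37,  2.34],
    [2.31, 6.65,  6.71], [3.49, 4.73,  6.13], [2.02, 3.40,  5.58],
    [0.85, 5.64,  5.94]])

# Axis direction approximated by the first principal component
Pm = P.mean(axis=0)
axis = np.linalg.svd(P - Pm)[2][0]
if axis[2] < 0:
    axis = -axis                        # orient the axis upward

# Radius from an algebraic circle fit in the plane orthogonal to the axis
e1 = np.cross(axis, [1.0, 0.0, 0.0]); e1 /= np.linalg.norm(e1)
e2 = np.cross(axis, e1)
X, Y = (P - Pm) @ e1, (P - Pm) @ e2
A = np.c_[X, Y, np.ones(len(P))]
u = np.linalg.lstsq(A, X**2 + Y**2, rcond=None)[0]
xc, yc = u[0] / 2, u[1] / 2
r = np.sqrt(u[2] + xc**2 + yc**2)
print(axis.round(4), round(r, 2))
```

The PCA axis is only an approximation of the rigorous seven-parameter adjustment (it does not minimize the orthogonal misclosures di − R), but for well-distributed points such as these it lands close to the adjusted direction cosines and radius.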


Fig. 8.12 illustrates the convergence of the corrections to zero during the least squares adjustment over nine iterations. Fig. 8.13 shows the best-fit cylinder of the points listed in Table 8.5.

FIG. 8.12 The adjustment convergence to the optimal minimum. (Three panels of corrections versus number of iterations: convergence of the direction cosines a, b, c; convergence of the cylinder radius; and convergence of the cylinder center xo, yo, zo.)

FIG. 8.13 Best fit cylinder.

MATLAB code

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%% chapter 8 - example 8.5 %%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clc, clear
close all
%%%%%%%%% given points %%%%%%%%%
P=[1.73 6.41 -6.04
   3.38 4.98 -5.94
   1.93 3.64 -5.88
   0.25 5.69 -6.52
   1.26 6.84 -2.19
   3.56 4.73 -2.55
   1.57 3.88 -1.85
   0.07 5.37 -1.89
   2.30 6.54  2.19
   3.40 4.68  1.74
   1.82 3.63  2.06
   0.59 5.37  2.34
   2.31 6.65  6.71
   3.49 4.73  6.13
   2.02 3.40  5.58
   0.85 5.64  5.94];
x=P(:,1); y=P(:,2); z=P(:,3);
P=[x,y,z];
%%%%%%%%% initial values %%%%%%%%%
a=1; b=1; c=1;
xo=mean(P(:,1)); yo=mean(P(:,2)); zo=mean(P(:,3)); r=1;
format short g
ini=[a;b;c;xo;yo;zo;r]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
D=[1;1;1;1;1;1]; vv=0;
DD=[0 0 0 0 0 0 0];
while vv