Geoinformatics 9781781830956, 9781523118977, 1523118970, 1781830959



GEOINFORMATICS

Dr. A.M. CHANDRA
Professor of Civil Engineering, Arba Minch University, Ethiopia
Former Professor of Civil Engineering, Geomatics Engineering Section,
Indian Institute of Technology, Roorkee, India

New Academic Science
New Age International (UK) Ltd.
27 Old Gloucester Street, London, WC1N 3AX, UK
www.newacademicscience.co.uk • e-mail: [email protected]

Copyright © 2017 by New Academic Science Limited

ISBN: 978 1 78183 095 6

All rights reserved. No part of this book may be reproduced in any form, by photostat, microfilm, xerography, or any other means, or incorporated into any information retrieval system, electronic or mechanical, without the written permission of the copyright owner.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library.

Every effort has been made to make the book error free. However, the author and publisher have no warranty of any kind, expressed or implied, with regard to the documentation contained in this book.

Preface

Apart from surveying and photogrammetry, the other units of geoinformatics, such as remote sensing, geographic information systems (GIS), and the Global Positioning System (GPS), are now finding tremendous application in a wide variety of fields. There are a number of books on each of these subjects, but none titled Geoinformatics that introduces the principles and applications of all the subjects coming under its fold; this book is an attempt in that direction.

Surveying has been one of the basic courses of the undergraduate civil engineering curriculum in every university, but geoinformatics, which includes photogrammetry, remote sensing, geographic information systems, and the Global Positioning System, was introduced into the curricula of some universities only a few years ago, and no single book available to students deals with all the units of geoinformatics. Although books on surveying are available in plenty, the author decided to include surveying in this book as well, along with photogrammetry, remote sensing, GIS, and GPS, to make it a complete text.

The book has twenty-five chapters. The first section introduces geoinformatics. The second section, on plane surveying, deals only with the fundamentals and methods of measurement used in plane surveying. The third to sixth sections deal with photogrammetry, remote sensing, GIS, and GPS. In these chapters the author has tried to introduce the principles, working, and applications of these technologies so that readers meet them at the undergraduate level and can later refer to individual books on each technology for further knowledge and applications.

A.M. Chandra


Acknowledgement

This book was written at Arba Minch University, Ethiopia, and the author expresses his deep gratitude to Mr. Zenebe Zewdie, Director, Academic Program Evaluation & Implementation, Arba Minch University; Dr. Negash Wagesho Amencho, Scientific Director, Arba Minch University; and all the staff of the university who helped, directly or indirectly, in producing this book in its present form.

The author also wishes to acknowledge the love and affection of his son, Dr. Anshuman Chandra, and daughter, Ms. Arushi Chandra, presently in the USA, which gave him the strength to complete this project. Further, the author's own books "Plane Surveying", "Higher Surveying", and "Remote Sensing and Geographic Information System" have been of great help to him in writing this book. The author also acknowledges with thanks the works and articles of various authors available on the Internet, which have been drawn upon to improve the quality of the subject matter wherever found necessary.

A.M. Chandra


Contents

Preface
Acknowledgement

SECTION I: GEOINFORMATICS

Chapter 1: Introduction
  1.0 General
  1.1 Mutual Relationship of Components of Geoinformatics

SECTION II: PLANE SURVEYING

Chapter 2: Introduction and Basic Concepts
  2.0 Introduction
  2.1 Definition and Principle of Surveying
    2.1.1 Definition of Surveying
    2.1.2 Principle of Surveying
  2.2 Definitions of Some Basic Terms
  2.3 Concept of Errors in Surveying
    2.3.1 Classification of Errors
    2.3.2 Sources of Errors
    2.3.3 Propagation of Errors

Chapter 3: Horizontal Distance Measurement
  3.0 Introduction
  3.1 Methods of Distance Measurements
    3.1.1 Distance Measurements by Direct Method
    3.1.2 Distance Measurements by Indirect Methods
    3.1.3 Distance Measurements with EDM

Chapter 4: Angle and Direction Measurement
  4.0 Introduction
  4.1 Classification of Angles and Directions
    4.1.1 Bearings
    4.1.2 Azimuths
    4.1.3 Deflection Angles
    4.1.4 Angles to the Right
    4.1.5 Interior Angles
  4.2 Theodolite
    4.2.1 Main Parts of a Theodolite
    4.2.2 Definitions of Some Technical Terms
    4.2.3 Geometry of Theodolite
    4.2.4 Adjustments of a Theodolite
    4.2.5 Horizontal Angle Measurement with a Theodolite
    4.2.6 Vertical Angle Measurement with a Theodolite
    4.2.7 Miscellaneous Field Operations with a Theodolite
    4.2.8 Errors in Theodolite Measurements
  4.3 Magnetic Compass
    4.3.1 Magnetic Declination
    4.3.2 Local Attraction

Chapter 5: Vertical Distance Measurements
  5.0 Introduction
  5.1 Methods of Levelling
    5.1.1 Direct Differential Levelling
    5.1.2 Indirect or Trigonometric Levelling
    5.1.3 Barometric Levelling

Chapter 6: Contouring
  6.0 Introduction
  6.1 Definitions
  6.2 Concept of Contours and Contour Gradient
  6.3 Characteristics of Contours
  6.4 Methods of Contouring
    6.4.1 Direct Method
    6.4.2 Indirect Method
  6.5 Uses of Contours

Chapter 7: Plane-table Surveying
  7.0 Introduction
  7.1 Advantages and Disadvantages
  7.2 Principle of Plane Tabling
  7.3 Plane Table and Accessories
  7.4 Drawing Paper
  7.5 Basic Definitions
  7.6 Setting up the Plane Table
  7.7 Orienting the Plane Table
  7.8 Plane Tabling Methods
    7.8.1 Radiation
    7.8.2 Intersection
    7.8.3 Traversing
    7.8.4 Resection
  7.9 Errors in Plane Tabling

Chapter 8: Control Surveys
  8.0 Introduction
  8.1 Definitions
  8.2 Types of Traverses
    8.2.1 Open Traverse
    8.2.2 Closed Traverse
  8.3 Classification of Traverse
    8.3.1 Based on Methods of Measurement of Horizontal Angles
    8.3.2 Based on Instruments Employed
  8.4 Traverse Procedure
  8.5 Computation of Coordinates
  8.6 Balancing the Traverse
    8.6.1 Bowditch's Method
    8.6.2 Graphical Method

SECTION III: PHOTOGRAMMETRY

Chapter 9: Photogrammetry
  9.0 Introduction
  9.1 Types of Photogrammetry
  9.2 Applications of Photogrammetry
  9.3 Merits and Demerits of Photogrammetry
  9.4 Limitation of Photogrammetry in Land Surveying

Chapter 10: Properties of Aerial Photography
  10.0 Introduction
  10.1 Aerial Photographs
  10.2 Aerial Photogrammetry
    10.2.1 Photocoordinate System
    10.2.2 Definitions of Technical Terms
    10.2.3 Geometric Properties of Aerial Photograph
    10.2.4 Scale of a Vertical Photograph
    10.2.5 Ground Coordinates from a Vertical Photograph
    10.2.6 Relief Displacement on a Vertical Photograph
  10.3 Flight Planning
    10.3.1 Overlaps
    10.3.2 Computation of Flight Plan

Chapter 11: Stereophotogrammetry
  11.0 Introduction
  11.1 Stereoscopic Vision and Depth Perception
  11.2 Stereoscopic Viewing of Photographs
  11.3 Parallax in Stereoscopic Views
    11.3.1 Algebraic Definition of Parallax
    11.3.2 Difference in Elevation by Stereoscopic Parallax
    11.3.3 Measurement of Parallax
    11.3.4 Concept of Floating Mark in Measurement of Parallax
  11.4 Aerial Photointerpretation

SECTION IV: REMOTE SENSING

Chapter 12: Remote Sensing
  12.0 Introduction
  12.1 Principle of Remote Sensing
  12.2 Advantages and Disadvantages of Remote Sensing
  12.3 Multi-concept of Remote Sensing
  12.4 Applications of Remote Sensing

Chapter 13: Electromagnetic Energy
  13.0 Introduction
  13.1 Electromagnetic Energy
  13.2 Electromagnetic Spectrum and its Characteristics
  13.3 Electromagnetic Energy Interaction
  13.4 Resolution
  13.5 Image Histogram
  13.6 Pure and Mixed Pixels

Chapter 14: Sensors and Platforms
  14.0 Introduction
  14.1 Broad Classifications of Sensors and Platforms
  14.2 Sensors and Satellites Launched for Different Missions
    14.2.1 Land Observation Satellites and Sensors
    14.2.2 High Resolution Sensors
    14.2.3 Earth Observing (EO-1) Satellites
    14.2.4 Radarsat-1
    14.2.5 Weather Satellites

Chapter 15: Satellite Data Products
  15.0 Introduction
  15.1 Data Reception, Transmission and Processing
  15.2 Remote Sensing Data
    15.2.1 Digital Data
    15.2.2 Tape Format
    15.2.3 Data Products

Chapter 16: Image Interpretation and Digital Image Processing
  16.0 Introduction
  16.1 Image Interpretation
    16.1.1 Interpretation Procedure
    16.1.2 Image Characteristics
    16.1.3 Image Interpretation Strategies
    16.1.4 Photomorphic Analysis
    16.1.5 Image Interpretation Keys
    16.1.6 Equipment for Image Interpretation
  16.2 Digital Image Processing (DIP)
    16.2.1 Image Rectification and Restoration
    16.2.2 Image Enhancement
    16.2.3 Image Transformation
    16.2.4 Image Classification
    16.2.5 Classification Accuracy Assessment
    16.2.6 Data Merging and GIS Integration
  16.3 False Colour Images Used in Interpretation
    16.3.1 True Colour Image
    16.3.2 False Colour Image
    16.3.3 Panchromatic Image
    16.3.4 Multispectral Image
    16.3.5 Colour Composite Image

Chapter 17: Application of Remote Sensing
  17.0 Introduction
  17.1 More about the Application of Remote Sensing Data
  17.2 Land Use and Land Cover Mapping
  17.3 Ground Water Mapping
  17.4 Disaster Management

SECTION V: GEOGRAPHIC INFORMATION SYSTEM

Chapter 18: Geographic Information System
  18.0 Introduction
  18.1 Definition of GIS
  18.2 Components of GIS
  18.3 Understanding the GIS

Chapter 19: GIS Data
  19.0 Introduction
  19.1 Input Data and Sources
  19.2 Data Acquisition
    19.2.1 Data from Satellite Remote Sensing
    19.2.2 Data from Existing Maps
    19.2.3 Data from Photogrammetry
    19.2.4 Data from Field Surveying
    19.2.5 Data from GPS
    19.2.6 Data from Internet/World Wide Web (WWW)
    19.2.7 Attribute Data Tagging
  19.3 Layer Concept of Data Storage in GIS
  19.4 Data Verification and Editing
  19.5 Georeferencing of GIS Data
  19.6 Spatial Data Errors
  19.7 Spatial Data Models and Structure
  19.8 GIS Database and Database Management System
  19.9 Topology
  19.10 Types of Output Products
  19.11 Spatial Data Analysis

Chapter 20: GIS Application
  20.0 Introduction
  20.1 Problem Identification
  20.2 Designing a Data Model
  20.3 Project Management
  20.4 Identifying Implementation Problem
  20.5 Selecting an Appropriate GIS Software
  20.6 Project Evaluation
  20.7 Case Studies
    20.7.1 Site Suitability for Urban Planning
    20.7.2 Road Accident Analysis

SECTION VI: GLOBAL POSITIONING SYSTEM

Chapter 21: Introduction and Basic Concepts
  21.0 Introduction
  21.1 What is Unique About GPS?
  21.2 Advantages of GPS over Traditional Surveying
  21.3 Limitations of GPS Based Surveying
  21.4 GPS a New Utility

Chapter 22: Satellite Ranging
  22.0 Introduction
  22.1 Principles of GPS Working
    22.1.1 Satellite Ranging
    22.1.2 Measuring Distance from a Satellite
    22.1.3 Atomic Clock and Determination of Position

Chapter 23: GPS Components
  23.0 Introduction
  23.1 Space Segment
    23.1.1 Satellite Identification
    23.1.2 Satellite Signals
  23.2 Control Segment
    23.2.1 Master Control Station
    23.2.2 Monitor Stations
    23.2.3 Ground Antennas
  23.3 User Segment

Chapter 24: GPS Receivers for Surveying
  24.0 Introduction
  24.1 GPS Receivers and its Features
    24.1.1 Surveying Receivers
    24.1.2 Receivers by Method of Operation
  24.2 GPS Errors

Chapter 25: GPS Surveying
  25.0 Introduction
  25.1 GPS Navigation and GPS Surveying
  25.2 GPS Surveying Techniques
    25.2.1 Rapid-static GPS Surveying
    25.2.2 Stop-and-Go GPS Surveying
    25.2.3 Kinematic GPS Surveying
  25.3 Real-time GPS Surveying and Mapping Techniques
    25.3.1 DGPS Technique
    25.3.2 RTK-GPS Technique

Index


SECTION I

GEOINFORMATICS


Chapter 1
Introduction

1.0 GENERAL

Geoinformatics may be described as the science and technology dealing with the structure and character of spatial information and its capture. It is a means of collecting and displaying information about the features and phenomena associated with the Earth's surface (including a little above and a little below it) by making measurements. There are different methods of making the measurements needed to collect this information. These are:

1. Ground method, i.e., surveying, in which measurements are made directly on the Earth's surface;
2. Photogrammetric method, i.e., photogrammetry, in which the measurements are made on photographs taken by a terrestrial or an aerial camera; and
3. Remote sensing method, i.e., remote sensing, in which use is made of data collected through sensors onboard artificial satellites, in the form of digital data or satellite imagery.

With the development of satellite technology and information technology, two other technologies have developed simultaneously:

1. Global Positioning System (GPS), and
2. Geographic Information System (GIS).

Since these two fields are so closely associated with the collection and display of such data, they have become parts of geoinformatics, and they too will be discussed in the subsequent chapters. As all the fields listed above contribute towards providing information about the features on the Earth's surface, they can be treated as the components of geoinformatics (Fig. 1.1).

1.1  MUTUAL RELATIONSHIP OF COMPONENTS OF GEOINFORMATICS

The information about any feature or phenomenon may be collected in the form of lengths, areas, volumes, roads, hills, valleys, forests, water bodies, cultivated land, potable water, polluted water, healthy crops, sandy soil, granite rock, folds, faults, densely populated areas, thick forest, water yield from snow, water management, erosion studies, traffic problems, urban growth of a city, war strategies, air pollution, ground water level, natural resources exploration, etc. The information collected can be displayed in the form of maps, charts, and/or reports using the technologies listed above. The technologies may be combined as and when required by the need and situation, since surveying, photogrammetry, remote sensing, GPS, and GIS are all, in one way or another, techniques of measuring, collecting, displaying, or processing the information.

Fig. 1.1  Components of Geoinformatics (ground surveys, photogrammetry, remote sensing, GIS, and the Global Positioning System)

It would be beyond the scope of this book to discuss all the technologies listed above in detail. Therefore, the discussion is limited to an introduction and their salient features, explaining only their principles and applications. Enough literature on each technology is available separately; here an attempt has been made to bring them under one fold so that readers understand each technology, its applications, advantages, and disadvantages. Uncommon and obsolete methods of historical importance have not been discussed.

SECTION II

PLANE SURVEYING


Chapter 2
Introduction and Basic Concepts

2.0 INTRODUCTION

Surveying is a field method of making measurements on the surface of the earth in all three dimensions, i.e., x, y, and z, where the x- and y-coordinates are taken in a horizontal plane, and the z-coordinate, which is the height of a point with respect to some datum, in a vertical plane. When the plane of reference for the x- and y-coordinates is taken as a horizontal plane, the surveying is called plane surveying. In geodetic surveying the curvature of the earth is taken into account, and the reference surface is no longer a horizontal plane: since the mean surface of the earth is an oblate ellipsoid, the reference surface taken for mapping is an ellipsoid of revolution. Geodetic surveying is a part of geodesy.

2.1  DEFINITION AND PRINCIPLE OF SURVEYING The definition and the basic principle of surveying must be understood very clearly.

2.1.1 Definition of Surveying

Surveying is defined as the art of making such measurements of the relative positions of points on the surface of the earth that, on drawing them to scale, natural and man-made features are exhibited in their correct relative horizontal and vertical positions on a piece of paper, known as a plan or map. All civil engineering projects, such as highways, railways, tunnels, bridges, flyovers, dams, reservoirs, airports, etc., use surveying for the planning, design, and execution of the project. The measurements required for carrying out a project are:

1. Measurement of lengths,
2. Measurement of angles and directions,
3. Measurement of elevations of points, and
4. Establishment of grades.

2.1.2 Principle of Surveying

The fundamental principle of surveying is "working from the whole to the part". This principle requires that a network of control points, i.e., points of known location, must first be established in the area to be surveyed, and that all other points then be located with reference to these control points. This basic principle must be followed to control the errors in measurements: by working in this way, the errors are kept within the prescribed limits of accuracy of the work. A point can be located with reference to already established points by any one of the following approaches (Fig. 2.1). Let the already fixed points be A and B, and let C be the point to be located.

1. Measure the perpendicular distance y to C from a point D on AB, and the distance x of D from A or from B (Fig. 2.1a).
2. Measure the distances l1 and l2, and plot C by intersection (Fig. 2.1b).
3. Measure the angle α and the distance l2, and locate C (Fig. 2.1c).
4. Measure the angle β and the distance l1, and locate C (Fig. 2.1d).
5. Measure the angles α and β, and locate C (Fig. 2.1e).

Fig. 2.1  Fixing a point in relation to the known points
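The second approach above (intersection of the two measured distances l1 and l2) can be sketched numerically. The control coordinates and distances below are hypothetical, chosen only for illustration; the helper name is ours, not the book's:

```python
import math

def locate_by_intersection(A, B, l1, l2):
    """Locate C from known points A and B using the measured distances
    l1 = AC and l2 = BC (approach 2, Fig. 2.1b). Of the two circle
    intersections, the one to the left of the direction A -> B is returned."""
    d = math.dist(A, B)
    a = (l1**2 - l2**2 + d**2) / (2 * d)   # along-line distance from A to the foot of C
    h = math.sqrt(l1**2 - a**2)            # perpendicular offset of C from line AB
    ux, uy = (B[0] - A[0]) / d, (B[1] - A[1]) / d
    return (A[0] + a * ux - h * uy, A[1] + a * uy + h * ux)

# Hypothetical control points and measured distances
A, B = (0.0, 0.0), (100.0, 0.0)
C = locate_by_intersection(A, B, l1=60.0, l2=80.0)
print(C)  # a point 60 m from A and 80 m from B
```

The mirror solution on the other side of AB is obtained by negating h; in the field the ambiguity is resolved by knowing on which side of AB the point lies.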

2.2  DEFINITIONS OF SOME BASIC TERMS

The definitions of some of the common basic terms used in surveying are given below:

Level surface: A level surface is an equipotential surface of the earth's gravity field. It is a curved surface, every element of which is normal to the plumb line passing through the point. The surface of still water is the best example of a level surface.

Level line: A line lying in a level surface is a level line. It is, thus, a curved line normal to the plumb line at all points.

Horizontal line: A line tangent to a level surface is a horizontal line.

Horizontal angle: An angle measured between two intersecting lines lying in a horizontal plane is a horizontal angle.

Horizontal distance: In plane surveying, the distance measured along a level line is termed the horizontal distance.

Vertical line: A line perpendicular to a horizontal plane is a vertical line.

Vertical plane: A plane containing a vertical line is a vertical plane.

Vertical angle: The angle between two intersecting lines in a vertical plane, one of which is taken in the horizontal plane, is a vertical angle.

Zenith: The point vertically above the observer on the celestial sphere is known as the zenith.

Zenith angle: An angle between two lines in a vertical plane, one of which is directed towards the zenith, is known as a zenith angle.

Elevation: The vertical distance of a point measured from an assumed datum or mean sea level is known as the elevation of the point.

Contour: A contour is an imaginary line of constant elevation on the surface of the ground.

Grade or gradient: The slope of a line, or its rate of ascent or descent, is termed the grade or gradient.

Latitude and departure: If the x-axis and y-axis of a Cartesian coordinate system are in the east–west and north–south directions, respectively, the y-coordinate of a point is its latitude and its x-coordinate its departure.

2.3  CONCEPT OF ERRORS IN SURVEYING

Surveyors must have some knowledge of errors so that, while making observations, they are careful to avoid mistakes and blunders, can remove those errors that are removable from the observations, and can minimize the errors that still remain. Before discussing errors, the terms accuracy and precision must be understood clearly. Accuracy is the degree of closeness or conformity of a measurement to its true value, whereas precision is the degree of closeness or conformity of repeated measurements of the same quantity to each other. The following four sets of data, relating to the measurement of a quantity by four observers, make the difference between accuracy and precision clear. Let the true value (which can never be determined) of the quantity be 25.567.



Observer A    Observer B    Observer C    Observer D
25.568        25.555        25.355        25.389
25.565        25.551        25.354        25.889
25.566        25.558        25.350        25.263
25.567        25.554        25.353        25.705
25.566        25.556        25.355        25.411

A close analysis of the above data shows that:

1. The observations of A are accurate, as they are very close to the true value;
2. The observations of B are accurate and precise, as the values are close to each other and also close to the true value;
3. The observations of C are not accurate but precise, as they are away from the true value but close to each other; and
4. The observations of D are neither accurate nor precise, as the values are neither close to the true value nor close to each other.
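The distinction can be checked numerically: the distance of an observer's mean from the true value measures accuracy, while the scatter of the readings about their own mean measures precision. A minimal sketch using the table above (the names bias and spread are ours, not standard terms from the text):

```python
import statistics

true_value = 25.567
readings = {
    "A": [25.568, 25.565, 25.566, 25.567, 25.566],
    "B": [25.555, 25.551, 25.558, 25.554, 25.556],
    "C": [25.355, 25.354, 25.350, 25.353, 25.355],
    "D": [25.389, 25.889, 25.263, 25.705, 25.411],
}

for name, x in readings.items():
    bias = abs(statistics.mean(x) - true_value)  # accuracy: closeness to the true value
    spread = statistics.stdev(x)                 # precision: mutual closeness of readings
    print(f"Observer {name}: bias = {bias:.4f}, spread = {spread:.4f}")
```

Running this shows, for example, a large bias but small spread for C (precise, not accurate) and a large spread for D (not precise).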

2.3.1  Classification of Errors

The errors can be classified as:

1. Gross errors,
2. Systematic errors, and
3. Random or accidental errors.

Gross Errors

Gross errors are, in fact, not errors at all, but the results of mistakes and blunders committed due to the carelessness of the observer. Examples of gross errors include pointing on the wrong survey target, taking an incorrect reading on the scale, and recording a value wrongly. The observation procedure should be so designed that mistakes and blunders are detected and removed from the observations.

Systematic Errors

Systematic errors are those errors of the observations that are systematic in nature and follow a certain mathematical functional relationship; they can therefore be determined, and the observations corrected. An example of a systematic error is the error in a measured distance due to a tape that is too short.

Random or Accidental Errors

Random or accidental errors are those errors which are beyond the control of the observer. After the observations have been corrected for mistakes and systematic errors, the errors left in the observations are the random errors. Random errors follow the law of normal distribution and have the following characteristics:

1. Positive errors are as frequent as negative errors,
2. Small errors are more frequent than large errors, and
3. Very large errors do not occur.

Most Probable Value of a Quantity

An observation (ο) initially consists of the true value (τ), gross errors (γ), systematic errors (ξ), and random errors (ρ), i.e.,

    ο = τ + γ + ξ + ρ        …(2.1)

As stated above, once all gross and systematic errors have been removed, the observation contains only random errors, and Eq. (2.1) becomes

    ο = τ + ρ        …(2.2)

The random errors present in the observations are then minimized, as they cannot be removed altogether. If the error remaining after minimization is ρ′, then

    M = τ + ρ′        …(2.3)

In Eq. (2.3), the value M is the value closest to the true value, and it is known as the most probable value of the quantity.
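As a small numerical sketch: for uncorrelated observations of equal weight, the most probable value is conventionally taken as the arithmetic mean (a standard least-squares result, not derived in the text), and the precision of that mean improves with the number of observations. Observer A's readings from the table above are reused here:

```python
import math
import statistics

# Repeated observations of the same quantity; gross and systematic
# errors are assumed to have been removed already.
observations = [25.568, 25.565, 25.566, 25.567, 25.566]

mpv = statistics.mean(observations)           # most probable value
s = statistics.stdev(observations)            # standard error of a single observation
s_mean = s / math.sqrt(len(observations))     # standard error of the mean

print(f"MPV = {mpv:.4f} +/- {s_mean:.4f}")
```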

11

Introduction and Basic Concepts

2.3.2  Sources of Errors The errors in observations may be due to the carelessness of the observer, called personal errors, due to the imperfectness of the instrument, called instrumental errors, or/and natural causes such as high wind, change in temperature during measurements, atmospheric refraction, etc.

In surveying the two basic quantities, distances, and angles, measured are often used to calculate other quantities, such as horizontal distances from measured slope distances, elevations from difference in heights, areas, volumes, grades, etc., using some mathematical relationships between the measured and computed quantities. Since the measured quantities have errors, it is inevitable that the quantities computed from observed quantities having errors, will not have errors. The errors in the observed quantities propagate through the mathematical relationships into the computed quantities, which is called propagation of error. As the error propagates, the standard error which is the measure of precision of observations of a quantity also propagates into the precision of the computed quantities. The relationship between the observed quantities and the computed quantities may be linear or non-linear, and the error in the computed quantity is determined accordingly.

Linear Relationship

If x is the observed quantity, the computed quantity y can be written as

y = ax + b …(2.4)

where a is the coefficient and b the constant. If the errors in x and y are dx and dy, respectively, then

dy = a dx …(2.5)

where a = dy/dx is the slope of the line in Eq. (2.4).

Non-linear Relationship

For a non-linear relationship y = f (x1, x2, x3, …), the error in y is computed from the following expression:

dy = (∂y/∂x1) dx1 + (∂y/∂x2) dx2 + (∂y/∂x3) dx3 + … …(2.6)

The standard error sy of the computed quantity is given by

sy² = [(∂f/∂x1) sx1]² + [(∂f/∂x2) sx2]² + [(∂f/∂x3) sx3]² + … …(2.7)
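Equation (2.7) can be sketched in code. The following is a minimal illustration, assuming a hypothetical plot whose area A = L·W is computed from two measured sides; the function name and the numerical values are invented for the example.

```python
import math

def propagated_std_error(partials, std_errors):
    """Standard error of a computed quantity per Eq. (2.7):
    s_y = sqrt(sum((df/dx_i * s_xi)**2))."""
    return math.sqrt(sum((p * s) ** 2 for p, s in zip(partials, std_errors)))

# Hypothetical example: area A = L*W of a plot with
# L = 40.00 m (s_L = 0.02 m) and W = 25.00 m (s_W = 0.01 m).
L, W = 40.00, 25.00
s_L, s_W = 0.02, 0.01
# Partial derivatives: dA/dL = W and dA/dW = L
s_A = propagated_std_error([W, L], [s_L, s_W])
print(round(s_A, 3))  # standard error of the computed area, m^2
```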


3 Horizontal Distance Measurement

3.0 INTRODUCTION

The determination of the distance between two points on or above the surface of the earth is one of the basic operations of surveying. For mapping purposes, all measured distances are finally reduced to their equivalent horizontal distances, except those measured to determine the difference in level of the points.

3.1  METHODS OF DISTANCE MEASUREMENTS

Distances can be measured (1) directly, (2) indirectly, or (3) with an electronic distance measurement (EDM) instrument. Approximate direct methods, such as pacing or using a pedometer, passometer, or odometer, have long been in use. Chains were also used to measure distances, but tapes and bars are now used for direct measurements. For indirect measurement of distances, the trigonometric method, tacheometry, or the subtense method is used. Distances can also be measured with electronic instruments that employ electromagnetic waves.

3.1.1  Distance Measurements by Direct Method

For the direct measurement of distances, tapes or bars are used. For ordinary work, linen or metallic tapes are used; metallic tapes have greater strength because metal wires are woven into their waterproof fabric. For precise work, invar tapes or bars of standard lengths are used. When steel tapes are used for precise measurements, some corrections, known as tape corrections, may need to be applied. The process of measuring the distance is commonly termed taping or chaining. The accessories used for taping are the plumb bob, ranging rods, pegs, arrows, spirit level, spring balance, tape clamp, line ranger, and optical square.

If the distance between two points is more than a tape length, intermediate points between the two end points are established so that the straight-line distance between them can be measured. The intermediate points are established by the method of ranging. In Fig. 3.1, the points 1 and 2 are to be fixed on the line AB by ranging. The surveyor stands slightly away from A at P, and the assistant holds a ranging rod at R. The surveyor directs the assistant to move until the rod is in line with AB. The point 1 is marked at the bottom of the ranging rod which is on the line AB. Similarly, other points are established.

Fig. 3.1  Ranging when ends of line are visible

When the end points A and B are not intervisible, the method of reciprocal ranging is employed. In Fig. 3.2, A and B are not intervisible due to intervening high ground. To establish the intermediate points 1 and 2, two points P and Q are selected such that both are visible from A and B. Two assistants hold ranging rods at P and Q. The surveyor stands at M near A, sights Q, and asks the assistant at P to move to P1 on the line AQ. Then the surveyor, standing at N near B, sights P1 and asks the assistant at Q to move to Q1 on the line BP1. Repeating this process for three or four iterations locates the points 1 and 2 on the line AB.

Fig. 3.2  Reciprocal ranging when ends of line are not intervisible

The distance measurements on sloping or uneven ground can be done by stepping, i.e., taking horizontal distances in short steps.

Tape Corrections

Some systematic errors which occur during the measurement of distances can be easily computed, and the measured distance can be corrected to obtain the horizontal distance. Let

ca = Correction for absolute length
ct = Correction for temperature
cp = Correction for pull
cg = Correction for sag
cs = Correction for slope
cm = Correction for alignment
cl = Correction for reduction to mean sea-level (m.s.l.)
c = Correction per tape length
l = Designated or nominal length of the tape
L = Measured length of the line
α = Coefficient of linear expansion of the tape material


tm = Mean temperature during the measurement
t0 = Standard temperature
P = Pull applied in the field
P0 = Standard pull
A = Cross-sectional area of the tape
E = Modulus of elasticity of the tape material
W = Weight of the tape per span length
θ = Angle of the slope
h = Difference in elevation of the two ends of the tape
d = Distance by which the other end of the tape is out of alignment
hav = Average elevation of the measured line

The formulae for the corrections are given below:

Correction for absolute length    ca = ±(c/l)L …(3.1)

Correction for temperature    ct = ±α(tm − t0)L …(3.2)

Correction for pull    cp = ±(P − P0)L/(AE) …(3.3)

Correction for sag    cg = −(1/24)(W/P)²L …(3.4)

Correction for slope    cs = −(1 − cos θ)L (Exact) …(3.5)
                           = −h²/(2L) (Approximate) …(3.6)

Correction for alignment    cm = −d²/(2L) (Approximate) …(3.7)

Correction for reduction to m.s.l.    cl = −hav L/(R + hav) (Exact) …(3.8)
                                         = −hav L/R (Approximate) …(3.9)
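The corrections of Eqs. (3.2) to (3.9) can be combined in a short routine. The sketch below is a minimal illustration: the function name and the sample tape data are hypothetical, the absolute-length correction of Eq. (3.1) is left out (it needs the tape's standardized length), and the approximate forms of the slope and m.s.l. corrections are used.

```python
def tape_corrections(L, alpha, t_m, t_0, P, P_0, A, E, W, h, d, h_av, R=6.37e6):
    """Sum of tape corrections, Eqs. (3.2)-(3.9); lengths in metres,
    pull in newtons, temperatures in degrees Celsius."""
    c_t = alpha * (t_m - t_0) * L         # temperature, Eq. (3.2)
    c_p = (P - P_0) * L / (A * E)         # pull, Eq. (3.3)
    c_g = -(W / P) ** 2 * L / 24          # sag, Eq. (3.4)
    c_s = -h ** 2 / (2 * L)               # slope (approximate), Eq. (3.6)
    c_m = -d ** 2 / (2 * L)               # alignment, Eq. (3.7)
    c_l = -h_av * L / (R + h_av)          # reduction to m.s.l., Eq. (3.8)
    return c_t + c_p + c_g + c_s + c_m + c_l

# Hypothetical 30 m steel tape: alpha = 1.15e-5 per degC, field temperature
# 35 degC (standard 20 degC), pull 150 N (standard 50 N), A = 3 mm^2,
# E = 2.1e11 N/m^2, weight 9 N per span, end-elevation difference 0.5 m,
# far end 0.2 m off line, mean line elevation 200 m.
corr = tape_corrections(30.0, 1.15e-5, 35.0, 20.0, 150.0, 50.0,
                        3e-6, 2.1e11, 9.0, 0.5, 0.2, 200.0)
print(round(corr, 5))  # net correction (m) to add to the measured length
```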

3.1.2  Distance Measurements by Indirect Methods

Indirect methods of distance measurement make use of trigonometry, tacheometry, and the subtense method. There are many possibilities for obtaining a distance using trigonometry; only a few are discussed here, as surveyors can work out other possibilities themselves depending on the field conditions.


Using Trigonometric Functions

(a) Sine rule (Fig. 3.3): In ∆ ABC if α, β, and a are known, b can be calculated from the following formula:

b/sin β = a/sin α …(3.10)

Fig. 3.3  Distance from properties of a triangle

(b) Using properties of a right-angled triangle (Fig. 3.4):
(i) In right-angled ∆ ABC, if a and c are known, b is given by

b = √(a² + c²) …(3.11)

(ii) If c and α are known, b is computed as

b = c sec α …(3.12)

Fig. 3.4  Distance from properties of a right-angled triangle

(c) Using properties of a triangle (Fig. 3.5): In ∆ ABC, if b, c, and α are known, a can be computed from the formula

cos α = (b² + c² − a²)/(2bc) …(3.13)

Fig. 3.5  Distance from properties of a triangle
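The triangle formulae above translate directly into code. A minimal sketch with hypothetical function names; Eq. (3.13) is used in the rearranged form a = √(b² + c² − 2bc cos α).

```python
import math

def dist_sine_rule(a, alpha_deg, beta_deg):
    """Side b from Eq. (3.10): b/sin(beta) = a/sin(alpha)."""
    return a * math.sin(math.radians(beta_deg)) / math.sin(math.radians(alpha_deg))

def dist_cosine_rule(b, c, alpha_deg):
    """Side a from Eq. (3.13), rearranged: a = sqrt(b^2 + c^2 - 2bc*cos(alpha))."""
    return math.sqrt(b * b + c * c - 2.0 * b * c * math.cos(math.radians(alpha_deg)))

# Check against the right-angled case (alpha = 90 deg): a 3-4-5 triangle.
print(round(dist_cosine_rule(3.0, 4.0, 90.0), 6))  # 5.0
```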


Using Tacheometry

Tacheometry, or the stadia method, is an indirect method of horizontal and vertical distance measurement. It is a rapid method, but usually of a lower degree of accuracy. In the diaphragm of the theodolite used for measuring angles (c.f., Sec. 4.2), in addition to the horizontal and vertical hairs, two horizontal hairs known as stadia hairs are provided at a known distance i apart (Fig. 3.6). Stadia hairs are also provided in the level, the instrument used for measuring differences in elevation (c.f., Fig. 5.1). If the intercept made by the stadia hairs on a graduated staff (c.f., Fig. 5.2) is s (Fig. 3.7), the horizontal distance D is given by Eq. (3.14), in which k is the multiplying constant and c is the additive constant.

Fig. 3.6  Diaphragm with stadia hairs

D = ks + c …(3.14)

Fig. 3.7  Using a tacheometer

In Fig. 3.7 the line of sight is horizontal, but when it is inclined to the horizontal at an angle α, the horizontal and vertical distances D and V shown in Fig. 3.8 are given by Eqs. (3.15) and (3.16):

D = ks cos²α …(3.15)

V = (1/2) ks sin 2α …(3.16)

If the elevation hA of A and the height hi of the instrument above the ground are known, the elevation hB of B with respect to the datum is given by Eq. (3.17):

hB = hA + hi + V − h …(3.17)

where h is the middle wire reading on the staff.

Fig. 3.8  Inclined line of sight
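Equations (3.14) to (3.17) can be sketched as follows; the function name and the staff readings are hypothetical, and k and c are the instrument constants of Eq. (3.14). As in Eq. (3.15), the additive constant is dropped for the inclined sight.

```python
import math

def tacheometric_obs(k, c, s, alpha_deg, h_A, h_i, h_mid):
    """Distances and elevation from stadia readings, Eqs. (3.14)-(3.17):
    k and c are the multiplying and additive constants, s the staff
    intercept, alpha the inclination of the line of sight, h_A the
    instrument-station elevation, h_i the instrument height, and
    h_mid the middle-wire reading."""
    a = math.radians(alpha_deg)
    if alpha_deg == 0.0:
        D = k * s + c                    # horizontal sight, Eq. (3.14)
    else:
        D = k * s * math.cos(a) ** 2     # inclined sight, Eq. (3.15)
    V = 0.5 * k * s * math.sin(2.0 * a)  # vertical component, Eq. (3.16)
    h_B = h_A + h_i + V - h_mid          # staff-point elevation, Eq. (3.17)
    return D, V, h_B

# Hypothetical horizontal sight: k = 100, c = 0.3 m, intercept s = 1.5 m.
print(tacheometric_obs(100.0, 0.3, 1.5, 0.0, 100.0, 1.4, 1.0))
```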

Subtense Method

The subtense method is also an indirect method of distance measurement. It essentially consists of measuring the angle θ subtended by the two ends P and Q of a rod of fixed length s. The rod PQ of known length is known as a subtense bar (Fig. 3.9). If the angle measured between P and Q at A is θ, the horizontal distance D is given by Eq. (3.18):

D = (s/2) cot(θ/2) cos α …(3.18)

and the vertical distance by Eq. (3.19):

V = D tan α …(3.19)

The elevation hB of B can be determined from Eq. (3.17).

Fig. 3.9  Subtense method of measuring horizontal distance
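A minimal sketch of Eqs. (3.18) and (3.19), with a hypothetical function name; for a horizontal sight (α = 0) the vertical component V vanishes.

```python
import math

def subtense_distance(s, theta_deg, alpha_deg=0.0):
    """Eq. (3.18): D = (s/2)*cot(theta/2)*cos(alpha);
    Eq. (3.19): V = D*tan(alpha)."""
    theta = math.radians(theta_deg)
    alpha = math.radians(alpha_deg)
    D = 0.5 * s * math.cos(alpha) / math.tan(theta / 2.0)
    V = D * math.tan(alpha)
    return D, V

# Hypothetical: a 2 m subtense bar subtending 1 degree at the theodolite.
D, V = subtense_distance(2.0, 1.0)
print(round(D, 2))  # roughly 115 m; the smaller the angle, the longer the sight
```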


3.1.3  Distance Measurements with EDM

Electronic distance measurement (EDM) instruments use reflectors to return the electromagnetic (EM) waves emitted by the instrument. The EM waves travel at the speed of light. The total travel time of the EM waves from the instrument to the reflector and back is measured. If the total distance travelled is 2D (the distance between the instrument and the reflector being D), the travel time is t, and the velocity of light is c, then

c = 2D/t  or  D = ct/2 …(3.20)
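Equation (3.20) in code, as a minimal sketch with a hypothetical function name:

```python
C = 299_792_458.0  # speed of light, m/s

def edm_distance(t):
    """Eq. (3.20): D = c*t/2, where t is the measured two-way travel time (s)."""
    return C * t / 2.0

# A two-way travel time of about 6.67 microseconds corresponds to ~1 km.
print(edm_distance(2.0 * 1000.0 / C))  # ~1000.0 m
```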

4 Angle and Direction Measurement

4.0 INTRODUCTION

In plane surveying, angles and directions must be measured in the horizontal or vertical plane (c.f., Sec. 2.2) for various purposes. An angle is the figure formed by two intersecting lines or planes at or close to the point of intersection, whereas a direction is also an angle, but one measured with respect to some reference or meridian. It is the practice to say that two lines meet at some angle, but that a particular object lies in a direction referred to an already known direction or line. For example, the lines AB and CD intersect at an angle θ at O (Fig. 4.1a), but the point D lies in a direction at an angle θ to the right of the known line AB (Fig. 4.1b).

Fig. 4.1  Angle and direction

The method of determining an angle or direction depends upon the instrument employed. The transit theodolite, or simply theodolite, is the most commonly used instrument for measuring angles, but angles can also be measured using a tape, plane table and alidade, compass, or sextant.

4.1  CLASSIFICATION OF ANGLES AND DIRECTIONS

Since directions are also defined in terms of angles, the method of measurement of both is the same. There are several means of defining angles and directions:

1. Bearings,
2. Azimuths,
3. Deflection angles,
4. Angles to the right, and
5. Interior angles.


4.1.1  Bearings

Bearing is defined as the direction of a line with respect to a given meridian. The fixed reference line NS shown in Fig. 4.2, with respect to which horizontal angles such as θF for the line AB at A are measured clockwise, is known as the meridian, and the measured angle is known as the bearing. If the direction of progress of the survey is A to B, the bearing θF measured at A is called the fore bearing of the line AB, and the bearing θB measured at B is called the back bearing of AB.

Fig. 4.2  Direction referred to a meridian (bearing)

The fixed reference line known as the meridian is named according to how it is chosen:

1. If it passes through the geographical north (N) and south (S) poles of the earth, it is a true or astronomic meridian.
2. If it is a line parallel to a central true meridian, it is a grid meridian.
3. If it points towards the magnetic north (N) and south (S) poles of the earth, it is a magnetic meridian.
4. If it is an arbitrarily chosen reference line, it is an arbitrary meridian.

As the bearing depends upon the meridian chosen, it may be a true, magnetic, or arbitrary bearing.

Whole-circle Bearing and Reduced Bearing

If the bearing is measured from the north end of the meridian in the clockwise direction, it is called the whole-circle bearing (W.C.B.) (Fig. 4.2). Bearings can also be measured from the north end or the south end, in the clockwise or anticlockwise direction, depending upon the quadrant in which the line lies; a bearing so measured is called the reduced bearing (R.B.) or quadrantal bearing. The bearings are named according to the quadrant, i.e., north-east (W.C.B. 0°–90°), south-east (90°–180°), south-west (180°–270°), or north-west (270°–360°), as shown in Fig. 4.3.
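The quadrant rule of Fig. 4.3 can be sketched as a small conversion routine; the function name and output format are invented for the illustration.

```python
def wcb_to_reduced(wcb):
    """Convert a whole-circle bearing (degrees, 0-360) into a
    quadrantal (reduced) bearing string, following Fig. 4.3."""
    wcb %= 360.0
    if wcb <= 90.0:
        return f"N {wcb:.2f} E"
    if wcb <= 180.0:
        return f"S {180.0 - wcb:.2f} E"
    if wcb <= 270.0:
        return f"S {wcb - 180.0:.2f} W"
    return f"N {360.0 - wcb:.2f} W"

print(wcb_to_reduced(210.0))  # S 30.00 W
```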

4.1.2  Azimuths

The whole-circle bearing of a line in geodetic and astronomic surveying is called the azimuth of the line. The distinction between whole-circle bearing and azimuth arises because the convergence of meridians is taken into account for azimuths. In plane surveying the convergence of meridians is not considered, and azimuths are simply called bearings.


Fig. 4.3  Reduced or quadrantal bearing

4.1.3  Deflection Angles

The angle between a line and the prolongation of the preceding line is known as the deflection angle. In Fig. 4.4, AB, BC, CD, and DE are the lines of a route in which AB is the initial line. The line BC deflects to the right by an angle θBR with respect to the preceding line AB. Similarly, CD deflects to the right by an angle θCR with respect to BC, and DE to the left by an angle θDL with respect to CD.

Fig. 4.4  Deflection angles


4.1.4  Angles to the Right

The angles measured clockwise from the preceding line to the following line, as illustrated in Fig. 4.5, are called angles to the right.

Fig. 4.5  Angles to the right

4.1.5  Interior Angles

The inside angles between the adjacent lines of a closed polygon, as shown in Fig. 4.6, are called interior angles. The sum of the interior angles is equal to (n − 2) × 180°, where n is the number of sides of the polygon.

Fig. 4.6  Interior angles

4.2  THEODOLITE

The theodolite (Fig. 4.7) is an instrument used for a variety of surveying tasks. It is primarily employed to measure horizontal and vertical angles.

4.2.1  Main Parts of a Theodolite

Figure 4.8 is a sectional view of a theodolite illustrating its main parts. The tribrach is a levelling head carrying three levelling screws for levelling the instrument. It supports the main parts of the theodolite and is used to attach the theodolite to the tripod. The lower plate and the upper plate of the instrument are used for the measurement of horizontal angles. The lower plate carries a graduated circle with degree marks from 0° to 360°. The upper plate carries two verniers, diametrically opposite to each other, and moves with the movement of the telescope.

Fig. 4.7  Transit theodolite

There are two clamps, the lower clamp and the upper clamp. The lower clamp restricts the movement of the lower plate, and the upper clamp restricts the relative motion between the upper and lower plates, so that the reading on the lower


plate remains unchanged. There are slow-motion screws with each clamp for finer movements. Two standards resembling the letter A are firmly attached to the upper plate. These standards support the telescope and the vertical circle, and allow the movement of the telescope in the vertical plane. The upper plate carries one or two plate levels for levelling the instrument. The vertical circle and the verniers attached to it are used to measure vertical angles. The altitude bubble is used to make the 0°–0° line of the vertical circle horizontal before vertical angles are taken. The telescope of the theodolite is generally of the internal-focusing type.

4.2.2 Definitions of Some Technical Terms


Some of the common terms used with the theodolite and their definitions are given below (Fig. 4.9):

Vertical axis: The axis (V ) about which the theodolite is rotated in a horizontal plane is the vertical axis of the instrument.

Horizontal or trunnion axis: The axis (H ) about which the telescope, along with the vertical circle, rotates in a vertical plane is called the horizontal or trunnion axis of the instrument.

Line of collimation: The imaginary line (S ) joining the optical centre of the objective and the intersection of the cross hairs is called the line of collimation. It is also called the line of sight.

Axes of the plate level and telescope level bubbles: The lines (A) and (B) tangential to the longitudinal curves of the plate level and telescope level tubes, respectively, at their centres, are the axes of the plate level bubble and the telescope level bubble, respectively.

Instrumental centre: The point O through which the vertical axis, the horizontal axis, and the line of sight pass is called the instrumental centre.

Centering: The process of setting up a theodolite over the ground station mark is known as centering.

Fig. 4.8  Sectional view of a theodolite showing its main parts

Levelling: The process of making the vertical axis of the instrument coincide with the plumb line through the instrument centre is called levelling. After centering and levelling, the vertical axis of the instrument passes through the ground station mark.


Transiting (reversing or plunging): The process of turning the telescope in the vertical plane through 180° about the horizontal axis is known as transiting.

Fig. 4.9  Principal axes of a theodolite

Swing: The continuous motion of the telescope about the vertical axis in the horizontal plane is called swing. When the telescope is rotated clockwise, it is a right swing; if anticlockwise, a left swing.

Face-left and face-right observations: Observations made with the vertical circle on the left of the telescope are known as face-left observations; if the circle is on the right side of the telescope, the observations are called face-right observations.

Changing face: When the face of the telescope is changed from left to right or vice versa, the process is known as changing face.

Telescope normal and inverted: When the vertical circle is on the left of the telescope and the telescope bubble is up, the telescope is said to be normal; when the circle is on the right with the bubble down, it is inverted.

A set: A set of horizontal angle observations consists of two observations of the angle, one on face left and the other on face right.

4.2.3  Geometry of Theodolite

The principal axes of a theodolite have certain geometrical relationships between them (Fig. 4.9) which must exist permanently; establishing them is known as the permanent adjustment of the theodolite. If these relationships are disturbed, the observations made will be in error. The following relationships between the axes must exist:

1. The vertical axis (V ) is perpendicular to the plane of the plate level bubble (A),
2. The horizontal axis (H ) is perpendicular to the vertical axis (V ),
3. The line of sight (S ) is perpendicular to the horizontal axis (H ),




4. The axis of the telescope bubble (B) is parallel to the line of sight (S ), and 5. The vertical axis (V ), the horizontal axis (H ), and the line of sight (S ) pass through the instrument centre.

4.2.4  Adjustments of a Theodolite

The following are the adjustments of a theodolite:

1. Temporary adjustments and
2. Permanent adjustments.

The temporary adjustments must be made at every set-up of the instrument before observations are taken. They consist of setting up the instrument at the station from which the observations are to be made, which includes centering, levelling, and removal of parallax. Centering sets the theodolite over the ground station mark so that the vertical axis passes exactly through the mark. Levelling makes the instrument level so that, for all horizontal positions of the theodolite in one set-up, the plate level bubble remains central. Finally, removal of parallax is done for accurate bisection of the objects.

The permanent adjustments are done by the manufacturer of the instrument or in laboratories equipped for such work. They are done to satisfy the relationships between the different axes of the instrument given in Sec. 4.2.3; also, the vertical circle reading should be zero when the line of sight is horizontal. Once the permanent adjustments have been made, the instrument retains them for a long period of time.

4.2.5  Horizontal Angle Measurement with a Theodolite

The following two methods are used for horizontal angle measurements:

1. Reiteration method and
2. Repetition method.

Reiteration Method

The reiteration method, also known as the direction method, is employed when a number of angles are to be measured at one station. Readings are taken on both verniers, on face left and face right, for one set of readings; the number of sets is decided according to the accuracy required. In Fig. 4.10, five stations A, B, C, D, and E are to be observed from station O. The angles θ1, θ2, θ3, θ4, and θ5 are measured, and the angles between the points are obtained by subtracting the measured angles from each other, e.g., ∠AOB = (θ2 − θ1), ∠BOC = (θ3 − θ2), and so on. The following steps are followed to make the measurements:

1. Set up the instrument over the station O.
2. Bisect the station A, and make the initial reading on the horizontal circle 0° on face left.

Fig. 4.10  Reiteration method of horizontal angle measurement




3. Bisect B, C, D, E, and finally A again, taking the respective readings θ1, θ2, θ3, θ4, and θ5 in the clockwise direction. Take the mean of the angles obtained from the readings on the two verniers.
4. Change face to face right, and bisect station A with the initial reading as 360°.
5. Rotate the telescope in the anticlockwise direction, and take the readings in the reverse sequence E, D, C, B, A. Take the mean of the angles obtained from the readings on the two verniers.
6. Take the mean of the respective values of the angles on face left and face right; these are the required angles between the points for one set of observations.

Repetition Method

The repetition method is employed when the accuracy requirement is high. In this method also, readings are taken on both verniers, on face left and face right, for one set of readings, and the number of repetitions is decided according to the accuracy required. In Fig. 4.11, let the angle θ subtended at O by the stations A and B be observed by three repetitions. The following steps are followed to measure the required angle:

1. Set up the instrument over the station O.
2. Bisect the station A, and make the initial reading on the horizontal circle 0° on face left.
3. Bisect the station B and take the reading θ′ in the clockwise direction.
4. Rotate the telescope in the clockwise direction, and bisect A, keeping the initial reading now as θ′.
5. Rotate the telescope further to bisect B, and take the reading θ′′ in the second repetition.
6. Rotate the telescope again in the clockwise direction, and bisect A, keeping the initial reading now as θ′′.
7. Rotate the telescope further to bisect B, and take the reading θ′′′ in the third repetition. Let the mean of the readings on the two verniers on face left in the third repetition be θ̄L.
8. Now bisect the station A again and take the readings as above, rotating the telescope in the anticlockwise direction with the face right. Let the mean reading after three repetitions with face right be θ̄R.
9. The mean of the two final readings divided by the number of repetitions (3) is the value of the angle, i.e., θ = (1/2)(θ̄L/3 + θ̄R/3).

Fig. 4.11  Repetition method of horizontal angle measurement
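Step 9 can be sketched in code. The final accumulated face-left and face-right readings (decimal degrees here, hypothetical values) are each divided by the number of repetitions and the two results averaged.

```python
def repetition_angle(theta_L_final, theta_R_final, n=3):
    """Mean angle from the repetition method: the accumulated face-left
    and face-right readings are each divided by the number of
    repetitions n, then the two quotients are averaged."""
    return 0.5 * (theta_L_final / n + theta_R_final / n)

# Hypothetical accumulated readings (decimal degrees) after 3 repetitions:
print(round(repetition_angle(130.5030, 130.5060), 4))  # 43.5015
```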

Errors Eliminated and Not Eliminated

The errors which get eliminated during various operations in the above two methods are listed below:

1. Errors due to eccentricity of the verniers and centres are eliminated by reading both verniers.
2. Errors due to maladjustment of the line of collimation and the horizontal axis are eliminated by taking readings on both faces.
3. Errors due to inaccurate graduations on the circles are eliminated by taking a number of sets with changed zeros.

Errors due to inaccurate bisection are counterbalanced to some extent. In the method of repetition, the readings are automatically taken on different parts of the circle.


The following errors are not eliminated:

1. Errors due to slips.
2. Errors due to displacement of the station signal.
3. Errors due to non-verticality of the vertical axis.

4.2.6  Vertical Angle Measurement with a Theodolite

A vertical angle may be an angle of elevation (+) or an angle of depression (−). If the point is above the horizontal plane through the instrument centre, such as the point B in Fig. 4.12, the vertical angle is an angle of elevation; if below, such as the point C, it is an angle of depression.

Fig. 4.12  Vertical angle measurement

The following steps are involved in the measurement of vertical angles:

1. Set up the instrument over the station A.
2. Check that the telescope bubble is in the centre of its run.
3. With face left, rotate the telescope in the vertical plane to bisect the point B, and take the readings on the two verniers of the vertical circle.
4. Change face, bisect B, and take the readings on the two verniers of the vertical circle again.
5. Take the mean of the means of the two vernier readings for face left and face right; this is the required vertical angle +α of the point B at A.

4.2.7  Miscellaneous Field Operations with a Theodolite

The theodolite is a versatile instrument, used not only for measuring horizontal and vertical angles but for a variety of other applications, some of which are listed below.

Measurement of magnetic bearing: The whole-circle bearing can be measured by attaching a tubular compass to the standards.

Lining-in: This is the process of establishing intermediate points on a given line. By setting up the instrument so that the two ends of the line lie on the line of sight, intermediate points lying on the line of sight can be located on the line.



Balancing-in: Establishing intermediate points on a line whose extremities are not intervisible, but are visible from some intermediate point, can be done with a theodolite by balancing-in.

Prolonging a straight line: A straight line of given length can be easily prolonged using a theodolite.

Location of the intersection of two straight lines: The intersection of two straight lines can also be determined using a theodolite.

Checking the verticality of structures: The verticality of structures, such as pillars, piers, etc., can also be checked using a theodolite.

4.2.8  Errors in Theodolite Measurements

The sources of error in measurements made with the theodolite may be classified as below:

Personal errors: These errors arise from inaccurate centering, inaccurate levelling, slip of screws, improper use of tangent screws, errors in setting and reading the verniers, inaccurate sighting, and parallax.

Instrumental errors: These errors are due to instrument imperfection and/or maladjustment, such as imperfect adjustment of the plate level, the line of sight not being perpendicular to the horizontal axis, the horizontal axis not being perpendicular to the vertical axis, eccentricity of the inner and outer axes, eccentricity of verniers, imperfect graduations, and maladjustment of the vertical verniers (vertical index error).

Errors due to natural causes: These are the effects of temperature differences, wind, refraction, and settlement of the tripod.

4.3  MAGNETIC COMPASS

A magnetic compass used by surveyors is an instrument containing a magnetized pointer which shows the direction of magnetic north, and bearings from it. There are various kinds of magnetic compasses, such as the trough compass, circular compass, tubular compass, prismatic compass, and surveyor's compass. Some of these are used to find the direction of the magnetic north-south line, i.e., the magnetic meridian, and others are used to measure the directions of lines from the magnetic meridian, i.e., the bearings. For the measurement of the whole-circle bearing, the prismatic compass (Fig. 4.13) is most commonly employed by surveyors. In the prismatic compass, the magnetic needle is attached to the graduated ring of the compass inside a circular box which carries the eye vane and the object vane for sighting. A prism attached to the eye vane enables the reading on the graduated ring to be taken while sighting the object. Since the graduated ring does not move when the circular box is rotated to sight the object, the reading on the graduated ring is the whole-circle bearing.

Fig. 4.13  Prismatic compass


4.3.1  Magnetic Declination

The horizontal angle between the true or astronomic meridian and the magnetic meridian is called the magnetic declination. If the magnetic declination at a place is known, the true bearing of a line can be determined from its observed magnetic bearing.

4.3.2  Local Attraction


The deviation of the magnetic needle from the magnetic meridian caused by local sources, such as objects of iron or steel, some kinds of iron ore, or electric transmission lines, is called local attraction. Deviation of the needle from the magnetic meridian alters the magnetic bearing, so observed magnetic bearings will be in error where local attraction is present. If the difference between the fore bearing and the back bearing of a line is exactly 180°, both ends of the line are free from local attraction; this property is used to detect the stations affected by local attraction and to correct the affected bearings.
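The 180° fore-bearing/back-bearing test can be sketched as follows; the function name and the tolerance parameter are invented for the illustration.

```python
def free_of_local_attraction(fore_bearing, back_bearing, tol=0.0):
    """Both ends of a line are free of local attraction when its fore and
    back whole-circle bearings (degrees) differ by exactly 180."""
    diff = abs(fore_bearing - back_bearing) % 360.0
    return abs(diff - 180.0) <= tol

print(free_of_local_attraction(60.0, 240.0))   # True
print(free_of_local_attraction(60.0, 242.5))   # False
```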

5 Vertical Distance Measurements

5.0 INTRODUCTION

For the execution of engineering projects, such as highways, railways, canals, tunnels, bridges, dams, and irrigation and drainage works, the heights of different points above a datum, e.g., mean sea-level, are required. The vertical distance between two points is the difference in heights of the two points; it is also equal to the difference in their elevations (c.f., Sec. 2.2). Levelling is the surveying operation performed to determine and establish elevations of points on or beneath the surface of the earth, and to find the vertical distances between points.

5.1  METHODS OF LEVELLING

The difference in elevations can be determined by any of the following methods:

(i) Direct differential or spirit levelling,
(ii) Indirect or trigonometric levelling,
(iii) Stadia levelling, and
(iv) Barometric levelling.

5.1.1  Direct Differential Levelling

Direct differential or spirit levelling is the simplest and most accurate method for determining the difference in elevation between two points. The instruments employed in differential levelling are the level and the levelling staff. Different types of levels and staffs are available; Fig. 5.1 shows an auto level and Fig. 5.2 a levelling staff. The level provides a horizontal line of sight (c.f., Sec. 4.2.2) for taking readings on a vertically held staff, which is a scale graduated in metres. The principle of differential levelling is explained in Fig. 5.3. The horizontal line of sight provided by the level falls on the two staffs held at the points A and B between which the difference in level is to be determined. If the elevations of A and B are hA and hB, respectively, and the staff readings at A and B are sA and sB, respectively, then

hA + sA = H.I. = hB + sB …(5.1)

or hB = H.I. − sB …(5.2)

If the difference in elevation between the two points is Δh, we can write

Δh = hB − hA = sA − sB …(5.3)

or hB = hA + Δh …(5.4)


Fig. 5.1  Auto level

Fig. 5.2  Levelling staff

Fig. 5.3  Direct differential levelling

Thus, from Eq. (5.3) we find that the difference in elevation between two points is equal to the difference in the staff readings at the two points. If the elevation of one point is known, the elevation of the other point can be obtained from Eq. (5.4) by simply adding the difference in elevation to the elevation of the known point, or from Eq. (5.2) by subtracting the staff reading at the point of unknown elevation from the height of instrument (H.I.). The height of instrument given by Eq. (5.1) is the elevation of the line of sight, obtained by adding the staff reading to the elevation of the point whose elevation is known.
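As a numeric check on Eqs. (5.1) to (5.4), a short sketch with hypothetical staff readings (the values are invented, not from the text):

```python
# Hypothetical readings (metres): hA is the known elevation of A,
# sA and sB are the staff readings at A and B for one instrument set-up.
hA, sA, sB = 100.000, 1.425, 0.875

HI = hA + sA        # Eq. (5.1): height of instrument
hB = HI - sB        # Eq. (5.2): elevation of B
dh = sA - sB        # Eq. (5.3): difference in elevation
assert abs(hB - (hA + dh)) < 1e-9   # Eq. (5.4) is consistent with Eq. (5.2)

print(round(HI, 3), round(hB, 3), round(dh, 3))   # 101.425 100.55 0.55
```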


Geoinformatics

Classification of Direct Differential Levelling
Direct differential levelling is classified into the following:

Simple Levelling
This is the easiest and most straightforward type of direct levelling, in which only one setting of the instrument is needed to determine the difference in level between two points lying within the range of the instrument.

Differential, Compound, or Continuous Levelling
This method of levelling is employed when the two points between which the difference in level is required are too far apart, there are intervening obstacles, or the difference in elevation between the points is too large, so that more than one setting of the instrument is required.

Check Levelling
It is performed to check the levels of points whose elevations have already been established.

Fly Levelling
This levelling is employed to establish a temporary bench mark (B.M.) (c.f., Definitions of Terms Used in Levelling) by running a series of levels along a general direction from a point of known elevation (B.M.).

Profile or Longitudinal Levelling
It is used in route surveys, such as those for highways, railways, canals, and sewer lines, to obtain the profile of the ground along the proposed centre line of the route by establishing levels at regular intervals on the centre line of the alignment.

Cross-section Levelling
It is similar to profile levelling and is used to obtain the profile or cross-section of the ground in the transverse direction of the alignment at regular intervals.

Reciprocal Levelling
It is used for the accurate determination of the difference in level between two points situated a large distance apart, when it is not possible to set up the instrument between the points.

Precise Levelling
This is a very accurate method of differential levelling, used for high-precision work and for establishing bench marks.

Definitions of Terms Used in Levelling
The following terms are commonly used in levelling:
Datum: It is a level surface with respect to which the levels of points are measured or referred. It may be a standard surface like the mean sea-level (m.s.l.) or an arbitrary surface.
Bench Mark (B.M.): A bench mark is a permanent or semi-permanent physical mark of known elevation.
Station: In levelling, a station is the point where the levelling staff is held to determine the level of that point.


Height of Instrument (H.I.): It is the elevation of the horizontal line of sight of the instrument.
Back Sight (B.S.): In general, it is the reading taken on the staff held at a point of known elevation. Normally, levelling proceeds from a B.M., and therefore the first reading, taken on the B.M., is a back sight.
Fore Sight (F.S.): It is the last reading taken on the staff before shifting the instrument for taking further readings or closing the work.
Intermediate Sight (I.S.): It is a staff reading taken at a station between two successive B.S. and F.S. stations.
Change Point (C.P.) or Turning Point (T.P.): It is a station where an F.S. is taken before shifting the instrument and a B.S. is taken after shifting the instrument.
Balancing of Sights: When the distances of the B.S. station and the F.S. station from the instrument position are taken approximately equal, it is known as balancing the sights. It is done to minimize instrumental and other errors.
Reduced Level (R.L.): In levelling, the readings are taken on the staff held on the points whose elevations are to be determined. The readings themselves are not the elevations of the points; they are used to compute, or reduce, the levels of the points, and therefore the levels so determined are called reduced levels. Thus, the reduced level of a point is its elevation.

Booking and Reducing Levels
There are two methods of reducing the levels from the readings taken on the staff:

(i) Height of instrument method, and
(ii) Rise and fall method.

Height of Instrument Method or H.I. Method
The H.I. method is based on Eq. (5.2). In this method, for a particular setting of the instrument, the H.I. is determined from Eq. (5.1) by taking a B.S. (SA) on a point of known elevation (hA), and the R.L.’s of the other points are then obtained by subtracting the F.S. and I.S. readings (if taken) from the H.I. [Eq. (5.2)]. The method of booking and reducing the levels is explained in Table 5.1.

Table 5.1: Height of instrument method of reducing levels

Station | B.S.     | I.S. | F.S.    | H.I.             | R.L.            | Remarks
A       | SA       |      |         | H.I.1 = hA + SA  | hA              | B.M. (First R.L.)
B       |          | SB   |         |                  | hB = H.I.1 – SB |
C       |          | SC   |         |                  | hC = H.I.1 – SC |
D       | S′D      |      | SD      | H.I.2 = hD + S′D | hD = H.I.1 – SD | C.P.
E       |          |      | SE      |                  | hE = H.I.2 – SE | (Last R.L.)
Σ       | SA + S′D |      | SD + SE |                  |                 |

Check: Σ B.S. – Σ F.S. = Last R.L. – First R.L.
(SA + S′D) – (SD + SE) = hE – hA
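The booking logic of the H.I. method can be sketched in code. This is a minimal illustration, not from the text: the helper name, the ('BS' | 'IS' | 'FS', value) tuple convention, and the readings are all invented, and clean input in observation order is assumed.

```python
def reduce_hi(first_rl, readings):
    """Reduce levels by the height-of-instrument method (Eqs. 5.1 and 5.2).

    `readings` is a list of (kind, value) tuples in observation order, with
    kind 'BS', 'IS' or 'FS'.  A change point is an F.S. followed immediately
    by a B.S. taken on the same staff position.  Returns the reduced levels.
    """
    rls = [first_rl]      # R.L. of the starting B.M.
    hi = None
    last_rl = first_rl
    for kind, value in readings:
        if kind == 'BS':
            hi = last_rl + value          # Eq. (5.1): H.I. = R.L. + B.S.
        else:
            last_rl = hi - value          # Eq. (5.2): R.L. = H.I. - reading
            rls.append(last_rl)
    return rls

# Invented readings (metres) for stations A..E with a change point at D
obs = [('BS', 1.5), ('IS', 1.0), ('IS', 1.2),
       ('FS', 0.8), ('BS', 1.1), ('FS', 0.9)]
rls = reduce_hi(100.0, obs)
print([round(r, 3) for r in rls])   # [100.0, 100.5, 100.3, 100.7, 100.9]

# Arithmetic check: sum of B.S. - sum of F.S. = last R.L. - first R.L.
assert round((1.5 + 1.1) - (0.8 + 0.9), 3) == round(rls[-1] - rls[0], 3)
```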


Rise and Fall Method or R and F Method
The R and F method is based on Eqs. (5.3) and (5.4). The difference in level between a point and its preceding point is found from Eq. (5.3) using the staff readings, as a rise (+) if the point is at a higher elevation than the preceding point, or a fall (–) if at a lower elevation. The R.L. of the point is then computed by adding the rise to, or subtracting the fall from, the elevation of the preceding point using Eq. (5.4). The method is explained in Table 5.2.

Table 5.2: Rise and fall method of reducing levels

Station | B.S.     | I.S. | F.S.    | Rise (+)         | Fall (–)         | R.L.         | Remarks
A       | SA       |      |         |                  |                  | hA           | B.M. (First R.L.)
B       |          | SB   |         | SA – SB = (+)RB  |                  | hB = hA + RB |
C       |          | SC   |         |                  | SB – SC = (–)FC  | hC = hB – FC |
D       | S′D      |      | SD      |                  | SC – SD = (–)FD  | hD = hC – FD | C.P.
E       |          |      | SE      | S′D – SE = (+)RE |                  | hE = hD + RE | (Last R.L.)
Σ       | SA + S′D |      | SD + SE | RB + RE          | FC + FD          |              |

Check: Σ B.S. – Σ F.S. = Σ R – Σ F = Last R.L. – First R.L.
(SA + S′D) – (SD + SE) = (RB + RE) – (FC + FD) = hE – hA
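The rise-and-fall reduction can be sketched the same way. The helper name, the reading convention, and the sample values are invented for illustration, with clean input in observation order assumed.

```python
def reduce_rise_fall(first_rl, readings):
    """Reduce levels by the rise-and-fall method (Eqs. 5.3 and 5.4).

    `readings` is a list of (kind, value) tuples with kind 'BS', 'IS' or
    'FS', in observation order.  Returns (reduced_levels, sum_rise, sum_fall).
    """
    rls = [first_rl]
    prev = None                   # staff reading on the preceding point
    s_rise = s_fall = 0.0
    for kind, value in readings:
        if kind == 'BS':
            prev = value          # new set-up: the B.S. starts the next pair
        else:
            diff = prev - value   # Eq. (5.3): positive rise, negative fall
            if diff >= 0:
                s_rise += diff
            else:
                s_fall -= diff
            rls.append(rls[-1] + diff)   # Eq. (5.4)
            prev = value
    return rls, s_rise, s_fall

# Invented readings (metres): B rises, C and D fall, E rises again
obs = [('BS', 1.5), ('IS', 1.0), ('IS', 1.3),
       ('FS', 1.6), ('BS', 1.1), ('FS', 0.7)]
rls, total_rise, total_fall = reduce_rise_fall(100.0, obs)
print([round(r, 3) for r in rls])   # [100.0, 100.5, 100.2, 99.9, 100.3]

# Arithmetic check: sum of rises - sum of falls = last R.L. - first R.L.
assert round(total_rise - total_fall, 3) == round(rls[-1] - rls[0], 3)
```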

The booking of the readings and computation of the R.L.’s in Tables 5.1 and 5.2 are based on the readings taken as shown in Fig. 5.4. The tables also show the arithmetic checks on the computations available for the two methods of reducing the levels.


Fig. 5.4  Differential levelling operation for a section of ground


Errors in Levelling
Errors in levelling are classified as: (i) personal errors, (ii) instrumental errors, and (iii) errors due to natural causes. Personal errors are errors in sighting, manipulation, reading the staff, and in recording and computations. Instrumental errors are due to imperfect adjustment of the level, a defective level tube, imperfect graduations of the staff, and a shaky tripod. Errors due to the curvature of the earth, atmospheric refraction, wind, and sun are errors due to natural causes.

5.1.2 Indirect or Trigonometric Levelling
Trigonometric levelling involves observing the vertical angle and either the horizontal or the slope distance between the points whose difference in elevation is to be determined. One of the two points must be of known elevation. Fig. 5.5 shows the points A and C between which the difference in level is to be determined. The vertical angle measured at A to C is α. If the elevation of A is hA and the height of the instrument above the ground is hi, then
V = D tan α   …(5.5)
and
hC = hA + hi + V   …(5.6)
hB = hC – h   …(5.7)
where D = the horizontal distance between the points A and C, and h = the height BC. In Fig. 5.5 the vertical angle α is an angle of elevation (c.f., Sec. 4.2.6). If it is an angle of depression, the plus sign of V in Eq. (5.6) is replaced by a minus sign.
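Eqs. (5.5) to (5.7) translate directly into code; the function name and the sample numbers below are invented for illustration.

```python
import math

def trig_levelling(hA, hi, D, alpha_deg, h=0.0):
    """Trigonometric levelling, Eqs. (5.5)-(5.7).

    hA: elevation of the instrument station A; hi: instrument height;
    D: horizontal distance to the sighted point C; alpha_deg: vertical
    angle (+ve elevation, -ve depression); h: height BC of the object
    whose top C was sighted.  Returns (hC, hB).
    """
    V = D * math.tan(math.radians(alpha_deg))   # Eq. (5.5)
    hC = hA + hi + V                            # Eq. (5.6)
    hB = hC - h                                 # Eq. (5.7)
    return hC, hB

# A at 100.000 m, instrument 1.4 m high, D = 200 m, alpha = +5 deg, h = 3 m
hC, hB = trig_levelling(100.0, 1.4, 200.0, 5.0, 3.0)
print(round(hC, 3), round(hB, 3))   # 118.898 115.898
```

Because the tangent of a negative angle is negative, entering a depression angle as a negative alpha_deg produces the minus sign of Eq. (5.6) automatically.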

Fig. 5.5  Trigonometric levelling


5.1.3  Barometric Levelling
Barometric levelling is the method of determining elevations by measurement of atmospheric pressure. The pressure caused by the weight of the column of air above the observer's station decreases with rise in altitude. Assuming isothermal conditions, i.e., that the atmosphere has a constant temperature between points of different altitudes, the following relationship exists between the atmospheric pressure p and the altitude H:

HB – HA = 18402.6 log (pA / pB) [1 + tm / 273]   …(5.8)

where HA, HB = the altitudes of the points A and B, pA, pB = the atmospheric pressures at A and B, expressed in millimetres of mercury, and tm = the mean temperature of the atmosphere at the points A and B in degrees celsius.
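Eq. (5.8) as a one-line function; the pressure and temperature readings in the example are invented:

```python
import math

def barometric_height_diff(pA, pB, tm):
    """Eq. (5.8): altitude difference HB - HA in metres, from pressures
    pA, pB in mm of mercury and mean air temperature tm in deg C."""
    return 18402.6 * math.log10(pA / pB) * (1.0 + tm / 273.0)

# 760 mm of mercury at A, 700 mm at B, mean temperature 15 deg C
print(round(barometric_height_diff(760.0, 700.0, 15.0), 1))   # about 693 m
```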

6 Contouring

6.0 INTRODUCTION
The purpose of a survey is to gather the data necessary for the construction of a graphical portrayal of planimetric and topographic features; this graphical portrayal is a topographical map. The location of the features is referred to as planimetry, and the configuration of the terrain is referred to as topography. The utility of a plan or map is greatly enhanced if the relative positions of the points are represented both horizontally and vertically. There are various methods, such as shading, hachures, spot heights, and contour lines, for representing the vertical positions of points. Of these, contour lines, or contours, are the most widely used for direct representation of the configuration of the terrain and the elevations of points.

6.1 DEFINITIONS
The definitions of contour and some related terms are given below:
Contour: A contour is an imaginary line of constant elevation on the ground surface.
Contour line: The contours are depicted on a map by lines called contour lines.
Contour interval (C.I.): The vertical distance between two successive contours on a map, which is generally uniform throughout the map, is called the contour interval. For example, if the successive contours have the values 50 m, 60 m, and 70 m, the contour interval is 10 m.
Horizontal equivalent: The horizontal distance between two points on successive contours for a given slope is called the horizontal equivalent.
Contour gradient: An imaginary line on the surface of the earth having a constant inclination (slope) to the horizontal is referred to as a contour gradient.

6.2  CONCEPT OF CONTOURS AND CONTOUR GRADIENT
Figure 6.1a shows a hill. Horizontal planes at elevations 100 m, 105 m, 110 m, 115 m, 120 m, 125 m, and 130 m cut the hill at the respective levels. The outlines of the hill at these levels are the respective contours shown in Fig. 6.1b. In Fig. 6.2 the points a, b, c, d, e, and f lie on the contours of 100 m, 105 m, 110 m, 115 m, 120 m, and 125 m, respectively, such that all the distances ab, bc, cd, de, and ef are equal to, say


50 m. Since the contour interval is 5 m, the two ends of each line are at a vertical distance of 5 m and a horizontal distance of 50 m. Therefore, each line has a gradient of 5/50, or 1 in 10. In this case, the horizontal equivalent is 50 m, and the line joining all the points is a contour gradient of 1 in 10.
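The arithmetic of this example can be sketched as follows (the helper name is invented):

```python
def contour_gradient(contour_interval, horizontal_equivalent):
    """Gradient expressed as '1 in n': n units of horizontal distance per
    one unit of rise, for the given contour interval and horizontal
    equivalent."""
    return horizontal_equivalent / contour_interval

# 5 m contour interval over a 50 m horizontal equivalent, as in Fig. 6.2
n = contour_gradient(5.0, 50.0)
print(f"gradient of 1 in {n:.0f}")   # gradient of 1 in 10
```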

Fig. 6.1  Contour lines for a hill

Fig. 6.2  Horizontal equivalent and contour gradient


6.3  CHARACTERISTICS OF CONTOURS
In topographic maps, the configuration of the terrain is represented by drawing contours at a constant vertical distance or uniform contour interval. A knowledge of contour characteristics helps in identifying the topography and the natural features of the area from the map. These characteristics also help in plotting the contours correctly and avoiding mistakes. The following are the characteristics of contours (Fig. 6.3):

1. The ground slope between the contour lines is assumed to be uniform.
2. The direction of the steepest slope at a point on a contour is at right angles to the contour (Fig. 6.3a).
3. Closely spaced contours indicate a steep slope (Fig. 6.3b).
4. Widely spaced contours indicate a moderate slope (Fig. 6.3c).
5. Equally spaced contours depict a uniform slope (Fig. 6.3d).
6. Contours are not shown going through buildings (Fig. 6.3e).
7. Contours crossing a man-made horizontal surface are straight parallel lines as they cross the surface (Fig. 6.3f).


8. Contours of different elevations do not cross each other, since their intersection would have two elevations, except in the case of overhanging cliffs, where the contours overlap each other (Fig. 6.3g).
9. Contours of different elevations cannot unite to form one contour, except in the case of vertical cliffs, where contours of different values meet but do not actually unite (Fig. 6.3h).
10. Contour lines cannot begin or end on the plan, as they are closed loops.
11. A contour line must close on itself, being a closed loop, though not necessarily within the limits of the map.
12. Hills and depressions look the same; they are identified from the contour values. In the case of a hill the contour values increase from the outer to the inner contours, whereas in the case of a depression the values decrease from the outer to the inner contours (Fig. 6.3i).
13. Contours deflect uphill at valley lines and downhill at ridge lines. Contour lines cross a ridge in a U-shape and a valley in a V-shape, at right angles. The concavity in the contour lines is towards higher ground in the case of a ridge and towards lower ground in the case of a valley (Fig. 6.3j).
14. The same contour must appear on both sides of a ridge or valley.
15. Contours do not have sharp turnings.
16. A single contour line cannot lie between two contour lines of higher or lower elevation.


Fig. 6.3  Characteristics of contours

6.4  METHODS OF CONTOURING
The planimetric positions of the points whose elevations are to be determined are first found out. The following two methods can be employed for locating the contours:

1. Direct method, and 2. Indirect method.

6.4.1 Direct Method
In the direct method, the contour to be plotted is actually traced on the ground: only those points are surveyed which happen to fall on a particular contour, and such points are then plotted and joined. The method is slow and tedious, and is therefore employed only for small areas where high accuracy is demanded.

6.4.2 Indirect Method
In this method a sufficient number of points are located and given spot levels, and the contours between the spot levels are interpolated. The selection of points for spot levels is done with due regard to the salient topographic features, e.g., hilltops, ridge lines, beds of streams, etc. Indirect methods are less expensive, less time consuming, and less tedious, and are therefore employed in small-scale surveys. The following approaches can be used for the indirect method of contouring:

(i) Grid method (Fig. 6.4),
(ii) Cross-section method (Fig. 6.5), and
(iii) Radial line method (Fig. 6.6).
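Interpolating a contour between two spot levels relies on the assumption of a uniform slope between them. A minimal sketch (the function name and the sample values are invented):

```python
def interpolate_contour(x1, z1, x2, z2, contour_value):
    """Chainage at which a contour of the given value crosses the line
    between spot levels z1 (at chainage x1) and z2 (at x2), assuming a
    uniform slope between the two spot levels."""
    if not min(z1, z2) <= contour_value <= max(z1, z2):
        raise ValueError("contour does not pass between the two spot levels")
    t = (contour_value - z1) / (z2 - z1)
    return x1 + t * (x2 - x1)

# Spot levels 98.2 m at chainage 0 m and 103.6 m at 30 m: the 100 m
# contour crosses at (100 - 98.2) / (103.6 - 98.2) * 30 = 10 m
print(round(interpolate_contour(0.0, 98.2, 30.0, 103.6, 100.0), 3))   # 10.0
```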




Fig. 6.4  Grid method

Fig. 6.5  Cross-section method

Fig. 6.6  Radial method

6.5  USES OF CONTOURS
Contour maps provide valuable information about the character of the country, whether it is flat, undulating, or mountainous. Such information is used in civil engineering projects for the selection of sites, determination of the catchment area of a drainage basin and the storage capacity of a reservoir, checking intervisibility between points, and the alignment of linear projects, such as roads, canals, etc.

7 Plane-table Surveying

7.0 INTRODUCTION
Plane-table surveying, or simply plane tabling, is a graphical method of survey in which the field observations and plotting proceed simultaneously. This method of surveying is primarily used in the compilation of maps. It is a means of making a manuscript map in the field without the intermediate steps of recording and transcribing field notes. By means of the plane table, points on the ground to which observations are made can be plotted immediately in their correct relative positions on the drawing. Plane tabling is particularly suitable for small-scale surveys where a high degree of precision is not required.

7.1  ADVANTAGES AND DISADVANTAGES
Plane-table surveying has the following advantages and disadvantages:

Advantages

1. Suitable for small-scale surveys.
2. Omission of any detail is avoided, as the full view of the area is before the surveyor.
3. The accuracy of the work can be checked as the work proceeds.
4. Fewer control points are required, as they can be generated by plane tabling as per requirement.
5. Detail plotting and contouring can proceed simultaneously.
6. Errors in recording linear as well as angular measurements do not exist.
7. Useful in areas affected by magnetism, where compass survey cannot be performed.
8. Office work involves only inking, colouring, and finishing.
9. It is less expensive compared to other methods, and becomes faster when sketching is done by estimation.

Disadvantages

1. The accessories are many, heavy, awkward to carry, and likely to be lost in the field.
2. Unsuitable for wet climates and high wind.


3. Not useful for large-scale surveys and accurate work.
4. Unsuitable for wooded areas.
5. It can be used only in open country with clear visibility.

7.2  PRINCIPLE OF PLANE TABLING
The principle of plane tabling is based on the fact that if the plane table is properly oriented, all lines drawn on the drawing are parallel to their respective lines on the ground. In Fig. 7.1, A, B, and C are three ground points whose positions are plotted on the plane table as a, b, and c, respectively. If the plane table is correctly oriented, the triangle ABC will be parallel to the triangle abc on the plane table, and the rays Aa, Bb, and Cc from A, B, and C through a, b, and c, respectively, will meet at a point p, which is the plotted position of the ground point P where the plane table has been set up, called the plane-table station. Now, if any point D is to be plotted, a ray is drawn from p towards D, and the distance PD, reduced to the scale of plotting, is marked as pd on the drawing to locate the position of D as d. Since the points a, b, and c are correctly located relative to their ground positions A, B, and C, respectively, the plotted position d will also be correct relative to the other points.


Fig. 7.1  Principle of plane tabling

7.3  PLANE TABLE AND ACCESSORIES
There are three different types of plane table, but the simplest and most commonly used is the traverse table shown in Fig. 7.2. The plane table has a large number of accessories, which are used for various purposes during plane tabling. Some of them are shown in Fig. 7.3.
Alidade or sight rule: A plain alidade, shown in Fig. 7.4, consists of a metal or wooden rule with two vanes for sighting. It has a ruling edge parallel to the line joining the centres of the vanes, called the fiducial edge. The fiducial edge is used for drawing rays on the plane table parallel to the line of sight. A thread can be tied to the two vanes for sighting points too low or too high from the plane-table station.

Fig. 7.2  Plane table and accessories

Fig. 7.3  Accessory box

Fig. 7.4  Plain alidade

Spirit level: Spirit level is used for levelling the plane table. Plumbing fork: Plumbing fork is used for centering the plane table on the ground station or for transferring the plotted positions of the points onto the ground.


Magnetic compass: The magnetic compass is employed for orienting the table with reference to the magnetic meridian.
Miscellaneous accessories: While using the plane table, some other accessories, such as a pencil, eraser, protractor, set-square, scale, french curves, magnifying glass, calculator, ranging rods, metallic tape, flag poles, tangent clinometer, and clinopole, are required. An umbrella is also carried to protect the plane table from the sun while working, and a waterproof leather cover protects it from rain when not in use.

7.4  DRAWING PAPER
The drawing paper used in plane tabling must be of high quality. It must be well seasoned to prevent undue expansion and contraction, must have a surface with a reasonable amount of tooth or roughness to take pencil lines without undue grooving of the paper, and must be tough enough to stand erasures. For high-accuracy work, plane-table sheets consisting of thin aluminium sheets laminated with paper are used.

7.5  BASIC DEFINITIONS
The following terms are commonly used in plane tabling:
Centering: The process of setting up the plane table over the ground plane-table position is known as centering.
Orientation: This process involves positioning the plane table in such a manner that all the plotted lines on the paper are parallel to the corresponding lines on the ground.
Back sight: The sight taken from a plane-table station to another station whose position has already been plotted on the drawing paper is called the back sight. Back sighting is one of the methods of orienting the plane table.
Fore sight: The sight taken from a plane-table station to another station whose position is yet to be plotted is known as the fore sight. Fore sighting is done to locate the forward station.
Radiation: Radiation is a method of locating points by drawing radial lines from the plane-table station to those points.
Intersection: Points can also be located by the method of intersection. In this method, rays are drawn to the points to be located from two different stations whose positions are already plotted, and the intersection of the respective rays is the position of the points on the plane-table sheet.
Resection: This is a method of locating the plane-table station on the plane table by drawing resectors through other stations and their plotted positions. By the method of resection, the location of the plane-table station and the orientation of the plane table are achieved simultaneously.

7.6  SETTING UP THE PLANE TABLE
The setting up of the plane table is done in the following steps:
Levelling: By this process the plane-table top is made horizontal, using a circular spirit level (for approximate levelling) or a rectangular spirit level placed in two mutually perpendicular positions (for accurate levelling, which is seldom required in small-scale surveying).


Centering: It is the process of setting up the plane table over the ground station such that the plotted position of the station is exactly above the ground station. This is done using the plumbing fork for accurate work, or by dropping a small stone in small-scale surveys.
Orienting: This process involves positioning the table in such a manner that all lines on the plane table are parallel to the corresponding lines on the ground. It is essentially required when more than one station is occupied for plotting the details.

7.7  ORIENTING THE PLANE TABLE
The plane table may be oriented by one of the following methods:
Using a magnetic compass: This method of orientation is the simplest and quickest, but approximate (Fig. 7.5). At the first plane-table station, a line showing the north direction is drawn using a trough compass. At subsequent stations the table is oriented by placing the trough compass along the plotted north line and rotating the plane table till the compass needle points north; the table is then oriented. Since the method utilizes the magnetic compass, the orientation may be affected by local attraction.

Fig. 7.5  Orientation using a magnetic compass

By back sighting: It is a quicker method, but not as accurate as the resection method (Fig. 7.6). At station A, B is plotted as b by fore sighting, and at B the table is oriented by back sighting at A. The only advantage of the method is that it is not affected by local attraction, as no magnetic compass is used.


Fig. 7.6  Orientation by back sighting

By resection: It is the most accurate and a quick method, used by professional surveyors (Fig. 7.7). A and B are already plotted, and C is to be plotted. At A the table is oriented by sighting at B, and a ray ac is drawn towards C. The plane table is then shifted to C, and by putting the alidade along ca, the station A is sighted and the table gets oriented. Now, with the alidade touching b, the station B is sighted and a back ray is drawn. The intersection of the ray ac and this back ray through b is the point c, which is the plotted position of C.


Fig. 7.7  Orientation by resection


7.8  PLANE TABLING METHODS
Plane-table surveying can be carried out by any one of the following methods or their combination:

1. Radiation (for detail plotting),
2. Intersection (for detail plotting),
3. Traversing (for control plotting), and
4. Resection (for orientation of the plane table and locating the plane-table station).

7.8.1 Radiation
From the occupied station, rays are drawn towards the points to be plotted, the distances to the points are measured, and the measured distances, reduced to the scale of plotting, are marked along the respective rays. To plot the object M (Fig. 7.8) from the occupied station A, the distances from A to all the corners of M are measured, and rays are drawn from a to the corners. If the scale of plotting is S, the corresponding distances to scale are plotted on the respective rays, e.g., 1a = (1A)S. The corner points of M are plotted on the respective rays and joined to get the plotted position of M.


Fig. 7.8  Plotting by radiation
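The reduction to scale in 1a = (1A)S can be illustrated numerically; the helper name and the sample numbers are invented:

```python
def plotted_length(ground_distance_m, scale_denominator):
    """Length on the sheet, in centimetres, of a measured ground distance
    (in metres) when the plotting scale is 1 : scale_denominator."""
    return ground_distance_m * 100.0 / scale_denominator

# A corner of M measured 47.5 m from station A, plotted at a scale of 1 : 500
print(plotted_length(47.5, 500))   # 9.5 (cm along the ray from a)
```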

7.8.2 Intersection
This method does not require linear measurements; the details are plotted by the intersection of the respective rays drawn from two stations already plotted on the drawing. In Fig. 7.9, the base AB has been plotted as ab. First, the table is set up at A and oriented by sighting towards B with the alidade placed along ab. Rays are drawn from a towards the selected visible details. The table is then shifted to B and oriented by back sighting to A. Now rays are drawn to all those visible details for which rays were drawn from A. The intersection of the respective rays gives the plotted positions of the points, which are joined to show the details. Sometimes it is not possible to draw rays from the second station for some of the points for which rays have been drawn from the first station. In such cases, rays are drawn to those points from some other station from which they are visible, and the points are then obtained by intersection, completing the shape of the object if it was left incomplete.
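The graphical intersection has a simple coordinate equivalent: each plotted station and a sighting direction define a ray, and solving the two ray equations gives the plotted point. A sketch with invented coordinates:

```python
import math

def intersect_rays(a, dir_a, b, dir_b):
    """Intersection of two rays from plotted stations a and b (2-D points)
    along direction vectors dir_a and dir_b: solves a + s*dir_a = b + t*dir_b."""
    (ax, ay), (ux, uy) = a, dir_a
    (bx, by), (vx, vy) = b, dir_b
    denom = ux * vy - uy * vx          # 2-D cross product of the directions
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel: no unique intersection")
    s = ((bx - ax) * vy - (by - ay) * vx) / denom
    return (ax + s * ux, ay + s * uy)

# Rays from a = (0, 0) at 45 deg and from b = (10, 0) at 135 deg meet at (5, 5)
ra = (math.cos(math.radians(45)), math.sin(math.radians(45)))
rb = (math.cos(math.radians(135)), math.sin(math.radians(135)))
x, y = intersect_rays((0.0, 0.0), ra, (10.0, 0.0), rb)
print(round(x, 6), round(y, 6))   # 5.0 5.0
```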



Fig. 7.9  Plotting by intersection

7.8.3 Traversing
It is a method of providing ground control, and the points established are later used either as plane-table stations or for locating the plane-table position. The ground stations A, B, C, D, and E form a traverse (Fig. 7.10), and plotting such traverses by plane-table survey is known as plane-table traversing. The plane table is set up at A, and rays are drawn to all visible stations (E, B, and C). AB is measured and marked to scale as ab to locate b. The table is then shifted to B and oriented by back sighting to A. From b, rays are drawn to the other visible stations (E, D, and C), and bc is plotted to scale to locate C as c. The table is then set up at station C, and a ray is drawn towards D; cd is plotted to scale to locate D on the drawing as d. As a check, the ray from c towards A must pass through a. Similarly, E is plotted and the traverse ABCDE is completed.

7.8.4 Resection
It is a process of determining the location of the plane-table station by drawing rays, with the table oriented, from stations whose locations have already been plotted. The method of resection is never used for plotting the details.


Fig. 7.10  Plane-table traversing

There are the following four methods of locating the plane-table station by resection:

Resection after Orientation by Back Ray
This has already been explained in Sec. 7.7 (Fig. 7.7). The plane-table station C is plotted as c by drawing a back ray.

Resection after Orientation by Compass
The table is set up at A, and ab is plotted to scale by sighting towards B (Fig. 7.11). The north direction is marked on the drawing using a compass. The table is then shifted to the desired station C and oriented by placing the compass along the line already drawn showing the direction of north. Now two rays are drawn, by sighting A with the alidade touching a and B with the alidade touching b; their intersection c is the desired location of C on the drawing.

Resection after Orientation by Two Points
This method is also known as the two-point problem. In this method, two points whose locations have already been plotted are used to determine the location of the plane-table station. The method is time consuming and not very accurate, and is therefore seldom employed.

Resection after Orientation by Three Points
This method is also known as the three-point problem. Being fast and accurate, it is widely used by professional surveyors. The method is based on the principle of plane tabling. If the plane table is properly oriented at any station P, the plane-table station can be located as p by sighting three points (A, B, and C) through their respective plotted positions a, b, and c (Fig. 7.1). The intersection p of the three resectors Aa, Bb, and Cc through a, b, and c from A, B, and C, respectively, is the desired plotted position of the ground station P.

Trial and Error Method of Solving the Three-point Problem
There are other methods of solving the three-point problem, but the trial and error method is very fast and convenient in the field. The method is based on estimating the location of the ground station on the drawing by applying Lehmann's rules.


Fig. 7.11  Resection after orientation by compass

The triangle ABC joining the three ground points A, B, and C is known as the great triangle. The selected ground station P can be situated either outside the great triangle or inside it (Fig. 7.12). The inside location is preferred and is discussed here. The plane table is set up at the desired location inside ∆ABC and approximately oriented, and the resectors through a, b, and c are drawn. Since the orientation is approximate, the three resectors form a triangle called the triangle of error. Now, applying Lehmann's rules, the location of p is estimated as p′; the alidade is then placed along p′c (considering C as the farthest station), and the table is rotated till C is sighted. If the estimated location p′ is correct, the resectors of A, B, and C will intersect at p′, which is the desired location p of P. If not, the size of the triangle of error will be reduced, and the process is repeated till the triangle of error reduces to a point, which is the desired location of the plane-table station on the drawing.

Lehmann’s Rules Lehmann’s rules are used to estimate the position of the plane-table station on the drawing sheet so that rays can be drawn from that point to locate the details in the area. The following rules are used to estimate the plane-table station P as the plotted position p′ (the final position of p′ will be p) when P is located within the great triangle (Figs. 7.12 and 7.13): Rule-1: If the ground station P is inside the great triangle, the estimated point p′ will be within the triangle of error. Rule-2: The point p′ is chosen such that its perpendicular distances p′1, p′2, and p′3 from the rays Aa, Bb, and Cc, respectively, are proportional to the distances of P from A, B, and C, respectively.

Fig. 7.12  Resection after orientation by three points

Fig. 7.13  Lehmann’s rule

Lehmann’s Rules for Other Situations Rule-3: If the ground station P is outside the great triangle, the estimated point p′ will be outside the triangle of error. Rule-4: The point should be so chosen that it lies to the same side of all three rays. It may be noted that if the point P falls on the circumference of the great circle passing through the points A, B, and C, the solution becomes indeterminate.
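The trial-and-error method is a field procedure. For numerical work, the three-point problem can also be solved analytically; the sketch below uses Tienstra’s formula, a standard analytic resection not covered in this text. The function name and coordinates are illustrative, and the formula as coded assumes the station P lies inside the great triangle. The angles alpha, beta, and gamma are those observed at P subtending the sides BC, CA, and AB, respectively:

```python
import math

def tienstra_resection(A, B, C, alpha, beta, gamma):
    """Analytic three-point resection by Tienstra's formula.

    A, B, C: plane coordinates (x, y) of the three known points.
    alpha, beta, gamma: angles (radians) subtended at the unknown
    station P by the sides BC, CA, AB, respectively (P inside ABC).
    Returns the coordinates (x, y) of P."""
    def vertex_angle(V, P1, P2):
        # interior angle of the great triangle at vertex V
        a1 = math.atan2(P1[1] - V[1], P1[0] - V[0])
        a2 = math.atan2(P2[1] - V[1], P2[0] - V[0])
        d = abs(a1 - a2) % (2 * math.pi)
        return min(d, 2 * math.pi - d)

    ang_A = vertex_angle(A, B, C)
    ang_B = vertex_angle(B, C, A)
    ang_C = vertex_angle(C, A, B)
    # Tienstra weights: w_i = 1 / (cot(vertex angle) - cot(observed angle))
    w1 = 1.0 / (1.0 / math.tan(ang_A) - 1.0 / math.tan(alpha))
    w2 = 1.0 / (1.0 / math.tan(ang_B) - 1.0 / math.tan(beta))
    w3 = 1.0 / (1.0 / math.tan(ang_C) - 1.0 / math.tan(gamma))
    s = w1 + w2 + w3
    x = (w1 * A[0] + w2 * B[0] + w3 * C[0]) / s
    y = (w1 * A[1] + w2 * B[1] + w3 * C[1]) / s
    return x, y
```

Note that, consistent with Rule-4’s caveat, the weights become indeterminate when P lies on the great circle through A, B, and C.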

Practical Utility of Three-point Resection

1. The forward instrument station is not required to be selected in advance before shifting. 2. The plane-table station can be chosen in such a manner that it suits the observer for plotting the details.

3. The observer is not dependent on the previous instrument station as in other methods. 4. The observer only has to ensure that, from the occupied position of the plane table, any three well-defined points already plotted are visible for solving the three-point problem.

7.9  ERRORS IN PLANE TABLING The following are the common sources of errors in plane-table survey:

1. Instrumental errors, 2. Errors of manipulation and sighting, and 3. Errors due to other causes.

Instrumental Errors The following errors arise when the plane table and the alidade are not in perfect adjustment.

1. Plotting errors are introduced if the top surface of the table is not perfectly plane.
2. If the fittings of the table and tripod are loose, the table becomes unstable, causing plotting errors.
3. If the fiducial edge of the alidade is not straight, it will cause plotting errors.
4. If the sight vanes of the alidade are not perpendicular to its base, errors in sighting will be introduced.
5. A sluggish magnetic compass causes errors in orientation done using the compass.
6. A defective level tube results in inaccurate levelling, which in turn causes plotting errors. Accurate levelling of the table is required in large-scale surveys; in small-scale surveys it is not as important.

Errors of Manipulation and Sighting

1. If the plane table is not levelled, the sight vanes will be inclined to the vertical, and there will be errors in the plotted positions of the points.
2. If the table is not centred accurately, there will be errors in plotting. This is not important in small-scale surveys.
3. Inaccurate orientation of the table causes errors in plotting, and thus the plotted map will be in error.
4. If the plane table is not properly clamped, it may rotate between sights, causing errors.
5. Inaccurate bisection causes errors in plotting.
6. If the alidade is not properly pivoted on the point, the rays drawn will be in error.
7. If the tripod is not firmly planted into the ground, it will be unstable and will cause errors.
8. Undue pressure on the plane table, or leaning against it while working, may disturb the table, causing error.

Errors due to other Causes

1. If a good-quality drawing sheet is not used, it may contract or expand due to changes in temperature while working in the field, resulting in plotting errors.
2. The drawing sheet should be properly stretched on the table to avoid plotting errors.
3. Care should be taken in drawing the rays and in the use of the scale while plotting the details, to avoid errors in drawing the rays or in reducing the distances to scale.

8 Control Surveys

8.0 INTRODUCTION The basic principle of surveying to control the errors is “working from the whole to the part”. This requires a framework of control points, which are points whose horizontal and vertical positions are accurately known. Control surveys are conducted to establish the control points in the area to be surveyed. The results of such surveys are the horizontal coordinates (x, y) of the points and their elevations as z-coordinates. The fundamental network of points whose horizontal positions have been accurately determined is called the horizontal control. Horizontal control is generally established by traversing, triangulation, or trilateration. Traversing is most frequently employed, especially for surveys of limited extent and where the points whose positions are desired lie along a devious route. Triangulation or trilateration is generally preferred in hilly country, where numerous intervisible points are available for setting up the instrument and signals on elevated ground. Since triangulation and trilateration are conducted over areas of comparatively large extent, where geodetic considerations have to be taken into account in the computations, only traversing is discussed in this chapter.

8.1 DEFINITIONS Traverse: A traverse consists of a series of straight lines of known length related to each other by known angles between two successive lines (Fig. 8.1). Traverse lines: The straight lines which form a traverse are called the traverse lines (Fig. 8.1). Traverse stations: The extremities of the traverse lines are the traverse stations or control points (Fig. 8.1).

8.2  TYPES OF TRAVERSES There are two basic types of traverses, as follows:

(i) Open traverse, and (ii) Closed traverse.

Fig. 8.1  A traverse

8.2.1  Open Traverse The open traverse originates at a point of known position A and terminates at a point of unknown position D (Fig. 8.2). Open traverses may extend for long distances without any opportunity for checking the accuracy of the ongoing work. An open traverse is particularly useful for providing control in preliminary and construction surveys for roads, pipelines, electricity transmission lines, and the like.

Fig. 8.2  An open traverse

8.2.2  Closed Traverse A closed traverse originates at a point of known position A and closes on another point of known position D (Fig. 8.3). Closed traverses provide computational checks allowing detection of systematic errors in both distance and direction, and are therefore preferred to open traverses.

Fig. 8.3  A closed traverse

57

Control Surveys

If a closed traverse is in the form of a loop, it is called a closed-loop traverse. The closed-loop traverse originates and terminates at a single point of known position (Fig. 8.4). For surveys of small extent, where accuracy requirements are not high, the closed-loop traverse may originate from and terminate at a point of unknown position. This type of traverse permits an internal check on the angular measurements, but detection of systematic errors in the linear measurements, or of an error in the orientation of the traverse, is not possible; it is therefore not recommended for use in major projects.

Fig. 8.4  A closed-loop traverse

8.3  CLASSIFICATION OF TRAVERSE Traverses are classified on the following two bases:

(a) The method of measurement of horizontal angles and (b) The instruments employed.

8.3.1  Based on Methods of Measurement of Horizontal Angles The angle between two successive traverse lines can be measured in the forms discussed in Sec. 4.1, and accordingly a traverse can be

1. Deflection-angle traverse,
2. Angle-to-the-right traverse,
3. Interior-angle traverse, and
4. Azimuth traverse.

8.3.2  Based on Instruments Employed For the measurement of distances and angles various kinds of instruments are employed, and the traverse can be named accordingly:

1. Chain traverse,
2. Compass traverse,
3. Theodolite traverse,
4. Plane-table traverse, and
5. Stadia or tacheometric traverse.

8.4  TRAVERSE PROCEDURE The following steps are required to establish a traverse:

1. Reconnaissance,
2. Selection of traverse stations,
3. Marking of stations,
4. Making linear and angular measurements,
5. Computation of coordinates, and
6. Plotting the traverse.

The preliminary field inspection of the area to be surveyed is known as reconnaissance. During this inspection, various items of information about the area, such as suitable positions for the traverse stations, intervisibility between the traverse stations, the method of survey and instruments to be employed, camping ground, availability of transport, drinking water, food, labour, etc., are collected for planning the survey. In selecting the locations of the traverse stations, the factors to be considered are that the number of stations should be a minimum, the stations should be intervisible as far as possible, the lengths of the traverse lines should be comparable, the stations should be located on firm ground, etc. The selected stations are marked on the ground using wooden pegs for small areas, or by concrete blocks for areas of large extent. The linear and angular measurements are made using appropriate instruments for measuring the lengths of the traverse lines and the angles between them. The traverse computation, explained in the next section, involves computing the coordinates of the traverse stations and then adjusting the errors. For plotting the traverse, a grid at the scale of plotting is drawn on the drawing sheet, and the traverse stations are marked on the grid using their coordinates.

8.5  COMPUTATION OF COORDINATES For the computation of the coordinates of the traverse stations with respect to some common origin, the consecutive coordinates of each traverse station are required. The consecutive coordinates of a point are its coordinates with respect to the preceding point, expressed in terms of departure and latitude. In Fig. 8.5,

Fig. 8.5  Consecutive coordinates

OA, OB, OC, and OD are four traverse lines lying in the four quadrants. The reduced bearings of the lines are NθAE, SθBE, SθCW, and NθDW, and their lengths are lA, lB, lC, and lD, respectively. The consecutive coordinates of a point are defined in terms of the departure as the x-coordinate and the latitude as the y-coordinate. Thus, the consecutive coordinates of the four points A, B, C, and D are (lA sin θA, lA cos θA), (lB sin θB, lB cos θB), (lC sin θC, lC cos θC), and (lD sin θD, lD cos θD), respectively. Figure 8.6 shows a traverse ABCD. To determine the independent coordinates of the traverse stations with respect to a common coordinate system, the consecutive coordinates of each point are computed; knowing the coordinates of one point, the independent coordinates of the other points can then be computed as explained below.

Fig. 8.6  Independent coordinates from consecutive coordinates

Let the coordinates of A be (XA, YA). If the consecutive coordinates of the points B, C, and D are



(DB, LB) = (lB sin θB, lB cos θB) (DC, LC) = (lC sin θC, lC cos θC) (DD, LD) = (lD sin θD, lD cos θD) then the independent coordinates of the points will be

XB = XA + DB = XA + lB sin θB YB = YA + LB = YA + lB cos θB

XC = XB + DC = XB + lC sin θC YC = YB + LC = YB + lC cos θC XD = XC + DD = XC + lD sin θD YD = YC + LD = YC + lD cos θD

The computation of coordinates involves the following steps: 1. Ascertaining the bearing of one of the traverse lines, 2. Running down the bearings of remaining traverse lines, 3. Computation of reduced bearings of traverse lines, 4. Calculation of consecutive coordinates of traverse stations, 5. Calculation of closing error, and 6. Balancing the traverse.
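Steps 4 and the accumulation into independent coordinates can be sketched as follows. This is a minimal sketch assuming whole-circle bearings in degrees, which carry the correct quadrant signs automatically; the function name and data layout are illustrative, not from the text:

```python
import math

def traverse_coordinates(start, legs):
    """Compute the independent coordinates of traverse stations.

    start: (x, y) of the first station.
    legs: list of (length, whole-circle bearing in degrees) per line.
    Departure = l*sin(theta), latitude = l*cos(theta); the independent
    coordinates are running sums of the consecutive coordinates."""
    x, y = start
    stations = [(x, y)]
    for length, bearing in legs:
        t = math.radians(bearing)
        x += length * math.sin(t)   # departure (x-coordinate increment)
        y += length * math.cos(t)   # latitude (y-coordinate increment)
        stations.append((x, y))
    return stations
```

For example, a leg of 100 m on a bearing of 90° advances the station 100 m east with no change in latitude, matching D = l sin θ and L = l cos θ.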

8.6  BALANCING THE TRAVERSE To explain the closing error and balancing a traverse, let us consider a closed-loop traverse shown in Fig. 8.7. A closed traverse must satisfy the following conditions:

Fig. 8.7  Closing error in a closed-loop traverse

ΣD = 0 and ΣL = 0 …(8.1)

If ΣD ≠ 0 and ΣL ≠ 0, the closing error is

e = √[(ΣD)² + (ΣL)²] …(8.2)

The operation of satisfying the conditions given by Eq. (8.1) for a closed traverse is termed balancing the traverse. There are several methods of balancing a traverse, but the most commonly employed are: (i) Bowditch’s method and (ii) the graphical method.
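Equation (8.2) in code form is a one-liner; the function name in this minimal sketch is illustrative:

```python
import math

def closing_error(departures, latitudes):
    """Linear closing error of a closed traverse, Eq. (8.2):
    e = sqrt((sum D)^2 + (sum L)^2)."""
    return math.hypot(sum(departures), sum(latitudes))
```

For a perfectly measured closed traverse both sums vanish and e = 0; any residual e is then distributed by one of the balancing methods below.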

8.6.1  Bowditch’s Method This method, also known as the compass rule and attributed to Nathaniel Bowditch, is based on the assumption that the errors in linear measurements are proportional to √l and the errors in angular measurements are inversely proportional to √l, where l is the length of the traverse line. The method is used when the linear and angular measurements are of equal precision. In this method the total error in departure ΣD and in latitude ΣL is distributed in proportion to the lengths of the traverse lines. Therefore,

CD = ΣD (l/Σl) …(8.3)

CL = ΣL (l/Σl) …(8.4)

where CD, CL = the corrections to the departure and latitude of any traverse line, l = the length of that traverse line, and Σl = the total length of the traverse, i.e., its perimeter.
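The corrections of Eqs. (8.3) and (8.4) can be sketched as below; each correction is applied with sign opposite to the total error, so that the adjusted departures and latitudes sum to zero. The function name and data layout are illustrative:

```python
def bowditch_adjust(legs):
    """Balance a closed-loop traverse by the compass (Bowditch) rule.

    legs: list of (length, departure, latitude) per traverse line.
    Returns a list of (adjusted departure, adjusted latitude); each
    leg's correction is proportional to its length / perimeter."""
    perimeter = sum(length for length, _, _ in legs)
    sum_d = sum(dep for _, dep, _ in legs)
    sum_l = sum(lat for _, _, lat in legs)
    adjusted = []
    for length, dep, lat in legs:
        share = length / perimeter
        adjusted.append((dep - sum_d * share,   # Eq. (8.3) applied
                         lat - sum_l * share))  # Eq. (8.4) applied
    return adjusted
```

After adjustment the conditions of Eq. (8.1) are satisfied exactly (up to rounding), which is precisely what balancing the traverse means.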

8.6.2  Graphical Method For rough surveys, where the angular measurements are of an inferior degree of accuracy, Bowditch’s rule is applied graphically, without theoretical calculations. Before applying this method, the angles or bearings of the traverse lines must be adjusted to satisfy the geometric conditions of the traverse.

Fig. 8.8  Graphical method of balancing a traverse

Let AB′C′D′E′A′ be an unbalanced traverse having the closing error e, as shown in Fig. 8.8a. Draw a line AA′ equal to the perimeter of the traverse at some suitable scale, and mark the lengths of the traverse lines at the same scale in sequence as AB′, B′C′, C′D′, D′E′, and E′A′, as shown in Fig. 8.8b. Now draw a line A′A′′, equal in length to the closing error e and parallel to it, at the end A′ of AA′. Join A and A′′, and draw the lines B′B, C′C, D′D, and E′E parallel to A′A′′. The required corrections for the traverse AB′C′D′E′ at the points B′, C′, D′, and E′ are B′B, C′C, D′D, and E′E, respectively. Draw lines of these lengths, parallel to them and in the same direction, from the respective points of the traverse AB′C′D′E′, as shown in Fig. 8.8a in brick-red colour, and join the ends to get the adjusted traverse ABCDEA shown in green colour.


SECTION III

PHOTOGRAMMETRY


9 Photogrammetry

9.0 INTRODUCTION There is no universally accepted definition of photogrammetry; the definition given here captures its most important notion. Photogrammetry is the science of making measurements on photographs. When photogrammetry is combined with the interpretation of the photographs, it may be described as the science of obtaining reliable information about the properties of objects, features, or phenomena on the earth’s surface, without being in physical contact with the objects, by making measurements and interpretations. The name photogrammetry is derived from the Greek words phos or phot, meaning light; gramma, meaning something drawn or written; and metron, meaning measure.

9.1  TYPES OF PHOTOGRAMMETRY The two broad categories involved in photogrammetry are:

1. Metric or quantitative work, and 2. Interpretative or qualitative work.

The technique of photogrammetry is employed for various applications in different modes, using it for measurements and/or interpretation, with the camera or sensor in space, in the air, or on the ground. Metric photogrammetry involves all quantitative work, such as the determination of ground positions, distances, elevations, areas, and volumes, and the preparation of various types of maps. The interpretative work of photogrammetry is identifying objects and assessing their significance; it is classically called photo-interpretation. In recent years, records from other imaging systems, such as infrared sensors and radar, have been used for interpretative purposes, and the more general term remote sensing is used (refer to Section IV). When photogrammetry deals with extraterrestrial photographs and images, where the camera is fixed on the earth, mounted onboard an artificial satellite, or located on the moon or a planet, it is called space photogrammetry. If the photographs are obtained using a camera onboard an aircraft or balloon, the photogrammetry is known as aerial photogrammetry, and if the camera station is on the ground and its axis is horizontal or nearly horizontal, it is called terrestrial photogrammetry. Another type of photogrammetry is close-range photogrammetry, which involves applications where the camera is relatively close to the object to be photographed.

A more sophisticated technique, called the stereophotogrammetry, involves the use of stereopairs of photographs for estimating the three-dimensional coordinates of points and for better interpretation of the objects/features by viewing three-dimensional model of the objects or area.

9.2  APPLICATIONS OF PHOTOGRAMMETRY Photogrammetry has wide application in different fields, such as topographic mapping, architecture, engineering, manufacturing, quality control, police investigation, and geology. It is used by archaeologists to quickly produce plans of large or complex sites, and by meteorologists to determine the actual wind speed of a tornado where objective weather data cannot be obtained. It is also used to combine live action with computer-generated imagery in movie post-production. Photogrammetry is commonly employed in collision engineering, especially with automobiles. When litigation for an accident occurs and engineers need to determine the exact deformation present in the vehicle, it is common for several years to have passed, and the only evidence that remains is the accident-scene photographs taken by the police. Photogrammetry is used to determine how much the car in question was deformed, which relates to the amount of energy required to produce that deformation. The energy can then be used to determine important information about the crash, such as the velocity at the time of impact. Photogrammetry can also be employed in tailoring, where body measurements can be taken from photographs, avoiding physical contact with the customer. Close-range photogrammetry finds use in the biomedical field for accurate measurements of biological structures having regular geometric shapes. Photogrammetry also finds applications in industry for automobile construction, mining engineering, machine construction, the study of objects in motion, shipbuilding, structures and buildings, traffic engineering, etc. Recent advances in computer technology and a growing range of stereometric sensing techniques have helped to expose the potential of biostereometrics. As a result, the use of photogrammetry is growing in such fields as aerospace, medicine, anthropometry, child growth and development, dentistry, marine biology, neurology, orthodontics, orthopedics, pediatrics, physiology, prosthetics, radiology, and zoology, to mention a few. In recent years, photogrammetric data generation for Geographic Information Systems (refer to Section V) has become one of the most important applications of aerial photogrammetry.

9.3  MERITS AND DEMERITS OF PHOTOGRAMMETRY Photogrammetry has the following merits and demerits:

Merits

1. It is a very quick method of surveying in which ground observations are almost totally eliminated.
2. It is a very accurate method if correct interpretations of the photographs are made.
3. It also provides the means to develop a contour map.

Demerits

1. This method requires fair weather conditions. 2. The instruments are very expensive and staff should be highly qualified and experienced to make full use of this method.

9.4  LIMITATION OF PHOTOGRAMMETRY IN LAND SURVEYING

Photogrammetry is particularly suitable for topographical or engineering surveys, and also for projects demanding higher accuracy. Photogrammetry is rather unsuitable for dense forest and flat sands, owing to the difficulty of identifying points on the pair of photographs. It is also unsuitable for flat terrain where contour plans are required, because the interpretation of contours becomes difficult in the absence of spirit-levelled heights. Considering these factors, it is evident that photogrammetry may be most suitably employed for mountainous and hilly terrain with little vegetation. In the following chapters of this section, since the entire field of photogrammetry cannot be covered, only the elementary principles of aerial photogrammetry applied to surveying are discussed.

10 Properties of Aerial Photography

10.0 INTRODUCTION Different types of photographs are obtained using a camera in different modes and positions. The photographs used in photogrammetry may be broadly classified into two types: (a) terrestrial photographs and (b) aerial photographs. When the photographs are taken with a phototheodolite, with the camera station on the ground and the camera axis horizontal or nearly horizontal, the photographs so obtained are called terrestrial photographs. If the camera station is situated in the air and the aerial camera points towards the ground, the photographs are called aerial photographs. The following factors determine the quality of aerial photography:

(i) Design and quality of the lens system,
(ii) Manufacture of the camera,
(iii) Photographic material,
(iv) Development process,
(v) Weather conditions, and
(vi) Sun angle during the photo flight.

10.1  AERIAL PHOTOGRAPHS Aerial photographs are classified depending on the inclination of the camera axis with the vertical when a single camera is used, and on the use of more than one camera simultaneously. They are:

(i) Vertical photograph,
(ii) Oblique photograph,
(iii) Convergent photographs, and
(iv) Trimetrogon photograph.

The oblique photographs are further classified as:

(i) Low-oblique photograph, and (ii) High-oblique photograph.

Vertical Photographs

Vertical photographs (Fig. 10.1) are those taken when the optical axis of the camera is vertical or nearly vertical. A truly vertical photograph, in which the camera axis coincides with the plumb line through the exposure station, hardly exists in reality.

Fig. 10.1  A truly vertical photograph

Since truly vertical photography is not possible owing to the flying limitations of the aircraft, a near-vertical photograph, in which the inclination of the camera axis with the vertical, known as tilt, is not more than 3°, is considered to be a vertical photograph (Fig. 10.2). A truly vertical photograph closely resembles a map. Vertical photographs are utilized for the compilation of topographical and engineering maps on various scales.

Fig. 10.2  Vertical photography

Oblique Photographs Oblique photographs (Fig. 10.3) are obtained when the optical axis of the camera is intentionally inclined from the vertical, the inclination being more than 3°. An oblique photograph may or may not show the horizon, depending on the amount of tilt. Oblique photographs in which the horizon does not appear are called low-oblique photographs (Fig. 10.3a), and those in which the horizon appears are high-oblique photographs (Fig. 10.3b). Low-oblique photographs are generally used to compile reconnaissance maps of inaccessible areas. They are also used for some special purposes, such as estimating water yield from snow-melt. High-oblique photographs are sometimes used for military intelligence.

(a) Low-oblique

(b) High-oblique Fig. 10.3  Oblique photography

Convergent Photographs Convergent photographs are photographs taken with two cameras exposed simultaneously at successive exposure stations with their axes tilted at a fixed inclination from the vertical in the direction of flight, so that the forward exposure of the first station forms a stereopair with the backward exposure of the next station.

Trimetrogon Photographs A trimetrogon photograph is a combination of a vertical and two oblique photographs exposed simultaneously from the air. Three cameras are used simultaneously, of which the central camera is vertical and the other two are adjusted to oblique positions. The cameras are so fixed that the entire area from the right horizon to the left horizon is photographed.

10.2  AERIAL PHOTOGRAMMETRY

Mapping of a large area using aerial photogrammetry is faster and more economical than any other method if the aerial photographs and plotting instruments are already available. Accurate topographic maps can be prepared on various scales ranging from 1:500 to 1:1,000,000, with contours accurate up to a contour interval of 50 cm. The topographic maps are prepared using vertical aerial photographs. Topographic mapping using aerial photographs consists of the following stages:

(i) Aerial photography,
(ii) Providing ground control, and
(iii) Planimetric mapping.

Aerial photography conducted to acquire aerial photographs of large areas involves heavy expenditure, and is therefore undertaken by government organizations or large private companies. The ground control is provided by including in the photographs points whose coordinates are already known. The extension of control is done by aerial triangulation methods. The planimetric maps are produced by direct tracing or by employing simple photogrammetric instruments, such as the vertical sketchmaster, reflecting projector, etc.

10.2.1  Photocoordinate System To provide reference lines for the measurement of image coordinates or distances, four marks, called the fiducial marks, are located in the camera image plane. These are located either in the middle of the sides of the focal plane or in its corners (Fig. 10.4). The lines joining the opposite fiducial marks are called the fiducial lines, and the intersection of the fiducial lines is called the centre of collimation. To measure the coordinates of the images appearing on the photograph, a cartesian coordinate system, called the photocoordinate system, is used (Fig. 10.5). The x-axis is taken along the direction of flight of the aircraft, obtained by joining the opposite fiducial marks lying in the direction of flight. The y-axis is taken in a direction perpendicular to the x-axis, joining the remaining two fiducial marks.

Fig. 10.4  Fiducial marks

Fig. 10.5  Photocoordinate system

The intersection of the two axes, O, is the origin of the photocoordinate system. If the measured photocoordinates of two image points a and b are (xa, ya) and (xb, yb), respectively, then the distance between the two points on the photograph is given by

ab = √[(xa − xb)² + (ya − yb)²] …(10.1)

and the angle aOb is given by

∠aOb = tan⁻¹(ya/xa) + tan⁻¹(yb/xb) …(10.2)

Photographic measurements are usually made on positive prints on paper, film, or glass. For the measurement of coordinates, simple scales are commonly used for low-accuracy work. However, when higher accuracy is essential, more accurate scales, such as a metal microrule or a glass scale, should be used.
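The distance and angle computations above can be sketched numerically as follows. The function names are illustrative; the angle is computed from the directions of the two rays with atan2, which remains correct in every quadrant (the textbook form of Eq. (10.2) assumes the points lie on opposite sides of the y-axis):

```python
import math

def photo_distance(a, b):
    """Distance between two image points from their photocoordinates,
    Eq. (10.1)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def angle_at_origin(a, b):
    """Angle aOb at the photocoordinate origin O, in degrees.

    Computed as the (unsigned) difference between the directions of
    the rays Oa and Ob."""
    ta = math.atan2(a[1], a[0])
    tb = math.atan2(b[1], b[0])
    d = abs(ta - tb) % (2 * math.pi)
    return math.degrees(min(d, 2 * math.pi - d))
```

For example, for a = (1, 1) and b = (−1, 1), both tangent terms of Eq. (10.2) equal 45° and the angle aOb is 90°, which the atan2 form reproduces.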

10.2.2  Definitions of Technical Terms

Fig. 10.6 shows a diapositive in a near-vertical position. The following commonly used terms of aerial photography apply to this position of the diapositive: Perspective centre (O): The real or imaginary point of origin of the bundle of perspective rays is known as the perspective centre. Focal length (f): The distance from the front nodal point of the lens to the plane of the photograph, or the distance of the image plane from the rear nodal point, is known as the focal length. Exposure station (O): The exposure station is the point in the air occupied by the front nodal point of the camera lens at the instant of exposure; it coincides with the perspective centre. Flying height (H): The flying height is the elevation of the exposure station above mean sea level.

Fig. 10.6  Tilted photograph in diapositive position and ground control coordinate system

Principal point (p and P): The principal point is a point where a perpendicular dropped from the front nodal point of the camera lens strikes the photograph. It is also known as photo-principal point (p), and coincides with the intersection of x and y axes. The corresponding point on the ground where the plate perpendicular strikes the ground is called the ground-principal point (P). Nadir point or Plumb point (n and N ): The point where a plumb line dropped from the front nodal point of the camera lens strikes the photograph is called the photo-nadir point (n). The point on the ground vertically beneath the exposure station is called the ground-nadir point (N). Isocentre (i and I ): The point where the bisector of the tilt angle strikes the photograph is called the photo isocentre (i). The point where the bisector strikes the ground is the ground isocentre (I). Camera axis (Op): The line joining the front nodal point of the camera lens and the photo-principal point is the camera axis.

Tilt or tilt angle (t): The angle between the vertical and the camera axis is called the tilt. Swing (s): The horizontal angle measured clockwise in the plane of the photograph from the +y-axis to the nadir point is called the swing. Principal line (pl): The intersection with the photograph of the plane defined by the vertical through the perspective centre and the camera axis is the principal line. This plane, which defines the principal line, is called the principal plane. Azimuth of the principal plane (α): The azimuth of the principal plane, also sometimes known as the azimuth of the photograph, is the clockwise horizontal angle measured about the ground-nadir point from the ground survey north meridian to the principal plane of the photograph. Isometric parallel (ip): It lies in the plane of the photograph and is perpendicular to the principal line at the isocentre. True horizon line: It is defined as the intersection with the photograph, or its extension, of a horizontal plane through the perspective centre. Horizon point: It is the intersection of the principal line with the true horizon line.

10.2.3  Geometric Properties of Aerial Photograph An aerial photograph has the following geometric properties:

(a) The photo-principal point lies on the principal line.
(b) The principal line is oriented in the direction of steepest inclination of the tilted photograph.
(c) The photo isocentre lies on the principal line, at the point where the bisector of the tilt angle meets the photograph.
(d) The isometric parallel is in the plane of the photograph and is perpendicular to the principal line at the isocentre.
(e) The true horizon line is the intersection of a horizontal plane through the perspective centre with the photograph or its extension.
(f) The horizon point is the intersection of the principal line with the true horizon line.

10.2.4  Scale of a Vertical Photograph
Map scale is defined as the ratio of a map distance to the corresponding distance on the ground. In a similar manner, the scale of a photograph is the ratio of a distance on the photo to the corresponding distance on the ground. A map, being an orthographic projection, has uniform scale everywhere on the map; a photograph, on the other hand, being a perspective projection, does not have uniform scale. The scale on a photograph varies from point to point with change in elevation. Scales may be expressed as unit equivalents, as a dimensionless representative fraction, or as a dimensionless ratio. If, for example, 1 cm on a map or photo represents 100 m on the ground, the scale may be expressed as

(a) Unit equivalents: 1 cm = 100 m
(b) Dimensionless representative fraction: 1/10000
(c) Dimensionless ratio: 1 : 10000


Properties of Aerial Photography

Scale of a Vertical Photograph over Flat Terrain
Fig. 10.7 shows a vertical photograph taken over a flat terrain. The scale of a vertical photograph over flat terrain is the ratio of the photo distance ab to the ground distance AB between the points A and B. Thus, if the scale is S then

S = ab/AB …(10.3)

From similar triangles Oab and OAB, we get

ab/AB = op/OP = f/H′

or

S = f/H′ …(10.4)

where f = the focal length of the aerial camera, and H′ = the flying height above the ground.

Fig. 10.7  A vertical photograph taken over a flat terrain

If the flying height above the datum is H and all the ground points have been projected onto the datum, then the scale is called the datum scale. Therefore,

S = f/H …(10.5)

Scale of a Vertical Photograph over Variable Terrain
If the terrain varies in elevation, the object distance H′ from O in the denominator of Eq. (10.4) varies with the elevation of the points. In Fig. 10.8, this distance for the point A having the elevation hA is (H − hA), and for the point B having the elevation hB is (H − hB). The scale, therefore, varies from point to point depending upon the elevation of the points. If the distance of the image a of A on the photograph from the principal point p is pa, the scale at the point A is given by

SA = pa/P0A

From the similar triangles Opa and OP0A, we have

pa/P0A = f/(H − hA)

or

SA = f/(H − hA) …(10.6)

Similarly, at the point B, we have

SB = f/(H − hB)


Fig. 10.8  A vertical photograph taken over variable terrain

In general, at any point having elevation h, the scale is given by

S = f/(H − h) …(10.7)

where H = the flying height of the aircraft above the datum. The scale given by Eq. (10.7) is called the point scale.

Average Scale
To define the overall mean scale of a vertical photograph taken over variable terrain, it is often convenient and desirable to use the average scale of the photograph. Average scale is the scale at the average elevation of the terrain covered by a particular photograph, and is expressed as

Sav = f/(H − hav) …(10.8)

where hav = the average elevation of the terrain covered by the photograph.
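As a quick numerical sketch of the point scale and average scale of Eqs. (10.7) and (10.8) — the focal length, flying height, and elevations below are assumed values for illustration, not taken from the text:

```python
def point_scale(f, H, h):
    """Eq. (10.7): point scale S = f / (H - h), all lengths in the same units."""
    return f / (H - h)

f = 0.152      # assumed focal length, 152 mm (in metres)
H = 1520.0     # assumed flying height above datum, m

datum_scale = point_scale(f, H, h=0.0)       # Eq. (10.5): 1 : 10000
avg_scale   = point_scale(f, H, h=200.0)     # Eq. (10.8) with hav = 200 m

print(1 / datum_scale)   # scale denominator at the datum
print(1 / avg_scale)     # smaller denominator (larger scale) on higher ground
```

Note that the higher the terrain, the smaller the denominator and the larger the photo scale, exactly as Eq. (10.7) implies.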

10.2.5  Ground Coordinates from a Vertical Photograph
If the ground coordinates of the point A are (XA, YA) and its photo coordinates are (xa, ya), then by replacing pa by xa and P0A by XA in Eq. (10.6), we get

xa/XA = f/(H − hA)

or

XA = (H − hA) xa / f

Similarly, the y-coordinate will be

YA = (H − hA) ya / f

In general, the ground coordinates are given by

X = (H − h) x / f  and  Y = (H − h) y / f …(10.9)

The ground distance AB between two points A and B having the coordinates (XA, YA) and (XB, YB), respectively, can be determined as below:

AB = √[(XA − XB)² + (YA − YB)²] …(10.10)
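Eqs. (10.9) and (10.10) can be sketched in a few lines of code; the focal length, flying height, photo coordinates, and elevations below are all assumed values for illustration:

```python
from math import hypot

def ground_xy(x, y, f, H, h):
    """Eq. (10.9): ground coordinates from photo coordinates of a point at elevation h."""
    k = (H - h) / f          # scale denominator at the point's elevation
    return k * x, k * y

f, H = 0.152, 1520.0                              # assumed camera/flight values (m)
XA, YA = ground_xy(0.040, 0.030, f, H, h=200.0)   # photo coords of a (m)
XB, YB = ground_xy(-0.020, 0.010, f, H, h=150.0)  # photo coords of b (m)

AB = hypot(XA - XB, YA - YB)                      # Eq. (10.10): ground distance
print(AB)
```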

10.2.6  Relief Displacement on a Vertical Photograph
The effect of relief not only causes a change in scale but also causes image displacement. Fig. 10.9 illustrates the displacement of images appearing on a photograph. Suppose point T is the top of a building and point B the bottom. On a map, both points have identical (X, Y) coordinates; on the photograph, however, they are imaged at different positions, namely at t and b, at distances rt and rb, respectively, from the principal point p. The distance dr between the two photo points is called relief displacement because it is caused by the elevation difference Δh between T and B. The expression for evaluating relief displacement may be obtained as below.

Fig. 10.9  Relief displacement on a vertical photograph

From the similar triangles Otp and OTP0, we have

rt/R = f/(H − Δh)

or

f R = rt (H − Δh) …(10.11)

From the similar triangles Obp and OBP, we have

rb/R = f/H

or

f R = rb H …(10.12)

Equating the two values of (f R) from Eqs. (10.11) and (10.12), we get

rt (H − Δh) = rb H

By rearranging the terms in the above equation and taking rt − rb = dr, we get

dr = rt Δh / H …(10.13)

On examination of Eq. (10.13) for relief displacement, the following conclusions may be drawn:

(a) Relief displacement increases with increase in radial distance to the image.
(b) Relief displacement increases with increase in elevation of the point above the datum.
(c) Relief displacement decreases with increase in flying height.
(d) Relief displacement occurs radially from the principal point.

Relief displacement often causes straight roads, fence lines, etc., on rolling ground to appear crooked on a vertical photograph.
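The behaviour described by conclusions (a)–(c) above follows directly from Eq. (10.13); the following sketch uses assumed numbers (an 80 mm radial distance, a 50 m high object, a 1500 m flying height):

```python
def relief_displacement(r_t, dh, H):
    """Eq. (10.13): dr = r_t * Δh / H.
    r_t is in photo units (m on the photo); dh and H are in ground units (m)."""
    return r_t * dh / H

dr = relief_displacement(r_t=0.080, dh=50.0, H=1500.0)
print(dr)   # displacement on the photo, in metres (a few millimetres here)

# conclusions (a)-(c): dr grows with r_t and dh, shrinks with H
assert relief_displacement(0.100, 50.0, 1500.0) > dr
assert relief_displacement(0.080, 80.0, 1500.0) > dr
assert relief_displacement(0.080, 50.0, 3000.0) < dr
```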

10.3  FLIGHT PLANNING
Successful execution of any aerial photogrammetric project depends upon good-quality photography. Aerial photography is executed based on the requirements of the client, such as the required map scale, contours, and accuracy of the deliverables. To achieve these requirements it is necessary to plan the flight of the aircraft accordingly. A flight plan generally consists of (i) a flight map which shows where the photos are to be taken, and (ii) specifications which outline how to take them, including specific requirements such as camera and film requirements, scale of photography, flying height, end lap, side lap, number of photographs, number of strips to cover the area to be photographed, exposure interval, etc. Flight planning is one of the most important operations in the overall photogrammetric project. Failure to obtain successful photography on a flight mission not only necessitates costly reflights but in all probability will also cause long and expensive delays on the project for which the photos were ordered.

10.3.1  Overlaps
Vertical photography is usually done along flight strips of suitable width, with some common coverage, or overlap, between successive photographs. The common coverage of the photographs in the direction of flight, or photo strip, is called the end lap, longitudinal overlap, or forward overlap (Fig. 10.10). The common coverage between the photographs of two adjacent strips is called the side lap or lateral overlap (Fig. 10.11). The end lap is provided in the photographs for stereoscopic coverage of the area so that a three-dimensional model of the area photographed can be generated. To obtain stereoscopic coverage of the area, the absolute minimum end lap is 50%. However, in order to prevent gaps from occurring in the stereoscopic coverage due to crab, tilt, flight height variations, and terrain variations, it is recommended that the end lap should normally be between 55% and 65%. Side lap is required in aerial photography to prevent gaps from occurring between flight strips as a result of drift, crab, tilt, flight height variations, and terrain variations. Drift is the term applied to a failure of the pilot to fly along the planned flight lines; it is often caused by strong winds. The side lap generally varies from 25% to 30%.


If G represents the dimension of the square of ground covered by a single photograph, and B is the distance between two successive exposure stations, called the air base, the amount of end lap in percent is

E% = ((G − B)/G) × 100 …(10.14)

where E% = the percent end lap.

Fig. 10.10  End lap in successive photographs along the flight direction

Fig. 10.11  Side lap in photographs of adjacent strips

If W is the spacing between adjacent flight lines, the amount of side lap in percent is

S% = ((G − W)/G) × 100 …(10.15)

where S% = the percent side lap.
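Eqs. (10.14) and (10.15) are straightforward to evaluate; the ground coverage, air base, and line spacing below are assumed values chosen to match typical 60% end lap and 30% side lap:

```python
def end_lap_percent(G, B):
    """Eq. (10.14): E% = (G - B) / G * 100, with air base B and coverage G in metres."""
    return (G - B) / G * 100.0

def side_lap_percent(G, W):
    """Eq. (10.15): S% = (G - W) / G * 100, with flight-line spacing W in metres."""
    return (G - W) / G * 100.0

G = 2300.0   # assumed square ground coverage of one photo, m
print(end_lap_percent(G, B=920.0))     # ≈ 60 percent end lap
print(side_lap_percent(G, W=1610.0))   # ≈ 30 percent side lap
```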


10.3.2  Computation of Flight Plan
Figure 10.12 shows a flight plan illustrating how an aircraft takes the photographs in various strips to cover the entire area while maintaining the specified end lap and side lap in the photographs.

Fig. 10.12  A flight plan

Figure 10.13 shows a pair of successive photographs having the specified end lap and side lap. Let the various quantities for the computation of the flight plan be
l = the length of the photograph in the direction of flight,
w = the width of the photograph normal to the direction of flight,
le = the end lap between successive photographs (as a fraction),
ls = the side lap between successive flight strips (as a fraction),
L = the net ground distance covered by each photograph in the direction of flight,
W = the net ground distance covered by each photograph in the transverse direction,
A = the total area to be surveyed,
a = the net area covered by each photograph,
S = the scale of the photograph (f/H),
L0 = the length of the area to be photographed, and
W0 = the width of the area to be photographed.

The scale S of the photograph may be written as

S = (1 − le) l / L

or

L = (1 − le) l / S …(10.16)


Also

S = (1 − ls) w / W

or

W = (1 − ls) w / S …(10.17)

The net area covered by each photograph is

a = L W …(10.18)

Substituting the values of L and W from Eqs. (10.16) and (10.17) in Eq. (10.18), we get

a = (1 − le)(1 − ls) l w / S²

Therefore, the required number of photographs is

N = A/a = A S² / [l w (1 − le)(1 − ls)] …(10.19)

The number of photographs in each strip is given by

N1 = L0/L + 1 …(10.20)

and the number of strips is given by

N2 = W0/W + 1 …(10.21)

The actual spacing between the strips is

d = W0/(N2 − 1) …(10.22)


Fig. 10.13  Area covered by overlapping photographs


The total number of photographs for the entire area is

N′ = N1 × N2 …(10.23)

The exposure interval is given by

I = Ground distance between successive exposures (L) / Ground speed of aircraft (V) …(10.24)

The flying height is determined from the following consideration:

H = C × contour interval …(10.25)

where C is a factor which varies from 500 to 1500 depending upon the conditions of the map-compilation process. Software is now available which can calculate the various flight-planning quantities and also produce the flight plan for the area to be photographed.
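The flight-plan computations of Eqs. (10.16)–(10.24) can be collected into one short routine. This is an illustrative sketch: all input values are assumed, and the photograph and strip counts are rounded up to whole numbers before adding 1, whereas the text states the formulas without rounding:

```python
from math import ceil

def flight_plan(l, w, le, ls, S, L0, W0, V):
    """Sketch of the flight-plan computation, Eqs. (10.16)-(10.24).
    l, w: photo length/width (m); le, ls: end/side lap as fractions;
    S: photo scale as a fraction (f/H); L0, W0: area length/width (m);
    V: ground speed of the aircraft (m/s)."""
    L = (1 - le) * l / S           # Eq. (10.16): net ground advance per photo
    W = (1 - ls) * w / S           # Eq. (10.17): net ground gain per strip
    N1 = ceil(L0 / L) + 1          # Eq. (10.20): photos per strip (rounded up)
    N2 = ceil(W0 / W) + 1          # Eq. (10.21): number of strips (rounded up)
    d = W0 / (N2 - 1)              # Eq. (10.22): actual strip spacing
    N = N1 * N2                    # Eq. (10.23): total photographs
    I = L / V                      # Eq. (10.24): exposure interval (s)
    return {"L": L, "W": W, "N1": N1, "N2": N2, "d": d, "N": N, "I": I}

# assumed example: 23 cm photo, 60% end lap, 30% side lap, scale 1:10000,
# 10 km x 6.6 km block, 50 m/s ground speed
plan = flight_plan(l=0.23, w=0.23, le=0.6, ls=0.3, S=1/10000,
                   L0=10000.0, W0=6600.0, V=50.0)
print(plan)
```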

11 Stereophotogrammetry

11.0 INTRODUCTION
Stereophotogrammetry is the branch of photogrammetry that studies the geometric properties of stereopairs and the methods of determining the dimensions, shapes, and spatial positions of objects from images on stereopairs. It is a more sophisticated technique which involves estimating the three-dimensional coordinates of points on an object. These are determined from measurements made in two or more photographic images taken from different positions. The stereoscopic viewing of stereopairs, in which a three-dimensional model of the terrain is formed, allows measurement of the elevations of points and the heights of objects, and permits more accurate interpretation of the objects or phenomena appearing in the photographs than interpretation using a single photograph.

11.1  STEREOSCOPIC VISION AND DEPTH PERCEPTION
Methods of judging the depth of objects from the observer may be stereoscopic or monoscopic. Persons with normal vision, i.e., capable of viewing with both eyes simultaneously, are said to have binocular vision, and perception of depth through binocular vision is called stereoscopic viewing. Monocular vision is the term applied to viewing with only one eye, and methods of judging distances with one eye are termed monoscopic. With binocular vision, an object is viewed simultaneously by both eyes, and the rays of vision converge at an angle called the parallactic angle or angle of parallax. The nearer the object, the greater the parallactic angle, and vice versa. In Fig. 11.1, the optical axes of the eyes R and L are separated by a distance b, called the eye base. When the eyes are focused on a point A, the optical axes form the parallactic angle φA at A. Similarly, when sighting point B, the parallactic angle formed is φB. The brain automatically and unconsciously associates the distances DA and DB with the corresponding parallactic angles φA and φB, respectively. The depth (DA − DB) between A and B is perceived as the difference dφ between the two parallactic angles φA and φB. The difference dφ is called the differential parallax. The shortest distance of clear stereoscopic depth perception for the average adult is about 25 cm, and the eye base is between 63 and 69 mm. The maximum parallactic angle formed by the eyes, assuming 25 cm as the minimum focusing distance and 66 mm as the average eye base, is therefore approximately

tan⁻¹(66/250) ≈ 15°

The maximum distance at which stereoscopic depth perception is possible for the average adult is approximately 600 m. Beyond this distance the parallactic angles become so small that the changes in parallactic angle needed for depth perception cannot be discerned. A person with normal vision can discern changes in parallactic angle as small as 3″, and some persons as small as 1″.
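The maximum parallactic angle quoted above can be checked directly (the 66 mm eye base and 250 mm focusing distance are the averages assumed in the text):

```python
from math import atan, degrees

def parallactic_angle_deg(b_mm, D_mm):
    """Parallactic angle (degrees) for eye base b and viewing distance D."""
    return degrees(atan(b_mm / D_mm))

phi = parallactic_angle_deg(b_mm=66, D_mm=250)
print(round(phi))   # ≈ 15 degrees, as stated in the text

# the nearer the object, the greater the parallactic angle
assert parallactic_angle_deg(66, 250) > parallactic_angle_deg(66, 600)
```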

Fig. 11.1  Stereoscopic depth perception.

11.2  STEREOSCOPIC VIEWING OF PHOTOGRAPHS
Stereoscopic fusion is a process in which the right images seen by the right eye and the left images seen by the left eye of the same objects are fused in the brain to provide the observer with a three-dimensional model called the stereoscopic model or stereomodel. In Fig. 11.2a, the right eye views the top A and bottom B of the object AB at the same position a/b, while the left eye views the top and bottom of the same object at different positions a′ and b′. If the right and left eyes are replaced by two cameras kept in the same positions as the eyes, two photographs PR and PL will be obtained. Now keep these two photographs in the same relative positions as they were at the time of photography, as shown in Fig. 11.2b, placing the right eye above PR and the left eye above PL so that the right eye sees only the right photograph and the left eye only the left photograph; the respective rays from the images will then form a three-dimensional image of the object AB in virtual space. This three-dimensional image is called the stereoscopic model.

Fig. 11.2  Depth perception using stereopairs

In actual practice it is not possible to ensure that the right photograph is seen only by the right eye and the left photograph only by the left eye. To facilitate this process a simple instrument called the stereoscope is used. A lens stereoscope, shown in Fig. 11.3, consists of two identical convex lenses mounted on a frame with inclined legs. The legs can be folded and the stereoscope can be put in a pocket; it is therefore also called the pocket stereoscope. Pocket stereoscopes are cheap and best suited for small photographs. Readers who have a pocket stereoscope may use it to see the stereomodel of the stereopair in Fig. 11.4. The stereomodel can also be seen by putting a thin cardboard between the two images, and then viewing the left image with the left eye and the right image with the right eye, without blinking, for some time.

Fig. 11.3  Lens stereoscope

Fig. 11.4  Stereopair of photographs

Figure 11.5 shows a mirror stereoscope. It consists mainly of two pairs of mirrors inclined at 45° to the plane of the photographs with their reflecting surfaces facing each other, and two convex lenses. The mirror stereoscope permits the two photographs to be completely separated from each other when viewing stereoscopically, eliminating the problem of one photograph obscuring part of the overlap of the other. It also enables the entire stereomodel to be viewed simultaneously using large-size photographs.

Fig. 11.5  Mirror stereoscope


11.3  PARALLAX IN STEREOSCOPIC VIEWS
Figure 11.6 shows two points A and B which have been photographed as a and b, respectively, in the left photograph and as a′ and b′, respectively, in the right photograph, from two aerial camera positions O and O′. The principal point and the transferred principal point of the left photograph are p and p1, and those of the right photograph are p′ and p′1. The lines Oa″ and Ob″ are drawn parallel to the lines O′a′ and O′b′, respectively.

Fig. 11.6  Parallax in aerial stereoscopic vision

Since the aircraft taking the photographs is moving at a certain velocity, after taking the left photograph from O it has moved to the position O′, from where the right photograph has been taken. During this movement the image of A has moved a distance aa″ and that of B a distance bb″. This displacement of the image between two successive exposures is called the parallax. Since this displacement is along the x-axis, which is assumed to be in the direction of flight, it is known as x-parallax. In this case there is no displacement of the image along the y-axis, and hence the y-parallax is zero. If y-parallax exists in a photograph, it causes eyestrain and prevents comfortable stereoscopic viewing. It may be observed that the parallax of the higher point B is more than the parallax of the lower point A. Thus, over terrain of varying elevation each image has a slightly different parallax from that of a neighbouring image. This point-to-point difference in parallax between points on a stereopair makes it possible to view the photographs stereoscopically and gain the impression of a continuous three-dimensional model of the terrain.

11.3.1  Algebraic Definition of Parallax
The x-coordinates of a on the left photograph and of a′ on the right photograph are xa and −x′a, respectively. Since a″ has been located by drawing Oa″ parallel to the line O′a′, the x-coordinate of a″ is equal to x′a. The total displacement of the image of A is therefore aa″, which is the x-parallax pa of A. From Fig. 11.6, we find that

pa = xa − (−x′a) = xa + x′a

Similarly,

pb = xb − (−x′b) = xb + x′b

In general we can write

p = x − (−x′) …(11.1)

Thus, on a pair of overlapping photographs, the parallax of a point is equal to the algebraic difference of the x-coordinates of the point on the left and right photographs.

11.3.2  Difference in Elevation by Stereoscopic Parallax
It has been seen that different points have different x-parallaxes due to the height variations between them. This characteristic of the images on a stereopair helps in the determination of differences in elevation by making use of the differences in the x-parallaxes of the points. Eq. (11.2) relates the difference in elevation to the difference in parallax:

Δh = H Δp / (bm − Δp) …(11.2)

where Δh = the difference in elevation (ha − hb) between the two points, Δp = the difference in parallax (pa − pb) of the two points, and bm = the mean principal base = (pp′1 + p1p′)/2.

11.3.3  Measurement of Parallax
To make use of Eq. (11.2) for the determination of the difference in elevation Δh, the difference in x-parallax Δp is required. The difference in parallax between two points is obtained by making measurements on a stereopair. It requires a stereopair on which the points between which the difference in elevation is required appear, a stereoscope, a drawing sheet, a glass-marking pencil, and an instrument called a parallax bar, which is used to determine the difference in parallax by taking readings.


The parallax bar, shown in Fig. 11.7, is a simple device consisting of a main scale and a micrometer for taking readings. It carries two glass plates with engraved marks, called the floating marks. One of the two glass plates is fixed and the other is movable. The working principle of the parallax bar is explained in the next section.

Fig. 11.7  Parallax bar (floating marks on a fixed and a movable glass plate, with main scale and micrometer)

11.3.4  Concept of Floating Mark in Measurement of Parallax
In Fig. 11.8 a point A lies on the ground surface, and it has been imaged as a and a′ on the left and right photographs, respectively. There is another point M which has three positions in space: M1 above the ground surface, M0 on the ground surface, and M2 below the ground surface. The respective images on the left and right photographs of M for all three positions are shown in the figure. If the two photographs are viewed stereoscopically under a stereoscope, it will be found that the point M0 is the only point lying on the ground at A, while the remaining two points M1 and M2 are above and below the ground surface, respectively.

Fig. 11.8  Principle of floating mark

Let us now take one of the floating marks out of the three engraved on the two glass plates of the parallax bar as M, as shown in Fig. 11.9. The two photographs have been laid down on a drawing sheet


such that they are in their correct relative positions as they were at the time of photography, and the lines joining the principal points and the respective transferred principal points on the two photographs lie along the direction of flight. This process is known as base lining. The images a and a′ of a point A are marked using a glass-marking pencil. Now, to start the measurement of the parallax of A on the two photographs, the parallax bar is placed on the photographs, and the marks m and m′ on the two glass plates are treated as the two images of the point M. The mark m on the fixed glass plate is put exactly over the image a, and m′ on the movable glass plate is moved right or left using the micrometer drum, while viewing the two photographs under the stereoscope, till M appears to lie on the point A. In this position of the glass plates, the mark m is over a and m′ is over a′, and the parallax bar reading RA is taken. For all other positions of the mark m′, M would appear to lie above or below A. In a similar manner the parallax bar reading RB is taken for the point B at its images b and b′. The difference (RA − RB) of the two parallax bar readings is equal to the difference in parallax (pA − pB) = Δp of the two points.

Fig. 11.9  Measuring parallax using parallax bar.
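A numerical sketch of Eq. (11.2), using the parallax difference Δp obtained from the parallax-bar readings; the flying height, Δp, and mean principal base below are assumed values for illustration:

```python
def elevation_difference(H, dp, bm):
    """Eq. (11.2): Δh = H·Δp / (bm − Δp).
    H: flying height above datum (m); dp = RA − RB, the parallax difference,
    and bm, the mean principal base, both in the same photo units (m)."""
    return H * dp / (bm - dp)

# assumed numbers: dp of 2 mm from the parallax bar, 92 mm mean principal base
dh = elevation_difference(H=1500.0, dp=0.002, bm=0.092)
print(dh)   # elevation difference between the two points, m
```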

11.4  AERIAL PHOTOINTERPRETATION
Aerial photointerpretation is the art of examining photographic images for the purpose of identifying objects and judging their significance. It is the visual interpretation of the images appearing on an aerial photograph in the context of the photography. As visual interpretation is also applied to satellite imagery, the general term image interpretation is used. Image interpretation and the characteristics of photographic images are fully discussed in Chapter 16.


SECTION IV

REMOTE SENSING


12 Remote Sensing

12.0 INTRODUCTION
Remote sensing is a technique for learning about an object or phenomenon from a distance, i.e., without being in contact with it. It is like identifying a person from a distance through some of the features or characteristics associated with that person, such as hair style, height, complexion, body structure, walking style, etc. In our day-to-day life, every human being uses this technique without noticing it. It is such a powerful technique that physical, chemical, as well as biological properties of objects can be determined from a distance. It can also be applied successfully to interpret underground objects or phenomena. The remote sensing technique can be applied using aerial photographs or satellite images by visual interpretation. The other method, which utilizes digital data acquired through satellites, however, has wider applications and several advantages over visual interpretation, and hence this book deals only with remote sensing that utilizes digital data obtained through sensors onboard satellites.

12.1  PRINCIPLE OF REMOTE SENSING
A remote sensing system requires the following:

(a) Electromagnetic energy source, (b) Object, (c) Sensor, (d) Data recorder, (e) Filtering of the required information, and (f) Provision of the required information to the user.

Figure 12.1 shows a schematic diagram of an ideal remote sensing system. The electromagnetic energy propagates from the source through a non-interfering atmosphere in which there is no loss of energy. The source of the electromagnetic energy provides high-level energy over all wavelengths at a known, constant intensity. The electromagnetic energy falls on the target and interacts with it depending upon the target's characteristics. Consequently, some energy is scattered, transmitted, absorbed, and reflected. The target itself also emits some energy. Remote sensing makes use of the energy reflected and emitted by the target. The super sensor, having the capability of recording the energy reflected and/or emitted by the target at all wavelengths, records the spatial information in spectral form.


The super sensor transmits the recorded data to a real-time data recording system on the ground. The recorded data are processed instantaneously into an interpretable form for identifying all the features, uniquely characterized by their physical, chemical, and biological characteristics. The data are then supplied, as required, to users who have in-depth knowledge of how to use the data in their respective fields.

Fig. 12.1  An ideal remote sensing system

In the real world, however, we do not have an ideal remote sensing system. Real-world remote sensing systems have the following shortcomings:

(i) No energy source emits uniform energy both spatially and temporally.
(ii) The energy gets changed in strength and spectral distribution due to atmospheric gases, water vapour molecules, and dust particles present in the atmosphere.
(iii) The spectral response of the same matter differs under different conditions, and different matters may have similar spectral responses.
(iv) The sensor used cannot accommodate all wavelengths of the electromagnetic spectrum.
(v) The users may not receive the data in the desired form in real time.
(vi) Some users may not have sufficient knowledge of data acquisition, analysis, and interpretation of remote sensing data.

12.2  ADVANTAGES AND DISADVANTAGES OF REMOTE SENSING
Remote sensing data, once acquired, can be stored as a permanent record and used for different purposes, as many times as required, at any time. They provide a synoptic view, making it possible to view a large area at a glance, which supports various types of studies and decision-making that would otherwise be difficult. They also make possible easy acquisition of data over inaccessible areas. Map revision at medium to small scales using remote sensing data is economical and fast. The remote sensing data being in digital form, processing and analysis become faster using computers. The data analysis can be performed in the laboratory, which reduces the field work, making remote sensing data cost effective. The multi-concept of remote sensing, discussed in the next section, makes this technique more useful in a wider perspective. Besides the numerous advantages of remote sensing, there are some disadvantages. The data processing and analysis can only be done by trained and experienced personnel. It becomes expensive if applied to a small area. Remote sensing data cannot be used for the preparation of large-scale engineering maps. Interpretations based on remote sensing data require ground verification. Software used for processing and analysis of the data is costly.

12.3  MULTI-CONCEPT OF REMOTE SENSING
The multi-concept of remote sensing makes its use possible in a wider perspective because remote sensing can have

(i) Multi-station images,
(ii) Multi-band images,
(iii) Multi-date images,
(iv) Multi-stage images,
(v) Multi-polarization images,
(vi) Multi-enhancement images, and
(vii) Multi-disciplinary images.

Multi-station imaging provides overlapping pictures, called stereopairs in photogrammetry, for three-dimensional perception and better interpretation. Multi-band images make possible unambiguous identification of features. Comparative analysis can be made for features having dynamic characteristics through multi-date images. Multi-stage images are obtained from space, aircraft, and the ground for extracting more detailed information from successively smaller sub-samples of the area. Multi-polarization images enable the delineation of features utilizing the polarization of the reflected radiation. Multi-enhancement images involve the combination of multi-date, multi-band, and multi-polarization images to suitably generate composite images. Through multi-disciplinary analysis by analysts from different disciplines, more accurate and complete information can be obtained.

12.4  APPLICATIONS OF REMOTE SENSING
Remote sensing has been applied successfully to almost every field in different ways. It is being used for map preparation and revision, especially at medium to small scales, prediction of crop yield, natural resources inventory, flood mapping, disaster management, water resources management, environmental impact assessment, land use and land cover mapping, urban growth mapping, wasteland mapping, etc.

13 Electromagnetic Energy

13.0 INTRODUCTION
Electromagnetic (EM) energy plays a very important role in remote sensing, as remote sensing relies on the measurement of EM energy. The Sun is an important source of EM energy, providing energy at all wavelengths, and the sensors used in remote sensing measure the EM energy emitted by the Sun and reflected by the target under study and/or the energy emitted by the target itself. To understand the working principle of sensors, a basic understanding of EM energy, its characteristics, and its interaction with matter is required.

13.1  ELECTROMAGNETIC ENERGY
In the wave model, EM energy propagates through space in the form of sinusoidal waves characterized by an electrical field and a magnetic field, perpendicular to each other and to the direction of travel of the waves (Fig. 13.1); in the particle model, the energy is carried by particles called photons. As the energy has the two components of electrical energy and magnetic energy, it is termed electromagnetic energy. Electromagnetic waves propagate through space at the speed of light (c), which is 299,790,000 m s⁻¹, rounded off to 3 × 10⁸ m s⁻¹.

Fig. 13.1  Electromagnetic energy (wave model)

The wavelength λ is defined as the distance between successive wave crests, and the frequency ν is defined as the number of cycles of a wave passing a fixed point in a specific period of time. The wavelength and frequency of an electromagnetic wave are related to each other through the expression

ν = c/λ …(13.1)

Since the speed of light is constant, the frequency and wavelength of an electromagnetic wave are inversely related to each other. The units of measurement of wavelength are metres (m), nanometres (nm, 10⁻⁹ m), or micrometres (μm, 10⁻⁶ m); for frequency the unit is hertz (Hz).
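Eq. (13.1) can be evaluated directly; the example wavelength below (0.5 μm, in the visible green region) is an assumed value for illustration:

```python
C = 3.0e8   # speed of light, m/s (rounded value used in the text)

def frequency_hz(wavelength_m):
    """Eq. (13.1): ν = c / λ, with λ in metres and ν in hertz."""
    return C / wavelength_m

print(frequency_hz(0.5e-6))   # 0.5 µm visible light → on the order of 10^14 Hz

# frequency and wavelength are inversely related
assert frequency_hz(1.0e-6) < frequency_hz(0.5e-6)
```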

13.2  ELECTROMAGNETIC SPECTRUM AND ITS CHARACTERISTICS
Any matter having a temperature above absolute zero (0 K) generates EM energy. This means that the Sun, and also the Earth, radiate EM energy. A body capable of absorbing and re-emitting all EM energy is known as a blackbody. The radiation emitted by a blackbody at different temperatures is shown in Fig. 13.2. The area below each curve represents the total amount of energy emitted at a specific temperature.

Fig. 13.2  Radiations from a blackbody at different temperatures

The emitting ability of a real material compared to that of a blackbody is called the emissivity of the material. The total range of wavelengths of electromagnetic waves, extending from gamma rays (10⁻⁶ μm) to radio waves (10⁸ μm), is referred to as the electromagnetic spectrum (Fig. 13.3).

Fig. 13.3  Electromagnetic spectrum





From the wide range of wavelengths in the electromagnetic spectrum, remote sensing utilizes the ultraviolet to microwave portions for different purposes. For example, some of the Earth's surface materials, such as primary rocks and minerals, emit visible light when illuminated with ultraviolet radiation. The longer wavelengths used for remote sensing lie in the thermal infrared and microwave regions. Thermal infrared gives information about surface temperature, which can be related to the mineral composition of rocks or the condition of vegetation. Microwaves are used to provide information on surface roughness and other properties of the surface, such as water content.

13.3  ELECTROMAGNETIC ENERGY INTERACTION

The Sun is the most important source of EM energy. The energy incident on the ground surface is the energy transmitted through the atmosphere; some energy is lost in the atmosphere due to scattering and absorption. The energy incident upon the ground surface splits into (i) energy reflected from the ground surface, (ii) energy absorbed by the ground, and (iii) energy transmitted into the ground.

The ground itself emits some EM energy, and consequently the sensors used in remote sensing collect some energy scattered by the atmosphere, some emitted by the ground surface, and the energy reflected by the ground surface, which forms the major part of the total energy reaching the sensor (Fig. 13.4). The interrelation between the incident energy EI(λ), reflected energy ER(λ), absorbed energy EA(λ), and transmitted energy ET(λ) is written in the form of an energy balance equation:

EI(λ) = ER(λ) + EA(λ) + ET(λ)		…(13.2)
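The balance of Eq. 13.2 can be checked with made-up energy values (all numbers here are hypothetical):

```python
def reflected_energy(e_incident, e_absorbed, e_transmitted):
    """Reflected component obtained by rearranging the balance equation."""
    return e_incident - (e_absorbed + e_transmitted)

e_i, e_a, e_t = 100.0, 30.0, 20.0   # arbitrary energy units
e_r = reflected_energy(e_i, e_a, e_t)
print(e_r)                     # 50.0
print(e_i == e_r + e_a + e_t)  # True: the components sum back to E_I
```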

Fig. 13.4  Electromagnetic energy interaction

The earth features, depending upon their physical and chemical characteristics, reflect, absorb, and transmit EM energy in different proportions, and these variations make it possible to distinguish different features appearing on an image. Even the same feature type shows variations in reflected, absorbed, and transmitted energy at different wavelengths. Thus, two features that appear similar in one spectral range may look quite different in another wavelength band. Remote sensing basically utilizes the magnitude of these spectral variations to distinguish various objects by interpretation. Since the reflectance properties of earth features play an important role in distinguishing different features, Eq. 13.2 takes the form given by Eq. 13.3, in which the reflected energy equals the energy incident on a given feature reduced by the energy that is either absorbed or transmitted by the feature:

ER(λ) = EI(λ) – [EA(λ) + ET(λ)]		…(13.3)

The surface smoothness of an object plays an important role in the geometry of the reflected energy. Specular reflection is obtained from flat, smooth surfaces, where the angle of reflection is equal to the angle of incidence (Fig. 13.5a). Rough surfaces behave like diffuse reflectors, from which reflection takes place uniformly in all directions (Fig. 13.5d). These are the two extreme cases, and most earth surfaces reflect energy somewhere between these two extremes.

Fig. 13.5  Specular and diffuse reflectance

Any given surface can be placed in a particular category by comparing its roughness with the wavelength of the energy incident upon it. For example, a sandy surface appears rough in the visible portion of the EM spectrum but may appear smooth at a relatively long wavelength. Thus the particle size making up the surface and the wavelength of the incident energy together determine whether the surface appears smooth or rough: if the wavelength is smaller than the particle size, the incident energy is diffused and the surface appears rough; if the wavelength is greater than the particle size, the incident energy is reflected specularly and the surface appears smooth. Further, diffuse reflections contain spectral information on the colour of the reflecting surface whereas specular reflections do not. Hence, measuring the diffuse reflectance properties of terrain features plays an important role in remote sensing. Spectral reflectance rλ, a measure of the reflected portion of the incident energy as a function of wavelength, can be expressed as a percentage as follows:

rλ = (Energy of wavelength λ reflected from the object / Energy of wavelength λ incident upon the object) × 100 = [ER(λ) / EI(λ)] × 100		…(13.4)
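Equation 13.4 applied band by band yields the points of a spectral reflectance curve; the band energies below are invented solely to mimic the green-vegetation shape (low red, high near-infrared reflectance):

```python
# (E_R, E_I) per wavelength in micrometres; values are hypothetical
bands = {
    0.55: (12.0, 100.0),  # green
    0.65: (8.0, 100.0),   # red (absorbed by chlorophyll)
    0.85: (45.0, 100.0),  # near-infrared
}

# Eq. 13.4: r_lambda = E_R / E_I * 100 (percent)
curve = {wl: 100.0 * e_r / e_i for wl, (e_r, e_i) in bands.items()}
print(curve)  # {0.55: 12.0, 0.65: 8.0, 0.85: 45.0}
```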


The spectral reflectance curve is a graph of the spectral reflectance of an object as a function of wavelength (Fig. 13.6). The illustration shows highly generalized spectral reflectance curves for deciduous trees (broad-leaved trees such as oak and maple) and coniferous trees (needle-bearing trees such as pine and spruce).

Fig. 13.6  Spectral reflectance curves

It may be noted that the curves are plotted as a "ribbon" (or "envelope") of values, not as a single line. This is because spectral reflectance varies somewhat within a given material class, and therefore the spectral reflectance curves of different trees of the same species can differ. By analyzing the spectral reflectance curves, one can decide the wavelength region in which remote sensing data should be acquired for a particular application. The spectral reflectance curve of green vegetation is distinctive and quite variable with wavelength. The spectral reflectance curves of most soils are not very complex in appearance; an increasing level of reflectance with increasing wavelength, particularly in the visible and near-infrared regions, is one of the most outstanding reflectance characteristics of dry soils. Different levels of moisture content in sandy soil can be easily identified through the reflectance curves. Water has low reflectance compared to vegetation; relatively high reflectance is observed in turbid water, whereas water containing plants with chlorophyll has a reflectance peak in the green wavelengths.

13.4  RESOLUTION

Resolution is another important characteristic of remote sensing; it determines the ability of a sensor to discern the smallest possible details in the data. The resolution characteristics of a remote sensing imaging system are broadly classified as:

(i) spatial resolution,
(ii) spectral resolution,
(iii) radiometric resolution, and
(iv) temporal resolution.

Spatial resolution is analogous to the sharpness of the image in conventional photography. Electro-optical scanners produce digital images, and the spatial resolution of such images is described by the Instantaneous Field of View (IFOV), the solid angle measured in milliradians. Spatial resolution can be measured as A, given by the following formula:

A = H × B		…(13.5)

where A = the ground dimension of the detector element in metres, H = the flying height of the platform in metres, and B = the IFOV in milliradians.
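A sketch of Eq. 13.5; since B is quoted in milliradians, a factor of 10⁻³ converts it to radians so that A comes out in metres (the height and IFOV below are illustrative, not from any particular sensor):

```python
def ground_resolution_m(flying_height_m, ifov_mrad):
    """Ground dimension A of one detector element, Eq. 13.5 (A = H * B)."""
    return flying_height_m * ifov_mrad * 1e-3  # mrad -> rad

# A platform at 705 km with an IFOV of 0.043 mrad
print(ground_resolution_m(705_000, 0.043))  # about 30 m on the ground
```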

The IFOV of an imaging system depends upon the following factors:

(a) Height of the imaging platform, (b) Size of the detector element, and (c) Focal length of the optical system.

Images in which only large features are visible are said to have coarse or low resolution; if detection of small features is possible, the resolution is fine or high. The area viewed on the ground determines the maximum spatial resolution of a sensor, and it is determined by the resolution cell.

The spectral resolution of a sensor describes its ability to define wavelengths. The finer the spectral resolution, the narrower the wavelength range of a particular band or channel. The two components considered in spectral resolution are: (i) the number of wavelength bands or channels used, and (ii) the width of each band. A higher spectral resolution is achieved with a larger number of bands and a narrower width of each band. It is important to select the correct spectral resolution for the type of information to be extracted by deciding the wave bands to be used. Many remote sensing systems record energy over several separate wavelength ranges at various spectral resolutions.

The radiometric resolution expresses the detection of the smallest difference in radiant energy; it is the sensitivity of a sensor to the magnitude of the electromagnetic energy. The radiometric characteristics of an image describe the actual information contained in the image.

The temporal resolution refers to the frequency of data collection. To capture the changes occurring in an environmental phenomenon, the data are collected daily, weekly, monthly, seasonally, or yearly. Change detection using imageries of the same area is possible only with good temporal resolution, and the data collection frequency should match the frequency of change. Spectral characteristics of features may change over time, and these changes can only be detected by collecting and comparing multi-temporal imageries.

13.5  IMAGE HISTOGRAM

An image histogram is a graphical representation of the number of pixels in an image as a function of their intensity, i.e., the digital number (c.f., Sec. 15.2.1). It represents the tonal value distribution in a digital image, plotting the number of pixels at each tonal value. By looking at the histogram of a specific image, a viewer is able to judge the entire tonal distribution at a glance.


Histograms are made up of bins, each bin representing a certain intensity range (Fig. 13.7). The histogram is computed by examining all pixels in the image and assigning each to a bin depending on its intensity or digital number. The final value of a bin is the number of pixels assigned to it. The number of bins into which the whole intensity range is divided is usually of the order of the square root of the number of pixels.

Fig. 13.7  Histogram of near infrared band displayed in red.

For an 8-bit grayscale image there are 256 possible intensities, and so the histogram graphically displays 256 numbers showing the distribution of pixels amongst those grayscale values. An image histogram can be a useful tool for thresholding. Because the information contained in the graph is a representation of pixel distribution as a function of tonal variation, an image histogram can be analyzed for peaks and valleys, which can then be used to determine a threshold value. This threshold value can then be used for edge detection, image segmentation, and computation of co-occurrence matrices.
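A minimal histogram computation in this spirit, using a bin count near the square root of the pixel count (the 16-pixel "image" is invented; real images have millions of pixels):

```python
from math import sqrt

def histogram(pixels, n_bins=None, max_dn=255):
    """Count pixels per intensity bin over the DN range 0..max_dn."""
    if n_bins is None:
        n_bins = max(1, round(sqrt(len(pixels))))  # ~sqrt(pixel count)
    width = (max_dn + 1) / n_bins
    counts = [0] * n_bins
    for dn in pixels:
        counts[min(int(dn / width), n_bins - 1)] += 1
    return counts

image = [0, 10, 20, 30, 5, 15, 25, 35, 200, 210, 220, 230, 205, 215, 225, 235]
print(histogram(image))  # [8, 0, 0, 8]: two peaks with a valley between them
```

The empty middle bins mark the valley a thresholding routine would pick.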

13.6  PURE AND MIXED PIXELS

Remote sensing data are collected in the form of a matrix of picture elements called pixels (Fig. 13.8). A pixel represents the smallest unit of ground area in an image. Spatial resolution refers to the ability of the sensor used in a remote sensing system to discern the smallest details on the ground, whereas the pixel refers to the image collected by the sensor. A pixel may be a pure pixel or a mixed pixel. A pure pixel covers a single homogeneous class of information or feature, whereas a pixel covering more than one class of information is a mixed pixel (Fig. 13.9). The single digital value of a mixed pixel may not accurately represent any of the classes of features present, causing errors and confusion in interpretation.

Fig. 13.8  Spatial resolution in the form of pixels

Fig. 13.9  Pure and mixed pixels

The number of mixed pixels in an image generally increases as the spatial resolution decreases. Increasing the spatial resolution improves the detection of finer details but also increases the cost of processing, as there is a substantial increase in the number of pixels.
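The cost remark can be made concrete: halving the pixel size roughly quadruples the number of pixels per scene (the 185 km scene size is taken from the LANDSAT description in Chapter 14; the resolutions are illustrative):

```python
def pixels_per_scene(scene_km, resolution_m):
    """Number of pixels in a square scene at a given ground resolution."""
    side = int(scene_km * 1000 / resolution_m)
    return side * side

coarse = pixels_per_scene(185, 30)
fine = pixels_per_scene(185, 15)
print(fine / coarse)  # close to 4: double the resolution, ~4x the pixels
```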


14 Sensors and Platforms

14.0 INTRODUCTION

Electromagnetic energy from the Sun reflected by the Earth, together with energy emitted by the Earth itself, is measured and recorded to derive information for identifying surface features and their characteristics. The variations in the measured energy values, which depend on the physical, chemical, and biological characteristics of the features, help in identifying the features and their characteristics. Sensors placed on either static or moving platforms measure the electromagnetic radiation, and the sensor-platform combination determines the characteristics of the resulting image data. Different types of sensors are available for different applications, and aircraft and satellites are used as platforms to carry one or more sensors.

14.1  BROAD CLASSIFICATIONS OF SENSORS AND PLATFORMS

Sensors can be classified on the basis of their source of energy as:

(i) Passive sensor, and (ii) Active sensor.

The sensors which depend on an external source of energy, usually the Sun, are known as passive sensors, while the sensors which have their own source of energy are called active sensors. A normal photographic camera is one of the oldest sensors, and under different operating conditions it acts either as a passive or an active sensor. Under good illumination, when the flash is not used, the camera behaves as a passive sensor; when it operates under poor illumination using a flash, it becomes an active sensor. In order for a sensor to collect and record reflected or emitted energy from the Earth's surface, it is placed on a stable platform away from the surface being observed. The platform may be:

(i) Ground-based, (ii) Airborne, or (iii) Space-borne.

Sensors placed on ground-based platforms are employed for recording detailed information about the surface. The collected information may be used as reference data for subsequent analysis. Aircraft or helicopters, which are airborne platforms, are used to collect detailed images over virtually any part of the Earth. Space-borne platforms are satellites launched for remote sensing purposes. Since the satellites provide repetitive coverage of an area under study, satellite data products have wide applications in various fields due to their multiple characteristics discussed in Sec. 12.3, and hence, in the following sections various space-borne platforms and sensors are discussed. Satellites are placed into orbits tailored to match the capabilities of the sensors they carry and the objectives of each satellite mission. Orbit selection depends upon the altitude of the satellite and its orientation and rotation relative to the Earth. Basically, satellites are categorized as: (i) geostationary satellites, and (ii) Sun-synchronous satellites.

If a satellite is positioned in the equatorial plane at an altitude of approximately 35,800 km, moving in the same direction as the Earth's rotation, it is called a geostationary satellite. Such satellites have the same period as the Earth's rotation, and hence appear stationary with respect to the Earth's surface. Geostationary satellites are ideal for meteorological or communication purposes. The satellites positioned in a near north-south orbital plane and made Sun-synchronous are called Sun-synchronous satellites. Such satellites are capable of revisiting the same area under uniform illumination conditions, at the same local time, in different seasons every year. This is an important factor which helps in observing and analyzing changes in the appearance of the features within each scene under the same conditions of observation, without requiring corrections for different illumination conditions. Sun-synchronous satellites are useful for mapping earth resources. Since they also provide a synoptic view of large areas with fine detail and repetitive coverage of the land area, they are well suited to monitoring global environmental problems.
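The quoted ~35,800 km altitude follows from Kepler's third law, requiring the orbital period to equal one sidereal day (standard orbital mechanics; the constants below are approximate):

```python
import math

MU = 3.986004e14       # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.378e6      # Earth's equatorial radius, m
T_SIDEREAL = 86164.0   # one sidereal day, s

# Kepler's third law: T^2 = 4 pi^2 a^3 / mu  =>  a = (mu T^2 / 4 pi^2)^(1/3)
a = (MU * T_SIDEREAL ** 2 / (4.0 * math.pi ** 2)) ** (1.0 / 3.0)
altitude_km = (a - R_EARTH) / 1000.0
print(round(altitude_km))  # about 35786, i.e. the ~35,800 km quoted above
```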

14.2  SENSORS AND SATELLITES LAUNCHED FOR DIFFERENT MISSIONS

This section presents brief descriptions of the satellites and sensors launched for different missions.

14.2.1  Land Observation Satellites and Sensors

Worldwide, three major land observation satellite missions have been launched: (i) LANDSAT by the U.S.A., (ii) SPOT by France, and (iii) IRS by India. These are low-altitude satellites, orbiting at less than 1000 km above the Earth's surface. The sensors used are of low spatial resolution and have a spectral resolution of less than 0.1 µm.

LANDSAT Satellites

The LANDSAT-1 to 7 satellites have all been placed in near polar Sun-synchronous orbits, with altitudes ranging from 700–900 km, revisit periods of 16–19 days, and areal coverage of 185 km × 185 km per scene. The special features of the LANDSAT satellites are a combination of sensors with spectral bands tailored to Earth observation, functional spatial resolution, and good areal coverage. The sensors used include the Return Beam Vidicon (RBV), Multi-Spectral Scanner (MSS), Thematic Mapper (TM), Enhanced Thematic Mapper (ETM), and Enhanced Thematic Mapper Plus (ETM+).


SPOT Satellites

SPOT (Système Pour l'Observation de la Terre) is a series of imaging satellites designed and launched by CNES (Centre National d'Études Spatiales) of France, in collaboration with Sweden and Belgium, to provide Earth observation data. The SPOT-1 to 5 satellites are near polar Sun-synchronous satellites with altitudes of 822–832 km and a revisit period of 26 days. The SPOT satellites ushered in a new era in remote sensing by introducing linear array sensors with a pushbroom scanning facility. The SPOT sensors have pointable optics enabling off-nadir viewing, permitting full-scene stereoscopic imaging of the same area from two different satellite orbital paths, which is of tremendous value for terrain interpretation, mapping, and visual terrain simulation. The SPOT satellites use various sensors having different capabilities: the HRV (High Resolution Visible), HRVIR (High Resolution Visible and Infrared), HRG (High Resolution Geometric), and HRS (High Resolution Stereoscopic) sensors.

IRS Satellites

The IRS satellite systems come under the umbrella of the National Natural Resources Management System (NNRMS) of India. The launch of IRS-1A on March 17, 1988, India's first civil remote sensing satellite, marked the beginning of a successful journey in the international space programme. The IRS-1A, 1B, and 1C satellites are near polar Sun-synchronous, and the 1D satellite is Sun-synchronous; the altitudes vary from 821–914 km with repeat cycles of 24–26 days (5 days for PAN and WiFS). The sensors used in these satellites are the LISS (Linear Imaging Self-Scanning), PAN (Panchromatic), and WiFS (Wide Field Sensor). In order to strengthen India's own capability to launch space vehicles, a series of satellites known as IRS-P was initiated. With the availability of IRS-1C/1D data, it became possible to support cartographic and town planning applications up to 1:10,000 scale, and height information could be provided to an accuracy of approximately 10 m using stereo pairs of the imageries. This provided the necessary impetus to develop high resolution sensors with a resolution of 2.5 m using PAN cameras, one camera working at +26° with respect to nadir and the other at –5°. Cartosat-1, which used such PAN cameras, was placed in a polar Sun-synchronous orbit at an altitude of 618 km with a repeat cycle of 116 days, and was dedicated to cartographic and mapping applications. The IRS missions have applications in areas like resource survey and management, urban planning, forestry studies, disaster monitoring, and environmental studies.

14.2.2  High Resolution Sensors

The stereo ability of SPOT and IRS-1C/1D PAN data provided a new dimension to cartographic mapping applications up to 1:10,000 scale. Intense research has led to significant improvement in spatial resolution, to close to 1 m. Some of the sensors capable of acquiring such high resolution imagery are IKONOS, QuickBird, and OrbView. IKONOS, derived from the Greek word for image, is the world's first commercial high-resolution imaging satellite, providing 1 m resolution panchromatic (B & W) images and 4 m resolution multispectral (colour) images. It has a multitude of applications, such as mapping, agriculture monitoring, and urban planning. The QuickBird satellite can acquire black and white images with a resolution of 61 cm and colour images (4 bands) with a resolution of 2.44 m, each scene covering a surface area of 16.5 km × 16.5 km.


ORBIMAGE's OrbView-1 satellite, launched in 1995, carried two atmospheric instruments that improved weather forecasting capabilities around the world. The satellite's miniaturized camera provided several weather images daily, together with global lightning information, during day and night operations, and its atmospheric monitoring instrument provided global meteorological data useful for improving long-term weather forecasts. The OrbView-2 satellite provides unprecedented multispectral imagery of the Earth's land and ocean surfaces every day. OrbView-3, the world's first commercial satellite to provide high-resolution imagery from space, provides panchromatic and multispectral imagery with a revisit period of less than three days for each location on the Earth. Its images are useful for a variety of applications, such as telecommunications and utilities, oil and gas, mapping and surveying, and agriculture and forestry.

14.2.3  Earth Observing (EO-1) Satellites

In 1996, NASA started the New Millennium Program (NMP) to develop new and more cost-effective approaches for conducting scientific missions in the 21st century. Earth Observing-1 (EO-1) was the first venture of this program, with the objective of an advanced land imaging mission and technologies contributing to a significant reduction in the cost of follow-on LANDSAT missions. LANDSAT-7 and EO-1 image the same ground area, and all three of the EO-1 land imaging instruments view all or sub-segments of the LANDSAT swath. Each imaging instrument has unique filtering methods for passing light in only specific spectral bands, and the bands are selected to provide the best look at specific surface features or land characteristics for scientific or commercial applications. There are three imagers in the EO-1 suite: the Advanced Land Imager (ALI), Hyperion, and the Linear Etalon Imaging Spectral Array (LEISA) Atmospheric Corrector (AC). The ALI is a technology verification instrument with the potential for reducing the cost and size of future LANDSAT-type instruments by a factor of 4 to 5. The Hyperion instrument provides a new class of Earth observation data for improved Earth surface characterization. The AC provides the following capabilities:

(a) High spectral, moderate spatial resolution hyper-spectral imaging using wedge filter technology,
(b) Spectral coverage of 0.89–1.58 µm, with the band selected for optimal correction of high spatial resolution images, and
(c) Correction of surface imagery for atmospheric variability (primarily water vapour).

Using the Atmospheric Corrector's measurements of actual rather than modeled absorption values enables more precise predictive models to be constructed for remote sensing applications. The algorithms developed will enable more accurate measurement and classification of land resources, and better models for land management.

14.2.4  Radarsat-1

Radarsat-1 was launched by the Canadian Space Agency in November 1995. As a remote sensing device, Radarsat is quite different from the Landsat and SPOT satellites: it is an active remote sensing system that transmits and receives microwave radiation, whereas the Landsat and SPOT sensors passively measure reflected radiation at wavelengths roughly equivalent to those detected by our eyes. Radarsat's microwave energy penetrates clouds, rain, dust, and haze, and produces images regardless of the Sun's illumination, allowing it to image in darkness. Radarsat images have a resolution between 8 and 100 metres. This sensor has found important applications in crop monitoring, defence surveillance, disaster monitoring, geologic resource mapping, sea-ice mapping and monitoring, oil slick detection, and digital elevation modelling.


14.2.5  Weather Satellites

Weather monitoring and forecasting was one of the first civilian applications of satellite remote sensing. The first true weather satellite, TIROS-1 (Television and Infrared Observation Satellite), was launched by the United States in 1960. In 1966 the geostationary Application Technology Satellite (ATS-1) was launched, which provided hemispheric images of the Earth's surface and cloud cover every half hour. For the first time, the development and movement of weather systems could be routinely monitored. Now several countries around the world operate weather or meteorological satellites to monitor weather conditions around the globe. Some of the weather satellites and sensors are GOES (Geostationary Operational Environmental Satellite), NOAA (National Oceanic and Atmospheric Administration) AVHRR (Advanced Very High Resolution Radiometer), INSAT (Indian National Satellite), and OLS (Operational Linescan System). Some imageries are presented in Figs. 14.1 to 14.4.

Fig. 14.1  15 m 7-4-2 pan-fused Landsat-7 image of Moundou, Chad

Fig. 14.2  Landscape of Tuscany, Italy

Fig. 14.3  SPOT false-colour image of the southern portion of Manhattan, New York

Fig. 14.4  Resourcesat-2 imagery of Delhi and surrounding areas, India


15 Satellite Data Products

15.0 INTRODUCTION

The data collected by different types of sensors are to be disseminated to the user community for analysis. To use these data efficiently and effectively, it is important that the user knows the manner in which the data are stored, i.e., the data format, the different types of data products, and their characteristics. In this chapter, the data formats, data products, and their characteristics are discussed.

15.1  DATA RECEPTION, TRANSMISSION, AND PROCESSING

The data acquired by a satellite can be transmitted to the Earth's surface by

(i) direct transmission,
(ii) recording the data onboard the satellite and transmitting it later,
(iii) the Tracking and Data Relay Satellite System (TDRSS), and
(iv) near real-time processing systems producing low-resolution imagery in hard copy or soft copy.

The data can be directly transmitted to Earth as shown in Fig. 15.1 if a Ground Receiving Station (GRS) is in the line of sight of the satellite S1. If this is not the case, the data can be recorded onboard the

Fig. 15.1  Satellite data transmission


satellite using some recording device for transmission at a later time. The data can also be relayed to the GRS through the Tracking and Data Relay Satellite System (TDRSS) S2, which consists of a series of communication satellites in geosynchronous orbit; the data are transmitted from one satellite to another until they reach the appropriate GRS. The data received at the GRS are in raw digital format, which may then, if required, be processed to correct for systematic, geometric, and atmospheric distortions present in the imagery, and be transformed into a standardized format. The data are written on CCT, disk, or CD. The data are typically archived at most receiving and processing stations, and whole libraries of data are managed by government agencies, such as the NRSA (National Remote Sensing Agency) in India, or in some countries by authorized commercial companies. Many sensors can provide users with quick-turnaround imagery when data are needed immediately after collection. Near real-time processing systems are used to produce low-resolution imagery in hard copy or soft copy format within hours of data collection, which can then be transmitted to the end users. Real-time processing of imagery in airborne systems has been used, for example, to pass thermal infrared imagery to forest fire fighters right at the scene.

15.2  REMOTE SENSING DATA

In remote sensing, data collection in the form of images is generally through scanner-based devices, and hence the primary data collected at the sensor is digital in nature. However, the data may be distributed to users in digital and/or photographic form as per their choice or preference.

15.2.1  Digital Data

As a sensor scans the Earth's surface, it generates an electrical current that varies in intensity with the brightness of the land surface. If the sensor detects several spectral bands, separate electrical currents are generated. Each electrical current is, of course, a continuously varying signal that must be subdivided into distinct units to create the discrete values necessary for digital analysis. The conversion from the continuously varying analog signal to discrete digital values is accomplished by sampling the electric current at uniform intervals. All signal values within an interval are represented by an average value, and the variation within the interval is lost; thus the choice of sampling interval forms one dimension of the resolution of the sensor. In addition, digital values are usually scaled so that they portray relative, rather than absolute, brightness, i.e., the digital values do not represent true radiometric values from the scene (Fig. 15.2). Another limit upon image detail is the manner in which digital values are quantized from the analog signal. Each digital value is recorded as a series of binary digits or bits. Each bit records an exponent of a power of 2, with the value of the exponent determined by the position of the bit in the sequence. As an example, consider a system designed to record seven bits for each digital value. This means that seven binary places are available to record the brightness sensed in each band of the sensor. The seven values are recorded as a sequence of successive powers of 2. A binary value (either 0 or 1) denotes whether or not that specific power is to be added to the total value for a given pixel: a "1" signifies that the specific power of 2 (determined by its position within the sequence) is to be evoked, and a "0" indicates a value of zero for that position.
Thus, the seven-bit binary number "1111111" signifies 2⁶ + 2⁵ + 2⁴ + 2³ + 2² + 2¹ + 2⁰ = 64 + 32 + 16 + 8 + 4 + 2 + 1 = 127, and "1001011" signifies 2⁶ + 0 + 0 + 2³ + 0 + 2¹ + 2⁰ =


64 + 0 + 0 + 8 + 0 + 2 + 1 = 75. If a system records digital values in eight bits, then "11111111" signifies 2⁷ + 2⁶ + 2⁵ + 2⁴ + 2³ + 2² + 2¹ + 2⁰ = 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = 255, and similarly the six-bit digital value "111111" signifies 2⁵ + 2⁴ + 2³ + 2² + 2¹ + 2⁰ = 32 + 16 + 8 + 4 + 2 + 1 = 63. In this manner discrete digital values for each pixel are recorded in a form suitable for storage on tapes or disks, and for subsequent analysis by digital computer. The values, as read from tape or disk, are known as digital numbers (DN), brightness values (BV), or digital counts.

Fig. 15.2  Schematic representation of data collection as DN values

It may be noted that the number of brightness values within a digital image is determined by the number of bits available. A 6-bit system allows a maximum range of 64 possible values, i.e., a pixel may have a value from 0 to 63; a 7-bit system gives 128 brightness values, i.e., 0 to 127; and an 8-bit system extends the range to 256 values, i.e., 0 to 255. Hence, the number of bits determines the radiometric resolution of a digital image.
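The binary arithmetic above is exactly what Python's base-2 integer parsing performs; a short sketch recovering the worked values:

```python
def dn_from_bits(bits):
    """Digital number encoded by a binary string."""
    return int(bits, 2)

def brightness_levels(n_bits):
    """Number of distinct DN values an n-bit system can record."""
    return 2 ** n_bits

print(dn_from_bits("1001011"))  # 75, as worked out above
print(dn_from_bits("1111111"))  # 127
print(brightness_levels(6), brightness_levels(7), brightness_levels(8))  # 64 128 256
```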

15.2.2  Tape Format

Digital data in remote sensing must be stored in an organized form for easy and fast retrieval. Three formats are commonly used to store image data:

(i) Band Interleaved by Pixel (BIP), (ii) Band Interleaved by Line (BIL), and (iii) Band Sequential (BSQ).

Fig. 15.3 shows the layout of a sample dataset of two scan lines by two pixels for three bands. If this dataset is stored in the BIP, BIL, and BSQ formats, the sequence of data for each format is as shown in Fig. 15.4. In the Band Interleaved by Pixel (BIP) format, the data are stored on a pixel basis: any given pixel, once located on the tape, is found with its values for all bands written in sequence, one after the other. The sample dataset stored in BIP format will have the sequence of data shown in Fig. 15.4a. This arrangement may be advantageous in some situations, but for most applications, in which the data volume is typically very large, it is awkward to sort through the entire sequence of data to separate the bands into their respective images.

The Band Interleaved by Line format treats each line of data as a separate unit. Each line is represented in all bands before the next line is encountered. A typical example of the BIL data format is shown in Fig. 15.4b. In the Band Sequential format, all data for band 1 are written in sequence, followed by all data for band 2, and then band 3, as shown in Fig. 15.4c. In this format, each band is treated as a separate unit. For many applications, this format is among the most practical, as it presents the data in the form that most closely resembles that used for display and analysis.

Fig. 15.4  Tape data formats

Fig. 15.3  Layout of a sample dataset

The best tape format depends upon the context of the study, and often upon the software and equipment available to a specific analyst. If all bands for an entire image are to be used, then the BSQ and BIL formats are useful, because they are convenient for reconstructing the entire scene in all bands. On the other hand, if the exact position on the image of the sub-area to be studied is known, then the BIP format is useful, because the values for all bands are found together, and it is not necessary to read through the entire data set to find a specific region.
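The three orderings for the two-line, two-pixel, three-band sample dataset can be sketched with NumPy axis reordering. This is an illustrative sketch only; the DN values 1 to 12 are hypothetical.

```python
import numpy as np

# Sample dataset: 2 scan lines x 2 pixels x 3 bands.
# data[line, pixel, band] holds the DN value.
data = np.arange(1, 13).reshape(2, 2, 3)

# BIP: all band values of a pixel stored together (line, pixel, band).
bip = data.flatten()

# BIL: each line written once per band before the next line (line, band, pixel).
bil = data.transpose(0, 2, 1).flatten()

# BSQ: each band stored as a complete image (band, line, pixel).
bsq = data.transpose(2, 0, 1).flatten()

print(bip)  # pixel-by-pixel ordering
print(bil)  # line-by-line, band-within-line ordering
print(bsq)  # band-by-band ordering
```

The transposes make the trade-off concrete: extracting one full band from `bsq` is a single contiguous slice, whereas extracting it from `bip` requires striding through the whole sequence.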

15.2.3  Data Products

The satellite data received at the Ground Receiving Stations are digital in nature. The data supplied to the user community in India by NRSA are generated in two categories: (1) digital and (2) photographic. These may be standard or special products. Standard products are generated after applying radiometric and geometric corrections, whereas special products are generated after further processing of the standard product. The raw data recorded at the Earth station are corrected to various levels after further processing at the Data Processing System (DPS), as below:

Level 0: Raw data (uncorrected)
Level 1: Radiometrically and geometrically corrected only for the Earth station (for browse facility)
Level 2: Radiometrically and geometrically corrected (standard product)
Level 3: Special processing, such as merging or enhancement of the standard (Level 2) product

Standard products can be supplied on either photographic or digital media. Black and White (B & W) and False Colour Composite (FCC) photographic products are available as films or paper prints. Digital products are supplied on various magnetic media, such as Computer Compatible Tape (CCT), 8 mm Exabyte tape, and CD-ROM. All the data supplied to the users are geographically referenced so that the geographic locations of points on the surface of the Earth can be identified conveniently.

16 Image Interpretation and Digital Image Processing

16.0 INTRODUCTION

Remote sensing data, once acquired and processed, are supplied to users for various applications. The data can be interpreted visually using a hard copy of the image, or digitally using a soft copy of the image and suitable software. The visual method is known as image interpretation, and the digital method as digital image processing. Digital image processing is an extremely broad subject, and often involves complex mathematical procedures. Though digital methods cannot replace visual interpretation, they have some advantages, such as consistency of results, discrimination of more shades of grey tones, and quantitative analysis. Table 16.1 presents a broad comparison between visual and digital interpretation.

Table 16.1:  Comparison between visual and digital interpretation

1. Visual: It is a traditional approach based on human intuition, and its success depends on the experience of the interpreter or analyst. Digital: It is a recent approach, and requires specialized training.
2. Visual: It requires simple and inexpensive instruments. Digital: It is complex, highly mathematical, and requires expensive instruments.
3. Visual: It uses the brightness characteristics of the object, and accounts for the spatial content of the image. Digital: It relies heavily upon the brightness and spectral content of the object, and does not use the spatial content of the image.
4. Visual: Usually a single band of data is used for analysis; however, colour products generated from three bands of data can also be used. Digital: Multiple bands of data are used for analysis.
5. Visual: The analysis process is subjective and qualitative, and dependent on analyst bias; however, the deductions are concrete. Digital: The process is objective and quantitative in nature, yet abstract.

16.1  IMAGE INTERPRETATION

An image is a detailed photographic record of features on the ground at the time of data acquisition. Acquired images are examined by an analyst in a systematic manner with the help of supporting information collected from maps, field visit reports, or previously interpreted images of the same area. Image interpretation, also popularly known as photographic interpretation, is defined as the act of examining photographic images for the purpose of identifying objects and judging their significance. The interpretation is carried out on the basis of certain physical characteristics of the objects and phenomena appearing in the image. The success of an image interpretation depends upon the experience of the analyst, the type of object or phenomenon being interpreted, and the quality of the image.

16.1.1  Interpretation Procedure

The image interpretation procedure is complex in nature, and requires several tasks to be conducted in a well-defined routine consisting of the processes of classification, enumeration, mensuration, and delineation. Classification is the first task, in which, based on the appearance of an object or feature, the analyst assigns it to a class or informational group. This is done through the process of detection, followed by recognition, in which an object or phenomenon is assigned to a class or category. Finally, the feature is identified with a certain degree of confidence as belonging to a specific class through the process of identification. Enumeration, the next step, relates to listing and counting the objects or phenomena visible on an image. Mensuration is the process of measurement, wherein the measurements of objects are made in terms of length, area, volume, or height. Another form of measurement, in terms of image brightness characteristics, is known as densitometry. Delineation, the final task, is outlining the regions of homogeneous objects or areas characterized by specific tones and texture.

16.1.2  Image Characteristics

In order to carry out the above processes of classification, enumeration, mensuration, and delineation, it is essential to understand the characteristics of images which govern the appearance of objects or phenomena on an image. These characteristics, which allow for a systematic and logical approach to image interpretation, are known as the elements of photointerpretation. They are described below.

Size of an object is a function of scale. Generally, the relative sizes of objects must be considered within the same image.

Shape normally refers to the general form or outline of an individual feature. It is a very distinctive clue for identification. Normally, man-made features tend to have defined edges leading to regular shapes, while natural objects have irregular shapes.

Tone of an object or phenomenon refers to its relative brightness or colour in an image. It is one of the fundamental elements for distinguishing between different objects or phenomena, and is a qualitative measure.

Pattern refers to the spatial arrangement of visibly discernible objects. Typically, repetition of similar tones and textures produces a distinctive and recognizable pattern.

Texture refers to the frequency of tonal changes in particular areas of an image. It is a qualitative characteristic, normally categorized as rough or smooth.

Shadow is an important characteristic in image interpretation. It gives an idea of the profile and relative height of an object, and hence helps in easier identification.

Association is another important characteristic, as it considers the interrelation of an object or phenomenon with the objects in its close proximity.

Site refers to the locational characteristics of objects, such as topography, soil, vegetation, and cultural features.

16.1.3  Image Interpretation Strategies

The strategy adopted in image interpretation has the following steps:

(i) Field observation,
(ii) Direct recognition,
(iii) Interpretation by inference,
(iv) Probabilistic interpretation, and
(v) Deterministic interpretation.

When an analyst is not able to correlate the relationships between the ground and the image, he has to visit the ground to make proper identification. Thus, field observation is an important part of interpretation for assessing the accuracy of identification. The analyst utilizes the elements of image interpretation for direct recognition. It is qualitative and subjective, and depends on the experience and judgement of the analyst. In interpretation by inference, the analyst identifies information on the basis of the presence of some other information to which it is closely related. Sometimes non-image information or knowledge, such as a certain crop being grown in a particular season, can be utilized in probabilistic interpretation. Deterministic interpretation is based upon quantitatively expressed relationships that tie image characteristics to ground conditions. A good example of deterministic interpretation is the extraction of precise information about the landscape from a stereoscopic 3-D model of the terrain using photogrammetric methods. Deterministic interpretation requires very little non-image information.

16.1.4  Photomorphic Analysis

Photomorphic analysis is another approach to the interpretation of complex patterns. It consists of identifying areas of uniform appearance on the image, i.e., searching for photomorphic regions, which are regions of relatively uniform tone and texture. In the first step, the regions of uniform image appearance are identified using tone, texture, shadow, and other elements of image interpretation as a means of separating regions; the analyst then tries to match the photomorphic regions to useful classes of interest. Photomorphic regions do not always correspond to the categories of interest to the interpreter. The appearance of one region may be dominated by factors related to geology and topography, whereas that of another region in the same image may be controlled by the vegetation pattern.

16.1.5  Image Interpretation Keys

Image interpretation keys are valuable aids for summarizing complex information. They serve as a means of training inexperienced personnel in the interpretation of complex or unfamiliar topics, and of organizing information and examples pertaining to specific topics. An image interpretation key is simply reference material designed to permit rapid and accurate identification of objects and features represented on aerial images. A key usually consists of two parts: (a) a collection of annotated or captioned images or stereograms, and (b) a graphic and/or word description.


16.1.6  Equipment for Image Interpretation

Image interpretation requires relatively simple and inexpensive equipment, although some of the optional items may be expensive. To carry out the interpretation process, a light table, rulers, parallax scale, stereoscope, magnifiers, densitometer, parallax bar, and zoom transferscope may be required. Figure 16.1 compares B & W and colour images of the same scene. It may be noted that an increased number of tones is found on the colour image compared to the B & W image of the same area, which makes visual interpretation of the scene much easier.

Fig. 16.1  Comparison of colour and black and white images (Earth Science and Map Library)

16.2  DIGITAL IMAGE PROCESSING (DIP)

Image interpretation is a visual method developed for the interpretation of aerial photographs, whereas digital image processing is a method for interpreting digital images using high-speed computers. Image processing is a vital part of remote sensing operations. It is the task of processing and analyzing the digital data using image processing algorithms. The digitally processed data are the results of computations for each pixel, which form a new digital image that may be displayed or recorded in pictorial format, or further manipulated. DIP normally consists of the following operations:

(i) Image rectification and restoration,
(ii) Image enhancement,
(iii) Image transformation,
(iv) Image classification, and
(v) Data merging and GIS integration.


Initial statistics provide an insight into the raw data as received from the satellite. Statistical information such as the mean, standard deviation, and variance of each band, along with histograms and scattergrams, is required for the next stage of processing the remote sensing data.
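The initial statistics described above can be sketched with NumPy. The band values here are randomly generated stand-ins; in practice they would be read from the satellite data file.

```python
import numpy as np

# A hypothetical 8-bit band (DNs 0-255); a stand-in for real satellite data.
rng = np.random.default_rng(0)
band = rng.integers(0, 256, size=(100, 100))

# Per-band summary statistics used as a first look at the raw data.
mean = band.mean()
std_dev = band.std()
variance = band.var()

# Histogram: count of pixels at each of the 256 brightness levels.
hist, _ = np.histogram(band, bins=256, range=(0, 256))

print(mean, std_dev, variance)
```

A scattergram is the two-band analogue: plotting each pixel's DN in one band against its DN in another, which reveals correlation between bands.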

16.2.1  Image Rectification and Restoration

Image rectification and restoration are required to correct a distorted or degraded image so as to create a more faithful representation of the original scene. The raw data are corrected for geometric distortions due to the sensor and variations in Earth geometry, geocoded and registered to a real-world coordinate system, calibrated radiometrically for sensor irregularities, and cleaned of noise. Image rectification and restoration produce a corrected image that is as close as possible to the radiant energy characteristics of the original scene.

16.2.2  Image Enhancement

The purpose of image enhancement is to improve the appearance of the imagery and to assist in subsequent visual interpretation and analysis. Normally, image enhancement involves techniques for increasing the visual distinction between features in a scene, such as improving their tonal distinction through contrast stretching. Contrast enhancement involves changing the original digital values (commonly 8-bit, 256 levels) of the pixels in a scene so that the full range of the brightness scale is used, thereby increasing the contrast between the targets and their background. The key to understanding contrast enhancement is the concept of the image histogram. By manipulating the range of digital values in an image, represented graphically by its histogram, various enhancements can be applied to the data.

The technique of spatial filtering is used to enhance or suppress specific spatial patterns in an image based on the spatial frequency of specific features. Spatial frequency is related to image texture, which refers to the frequency of the variations in tone appearing in an image. Rough-textured areas, where there are abrupt tonal variations over a small area, have high spatial frequencies, while smooth-textured areas, with little tonal variation over several pixels, have low spatial frequencies. Filters are designated as low-pass, high-pass, and high-boost filters. A low-pass filter smooths the appearance of an image by reducing the small details, whereas a high-pass filter does the opposite, sharpening the appearance of finer details. If a low-pass filter is applied to an image and the result is subtracted from the original, leaving behind only the high-spatial-frequency information, the filter is known as a high-boost filter.
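A linear contrast stretch and a 3x3 mean (low-pass) filter can be sketched as follows. This is a minimal illustration under the simplest assumptions (min-max stretch, edge padding); the function names are invented for the example.

```python
import numpy as np

def linear_stretch(band, out_min=0, out_max=255):
    """Linearly stretch DN values so the full brightness scale is used."""
    lo, hi = band.min(), band.max()
    stretched = (band - lo) / (hi - lo) * (out_max - out_min) + out_min
    return stretched.astype(np.uint8)

def low_pass(band):
    """3x3 mean filter: smooth the image by averaging each pixel with
    its eight neighbours (edges handled by repeating the edge values)."""
    padded = np.pad(band.astype(float), 1, mode="edge")
    out = np.zeros(band.shape, dtype=float)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += padded[1 + di : 1 + di + band.shape[0],
                          1 + dj : 1 + dj + band.shape[1]]
    return out / 9.0

band = np.array([[10, 10, 10], [10, 100, 10], [10, 10, 10]])
print(linear_stretch(band))   # DN 10 maps to 0, DN 100 maps to 255
print(low_pass(band))         # the bright pixel is averaged into its neighbours
```

Subtracting `low_pass(band)` from `band` isolates the high-spatial-frequency detail, which is the operation the text associates with sharpening filters.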

16.2.3  Image Transformation

Image transformation is similar in concept to image enhancement. Generally, image enhancement operations are carried out on single-band data, while image transformations are usually carried out on multi-band data. Arithmetic operations, such as subtraction, addition, multiplication, and division, are performed to combine and transform the original bands into new images which display better or highlight certain features in a scene. Image subtraction produces a difference image in which areas of little or no change are represented in mid-grey tones, while areas of significant change are shown in brighter or darker tones depending on the direction of change in reflectance between the two images. Image addition is basically an averaging operation intended to reduce the overall effect of noise. Image division, one of the most common transforms applied to image data, serves to highlight subtle variations in the spectral response of various surface covers. Image multiplication of two images is rarely performed in remote sensing. In a multi-band dataset, when the spectral ranges of the bands are located very close to each other, the repetitive information leads to redundancy of data. Principal Component Analysis helps in reducing the number of bands for analysis, and hence the redundancy.
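Image division can be sketched as a band ratio. One widely used ratio, not specific to this text, is the normalized difference vegetation index (NDVI), computed from the NIR and red bands; the reflectance values below are hypothetical.

```python
import numpy as np

# Hypothetical reflectances for a 2x2 scene.
nir = np.array([[0.50, 0.40], [0.30, 0.05]])   # near-infrared band
red = np.array([[0.08, 0.10], [0.12, 0.04]])   # red band

# NDVI = (NIR - Red) / (NIR + Red).
# Healthy vegetation (high NIR, low red) gives values near 1;
# bare surfaces and water give values near or below 0.
ndvi = (nir - red) / (nir + red)
print(ndvi.round(2))
```

The ratio suppresses variations common to both bands (such as illumination differences across slopes) while highlighting the spectral contrast between them, which is why division is singled out in the text.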

16.2.4  Image Classification

The objective of image classification is to replace visual analysis of the image data with automated quantitative techniques for the identification of features in a scene. The operations involve the identification and classification of all pixels in a digital image into one of several land cover classes or themes using suitable software. Statistically based decision rules are applied to determine the land cover identity of each pixel in an image. When these decision rules utilize the spectral radiances for the classification, the process is called spectral pattern recognition. When the decisions are based on the geometric shapes, sizes, and patterns present in the image data, the process is called spatial pattern recognition. There is a variety of approaches to digital image classification; however, the most commonly used procedures are supervised classification and unsupervised classification, discussed below.

Supervised Classification

In a supervised classification, the analyst identifies in the imagery homogeneous, representative samples of the different surface cover types (information classes) of interest. The selection of appropriate training areas is based on the analyst's familiarity with the geographical area and knowledge of the actual surface cover types present in the image. Thus, the analyst supervises the categorization of a set of specific classes. The numerical information in all spectral bands for the pixels comprising these areas is used to train the computer to recognize spectrally similar areas for each class. The computer uses special programs or algorithms to determine the numerical signature for each training class. Once the computer has determined the signatures, each pixel in the image is compared to these signatures and labelled as the class it most closely resembles digitally. Thus, in a supervised classification the analyst first identifies the information classes, and then the spectral classes which represent them are determined.
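One common decision rule for comparing pixels to training signatures (not necessarily the one any particular software uses) is minimum distance to means: label each pixel with the class whose mean signature is closest in spectral space. A sketch with hypothetical two-band signatures:

```python
import numpy as np

# Mean DN vector per class, as computed from analyst-selected
# training areas (hypothetical two-band values).
signatures = {
    "water":      np.array([20.0, 10.0]),
    "vegetation": np.array([40.0, 90.0]),
    "urban":      np.array([80.0, 70.0]),
}

def classify(pixel):
    """Minimum-distance-to-means rule: assign the pixel to the class
    whose training signature is nearest in spectral space."""
    return min(signatures, key=lambda c: np.linalg.norm(pixel - signatures[c]))

print(classify(np.array([22.0, 12.0])))   # water
print(classify(np.array([45.0, 85.0])))   # vegetation
```

More elaborate rules (maximum likelihood, for instance) also account for the spread of each training class rather than only its mean, at the cost of more computation.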

Classification Scheme

Various classification schemes are available; among them are the U.S. Geological Survey Land Use/Land Cover Classification System, the Michigan Land Use Classification System, and the Cowardin Wetland Classification System. The major points of difference between the various schemes are their emphasis and their ability to incorporate information obtained from remote sensing data. The U.S. Geological Survey classification system is resource oriented, while the Standard Land Use Coding (SLUC) manual is people or activity oriented. The Michigan Land Use Classification (MLUC) is a hybrid system which incorporates both land use interpreted from remote sensing data and precise land use information obtained through ground survey. The U.S. Geological Survey Land Use/Land Cover Classification System is presented in Table 16.2.

Once a classification scheme has been adopted, the analyst may identify and select sites within the image that are representative of the land cover classes of interest. The image coordinates of these training sites are identified and used to extract statistics from the multispectral data for each of these areas. The success of a supervised classification depends upon the training data used to identify the different classes. Hence, the selection of training data is done meticulously, keeping in mind that each training data set has the same specific characteristics, such as number of pixels, size, shape, location, number of training areas, placement, and uniformity.

Table 16.2:  U.S. Geological Survey Land Use/Land Cover Classification System

(Level I categories, with Level II categories indented)

1. Urban and Built-up Land
   11. Residential
   12. Commercial and services
   13. Industrial
   14. Transportation, communications, and services
   15. Industrial and commercial complexes
   16. Mixed urban and built-up land
   17. Other urban and built-up land

2. Agricultural Land
   21. Cropland and pasture
   22. Orchards, groves, vineyards, nurseries, and ornamental horticultural areas
   23. Confined feeding operations
   24. Other agricultural land

3. Rangeland
   31. Herbaceous rangeland
   32. Shrub and brush rangeland
   33. Mixed rangeland

4. Forest Land
   41. Deciduous forest land
   42. Evergreen forest land
   43. Mixed forest land

5. Water
   51. Streams and canals
   52. Lakes
   53. Reservoirs
   54. Bays and estuaries

6. Wetland
   61. Forested wetland
   62. Non-forested wetland

7. Barren Land
   71. Dry salt flats
   72. Beaches
   73. Sandy areas other than beaches
   74. Bare exposed rock
   75. Strip mines, quarries, and gravel pits
   76. Transitional areas
   77. Mixed barren land

8. Tundra
   81. Shrub and brush tundra
   82. Herbaceous tundra
   83. Bare ground
   84. Mixed tundra

9. Perennial Snow and Ice
   91. Perennial snowfields
   92. Glaciers


Unsupervised Classification

Unsupervised classification is the reverse of supervised classification. In this approach, the spectral classes are first grouped based solely on the numerical information in the data, and are then matched by the analyst to information classes. Programs called clustering algorithms are used to determine the natural groupings or structures in the data. In addition to specifying the desired number of classes, the analyst may also specify parameters related to the separation distance among the clusters and the variation within each cluster. The final result of this iterative clustering process may include some clusters that the analyst would like to combine, or clusters that should be broken down further; each of these cases requires a further iteration of the clustering algorithm. Unsupervised classification does not start with a predetermined set of classes as supervised classification does. The algorithm demonstrating the fundamental logic of clustering is known as CLUSTER. Some other algorithms available are ISODATA, AMOEBA, and FORGY.
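The fundamental clustering loop can be sketched with a bare-bones k-means-style algorithm. This is a generic illustration of the iterative logic, not the CLUSTER algorithm itself; the pixel values are hypothetical.

```python
import numpy as np

def cluster(pixels, k, iterations=10, seed=0):
    """Iterative clustering: assign each pixel to the nearest cluster
    mean, recompute the means, and repeat."""
    rng = np.random.default_rng(seed)
    centres = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iterations):
        # Distance of every pixel to every cluster centre.
        d = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centres[c] = pixels[labels == c].mean(axis=0)
    return labels, centres

# Two well-separated spectral groups (hypothetical two-band DNs).
pixels = np.array([[10.0, 12.0], [11.0, 10.0], [90.0, 95.0], [92.0, 91.0]])
labels, centres = cluster(pixels, k=2)
print(labels)  # the first two pixels share one label, the last two the other
```

The analyst's role begins after this loop: the numbered clusters carry no meaning until they are matched to information classes.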

16.2.5  Classification Accuracy Assessment

No classification task using remote sensing data is complete until an assessment of accuracy is performed. In digital image processing, accuracy is a measure of the agreement between standard information at a given location and the information at the same location on the classified image. Generally, the accuracy of the classified data is assessed by comparing the map based on the analysis of the remote sensing data with a map based on information derived from the actual ground, known as the reference map, consisting of a network of discrete parcels, each designated by a single label. The simplest method of evaluating accuracy is to compare the areas assigned to each class or category in the two maps. This yields a report of the areal extents of the classes that agree with each other.
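A standard way to tabulate the agreement between a reference map and a classified image is a confusion (error) matrix; the counts below are hypothetical.

```python
import numpy as np

# Confusion matrix: rows = reference (ground) classes,
# columns = classes from the classified image (hypothetical parcel counts).
confusion = np.array([
    [48,  2,  0],   # water
    [ 5, 40,  5],   # vegetation
    [ 2,  3, 45],   # urban
])

# Overall accuracy: agreement (the diagonal) over all checked parcels.
overall_accuracy = confusion.trace() / confusion.sum()
print(round(overall_accuracy, 3))
```

The off-diagonal entries are as informative as the diagonal: each one shows which pair of classes the classifier confuses, which a simple comparison of class areas cannot reveal.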

16.2.6  Data Merging and GIS Integration

In the early days of analog remote sensing, when the only remote sensing data source was aerial photography, the capability for integrating data from different sources was limited. Today, with most data available in digital format from a wide array of sensors, data merging, also known as data integration, is a common method used for interpretation and analysis. Data merging fundamentally involves combining data from multiple sources in an effort to extract better and/or more information. For example, elevation data in digital form, called Digital Elevation Models or Digital Terrain Models (DEMs/DTMs), may be combined with remote sensing data for a variety of purposes. DEMs/DTMs may be useful in image classification, as effects due to terrain and slope variability can be corrected, potentially increasing the accuracy of the resultant classification. DEMs/DTMs are also useful for generating three-dimensional perspective views by draping remote sensing imagery over the elevation data, enhancing visualization of the area imaged. The merged data may be multi-temporal, multi-resolution, multi-sensor, or multi-data-type in nature. The data merging technique is frequently used to combine remotely sensed data with information from other sources in the context of a Geographic Information System (GIS).


16.3  FALSE COLOUR IMAGES USED IN INTERPRETATION

Images comprising different colours for different objects/features are produced using various combinations of the different bands of the spectrum. These images, with their different combinations of colours, are very useful in interpretation processes.

16.3.1  True Colour Image


To understand false colour, the concept behind true colour must first be understood. An image is called a true colour image when it offers a natural colour rendition, or comes close to it. This means that the colours of an object in the image appear to a human observer the same way as if the observer were directly viewing the object: a green tree appears green, a red apple red, a blue sky blue, and so on. A true colour image covers the full visible spectrum, with the red, green, and blue spectral bands of the satellite mapped to the RGB colour space of the image (Fig. 16.2).

Fig. 16.2  True colour Landsat satellite image

16.3.2  False Colour Image

The same area is shown in Fig. 16.3 as a false colour image, using the near infrared, red, and green spectral bands mapped to RGB. This image shows vegetation in red, as vegetation reflects much of the incident light in the near infrared.

16.3.3  Panchromatic Image

A panchromatic image, shown in Fig. 16.4, consists of only one band. It is usually displayed as a grey scale image, i.e., the displayed brightness of a particular pixel is proportional to its digital number, which is related to the intensity of solar radiation reflected by the targets in the pixel and detected by the sensor. Thus a panchromatic image is just like a black-and-white aerial photograph of the area, and both are interpreted in a similar manner.

Fig. 16.3  False colour Landsat satellite image

The urban area at the bottom left and a clearing near the top of the image have high reflected intensity, while the vegetated areas on the right part of the image are generally dark. The roads and the blocks of buildings in the urban areas are visible. A river flowing through the vegetated area and cutting across the top right corner of the image can be seen. The river appears bright due to sediments, while the sea at the bottom edge of the image appears dark.

16.3.4  Multispectral Image

A multispectral image consists of several bands of data. For visual display, each band may be displayed one at a time as a grey scale image, or three bands may be combined at a time as a colour composite image. Interpretation of a multispectral colour composite image requires knowledge of the spectral reflectance signatures of the targets in the scene, since in this case the spectral information content of the image is utilized in the interpretation. The three images in Fig. 16.5 (a, b, and c) show the three bands of a multispectral image extracted from a SPOT multispectral scene. It may be noted that both the XS1 (green) and XS2 (red) bands look almost identical to the panchromatic image shown above. In contrast, the vegetated areas now appear bright in the XS3 (near infrared) band due to the high reflectance of leaves in the near infrared wavelength region. Several shades of grey can be identified for the vegetated areas, corresponding to different types of vegetation. Water masses (both the river and the sea) appear dark in the XS3 (near IR) band.


Fig. 16.4  SPOT panchromatic image with a resolution of 10 m

16.3.5  Colour Composite Image

In displaying a colour composite image, the three primary colours red, green, and blue are used. When these colours are combined in various proportions, they produce the different colours of the visible spectrum (Fig. 16.6). Associating each spectral band (not necessarily a visible band) with a separate primary colour results in a colour composite image.



(a)  SPOT XS1 (green band)

(b)  SPOT XS2 (red band)


(c)  SPOT XS3 (near infrared band)

Fig. 16.5  Multispectral images

True Colour Composite Image

If a multispectral image contains the three visual primary colour bands (red, green, and blue) (Fig. 16.6), they may be combined to produce a true colour image. For example, bands 3 (red), 2 (green), and 1 (blue) of a Landsat TM image or an IKONOS multispectral image can be assigned respectively to the R, G, and B colours for display. In this way, the colours of the resulting colour composite image closely resemble what would be observed by the human eye (Fig. 16.7).





Fig. 16.6  Combination of the three primary colours (red, green, blue) produces many colours

Fig. 16.7  True-colour IKONOS image


False Colour Composite (FCC) Image

The display colour assignment for the bands of a multispectral image can be done in an entirely arbitrary manner. In this case, the colour of a target in the displayed image does not have any resemblance to its actual colour, and the resulting product is known as a false colour composite. There are many possible schemes for producing FCC images, and some schemes may be more suitable for detecting certain objects in the image. A very common FCC scheme for displaying a SPOT multispectral image is R = XS3 (NIR band), G = XS2 (red band), and B = XS1 (green band) (Fig. 16.8).
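The band-to-channel assignment in this common FCC scheme amounts to stacking the bands in NIR, red, green order. A minimal sketch with hypothetical 2x2 SPOT-style bands:

```python
import numpy as np

# Hypothetical bands, each a 2x2 image of DNs scaled to 0-255.
xs1_green = np.array([[30, 40], [35, 45]], dtype=np.uint8)
xs2_red   = np.array([[25, 35], [30, 40]], dtype=np.uint8)
xs3_nir   = np.array([[200, 60], [180, 50]], dtype=np.uint8)

# Common FCC scheme: R = NIR, G = red, B = green.
fcc = np.dstack([xs3_nir, xs2_red, xs1_green])

print(fcc.shape)   # (2, 2, 3): rows, columns, RGB channels
print(fcc[0, 0])   # a vegetated pixel: high R channel, so it displays as red
```

The first pixel's high NIR value lands in the red display channel, which is exactly why vegetation appears in shades of red in this scheme.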



Fig. 16.8  SPOT false colour composite multispectral image

Fig. 16.9  SPOT4 false colour composite of multispectral image (1)

This FCC scheme allows vegetation to be detected readily in the image, as vegetation appears in different shades of red depending on its type and condition. Clear water appears dark bluish, while turbid water appears cyan in comparison. Bare soils, roads, and buildings appear in various shades of blue, yellow, or grey depending on their composition. Another FCC scheme, for displaying an image with a short-wave infrared (SWIR) band, is R = SWIR band, G = NIR band, and B = red band (Fig. 16.9). In this image vegetation appears in shades of green, while bare soil and clear-cut areas appear purplish or magenta. The patch of bright red on the left marks the location of an active fire, and a smoke plume originating from the fire site appears faint bluish in colour. Another FCC of a SPOT 4 multispectral image, without the SWIR band, uses the scheme R = NIR band, G = red band, and B = green band (Fig. 16.10). In this FCC the vegetation appears in shades of red, and the smoke plume bright bluish white.

Natural Colour Composite Image For optical images lacking one or more of the three visual primary colour bands (i.e. red, green, and blue), the available spectral bands (some of which may not be in the visible region) may be combined in such a way that the displayed image resembles a visible colour photograph, i.e. vegetation in green, water in blue, soil in brown or grey, etc. Such images are sometimes called true colour composites. This term is misleading, since in many instances the colours are only simulated to look similar to the true colours of the targets; it is therefore more appropriate to call them natural colour composites.


Fig. 16.10  SPOT4 false colour composite of multispectral image (2)

The SPOT HRV multispectral sensor does not have a blue band. Its three bands XS1, XS2, and XS3 correspond to the green, red, and NIR bands, respectively. Nevertheless, a reasonably good natural colour composite can be produced by the following combination of the spectral bands: R = XS2, G = (3 XS1 + XS3)/4, and B = (3 XS1 - XS3)/4 (Fig. 16.11).
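The band arithmetic above is easy to verify numerically. A minimal sketch with hypothetical XS1/XS2/XS3 pixel values:

```python
import numpy as np

# Hypothetical SPOT HRV band arrays (XS1 = green, XS2 = red, XS3 = NIR).
xs1 = np.array([[80.0, 90.0]])    # green band
xs2 = np.array([[60.0, 70.0]])    # red band
xs3 = np.array([[120.0, 140.0]])  # NIR band

# Band combination from the text: R = XS2, G = (3*XS1 + XS3)/4,
# B = (3*XS1 - XS3)/4 simulates a natural colour composite
# without a true blue band.
r = xs2
g = (3 * xs1 + xs3) / 4
b = (3 * xs1 - xs3) / 4

natural = np.dstack([r, g, b])
```

The G and B channels are weighted mixtures of the green and NIR bands, chosen so that vegetation (high NIR) is pushed toward green rather than red.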

Fig. 16.11  SPOT natural colour composite multispectral image

17 Application of Remote Sensing

17.0 INTRODUCTION The remote sensing technique has wide application in almost every field of engineering, science, and management. It is successfully applied in the planning, surveying, and management of natural resources. It has provided easier techniques for effective and efficient mapping of land, water, soil, forest, agriculture, and urban growth, as well as for flood plain mapping, crop acreage estimation, etc. Methods for monitoring vegetation change range from intensive field sampling with plot inventories to extensive analysis of remotely sensed data, which has proven more cost effective. Damage caused by natural calamities such as earthquakes, cyclones, and tsunamis can be estimated in a timely and efficient manner using remote sensing. Disaster management is another area where the remote sensing technique is being applied successfully. Detailed information can be extracted from satellite data on a temporal basis and used as an input to a Geographic Information System. Satellite imagery and GIS maps of land cover, land use, and their changes are key to many diverse applications in environment, forestry, hydrology, agriculture, and geology.

Evaluation of the static attributes of land cover (types, amount, and arrangement) and its dynamic attributes (types and rates of change) on satellite image data may allow the types of change to be regionalized and the approximate sources of change to be identified or inferred. Satellite images with moderate to high resolution have facilitated scientific research at landscape and regional scales. Available satellite imagery can provide spatial resolutions of better than 0.5 m for the assessment and monitoring of urban growth and transportation development. Moreover, multispectral bands provide increased spectral resolution that can be used to further analyze and classify environmental conditions, land cover, and change, and to study how urban growth and associated transportation development impact these conditions. Satellite image analysis allows for:

(a) Fast and accurate overview, (b) Quantitative green vegetation assessment, and (c) Assessment of underlying soil characteristics.
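Quantitative green vegetation assessment is commonly carried out with a band ratio such as the Normalized Difference Vegetation Index, NDVI = (NIR - Red)/(NIR + Red). NDVI is offered here as a standard illustration (the text does not name a specific index), with hypothetical reflectance values:

```python
import numpy as np

# Hypothetical NIR and red reflectance values for three pixels:
# dense vegetation, sparse vegetation, bare soil.
nir = np.array([0.5, 0.4, 0.1])
red = np.array([0.1, 0.2, 0.1])

# NDVI = (NIR - Red) / (NIR + Red); healthy green vegetation has high NIR
# and low red reflectance, giving values near +1, bare soil near 0,
# and water typically negative.
ndvi = (nir - red) / (nir + red)
print(ndvi)
```

Higher NDVI values over time indicate greening; a drop can flag crop stress or clearing, which is why such indices are used for the monitoring tasks listed above.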

Satellite images enable direct observation of the land surface at repetitive intervals, and therefore, allow mapping of the extent, and monitoring and assessment of:

(i) Crop health,
(ii) Storm water runoff,
(iii) Change detection,
(iv) Air quality,
(v) Environmental analysis,
(vi) Energy savings,
(vii) Irrigated landscape mapping,
(viii) Carbon storage and avoidance,
(ix) Yield determination, and
(x) Soils and fertility analysis.

17.1  MORE ABOUT THE APPLICATION OF REMOTE SENSING DATA

Automated datasets can be used for vegetation and land cover analysis by updating the projected area and incorporating a more recent image to determine the changes. High and medium resolution satellite image data at different spatial, spectral, and temporal resolutions, using the appropriate combination of bands, can be employed to bring out the geographical and man-made features most pertinent to a particular change-detection project. Stereo satellite image data can be used for the production of 3D terrain visualization products, including Digital Surface Models (DSMs) and Digital Elevation Models (DEMs), which are generated from a variety of resources. DEMs are utilized in support of the analysis, monitoring, and management of many environmental assessments. A few examples of remote sensing applications, taken from three M. Tech. theses carried out at the Indian Institute of Technology, Roorkee, India, are discussed below to help readers understand how this technique can be applied in the real world.

17.2  LAND USE AND LAND COVER MAPPING Land is one of the critical natural resources on which most developmental activities are based. For the success of any planning activity, detailed and accurate information regarding land cover and the associated land use is of paramount importance. To undertake proper, systematic, and structured land cover/land use mapping, it is important to identify land cover/land use classes as per a classification scheme, such as the U.S. Geological Survey Land Use/Land Cover Classification system, or to develop a new one. A study on land use and land cover for a part of Haridwar district was carried out for an area of nearly 260 km² lying between 78°07′13″ E and 78°16′14″ E longitudes and 30° N and 30°08′53″ N latitudes. The area is primarily covered with forest, vegetation, built-up areas, water, sand, etc. An IRS-1C LISS-III image of April 03, 2000, was used along with a PAN image of the same date. The methodology adopted is shown in Fig. 17.1. On the basis of field visits, eleven classes were identified: thin forest, medium forest, dense forest, fallow land, shrubs, open land, shallow water, deep water, wet sand, dry sand, and built-up area. Training data were identified and extracted using ERDAS Imagine software. The training data statistics gave an Average Transformed Divergence of 1999, indicating that all the classes had good spectral separability. Two classifiers, namely Minimum Distance to Mean and Maximum Likelihood, were used for classifying the land use and land cover of the area. The classified image was tested for accuracy, and it was found that the Minimum Distance to Mean classifier gave an overall accuracy of 90%, while the Maximum Likelihood classifier gave an overall accuracy of 94.44%.
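The Minimum Distance to Mean classifier mentioned above assigns each pixel to the class whose training mean is nearest in spectral space. A minimal sketch with hypothetical class means and pixel vectors (the class labels are illustrative, not from the study):

```python
import numpy as np

# Hypothetical class means derived from training data: each row is the
# mean spectral vector (3 bands here) of one land-cover class.
class_means = np.array([
    [30.0, 40.0, 20.0],    # e.g. deep water
    [90.0, 80.0, 120.0],   # e.g. dense forest
    [150.0, 140.0, 60.0],  # e.g. built-up area
])

def classify_min_distance(pixels, means):
    """Assign each pixel to the class whose mean is nearest (Euclidean)."""
    # pixels: (n, bands); means: (k, bands) -> distances d: (n, k)
    d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return d.argmin(axis=1)

pixels = np.array([[32.0, 38.0, 22.0], [148.0, 135.0, 65.0]])
print(classify_min_distance(pixels, class_means))  # [0 2]
```

The Maximum Likelihood classifier refines this idea by also using each class's covariance, which is why it typically scores higher accuracy, as in the study above.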


Fig. 17.1  Flowchart of methodology adopted for land use/land cover mapping

17.3  GROUND WATER MAPPING

Water is an extremely important resource. Groundwater occurrence at any place is a consequence of the interaction of climatic, geologic, hydrologic, physiographic, and ecological factors. Groundwater in India occurs mainly in unconsolidated, semi-consolidated, and consolidated geological formations. The search for groundwater is confined to the most promising zones in terms of porosity and permeability. Remotely sensed data provide quick and useful baseline information on the factors controlling the occurrence and movement of groundwater, such as geology (lithology/structure), geomorphology, soils, land use/land cover, etc. A systematic study of these factors leads to better delineation of prospective groundwater zones in a region. Such prospective zones identified from satellite imagery are normally followed up on the ground through detailed hydrological and geophysical investigations before actual drilling is carried out for exact quantitative assessment and exploitation. Satellite data help greatly in the identification of linear features such as fractures/faults, which are usually the zones of localization of groundwater in hard rock areas, and of certain geomorphic features, such as alluvial fans, buried channels, etc., which often form good aquifers. The general keys for the detection of groundwater aquifers are: identification of springs, seeps, and phreatophytes indicating shallow water table conditions; differentiation of vegetation types closely related to the depth and salinity of groundwater; and location and monitoring of groundwater systems under stress. The methodology adopted for the preparation of district-wise hydrogeomorphological maps on 1:250,000 scale comprised the following steps (Fig. 17.2):

(i) Data procurement,
(ii) Base-map preparation,
(iii) Preliminary interpretation,
(iv) Ground checks,
(v) Final interpretation, and
(vi) Final map preparation.

The data used initially were Landsat Thematic Mapper imagery in the form of FCC using bands 2, 3, and 4 with 30 m spatial resolution, and later IRS-1A LISS-II (FCC) imagery with 36 m spatial resolution. In some cases, LISS-I imagery with 73 m spatial resolution was also used. The imagery was in the form of paper prints on 1:250,000 scale, or transparencies which could be enlarged to 1:250,000 scale. To start with, district-wise base maps were prepared on 1:250,000 scale, showing major drainage courses, important localities, roads, rail lines, and a few other cultural features, such as canals. The next step involved a systematic visual interpretation of the satellite imagery to delineate various geomorphic features/landforms and depict them on the base map. These geomorphic features/landforms were then evaluated critically in terms of the broad lithology they are composed of, the associated structural features, the development of drainage around and on them, the broad soil type and its effective depth, the thickness of the weathered mantle in the case of hard rock country, and the type of land use/land cover prevailing in the area, to finally arrive at the groundwater prospects. Existing geological maps and other collateral data, supplemented by field checks, were also used. District-wise hydrogeomorphologic maps were thus prepared for all the districts of the country.

Fig. 17.2  Steps in hydrogeomorphological mapping using satellite

On the IRS imagery used for this study, it was only possible to identify broad rock groups based on factors like the spectral reflectance of exposed rock types in conjunction with associated structural, drainage, and landform characteristics. Fractures, faults, and their intersections in the hard rock terrain, as well as certain synclinal structures which at places are the loci for groundwater occurrence in sedimentary rocks, were interpreted carefully from the IRS imagery. Certain landforms, such as buried channels, present-day valley fills, alluvial fans, bajadas, etc., having a known relationship with groundwater occurrence, were readily mapped from the IRS imagery. The utility of the satellite data in studying the drainage pattern/density, including the palaeo-drainage (which has a bearing on recharge conditions and localization of groundwater), was also fully harnessed. The distribution of soils with regard to their broad textural classes, and the land use/land cover pattern, which could be mapped from IRS imagery, were also given due consideration.

17.4  DISASTER MANAGEMENT

A disaster is an abnormal condition of the environment which can exert a serious and damaging effect on human, animal, and plant life beyond a certain critical level of tolerance. Some of the disasters that occur worldwide on a regular basis are earthquakes, floods, cyclones, avalanches, landslides, tsunamis, droughts, forest fires, etc. Satellite technology can help in disaster preparedness by providing repetitive and synoptic up-to-date information on the locally available resources, and by facilitating timely forecasting of the event so that alternative arrangements can be made. Disaster prevention measures can also be improved through satellite technology. Satellite data can help in disaster relief operations by providing information on the extent of the areas affected, the magnitude of the damage, and the needs of the local population. For effective disaster management time is a crucial factor, and hence information should be available in near real time. Low spatial but high temporal resolution data are often valuable for certain phenomena, such as floods. Geostationary satellite data are capable of providing information every half an hour, and are useful in monitoring short-term disasters like cyclones and tornadoes. A combination of high spatial, temporal, and spectral resolution data is certainly better and more beneficial in disaster management. The present study was made to assess the extent of damage in India caused by the tsunami that struck on the morning of December 26, 2004. This information tends to vary with resolution and sensor type. The LISS IV AWiFS image gave an idea of the extent of damage around the city of Karaikal. The damage to Chennai city could be clearly seen on the IRS-P6 Mx image, and it was found that the mouth of the river Adayar had undergone drastic changes. The extent of damage was seen more clearly on the OrbView-3 image, in which the recent changes to the river channel configuration, and also the depositional pattern of bed material, could be seen.


SECTION V

GEOGRAPHIC INFORMATION SYSTEM


18 Geographic Information System

18.0 INTRODUCTION The collection of data about the spatial distribution of significant properties of the Earth's surface, in the form of maps made by navigators, geographers, and surveyors, has long been an important activity of organized society. Whereas topographical maps can be regarded as general purpose maps, thematic maps for the assessment and understanding of natural resources serve scientific purposes. The use of aerial photography and remote sensing has made it possible to map large areas with greater accuracy, producing thematic maps for resource exploitation and management. Handling large volumes of spatially varying data requires appropriate tools to process the spatial data using statistical methods and time series analysis. With the introduction of computer-assisted cartography, many new tools were developed to perform spatial analysis of the data and to produce maps in desired formats. These operations required a powerful set of tools for collecting, storing, retrieving, transforming, and displaying spatial data from the real world for a particular set of purposes. This set of tools constitutes a Geographic Information System (GIS). A GIS should be thought of as much more than a means of coding, storing, and retrieving data about aspects of the Earth's surface, because these data can be accessed, transformed, and manipulated interactively for studying environmental processes, analyzing the results for trends, or anticipating the possible results of planning decisions.

18.1  DEFINITION OF GIS

A GIS is an information system that is designed to work with data referenced by spatial or geographical coordinates. In other words, a GIS is both a database system with specific capabilities for spatially referenced data and a set of operations for working with those data. In view of these tasks, many definitions of GIS have been given, but the most rigorous one comes from the Environmental Systems Research Institute (ESRI), U.S.A.: "An integrated collection of computer hardware, software, geographical data, and personnel designed to efficiently capture, store, update, manipulate, analyze, and display all forms of geographically referenced information". A GIS is also the result of linking parallel developments in many separate fields of spatial data processing, as shown in Fig. 18.1.

[Fig. 18.1 shows GIS at the centre, linking CAD and computer graphics, cartography (high quality drafting), surveying and photogrammetry, spatial analysis using rasterized data from thematic maps, remote sensing technology, and user needs.]

Fig. 18.1  Linking several initially separate but closely related fields through GIS

18.2  COMPONENTS OF GIS A GIS comprises the following components:

1. Computer system (hardware and operating system),
2. Software,
3. Spatial data,
4. Data management and analysis procedures, and
5. Personnel to operate the GIS.

[Fig. 18.2 shows the major hardware components: a C.P.U. connected to a V.D.U., keyboard, mouse, disk drive, tape drive, digitizer, scanner, and printer.]

Fig. 18.2  Major hardware components of a GIS

The hardware components of a GIS comprise a C.P.U., disk drive, tape drive, digitizer, plotter, and V.D.U. (Fig. 18.2). The operating system of the computer is a kind of master control program that manages all the activities required to implement the GIS. A GIS software package has a set of modules for performing digitization, editing, overlaying, networking, vectorizing, data conversion, and analysis, and for answering queries and generating output. All GIS software is designed to handle spatial data (characterized by information about position, connections with other features, and details of non-spatial characteristics) referenced using a suitable geographic referencing system. The management of data in GIS includes storage, organization, and retrieval using a Database Management System (DBMS). The analysis procedures include storage and retrieval capabilities for presenting the required information, queries allowing the user to look for patterns in the data, and prediction or modeling capabilities to estimate what the data might be at a different time and place. The success of any GIS project depends upon the skills and training of the personnel handling the project.

18.3  UNDERSTANDING THE GIS A GIS is software that deals with real-world geographic information. The geographic information may be in the form of spatial data, or of non-spatial data, known as attributes, assigned to the spatial data. Geographic information has the following properties:

1. Locations, 2. Attributes, and 3. Spatial relationships.

In other words, real-world geographic information is represented in the computer using GIS. Three steps are thus required to go from real-world geographic objects to geographic objects on the computer:

1. Representation of geographic reality, 2. Linking attributes to geographic representation, and 3. Spatial relationships between geographic representations.

Geographic reality is represented in GIS through data models. Two data models are commonly used in GIS:

1. Vector data, and 2. Raster data.

Chapter 18

The vector data model is used for well defined objects such as trees, rivers, and lakes, using points, lines, and polygons, respectively, whereas the raster data model is used for continuously changing attributes such as elevation. Since real-world geographic objects have locations, their positions are defined using some coordinate system, preferably a Cartesian coordinate system obtained through some projection system. This is known as georeferencing. Once all the georeferenced data of the real-world objects, with their attributes, have been stored in the database of the GIS, the GIS can be applied to the project for which the data have been acquired, performing all the GIS operations given in the definition of GIS, including queries and analysis, and producing results in the desired format.
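The two data models can be sketched with plain Python structures, vector features as coordinate lists with attributes, and a raster as a grid of class codes. All names and values below are hypothetical illustrations, not a real GIS schema.

```python
# Vector data model: each discrete object is a geometry type plus a
# coordinate list, with non-spatial attributes attached.
vector_layer = {
    "tree":  {"geometry": ("point", [(5.0, 7.0)]),
              "attributes": {"species": "oak"}},
    "river": {"geometry": ("line", [(0, 0), (2, 1), (4, 3)]),
              "attributes": {"name": "A"}},
    "lake":  {"geometry": ("polygon", [(0, 0), (4, 0), (4, 3), (0, 3)]),
              "attributes": {"depth_m": 12}},
}

# Raster data model: a grid of cells, each holding a class code
# (1 = water, 2 = forest in this sketch); suited to continuous phenomena.
raster_layer = [
    [2, 2, 1],
    [2, 1, 1],
]
```

The vector layer records exact boundaries, while the raster layer trades boundary precision for a uniform structure that is easy to overlay and analyze cell by cell.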

19 GIS Data

19.0 INTRODUCTION The success of any GIS project depends on the geographic data available for its implementation. The data should be collected from all available sources as per requirement and stored in the format required by the GIS software. The output data from the GIS should be in formats suited to the needs of the user. Geographic data in digital form are numerical representations of the real world: they describe real-world features and phenomena, coded in specific ways to support GIS and mapping applications using the computer. The digital geographic data must be organized as a geographic database. Roughly two-thirds of the total cost of implementing a GIS goes into building the GIS database, whose accuracy has a significant impact on the usefulness of the GIS.

19.1  INPUT DATA AND SOURCES Input data for GIS cover all aspects of capturing spatial data and attribute data. The sources of spatial data are existing maps, aerial photographs, satellite imageries, field observations, and other sources (Fig. 19.1). Spatial data which are not in digital form are converted into a standard digital form using a digitizer or scanner for use in GIS.

[Fig. 19.1 shows maps, aerial photographs, satellite imageries, field observations, and other sources feeding digital input data into the GIS.]

Fig. 19.1  Input data for GIS


The digital spatial data, in an acceptable format, and the attribute data are stored in the computer memory and managed by the DBMS, which is a part of the GIS, for analysis and for producing results in user-desired formats.

19.2  DATA ACQUISITION

Data acquisition in GIS refers to all aspects of collecting spatial data from all available sources and converting them to a desired standard digital form. This requires tools such as an interactive computer screen and mouse, digitizer, word processors and spreadsheet programs, scanners, and devices for reading data already written on magnetic media such as tapes or CD-ROMs.

19.2.1  Data from Satellite Remote Sensing The terrain data acquired through sensors onboard satellite platforms, being in digital format, can be directly used after preprocessing for preparing a GIS database. These data are coded in picture elements called pixels, and are stored in the form of a two-dimensional matrix.

19.2.2  Data from Existing Maps Maps are available for a large part of the country, and the data from these maps are acquired by digitization using digitizers. Scanners are also used for digitization when the area is extensive and a large number of maps are to be digitized. Elevation data extracted from contours have poor accuracy compared to spot heights.

19.2.3  Data from Photogrammetry When the area of interest is extensive or too rugged, the photogrammetric method is employed to collect the digital terrain data using appropriate photogrammetric instruments.

19.2.4  Data from Field Surveying Terrain data in digital form can be obtained directly by field surveying methods by employing instruments such as electronic tacheometer or total station.

19.2.5  Data from GPS The Global Positioning System (GPS) is a satellite-based surveying system from which digital terrain data in the form of x, y, and z coordinates are directly obtained (c.f., Section VI).

19.2.6  Data from Internet/World Wide Web (WWW) The Internet is a vast network of digital computers linked by an array of different data transfer media, such as satellite and radio links and fiber optic cables. The transfer of data in digital form is carried out using the standard coupling protocol known as TCP/IP (Transmission Control Protocol/Internet Protocol). For GIS users the WWW provides data and is a source of information; whole libraries of vector, raster, and object data are offered on the Internet.

19.2.7  Attribute Data Tagging Attribute data such as feature identifiers, feature codes, and contour labels are interactively added on screen to the graphical data after raster-vector conversion. The attributes are stored in tables and then linked with objects in GIS layers. The tables are:

(i) Point attribute table, (ii) Arc attribute table, and (iii) Polygon attribute table

depending on whether the attributes are linked to point, line or polygon layer.
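Linking an attribute table to spatial features via a shared feature identifier, as described above, can be sketched as a simple join; all field names and values here are hypothetical.

```python
# Spatial data: each feature carries a feature ID (fid) and a geometry.
features = [
    {"fid": 1, "geometry": ("point", (77.2, 28.6))},
    {"fid": 2, "geometry": ("point", (78.0, 30.3))},
]

# Point attribute table keyed by feature ID.
attributes = {
    1: {"feature_code": "WELL", "label": "W-101"},
    2: {"feature_code": "BM", "label": "BM-7"},
}

def join_attributes(features, attributes):
    """Attach each feature's attribute record by its feature ID."""
    return [{**f, **attributes.get(f["fid"], {})} for f in features]

joined = join_attributes(features, attributes)
print(joined[0]["feature_code"])  # WELL
```

Arc and polygon attribute tables work the same way, keyed to line and polygon features respectively.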

19.3  LAYER CONCEPT OF DATA STORAGE IN GIS Every geographical phenomenon is in principle a point, line, or area, plus a label saying what it is, i.e., an attribute of the geographical object. The labels could be actual names, numbers that cross-reference a legend, or special symbols. All these techniques are used in conventional mapping. The concept of layers or themes is very important in GIS (Fig. 19.2). In a topographic map all components are seen in a single sheet, but in GIS each component is presented in a separate layer. For example, different layers will have different themes, such as buildings, topography, land use, soil type, etc. The layers in GIS help in decisions on spatial queries.

Fig. 19.2  Layer concept of data storage in GIS

19.4  DATA VERIFICATION AND EDITING

It is important to check the acquired data for errors due to possible inaccuracies, omissions, and other factors. Errors in spatial data are generally checked by printing the data or by taking a computer plot, preferably on translucent or thin paper, to compare it by overlaying on the original map. Checking of the attribute data is also done by visual inspection of a printout or by comparing the scanned data with the original on the computer. Errors may arise during the capture of spatial and attribute data in the following cases:

(a) Spatial data are incomplete or double,
(b) Spatial data are in the wrong place,
(c) Spatial data are defined using too many coordinate pairs,
(d) Spatial data are at the wrong scale, and
(e) Spatial data are distorted.

The errors in the attribute data may be:

(a) Wrong name of a feature,
(b) Wrong value of a quantity,
(c) Interchanged values, and
(d) Missing values or names.
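Part of this attribute-data verification can be automated. A minimal sketch (hypothetical records and field names) that flags missing attribute values and duplicate feature identifiers:

```python
# Hypothetical attribute records awaiting verification before loading.
records = [
    {"fid": 1, "name": "Forest", "area_ha": 120.5},
    {"fid": 2, "name": None, "area_ha": 33.0},    # missing name
    {"fid": 2, "name": "Lake", "area_ha": 33.0},  # duplicate fid
]

def verify(records):
    """Return a list of (error type, feature ID) pairs found in records."""
    errors = []
    seen = set()
    for r in records:
        if any(v is None for v in r.values()):
            errors.append(("missing value", r["fid"]))
        if r["fid"] in seen:
            errors.append(("duplicate id", r["fid"]))
        seen.add(r["fid"])
    return errors

print(verify(records))  # [('missing value', 2), ('duplicate id', 2)]
```

Checks for wrong or interchanged values still need human inspection against the source map, but mechanical errors like these are caught cheaply in software.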

19.5  GEOREFERENCING OF GIS DATA A spatial referencing system is required to handle spatial information. The primary aim of a reference system is to locate a feature on the Earth’s surface or a 2D representation of this surface such as a map. The objective of georeferencing is to provide a rigid spatial framework by which the positions of the real-world features are measured, computed, stored, and analyzed in terms of length of a line, size of an area, and shape of a feature. The several methods of georeferencing may be grouped into the following three categories:

1. Geographic coordinate system, 2. Rectangular coordinate system, and 3. Non-coordinate system.

The geographic coordinate system is the only system that defines the true geographical coordinates of a point on the Earth’s surface in terms of latitude and longitude. The rectangular coordinate system is in the form of a graticule laid down on a 2D surface. The coordinates of the points on the surface of the Earth in terms of x- and y-coordinates in the plane of the graticule are obtained by some map projection system (most commonly used is the Universal Transverse Mercator), and the z-coordinate in third dimension is represented by some technique such as contours of elevations. In the non-coordinate system, spatial referencing is done using descriptive code, such as Postal Index Number (PIN) Code used in India.
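As a small illustration of how the rectangular system is tied to the geographic one: the UTM projection divides the Earth into 6° longitude zones numbered 1 to 60 eastwards from 180° W, so the zone for a point follows directly from its longitude. A minimal sketch:

```python
def utm_zone(longitude_deg):
    """UTM zone number (1-60) for a longitude given in degrees east."""
    # Shift the origin to 180 deg W, count 6-degree strips, wrap at 60.
    return int((longitude_deg + 180) / 6) % 60 + 1

# Longitude of the Haridwar study area from the text (about 78.16 deg E).
print(utm_zone(78.16))  # 44
```

The x- and y-coordinates within a zone then come from the Transverse Mercator projection formulas for that zone's central meridian.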


19.6  SPATIAL DATA ERRORS It is essential that GIS products be of high quality. This is achieved by using quality data with minimum errors. GIS data may have some inherent errors and some errors produced by the system. Assessing data quality means examining the sources, propagation, and management of errors, along with issues such as accuracy, precision, and bias. The portrayal of features of interest is affected by the resolution and generalization of the source data and the data model. Data sets used in GIS for analysis should be complete, compatible, and consistent. Errors in GIS data may be categorized as:

(i) Conceptual errors,
(ii) Source data errors,
(iii) Data encoding errors,
(iv) Data editing and conversion errors,
(v) Data processing and analysis errors, and
(vi) Data output errors.

19.7  SPATIAL DATA MODELS AND STRUCTURE Digital geographic data represent real-world features and phenomena in numeric form, coded in a specific way to support GIS and mapping applications using the computer. The ways of representing data are known as data models. Real-world features can be represented as objects or phenomena. While objects are discrete and definite, such as buildings, roads, cities, and forests, phenomena are distributed continuously over a large area, such as topography, population, temperature, rainfall, and noise level. Consequently there are the following two distinct approaches to representing the real world in a geographic database:

(a) Object-based model, and (b) Field-based model.

In the object-based model, the geographic space is treated as being filled by discrete and identifiable objects. If the objects represent discrete features, such as buildings, roads, and land parcels, they are said to be exact objects; if the characteristics of the objects change gradually across the assumed boundaries between neighboring spatial objects, the objects are said to be inexact objects or fuzzy entities. Soil types, forest stands, and wildlife habitats are examples of inexact objects. The field-based model treats geographic space as populated by one or more spatial phenomena varying continuously over space with no obvious or specific extent, such as elevation, average income, and groundwater level. At the database level, data in an object-based spatial database are mostly represented in the form of coordinate lists, and the spatial database is generally referred to as the vector data model. When a spatial database is structured on the field-based model, the basic spatial units are different forms of tessellation by which phenomena are depicted. The most commonly used type of tessellation is a finite grid of square or rectangular cells, and thus field-based data models are generally known as the raster data model.


In the vector data model the discrete objects are represented by the entities point, line, and polygon, as shown in Fig. 19.3. Continuously changing phenomena are represented by the raster data model, the structure of which is shown in Fig. 19.4.

Fig. 19.3  Representation of exact objects by vector data model

[Fig. 19.4 shows an entity model with four classes (Forest, Cultivation, Water, Residential), the corresponding grid of numeric cell values (class codes 1-4), and the resulting file structure.]

Fig. 19.4  Raster data structure
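One common compact file structure for such a raster stores each run of identical cell codes as a (value, count) pair, i.e. run-length encoding. The text does not specify this particular encoding, so the sketch below is illustrative:

```python
def run_length_encode(row):
    """Compress one raster row into (value, count) runs."""
    runs = []
    for value in row:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1  # extend the current run
        else:
            runs.append([value, 1])  # start a new run
    return [tuple(r) for r in runs]

# One row of class codes like those in Fig. 19.4 (2 = cultivation, 1 = forest).
row = [2, 2, 2, 2, 1, 1, 1, 1]
print(run_length_encode(row))  # [(2, 4), (1, 4)]
```

Because land-cover codes repeat in long runs, this stores the row in two pairs instead of eight cells, which is why run-length schemes suit classified rasters.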

19.8  GIS DATABASE AND DATABASE MANAGEMENT SYSTEM A large portion of the GIS data collected from various sources is managed in a database. The database approach provides a robust method of organizing and processing data in the computer. However, it also requires more effort and resources to implement, and therefore the design and building of GIS databases is by necessity an elaborate process. Databases offer more than just a method of handling the attributes of spatial entities; they can help to convert the data into information with a value. Information results from the analysis or organization of data, and in a database, data can be ordered, re-ordered, summarized, and combined to provide information. A database can perform sorting, ordering, conversion, calculation, and summarization. For making decisions, the GIS needs information derived from the data stored in the database. A GIS database is a collection of multiple files, made up of three basic types of file structure for the storage, retrieval, and organization of data. The data in a database may be in the form of simple lists, ordered sequential files, and indexed files. A Database Management System (DBMS) is a computer program designed to store and manage large amounts of data. The overall objective of a DBMS is to allow users to deal with data without needing to know how the data are physically stored and structured in the computer. Although new forms of database structure are being developed all the time, there are three fundamental ways of organizing information, which also reflect the logical models used to model real-world structure:

1. Hierarchical database structure,
2. Network database structure, and
3. Relational database structure.
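As a concrete illustration of the relational structure, the following sketch uses Python's built-in sqlite3 module; the parcel table, its columns, and its values are hypothetical, chosen only to show the sorting and summarization a DBMS performs:

```python
# Sketch of a relational attribute table for spatial entities, using
# Python's built-in sqlite3 module with an in-memory database.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE parcel (id INTEGER PRIMARY KEY, landuse TEXT, area_ha REAL)")
cur.executemany(
    "INSERT INTO parcel (landuse, area_ha) VALUES (?, ?)",
    [("forest", 12.5), ("residential", 3.2), ("forest", 8.1), ("water", 1.4)],
)

# Summarization: total area per land-use class, ordered by area.
cur.execute(
    "SELECT landuse, SUM(area_ha) FROM parcel "
    "GROUP BY landuse ORDER BY SUM(area_ha) DESC"
)
rows = cur.fetchall()
for landuse, total in rows:
    print(landuse, total)
conn.close()
```

In a real GIS the attribute rows would additionally be linked to the geometry of each spatial entity.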

19.9  TOPOLOGY

Topology is defined as the spatial relationships between adjacent or neighbouring features. In general, a topological data model manages spatial relationships by representing spatial objects (point, line, and polygon features) as an underlying graph of topological primitives: nodes, faces, and edges. These primitives, together with their relationships to one another and to the features whose boundaries they represent, are defined by representing the feature geometries in a planar graph of topological elements. Topology is fundamentally used to ensure the data quality of the spatial relationships and to aid in data compilation. Topology is also used for analyzing spatial relationships in many situations, such as dissolving the boundaries between adjacent polygons with the same attribute values, or traversing a network of the elements in a topology graph. Topology can also be used to model how the geometry from a number of feature classes can be integrated; this is sometimes referred to as vertical integration of feature classes. Generally, topology is employed to do the following:

• Manage coincident geometry (constrain how features share geometry). For example, adjacent polygons, such as parcels, have shared edges; street centerlines and the boundaries of census blocks have coincident geometry; adjacent soil polygons share edges; etc.
• Define and enforce data integrity rules (such as no gaps should exist between parcel features, parcels should not overlap, road centerlines should connect at their endpoints).
• Support topological relationship queries and navigation (for example, to provide the ability to identify adjacent and connected features, find the shared edges, and navigate along a series of connected edges).
• Support sophisticated editing tools that enforce the topological constraints of the data model (such as the ability to edit a shared edge and update all the features that share the common edge).
• Construct features from unstructured geometry (e.g., the ability to construct polygons from lines, sometimes referred to as "spaghetti").

The topology between the features is generated through the topological elements face, edge, and node, and the direction of each edge. Fig. 19.5 shows the topological elements and the relationships between them. The faces are A, B, etc.; the edges are 1, 2, 3, etc.; the nodes are 1, 2, etc., written in diamonds; and the directions of edges are shown by arrows.
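The idea of coincident (shared) geometry can be illustrated with a small sketch; the parcel coordinates below are invented for the example:

```python
# Sketch: detecting the shared edge between two adjacent polygons, the kind
# of coincident geometry a topological data model manages. Each polygon is a
# closed ring of (x, y) vertices; an edge is stored as a frozenset of its two
# endpoints so that direction does not matter.

def edges(ring):
    return {frozenset((ring[i], ring[i + 1])) for i in range(len(ring) - 1)}

parcel_a = [(0, 0), (2, 0), (2, 2), (0, 2), (0, 0)]
parcel_b = [(2, 0), (4, 0), (4, 2), (2, 2), (2, 0)]

# The intersection of the two edge sets is the common boundary.
shared = edges(parcel_a) & edges(parcel_b)
print(shared)
```

In a topological model this shared edge would be stored once and referenced by both parcels, so editing it updates both features at the same time.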

Chapter 19

GIS Data

Fig. 19.5  Topological elements and relationships

There can be the following six types of topology (Fig. 19.6):

1. Arc-node topology: Line features can share end points (Fig. 19.6a),
2. Polygon topology: Area features can share boundaries (Fig. 19.6b),
3. Route topology: Line features can share segments with other line features (Fig. 19.6c),
4. Region topology: Area features can overlap with area features (Fig. 19.6d),
5. Node topology: Line features can share end point vertices with point features (Fig. 19.6e), and
6. Point events: Point features can share vertices with line features (Fig. 19.6f).

Fig. 19.6  Types of topology

The following illustrations show that there are two alternatives for working with features—one in which features are defined by their coordinates (Fig. 19.7), and another in which features are represented as an ordered graph of their topological elements (Fig. 19.8).

Fig. 19.7  Polygon features

Fig. 19.8  Topological elements and relationship

19.10  TYPES OF OUTPUT PRODUCTS

The output products can be of various kinds, and since these products are computer generated, the user/analyst should be aware of the output options available in the GIS software. The most common graphic products produced by GIS are various types of thematic maps, as given below:

(a) Thematic maps,
(b) Choropleth maps,
(c) Proximal or dasymetric maps, and
(d) Contour maps.

The other kinds of maps required for a specific purpose are:

(a) Dot maps, (b) Line maps, (c) Land form maps, (d) Animated maps, (e) Bar charts, (f) Pie charts, (g) Scatter plots, and (h) Histograms.

Sometimes the user may require numerical products, such as the mean, median, variance, standard deviation, average value, and maximum and minimum values of attributes. This statistical information may be presented in the form of tables and reports.

19.11  SPATIAL DATA ANALYSIS

After the database has been created in the GIS, the data are analyzed to extract information for specific purposes. The extracted information is used for decision making. One of the objectives of spatial analysis is to investigate the patterns in the spatial data, and to identify possible relationships between the patterns and other attributes within the study region. A wide range of functions is available for data analysis in most GIS packages, including measurement techniques, queries on attributes, proximity analysis, overlay operations, and the analysis of models of surfaces and networks.

20  GIS Application

20.0  INTRODUCTION

GIS application involves the practical aspects of designing and managing a GIS for a project. Good project design and management are essential for producing a useful and effective GIS application. Design techniques help in identifying the nature and scope of a problem, defining the system requirements, quantifying the amount and type of necessary data, and indicating the data model and the analysis required. Delivering a completed project on time and ensuring its quality are part of management techniques. Building a new GIS project is a major investment, and choosing the right GIS software package/tool is critical to the success or failure of such an investment. The problem of selecting the most appropriate GIS software package/tool for a particular GIS project is difficult to define. In general, a GIS application may include:

(a) Problem identification,
(b) Designing a data model,
(c) Project management,
(d) Identifying implementation problems,
(e) Selecting an appropriate GIS software, and
(f) Project evaluation.

20.1  PROBLEM IDENTIFICATION

The first step towards a GIS application is to identify the problem that has to be addressed by the GIS. This can be done by (i) creating a rich picture or (ii) developing a root definition. A schematic view of the problem to be addressed in a project is referred to as a rich picture. The schematic view presents the main components of the problem as well as any interactions that exist, and helps to organize the ideas that arise during discussions between, for example, a property dealer and a property buyer. A consensus view of the problem, emerging out of the participation of all concerned, can be well represented by the rich picture. The root definition is a view of a problem from a specific perspective. The system developer must arrive at a common root definition considering all viewpoints, as different users have different views of the problem. This helps others to evaluate and understand the way the GIS is being constructed.

GIS Application

151

There is also a soft systems approach for identifying a problem and addressing unstructured problems. A problem is said to be a structured problem when a definite location is identified for it, whereas in an unstructured problem, instead of a definite location, only a neighbourhood is known.

20.2  DESIGNING A DATA MODEL

A GIS application requires well planned data modeling to meet the expectations of GIS users. In the context of the design and management of a project, data modeling can be viewed as consisting of a conceptual model and a physical model. In a conceptual model, elements of spatial form and process are included in the rich picture. The detail needed to represent the conceptual model within the computer is provided by the physical data model. One way of creating a conceptual data model is to identify clearly the elements of the data model (the entities, their states, and their interrelationships) and present them in the form of flowcharts, using standard symbols that illustrate different aspects of the model. To create a physical data model, additional details are required that describe the modeling of spatial entities, their associated attributes, and the interrelationships of entities in the computer. Cartographic modeling is one of the most frequently used techniques for designing an analysis scheme. It is a generic way of expressing and organizing the methods by which spatial variables and spatial operations are selected and used to develop a GIS data model.

20.3  PROJECT MANAGEMENT

Good project management is an essential prerequisite for the success of a GIS project. After constructing the data model, the GIS must be implemented, and in many cases integrated into the wider information strategy of an organization. The approaches commonly employed for the management of a project are (i) the systems life cycle and (ii) prototyping. The systems life cycle is a linear approach used to manage the development and implementation of an information technology system. It provides a very structured framework for the management of a GIS project. In the prototyping approach to managing information technology projects, the basic requirements of the system are defined by the user, using the rich picture and root definition techniques. These basic ideas are utilized by the system designer to create a prototype structure fulfilling the needs identified by the user. The developed system is then tried out by the user to find out whether it fulfils the requirements as expected. The system designer may improve the system on the recommendations of the user and potential users to make it of wider value.

20.4  IDENTIFYING IMPLEMENTATION PROBLEMS

GIS design and management will always have problems which cannot be predicted by any means. These problems may be, for example, non-availability of data in the format required by the GIS software, lack of knowledge about the GIS being used (imposing technical and conceptual problems in implementation), or users changing their requirements. The solution to the problem of changing user needs is to get frequent feedback from the end users of the GIS.

20.5  SELECTING AN APPROPRIATE GIS SOFTWARE

GIS software is a fundamental and critical part of any operational GIS. The software employed in a GIS project has a controlling impact on the type of studies that can be undertaken and the results that can be obtained. There are also far-reaching implications for user productivity and project costs. Today, there are many types of GIS software products to choose from and a number of ways to configure implementations. GIS software can be classified into the five main types shown in Table 20.1.

Table 20.1:  Various types of GIS software and their salient features

Desktop GIS software: Desktop GIS software owes its origins to the personal computer and the Microsoft Windows operating system, and is considered the mainstream workhorse of GIS today. It provides personal productivity tools for a wide variety of users across a broad cross-section of industries. Examples are (i) ESRI ArcReader, Intergraph GeoMedia Viewer, and MapInfo ProViewer; (ii) Autodesk Map 3D, ESRI ArcView, Intergraph GeoMedia, and MapInfo Professional; and (iii) ESRI ArcGIS ArcInfo, Intergraph GeoMedia Professional, and GE Smallworld GIS. Price range: $1000–$20,000 per user.

Server GIS: Third generation server GIS offers complete GIS functionality in a multi-user server environment. Examples are Autodesk MapGuide, ESRI ArcGIS Server, GE Spatial Application Server, Intergraph GeoMedia WebMap, and MapInfo MapXtreme. Price range: $5000–$25,000.

Developer GIS: Developer GIS are of interest to developers because they can be used to create highly customized and optimized applications that can either stand alone or be embedded in other software systems. Examples are Blue Marble Geographics GeoObjects, ESRI ArcGIS Engine, and MapInfo MapX. Price range: $1000–$5000 for a developer kit and $100–$500 per deployed application.

Hand-held GIS: Hand-held GIS are lightweight systems designed for mobile and field use. The hand-held GIS include Autodes.

Other types of GIS software: There are many other types of commercial and non-commercial software that provide valuable GIS capabilities. For example, GRASS GIS is a Geographic Information System (GIS) used for data management, image processing, graphics production, spatial modeling, and visualization of many types of data.

The following criteria may be used for selecting GIS software:

(i) Cost, (ii) Functionality, (iii) Reliability, (iv) Usability, and (v) Vendor.

20.6  PROJECT EVALUATION

Project evaluation requires that the output produced by the system is usable and valid, and meets the objectives of the project. Testing the GIS and validating the output are a crucial part of the design process.

A GIS can be made more economical and appropriate by taking the prototyping approach, in which frequent testing and evaluation take place as a matter of course. To determine whether the developed GIS applications meet the objectives defined at the beginning of the design process, the following approaches may be adopted:

(i) Feedback can be obtained from all the parties involved in the process of design and development of the GIS about achieving the goals for which it was designed,
(ii) GIS output can be checked against reality, and
(iii) The adaptations and changes that had to be made while moving from the rich picture through the GIS data model to the implementation can be evaluated.

20.7  CASE STUDIES

In this section, two case studies conducted at the Indian Institute of Technology Roorkee, India, as part of M.Tech programmes are discussed, to give an idea of the data required for a GIS project, the methodology employed, the procedure of analysis, and the presentation of output.

20.7.1  Site Suitability for Urban Planning

Urbanization is a dynamic phenomenon: the expansion of urban areas due to urban development and the migration of rural population to urban areas. To satisfy the various needs of proper urban planning, accurate and timely data are required. Dehradun, after becoming the state capital of Uttarakhand, is facing fast development and urbanization, and requires proper planning for its orderly growth. In the present study, remote sensing was combined with GIS technology to analyze the urban growth of the city and its direction of expansion. The study also aimed to identify suitable sites for further development. The data and software used for this study are given in Tables 20.2 and 20.3.

Table 20.2:  Data sources used

Sl. no.  Data source         Year  Scale
1        Dehradun Guide map  1945  1:20,000
2        Dehradun Guide map  1965  1:20,000
3        Toposheet 53J/3     1984  1:50,000
4        Toposheet 53F/15    1965  1:50,000
5        IRS LISS II         1988
6        IRS 1D PAN          1997
7        IKONOS              2001

Table 20.3:  Software used

Sl. no.  Software           Used for
1        ERDAS Imagine 8.5  Registration and onscreen digitization
2        ArcGIS             Finding changes, suitability analysis, and preparing maps
3        Adobe Photoshop    Screening and mosaicking
4        MS Excel           Numerical analysis

The methodology adopted for this study is presented in the flowchart (Fig. 20.1). All the guide maps and toposheets were interpreted into the desired land use classes: urban, agriculture, forest, vacant/scrub, river, main roads and other roads, and railway lines. The same classes were also identified on the satellite images, and land use maps were prepared. These land use maps were then scanned and mosaicked using ERDAS Imagine 8.5, and digitized. MS Excel was used for preparing the attribute table. ArcGIS was used for finding changes in land use, performing suitability analysis, and preparing the final output maps.

Fig. 20.1 Flowchart for study of urban growth

20.7.2  Road Accident Analysis

To minimize accident hazards on roads, the existing road network has to be optimized and road safety measures have to be improved. This requires proper traffic management. A study was carried out for Dehradun city of Uttarakhand state, India, by creating a database within a GIS environment to analyze accident hazards, using the toposheet of the area for the year 1984, the guide map of Dehradun city for the year 1965, and IKONOS data for the year 2001.

Initially, the road network was extracted from the 1984 toposheet, and subsequently updated using high resolution (1 m) IKONOS data for the year 2001. To ascertain accident trends, a minimum of 5 years of data is required; these data were collected from traffic police records. A GIS database was created using MS Access and linked to the spatial road network map. The methodology adopted for this study is shown in Fig. 20.2.

Fig. 20.2  Flowchart of methodology used for road accident analysis



The analysis for different accident scenarios and parameters consists of:

(a) Road accident analysis according to (i) yearly variation, (ii) monthly variation, (iii) comparative vehicle-wise figures, (iv) time slot, (v) gender, (vi) type of accident, and (vii) road.

(b) Identification of black spots. By identifying the accident-prone locations and their accident severity, remedial measures can be planned by the district administration to minimize accidents in different parts of the city.

SECTION VI

GLOBAL POSITIONING SYSTEM

21 Introduction and Basic Concepts

21.0  INTRODUCTION

The Global Positioning System (GPS) is a satellite-based radio navigation system provided by the United States Department of Defense (DoD). It provides unequalled accuracy and flexibility of positioning for navigation, surveying, and GIS data collection. GPS is the shortened form of NAVSTAR GPS, an acronym for NAVigation System with Time And Ranging Global Positioning System. GPS uses a constellation of 24 satellites to give a user an accurate position on the Earth. The positional accuracy required is different for different users: to a soldier in a desert, accuracy means 15 m; to a ship in coastal waters, accuracy means 5 m; but to a land surveyor, accuracy means 1 m or less. GPS can achieve all these accuracies in all these applications, using suitable receivers and relevant techniques.

21.1  WHAT IS UNIQUE ABOUT GPS?

GPS was originally designed for military use, but it was soon made available for civilian use. GPS has demonstrated a significant benefit to the civilian community, who are applying it to a number of rapidly expanding applications. GPS is a unique positioning system in the following respects:

(i) It provides relatively high positioning accuracies, from tens of metres down to the millimetre level.
(ii) It has the capability of determining velocity and time to an accuracy commensurate with position.
(iii) GPS signals are available to users anywhere on the globe: in the air, on the ground, or at sea.
(iv) It is a positioning system with no user charges, and uses relatively low cost hardware.
(v) It is an all-weather system, available 24 hours a day.
(vi) It gives positioning information in three dimensions, i.e., latitude θ, longitude λ, and altitude h.

21.2  ADVANTAGES OF GPS OVER TRADITIONAL SURVEYING

GPS has numerous advantages over traditional surveying methods:

(i) Intervisibility between the points is not required.
(ii) It can be used at any time, day or night, and in all weather conditions.
(iii) It produces results with very high geodetic accuracy.
(iv) More work can be accomplished in less time with less manpower.

(v) Limited calculation and tabulation work is required.
(vi) Large areas can be surveyed in a short duration.
(vii) Site selection is network independent; hence sites can be placed where needed.
(viii) Economic advantages arise from the greater efficiency and speed of survey.
(ix) Geodetic accuracies are easily achieved.
(x) Three-dimensional coordinates are obtained.

21.3  LIMITATIONS OF GPS BASED SURVEYING

GPS based surveying has the following limitations:

(i) High capital cost (however, the cost is becoming affordable now).
(ii) The GPS antenna requires a clear view of the sky.
(iii) The antenna must get direct signals from at least 4 satellites.
(iv) Satellite signals can be blocked by tall buildings, trees, etc.
(v) GPS cannot be used indoors.
(vi) It is difficult to use in town centres or in dense forests.
(vii) Horizontal and vertical coordinates must be transformed if they are to be used for conventional survey applications.
(viii) New skills are needed for error-free GPS survey results.

21.4  GPS: A NEW UTILITY

With today's integrated-circuit technology, GPS receivers are fast becoming small enough and cheap enough to be carried by just about anyone; everyone will have the ability to know exactly where they are, all the time. The GPS service has now become a basic utility of mankind, like the telephone. The applications of GPS in day-to-day life are almost limitless. Some of these are:

(i) Use in delivery vehicles to pinpoint destinations.
(ii) Use in emergency vehicles for more prompt service.
(iii) Use of electronic maps instantly showing the way to a destination.
(iv) Use in aircraft, due to its capability of three-dimensional location.
(v) It may be the best and cheapest basis for a fool-proof air collision avoidance system.
(vi) It may provide a very accurate zero-visibility landing system for aircraft.
(vii) Immediate searching of utility centres, e.g., a Chinese restaurant, using a mobile phone database.

The GPS technology would give the world a new "international standard" for locations and distances, and it would allow nations to monitor and use natural resources more efficiently than ever before.

22 Satellite Ranging

22.0  INTRODUCTION

The GPS technology is based on satellite ranging: locating our position on the earth by measuring our distance from a group of satellites in space. The satellites work as precise reference points in space. These satellites are made available to the user, who obtains his position through three segments covering the entire globe.

22.1  PRINCIPLES OF GPS WORKING

The basic principles behind GPS are quite simple, but the system itself employs some of the most "high-tech" equipment ever developed. To understand the GPS system, let us break it into the following five conceptual parts (Fig. 22.1):

Fig. 22.1 Principle of GPS working



1. Triangulation from satellites is the basis of the system,
2. For triangulation, GPS measures distance using the travel time of a radio message,


3. For measurement of travel time, accurate clocks are required,
4. Along with the distance to a satellite, the location of the satellite in space must be known, and
5. Delays of the satellite signals as they travel through the ionosphere and the earth's atmosphere must be accounted for.

22.1.1  Satellite Ranging

The satellites in space act as precise reference points, and we locate our position by measuring our distance from a group of satellites serving as reference points. To understand where our position is with respect to the satellites, let us assume that our distances from three satellites of known positions are known. Considering the satellite A, let the point P be at a distance of 11,000 km from it, as shown in Fig. 22.2; then P can be anywhere on the sphere of radius 11,000 km with the satellite A at its centre. Now if, at the same time, the point P is also at a distance of 12,000 km from another satellite B, then the point P may be thought of as lying on the circle formed by the intersection of the two spheres of radii 11,000 km and 12,000 km (Fig. 22.3). To pinpoint the position of P, if a third satellite C is considered, from which P is, say, 13,000 km, the point P can be at only the two points where the third sphere of radius 13,000 km about satellite C cuts the circle formed by the spheres of satellites A and B (Fig. 22.4).



Fig. 22.2  Distance from one satellite

Fig. 22.3  Distance from two satellites

By ranging from three satellites, the position of the point P has been narrowed down to two points. If the point P is to be located unambiguously, it can be shown that a fourth satellite is also required. Thus, on the basic principle behind GPS, to triangulate the position of any point, at least four reference satellites are needed.

22.1.2  Measuring Distance from a Satellite

The working of GPS is based on the principle of triangulation, in which a point is located using its distances from points of known location. In Fig. 22.5, the points A and B are points whose locations, i.e., their coordinates (XA, YA) and (XB, YB), are known, and thus the distance lAB is known. The point P, whose position has to be fixed, can be determined in two-dimensional space by the intersection of the distances lPA and lPB of P from A and B, respectively, if they are known. The GPS system works out the distances by calculating the time taken by the radio signals to reach the point from the GPS satellites. The radio signals, in the form of waves, travel at the speed of light c, which is 299,790,000 m/s. Thus if the time of travel t over a distance D is known, the distance can be determined as follows:


Fig. 22.4  Distance from three satellites



Distance = speed of light × time

or

D = ct    …(22.1)

For accurate determination of positions, the measurement of the time of travel has to be very accurate, as the radio waves travel at the speed of light. The GPS satellites are equipped with atomic clocks which can measure time with nanosecond accuracy, i.e., 0.000000001 second.

Fig. 22.5  Locating a point with reference to known points
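Eq. (22.1) can be written directly in code; the travel time used below is an illustrative value, not from the text:

```python
# Distance from signal travel time, Eq. (22.1): D = c * t.
c = 299_790_000.0  # speed of light used in the text, in m/s

def distance_m(travel_time_s):
    return c * travel_time_s

# A signal that took about 0.067 s corresponds to a satellite roughly
# 20,000 km away:
print(distance_m(0.067))  # about 2.0e7 m
```

The nanosecond clock accuracy mentioned above matters because an error of one nanosecond in t already corresponds to about 0.3 m of range error.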

Measurement of Time

To know the time of travel of the signal from the satellite to the point, the instant when the signal left the satellite must be known. This is achieved by synchronizing the satellites and the receivers so that they generate the same code at exactly the same time. To find when the signal left a particular satellite, the code received at the receiver is compared with the same code generated by the receiver some time earlier; the time shift between the two is the time taken by the signal to reach the point from that particular satellite (Fig. 22.6).


Fig. 22.6  Measurement of time difference

Pseudo-random Codes

The GPS system does not use simple numbers. Both the satellites and the receivers actually generate a very complicated set of digital codes, looking like strings of random pulses. The codes are made complicated on purpose so that they can be compared easily and unambiguously, and for some other technical reasons. The codes are not actually random; they are carefully chosen pseudo-random sequences that repeat every millisecond, so they are often referred to as pseudo-random codes (Fig. 22.7).

Fig. 22.7  Pseudo-random codes
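The code-comparison idea can be sketched as a simple search for the best-matching shift; the 16-chip code below is made up for illustration (real C/A codes are far longer):

```python
# Sketch: slide the receiver's replica of the pseudo-random code against the
# received code and pick the shift where they agree best. That best shift is
# the signal's travel-time delay, in units of one code chip.

code = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0]  # made-up 16-chip code
true_shift = 5
received = code[-true_shift:] + code[:-true_shift]  # delayed (rotated) copy

def best_shift(replica, received):
    def agreement(shift):
        rotated = replica[-shift:] + replica[:-shift] if shift else replica
        return sum(a == b for a, b in zip(rotated, received))
    return max(range(len(replica)), key=agreement)

print(best_shift(code, received))  # recovers the delay of 5 chips
```

Carefully chosen pseudo-random sequences make this comparison unambiguous: the replica agrees strongly with the received code at exactly one shift.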

22.1.3  Atomic Clock and Determination of Position

The satellites have atomic clocks on board which are unbelievably precise and unbelievably expensive. These clocks do not run on atomic energy; they get the name because they use the oscillations of a particular atom as their metronome. This is the most stable and accurate time reference man has ever developed. As atomic clocks are very expensive, only the satellites carry them; the receivers are not equipped with atomic clocks, because the cost of the receivers would then become unaffordable for general users of GPS. Thus the time measurement by the receivers is not as accurate as that by the satellites, and this causes errors in the calculated distances from the satellites, resulting finally in inaccurate determination of positions. Since every point has its own unique position in space, the GPS system determines the position of a point by means of the built-in computer in the receiver, which solves four simultaneous equations in the four inaccurate distances for the three position unknowns (latitude θ, longitude λ, and altitude h) together with the receiver clock error.
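The solution of the four pseudorange equations for position plus receiver clock error can be sketched as follows; the satellite coordinates, receiver position, and clock bias below are invented for the example, and real receivers do essentially this with more care:

```python
# Sketch: Gauss-Newton solution of four pseudorange equations for
# (x, y, z, c*dt), where dt is the receiver clock error.
import math

C = 299_790_000.0  # speed of light, m/s (value used in the text)

sats = [  # illustrative satellite positions in metres
    (15600e3, 7540e3, 20140e3),
    (18760e3, 2750e3, 18610e3),
    (17610e3, 14630e3, 13480e3),
    (19170e3, 610e3, 18390e3),
]
true_pos = (6371e3, 0.0, 0.0)   # receiver near the Earth's surface
clock_bias_s = 1e-4             # receiver clock error (unknown to the solver)

# Simulated pseudoranges: true range plus the clock-bias range error.
pseudoranges = [math.dist(s, true_pos) + C * clock_bias_s for s in sats]

def solve(sats, rho):
    x = [6.0e6, 0.0, 0.0, 0.0]  # rough a-priori guess speeds convergence
    for _ in range(10):
        # Linearize: residuals and Jacobian rows.
        A, r = [], []
        for s, p in zip(sats, rho):
            d = math.dist(s, x[:3])
            r.append(p - (d + x[3]))
            A.append([(x[k] - s[k]) / d for k in range(3)] + [1.0])
        # Solve the 4x4 system A dx = r by Gaussian elimination.
        n = 4
        M = [row[:] + [r[i]] for i, row in enumerate(A)]
        for col in range(n):
            piv = max(range(col, n), key=lambda i: abs(M[i][col]))
            M[col], M[piv] = M[piv], M[col]
            for i in range(col + 1, n):
                f = M[i][col] / M[col][col]
                for j in range(col, n + 1):
                    M[i][j] -= f * M[col][j]
        dx = [0.0] * n
        for i in range(n - 1, -1, -1):
            dx[i] = (M[i][n] - sum(M[i][j] * dx[j] for j in range(i + 1, n))) / M[i][i]
        x = [x[i] + dx[i] for i in range(4)]
    return x

est = solve(sats, pseudoranges)
print(est[:3])      # close to true_pos
print(est[3] / C)   # recovered clock bias, close to 1e-4 s
```

The fourth equation is what lets a cheap receiver clock behave like an atomic one: the common clock error is estimated along with the position.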

23 GPS Components

23.0 INTRODUCTION

The components of GPS consist of three major segments: the space segment (SS), the control segment (CS), and the user segment (US). The space segment consists of the satellites which broadcast signals, the control segment steers the whole system, and the user segment includes the many types of receivers. The U.S. Air Force develops, maintains, and operates the space and control segments. GPS satellites broadcast radio signals from space, and each GPS receiver uses these signals to calculate its three-dimensional location, in terms of latitude, longitude, and altitude, and the current time. The three segments are illustrated in Fig. 23.1.

Fig. 23.1  Components of GPS


Table 23.1 presents the functions and products of the three segments of GPS.

Table 23.1:  Functions and products of the space, control, and user segments

Space segment
  Input: Navigation message
  Function: Generate and transmit the code and carrier phases and the navigation message
  Product: P(Y)-codes, C/A-codes, L1 and L2 carrier waves, navigation message

Control segment
  Input: P(Y)-code, observations, time (UTC)
  Function: Produce GPS time and ephemerides; manage the space vehicles
  Product: Navigation message

User segment
  Input: Code and carrier phase observations, navigation message
  Function: Navigation solution, relative positioning, OTF, etc.
  Product: Position, velocity, time

23.1  SPACE SEGMENT

The space segment is composed of the 24 orbiting GPS satellites. The satellites are positioned in six Earth orbital planes with four satellites in each plane. The nominal orbital period of a GPS satellite is one-half of the sidereal day, or 11 hr 58 min. The orbits are nearly circular and equally spaced about the Earth's equator at a 60° separation, with an inclination relative to the equator of approximately 55°. Fig. 23.2 depicts the GPS constellation. The orbital radius of the satellites is approximately 26,600 km. This satellite constellation provides a 24 hr global user navigation and time determination capability.

Fig. 23.2  GPS satellite constellation
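The stated period and orbital radius can be cross-checked with Kepler's third law; the Earth's gravitational parameter mu is a standard value, not given in the text:

```python
# Cross-check of the GPS orbital figures: a circular orbit of radius a has
# period T = 2*pi*sqrt(a^3 / mu).
import math

mu = 3.986004418e14   # Earth's gravitational parameter GM, m^3/s^2
a = 26_600e3          # orbital radius from the text, m

period_s = 2 * math.pi * math.sqrt(a**3 / mu)
print(period_s / 3600)  # about 12 hours, i.e. half a sidereal day
```

The result of roughly 12 hours agrees with the quoted 11 hr 58 min (half a sidereal day), which is what makes each satellite's ground track repeat daily.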

Fig. 23.3 presents the satellite orbits in a planar projection referenced to the epoch time of 00 hr, July 01, 1993 UTC (USNO). Considering each orbit as a "ring", this figure opens each orbit and lays it flat on a plane; similarly, the Earth's equator is treated as a ring that has been opened and laid on a flat surface. The slope of each orbit represents its inclination with respect to the Earth's equatorial plane, which is nominally 55°. The orbital plane locations with respect to the Earth are defined by the longitude of the ascending node, while the location of the satellite within the orbital plane is defined by the mean anomaly. The longitude of the ascending node is the point of intersection of each orbital plane with the equatorial plane. The Greenwich meridian is the reference point, where the longitude of the ascending node has the value zero. The mean anomaly is the angular position of each satellite within the orbit, with the Earth's equator being the reference point with a zero value of mean anomaly. It can be observed that the relative phasing between most satellites in adjoining orbits is approximately 40°.

Fig. 23.3  GPS constellation planar projection

23.1.1  Satellite Identification

Several different notations are used to refer to the satellites in their orbits. One nomenclature assigns a letter to each orbital plane (i.e., A, B, C, D, E, and F), with each satellite within a plane assigned a number from 1 to 4. Thus a satellite reference such as B3 refers to satellite number 3 of orbital plane B. A second notation is the NAVSTAR satellite number assigned by the U.S. Air Force, in the form of a space vehicle number (SVN); e.g., SVN 11 refers to NAVSTAR satellite 11. In the third system of notation, a satellite is identified by the PRN (pseudo-random noise) code that it generates.

23.1.2  Satellite Signals Each GPS satellite transmits data on two frequencies, L1 (1575.42 MHz) and L2 (1227.60 MHz). The atomic clocks aboard the satellite produce the fundamental L-band frequency, 10.23 MHz. The L1 and L2 carrier frequencies are generated by multiplying the fundamental frequency by 154 and 120, respectively. Two pseudorandom noise (PRN) codes, along with satellite ephemerides (Broadcast Ephemerides), ionospheric modeling coefficients, status information, system time, and satellite clock corrections, are superimposed onto the carrier frequencies, L1 and L2. The measured travel times of the signals from the satellites to the receivers are used to compute the pseudo ranges. The Course-Acquisition (C/A) code, sometimes called the Standard Positioning Service (SPS) which is made available for civilian use, is a pseudorandom noise code that is modulated onto the L1 carrier.

Chapter 23

23.1.1  Satellite Identification

168

Geoinformatics
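The carrier generation from the fundamental frequency is easy to verify numerically; the wavelength figures below are derived here rather than quoted from the text:

```python
c = 299_792_458.0   # speed of light, m/s
f0 = 10.23e6        # fundamental clock frequency, Hz

L1 = 154 * f0       # 1575.42 MHz
L2 = 120 * f0       # 1227.60 MHz

print(L1 / 1e6, L2 / 1e6)   # carrier frequencies in MHz
print(c / L1, c / L2)       # carrier wavelengths: ~0.190 m (L1) and ~0.244 m (L2)
```

The roughly 19 cm and 24 cm carrier wavelengths are what make millimetre-to-centimetre carrier phase surveying possible, compared with the ~300 m chip length of the C/A code.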

Because initial point positioning tests using the C/A code resulted in better than expected positions, the DoD directed the implementation of Selective Availability (SA) in order to deny full system accuracy to unauthorized users. SA is the intentional corruption of the GPS satellite clocks and the Broadcast Ephemerides. Errors are introduced into the fundamental frequency of the GPS clocks; this clock dithering affects the satellite clock corrections as well as the pseudo-range observables. Errors are introduced into the Broadcast Ephemerides by truncating the orbital information in the navigation message. The Precision (P) code, sometimes called the Precise Positioning Service (PPS), which has been reserved for use by the U.S. military and other authorized users, is modulated onto both the L1 and L2 carriers, allowing for the removal of the first-order effects of the ionosphere. The P code is referred to as the Y code if encrypted. The Y code is actually the combination of the P code and a W encryption code, and requires a DoD-authorized receiver to use it. Originally the encryption was intended as a means to safeguard the signal from being corrupted by interference, jamming, or falsified signals with the GPS signature. Because of the intent to protect against spoofing, the encryption is referred to as Anti-Spoofing (A-S). A-S is either “on” or “off”; there is no variable effect of A-S as there is with SA.

Denial of Accuracy and Access

There are basically two methods for denying civilian users full use of the system: Selective Availability and Anti-Spoofing, as described above. The first method prevents civilian users from accurately measuring instantaneous pseudo-ranges and affects the receiver operation. The second method truncates the transmitted message, resulting in inaccurate computation of the coordinates of the satellites. The error in satellite positions roughly translates into a comparable position error of the receiver.

Dilution of Precision

The arrangement of satellites in the sky also affects the accuracy of GPS positioning. The ideal arrangement of the minimum four satellites is one satellite directly overhead and the three others equally spaced near the horizon. GPS coordinates calculated when satellites are clustered close together in the sky suffer from dilution of precision (DOP), a factor that multiplies the uncertainty associated with the User Equivalent Range Error (UERE). The User Equivalent Range Errors are associated with the satellite and receiver clocks, the atmosphere, the satellite orbits, and the environmental conditions that lead to multipath errors. The DOP associated with an ideal arrangement of the satellite constellation is approximately 1, which does not magnify the UERE. The lowest DOP encountered in practice is about 2, which doubles the uncertainty associated with the UERE. GPS receivers report several components of DOP, including horizontal dilution of precision (HDOP) and vertical dilution of precision (VDOP). The combination of these two components into the three-dimensional position is called position dilution of precision (PDOP). A key element of GPS mission planning is to identify the time of day when PDOP is minimized. Since the satellite orbits are known, PDOP can be predicted for a given time and location, and various software products allow the user to determine when conditions are best for GPS work.
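The DOP values follow from the geometry matrix of unit line-of-sight vectors to the satellites. The sketch below is a textbook-style illustration (not any particular receiver's algorithm), applied to the "ideal" four-satellite arrangement described above:

```python
import math
import numpy as np

def dops(az_el_deg):
    """HDOP, VDOP, PDOP from a list of satellite (azimuth, elevation) pairs in degrees."""
    rows = []
    for az, el in az_el_deg:
        az, el = math.radians(az), math.radians(el)
        # Unit line-of-sight vector in (east, north, up), plus the receiver clock column.
        rows.append([math.cos(el) * math.sin(az),
                     math.cos(el) * math.cos(az),
                     math.sin(el), 1.0])
    G = np.array(rows)
    Q = np.linalg.inv(G.T @ G)   # cofactor matrix of the position/clock solution
    hdop = math.sqrt(Q[0, 0] + Q[1, 1])
    vdop = math.sqrt(Q[2, 2])
    pdop = math.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2])
    return hdop, vdop, pdop

# One satellite overhead, three equally spaced on the horizon:
hdop, vdop, pdop = dops([(0, 90), (0, 0), (120, 0), (240, 0)])
print(f"HDOP={hdop:.2f} VDOP={vdop:.2f} PDOP={pdop:.2f}")
```

Note that even this near-ideal four-satellite geometry yields a PDOP of about 1.6; DOP values approaching 1 generally require more than four well-distributed satellites.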

23.2  CONTROL SEGMENT

The GPS control segment comprises the Operational Control System (OCS), which consists of a global network of ground facilities that track the GPS satellites, monitor their transmissions, perform analyses, and send commands and data to the constellation (Fig. 23.4).


Fig. 23.4  Overview of the control segment operation


The current operational control segment includes a Master Control Station (MCS), an alternate master control station, 12 command and control antennas, and 16 monitoring sites. The locations of these facilities are shown in Fig. 23.5.

Fig. 23.5  Location map of control stations

23.2.1  Master Control Station

The master control station in Colorado is where the 2nd Space Operations Squadron (2SOPS) performs the primary control segment functions, providing command and control of the GPS constellation. The MCS generates and uploads navigation messages, and ensures the health and accuracy of the satellite constellation. It receives navigation information from the monitor stations, utilizes this information to compute the precise locations of the GPS satellites in space, and then uploads these data to the satellites.


The MCS monitors navigation messages and system integrity, enabling 2SOPS to determine and evaluate the health status of the GPS constellation. 2SOPS uses the MCS to perform satellite maintenance and anomaly resolution. In the event of a satellite failure, the MCS can reposition satellites to maintain an optimal GPS constellation.

23.2.2  Monitor Stations

Monitor stations track the GPS satellites as they pass overhead and channel their observations back to the master control station. Monitor stations collect atmospheric data, range/carrier measurements, and navigation signals. The sites utilize sophisticated GPS receivers and are operated by the MCS. There are 16 monitoring stations located throughout the world: six operated by the Air Force and ten by the National Geospatial-Intelligence Agency (NGA).

23.2.3  Ground Antennas

Ground antennas are used to communicate with the GPS satellites for command and control purposes. These antennas support S-band communications links that transmit navigation data uploads and processor program loads, and collect telemetry. The ground antennas are also responsible for normal command transmissions to the satellites. S-band ranging allows 2SOPS to provide anomaly resolution and early orbit support. There are four dedicated GPS ground antenna sites co-located with the monitor stations at Kwajalein Atoll, Ascension Island, Diego Garcia, and Cape Canaveral. In addition, the control segment is connected to the eight Air Force Satellite Control Network (AFSCN) remote tracking stations worldwide, increasing visibility, flexibility, and robustness for telemetry, tracking, and command.

23.3  USER SEGMENT

The user segment consists of the users and their GPS receivers; the users are both civilian and military. GPS receivers come in a variety of formats, from devices integrated into cars, phones, and watches to dedicated devices. The user segment is composed of hundreds of thousands of U.S. and allied military users of the secure GPS Precise Positioning Service, and tens of millions of civil, commercial, and scientific users of the Standard Positioning Service. In general, a GPS receiver is composed of an antenna tuned to the frequencies transmitted by the satellites, receiver-processors, and a highly stable clock (often a crystal oscillator). It may also include a display for providing location and speed information to the user. A receiver is often described by its number of channels, which signifies how many satellites it can monitor simultaneously. Originally limited to four or five, this number has progressively increased over the years so that, as of 2007, receivers typically have between 12 and 20 channels.

24  GPS Receivers for Surveying

24.0  INTRODUCTION

The most important instrument in GPS surveying is the GPS receiver. Its features and capabilities influence the techniques available to the user, from initial planning to final processing. There are different types of GPS receivers available in the market, but only a few of them are suitable for GPS surveying. Receivers having sub-metre and sub-centimetre accuracies are suitable for surveying work. They are capable of supporting different techniques, such as differential GPS (DGPS), static GPS, and rapid static GPS, and are always accompanied by post-processing software. GPS surveying receivers are usually equipped with extra batteries, a battery charger, data storage devices such as PCMCIA cards, external antennas, and tripod mounting devices such as tribrachs.

24.1  GPS RECEIVERS AND THEIR FEATURES


GPS is a one-way ranging system in which the receiver determines its position by processing range measurements to GPS satellites. Hence, the GPS receiver has to include all hardware and software needed to determine the user's position, velocity, and time data, and other derived parameters as required. Further, the GPS receiver must have provision for interfacing with other navigation systems to provide accurate navigation; it also converts the input data to an internal computer format. Hence, a receiver system comprises two units—a receiver and an interface. A generic receiver is shown in Fig. 24.1.

Fig. 24.1  A generic GPS receiver
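The core of the processing mentioned above — solving for position and receiver clock bias from pseudo-ranges — can be sketched as an iterative least-squares adjustment. The satellite coordinates and clock bias below are synthetic, for illustration only:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve(sat_pos, pseudoranges, iters=10):
    """Least-squares point position: returns (x, y, z, c*dt) in metres."""
    x = np.zeros(4)                       # start at the Earth's centre, zero clock bias
    for _ in range(iters):
        d = sat_pos - x[:3]
        rho = np.linalg.norm(d, axis=1)   # geometric ranges from current estimate
        resid = pseudoranges - (rho + x[3])
        # Design matrix: negative unit vectors to satellites, plus a clock column.
        H = np.hstack([-d / rho[:, None], np.ones((len(rho), 1))])
        x += np.linalg.lstsq(H, resid, rcond=None)[0]
    return x

# Synthetic scenario: receiver on the Earth's surface, 1 microsecond clock bias.
truth = np.array([6378137.0, 0.0, 0.0])
bias = C * 1e-6                           # clock bias expressed in metres (~300 m)
sats = np.array([[20000e3,   5000e3, 15000e3],
                 [18000e3, -10000e3, 12000e3],
                 [15000e3,  12000e3, -8000e3],
                 [22000e3,  -2000e3, -4000e3],
                 [17000e3,   3000e3, 19000e3]])
pr = np.linalg.norm(sats - truth, axis=1) + bias
est = solve(sats, pr)
print(est[:3] - truth, est[3] - bias)     # residuals near zero
```

With four unknowns (three coordinates plus the clock term), at least four satellites are required; the fifth here simply over-determines the solution.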


The following is the classification of GPS receivers based on the type of observable that is tracked:

1. Commercial navigation receivers using the C/A-code on the L1 frequency,
2. Military navigation receivers using the P(Y)-code on both L-band frequencies,
3. Single-frequency (L1) carrier phase tracking receivers, and
4. Dual-frequency carrier phase tracking receivers.

The GPS receivers are also classified in different ways based on their application, the type of satellite tracking, code dependency, the dynamics of the platform, etc. (Table 24.1).

Table 24.1:  Classification of GPS receivers

Based on                      Categories
Application                   Navigation receiver; Timing receiver; Surveying receiver; Geodetic receiver; GIS mapping receiver
Type of satellite tracking    Continuous tracking receiver; Sequential tracking receiver; Multiplex receiver
Code dependency               Code dependent receiver; Sequential tracking receiver; Multiplex receiver
Platform dynamics             Low dynamic receiver; Medium dynamic receiver; High dynamic receiver
Type of hardware              Analog receiver; Digital receiver; Analog/digital receiver

24.1.1  Surveying Receivers

Surveying receivers are intended for high-accuracy measurements for land and hydrographic surveying. Such receivers have external tripod-mounted antennas, and are able to swap power sources while in operation. Surveying receivers fall broadly into two categories: receivers for timing applications and geodetic receivers.

Receivers for Timing Applications

These receivers are intended to act as a time and frequency reference. Position is secondary information to these receivers, and is often ignored by the users. The primary benefits of GPS-derived time and frequency are long-term stability and coordination with the worldwide time network via the GPS standard. These receivers are often used for applications such as:

(i) Calibration of test instruments by calibration laboratories,
(ii) Digital network synchronization for telecommunication services,
(iii) Synchronizing astronomical observations by observatories,
(iv) Synchronizing fault recorders for electric utility grids, and
(v) Synchronizing seismographs for accurate earthquake location.

These receivers are used in critical applications. The GPS-derived time is therefore often paired with a different kind of time receiver, such as LORAN, or with an additional high-precision clock such as a cesium atomic clock. Thus, if the GPS receiver fails, the output can still be assured over some period of time.

Geodetic Receivers

Dual-frequency carrier phase receivers capable of giving positional accuracies at the millimetre level come under this category. These receivers are capable of operating continuously for long durations, as they are provided with good power backup.

24.1.2  Receivers by Method of Operation

Receivers are also classified by their method of operation:

1. Code phase receivers, 2. Carrier phase receivers, and 3. Military grade receivers.

Code Phase Receivers

A code phase receiver is also known as a code correlating receiver because it requires access to the satellite navigation message of the P- or C/A-code signal, which provides the almanac needed for operation and signal processing. These receivers can produce real-time navigation data and have an anywhere-fix capability, and consequently give a quicker start-up time at survey commencement. The anywhere-fix capability means that the receiver can begin its calculations from a given approximate location and time, and can synchronize itself with GPS time at a point with unknown coordinates once a lock on the signals of four satellites has been obtained. These are low-cost receivers, capable of giving about 25 m accuracy (without Selective Availability).

Carrier Phase Receivers

These receivers determine their position by processing measurements of the carrier phase of the satellite signal over a period of time. They do not need to decode the information being transmitted, except for locating the satellites. Some such receivers may have no code reception capability at all, in which case the receiver must be preloaded with that data from another source. The advantage of this method is its high accuracy: this type of receiver can give centimetre accuracy in real time when used with differential corrections. The disadvantage is the high cost of these receivers. A carrier phase receiver utilizes the actual GPS signal itself to calculate a position. There are two general types of carrier phase signals:

1. Single-frequency, and
2. Dual-frequency.

Single-frequency Receivers

A single-frequency receiver tracks the L1 frequency signal. It generally has a lower price than a dual-frequency receiver because it has fewer components and is in greater demand. It can be used effectively to develop relative positions that are accurate over baselines of less than 50 km, or where ionospheric effects can be ignored.

Dual-frequency Receivers

A dual-frequency receiver tracks both the L1 and L2 frequency signals, and is generally more expensive than a single-frequency receiver. A dual-frequency receiver will more effectively resolve longer baselines of more than 50 km, where ionospheric effects have a larger impact on the calculations. These receivers eliminate almost all ionospheric effects by combining L1 and L2 observations. Most manufacturers of dual-frequency receivers utilize codeless techniques which allow the use of the L2 frequency during Anti-Spoofing. These codeless techniques are squaring, cross-correlation, and P-W correlation.
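Because the first-order ionospheric delay scales with 1/f², the two pseudo-ranges can be combined so that the delay cancels. The sketch below demonstrates the standard ionosphere-free combination; the geometric range and the 5 m L1 delay are arbitrary illustrative values:

```python
f1, f2 = 1575.42e6, 1227.60e6        # L1 and L2 carrier frequencies, Hz

true_range = 21_000e3                # illustrative geometric range, m
iono_l1 = 5.0                        # assumed first-order iono delay on L1, m
iono_l2 = iono_l1 * (f1 / f2) ** 2   # delay scales with 1/f^2, so L2 is delayed more

p1 = true_range + iono_l1            # simulated L1 and L2 pseudo-ranges
p2 = true_range + iono_l2

# Ionosphere-free combination of the two pseudo-ranges:
p_if = (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)
print(p_if - true_range)             # first-order ionospheric error cancels
```

The cancellation is exact for the first-order term by construction; higher-order ionospheric effects (typically at the centimetre level or below) remain.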

Military Grade Receivers

The current military GPS receiver is the Precise Lightweight GPS Receiver (PLGR) AN/PSN-11, which uses the coarse/acquisition (C/A), precise (P), or encrypted P(Y)-codes. The PLGR is designed to operate as a stand-alone unit, and provides navigation information such as position, velocity, and time. The PLGR requires a crypto key to operate as a Precise Positioning Service (PPS) receiver. A PPS receiver corrects the errors introduced by Selective Availability (SA), and cannot be spoofed.

24.2  GPS ERRORS

There are many sources of error that degrade the accuracy of positions computed by a GPS receiver. The travel time of GPS satellite signals can be altered by atmospheric effects: when a GPS signal passes through the ionosphere and troposphere it is refracted, causing the speed of the signal to differ from the speed of a GPS signal in free space. Sunspot activity also causes interference with GPS signals. Another source of error is measurement noise, or distortion of the signal caused by electrical interference or errors inherent in the GPS receiver itself. Errors in the ephemeris data (the information about satellite orbits) also cause errors in computed positions, because the satellites were not really where the GPS receiver “thought” they were (based on the information it received) when it computed the positions. Small variations in the atomic clocks (clock drift) on board the satellites can translate into large position errors: a clock error of 1 nanosecond translates to about 1 foot, or 0.3 metres, of user error on the ground. Multipath effects arise when signals transmitted from the satellites bounce off a reflective surface before reaching the receiver antenna. When this happens, the receiver gets the signal via the straight-line path as well as via delayed paths (multiple paths); the effect is similar to a ghost or double image on a TV set.
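The 1 nanosecond → 0.3 m figure follows directly from multiplying the clock error by the speed of light:

```python
c = 299_792_458.0     # speed of light, m/s
clock_error = 1e-9    # 1 nanosecond of satellite clock error

range_error = c * clock_error
print(range_error)    # ≈ 0.3 m of ranging error, i.e. roughly one foot
```

The same scaling explains why GPS demands atomic clocks on the satellites: a drift of even one microsecond would already correspond to some 300 m of ranging error.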

25  GPS Surveying

25.0  INTRODUCTION

Proper planning and preparation are essential ingredients of a successful survey. The surveyor should know the capabilities and limitations of the GPS receiver to be used. A surveyor gives particular importance to the accuracies and practicalities of a GPS receiver and its effective use. Efficient service from a GPS receiver is closely related to the efficiency of its operator.

25.1  GPS NAVIGATION AND GPS SURVEYING

GPS navigation differs from GPS surveying in the following respects:

1. GPS navigation supports the safe passage of a vessel or an aircraft from its port of departure, while underway, to its point of arrival, whereas GPS surveying is mostly associated with the traditional functions of establishing geodetic control, supporting engineering construction, cadastral surveys, and map making.
2. Navigation-type GPS receivers are comparatively low-cost, code-correlating instruments that only measure pseudo-ranges, whereas GPS surveying receivers are expensive phase-measuring instruments with many special features and complex software to support their function.
3. GPS navigation is based on pseudo-ranges in which, apart from the clock errors, the biases are not explicitly dealt with; GPS surveying requires a more careful treatment of the biases during the data processing.

25.2  GPS SURVEYING TECHNIQUES

The surveyor should know the accuracy requirements of the various categories of surveying to be employed:

Category A (Scientific):          better than 1 ppm
Category B (Geodetic):            1 ppm to 10 ppm
Category C (General surveying):   lower than 10 ppm

There is a wide variety of GPS applications, matched by a similar diversity of user equipment and techniques. Nevertheless, the most fundamental classification system for GPS techniques is based on the type of observable that is tracked:


(a) Civilian navigation/positioning receivers using the C/A-code on the L1 frequency,
(b) Military navigation receivers using the P(Y)-code on both L-band frequencies,
(c) Single-frequency (L1) carrier phase tracking receivers, and
(d) Dual-frequency carrier phase tracking receivers.

When these classes of hardware are used in an appropriate manner for relative positioning, the accuracy that is achieved ranges from a few metres in the case of standard pseudo-range based techniques, to the sub-centimetre level in the case of carrier phase based techniques. Although Single Point Positioning (SPP) accuracy of 5–10 m is now possible, it is assumed that for most geospatial applications only relative positioning is of relevance. The following classes of relative positioning techniques can, therefore, be identified:

1. Static and Kinematic GPS surveying techniques: high precision techniques based on post-processing of carrier phase measurements.
2. Differential GPS (DGPS): an instantaneous low-to-moderate accuracy positioning and mapping technique based on pseudo-range measurements.
3. Real-Time Kinematic (RTK): versatile high precision techniques that use carrier phase measurements in an instantaneous positioning mode.
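The part-per-million accuracy categories quoted above translate into absolute baseline errors by simple scaling; the function name below is ours, introduced only for illustration:

```python
def baseline_error_mm(ppm, baseline_km):
    """Absolute error in mm for a relative accuracy in ppm over a baseline in km.

    (ppm * 1e-6) * (baseline_km * 1e6 mm) simplifies to ppm * baseline_km.
    """
    return ppm * baseline_km

# Category B (geodetic, 1 ppm) over a 30 km baseline:
print(baseline_error_mm(1, 30), "mm")    # 3 cm
# Category C (10 ppm) over the same baseline:
print(baseline_error_mm(10, 30), "mm")   # 30 cm
```

This is why ppm figures matter mainly for long baselines: over a 1 km line even 10 ppm is only a centimetre, while over 100 km the same 10 ppm is a full metre.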

The DGPS and RTK techniques, because they are able to deliver results in real time, are very powerful GPS positioning technologies. There are essentially the following two types of conventional static GPS techniques:



1. Ultra Precise, Long Baseline GPS Survey Technique: Accuracy of a few parts per million, and better, can be achieved; the technique is characterized by top-of-the-line GPS receivers and antennas, many hours (even days) of observations, and data processing using sophisticated scientific software.
2. Medium-to-Short Baseline GPS Survey Technique: Accuracy at the few parts per million level for baselines typically