Everyday Applied Geophysics 2: Magnetics and Electromagnetism


Table of contents:
Cover
Everyday Applied Geophysics 2: Magnetics and Electromagnetism
Copyright
Foreword
Introduction
1 Magnetic Methods
2 The Electromagnetic Induction or “Slingram” Method
3 Processing Geophysical Maps
References
Index
Back Cover


Everyday Applied Geophysics 2

Series Editor André Mariotti

Everyday Applied Geophysics 2 Magnetics and Electromagnetism

Nicolas Florsch Frédéric Muhlach Michel Kammenthaler

First published 2018 in Great Britain and the United States by ISTE Press Ltd and Elsevier Ltd

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address: ISTE Press Ltd 27-37 St George’s Road London SW19 4EU UK

Elsevier Ltd The Boulevard, Langford Lane Kidlington, Oxford, OX5 1GB UK

www.iste.co.uk

www.elsevier.com

Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility. To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

MATLAB® is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not warrant the accuracy of the text or exercises in this book. This book’s use or discussion of MATLAB® software or related products does not constitute endorsement or sponsorship by The MathWorks of a particular pedagogical approach or particular use of the MATLAB® software.

For information on all our publications visit our website at http://store.elsevier.com/

© ISTE Press Ltd 2018
The rights of Nicolas Florsch, Michel Kammenthaler and Frédéric Muhlach to be identified as the authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.

British Library Cataloguing-in-Publication Data: A CIP record for this book is available from the British Library.
Library of Congress Cataloging in Publication Data: A catalog record for this book is available from the Library of Congress.

ISBN 978-1-78548-280-9

Printed and bound in the UK and US

Foreword

The scientific books published by ISTE Press include a multidisciplinary series called Earth Systems – Environmental Sciences, and it is in this context that today I present a work dedicated to geophysical prospecting and its applications, coordinated by Professor Nicolas Florsch. Its title, Everyday Applied Geophysics, deserves to be explained in more detail. First, we should recall the important role played in some scientific fields by the so-called “amateurs”. This is especially the case for astronomy, a field where a socioepistemology of amateur practices, whose main points can be summed up here, has been established. These amateurs are not organized to compete with professionals, as they evidently lack the skills and the necessary resources. However, this is not a case of popular science: their practices, beyond the understanding of the sky, stars and the universe, are active and mobilized by the desire to make discoveries. Astronomy is a science where amateurs can obtain significant observation data, which are very useful for scientists. On a smaller scale, some amateurs, for example, are quite involved in electronics and radio communication. However, so far this has not been the case for Everyday Applied Geophysics, a domain that has potentially numerous applications associated with the exploration of the near subsoil: looking for water, archeological remains, geological peculiarities, etc.


Moreover, making Everyday Applied Geophysics available for researchers based in developing countries is a challenge of the utmost importance. The goal is to open this field and allow everyone to employ the tools and methods used for exploring the near subsoil in order to highlight reservoirs or flow paths, locate holes, define geological stratifications, follow pollution plumes, search for archeological remains, etc. If curious and exploring amateurs may be involved, the main objective of the scientific community of these countries, which needs financially and technologically affordable tools, is to implement cheap and unsophisticated methods and techniques that, nonetheless, will produce plenty of essential data.

Let us provide an example to illustrate this point. Some geophysical devices cost up to tens of thousands of dollars (or euros) on the market; in this work, we will discover that with a few hundred euros, or even less, we can implement a system that, despite naturally being more modest in terms of data acquisition, allows everyone to carry out actual and effective subsurface geophysical prospection.

This work will also focus on the issue of self-learning. The existing literature does not tackle practical aspects either in terms of material implementation or basic interpretation concepts (actual resolution of the methods, sensitivity, etc.). This work is also very useful insofar as it can solve the problem of signal acquisition: it provides open-source “Arduino” solutions, supported by a downloadable program, for data acquisition in the field.

Thus, this work, which is unique in its genre and accessible to everyone (with a few more technical and/or mathematical boxed passages), bridges a double gap in the existing scientific literature by:

– providing accessible tools for the exploration of the near subsoil: from tools to acquisition systems (the latter being available with the use of computers), including a guide of free programs;

– providing practical information for implementation that cannot be found in other works, such as the design of devices (from electrodes to current flow, for example to carry out an electrical survey), the protocol for the creation of geophysical maps, etc.


We hope that this work reaches its audience and that the scientists who played a part in it may thus contribute to removing the ideological barrier between basic research carried out in the academic world and applied research, a gap between these two communities that has not yet been entirely bridged. Besides, helping the development of the environmental field should be invaluable for a large number of countries.

André MARIOTTI
Emeritus Professor at the Université Pierre-et-Marie-Curie
Honorary member of the Institut Universitaire de France

Introduction

Let us recall some introductory elements from Volume 1 in order to set the scene for this book. This is not a book on geophysics. If it were, it would be rather incomplete and well below the usual technical and mathematical standards for these kinds of works, which are vast (we provide a list at the end of this volume). It is not intended for experts in geophysics. Its aim is to make geophysical methods accessible to a wide audience, while simultaneously taking the digital landscape into account, which provides access to an ever-increasing and more accessible quantity of technical information. Thus, this series aims to complement what is already accessible, by attempting to define geophysical methods from a different perspective, and especially by bringing to light what cannot be (easily) found in the literature or online. As such, it is intended to be as practical as possible, and it must be used in conjunction with the curiosity of the reader, who should already have explored the field of geophysics a little but may be seeking something a little more hands-on. Some elements that were discussed in Volume 1 are universal and will not be repeated here (for example how to carry out a geophysical campaign and deploy measurements on the ground, prospection grid design, boustrophedon method). Similarly, some concepts will not be discussed here, such as the resistivity of rocks and the subsoil, which is alluded to in Chapter 2. The interested reader should be proactive and interactive; if a term or a concept is not familiar, the reader should refer themselves to a search engine. The authors have helped in this regard either with the symbol (#), which


means “look up this term in a search engine”, or if they seek to point the user in a specific direction, they provide a direct link (so long as it is not too long to transcribe for those reading the paperback version of this book). Without the reader’s active participation, reading this book would be too fragmented. This volume focuses on two geophysical methods, as well as presenting tools for the graphic representation of geophysical maps. As far as methods are concerned, we propose instrumental solutions that are accessible to an amateur, from the do-it-yourselfers to more technical readers. A reader who does not yet have any equipment should find skilled persons in his or her entourage to help build devices for which we provide the electronic diagrams, or they can train themselves online: the resources are infinite and the driving forces to carry out a project are ageless – curiosity and desire to create. And perhaps a little patience too.

1 Magnetic Methods1

1.1. Magnetism, the natural power for our compasses

Records suggest that the compass was invented in China, but the moment when the “magnet stone” (magnetite), with chemical formula Fe3O4, began to be used for navigation is unclear. Magnetite itself has been known since antiquity, since Pliny the Elder mentions it. “Magnetos” is actually the name of a Greek mountain that is rich in this mineral: magnetite is thus not a rare mineral. From there to placing a piece of magnetite on a floating splint and seeing that it always oriented itself in the same direction, chance undoubtedly lent a hand.

The needle of a modern compass is much more magnetic than a piece of magnetite. It is mounted on a support that lets it rotate freely; by convention, its north pole is the one that points to geographic North. Upon nearing the magnetic poles, this direction becomes variable and no longer has anything to do with geographic North. They say that the needle panics... well, at least it is not the navigator.

Let us not forget that it is not a simple force that makes our compass point North, but rather the action of two opposing forces about an axis (here, a forced axis, because it is embodied by the pivot of the compass), two forces that make the needle turn and which we describe as a single effect: a “torque”2.

1 In addition to this chapter, there are other documents available: ftp://geom.geometrics.com/pub/mag/Literature/AMPM-OPT.PDF.
2 https://en.wikipedia.org/wiki/Couple_(mechanics).


The effect of the Earth’s field on the needle is characterized by its intensity (a torque) and by the direction in which the needle points. The Earth’s field thus has an intensity and a direction: we will represent it by a vector3. The magnetic field vector has an application point (the compass, for example), a direction (horizontal, toward “magnetic” north) and an intensity (which could be characterized by measuring the torque acting on the needle, for example with a torsion balance4). Thus, the compass needle is subjected to the Earth’s magnetic field, which exists everywhere on Earth. If the needle were left truly free to move, it would also dip toward the ground: near the magnetic poles it would point almost straight down, and in France it dips at an inclination of about 60°.

1.1.1. To introduce the topic: an example of magnetic prospecting or mapping

1.1.1.1. Main principles

The basic idea that will be discussed in the chapters of this book is that the Earth’s field, which is essentially generated in the depths of the Earth’s core, comprises localized anomalies, which are determined by the presence of structures in the subsoil. It is these anomalies, in other words deviations from normal, that are mapped. By “normal”, we mean the value of the field that would exist if the structures of the near subsoil did not exist. There is always an a priori of scale that depends on the exploration goal, which is why this distinction between “normal field” and “anomalous field” is, by definition, subjective. At a geological scale of thousands of km2, an anomaly may have a multikilometer spatial dimension. This is, for example, the effect of a dike (even if it does not crop out). A well-known historical example of regional-scale prospecting is the ocean floor’s “conveyor belt”. It is rather curious that the wiki on “magnetic anomaly” almost entirely covers only that5, ignoring all prospecting with mining objectives, or more local prospecting (from the detection of metal objects to archaeology). If you type “magnetic anomaly ocean floor” into a search engine, you will also find a plethora of explanations and images.

3 In addition to Wikipedia, there are many accessible websites, such as https://www.mathsisfun.com/algebra/vectors.html.
4 https://en.wikipedia.org/wiki/Torsion_spring#Torsion_balance.
5 https://en.wikipedia.org/wiki/Magnetic_anomaly.


We leave it to the reader to explore these prospections on a regional scale online in order to leave us space to introduce prospecting on a hectare scale. This concerns the Gallo-Roman site of Barzan, in Charente-Maritime, France (Figure 1.1).

Figure 1.1. Magnetic cartography on the Gallo-Roman site of Barzan, Charente-Maritime, France. Compared to the aerial photo of Dassié found on the Wikipedia page about the site (https://fr.wikipedia.org/wiki/Site_gallo-romain_de_Barzan), the geophysical prospection extends across the cultivated part on the far right of the photo

The equipment used for this prospecting is an optically-pumped cesium vapor magnetometer, the G-858G6 from Geometrics. In this case, measurements were made along parallel rows spaced 1 m apart, at a rate of about 10 points per meter along many rows, in “walking mode”, using the boustrophedon protocol (see Volume 1, section 1.3). This first example shows a prospecting approach that essentially produces an image. We are more interested in the produced image than in the precise values of the field at each point. Indeed, because of the processing that the raw magnetic map has undergone to render the site in this form, the 6 http://www.geometrics.com/geometrics-products/geometrics-magnetometers/.


(magnetic) value scale no longer matters, and instead, it is the gray scale that matters. Let us consider this area on Google Earth (Figure 1.2).

Figure 1.2. This Google Earth image covers the same area as the cadastral extract where magnetic prospecting was reported. The archaeological structures are completely invisible. The images from the aviator, Dassié (link above), are meaningful in areas where agricultural decisions happen to favor aerial images. It is likely that electrical prospecting (see Volume 1) would also provide good results, but this was not implemented in Barzan
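Turning boustrophedon “walking mode” readings into a map image such as Figure 1.1 is essentially a matter of re-ordering and gridding the profiles. The following is a minimal sketch with entirely synthetic data (made-up spacing, anomaly and noise); it is not the processing actually applied at Barzan.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic walking-mode survey: profiles 1 m apart, ~10 readings per meter
ys = np.arange(0.0, 20.0, 1.0)        # profile (row) positions [m]
xs = np.arange(0.0, 30.0, 0.1)        # along-profile positions [m]
X, Y = np.meshgrid(xs, ys)

# Placeholder "ground truth": a wall-like anomaly plus instrumental noise
truth = 48000 + 20 * np.exp(-((X - 15.0) ** 2) / 4.0) + np.random.normal(0, 2, X.shape)

# Boustrophedon acquisition: every other profile is walked in the opposite
# direction, so its samples are logged in reverse order...
logged = truth.copy()
logged[1::2, :] = logged[1::2, ::-1]

# ...and must be flipped back before the map is assembled and displayed.
gridded = logged.copy()
gridded[1::2, :] = gridded[1::2, ::-1]

plt.imshow(gridded, origin="lower", extent=[xs[0], xs[-1], ys[0], ys[-1]],
           cmap="gray", aspect="equal")
plt.colorbar(label="total field [nT]")
plt.xlabel("x [m]")
plt.ylabel("y [m]")
plt.title("Synthetic walking-mode magnetic map")
plt.show()
```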

The structures revealed by magnetic prospecting testify to the Gallo-Roman installation, whereas aerial images, most of the time and depending on agricultural use, only “speak” when very precise conditions are met (if the structures are superficial enough to have an impact on the root layer).

1.1.2. Origin of magnetism

Let us begin with a fairly manual approach to magnetism and for this, let us consider a magnetic bar. Some bars come in two colors and are often marked: N in red and S in another color (often blue or white, for example).


As you bring the blue (south) end of the bar close to a compass, you will notice that it attracts the north pole of the compass. When you flip the magnet round, you will see that the north pole, usually painted red, attracts the compass’ south pole.

Physicists in the 18th and 19th Centuries studied magnetic forces closely, looking for magnetic monopoles analogous to those found in electricity for positive charges (like a proton or a Na+ ion) and negative charges (an electron, a Cl– ion). The forces they discovered were very much in agreement with Coulomb’s law (#); in other words, it was as if “magnetic charges” existed and could be positive or negative. As for electricity, opposite charges attract and charges of the same sign repel, and the force is inversely proportional to the square of the distance. But to carry out the experiments that led to these laws, researchers had to take into account that the positive charges and their negative counterparts remained inseparable.

Let us take one of these magnetized bars, with the poles clearly defined, and break it in two. You might think you would separate the two poles. But this is not the case: each half becomes a bipolar magnet, with its own south and north pole. Try again: the same thing will happen, just with a decreasing intensity of this bi-pole as the bar shrinks.

Well, it is time I told you the truth. In reality, magnetism does not exist... And that is not even a scoop, because I am just repeating what Albert Einstein already said! Fortunately, we can now easily look up his historical article from 1905 online, where he lays out the foundations of special (“restricted”) relativity7. The title of his publication did not yet include the word “relativity”, since it was just being born. His article is entitled “On the electrodynamics of moving bodies”. A moving body, electro- (charges) and dynamics (force). It is from this article that I want to share the following extracts written by Einstein, about two manners of expression, 1 and 2. Here they are:

1) if a unit electric point charge is in motion in an electromagnetic field, there acts upon it, in addition to the electric force, an “electromotive force” that, if we neglect the terms multiplied by the second and higher powers of v/c, is equal to the vector-product of the velocity of the charge and the magnetic force, divided by the velocity of light (old manner of expression);

7 http://hermes.ffn.ub.es/luisnavarro/nuevo_maletin/Einstein_1905_relativity.pdf.


2) if a unit electric point charge is in motion in an electromagnetic field, the force acting upon it is equal to the electric force that is present at the locality of the charge, and which we ascertain by transformation of the field to a system of co-ordinates at rest relatively to the electrical charge (new manner of expression).

Let us dismantle this. In manner 1, Einstein reports that a force is exerted on a moving charge in the presence of a magnetic field. This force does exist! It is called the Lorentz force8 and is written, in modern units, q ( E + v × B ), where q is the electrical charge of the particle in play, E is the electric field present, v is the velocity of the particle and B is, fittingly, the magnetic field. In manner 2, it is no longer about magnetism. The force exists, but it is merely a question of point of view when one changes reference frame. The force is relative to the frame of reference. This is an effect of relativity.

An even simpler point of view would be the following: consider two electrons separated by 1 m, for example. We know that they repel each other according to Coulomb’s classic law9. If you were to suddenly move one of the electrons, say number 1, the reciprocal force between them will change. But not instantly. The disturbance created by the displaced electron will only reach the immobile one at the speed of light (so in 1/300,000,000th of a second... which is pretty fast). It is worth noting that there is a sort of asymmetry: one electron is already disturbed while the other does not yet know that anything has happened. If we dive into the roots of the phenomenon, it is this kind of asymmetry that gives rise to magnetic sources.

You might be wondering why I am looking at Einstein’s work, as this is supposed to be a book on geophysics. Well, it is because I wish to lift the veil a little on this topic of magnetism, to remove some of its mystery (inevitably, to really pursue the topic, you had better be named Albert). So, let us quickly summarize what we know so far.
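As a concrete illustration of the Lorentz force q(E + v × B) quoted above, here is a minimal numerical check; the charge, velocity and field values are arbitrary examples of ours, with B of the order of the Earth's field.

```python
import numpy as np

# Lorentz force on a point charge: F = q (E + v x B)
q = -1.602e-19                      # charge of an electron [C]
E = np.array([0.0, 0.0, 0.0])       # electric field [V/m] (none here)
v = np.array([1.0e6, 0.0, 0.0])     # velocity: 1000 km/s along x [m/s]
B = np.array([0.0, 0.0, 47e-6])     # a 47 000 nT field along z [T]

F = q * (E + np.cross(v, B))        # force vector [N]
print("Lorentz force [N]:", F)      # ~ 7.5e-18 N along +y for these values
```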

8 https://en.wikipedia.org/wiki/Lorentz_force; but it may be a little overwhelming. In this book, we want to avoid an avalanche of equations. This link is only here for the sake of information. 9 https://en.wikipedia.org/wiki/Coulomb%27s_law.


1) magnetism is created by moving electrical charges. It is therefore always associated with an electric current, even if that current is sometimes a little difficult to identify;

2) if a current loop is created, the magnetic field will depend on the direction of the current in the loop. We can identify the direction of this field with Maxwell’s right-handed corkscrew rule10. Herein lies the asymmetry;

3) the point of view of two magnetic poles only persists as a historical residue of ancient studies in which researchers sought to understand magnetism by analogy with electrostatics. In reality, a current coil behaves exactly like a magnet, and there are no poles there... And since all magnetic fields are created by current loops, the very notion of a magnetic pole should not arise.

What Einstein said is that magnetism is a force of relativistic origin, which in classical physics is practical for calculating magnetic forces (in magnets, compasses, electromagnets). Now that we have stated the above, in order to conclude our introduction to the magnetic field, we must highlight a few things that are a mere matter of detail.

First, where does the magnetism of a magnet come from, this “magnet” that you stick on your fridge? There is no power supply, no battery, nothing... Well, actually there is! Electrons rotate around the nucleus of each atom: these are the equivalent of current nanocoils...11. The electrons themselves, and the protons, are all electric charges rotating on themselves. Matter is full of tiny currents. It is these complexly coupled currents that give matter its magnetic properties.

Second, it is often convenient to pretend that magnetic charges exist. This is very useful for calculations. It is not wrong to consider that one has two poles within a magnet in order to calculate the influence of this magnet on a

10 https://en.wikipedia.org/wiki/Right-hand_rule. 11 In writing this, I have probably really upset some quantum physicists, as they prefer to manipulate spins, wave functions and quanta. The perspective of classical physics provides images that are simplified and not always completely true. But these are mnemonics for the layman, of which the authors are included.


compass (for example). These calculations were the daily bread and butter before a certain Albert came along and shook the anthill.

1.1.3. Vector representation of magnetism and the magnetic dipole

Vector representation is universal for marking the direction of a force. When you carry a small mass at the end of a string, the vector is directed along the string; for you who holds the string, that force vector is directed toward the ground and has an intensity written P = mg, where, in the International System of Units (“SI”), P is the weight in newtons, m is the mass in kilograms and g is the acceleration of gravity (the well-known 9.8 m/s2). Ultimately, as the operator you exert an upwards force. There are two forces involved, and their sum is null since the object is motionless. The fact that “weight” is given in kilograms is a misuse of language, or rather, it is a convenience for everyday life; on the Moon, the force of attraction toward the ground would be six times weaker12.

But back to magnetism: we can compare the gravity field (near the ground or far from the planet) and the “dipole field”, which is created by a magnet and its two poles. For gravity, the “test mass”, as scientists say, is the mass that is attracted to the Earth. For magnetism, one might imagine an attraction of “magnetic masses” (+) and (–). This is not a completely silly idea because, to some extent, it is consistent with experience as long as you do not look too closely at a compass needle. The “test object” of magnetism is then the compass needle (in the absence of another magnetometer, because it is itself one). Figure 1.3 compares the geometries of a gravity field and a magnetic field (magnet).

Like electrostatic forces (identical charges repel each other, and opposite charges attract each other), the effect of a field line is to “turn” the needle: this is how one never loses North. An important property of magnetic field lines is that they close back on themselves, which is not the case for gravity fields.

12 A “Roberval balance” measures mass by comparison within a homogeneous gravity. Modern scales (like bathroom scales) measure weights but are graduated in mass.


Figure 1.3. a) The field created by a bar magnet. It gives the (misleading) impression that magnetic masses with opposite signs exist. Small compasses follow field lines. The figure rotates about the S-N axis. b) The classical image obtained by sprinkling iron filings on a sheet of paper over a magnetic bar. In (c) and (d), for the sake of comparison, we have represented the gravity field. In contrast to image (a), where only the field lines without intensity criteria are shown, the decrease in gravity is shown here as one moves away from Earth. In (d), on the human scale, gravity is almost constant (in direction and intensity). For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

1.1.3.1. The magnetic field and its intensity

Mass is to Newtonian attraction what the magnetic moment is to a magnetic dipole. The Wikipedia page on the magnetic moment will tell you everything you need to know on the topic. Here is what we want to take from it. First, we recall that a magnetic field does not exist. Sorry about that – it is very convenient to refer to it as such, but really it is created by currents! So, let us consider a circular loop of area S crossed by a current I. The magnetic moment is equivalent to:

M = S · I, in units of ampere square meters ([A m2]). If N turns are used, this value must be multiplied by N.


We then see how to characterize the field created by a magnet: we look for the amount of current a small loop would need to produce the same effect as the magnet. In both cases, the comparison is made along the axis (of the magnet bar or of the coil). An old procedure consists of observing how much a compass rotates on its axis (relatively) close to the magnetic source. A smarter procedure involves measuring the force exerted by the magnetic source on a wire carrying a current, a force that we have already mentioned: the Lorentz force. It needs to be adapted when considering charges moving within an electric wire. However, if one places a taut wire into a space containing a uniform magnetic field, the force exerted on this wire is given by (a macroscopic expression of the Lorentz force): F = B · I · L · sin(α),

where I is the current, L is the length of the piece of wire and (α) is the angle that the wire (the current) makes with what we will call the magnetic field, denoted as B. The field around an electric wire is represented in Figure 1.4.

Figure 1.4. Magnetic field around a taut wire

As such, for this relationship we define B = F / (I · L · sin(α)). Measuring B from a force and a current is the principle of an important historical instrument (which you can easily find online): the Cotton balance (#).
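As a quick numerical illustration of F = B · I · L · sin(α), of its inversion to recover B from a measured force (the principle of the Cotton balance), and of the coil moment M = N · S · I, here is a short sketch; all numerical values are arbitrary examples.

```python
import math

def wire_force(B, I, L, alpha_deg):
    """Force [N] on a straight wire of length L [m] carrying I [A]
    in a uniform field B [T], at angle alpha to the field."""
    return B * I * L * math.sin(math.radians(alpha_deg))

def field_from_force(F, I, L, alpha_deg):
    """Invert the same relation: B = F / (I * L * sin(alpha))."""
    return F / (I * L * math.sin(math.radians(alpha_deg)))

# Example: 0.05 T field, 2 A, 0.10 m of wire perpendicular to the field
F = wire_force(0.05, 2.0, 0.10, 90.0)
print("Force:", F, "N")                                  # 0.01 N
print("B recovered:", field_from_force(F, 2.0, 0.10, 90.0), "T")

# Magnetic moment of a small coil: M = N * S * I
N, S, I = 10, 1e-4, 0.5                                  # 10 turns, 1 cm^2, 0.5 A
print("Coil moment:", N * S * I, "A m^2")                # 5e-4 A m^2
```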


Watch out: tradition prevails over rigor. The correct term for B should not be “magnetic field”, but rather “magnetic induction”. Everyone is guilty of this mistake (even the Wikipedia page on magnetism (#)), so there is no need to be embarrassed. In Maxwell’s theory, H is the true magnetic field that produces the magnetic induction in vacuum or matter, according to: B = μH. Nowadays, we usually denote H as the “magnetizing field” and B as the “magnetic field”.

In general, μ is a tensor (#) (a small 3 × 3 matrix that has symmetry properties and can take the anisotropy (#) of a material into account, if there is any). But most often (in the absence of anisotropy), μ is reduced to a scalar μ (a simple real value). In a vacuum, we get μ = μ0 = 4π · 10−7 ≈ 1.26 · 10−6 (SI units).

The presence of matter allows us to write: μ = μ0 (1 + χ). The quantity χ, called the magnetic susceptibility, is fundamental for the prospector, since it reflects an increase (or decrease, depending on its sign) in the magnetic field due to the presence of matter. The value μr = 1 + χ is the relative magnetic permeability. And we say “field” here for what should really be “induction”.

Let us now compare the magnetic moment due to a current loop with the magnetic moment of matter. A body that becomes magnetized within a magnetic field H carries a magnetic moment per unit volume written:

M = χH.

If χ is a tensor, this is what gives μ its tensor property. But most often, we consider a scalar13.

13 But geomagneticians attach great importance to magnetic susceptibility anisotropy, which provides information about the “rock factory”, on the way it was made.


Thus, for a magnetic body (scalar or tensor case), we get:

B = μ0 H + μ0 M = μ0 (H + M) = μ0 (1 + χ) H = μ0 μr H.
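To fix orders of magnitude, here is a minimal sketch applying M = χH and B = μ0(1 + χ)H; the field value and the susceptibility (a plausible figure for a slightly magnetic soil) are example numbers of our own choosing.

```python
import math

MU0 = 4 * math.pi * 1e-7          # vacuum permeability [H/m]

def induced_state(H, chi):
    """Return (M, B) for a magnetizing field H [A/m] and susceptibility chi."""
    M = chi * H                   # induced magnetization [A/m]
    B = MU0 * (1 + chi) * H       # magnetic induction [T]
    return M, B

# An Earth's field of ~47 000 nT corresponds to H = B/mu0 ~ 37 A/m in air
H_earth = 47e-6 / MU0
M, B = induced_state(H_earth, chi=0.005)   # soil with chi = 0.005 (example value)
print(f"H = {H_earth:.1f} A/m, M = {M:.3f} A/m, B = {B*1e9:.0f} nT")
```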

1.1.3.2. Magnetism units

We have already encountered four quantities: the magnetic field H (or magnetizing field), the magnetic induction B (commonly called “field”), the magnetic moment (magnetic intensity of a current coil or of magnetic material), which is denoted M, and finally, the magnetic susceptibility χ. If you open any physics or geophysics book on magnetism, you will see that the “uem-cgs” unit system, which preceded our International System of Units, is stubbornly hard to shift. For electromagnetism, we specifically used to speak of “uem” for “unit of electromagnetism”. A conversion table is all the more useful as the old system used the same unit for B and M! It is also striking to see that most commercial data sheets, for example in the world of electronics, are still expressed in Gauss.

| Name and symbol | International System unit | Former “uem cgs” unit | Conversion |
| Magnetic field H | A/m (ampere per meter) | Oe (Oersted) | 1 Oe = 10³/(4π) A/m; 1 A/m = 4π × 10⁻³ Oe |
| Magnetic induction B | T (Tesla) | Gauss | 1 Gauss = 10⁻⁴ T; 1 T = 10⁴ Gauss |
| Magnetic moment M | A·m² | erg/Gauss | 1 erg/Gauss = 10⁻³ A·m²; 1 A·m² = 10³ erg/Gauss |
| Magnetic susceptibility χ | dimensionless | dimensionless | χ[SI] = 4π χ[cgs]; χ[cgs] = 1/(4π) χ[SI] |
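The conversions in this table are easy to encode. Here is a small set of helper functions (ours, not from the book) for reading data sheets that are still written in cgs units.

```python
import math

def oersted_to_A_per_m(H_oe):      # 1 Oe = 10^3 / (4*pi) A/m
    return H_oe * 1e3 / (4 * math.pi)

def gauss_to_tesla(B_gauss):       # 1 Gauss = 1e-4 T
    return B_gauss * 1e-4

def erg_per_gauss_to_A_m2(m_cgs):  # 1 erg/Gauss (1 emu) = 1e-3 A m^2
    return m_cgs * 1e-3

def chi_cgs_to_si(chi_cgs):        # chi[SI] = 4*pi * chi[cgs]
    return 4 * math.pi * chi_cgs

print(oersted_to_A_per_m(1.0))     # ~79.58 A/m
print(gauss_to_tesla(0.5))         # 5e-5 T = 50 000 nT
print(erg_per_gauss_to_A_m2(1.0))  # 1e-3 A m^2
print(chi_cgs_to_si(1.0))          # ~12.57
```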


1.1.4. The Earth’s magnetic field (or, more accurately, magnetic induction)

1.1.4.1. The global magnetic field

The Earth’s magnetic field is not caused by magnets in the center of the Earth. Besides, the magnetism of any such material would collapse within the depths of the Earth: the most important mineral, magnetite, sees its magnetizing capacity collapse to almost zero14 beyond its Curie temperature (#), at about 593°C (Figure 1.5). Thermal agitation becomes such that it prevents the elementary dipoles (linked to the crystals) from organizing and aligning themselves. Yet this temperature is already reached at a depth of less than 25 km.

Figure 1.5. Magnetization normalized to 1 as a function of temperature, for magnetite and hematite

Today, we know that electrical currents circulating in the liquid core function like a gigantic current coil. The magnetic moment of these currents, their dynamics and their maintenance are particularly complex theories. The terrestrial dynamo (#) is often mentioned. The rotation of the Earth and the permanent “solar wind” maintain this system, which is what causes the Earth’s field and incidentally, also the aurora borealis! 14 Magnetite goes from a state called “ferrimagnetic” (#), which has a strong propensity to magnetization, to a “paramagnetic” state (#), with weak magnetism. This transformation is reversible upon cooling.


Everything happens “as if there was” a gigantic magnet in the center of the Earth, one that was not very well aligned on the axis of the poles and, in reality, produced a field that is a little irregular and slowly variable over time (Figure 1.6). The Wikipedia page about “Earth’s magnetic field” contains several relevant pictures.

Figure 1.6. Simplified representation of the Earth’s magnetic field. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

To know everything (or almost everything) about the Earth’s magnetic field, we must refer to international cooperation tools, which propose spherical harmonic field models (#): https://www.ngdc.noaa.gov/IAGA/vmod/igrf.html. The field is the negative gradient of the potential described by the formula on this page. The local elements, which are described in the next section, can be calculated online, and the links are also available on this reference website: https://www.ngdc.noaa.gov/geomagweb/?model=igrf. The website https://en.wikipedia.org/wiki/Earth%27s_magnetic_field is also instructive.

1.1.4.2. The local magnetic field

This topic is interesting for prospecting. The field vector points northwards in this example (Figure 1.7). The field vector can be projected in the horizontal plane: this defines the declination, which is the angle that the projected vector makes relative to geographic North. The inclination is the angle that the field makes with the horizontal plane: this is about 64° in Paris.

Figure 1.7. Local elements of the magnetic field

The needle of a compass rotates horizontally around a vertical axis. It should point downwards, but its center of gravity must be well below its suspension point, and a – discreet – counterweight prevents the needle from pointing toward the ground (northward in the northern hemisphere). Some compasses have an adjustable counterweight. Others have a micro-cardan shaft system in their center, which ensures good function whatever the latitude (like Recta’s “Global System”).

Knowledge of the local field is not essential for prospecting, but it is useful to know where North is. In walking mode (see Volume 1) with a fluxgate gradiometer, the vertical pole of the magnetometer sways back and forth a little. The angular effect (fluctuation of the verticality of the pole) will be less pronounced (but not zero!) if you walk in the east–west direction than if you walk along the north–south axis.

However, as we will see later, the types of anomalies depend strongly on where we are on Earth. Everything is different if you are at a pole (where the field is almost vertical) or near the equator (where the field is nearly horizontal).
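Given the intensity, declination and inclination returned by the calculators mentioned above, the local field vector is easy to rebuild. A minimal sketch follows; the numerical values are illustrative only (roughly representative of metropolitan France), not official IGRF output.

```python
import math

def field_components(F_nT, D_deg, I_deg):
    """Convert total intensity F, declination D and inclination I
    into (north, east, down) components, all in nT."""
    D, I = math.radians(D_deg), math.radians(I_deg)
    H = F_nT * math.cos(I)          # horizontal intensity
    X = H * math.cos(D)             # geographic north component
    Y = H * math.sin(D)             # east component
    Z = F_nT * math.sin(I)          # vertical (downward) component
    return X, Y, Z

X, Y, Z = field_components(F_nT=47000.0, D_deg=1.0, I_deg=64.0)
print(f"North {X:.0f} nT, East {Y:.0f} nT, Down {Z:.0f} nT")
```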


The above-mentioned website provides a link to calculate the declination (and other parameters including inclination and intensity) by yourself wherever you are. Variation and inclination maps can easily be found online; but be careful because, as the fields change over time, these can become obsolete.

1.2. The magnetism of rocks and objects containing iron

1.2.1. The magnetism of rocks and magnetite

Magnetism, the science of currents that flow in all directions through matter (particles carry “spins”, which are rotating electrical charges), is a very complex discipline because the interactions between all these small elements are themselves complex. Most mechanisms were brought to light by the Nobel Prize winner Louis Néel. About the magnetic properties of matter, we could say that: “the whole is more than the sum of its parts”, meaning that there are properties that only become effective thanks to the large number of interactions, and it is difficult to “predict” the behavior of a complex whole15. The point of this book is not to discuss these theories, nor is it to go over all the ways in which magnetic signals work in nature. There are considerable resources in bookshops and online for this16. In fact, the only material that interests us here is magnetite. To better understand it, let us take a look at the susceptibility of some common minerals (in order of magnitude):

– Quartz: –0.00015
– Calcite (marble): –0.00009
– Coal: 0.00002
– Pyrite: 0.001
– Hematite: 0.006
– Magnetite: 5 and above!

15 This is the very question of “complexity” in the scientific sense of the term. Your eyes and brain reading these words are more than the molecules that make you up. 16 https://en.wikipedia.org/wiki/Rock_magnetism.


Magnetite is a fairly common mineral, especially in basalts and terracottas. Thus, considering a rock that is even a thousandth part magnetite (which is not much), its susceptibility will still be 0.005, which is as much as pure hematite, and much more than all the other minerals. (The second most magnetic mineral is ilmenite, which is not as common.) In short, it will be magnetite (in the rock) that magnetic prospecting detects, and for up to about 20% magnetite in a rock, the magnetic susceptibility of the rock is proportional to the concentration. In more detail, physicists use the terms diamagnetism (the case of quartz, with a negative susceptibility (Lenz effect)), paramagnetism, and several forms of ferromagnetism including ferrimagnetism (the case of magnetite). We will let the reader read up on these themselves; for us, in practice, magnetite is the only mineral that appears in variable proportions and has an effect on rock magnetism.
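The proportionality rule just stated can be turned into a one-line estimator. This is a rough sketch based on the text, taking the susceptibility of magnetite as about 5 SI and assuming the linear mixing only holds for small magnetite fractions.

```python
CHI_MAGNETITE = 5.0          # order-of-magnitude SI susceptibility of magnetite

def rock_susceptibility(magnetite_fraction):
    """Rough linear mixing estimate, valid for fractions up to ~20%."""
    if not 0.0 <= magnetite_fraction <= 0.2:
        raise ValueError("linear estimate only meaningful below ~20% magnetite")
    return CHI_MAGNETITE * magnetite_fraction

print(rock_susceptibility(0.001))   # a rock with 0.1% magnetite -> chi ~ 0.005
print(rock_susceptibility(0.01))    # 1% magnetite -> chi ~ 0.05
```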

Although one may be able to find tables on rock susceptibility in books on geophysics or online (it is basalts that hold the top spot), for the prospector on the ground, it is necessary to take into account the specificities of grounds where chemically reduced zones or zones that have experienced fires (from prehistoric furnaces to a potter’s kiln) can lead to higher concentrations of materials than elsewhere. Thus, magnetic prospecting is very useful for the archaeology of “fire arts”: workshops of potters, glassmakers and of course, metallurgists. Figure 1.8 shows an example of anomalies at an 18th Century glass-making site. The two large anomaly areas correspond to the bases of old glass furnaces.

Figure 1.8. Anomalies due to glass furnaces (largely leveled) in a Vosges forest (anomaly scale in nT). For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip


Apart from the remains of fire arts, it should be mentioned that the most magnetic mineral found in soils is often maghemite (#), which has the same chemical formula as hematite (Fe2O3) but has a crystalline structure similar to magnetite and, moreover, about the same susceptibility as magnetite. The action of fire sometimes reduces hematite to maghemite and then to magnetite (the “Le Borgne effect”), and this is what causes an increase in magnetic properties17.

Metallic iron is also sometimes present in soil: it is always of anthropic origin and can take the form of agricultural objects (ox or horse shoes, machine parts, fence remains) or sometimes military remains. In the latter case, prospecting can turn into demining, and the fingers of both your hands are not enough to count the geophysicists who, in the north of France, have accidentally stumbled upon unexploded shells or grenades. This is the field of expertise of “UXO” (unexploded ordnance), which constitutes a real market for magnetic prospecting (see for example http://www.gemsys.ca/unexploded-ordnance).

1.2.1.1. A safety rule in magnetism

Measuring a magnetic field is not dangerous. But you have to know where you are working. Needless to say, caution is required: one may accidentally stumble upon an unexploded ordnance – this has happened to two of the authors – but it is still unlikely, and even less probable that the munition will explode. But going explicitly in search of them exposes you to serious dangers by greatly increasing the probability of an accident. Indeed, there may come a time when one tries to extract the source of an anomaly from the earth, if only to identify it. As long as a weapon is in the ground, it is confined. Once extracted, this condition changes, sometimes after decades of being buried. This is the case with grenades for which the pin has been rusting for a long time, but for which the internal spring and detonator have remained intact. Simply taking the object out of the ground is what makes it explode! Platforms used for mountain metallurgy in the 16th Century were often reused to park artillerymen or others: what a godsend these flattened zones are! Do not become an involuntary deminer. Real deminers send robots or drones to the front line and are equipped with specialist devices.

17 For more information, see http://www.lancaster.ac.uk/staff/maherb/papers/MaherPalaeo 31998.pdf.


1.2.2. Induced magnets, remanent magnets

We have already seen how to define the magnetic moment of matter by relating it to the magnetic moment of a current coil. But while the field created by a current stops when we cut the current, we know that magnets produce a permanent field! You can even “magnetize” a needle (or screwdriver) by rubbing it against a magnet. These objects are said to have a remanent magnetization (#).

1.2.2.1. The hysteresis cycle

Experiments are so conventional and so well documented that we wonder if it is even worth covering them here. There are but a few key words to type into a search engine (hysteresis magnetism, etc.). I really like the Wikipedia graph shown in Figure 1.9.

Figure 1.9. The hysteresis cycle, image from Wikipedia. See comments in text below

On the x-axis, an “outer” field H is shown in A/m, which can be created, for example, by a pair of Helmholtz coils (#). On the y-axis, we see the magnetic induction B, in Tesla. We “cycle” the outer field along a sinusoid over time. Really, we should be looking at it in terms of frequency, because the behavior depends on frequency, but this is another topic that goes beyond


the scope of this book. So, let us say that this cycle occurs at 1 Hz, for the sake of clarification. The oscillation of H leads to an oscillation of B. The weak-field curve (up to about 10 A/m or so) leads to an amplitude of 0.3 T. We note that this is an ellipse. We would have the same elliptic curve if we took the following:

H = H0 cos(ωt) and B = B0 cos(ωt + φ).

If we draw this in an (H, B) diagram, we find that:

– if φ, which is a phase shift, is zero, we get a straight line that passes through the origin;

– if φ = π/2 (rad) = 90°, then the ellipse axes are parallel to the coordinate axes.

The value of the phase shift can be calculated, as shown in Figure 1.10.

Figure 1.10. Illustration of the phase shift between the applied field (abscissa) and the resulting magnetization (ordinate). For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip
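To make the weak-field description concrete, here is a minimal sketch (our own; the 20° phase lag is an arbitrary example) that traces the (H, B) ellipse for a 1 Hz cycle and recovers the phase shift from the value of B where H crosses zero, in the spirit of Figure 1.10.

```python
import numpy as np

H0, B0 = 10.0, 0.3               # weak-field amplitudes from the text [A/m], [T]
phi = np.radians(20.0)           # assumed phase lag (illustrative)

t = np.linspace(0.0, 1.0, 2001)  # one 1 Hz cycle, as in the text
omega = 2 * np.pi * 1.0
H = H0 * np.cos(omega * t)
B = B0 * np.cos(omega * t + phi)

# The (H, B) curve is an ellipse; its opening measures the phase lag:
# when H = 0, |B| = B0 * sin(phi), so phi can be read off that intercept.
idx = np.argmin(np.abs(H))                       # sample closest to H = 0
phi_recovered = np.arcsin(abs(B[idx]) / B0)
print(f"B when H = 0: {abs(B[idx]):.3f} T, "
      f"recovered phase: {np.degrees(phi_recovered):.1f} deg")
```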


The phase shift reflects a delay in magnetization. This is explained by the physical behavior of magnetic domains (#) or Weiss domains (#) and Bloch walls (#)18. Let us specify that one does not fundamentally need to know all these concepts in order to carry out a prospection. Bypassing the middlemen, let us consider the strong-field curve. The field H increases to about 140 A/m, but B saturates at 1.7 T. The entire matter is magnetized, and there is no way to increase the induction further.

Let us take a step back. When H goes back to 0, the induction is not zero: it is at BR, where “R” stands for remanent. It will remain as remanent magnetization, even in the absence of any applied external field. Strictly speaking, this is no longer induction. To remove this “remanent induction” (induction because it is B, not because it is still induced), you would have to apply a field H of the opposite sign: the coercive field, noted HC in the figure. But we are not so interested in this topic here. It is widely accepted that, in addition to the magnetic induction in the sense of B = μ0 μr H, the remanent magnetization must also be taken into account. And this will have its own properties: its own amplitude and its own direction. Therefore, the magnetic moment (per unit volume or for a piece of material) will be the sum:

M = Minduced + Mremanent

An important concept is the Koenigsberger ratio (#), often denoted Q:

Q = Mremanent / Minduced

In nature, this ratio is extremely variable, usually between 0 and 1, but in some cases, for example on lateritic crust, it can be much higher than 1. Most exploratory prospecting does not seek to determine this, but it would be a good research topic.

18 https://en.wikipedia.org/wiki/Magnetic_domain is a fascinating read for those who are more inquisitive.


One cannot bring up induced and remanent magnetization without mentioning that there are many types of remanent magnetization, and that sometimes the boundary between the two situations can become very blurred. Let us give an example (see also paleomagnetism (#), which strongly relies on these notions). Placing a sample of rock in a constant field for a long time (such as a rock that does not move in nature within the Earth’s field), the sample slowly and surely becomes magnetized. This is viscous remanent magnetization (VRM). This initial case already raises a question: is this magnetization, which slowly disappears in a null field, and which is parallel to the external field, induced or remanent? In a time frame of a few seconds, it is remanent. In the time frame of a century, we could say that it is induced. So, what the induced/remanent distinction does not say is that we must take the time scale into account, the duration over which we observe the phenomenon!

In general, a prospector is actually not equipped to answer the question of remanence versus induction. However, for the arts of fire, such as bricks and potters’ kilns, what is measured is most often thermoremanent magnetization (TRM), acquired at the time of cooling through the Curie temperature. For geologists, it is a godsend to be able to measure this TRM on volcanic rocks. For archaeologists, this leads to the discipline of archaeomagnetism (#). For environmentalists, magnetism and its types of remanence (notably detrital remanent magnetization (#)) make it possible to date sediments from lake floors or peat bogs, and it is a precious aid for studying past climates. The reader can easily read up about all these questions and search for links to find out more about geomagnetic reversal (#).

1.3. Magnetic anomalies and prospecting

1.3.1. The magnetic dipole and its field

Calculating the dipole’s vector field – which I propose to denote F (as it is a magnetic induction B) – is a favorite in publications on physics and can be easily found online. It can also be found in Breiner’s document19.

19 ftp://geom.geometrics.com/pub/mag/literature/ampm-opt.pdf.


It is conventionally expressed in a reference frame associated with the magnetic moment vector, as shown in Figure 1.11.

Figure 1.11. On the left is the reality of a magnet. On the right is a construction of the field for a dipole that is assumed to occur at a point20. The field revolves around the axis defined by the moment vector, and it only depends on the distance to the point of observation and the angle θ at which the point of observation is seen relative to the dipole axis. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

We get:

Fradial = (μ0 / 4π) (2M / r³) cos θ
Ftangential = (μ0 / 4π) (M / r³) sin θ

(we consider that in air, μ = μ0 = 4π · 10−7). The factor μ0 is there because F is a magnetic induction, still often wrongly called “magnetic field” (except when one is being accurate). The 1/(4π) factor comes from making the moment created by a current loop consistent with the moment of a magnetized body.

20 The use of point sources, which cannot exist for real, is an idealization of sources in order to attain mathematical representations that are useful for calculations. This is an approximation, but it is very close to reality, and is even more so with an increasing distance from the source. To think that the stars, which we see from afar as dots, are actually huge! As far as dipoles are concerned, there is an apparent paradox in speaking of a point dipole. But in reality, everything happens physically with a very small current loop... that we see from afar.


The field module is then obtained by the Pythagorean theorem: F = √(Fradial² + Ftangential²).

It is worth noting the decrease in field strength with the inverse of the distance cubed. This property distinguishes the magnetic field from electrostatic or gravimetric fields, for which the field decreases in 1/r². All calculations in magnetism are based on this double expression! For example, to calculate the field created by a three-dimensional object (for example, a sphere), we must integrate (sum) the elementary dipoles that occupy the whole volume.

It is also worth noting that physicists and mathematicians like to simplify things, with quantities that are barely more abstract than fields. On the one hand, they use the potential (which requires taking the negative of the gradient of this potential in order to obtain the field); on the other hand, they write the expressions of the field or the potential in such a way that they free themselves from the specific reference frame that defines the moment of the dipole.

We must then consider the dipole to be underground, where it is initially used to represent a “confined” body of a size smaller than its depth. This then requires calculating the anomaly it produces at the surface during a mapping operation. We consider a geometric situation where the source is at depth h and, for the sake of simplicity, is below the origin of a reference frame (O, x, y, z) at the surface, such that Ox points toward (magnetic) north, y points (magnetic) west and z points downward. We assume that the inclination is the angle I.

Let us shed some light on how to calculate and express the anomaly of a dipole buried at depth h, as seen by a magnetometer at a supposedly horizontal ground surface. First, if we are interested in the total field, we must consider that the module of the total field, which is the Earth’s magnetic field plus the anomalous field, is practically equivalent to the module of the Earth’s field plus the projection of the anomaly onto this field. In other words, we get:

Ftotal ≅ FEarth + A cos θ.

The figure below illustrates this, and is only valid if the field created by the source is small relative to the Earth’s field.


Thus, if A is the field created by the source, the anomaly – namely the difference between the total field Ftotal and the “normal” field FEarth – is not A but A cos θ, which is the projection of the anomaly vector onto the Earth’s field. This is valid as long as A ≪ FEarth, which is usually the case (for example, 100 nT of anomaly against 50,000 nT).

Let us now consider the calculation of the anomaly in this case more precisely. The orientation of the dipole is given by the unit vector m = (cos I, 0, sin I). The measurement point being at (x, y) on the surface, the vector that points to it from the dipole is r = (x, y, −h). The angle in the dipole formula is the angle between these two vectors; by the scalar product, we obtain:

cos θ = (x cos I − h sin I) / r.

As the two components of the dipole field are Fradial and Ftangential, the projection of the dipole field onto the direction of the Earth’s field is written: Fradial cos θ − Ftangential sin θ. The anomaly is then:

anomaly = (μ0 / 4π) [2 (M / r³) cos θ · cos θ − (M / r³) sin θ · sin θ],

which simplifies to:

anomaly = (μ0 / 4π) (M / r³) (3 cos² θ − 1).

We then just need to substitute cos θ = (x cos I − h sin I) / r and r = √(x² + y² + h²) into this expression to get the final expression of the total field anomaly.

With a fluxgate sensor, the vertical component is usually measured. This is given by:

(μ0 / 4π) (M / r⁵) [(2h² − x² − y²) sin I − 3xh cos I].


Of course, h is only the depth of the source if the sensor is placed on the ground! However, a sensor is always carried at a height H above the ground, and (h + H) must be substituted for h in our calculations. For a gradiometer, whether for the total field or for a single component, the difference between a sensor placed low and a sensor placed high is calculated. For example, for a fluxgate gradiometer with two vertical single-component sensors, the reported measurement will be:

F(low) − F(high) = (μ0 / 4π) M { (1 / rlow⁵) [(2(h + Hlow)² − x² − y²) sin I − 3x(h + Hlow) cos I] − (1 / rhigh⁵) [(2(h + Hhigh)² − x² − y²) sin I − 3x(h + Hhigh) cos I] },

where rlow = √(x² + y² + (h + Hlow)²) and accordingly for rhigh.
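The expressions above translate directly into code. The following sketch is ours, under the stated assumptions (induced moment M aligned with the Earth's field, x toward magnetic north, y toward the west, flat ground); it evaluates the total-field anomaly, the vertical component and a two-sensor fluxgate gradiometer reading along a north–south profile. The depth, moment and sensor heights are arbitrary example values.

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7   # vacuum permeability [H/m]

def total_field_anomaly(x, y, h, M, I_deg):
    """Total-field anomaly [T] of a buried dipole of moment M [A m^2],
    aligned with the Earth's field (inclination I), at depth h below the sensor."""
    I = np.radians(I_deg)
    r = np.sqrt(x**2 + y**2 + h**2)
    cos_t = (x * np.cos(I) - h * np.sin(I)) / r
    return MU0 / (4 * np.pi) * M / r**3 * (3 * cos_t**2 - 1)

def vertical_component(x, y, h, M, I_deg):
    """Vertical component [T] of the dipole field (fluxgate-type measurement)."""
    I = np.radians(I_deg)
    r = np.sqrt(x**2 + y**2 + h**2)
    return MU0 / (4 * np.pi) * M / r**5 * (
        (2 * h**2 - x**2 - y**2) * np.sin(I) - 3 * x * h * np.cos(I))

def fluxgate_gradiometer(x, y, h, M, I_deg, H_low=0.3, H_high=1.3):
    """Difference between a low and a high vertical-component sensor [T]."""
    return (vertical_component(x, y, h + H_low, M, I_deg)
            - vertical_component(x, y, h + H_high, M, I_deg))

# Example profile: dipole of 10 A m^2 at 1.5 m depth, inclination 64 deg
x = np.linspace(-10, 10, 11)                 # positions along magnetic north [m]
dT = total_field_anomaly(x, 0.0, 1.5 + 0.3, 10.0, 64.0)   # sensor carried 0.3 m high
grad = fluxgate_gradiometer(x, 0.0, 1.5, 10.0, 64.0)
print("Total-field anomaly [nT]:", np.round(dT * 1e9, 1))
print("Gradiometer reading [nT]:", np.round(grad * 1e9, 1))
```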

Section 1.3.5 shows some anomalies calculated using this formula.

1.3.1.1. Taking the real shape of bodies into account

The magnetic property of a body is its magnetic moment per unit volume, often denoted J. For a body of volume V, the total moment will then be M = VJ. There are some complications, however, notably with what is called the “demagnetizing field” (#), which simply leads to an apparent attenuation of the magnetic moment density (for more – technical – details on this, which we can do without here, see footnote21).

A complexly shaped body can always be divided into small cubes. Obtaining the field of the complex body then requires adding together the vector contributions of all these cubes (a small numerical sketch of this summation is given below). For a uniformly magnetized volume, when one pushes the calculation through (by integral calculus), something that may seem strange at first appears: everything happens as if the body were empty but carried positive monopoles on one part of its surface and negative monopoles on the other... Leaving the dipole aside for a moment, let us see how these magnetic monopoles, which are only apparent, arise. To do this, we must keep in mind that to make a dipole, a magnetic mass “+” and a magnetic mass “–” can be taken and shifted a little, as shown in Figure 1.12.

21 http://www.dsf.unica.it/~fiore/libricorsoptr/coey-magnetism.pdf.
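The "divide into small cubes and sum" idea can be sketched in a few lines of code. The following is our own rough illustration (uniform magnetization J, no demagnetizing-field correction, cell size and body geometry invented for the example), reusing the point-dipole total-field anomaly given earlier.

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7

def dipole_anomaly(dx, dy, dz, m, I_deg):
    """Total-field anomaly of one elementary dipole of moment m [A m^2],
    seen from a sensor offset (dx north, dy, dz = depth below sensor)."""
    I = np.radians(I_deg)
    r = np.sqrt(dx**2 + dy**2 + dz**2)
    cos_t = (dx * np.cos(I) - dz * np.sin(I)) / r
    return MU0 / (4 * np.pi) * m / r**3 * (3 * cos_t**2 - 1)

def body_anomaly(x_obs, body_cells, cell_volume, J, I_deg):
    """Sum the contributions of all cells (each carrying moment J * volume)."""
    total = 0.0
    for (cx, cy, cz) in body_cells:            # cz = cell depth below the sensor
        total += dipole_anomaly(x_obs - cx, -cy, cz, J * cell_volume, I_deg)
    return total

# Example: a 2 m x 2 m x 1 m slab, top at 1 m depth, J = 0.2 A/m, 0.25 m cells
step = 0.25
cells = [(cx, cy, cz)
         for cx in np.arange(-1 + step / 2, 1, step)
         for cy in np.arange(-1 + step / 2, 1, step)
         for cz in np.arange(1 + step / 2, 2, step)]
profile = [body_anomaly(x, cells, step**3, 0.2, 64.0) for x in np.arange(-5, 5.5, 1.0)]
print(np.round(np.array(profile) * 1e9, 2))    # anomaly in nT along a N-S profile
```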


Figure 1.12. A uniformly magnetized body behaves in the same way as a body that only carries “magnetic charges” on the surface. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

Let us apply this to a dipole generation created by a larger body, as shown in Figure 1.13 in the middle and at the bottom. We take a positive “potato” and a negative “potato”, and we superimpose them with a very small offset. In the common area, there will always be some compensation between positive and negative masses – everything happens as if there were no charge inside. The positive magnetic masses remain on one side and negative ones on the other, but only on the surface. This property is demonstrated mathematically (by three-dimensional integration by parts), which assumes that the magnetization inside the body is homogeneous. The mathematics does not suggest that the inside of the body is not magnetized (that would be the role of physics, not mathematics), but that, from the outside, we see (we magnetically undergo) exactly the same thing for a body that is uniformly magnetized as for a body that is only magnetized on its surface – with virtual magnetic monopoles! Let us apply the same principle to the model of a magnetic bar or plate, inclined like the Earth’s field (or vertically), which would be present in the subsoil22. Figure 1.13 depicts the situation.

22 An iron bar or, in geology, a dike.


Figure 1.13. Magnetically susceptible bodies within the Earth’s field. The core of the structure disappears to the benefit of the surface (all this happens hypothetically). For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

Let us consider these situations, starting with the most impressive case: case (b), where the body is parallel to the field. Everything occurs as if there were two poles far apart from each other. Seen from far away, the field of this "bi-pole" varies as it should, in 1/r³, but this is not the case up close, that is, when prospecting at the ground surface. Indeed, the negative pole is much closer to the sensor than the positive pole! Moreover, the field of a monopole decreases as 1/r². Thus, the positive pole will be about eight times further away than the negative pole, and the effect of the positive pole will therefore be 8² = 64 times weaker than that of the negative pole! So, the prospector will pretty much only see the negative pole, as if there were an isolated negative monopole. In reality, this is not the case: the counterpart does exist, but it is much further away. Watch out: there is a negative pole near the surface, and it attracts the field lines! We therefore have a strongly positive anomaly. The other cases are similar. The horizontal body will lead to anomaly peaks at both ends. We invite the reader to look at other case studies through the programs provided at https://github.com/NicolasFlorsch/geophysics.


Let us just give a quick reminder to have a look at a book that we have already mentioned: "Applications Manual for Portable Magnetometers" by S. Breiner, which can easily be found online. It may seem "old-fashioned", but it is particularly rich in practical explanations and complements our book very well.

1.3.2. Implementing a magnetic mapping survey

Magnetic anomalies have lateral dimensions (or "widths", defined with respect to their amplitude) that essentially depend on the depth of their source. For objects that are wider than they are deep, the anomaly will be visible over this whole area. Some striking examples of these features can be seen in Figure 1.1 of the introductory example. For the slab visible at the bottom of that figure, the extent is well delimited, but its depth determines how quickly the signal passes from anomaly to no anomaly at its edges. We recommend having a look at the excellent figures in the book cited above, which are readily available (Breiner's aforementioned document). Page 23 of Breiner's document offers anomaly profiles over different structures, page 25 shows classical anomaly shapes, and pages 29 and 31 define, very graphically, the "width" of anomalies – in the following figure, we see that how it is defined depends on the shape of the anomaly itself. Let us retain the following few elements, which are useful even if approximate (they are shown in Figure 1.14, modified from page 30 of the cited reference):
– to correctly identify an anomaly, its reference level must be identified: its anomaly zero, which is also the mean value of the surrounding field (with or without a regional trend);
– we often define the width of an anomaly, X, as follows: we divide its maximum (relative to its reference level) by 2, and we retain the distance that separates the two points identified on the slopes of the anomaly at this half height. X1/2 = X/2 is the "half width";
– the anomaly width X (or the half width) can be used as an approximation of the anomaly depth23.

23 There are so many rules for determining the depth of a source. All of them presuppose a shape at the source which, by definition, is unknown. We must therefore consider that these rules are only semi-quantitative, indicative.


Figure 1.14. Defining the half width X1/2 and anomaly width X = 2X1/2 and the approximate rule of association with depth
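The half-width rule can also be tried numerically. The sketch below is only an illustration: it uses a generic bell-shaped profile (not a specific source model), takes the median of the profile as the reference level, and shows that the width at half maximum comes out within a factor of roughly 1.5 of the depth, in the semi-quantitative spirit of footnote 23.

```python
import numpy as np

def half_width(x, profile):
    """Full width of an anomaly at half its maximum, measured above the
    local reference level (here the median of the profile)."""
    ref = np.median(profile)
    a = profile - ref
    half = a.max() / 2.0
    inside = x[a >= half]
    return inside.max() - inside.min()

# Generic bell-shaped anomaly over a source at depth d (a crude stand-in for a
# real profile; only the order of magnitude matters)
x = np.linspace(-10.0, 10.0, 2001)
for d in (0.5, 1.0, 2.0):
    profile = d**2 / (x**2 + d**2) ** 1.5
    print(f"depth {d:3.1f} m -> width at half maximum {half_width(x, profile):.2f} m")
```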

We distinguish between different types of operating modes depending on the objective, as summarized in the following table:

Objective | Geological | Environmental | Archaeological
Spatial dimension of anomalies | 100 m–100 km | 1 m–100 m | 0.1–10 m
Grid | Irregular, hectometric to kilometric | (±) Regular, metric | Regular, metric, 0.2–2 m
Technology, sensor height | Total field; often airborne | Total field or pseudo-gradient; sensor height 1–3 m | Pseudo-gradient, sometimes total field; up to 20 cm from the ground
Typical targets | Volcanism (in the broad sense) and structural geology | Metal waste, landfills | Fireplaces, fire arts (potters' and metallurgists' ovens), building foundations and prehistoric installations
(Typical) size of anomalies | 10–1,000 nT | 10–10,000 nT | 0.1–100 nT


1.3.2.1. Adaptation of the grid

The issue of over-sampling and under-sampling was addressed in Volume 1, section 1.3.2.3. In magnetic prospecting, however, measurements are not taken as close to the ground as possible, and this should be taken into account. Indeed, the anomaly widths are, as highlighted above, of the same magnitude as the anomaly depth and, for the latter, it is not the true depth in the ground that matters but the vertical distance between the measurement sensor and the structure. In order of magnitude, we can write: anomaly width = depth of structure + height of sensor above the ground. But the depth of the structures is unknown by definition, so instead we have the inequality: anomaly width ≥ sensor height above ground. One can show mathematically that, for a given grid (a step Δx, and Δy if applicable), a sensor height equal to this grid step does not produce more than about 10% error on the anomaly. For an image like the one in the example at the beginning of the book, this will not affect our interpretation much. This is all the more true since the structures are never totally superficial, so a certain margin exists. If, on the other hand, fine interpretations of an anomaly must subsequently be made, we recommend taking a measuring step of at most half the height of the sensor above the ground. The rule can ultimately be written: measurement grid spacing (in x and y) ≤ height of sensor above ground. The boustrophedon U-turn prospecting method (see Volume 1, Figure 1.9) is carried out in "walking mode", which is a compromise with respect to this rule. Walking at 3.6 km/h with an acquisition rate of 5 measurements per second leads to a step of 20 cm along y. Profiles are created, say, every meter, so that we get a final grid of 0.2 m × 1 m. Usually, this is done with gradiometers (see Figure 1.19), with the lower sensor located 30 to 50 cm above ground.
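For bookkeeping, the walking-mode figures above can be checked in a few lines; all values are the assumptions stated in the text, and the printed check simply compares each grid step with the sensor height, the rule that walking mode deliberately compromises in x.

```python
# Walking-mode bookkeeping (values assumed from the text)
speed_kmh = 3.6                     # walking speed
rate_hz = 5.0                       # acquisition rate (measurements per second)
step_along_y = speed_kmh / 3.6 / rate_hz   # metres between samples -> 0.2 m
profile_spacing_x = 1.0             # metres between profiles
sensor_height = 0.4                 # metres above ground (30-50 cm in practice)

print(f"grid: {step_along_y:.1f} m (y) x {profile_spacing_x:.1f} m (x)")
print("rule 'spacing <= sensor height' met in y:", step_along_y <= sensor_height)
print("rule 'spacing <= sensor height' met in x:", profile_spacing_x <= sensor_height)
```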


Under-sampling is a more critical issue for larger grids. In a 5 m x 5 m grid adapted to surface geology, the sensor would need to be 5 m above ground: this becomes impractical. To be more accurate, the risk lies in passing directly above a shallow structure and recording a very localized signal, which would then contaminate the whole map. One technique is to quickly explore the field values a short distance around the measurement point: if the field does not change, under-sampling has likely not occurred there. On a geological scale, prospecting is only effective from the air: with a 100 m grid and an aircraft flying at 200 m, measurements remain redundant even at 300 km/h (or faster). Airborne magnetic prospecting is widely used in mineral exploration24. Finally, how can we not mention the developments made possible by UAVs? With fluxgate sensors weighing a mere few grams and being quite cheap, many systems have recently been developed. They permit low-altitude flights and fast acquisition over large areas. This is very relevant for environmental targets but a little more difficult for archaeology, which requires precise work no higher than 50 cm from the ground.

1.3.2.2. Prospecting and yield: a squared law

One might think: why not use a finer grid than that recommended above, in order to protect against the possible inconveniences of under-sampling, however mild they may be? Suppose we initially plan to create a 1 m x 1 m grid map, point by point, which is what a wooded environment would require even in "walking" mode. For a 50 cm x 50 cm grid, the measurement time is not multiplied by 2, but by 4. And at 25 cm x 25 cm, it would take 16 times longer. Prospecting one hectare (2.47 acres) with a 1 m x 1 m grid point by point already takes over two hours in practice. We then begin to see why a compromise must be sought between speed and accuracy.

24 A very complete and rich reference on airborne prospecting (and magnetic prospecting in general) is available here: http://www.geosoft.com/media/uploads/resources/technical-papers/Aeromagnetic_Survey_Reeves.pdf. Moreover, this thesis, http://www.fedoa.unina.it/10834/1/PhDThesis_DomenicoDiMassa.pdf, is also rich in various information (it is in English – just the title is in Italian).


1.3.3. Taking natural time variations of the field into account (and other "drifts")

The magnetic field varies over time. We do not mean the "secular variation", which is not very significant for a prospection of a few days (1,000 nT in 50 years, or 0.05 nT/day) and is caused by slow variations in the current patterns of the Earth's core. On top of this, there are rapid variations, with two possible patterns: regular variations or stormy variations25. These variations are produced by currents located high in the ionosphere and are largely governed by solar activity. Figure 1.15 illustrates such variations26.

Figure 1.15. Time variations of the magnetic field (taken from the aforementioned book by S. Breiner)

25 https://www.oa-roma.inaf.it/cvs/variazione_ev.html. 26 See NASA’s relevant website: https://www-spof.gsfc.nasa.gov/Education/wmagstrm.html. See also https://www.nasa.gov/mission_pages/sunearth/spaceweather/index.html.


At a scale of 100 km², these variations are quite similar. Here are three procedures for removing these time variations of the field, which are obviously very troublesome during a survey (if nothing is done about them).

PROCEDURE 1.1.– Only one sensor is available. This method involves first establishing a base station to which you return regularly (here, "regularly" depends on the level of accuracy sought). Figure 1.15 shows that an interval of 5 min between base measurements is already insufficient. These regular measurements, interpolated between points, allow the "drift" of the magnetic field (at a fixed point) to be traced. It is this variation, interpolated as required, that is subtracted from the prospecting points on the map. The times of all measurement points and of these base readings should be recorded, unless measurements are made at very regular intervals (in which case it is not necessary to record the times, but only the order of the measurements).

PROCEDURE 1.2.– Two sensors are available. One sensor is placed as a "base station", where it continuously records the field (for example, every 5 s). The prospecting is carried out normally (including recording the measuring times) and, once back at the office, we subtract the variation of the field from the values measured on the map. We must not forget to synchronize the clocks of both systems.

PROCEDURE 1.3.– Two sensors are mounted as a pseudo-gradiometer on the same rod, with for example one sensor 50 cm above ground and the other at 1.5 m. Then, for each point, we calculate the difference between the bottom sensor (closer to the sources) and the top sensor. As the time variation is the same for both sensors, it is automatically eliminated. The pseudo-gradient is discussed in the following.

1.3.4. Total field, pseudo-vertical gradient of total field and pseudo-vertical gradient

Since the magnetic field is a vector quantity, each of its components can be taken into account separately (for example, along north, east and vertical axes) or as a modulus. On a strictly mathematical level, we can show that these choices are almost equivalent (we can switch from one to the other with specific mathematical operators and within certain limits of


accuracy). More often than not, practical contingencies guide these decisions, in particular instrumental constraints (which are sometimes economic, or matters of weight and bulk). Section 1.4.1 describes the types most used in prospecting. You can also refer to the documents that have already been mentioned. Let us concentrate here on two types of measurements: total field and "pseudo-gradiometry".

1.3.4.1. Total field measurements

Let us begin by recalling the intensity of the Earth's field, as well as the size of the anomalies we seek. The Earth's field is about 48,000 nT in Paris, about 30,000 nT at the equator and 60,000 nT at the poles. Prospecting on ordinary terrain (non-volcanic, otherwise it will be more strongly magnetic) will usually show that the field varies by just a few nanoTesla. Most often, anomalies from interesting structures (geological or not) range from a few nanoTesla to a few hundred nanoTesla. These quantities and variations are the usual domain of proton and optically pumped magnetometers, which are also the most widely used devices (we will not review all the different types of existing magnetometers, just some of the more popular ones). The very first proton magnetometers had a resolution27 of a few nanoTesla. This was due, on the one hand, to the Larmor frequency (#), which is close to 2 kHz, and, on the other, to the relaxation time of the proton signal, which is around 2 s, so that counting the number of pulses over 1 s, to the nearest pulse, leads to an accuracy of about 1/4,000. At our latitudes, this gives 48,000/4,000 = 12 nT, which is not very precise. One electronic solution was to integrate a PLL frequency multiplier, usually by a factor of 64, which leads, for one second of signal, to 12/64 ≈ 0.2 nT. This is usually more than enough resolution but, unfortunately, to achieve it each measurement has to last 2–3 s, so that the point-by-point mode (see Volume 1, section 1.3.4) is the only possible option. Figure 1.16 shows a point-by-point prospection using a proton magnetometer. The map obtained is shown in Figure 1.17.

27 It is difficult not to mix up everything when it comes to the quality of devices. Here are some useful reminders: https://en.wikipedia.org/wiki/Accuracy_and_precision; https://meettechniek.info/measurement/accuracy.html.


Figure 1.16. Proton magnetometer prospection, point-by-point. The ground grid is 2 m x 2 m. The sensor is 2.5 m above ground and, at this height, avoids aliasing problems with the grid (see Volume 1, section 1.3.2.3). The objective here is geological structural reconnaissance: small surface anomalies will not be visible


(WGS84 – UTM zone 31 projection)

Figure 1.17. Example of a total field anomaly map obtained during the prospecting illustrated in the previous figure. The area at the bottom left of the map, which is heavily anomalous, corresponds to the presence of a mineralized vein. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

Optically pumped magnetometers have revolutionized total field measurement. They allow time sampling at several tens of hertz, with a resolution and an accuracy that is better than 0.1 nT. But they are heavier and more expensive, and more fragile. They are widely used for the


environment (ferrous waste, archaeology), and the example in Figure 1.1 was obtained with such a magnetometer in the 1990s. Whether a proton or an optically pumped magnetometer is used, the measurement is a frequency measurement and can benefit from the accuracy of quartz time bases. This makes them "absolute" instruments, with no bias on the field other than the clock error. "Fluxgate" magnetic sensors provide the field component along their sensitivity axis. Thus, another procedure for obtaining the total field consists of using a fluxgate magnetometer with three components (X, Y, Z) and calculating the total field defined by the Pythagorean root: T = √(X² + Y² + Z²)

This solution seems ideal, but fluxgate sensors come with several biases, which make the task a little more complicated. Fluxgate sensors must be calibrated for sensitivity, as a mere 0.1% error leads to an error of 50 nT (in order of magnitude). Their measurement direction is difficult to define to better than one degree, which leads to a bias of the same size. Finally, and this is the most critical part, they present an "offset" that can reach 100 nT and is not necessarily stable over time. Thus, the quantity "total field", which should not depend on the orientation of the sensor at a given point, actually depends significantly on it. The three-component fluxgate can still be used, provided that it is regularly calibrated so that T is invariant when the sensor rotates on itself. This procedure is described at the end of this chapter.

1.3.4.2. Can a simple fluxgate sensor be used for total field or component measurement?

Let us not forget that the sensor measures the field component in the direction of the sensor (of its magnetic core, to be accurate). To measure the total field, we just have to point the sensor along the field direction. This can also be done at a fixed station, although this direction varies a little because of the time variations of the field. Ultimately, we would make only a second-order error if we did not correct for the variation of direction of the field, since we would in fact be measuring T_α = T cos α ≈ T(1 − α²/2 + ...), with α being the angle


between the instantaneous field and the direction of the sensor. For one-tenth of a degree, which is typical for a small magnetic storm, or about 1/600th of a radian, the relative error would be around a millionth of the Earth’s field, or in absolute terms in France, less than one-tenth of a nanoTesla. Paradoxically, for a measurement in a direction other than that of the Earth’s field – for example for the vertical component – such a variation in angle would result in a first-order error, as shown in Figure 1.18.

Figure 1.18. We can compare (a) the total field measurement on the left; and (b) the vertical component measurement on the right. For a total field measurement, the device measures the total field OT and with the normal field being OC, the anomaly is the difference OT-OC, which is very close to the CD projection (see section 1.3.1). The orientation of the sensor is of little or no importance. For a fluxgate measurement, usually the vertical component of the field is measured. The device gives OT’ and the anomaly is the value of the total field projected onto the vertical axis of the sensor, from which we remove the value in the absence of anomaly OC’, which leaves C’T’. The result can be counterintuitive: here, C’T’ is bigger than the CD projection. The component would be larger than the total field! This is because in total field, the anomaly does not concern the field itself, but the projection of the total field onto the Earth’s field. Finally, for fluxgate, the non-verticality of the sensor leads to a significant variation of the anomaly

But this is valid for a sensor that is fixed to a frame in a fixed station. When prospecting, it is just as impossible to keep a sensor in the direction of the total field as it is to keep it exactly vertical. This time though, it is no longer the direction of the field that is important to us, but the direction of the sensor, which changes throughout the walk by a few degrees according to the steps. The geometric situation in Figure 1.18 is still valid. Let us consider this for the vertical component. Suppose that the total field T makes


an angle I (inclination) with the vertical. The vertical component is "normally" T_V = T cos(I). But if the sensor, within the vertical plane containing the Earth's field, makes a small angle ε with the vertical, the fluxgate sensor will measure:

T_V = T cos(I ± ε) ≅ T[cos(I) ∓ ε sin(I)]28, and the sign depends on whether the small angle moves the sensor away from or closer to the Earth's field. The absolute error is therefore (in absolute value) ΔT = T ε sin(I). With T = 48,000 nT, ε = 1° ≈ 1/57 rad and I = 64° (in Paris), this makes about 15 nT or so. For 2 or 3 degrees, which would be difficult to maintain while walking, it will be double or triple, which is unacceptable. "Pseudo-gradient" measurements allow us to overcome this difficulty.

1.3.4.3. Vertical "pseudo-gradient" measurements

This typically involves placing two identical sensors one above the other at a distance d, with the higher one denoted, say, H and the lower one denoted L, and forming the difference:

G = (L − H)/d.

This looks like a vertical derivative but, with the distance between sensors (d) not tending toward 0, this quantity cannot really be called a "vertical derivative". As the terminology in this mode has not yet been set in stone, we will continue to call it "pseudo-gradient"29. An initial advantage of this method of differences is that it frees us completely from time variations in the Earth's field. Indeed, these variations affect both sensors in the same way and disappear in the difference. Another feature of this device is that it is more sensitive to superficial structures than a single sensor. This is due to the 1/r³ decrease of the magnetic field as one moves away from a point source. To clarify this, let us

28 This is obtained by a first-order development of the Taylor formula: f(x + ε) = f(x) + ε f′(x) + ..., with ε small. The derivative of cos being –sin justifies the change from the symbol ± to ∓.

29 https://www.researchgate.net/publication/266634737_Investigating_Pre-Columbian_Ceremonial_Features_at_El_Cano_Archaeological_Site_Panama_through_Geophysical_Surveys is also a good example of prospection.


imagine two sources, one 30 cm deep and the other 1 m deep. Let us also suppose that sensor L is 20 cm above ground and sensor H is 1 m above ground. Up to a multiplicative constant (which mainly depends on the intensity of the source), we will get, for the structures at 30 cm and 1 m depth respectively, and taking the sensor heights into account:

L_30cm = 1/0.5³ = 8
L_100cm = 1/1.2³ ≈ 0.58

The structure at 30 cm gives an anomaly 8/0.58 ≈ 14 times stronger than the one buried at 1 m. The same calculation for the H sensor gives a much lower ratio, close to 4:

H_30cm = 1/1.3³ ≈ 0.46
H_100cm = 1/2.0³ ≈ 0.13

But the pseudo-gradients are what interest us most. We get:

L_30cm − H_30cm = 8 − 0.46 ≈ 7.5
L_100cm − H_100cm = 0.58 − 0.13 ≈ 0.46

The most superficial structure is this time seen 7.5/0.46 ≈ 16 times better than the deeper one.

1.3.4.4. In a nutshell, the two-sensor system sees superficial structures much better

Another way of considering this is to see the difference as an approximation of the derivative. This will be all the more true if the distance d between the sensors is small or if the sources are further away. So, whereas for a single sensor the decrease is in 1/r³, for two sensors it will be more like the derivative of 1/r³, that is 1/r⁴, which is a much faster decrease with distance.
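The ratios computed above are easy to reproduce. The short sketch below assumes the sensor heights and source depths stated in the text (sensors 0.2 m and 1 m above ground; sources 0.3 m and 1 m deep) and a pure 1/r³ decrease; the constants are arbitrary, so only the ratios are meaningful.

```python
# Relative sensitivity (arbitrary units) of the bottom sensor, the top sensor
# and their difference, for the two sources discussed in the text.
h_low, h_high = 0.2, 1.0          # sensor heights above ground (m)
for depth in (0.3, 1.0):          # source depths (m)
    low = 1.0 / (depth + h_low) ** 3
    high = 1.0 / (depth + h_high) ** 3
    print(f"depth {depth} m:  L = {low:5.2f}  H = {high:5.3f}  L-H = {low - high:5.2f}")
```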


Another advantage of this property is that deeper structures (at geological depths of, say, 10 m and more) produce anomalies of long spatial wavelength, referred to as "regional anomalies". Using the pseudo-gradient almost completely eliminates this "regional". Finally, there are mainly advantages to working in pseudo-gradient, except when exploring geological situations (such as the presence of magmatic intrusions, for example). The pseudo-gradient can concern either total field measurements (each sensor gives the total field) or component measurements (with two vertical, or other, fluxgate sensors arranged one above the other30 (or otherwise!)). Figure 1.19 shows a prospection with an optically pumped cesium vapor magnetometer (a Geometrics G-858G mounted for the pseudo-gradient) in walking mode. What we get here is a difference between two total fields.

Figure 1.19. Magnetic prospecting with two sensors mounted in pseudo-gradient, here on a Geometrics G858-G

Let us take it a step further for the fluxgate mounted as a gradiometer, because the limitations we described concerning the accuracy of the

30 The fluxgate provides a component that depends on how the sensor is oriented. Gradiometry therefore involves two axes: the sensitivity axis of each sensor, depending on how it is oriented, and the axis that separates the two sensors. Most often, the sensors are arranged for the vertical component, and both sensors are arranged on the same vertical pole. We then obtain the vertical difference of two vertical components.


angle of the sensors relative to the vertical almost no longer arise. Indeed, we saw that for a single sensor the calculated error ΔT = T ε sin(I) was about 15 nT or so for 1° of verticality error. For the pseudo-gradient, we use the same error formula, but applied to G = L – H instead of to the Earth's field. Even an anomaly of 1,000 nT – which is huge – is only about 1/50th of the Earth's field. For 1,000 nT of anomaly, in other words about 1/50th of the Earth's field, the absolute error on the pseudo-gradient will only be about 15/50, or 0.3 nT, for 1°. Moreover, the pole can be kept within 3° of vertical without too much difficulty. Thus, at 1,000 nT of anomaly, the error will only be 1 nT (1/1,000th of the anomaly, which is excellent), and even less for anomalies that are 10 or 100 times smaller. As a result, excellent results can be achieved with a rod held roughly vertically, carrying two vertically oriented fluxgate sensors. Contrary to what would be obtained with a single sensor, the result is corrected for time variations of the field. There is no longer much need for expensive and heavy optically pumped vapor magnetometers for surface-structure prospecting. Archaeologists understand this well. An entry-level fluxgate sensor costs €350 and a high-end one 5–10 times this price. These sensors are light. Why not use several at the same time, on several vertical poles? Figure 1.20 shows an example of prospecting with 10 poles of two sensors, which creates 10 profiles simultaneously.

Figure 1.20. Ten fluxgate poles (two sensors per pole) speed up prospecting by a factor of 10. With a recording system with 20 channels, plus a GPS (the antenna of which can be seen on the frame), we have a particularly efficient prospecting tool. Figure from http://www.eastern-atlas.com/service/forschung_und_entwicklung_eng.php?s=4,1&lang=eng


1.3.5. Pre-processing of magnetic maps

The maps almost always have a number of bugs or defects that must be removed to make them easier to use. Let us go back to the map in Figure 1.1. Such a map is not made in one go, but often by 50 m × 50 m squares that are then assembled together31. For the sake of discussion, let us examine one of the squares at the very bottom of the map (Figure 1.21).


Figure 1.21. a) Extract from the map in Figure 1.1, as the data appear after processing. b) Raw map. It is saturated (especially in terms of gray scale) by a very strong anomaly on the edge (steel telephone pole frame), which the perspective view c) shows well. This effect is discussed at the beginning of Chapter 3. d) An initial preprocessing step consists of clipping the signal so as to keep only the small anomalies that are of archaeological significance. This is done after examining the statistics (mainly mean and standard deviation) of the signal carrying the relevant information. Here, the signal has been clipped to the range –4 to +4 nT/m. The raw data are on a grid of 1 m in x and about 10 cm in y, resulting from "walking" mode. A first regular grid file of 1 m in x and 20 cm in y is generated by interpolation. The device is a Geometrics G858-G, mounted as a gradiometer with two sensors at about 40 cm and 120 cm from the ground. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

31 It used to be customary to work on a 50 m “decameter” (tape measure) before GPS became widely used; still, it was necessary to ensure the regularity of the profiles every meter.


This is not always necessary, but a visual glance at the raw digital data (looking at the contents of the file, with or without visualization) ensures the quality of the signal and allows possible saturations, such as the one shown in the figure, to be spotted. Figure 1.21(d) shows two overlapping artifacts. On the one hand, there are profile effects, which appear as alternating light and dark bands. This is due to the existence of a small offset (or shift) depending on the direction of travel, since the direction alternates in boustrophedon mode (see Volume 1). A simple way to eliminate this is to average all even profiles and all odd profiles; the offset is then the difference between these averages, and it simply needs to be added (with the correct sign) to one half of the profiles to make an effective correction. On the other hand, there is a slight chevron effect. This is due to the slight delay of the magnetometer recording during the walk when making a U-turn. It may also depend on the operator's ability to start recording in the right place at the right time when using the boustrophedon technique. The processing consists of estimating this average positional shift (delta) by comparing the average of the even profiles with the average of the odd profiles. A visual estimate may be appropriate; otherwise, the cross-correlation of the average profiles is used, the peak of which gives this delta. Half of the profiles are then shifted by delta/2 in one direction and the other half by delta/2 in the other direction. By not walking overly fast, the chevron effect is limited (Figure 1.22).

Figure 1.22. Phenomenon of chevron effect due to a slight delay in recording in walking mode. If it is more obvious, it is worth correcting it using more or less automated procedures. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip
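Here is a minimal sketch of these pre-processing steps, assuming the map has already been interpolated onto a regular grid stored as a 2D array with one profile per column (even and odd columns walked in opposite directions). The clip level of ±4 nT/m is the one used in Figure 1.21(d); for simplicity, the chevron correction below shifts only the odd profiles by the full lag, which is equivalent to the half-and-half shift described above up to an overall displacement of the map.

```python
import numpy as np

def preprocess(grid, clip_nt=4.0):
    """Pre-processing sketch for a walking-mode map stored as a 2D array with
    one profile per column; values and the clip level are in nT/m."""
    g = np.clip(grid, -clip_nt, clip_nt)             # 1) clip extreme values

    # 2) destriping: equalize the means of even and odd profiles
    offset = np.nanmean(g[:, 0::2]) - np.nanmean(g[:, 1::2])
    g[:, 1::2] += offset

    # 3) chevron correction: shift the odd profiles by the lag that maximizes
    #    the correlation between the mean even and the mean odd profile
    even = np.nanmean(g[:, 0::2], axis=1)
    odd = np.nanmean(g[:, 1::2], axis=1)
    lags = np.arange(-10, 11)
    scores = [np.nansum(even * np.roll(odd, k)) for k in lags]
    best = int(lags[int(np.argmax(scores))])
    g[:, 1::2] = np.roll(g[:, 1::2], best, axis=0)
    return g

# Example with random numbers standing in for a gridded 50 m x 50 m square
demo = preprocess(np.random.default_rng(1).normal(scale=2.0, size=(250, 50)))
print(demo.shape, float(demo.max()))
```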


1.3.6. Type of anomalies depending on the latitude

Most of the fields that we look at come from induced sources, or from viscous remanences that are almost impossible to distinguish from induced magnetization. The diversity in types of anomalies generally comes from the superposition of sources at different depths, and we saw that one can go from decreases in 1/r³ to decreases in 1/r. It can then be difficult to separate the overlapping anomalies, either in one's head or even by calculation. To try to see this clearly, without pretending to achieve it completely, it is useful to keep the most common types of anomalies in mind, starting with the one produced by a simple dipole. And the type of anomaly depends above all on latitude or, more exactly, on the inclination. Figure 1.23 shows a collection of anomalies, from latitude +90° (North Pole) to latitude –90° (South Pole) in 30° steps. The Earth's field for this study is "ideal", with a magnetic axis merged with the axis of the geographical poles. Starting from the values of the dipole components, we replace the colatitude θ with the latitude λ = π/2 − θ. The nominal value of the total field (in nT) is then given by: F = 30,000 √((2 sin λ)² + (cos λ)²).

The inclination is also derived from the dipole components, which are aligned along a N–S axis through the Earth. We get: tan I = 2 tan λ ⇒ I = tan⁻¹(2 tan λ).

For the total field, the buried body is a sphere of radius 10 cm and susceptibility 100, at a depth of 1 m with a sensor height above the ground of 50 cm.
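The ideal dipole model described above is easy to tabulate; here is a minimal sketch (the function name is ours):

```python
import numpy as np

def dipole_field(lat_deg):
    """Ideal geocentric dipole of the text: total field (nT) and inclination
    (degrees) as a function of latitude."""
    lat = np.radians(lat_deg)
    f = 30000.0 * np.sqrt((2.0 * np.sin(lat)) ** 2 + np.cos(lat) ** 2)
    inc = np.degrees(np.arctan2(2.0 * np.sin(lat), np.cos(lat)))
    return f, inc

for lat in range(-90, 91, 30):
    f, inc = dipole_field(lat)
    print(f"latitude {lat:+4d} deg:  F = {f:7.0f} nT   I = {inc:+6.1f} deg")
```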


Figure 1.23. Anomalies for latitudes from +90° to –90°, in 30° steps. Note that at the equator the anomaly is negative (from 30° to 0°, the negative part of the anomaly grows at the expense of the positive part). For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

Let us repeat the exercise with the following assumptions: a vertical-component fluxgate gradiometer mounted on a vertical pole, a structure 1 m deep, the bottom sensor 50 cm above ground and the top sensor 1.5 m above ground. Figure 1.24 shows the result.


Figure 1.24. Anomalies obtained for a vertical component fluxgate gradiometer. The southern hemisphere has opposite gradients to the northern hemisphere. Overall, the anomalies are simpler in shape than for the total field. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

1.3.7. Interpreting a magnetic anomaly: some examples

EXAMPLE 1.1.– The magnetic map is shown in Figure 1.25.


Figure 1.25. Magnetic map revealing an extremely magnetic structure. This is the typical signature of the presence of steel parts in the ground. The whole area is saturated with this major anomaly. In the anomaly area, some points are missing. This is because the proton magnetometer sensor no longer produces a good signal if it is close to the source of a field gradient that is too strong, which was the case in places on this site

This anomaly is due to the foundations, still in place but not visible at the surface, that anchored the four legs of a very high voltage electricity pylon. Due to some missing measurements (from saturation of the proton magnetometer, here used with a 2.5 m × 2.5 m grid and a sensor 2 m above ground), a detailed interpretation cannot be given. We can, however, predict the presence of steel supports in the areas of higher gradient.


EXAMPLE 1.2.– This example shows the possible complexity of anomalies, if the sources themselves are nested (Figure 1.26).

Figure 1.26. On a 50 m x 50 m square, the total field and the pseudo-gradient of the total field on a historic iron metallurgy site. The total field, measured 1.2 m above ground, varies from –140 to 350 nT. The pseudo-gradient varies from –350 to 800 nT: the variation is much greater because of the bottom sensor, which is 20 cm above ground. The site consists of accumulations of slag and residues from a metallurgical furnace. The anomalies overlap and are difficult to interpret individually. Prospecting indicates the areas over which slag is spread; whoever wants to give details on the individual structures will have to be clever. In such a case, the contribution of geophysics is the delineation of boundaries. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip


EXAMPLE 1.3.– Let us look at a simple case (Figure 1.27).

Figure 1.27. A magnetic anomaly on one site in the Vosges massif (a fixed value of 47,336 nT was subtracted from the whole map, leaving only the anomaly itself). The negative part of the main anomaly is duplicated due to a change in slope (white dotted line). The bottom part is approximately horizontal, while an ascending slope starts at this limit. The excavation discovered a heap of forge slag (18th Century) a good meter deep. The anomaly at the bottom left corresponds to a brick paving at a depth of about 1 m. For a color version of this figure, see www.iste.co.uk/florsch/ geophysics2.zip

A case like this lends itself quite well to interpretation, despite the somewhat irregular terrain. The depth was obtained correctly, bearing in mind that it was calculated perpendicular to the slope.


EXAMPLE 1.4.– Gallo-Roman site of Cravant: superposition of several structures and periods. The original map is shown in Figure 1.28.


Figure 1.28. Magnetic map of the Cravant site. It presents paradoxical sign anomalies (see text). Besides the obvious and classical structure of a fanum (#), the two anomalies at A correspond to Merovingian tombs. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

This map is rather an exception because it shows two opposite polarities. Normally, a magnetic structure at our latitudes will produce a positive bump on the south side and a trough on the north side. This is what we see for anomaly C, which corresponds to a channel with magnetic filling. At B and B', we see a reversed polarity. Why? These are two specific cases. The surrounding host material, here calcareous ground, has a certain susceptibility. A ditch filled with a more magnetic material will appear as a "positive" anomaly, similar to anomaly C. If, on the other hand, a trough in this ground is filled with a less magnetic material, we will see, by contrast, a negative anomaly. This site also gives us the opportunity to show that it is interesting to try modeling, in other words to carry out calculations that attempt to reproduce what is observed.


This is illustrated in Figure 1.29.

Figure 1.29. Interpretation using modeling. A geophysicist comes up with a structure that can lead to the observed map and then, by trial and error or by inversion (see Volume 1), refines his “model”. Sections AA’ (model) and BB’ (data) allow us to appreciate the reconstitution of amplitudes (calculation by C. Camerlynck). For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

1.3.8. Other procedures

Several more or less useful processes exist that can help confirm our interpretations. Pole reduction (#) involves transforming an anomaly measured at a given latitude into the one that would have been measured at the pole. This has the effect of transforming a bipolar anomaly (with its positive and negative parts) into a simpler and, especially, better-centered anomaly (Figure 1.30). Pole reduction is also discussed in Chapter 3. Other techniques, such as the two-dimensional analytic signal or Euler deconvolution, make it possible to refine the interpretation. These methods, which will be further discussed in Volume 3, are mathematically more technical. Blakely's book (1995) presents these mathematical tools.


Figure 1.30. Pole reduction improves the definition of anomalies and brings the peak of the anomaly back directly above the source

1.3.9. Getting yourself in battle order for making a magnetic map!

Figures 1.27 and 1.28 (for example) show what can be obtained with a point-by-point magnetometer or a fast magnetometer (optically pumped or fluxgate-based). Fluxgates are becoming more and more popular: they are lighter, cheaper and more robust, and they bring magnetic prospecting within everyone's reach. But there are a few precautions to take when doing magnetic prospecting. We absolutely have to demagnetize ourselves! Here is a non-exhaustive list of what to avoid wearing when working with magnetics (and other participants must stay at least 3 m from the operator!):
– mobile phone and various radios (loudspeakers have magnets);
– steel-capped boots;
– metal underwire in bras (use a "sports" version);
– hair pins;


– glasses! Often, the temples (side arms) are made of steel (the author has titanium glasses, which are non-magnetic);
– keys, knives, screwdrivers, coins in pockets;
– beware of hats: they sometimes have a magnetic clasp;
– ordinary shoes: very often, these have a soft steel insert embedded in the sole!
– rods (in bones...);
– shrapnel in the body (yes, we saw this case in a veteran).

I have certainly forgotten other things. So test yourself: place the magnetometer in a fixed position and bring your shoes close to it, your glasses, your belt buckle, etc., or else... In walking mode, we recommend using tape measures ("decameters") with the U-turn method (boustrophedon). Other techniques use ranging poles (but be careful: we need two in the line of sight, because we are otherwise always aligned with a single one). GPS could be used – soon Galileo – with submetric accuracy, but is it non-magnetic? A solution is to place it at a certain distance and then correct, in post-processing, for the vector that goes from the antenna to the sensor. Note that working in the forest can take up to 10 times longer than in open terrain. That's it, now it's your turn.

1.4. Let us build a magnetometer (or something better)

To find out more about the different types of magnetometers that exist, we suggest that the reader consult the Wikipedia page for "magnetometer" (#). Our magnetometer will be built around fluxgate sensors, which we consider cheap, since they are the cheapest magnetic sensors available that are sensitive enough for our purposes32. Vector sensors that cost less than 350 euros with a resolution of about 1 nT are available, but achieving

32 Remember that a static field must be measured, so induction coils cannot be used. The sensitivity of Hall effect sensors – the conventional ones – is too low.


this level of accuracy is near impossible; the best we can expect is to ensure a certain level of precision33. Two of these sensors are required to make a gradiometer but, at the end of the day, the cheapest magnetometer in the world will cost you less than 800 euros. One can also measure the total magnetic field using a fluxgate with three components (expect to pay approximately 500 euros), but to create a map we would be better off having two sensors, either to use one as a base or to form a gradiometer – and in the latter case we might as well use two vector (directional) sensors. It is not our way of doing things to draw up a list, which would in any case be incomplete, but here are, in more or less increasing price order, three links to fluxgate sensor manufacturers (we will return to the three-component fluxgates later):

http://www.stefan-mayer.com/en/products/magnetometers-and-sensors.html
http://www.sensysmagnetometer.com/en/fgm3d.html
http://www.bartington.com/catalogs/three-axis-fluxgate-magnetometers

1.4.1. A gradiometer with two single-component fluxgate sensors

The world's cheapest operational magnetometer can be made with two FL1-100 type probes, for example. In order to measure the difference between the two sensor outputs, they must be connected together. Basically, this just requires a voltmeter (which itself provides low-pass filtering). Using the manufacturer's wiring diagram, we get the simple system of Figure 1.31(a). The voltage can easily be converted into nanoTesla using the sensitivity provided by the manufacturer. A slightly more sophisticated way is to send the signal to an instrumentation amplifier (for example, the INA 128P), preceded by a small low-pass filter and followed by an acquisition system (Figure 1.31(b)). This system must be able to record negative as well as positive signals, since any field component (and likewise the gradient) can take both signs.

33 Search online for "metrological quality" to get a review of the properties of the devices; see also https://www.mccdaq.com/TechTips/TechTip-1.aspx.


Figure 1.31. Building a gradiometer using two FL1-100 fluxgate sensors. Depending on the manufacturer, the output (–) is at the 0 (central) potential of the power supply, which must be symmetrical. At the top, (a), is a simple assembly: with a "2,000 point" voltmeter, readings can be displayed up to 2,000 nT (per meter if the distance between sensors is 1 m), a gradient value that is rarely reached. Beyond a gradient of 2,000 nT, and even below it, we mainly work in relative terms (in the sense that 10 nT of variation in 2,000 is less than 1%; besides, positioning errors always lead to errors in the field wherever the horizontal gradient is significant, which is what accompanies strong anomalies). If we use an acquisition system like those recommended in Volumes 1 and 2 of this series, the acquisition must be preceded by a low-pass filter. With a resistance of 100 kΩ and a capacitance of 330 nF, we get a cut-off frequency of about 5 Hz
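Converting the measured voltage difference into a pseudo-gradient is a single multiplication. In the sketch below, the sensitivity, the amplifier gain and the sensor spacing are placeholders: take them from the datasheet of the probes actually used and from the chosen amplifier setting.

```python
# Voltage-to-gradient conversion (all constants are placeholders to adapt)
SENSITIVITY_NT_PER_V = 10_000.0   # hypothetical probe sensitivity (nT per volt)
AMP_GAIN = 1.0                    # instrumentation amplifier gain
SENSOR_SPACING_M = 1.0            # distance between the two probes (m)

def volts_to_gradient(v_diff):
    """Pseudo-gradient in nT/m from the amplified voltage difference."""
    return v_diff / AMP_GAIN * SENSITIVITY_NT_PER_V / SENSOR_SPACING_M

print(volts_to_gradient(0.012), "nT/m")   # e.g. a 12 mV reading
```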


In addition to the importance maintaining the carrier boom approximately vertical while prospecting, the main constraint of this system is the existence of drifts (which are independent in principle) for each sensor and their offset. Since we are looking at the differences of fields, it is the differences in drift and offset that must be taken into account. The drift difference can be measured by regularly returning to a base, as we do for the total field measurement (but here, the drift will be the sensors, not the magnetic field itself). The offset difference can be measured by moving the two sensors closer together at a certain distance above the ground. As our use is not metrological, we need to see whether it is worth doing any corrections case by case. 1.4.2. Measuring total field using a 3-component fluxgate (and how to calibrate it) Measuring the total field with a three-component fluxgate is based on the fact that if X, Y and Z are the three components of the magnetic field at a given point, the total field is: F = X 2 + Y 2 + Z2 .

Note that the value of F does not depend on the orthonormal reference frame in which the field is expressed. For two frames (1) and (2), we "normally" get: F = √(X₁² + Y₁² + Z₁²) = √(X₂² + Y₂² + Z₂²).

Similarly, imagine a sensor with three components that is rotated in all directions about a fixed point. We would see the values of each component change significantly (and often their sign too), but F should remain constant. Unfortunately, fluxgate sensors are not perfect. They are relative (the sensitivity in V/nT requires calibration), they are perhaps imperfectly oriented along three perpendicular axes, and even worse is their offset! Even if the three probes had the same offset O, the two expressions below cannot be equal (and nothing can be done with different offsets either, because as the sensor moves, F is supposed to remain constant while X, Y and Z vary): F = √(X² + Y² + Z²) ≠ √((X + O)² + (Y + O)² + (Z + O)²).


The solution involves determining constants that allow these nine parameters to be corrected: three amplitude constants, three angles and three offsets. This is possible in the laboratory (by controlling magnetic fields) but, unfortunately for us, we are not a space agency capable of doing this. Incidentally, it is precisely satellite builders (of magnetic satellites, it goes without saying), and particularly those of the Oersted mission (https://directory.eoportal.org/web/eoportal/satellite-missions/o/oersted), who developed a technique that has since been used by several authors to obtain a total-field measurement from a vector instrument. As this work is quite technical, we decided to discuss it in section 1.7, including a calibration program in an understandable language on GitHub, so that anyone can use it.

1.5. Acquisition system for the three-component fluxgate for total field

To correctly measure the magnetic field using a three-component fluxgate, we must be able to sample synchronously over three channels, at more than 20 bits and at least 10 Hz, and preferably autonomously (without being "connected" to a PC in the field, not to mention the magnetic field created by a PC). This is why we built a solution using an Arduino, while simultaneously looking for a cost-effective compromise. In this chapter, we propose to design a dedicated acquisition device for magnetic prospecting, operating with a three-axis magnetic field sensor. We will present the design phases of the project: from the choice of components to electronic assemblies, including circuit calculations. All project sources, programs, libraries and electrical diagrams are open source! Thus, you can freely download them from web platforms, links to which are sprinkled throughout this chapter. We also invite you to customize your device by adding features that may not be covered in this book. In any case, do not hesitate to send us your suggestions for improvements, as well as your remarks, through one of the project's GitHub websites: https://github.com/MuhlachF/Magneto
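As an aside before detailing the hardware, here is a deliberately simplified numerical sketch of the scalar calibration idea introduced in section 1.4.2: only the three scale factors and three offsets are estimated, the misalignment angles of the full nine-parameter model being left to section 1.7 and to the program on GitHub. The rotation data are synthetic and the function names are ours.

```python
import numpy as np
from scipy.optimize import least_squares

def calibrate(xyz, f_ref):
    """Simplified scalar calibration: find offsets o and scale factors s so that
    |s * (reading - o)| stays as close as possible to the reference field f_ref
    while the sensor is rotated in place (misalignment angles ignored)."""
    def residual(p):
        s, o = p[:3], p[3:]
        b = s * (xyz - o)
        return np.linalg.norm(b, axis=1) - f_ref
    p0 = np.concatenate([np.ones(3), np.zeros(3)])
    return least_squares(residual, p0).x

# Synthetic rotation data with made-up offsets and scale errors
rng = np.random.default_rng(0)
u = rng.normal(size=(500, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)          # random orientations
true_field = 48000.0 * u                               # nT, constant modulus
readings = true_field / np.array([1.002, 0.998, 1.001]) + np.array([60.0, -40.0, 25.0])
p = calibrate(readings, 48000.0)
print("scales:", p[:3], "offsets (nT):", p[3:])
```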


1.5.1. Device features

Our magnetometer is designed to make acquisitions (X, Y and Z components) at a sampling rate of about 25 Hz. It is fully customizable. There are four buttons, placed on the housing (Figure 1.32), which facilitate navigation and system configuration. This information will be displayed on an LCD screen. Once the prospecting is done, the collected data will be stored on an SD card and can be evaluated and interpreted using geophysics software.

Figure 1.32. Device navigation interface

1.5.2. Functional study of the project

The design of the magnetometer is based on an assembly of "functional blocks", as shown in Figure 1.33. These blocks continuously exchange information (analog or digital). We will look at the following functions in more detail:
– the conversion of a physical quantity into a voltage: this is the role of the three-axis magnetic sensor. The analog value generated by the sensor is proportional to the magnetic field in which the sensor is placed. This voltage must then be adapted and filtered;
– the signal filtering is carried out through a low-pass filter, which makes it possible to eliminate any bugs that occur during acquisition. It ensures a better signal-to-noise ratio of the measurements. The cut-off frequency of the anti-aliasing filter must take the desired sampling rate into account;
– in order for the microcontroller to process this information, this analog quantity (voltage) must be converted into a digital value. This is done by a specialized chip named an analog-to-digital converter (ADC), which has a resolution of 24 bits;


– the microcontroller processes the data, but also controls the modules that are directly interfaced with it (converters, screen, card reader...);
– data backup is done via an SD card reader;
– finally, the man–machine interface includes an LCD screen and four buttons.

Figure 1.33. Functional blocks of the project design

1.5.3. Circuit power supply

To guarantee the mobility of the housing, the main power supply is provided by a battery. Analog modules are relatively sensitive to voltage variations and must be supplied via a DC/DC converter with balanced regulated 12 V outputs. We opted for the TMR 3-1222 module by Traco Power (Figure 1.34). It is encapsulated in a compact SIP8 industrial standard casing and can produce voltages of ±12 V at a maximum current of ±125 mA. However, during the production of our prototype, we noted that this component was particularly sensitive to temperature and voltage variations, here produced by the battery. We would also advise you to allow the housing to acclimatize for a few minutes before taking measurements, in order to avoid errors.

Figure 1.34. The TMR 3-1222 in its SIP8 casing proposed by the company Traco Power


The components that will process the digital data are powered by the 3.3 or 5 V output of the microcontroller, here an Arduino module. Although it is possible to supply the circuit via the same voltage source, we nevertheless recommend that we separate the analog supply part from the digital part. The components are then connected together, as shown in Figure 1.35.

Figure 1.35. Separation of analog and digital power supplies

1.5.4. The fluxgate sensor

The FLC3-70 sensor (Figure 1.36) is a three-axis fluxgate magnetometer manufactured by Stefan Mayer Instruments. It has been chosen for its stability and the accuracy of its measurements, with a measuring range of –200 to +200 µT. Its bandwidth ranges from 0 to 1 kHz. Its supply voltage is between 4.8 and 12 V. Finally, its low current consumption, a few milliamperes, makes it the ideal candidate for our battery-powered device.


Figure 1.36. The FLC3-70 fluxgate sensor with three axes. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

Whatever the supply voltage, the analog output voltages are proportional to the three components X, Y and Z of the magnetic field (1 V for 35 µT) (Figure 1.37 for connections).

Figure 1.37. FLC3-70 sensor connections. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

1.5.5. Cable and connectors

As the sensor is placed at the end of a pole, a cable of approximately 2 m must be prepared to connect it to its housing. Despite several attempts, we did not find it useful to use shielded cables. We simply suggest that you choose cables with a solid but flexible external insulation, so that they can be handled without too much difficulty in the field. Twisted-pair multicore telephone cables (Figure 1.38) do the job perfectly.


Figure 1.38. Our cable consists of four twisted pairs of stranded wire. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

Every signal has a pair! This minimizes interference and disturbances:
– pair 1: blue = X, white = Ref;
– pair 2: yellow = Y, white = Ref;
– pair 3: green = Z, white = Ref;
– pair 4: brown = + supply, white = – supply.

Finally, the cable is connected to the housing using bayonet connectors (Figure 1.39).

Figure 1.39. Bayonet connectors with 2–7 contact points


1.5.6. Signal filtering

Shannon's theorem states that the sampling frequency, which we shall call "Fsampling", must be at least twice the highest frequency present in the analog signal. Below this theoretical limit, a signal cannot be reconstituted from its samples. There is no point in imposing on the conversion chain signals whose variations are faster than what it can process at the "Fsampling" frequency. The analog-to-digital conversion module must therefore be preceded by an "anti-aliasing" low-pass filter, which eliminates signals with a frequency above Fsampling/2. The cut-off frequency chosen for the low-pass filter is approximately 12 Hz and the sampling frequency is set to 25 Hz. The Butterworth filter was chosen because it has a flat response and does not distort the signals within its bandwidth. The transfer function of a Butterworth filter of order n is given by the following equation:

\lvert H(j\omega)\rvert = \frac{1}{\sqrt{1 + \left(\omega/\omega_c\right)^{2n}}}

Two second-order filters are cascaded to obtain a fourth-order filter, which corresponds to a slope of –80 dB per decade (Figure 1.40).

Figure 1.40. Fourth-order filter as a cascade of two second-order stages

Each second-order stage has its own resonance frequency and damping coefficient Q. To actively implement the second-order filter, we use an operational amplifier-based assembly. Among the most common setups is the "Sallen–Key" topology (Figure 1.41).


Figure 1.41. Sallen–Key topology for a second-order low-pass filter. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

The damping coefficient Q is given by Q = 0.5\sqrt{C_1/C_2}.

The pole frequency is given by f_p = \frac{1}{2\pi R\sqrt{C_1 C_2}}.

The cut-off frequency (3 dB attenuation) is given by f_c = f_p\sqrt{1 - \frac{1}{2Q^2} + \sqrt{\left(1 - \frac{1}{2Q^2}\right)^2 + 1}}. For a second-order Butterworth filter, Q = 0.707 and f_c/f_p = 1.

To get a Butterworth response with a cascade of several second-order filters, the Q coefficients of each level must be scaled according to the values in the following table:

Order | Level 1 | Level 2 | Level 3 | Level 4 | Level 5
2 | 0.707 | | | |
4 | 0.541 | 1.307 | | |
6 | 0.518 | 0.707 | 1.932 | |
8 | 0.510 | 0.601 | 0.900 | 2.563 |
10 | 0.506 | 0.561 | 0.707 | 1.101 | 3.197

Thus, to assemble two second-order filters, the Q coefficients of the two levels will be Q1 = 0.54 and Q2 = 1.31, respectively.


For our study, we want the cut-off frequency to be 12 Hz. The value of the resistors is fixed at 30 kΩ. In this configuration, and for the first filter, the calculated standardized values are:

C1 = 0.39 µF and C2 = 0.47 µF

For the second, they are:

C3 = 0.18 µF and C4 = 1.2 µF
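As a numerical check (a minimal Octave sketch; the assignment of the capacitors to the feedback and ground positions is the assumption stated above), these values give a pole frequency close to 12 Hz and the required Q values:

% Check the two Sallen-Key stages (equal resistors R1 = R2 = R = 30 kohm)
R  = 30e3;                                    % resistor value [ohm]
f0 = @(Ca, Cb) 1 ./ (2*pi*R*sqrt(Ca.*Cb));    % pole frequency
Q  = @(Cfb, Cgnd) 0.5*sqrt(Cfb./Cgnd);        % quality factor
% Stage 1: 0.39 uF (ground), 0.47 uF (feedback)
printf("stage 1: f0 = %.1f Hz, Q = %.3f\n", f0(0.39e-6, 0.47e-6), Q(0.47e-6, 0.39e-6));
% Stage 2: 0.18 uF (ground), 1.2 uF (feedback)
printf("stage 2: f0 = %.1f Hz, Q = %.3f\n", f0(0.18e-6, 1.2e-6), Q(1.2e-6, 0.18e-6));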

Electronic simulation software allows us to check the behavior of the setup with the selected values (Figures 1.42 and 1.43).

Figure 1.42. The diagram is entered into the simulation software to be simulated. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

The gain of the Butterworth filter is as constant as possible within the bandwidth (0–12 Hz). The cut-off frequency Fc is approximately 12 Hz. Beyond this frequency, the response decreases linearly (in dB) toward –∞, with a slope of about –80 dB per decade. All circuits are represented in Figure 1.44 and are also available on GitHub.

Figure 1.43. Bode curves represent filter characteristics


Figure 1.44. Set of filters for acquisition. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip


1.5.7. Analog-to-digital conversion

In Volume 1, we used an ADC proposed in a “breakout board” version designed around an ADS1115 chip (Texas Instruments). Its resolution of 16 bits was largely sufficient for our setup, which was intended to carry out simple electrical prospection of the ground. However, measuring the Earth's magnetic field with a triaxial fluxgate sensor requires greater precision. We therefore chose a converter with a resolution of 24 bits: the ADS1220 from Texas Instruments, shown in Figure 1.45.

The ADS1220 is a 24-bit ADC that can measure:
– two signals in differential mode;
– four unbalanced (single-ended) signals via an input multiplexer (MUX).

Internally, it contains a programmable low-noise gain amplifier (PGA) and two programmable excitation current sources. The internal reference voltage of 2.048 V is used here because it avoids the use of additional components. This chip can perform conversions at data rates of up to 2,000 samples per second (SPS). At 20 SPS, a digital filter can be activated for simultaneous rejection of 50 Hz and 60 Hz. Finally, this component is addressable via the SPI protocol and interfaces easily with an Arduino microcontroller.

Figure 1.45. Internal Structure ADS1220. Copyright © 2016, Texas Instruments Incorporated


Sampling principle of the ADS1220 converter

The ADS1220 converter follows the oversampling principle: the input signal is sampled at a higher frequency than the desired sampling frequency, and then filtered and decimated digitally. By increasing the oversampling ratio (OSR, the ratio between the modulator frequency and the output rate of the digital data), the performance of the converter is optimized. Increasing the PGA gain also reduces input-referred noise, which is particularly useful when measuring low-level signals.

Adaptation of the signal to the operating mode of the converter

The signals are acquired in differential mode using the chip's PGA. This measurement mode uses two analog channels and guarantees more accurate measurements because it allows the converter to reject the common mode voltage as well as any other common mode noise that might be present in the signal (the common mode voltage refers to the voltage present at the instrumentation amplifier inputs relative to the ground of the assembly). Finally, using the PGA (whose inputs are at high impedance) increases the signal-to-noise ratio. The PGA can be set to gains of 1, 2, 4, 8, 16, 32, 64 or 128 (Figure 1.46).

Figure 1.46. Simplified overview of the PGA level

VIN is the differential input voltage, VIN = V(AINp) − V(AINn), and the gain of the PGA is calculated according to the following equation:

Gain = 1 + 2·RF/Rg

The resistance Rg is internal to the PGA. It can be modified through the chip's registers (SPI connection). The selected gain defines the value of the full-scale differential input voltage (FSR):

FSR = ±VREF/Gain

Here, our system reference is 2.048 V. An ideal system for measurement acquisition in differential mode would only produce the difference in potential between the AINp and AINn inputs of the converter, and thus totally reject the common mode voltages (Figure 1.47). The value of the common mode voltage, referenced to the ground of the assembly, is given by:

VCM = ½ (V(AINp) + V(AINn))

Figure 1.47. PGA common mode voltage

As the converter inputs are not perfect, they have a limited ability to reject common mode voltages. To remain within the linear operating range of the PGA, the input signals must comply with certain requirements to


avoid any risk of saturation of the outputs of both amplifiers (A1 and A2). For the ADS1220, two rules must be adhered to:

– Rule 1: do not get any closer than 200 mV to the supply voltages:

AVSS + 0.2 V + |VIN|max·(Gain − 1)/2 ≤ V(AINp), V(AINn) ≤ AVDD − 0.2 V − |VIN|max·(Gain − 1)/2

In the case of a single-pole power supply to the converter, AVSS = 0 V and AVDD = 5 V.

– Rule 2: the common mode voltage must also remain far enough from the supplies, since the two PGA outputs swing by ±Gain·|VIN|max/2 around VCM and must themselves stay at least 200 mV away from the supply rails.

In our configuration, the common mode voltage must therefore lie between 1.25 and 3.776 V:

1.25 V ≤ VCM ≤ 3.776 V

To ensure even greater conversion accuracy, we only use the sensor range from –70 to +70 µT. As the voltage delivered by the sensor is ±1 V per 35 µT (OUT relative to the Ref output), the voltage values Vx, Vy and Vz lie between 4 and 8 V relative to ground. Without any adaptation, the common mode voltage would not allow the converter to operate properly, since it would exceed the permitted thresholds:

VCM = ½ (V(AINp) + V(AINn)); VCM(max) = 7 V and VCM(min) = 5 V

To stay within the frame defined by the previous equations and to optimize conversions, the voltages delivered by the sensor are divided by 4 and a gain of 4 is applied at the PGA level. In this configuration, the full-scale differential voltage is 0.512 V. The voltages are divided by four using the setup shown in Figure 1.48.


Figure 1.48. Voltage adaptation to conform to the common mode voltage of the PGA

Vdivided = Vsensor · R2/(R1 + R2), with R2/(R1 + R2) = 1/4.

From which we get 1 V ≤ V(AINp) ≤ 2 V and V(AINn) = 1.5 V.

The common mode voltages now meet the constraints stated above and are, respectively:

VCM(min) = 1.25 V and VCM(max) = 1.75 V
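A quick sanity check of this adaptation can be scripted in Octave (a minimal sketch; the 4–8 V sensor output range and the 6 V reference level used below are the values given above):

% Common mode voltage before and after the divide-by-four adaptation
Vout = [4 8];            % sensor OUT relative to ground [V] (±70 uT range)
Vref = 6;                % sensor Ref output relative to ground [V]
k    = 1/4;              % divider ratio R2/(R1+R2)
Vcm_raw     = (Vout + Vref)/2;        % without adaptation -> 5 to 7 V
Vcm_divided = (k*Vout + k*Vref)/2;    % with adaptation    -> 1.25 to 1.75 V
printf("VCM without divider: %.2f to %.2f V\n", Vcm_raw(1), Vcm_raw(2));
printf("VCM with divider:    %.2f to %.2f V\n", Vcm_divided(1), Vcm_divided(2));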

Converter resolution

The resolution of the converter is 24 bits. The quantum, or value of the least significant bit, is given by the following equation:

LSB = 2·VREF/(Gain·2²⁴) ≈ 61 nV
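In Octave, this is a one-line check (using the values assumed above, VREF = 2.048 V and Gain = 4):

Vref = 2.048; gain = 4;
LSB  = 2*Vref/(gain*2^24);              % quantum of the 24-bit converter
printf("LSB = %.1f nV\n", LSB*1e9);     % about 61 nV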

The correlation of analog values to their digital equivalent is shown in Figure 1.49.


Figure 1.49. Transition diagram

Design of a converter breakout board version The breakout board version of this converter is not easy to find. The only manufacturer that makes it is a company called Protocentral, costing a bit over €20 (https://www.protocentral.com/analog-adc-boards/773-ads1220-24bit-4-channel-low-noise-adc-breakout-board.html). In addition, depending on country of residence, you may need to pay shipping and administrative processing fees. In our case, this would have doubled the purchase price of the component. To remove these constraints, we decided to make our own Shield, thanks to the “EasyEDA” platform (see Figure 1.50). EasyEDA is a free and easy-to-use tool that supports all phases of circuit design: from schematic input, through simulation (SPICE models), to printed circuit board design. For this latter phase, we can either export the result of our routing in Gerber format or order the printed circuit directly through the website, with the associated electronic components. This program is accessible online and you should be able to use it whatever your operating system! You just need to register to take full advantage of the software. Finally, the collaborative dimension of this platform allows multiple people to work on the same project and allows the whole community to share the work.

Figure 1.50. Screenshot of an ADS1220 converter electrical diagram in its “Breakout board” version. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip


The implementation of the shield can be retrieved either:
– at the following website: https://easyeda.com/frederic.muhlach/Shield_CAN_24_bits_ADS1220-7b9432eeffdb408e9d2d3d8e07e8d32d;
– or by entering the keyword “ADS1220” in the website's search engine.

Simultaneously acquiring the three magnetic field components implies that the converters operate in parallel. This communication is made possible by the SPI protocol, which allows multiple components to be addressed on the same bus, as shown in Figure 1.51.

Figure 1.51. Architecture ensuring the converters operate in parallel. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

This structure also improves system response times by avoiding the input multiplexing delays that degrade converter performance.

1.5.8. Acquisition principle

Having proven to be very popular, Arduino microcontrollers are at the heart of many projects. They are easy to program thanks to the “Arduino.h” library, and their price tag of about €20 has certainly contributed to their wide distribution to an audience with the most varied profiles: makers, students, those who are simply curious... We can only recommend that you use them. A simplified algorithm for this operation is shown in Figure 1.52.


Figure 1.52. Simplified operating algorithm


Analog-to-digital conversions are done in parallel, which reduces the latency between the measurements of the X, Y and Z components. All these operations are clocked by a timer. Writing to a storage medium, such as an SD card, takes about 1 s for 500 lines of data. To limit these interruptions, as many values as possible must be kept in memory and only saved once the memory size limit has been reached, or once the measurements have all been taken. The Mega version of the Arduino only has 8 KB of SRAM. At a sampling rate of 25 Hz, only three tables of 500 decimal values can be stored without risking microcontroller instability. This brings the maximum acquisition time to about 20 s. Taking all these factors into account, Arduino DUE modules are the best suited because they have a larger memory for storing these variables:

Name             UNO          MEGA 2560    DUE
Processor        ATmega328P   ATmega2560   ATSAM3X8E
CPU speed        16 MHz       16 MHz       84 MHz
Analog In/Out    6/0          16/0         12/2
Digital IO/PWM   14/6         54/15        54/12
EEPROM (KB)      1            4            –
SRAM (KB)        2            8            96
Flash (KB)       32           256          512
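These buffer-limited acquisition times can be estimated with a few lines of Octave (a rough sketch; the assumptions that each stored sample occupies 4 bytes and that about three quarters of the SRAM can be devoted to the buffers are ours):

% Rough estimate of the maximum buffered acquisition time
Fs       = 25;        % sampling rate [Hz]
channels = 3;         % X, Y, Z components
bytes    = 4;         % bytes per stored sample (assumed)
usable   = 0.75;      % fraction of SRAM usable for buffers (assumed)
for sram_kb = [8 96]  % Arduino MEGA, then DUE
  n_samples = floor(usable*sram_kb*1024/(channels*bytes));
  printf("SRAM %3d KB -> about %4.0f s of buffered data\n", sram_kb, n_samples/Fs);
end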

With the Arduino DUE, we have 96 KB of SRAM and a maximum acquisition time of about 240 s. However, this version does not include an EEPROM memory, which would allow us to store the configuration parameters of the device when it is no longer powered. To get around this problem, we will use an external I²C EEPROM chip, like the one shown in Figure 1.53.


Figure 1.53. Example of a 24LC512 memory chip

1.5.8.1. Developing an ADS1220 library for the Arduino microcontroller

1.5.8.1.1. Adding features: the role of libraries

A library is a program that contains a set of functions written to simplify the development of a program. Using them, we sometimes avoid rewriting complex lines of code. Many libraries have already been developed and can control an impressive number of electronic components (LCD displays, SD card readers, etc.). Libraries are also used to add new functions to the microcontroller (mathematics, sorting algorithms, etc.).

The ADS1220 converter is fully configurable: sampling frequency, PGA gain, input operating mode. For more information, consult the documentation provided by the manufacturer: http://www.ti.com/lit/ds/symlink/ads1220.pdf.

The only available library was developed by Protocentral. Its functionalities are limited and do not allow us to completely configure our converter. Nevertheless, we used it to develop a more advanced library. A library has two main files:
– an “.h” file, called the “header”, which lists all the constants and functions of the program;
– the main file, which details all the functions.

The Arduino development environment natively includes several libraries. To import them into your program, simply click the “Sketch” menu, select “Include Library” then click “Add .ZIP Library”:


Figure 1.54. Arduino development interface

The following instruction will then be added at the beginning of your program:

#include <NameOfTheLibrary.h>

This command adds the whole library content to the source code. From then on, all the functions it contains can be called in the program.


As part of the design of your magnetometer, we suggest you download the magnetometer operating program as well as the Github library from the project website: https://github.com/MuhlachF/Magneto/tree/master/Magnetometre Finally, Figure 1.55 shows the fragmented but functional architecture of the project.

Figure 1.55. Magnetometer prototype. The blocks are: (1) power supply, (2) the FLC3-70 fluxgate sensor with three axes, (3) adapter and signal filtering board created on the EasyEDA platform, (4) shield: analog-to-digital converter purchased from Protocentral, (5) shield: analog-to-digital converter board made on the EasyEDA platform, (6) Arduino microcontroller, here in its MEGA version, (7) four navigation and validation buttons and (8) LCD display with two lines of 16 characters each. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip


The module interconnections are detailed in Figure 1.56.

Figure 1.56. Interconnection of modules. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip


Here is the list of components.

Filtering
id  Value                         Quantity  Package          Components
1   TMR3 1222                     1         TRACO_SIP-8      U2
2   0.1 u                         16        RAD-0.1          C1,C2,C5,C6,C9,C10,C11,C12,C17,C18,C19,C20,C25,C26,C27,C28
3   18 k                          4         AXIAL0.5         R1,R7,R13,R19
4   6 k                           4         AXIAL0.5         R2,R8,R14,R20
5   30 k                          16        AXIAL0.5         R3,R4,R5,R6,R9,R10,R11,R12,R15,R16,R17,R18,R21,R22,R23,R24
6   1.2 u                         4         RAD-0.2          C3,C13,C21,C29
7   0.18 u                        4         RAD-0.2          C4,C14,C22,C30
8   0.47 u                        4         RAD-0.2          C7,C15,C23,C31
9   0.39 u                        4         RAD-0.2          C8,C16,C24,C32
10  XHB-2A Connector              2         XHB-2A           P1,P3
11  Female header pitch 2.54 mm   1         HDR4X1/2.54      P2
12  Connector                     1         HDRP4 4X1/2.54
13  AD708N                        9         DIP8             U1,U3,U4,U5,U6,U7,U8,U9,U10

Shields/analog-to-digital converter
id  Value                         Quantity  Package          Components
1   ADS1220IPWR                   3         TSSOP-16         U1
2   100 nF                        6         RAD-0.2          C1,C2
3   HeaderMale2.54_1x8            6         HDR8X1/2.54      P2-AN,P1-DIG
4   47 Ω                          15        AXIAL-0.6        R1,R2,R3,R4,R5

Interface and screen
id  Value                                              Quantity
1   Pushbutton                                         4
2   SD card reader (SD card module, SPI)               1
3   Real Time Clock (Grove – RTC, Seeedstudio.com)     1
4   RGB LCD screen (Grove – LCD RGB Backlight)         1
5   Arduino Mega or DUE                                1
6   SD card, 2 GB                                      1


1.6. Mechanical considerations

We wanted to roughly reproduce the functional carrying system of the G858-G (Geometrics). But the latter weighs several kilograms; comparatively, fluxgates are extremely lightweight. Here, we only present the system with two sensors, as the version with a single three-component sensor does not require a vertical pole. Figure 1.57 shows an example of its implementation.

Figure 1.57. Implementation of the magnetometer with two fluxgate probes. On the back of the main boom is a counterweight. The strap over the shoulder helps to stabilize the whole assembly. A small bubble level can be affixed at the center to verify the horizontal balance (and thus the verticality of the carrying rod). This system is well suited for walking mode

To build the rod, we used aluminum coated tubes with an outer diameter of 20 and 16 mm, which we bought from a DIY store, plus a 12 mm diameter tube to hold the sensors for a gradient. To assemble the rod, the components were drawn using a 3D volume modeler, SolidWorks, then 3D printed (Figure 1.58). All the drawn parts are available on the magnetometer project’s GitHub account. As they are already in STL format, the files are ready to print on a 3D printer.


For holding and tightening, all screws are made of (non-magnetic) brass, with a diameter of 3 and 4 mm (steel or stainless steel must be avoided at all costs). The hexagonal recess of the screw head is embedded into the plastic part, so it cannot be lost in the field. A wing nut is used to tighten and adjust the boom without tools.

Figure 1.58. Representation of the different elements of the boom, here mounted with a single component sensor (the boom is not extended to fit in the figure)

The electronics were loaded inside a waterproof electrical housing. The power supply is provided by a 12 V lead battery included inside. Printed supports with screw holes for the Arduino Mega and the various other printed circuits allow better integration and maintenance in the housing. To connect the sensors to the housing, we chose a flexible multistrand cable, in order to avoid swaying due to walking during prospecting. Aviation-type connectors were also chosen for the same reasons. To carry the housing during prospecting, it is worn on a strap; from there, we hung two further straps using plastic snap hooks – these cross over the back and loop back round to the housing.

1.7. Appendix to Chapter 1: calibration of a three-axis fluxgate magnetometer

As a preliminary step, our method involves rotating the sensor in all directions, from a fixed point, in order to acquire a signal where we


supposedly have a constant total field. Next, the sensor parameters are determined by following the procedures below. Finally, these parameters are used to correct the acquired measurements. Ideally, we would have a direct protocol in the field, but this requires running the inversion program immediately. Otherwise, we could carry out both acquisitions – the sensor calibration protocol first, then the prospecting – with calibration and correction done afterwards, back at the office. The reader will find procedural details in the following publication, on page 31: http://scd-theses.u-strasbg.fr/1492/01/BOUIFLANE_Mustapha_2008.pdf (in French), or equivalent information in http://www.phy.cuhk.edu.hk/itp/v3/links/mmm/pdfs/08D913_1.pdf (footnote 34).

For our calculations, we use the scientific calculation software “Octave”. It can be downloaded from https://www.gnu.org/software/octave/. It is compatible with all the basic Matlab functions, which greatly multiplies the available support and tutorials. After unzipping the files into a folder, run the program. In the command window, we will have to install the libraries that we will need. You can type the following into the prompt:

>> pkg install io-2.4.10.tar.gz
>> pkg install statistics-1.3.0.tar.gz

We then have to load the package. To do this, we insert a line in the code:

>> pkg load statistics

We used Olsen et al.'s (2003) method (footnote 35), as adapted by Munschy et al. (2007) (footnote 36), with some additional in-depth explanations. Let us adopt the latter's notations (including for the offset, which is denoted by the letter O). The three-component sensor is made up of three fluxgate sensors with small calibration deviations that must be compensated for as best as possible.

34. A good reference paper is https://www.sciencedirect.com/science/article/pii/S0926985106000929 but unfortunately it is not free.
35. https://doi.org/10.1186/BF03352458, which you can download here: https://earth-planets-space.springeropen.com/track/pdf/10.1186/BF03352458?site=earth-planets-space.springeropen.com.
36. https://www.sciencedirect.com/science/article/pii/S0926985106000929.


Using the Earth's field itself allows us to do this, as explained by the simple consideration described in the following.

Let us denote T = (T1, T2, T3) the Earth's magnetic field in a local topographic reference frame (typically east, north and vertical, or any other reference system); the total field is the norm of this vector, in other words:

T = ‖T‖ = √(T1² + T2² + T3²).

Let us denote B = (B1, B2, B3) the same magnetic field in another reference frame: we now speak of a frame that is physically attached to the three-component sensor.

Of course, we know that B ≠ T. But on the other hand, we are sure that:

‖B‖ = √(B1² + B2² + B3²) = ‖T‖ = √(T1² + T2² + T3²).

Indeed, a vector norm is a vector norm: it does not depend on the chosen reference frame!

We do not expect that (B1, B2, B3) = (T1, T2, T3), because it would be very difficult to align the three sensors with the three cardinal directions to better than a degree of angle. But we know that ‖B‖ = ‖T‖, or put more simply B = T. Measuring the module of B gives the module of T, which is what geophysicists call “the total magnetic field”.


Let us now denote F = (F1, F2, F3) the values provided by the three magnetometer sensors. This is not a physical field, but just the three voltages provided by the sensors and their electronics. It turns out that the vector F, relative to B (which is what it is supposed to give), is affected by errors that are not negligible, and because of these we do not immediately obtain equality. There are three main types of errors:

– an error in amplitude, in other words an error that affects the multiplicative constant that converts the voltage of a component into a field value. This is the sensitivity, which we will put in a diagonal 3 × 3 matrix denoted S;

– an angular error: the magnetic axis of an individual sensor is not exactly its geometric axis. We will associate it with a matrix of small rotations denoted P, which goes from the real field values to the ones measured by the sensors, relative to the slightly distorted axes;

– an “offset”, a static shift between the actual field and the calculated field. This is a simple vector denoted O.

In more detail:

– S = diag(s1, s2, s3), where the s are the sensitivities, close to 1 by definition;

– P = [ 1, 0, 0 ; −sin u1, cos u1, 0 ; sin u2, sin u3, √(1 − sin²u2 − sin²u3) ], where the u are small angles, as shown in Olsen's or Munschy's publications;

– O = (o1, o2, o3) are the three offsets or shifts;

– the relationship between the exact magnetic field at the sensors (in the triaxial probe reference frame) and the values provided by the three voltages is then:

F = S P B + O;

– the matrices S and P, as well as the vector O, contain the various calibration factors that link the real, unknown field with the fields or voltages from the sensors;

– remember that what we know here is F, the “output” from the sensors. What we are looking for is B, so let us invert the above relationship to get the following equation:

B = P⁻¹ S⁻¹ (F − O);

– we then just need to determine the nine constants

p = (s1, s2, s3, u1, u2, u3, o1, o2, o3).

To do this, we need a criterion. And we have one, namely that ‖B‖ MUST be constant, whatever the way the probe is oriented, since this quantity is also equal to ‖T‖ and these two modules are invariant under a change of reference frame;

– we then rotate the sensor in all directions; we will see that ‖F‖ is not constant, as it is affected by the calibration “errors”. But if we can determine the constants, ‖B‖ will be very nearly constant. We then just need to mathematically translate this “constant”;

– let us name this phase where the sensor is rotated in all directions: the “movement”;


– in other words, let ‖B‖ be as constant as possible, which is the same as minimizing the centered variance of ‖B‖;

– the latter is varC(‖B‖) = var(‖B‖ − B0), where B0 is the average of ‖B‖;

– B0 is unknown, but actually its value is not important, because whatever it is, it will lead to the same field variations during prospecting (up to a constant). This is because the solution that minimizes the centered variance is the same as the one that minimizes the variance (this is a property of variance: offsetting a random variable does not affect its standard deviation). Munschy adopted this idea by taking B0 = 47,000 nT, an average value in France. We could also use the mean value of ‖F‖, if the latter is more or less preserved relative to the actual field, especially since this value B0 takes the place of the set-point value and, consequently, the corrected field will have an average of B0. It is better to take an initial estimate of this average rather than a completely arbitrary value.

Programming the calibration

The movement data are written in the format (t, x, y, z), possibly with a fifth column containing the total field (but this is recalculated in the code anyway). The separator is a comma and there is no header. The software looks for the parameter values that minimize

varC(‖B‖) = var(‖B‖ − B0), with B = P⁻¹ S⁻¹ (F − O)

and B0 taken as the average of ‖F‖.

It uses formula 23 from the well-known Tarantola and Valette paper37. The software is available on the participatory GitHub website “Programs for the magnetometer”. It must run on the “Octave” software, which mimics


the commercial program Matlab, but requires loading the necessary libraries. The most delicate part is introducing the prior information and the random measurement error value (which is due to noise – the other errors that we seek to correct are systematic errors).

37. Tarantola A., Valette B., “Generalized nonlinear inverse problems solved using the least squares criterion”, Reviews of Geophysics and Space Physics, vol. 20, no. 2, pp. 219–232, 1982. This paper is available online at http://www.ipgp.fr/~tarantola/Files/Professional/Papers_PDF/GeneralizedNonlinear_latex.pdf.

Magnetometer software

The program “magneto_calib_inversion_octave.m” is used to determine the tri-axial fluxgate sensor constants. It reads the data file of a “movement”, i.e. a rotation of the sensor around a fixed point, exploring most directions. Jacobien.m calculates the derivatives (numerically). G_de_m.m is the “direct calculation” in the inversion program. Finally, “process_mag.m” makes the corrections, once the constants are determined and stored in an intermediate file by “magneto_calib_inversion_octave.m”. Figure 1.59 shows a calibration record.
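To make the correction step concrete, here is a minimal Octave sketch of the relationship B = P⁻¹S⁻¹(F − O); it only illustrates the equations above, the parameter values are placeholders, and the project's own process_mag.m should be used for real data:

% Apply the calibration model to one raw sample of sensor values (nT)
% p = (s1 s2 s3 u1 u2 u3 o1 o2 o3), angles in radians (placeholder values)
p = [1.01 0.99 1.00  0.002 -0.001 0.003  120 -80 50];   % example only
S = diag(p(1:3));
u = p(4:6);
P = [1, 0, 0;
     -sin(u(1)), cos(u(1)), 0;
     sin(u(2)), sin(u(3)), sqrt(1 - sin(u(2))^2 - sin(u(3))^2)];
O = p(7:9)';
F = [41230; -5120; 36840];          % one raw sample, placeholder values
B = P \ (S \ (F - O));              % corrected field components
printf("total field = %.1f nT\n", norm(B));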

Figure 1.59. Example of a calibration record. The three highly oscillating functions are the returns from the three fluxgate sensors. The top curve is their Pythagorean module, so the total field before correction. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip


Finally, Figure 1.60 shows the result after calibration. The standard deviation (centered) of the total field is 4 nT, which can be further reduced by lowering the low-pass filter cut-off frequency (currently 12 Hz).

Figure 1.60. Calibration result: before calibration, the Pythagorean root shows a lot of variation, as do the sensor offsets. After calibration, the noise drops to a value such that the three-component system is usable for the total magnetic field. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

2 The Electromagnetic Induction or “Slingram” Method1

2.1. Principle of induction and Slingram

Slingram is a Swedish name. It is a system mainly used for measuring the electrical conductivity2 of a subsoil and is probably the most widely used electromagnetic prospecting system worldwide. It consists of two coils, usually installed at both ends of a tube: one emits a magnetic field that varies at a few kilohertz, and the other detects the field from the transmitter coil (which is not relevant to us) as well as secondary magnetic fields, which are generated by so-called “eddy currents (#)”. These currents depend on the conductivity of the ground, and the device can be graduated in siemens per meter (S/m, the unit of electrical conductivity, inverse of the resistivity). The depth of investigation of the device is about the same as the distance between the coils and, in a similar vein, contributions to the signal can be attributed to a kind of ellipse that roughly surrounds the device.

1. To expand the reader's knowledge on certain points of electromagnetic theory, which we only cover superficially here, we refer the reader to the following documents: https://www.physi.uni-heidelberg.de/Einrichtungen/FP/anleitungen/F52.pdf and https://library.seg.org/doi/book/10.1190/1.9781560802631.
2. Conductivity and its inverse, resistivity, were discussed in Volume 1, in particular in the relationships that these parameters maintain with the water and clay content of the subsoil.


2.1.1. Induction and eddy currents

We see fewer and fewer dynamos on our bikes, but we still have alternators in our cars. These current generators are based on induction: a variable magnetic field “induces” a current in any conductor placed in the space where these variations exist. This is also what happens in the induction stoves that can be found in some of our kitchens. The most traditionally academic form of induction is definitely Lenz's law (#) (or Lenz–Faraday, or just Faraday). Take a flat coil of surface S (for example) crossed perpendicularly by a magnetic field that is homogeneous in space but variable in time, B; the wire is then the seat of an “emf”, an electromotive force (#), denoted e and given by:

e(t) = −dΦ/dt = −S dB/dt

This means that if a resistor R is inserted in the coil (or, put more simply, if R is the coil resistance), this circuit will be crossed by a current i such that i = e/R. In a bike dynamo, a rotating magnet produces the flux variation in a coil and thus generates the current. There is a big difference between a weakly resistant wire and a wire that is cut: in the latter case, there is no current at all. But in the former case, we have a coil through which a current flows, and thus a magnetic field is created. A quick geometrical analysis using Maxwell's corkscrew rule (#) gives a precise meaning to the (−) sign in the equation above: the field created by the induced current opposes the variation of the field that produces it.

The above derivative introduces time in a particular manner into the equation. Indeed, the variation of B is what produces this emf. Let us imagine that B passes linearly from B0 to B1 in a time Δt. The emf will then be:

e = −S (B1 − B0)/Δt

We see that for a given B0 and B1, the emf will be increasingly larger with a smaller Δt interval. We can also consider a periodic phenomenon, as


assuming that B is sinusoidal, we get: B = B0 sin(ωt) = B0 sin(2π f t), where f is the frequency and ω = 2 πf is the “pulsation”. Thus:

dB/dt = B0 d[sin(ωt)]/dt = B0 ω cos(ωt)

What is interesting in this expression is not so much the passage from sine to cosine, but the fact that ω = 2πf “comes from” the cosine and appears as a multiplicative factor proportional to frequency. In short, the faster the variations (transient or periodic), the stronger the induction. Slingram is the transposition of these principles to the field, as shown in Figure 2.1. In practice, several coil configurations are possible. We only use the one where the two coils are in the same plane (“coplanar”). Remaining coplanar, we can either opt for the “horizontal coplanar” mode (as in the figure: the dipole axes are vertical, perpendicular to the coils, which is called HCP or VD mode) or the “vertical coplanar” (VCP or HD) mode.

Figure 2.1. This classic figure shows the Slingram principle for a confined conductive body. The transmitter produces an oscillating magnetic field (the field lines are always the same, but the field strength is sinusoidal). The flow variation in the body produces eddy currents, which themselves create a so-called secondary magnetic field that is superimposed on the primary field in the receiving coil. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip


We see a sort of cause-and-effect chain: (current in transmitter coil) → (alternating magnetic field in the ground) → (eddy currents) → (secondary magnetic field); the receiving coil senses the sum of the primary and secondary fields. Is this a rigorous point of view? Actually, not really, because the emf within the soil produces a secondary field that opposes the primary field, so the field that passes through the coil is decreased, and hence the induced current is decreased. But when the product of conductivity and frequency is small enough, this cause-and-effect chain can be considered a good approximation. This is often the case (but not always, for highly conductive media or for large coil spacings).

The theory behind this is Maxwell's theory, to which we add a behavioral law that links current density to electric field, namely Ohm's law J = σE, where J is the current density (in A/m²), σ is the electrical conductivity (in S/m) and E is the electric field (in V/m). Let us take a quick look at these four equations3; in fact, we will only refer to two of them. These are the Maxwell–Faraday equation, rot E = −∂B/∂t, and the Maxwell–Ampère equation, rot B = μ0 J + μ0 ε0 ∂E/∂t.

The constants μ0 and ε0 are the magnetic permeability (#) and electrical permittivity (#) of the vacuum (from the point of view of these constants, we can assume as a first approximation that the material constants have similar values, because it is above all the influence of σ that we want to see).

One could interpret these two formulas as follows: the first says that B is “produced” from E, and therefore also from J through Ohm's law. The second says that a combination of E and J is “produced” from B. However, such an interpretation is false and dangerous: B is “produced” from E and J, which are “produced” from B, and so on in a vicious circle. Physicists are more rigorous in saying that these equations are coupled and treat them as such.

3. For more details on Maxwell's equations, many links to good resources can be found; here is one: https://www.fiberoptics4sale.com/blogs/electromagnetic-optics/a-plain-explanation-of-maxwells-equations.

G One could interpret these two formulas as follows: the first says that B is G G “produced” from E and therefore also from J through Ohm’s law. The second G G G says that a combination of E and J is “produced” from B . However, such an G G G interpretation is false and dangerous: B is “produced” from E and J , which is G “produced” from B and so on in a vicious circle. Physicists are more rigorous in saying that these equations are coupled and consider them as such.

However, at some point, we need to consider orders of magnitude. Let us take a look at a terrain of given conductivity (for example, 0.01 S/m – in


practice, we use mS/m, so we get 10 mS/m). The working frequency is a few kilohertz. We can then show that, on the one hand, the term ε0·∂E/∂t is negligible and, on the other hand, what we obtain through the somewhat brutal application of the cause-and-effect chain provides fairly acceptable results. A well-known document, “TN-6” (Technical Note 6), written by McNeill and distributed by Geonics, contains more detail and takes this approximation into account (it is available at http://www.geonics.com/pdfs/technicalnotes/tn-6.pdf). Ultimately, the condition for benefiting from this approximation, which is called the “low induction number” condition, is the following:

ω·μ0·σ·s² ≪ 2, where s is the distance between the two coils.

Let us consider an example: a rather conductive terrain, say a clay soil with 100 mS/m (thus 10 Ω·m), with f = 10 kHz and a spacing s (the gap between the coils) of 2 m. We get:

ω·μ0·σ·s² ≈ 0.03 ≪ 2
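This check is easily scripted; the following Octave lines (a minimal sketch using the numbers of the example above) compute the induction-number criterion for any conductivity, frequency and coil spacing:

% Low induction number criterion: omega*mu0*sigma*s^2 << 2
mu0   = 4*pi*1e-7;     % magnetic permeability of vacuum [H/m]
sigma = 0.1;           % ground conductivity [S/m] (100 mS/m)
f     = 10e3;          % working frequency [Hz]
s     = 2;             % coil spacing [m]
N = 2*pi*f * mu0 * sigma * s^2;
printf("omega*mu0*sigma*s^2 = %.3f (must be << 2)\n", N);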

4 We will not discuss the use of Slingram for determining magnetic susceptibility, which is still being researched.

98

Everyday Applied Geophysics 2

Figure 2.2 shows how a commercial device is used, the EM38 MK2, which has two receiver coils (one at 0.5 m from the transmitter and the other at 1 m).

Figure 2.2. The EM38 MK2 in action. Because of its size (1 m), the device has to be placed on the ground. Here, the coils contained in the frame are vertical, so this is V or HCP mode

Many different commercial designs exist, ranging from this small size to 100 m coil spacings for the most recent instruments, and there are also airborne versions with even larger spacings5. Have a look at the following manufacturers, for example: http://www.dualem.com, http://www.geonics.com, http://www.gfinstruments.cz and many others.

2.1.1.1. Slingram and the “in-phase component”

Since eddy currents are out of phase with the induction phenomenon itself, it must be stressed that there is also an in-phase response, which is magnetic in nature. As it is still little used and more difficult to master, we decided not to discuss it in this book.

5. Airborne surveys often rely on the electromagnetic exploitation of transient fields.


2.1.2. An example of Slingram prospecting

Figure 2.3 shows a prospection in a meander of the Yonne River. It reveals the paleo-channels of a river that constantly changed its course (at least when it was still little tamed by man). The device used here is a Geonics EM31, in walking mode.

Figure 2.3. Apparent conductivity map (integration of the grounds beneath the device) in a meander of the Yonne (Christian Camerlynck and students in training, personal communication). For a color version of this figure, see www.iste.co.uk/ florsch/geophysics2.zip


To interpret such a map, the contextualization of the site is important. The red parts have a relatively high clay content (which could be calculated, see section 2.1.2 of Volume 1), while the less conductive blue parts reveal gravel and cobble banks deposited during floods.

2.2. Slingram response in the field: different scenarios

2.2.1. Apparent conductivity for a homogeneous tabular terrain and ordinary conductivity complexes

2.2.1.1. Homogeneous terrain

Our first scenario involves considering a homogeneous terrain with homogeneous conductivity σ. From formula 6 of TN-6, we recall the relationship between Hp, the primary field (received in the receiving coil in the absence of subsoil), and the secondary field Hs, generated by eddy currents. This very important relationship is:

(Hs/Hp)V ≅ (Hs/Hp)H ≅ i·ω·μ0·σ·s²/4

The (i) in the equation is purely imaginary. It shows that the secondary field is in quadrature (90° out of phase) relative to the primary field. From Lenz's law, the derivative transforms a sine into a cosine: it shifts the phase. Ignoring the phase (but not the amplitude), we get:

σ ≅ [4/(ω·μ0·s²)] · (Hs/Hp)

Equality is increasingly accurate as the induction number gets lower.

2.2.1.2. Apparent conductivity

For non-homogeneous terrain, we still measure Hs and Hp. By definition, the above expression then defines an apparent conductivity, denoted σa:

σa = [4/(ω·μ0·s²)] · (Hs/Hp).


2.2.1.3. Tabular terrain

This is a terrain made up of N horizontal layers. We denote the depths of the successive interfaces as h1, h2, ..., hN−1 (the last layer is infinite) and the conductivities of these same terrains as σ1, σ2, ..., σN. McNeill developed a calculation that assumes that a superposition principle is valid6. This is only the case if the low induction number criterion is satisfied. This author rightly prefers to use the reduced variable normalized to the coil spacing s, namely:

zk = hk/s.

McNeill's fundamental idea is to offer the functions:

RV(z) = 1/√(4z² + 1)

for HCP mode (vertical dipole), and

RH(z) = √(4z² + 1) − 2z

for VCP mode (horizontal dipole). These functions give the contribution to the apparent conductivity of a homogeneous terrain that would start at the normalized depth z, i.e. the actual depth h = z·s. Figure 2.4 shows the geometry involved and the graph of these functions. This expression allows us to calculate the contribution of a layer of conductivity σ that extends between the reduced depths z1 and z2 (z2 > z1). This will be the simple difference:

σ[R(z1) − R(z2)]

a difference that can be read as “contribution from depth z1, from which we remove the contribution from depth z2”.

6 https://en.wikipedia.org/wiki/Superposition_principle.


Figure 2.4. Geometry and graph for the R function, which gives the contribution of a layer ranging from the reduced depth z = h/s to infinity

With this contribution to the apparent conductivity of a layer between two depths, generalizing to several layers is straightforward. We take into account the fact that the R functions are 1 for z = 0, and that the last layer is infinite7. For example, here is the apparent conductivity expression for a tabular “4-terrain”:

σa = σ1[1 − R(z1)] + σ2[R(z1) − R(z2)] + σ3[R(z2) − R(z3)] + σ4 R(z3)

7 It does not make sense to apply infinity, bearing in mind that R functions quickly tend toward 0. Great depths no longer occur in the composition of the signal.
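The whole tabular calculation fits in a few lines of Octave; the sketch below is our own illustration of McNeill's formulas, with made-up layer values, and computes the HCP and VCP apparent conductivities of a small layered model:

% McNeill's cumulative response functions (reduced depth z = h/s)
RV = @(z) 1 ./ sqrt(4*z.^2 + 1);        % HCP (vertical dipole)
RH = @(z) sqrt(4*z.^2 + 1) - 2*z;       % VCP (horizontal dipole)

% Layered-model apparent conductivity (3 layers, made-up values)
s     = 1;                              % coil spacing [m]
sigma = [0.001 0.05 0.02];              % layer conductivities [S/m]
z     = [0.4 2.0]/s;                    % reduced interface depths
Rv = [1, RV(z), 0];  Rh = [1, RH(z), 0];        % R(0) = 1, deep limit -> 0
sa_V = sum(sigma .* (Rv(1:end-1) - Rv(2:end)));
sa_H = sum(sigma .* (Rh(1:end-1) - Rh(2:end)));
printf("sigma_a HCP = %.1f mS/m, VCP = %.1f mS/m\n", 1e3*sa_V, 1e3*sa_H);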


Most of the time in Slingram, we do not go that far because the signal decreases fast enough and there is no point, in a way, in taking so many layers into consideration. A particular case concerns certain devices that are worn on the belt, such as the EM31, in order to be able to work faster in the field (Figure 2.5).

Figure 2.5. The author in “walking” mode with a Slingram EM31 (3.66 m between the two coils) worn at belt height. The air height (insulating layer) must be taken into account. In homogeneous terrain, this bias can be corrected by using a simple multiplicative factor. The same coefficient is often applied even though the terrain is not tabular, which is moderately rigorous (but acceptable in practice)

Let us estimate the effect of this layer of air on a homogeneous terrain. With this layer of air, we have in fact a 2-terrain with an initial insulating layer. The above formula is applied with a first layer of zero conductivity and a second layer of conductivity σ. There is only one interface depth, which is simply z = h/s, h being the height of the device above the ground. We get:

σa = σair[1 − R(z)] + σR(z) = 0·[1 − R(z)] + σR(z) = σR(z)

Using an EM31 as an example, with s = 3.66 m and the device at belt height, R(z) = 0.8775. The formula σ = σa/0.8775 ≅ 1.14·σa is therefore often applied to correct this bias (and, we repeat, is only rigorously accurate for a homogeneous terrain).
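A two-line Octave check of this correction factor (assuming a carrying height of about 1 m, which is what the quoted R value corresponds to):

RV = @(z) 1./sqrt(4*z.^2 + 1);   % HCP cumulative response
h = 1.0; s = 3.66;               % assumed belt height and EM31 spacing [m]
printf("R = %.4f, correction factor = %.2f\n", RV(h/s), 1/RV(h/s));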


2.2.1.4. Depth sensitivity

The superposition principle states that contributions from the different layers are additive. So, it makes sense to consider what the contribution from a thin layer located at reduced depth z would be. This sensitivity is none other than the derivative of R, which McNeill denotes as φ. Thus, we get:

φV(z) = 4z/(4z² + 1)^(3/2)

and

φH(z) = 2 − 4z/√(4z² + 1)

These curves are shown in Figure 2.6.

Figure 2.6. Sensitivity for HCP and VCP modes depending on depth

These curves show that the vertical dipole mode (HCP) reaches its maximum sensitivity at z = 0.4, i.e. a depth of h = 0.4·s, while the horizontal dipole mode (VCP) has a greater sensitivity close to the surface. Using both modes thus provides two pieces of complementary information, one deeper than the other, but they are still not independent. Let us note that for both functions R and φ, the ratio of HCP mode to VCP mode tends toward 2 when z becomes large. This is used with R for a calibration protocol.

2.2.1.5. Response for a non-homogeneous terrain

The use of special integration methods (finite differences (#), finite elements (#), method of moments (#)) is mandatory if the terrain is not tabular. Note that tabularity is only required for the area under the device in


order for McNeill formulas to be valid. In any case, it is in fact very difficult to use the Slingram on complex terrains, because even using the two HCP and VCP modes, there are only two values, which is not enough to constrain a three-dimensional (3D) structure, even in cartography. Slingram is often only semiquantitative. A major area of application is in soil salinization, as soils become very conductive. Agronomists can establish a simple statistical relationship between salinity and conductivity, and thus can appreciate the quality of their soils. It may occur that we have Slingram(s) with different spacings. This then makes it possible to create pseudo-sections (see Volume 1), as shown in Figure 2.7.

Figure 2.7. Pseudo-section of Slingram conductivity, obtained by combining an EM38, an EM31 and an EM34 (10, 20 and 40 m). Site of Naizin, DESS in Applied Geophysics, UPMC, 1999. For a color version of this figure, see www.iste.co.uk/ florsch/geophysics2.zip

Below surface silts of roughly metric thickness lie more or less weathered shales. The warmer colors (from green to red) are more conductive and reveal a higher water content.


2.2.2. The response in the presence of metal conductors

In the presence of metal conductors, the low induction number condition is no longer met. Typical responses can be seen, with apparent conductivities that may be negative. Everett [EVE 13]8 showed an example in his work (Figure 8.10), which is partly reproduced in Figure 2.8.

8. Everett M.E., Near-Surface Applied Geophysics, Cambridge University Press, 2013.

Figure 2.8. Taken from Everett, this figure shows an EM31 anomaly for a profile that is perpendicular to a metal pipeline. A theoretical anomaly would be more symmetrical, but this is a characteristic shape

The anomaly over this pipeline first shows an increase in apparent conductivity as the pipe is approached, then a shift to negative apparent conductivity. This behavior is due to the configuration of the field lines and can be understood by examining Figure 2.1, since the secondary field is opposed to the primary field when the conductor is located between the two coils. Depending on the conductivity of the structure, this negative value is not necessarily reached, and there is then just a trough below the reference level. It is worth noting that if the profile and the longitudinal axis of the device are perpendicular to the conductive structure, the distance between the two


peaks is roughly equal to the distance between the coils plus the depth of the structure. Here, with the horizontal distance between peaks being 5 m and the distance between coils being 3.66 m, the depth of the pipe (counted from the device, not from the surface!) is 5 − 3.66 = 1.34 m.

2.2.3. Interpretation and limitations of the method

2.2.3.1. The issue of calibration

In Slingram, the measured signal is proportional to conductivity. If the latter is weak (less than 10 mS/m), the signal is small and therefore difficult to measure. In addition, the measurement is usually biased by an “offset” O. Instead of displaying the true conductivity, the device displays:

σ(displayed) = σa(true) + O

There are several ways to address this problem. The first is to disregard this bias. This is justified when O ≪ σ, which is quite close to reality when σ ≥ 100 mS/m (to support this, the offset is often smaller than 10 mS/m unless the device is completely maladjusted). This is also justified if only a semiquantitative or even qualitative approach is taken (for example, if looking for a metal pipe).

Second method: we recalibrate the device following the manufacturer's recommendations. However, in this case, we would need to know how stable a calibration is over time, especially if we are working with small conductivity values (in some cases, the manufacturer's recommendation is the one recommended in our third method).

Third method: this method is based on the fact that for a layer far from the device (so, for example, when the device is held in the air, relatively far from the ground), we should get the relationship RV/RH = 2. However, we assume that there is an offset, so this relationship is not verified. We then seek the correction C such that:

(RV(displayed) + C)/(RH(displayed) + C) = RV(true)/RH(true) = 2.

From there, we deduce the correction:

C = RV(displayed) − 2·RH(displayed).

therefore C. Upon returning to the office, we draw a curve – function of time – for quantity C, and we make the corrections a posteriori based on the equations above. Figure 2.9 shows a calibration operation (third method: we rotate the “set of pots”, or fourth method: we take notes).

Figure 2.9. Calibrating a Slingram EM38. Placed at about 1.8 m from the ground, almost twice the distance between the coils, the ratio RV/RH must be close to 2


2.2.3.2. The issue of inversion

Let us recall, from Volume 1, that the purpose of inversion is to go from the data provided by the apparatus to the parameters of a subsoil model, for example the thicknesses and conductivities of the layers of a supposedly tabular terrain. In some cases, it is not the conductivity that is directly sought, but an environmental parameter, such as salinity. For this type of task, an environmentalist must establish a relationship between “his” parameter, for example salinity (which makes soils very conductive), and the value provided by the Slingram: but this is no longer just geophysics. The geophysicist's job is rather to establish whether the situation is tabular or not, and ultimately to deduce the structure of the subsoil.

Let us first consider a Slingram (with both HCP and VCP modes) that makes two measurements. This is enough for a homogeneous terrain, which has only one parameter, and we must find the same conductivity value in both cases. As soon as we have a simple 2-terrain (which is often the case for a soil layer and a substratum), we need three parameters: two conductivities and an interface depth. Some recent devices offer multiple coil spacings, such as the CMD Explorer (http://www.gfinstruments.cz/index.php?menu=gi&cont=cmd_ov). Using both modes, this provides up to six values that can be used for inversion. This would be the same as using several Slingrams of different dimensions. But inverting these data is not easy, and many have found themselves pulling out their hair. There are several reasons why these inversions are difficult. First, there is no guarantee that the observed values, which may be produced by a 3D structure, can be reproduced by any tabular terrain at all; in this case, there is simply no mathematical solution. Then, noise and measurement errors have strong repercussions on the inversions. The only method that seems to work is Bayesian inversion, but it is quite unwieldy. Ultimately, qualitative or semiquantitative analyses will often be sufficient.


2.2.4. Slingram map examples

EXAMPLE 2.1.– On a plot of the South African site of Potshini, an EM38-MK2 map is intended to evaluate the thickness of the soil, which can be clearly seen in Figure 2.10.

Figure 2.10. In Potshini (South Africa), ravines allow us to evaluate the nature of the ground. The initial surface layer is largely washed of its clay particles, and is resistant (horizon A, here from 30 to 45 cm thick), while the layer just beneath it is full of conductive clays (horizon B). The EM38 signal is no longer sensitive beyond a depth of 2 m. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

The two spacings (50 cm and 1 m) as well as the two device arrangements (HCP, VCP) are used. These maps are shown in Figure 2.11.

Figure 2.11. Measurements of the ground at Potshini using a Slingram with two spacings. From left to right, the device “sees” deeper and deeper, moving from the very non-conductive surface layer (1 mS/m) to a more conductive layer. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip


We inverted these data using Bayesian methods (Figure 2.12)9. The inversion makes it possible to recover the parameters, namely the two conductivities and the thickness of horizon A.

Figure 2.12. The inversion provides the interface depth (here 0.4 (±0.1 m)) and the conductivity of the second field, which has a significant zonation in terms of conductivity. The most conductive zones are interpreted as containing more clay (and are therefore more impermeable). For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

EXAMPLE 2.2.– Oleron Island. The island is incised with ancient gullies dug into the limestone and filled with “bri”, the local term for the clay used to fill these incisions. As the bri here is salty, its conductivity is high. The map is pretty much a thickness map of the clay cover (Figure 2.13).

9 The author recommends “The use of Slingram EM38 data for topsoil and subsoil geoelectrical characterization with a Bayesian inversion” by Grellier et al., but there are several relevant sources online which the reader is encouraged to explore.


Figure 2.13. EM31 map of the marsh of La Perroche, Oleron Island. The very high conductivities correspond to the zones where the salty clay thickness is the greatest. The resistant areas are almost on limestone. The line through the middle is because of a battery failure, while the one at the top is the outline of a buried metal pipe. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

EXAMPLE 2.3.– Site of Champgazon, pollution detection. The device used is a CMD miniexplorer, and the map shown in Figure 2.14 is a HCP map with a coil spacing of 1 m.


Figure 2.14. Slingram map of Champgazon. The area is very homogeneous with a low conductivity at the limit of capacity of any Slingram, except in the zone of high conductivity, which is an old landfill. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

The whole site is wooded. We show the trajectory of the Slingram obtained with a sensitive GPS, for which the irregularity is due to difficulties from working in the forest, and in a peat bog on top of that. The entire homogeneous area (blue) is of low conductivity. In a peat bog, water is not so conductive because it is low in ions. The big patch in the lower part is an old dump full of metal waste such as washing machines.


2.3. How to build an EMI-Slingram device

Little information exists on the Internet on how to build a Slingram. So, we developed a functional prototype, with a coil spacing ranging from 1 to 5 m (perhaps more, with larger coils). A transmitter coil and a receiver coil are the main components of a Slingram. Let us see the sizes and quantities involved. We note that the two HCP and VCP modes are in fact one and the same device, just oriented differently in the field. The two coils are in “Gauss' second position” relative to each other.

First, the transmitter: the magnetic moment of a coil of surface area SE with NE turns, traversed by a current I·cos(ωt), is:

M(t) = SE · NE · I·cos(ωt)

We again denote by s the spacing between the two coils. In Gauss' second position, in the absence of matter in the vicinity, the field strength (induction, to be precise) is:

Bprimary(t) = μ0 · M(t)/s³

Let us place a receiver coil at a distance s, with surface area SR and NR turns. The emf generated within the coil is:

e(t) = −SR·NR·∂B/∂t = −μ0·(SR·NR/s³)·∂M(t)/∂t = +μ0·(1/s³)·SR·NR·SE·NE·I·ω·sin(ωt)

This is Lenz’s law, because of the derivative, which transforms the cosine into a sine: the receiver coil dephases B by 90°. The sign becomes positive again because the derivative of cosine is –sine, but this depends on the direction of the coil. If the receiver coil is facing the same direction as the transmitter, the receiver is crossed in the opposite direction in Gauss’ second position. So we could change the sign back to (-), depending on the orientation of the coil (we would also change the sign by changing the transmitter).


Let us consider an example: at a frequency f = 10 kHz (ω = 2πf), with a current of 10 mA = 0.01 A in the transmitter, 400 turns in each coil of 5 cm radius, and s = 1 m, we get:

e(t) = 4π × 10⁻⁷ × [π(0.05)²]² × 400 × 400 × 0.01 × 2π × 10,000 × sin(ωt), i.e. an amplitude of 7.8 mV

That does not seem too bad, but this is the primary field, which is, in fact, a nuisance. Let us now consider a conductive half-space. McNeill tells us that:

(Hs/Hp)V ≅ (Hs/Hp)H ≅ iωμ0σs²/4

and the ratio is the same for the induction B = μ0H in the absence of particular magnetic properties. Remember that the complex factor i represents a phase shift that is specific to the medium (it is an initial phase shift relative to the primary field, because there will be a second phase shift linked to the coil detection technique; relative to the current, the signal produced by the underlying conductive medium will therefore have a phase shift of 180°). We get (and this is normal) a factor ωμ0, since induction in the medium is proportional to these two constants. From this relationship, we deduce that:

Bsecondary = (iωμ0σs²/4) Bprimary = iΓ Bprimary

Of course, the additional emf that occurs due to the subsoil will have the same ratio, but it will be out of phase by 90°. Let us calculate this factor Γ = ωμ0σs²/4, taking a rather low conductivity value σ = 10 mS/m = 0.01 S/m. Thus, we get:

Γ = (2π × 10,000 × 4π × 10⁻⁷ × 0.01) / 4 = 2 × 10⁻⁴

This secondary field is therefore 5,000 times weaker than the primary one. This will make measurements very difficult.
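Since these orders of magnitude drive the whole design, it can be reassuring to check them numerically. The following Octave lines are only a sketch, using the example values of this section, and reproduce the 7.8 mV and 2 × 10⁻⁴ figures.

% Numerical check of the primary emf and of the secondary/primary ratio
mu0 = 4*pi*1e-7;                 % vacuum permeability (H/m)
f = 10e3;  w = 2*pi*f;           % 10 kHz
I = 0.01;                        % 10 mA peak transmitter current
NE = 400; NR = 400;              % turns of the transmitter and receiver coils
SE = pi*0.05^2; SR = pi*0.05^2;  % coil areas for a 5 cm radius (m^2)
s = 1;                           % coil spacing (m)
sigma = 0.01;                    % 10 mS/m
e_peak = mu0/s^3 * SR*NR*SE*NE * I * w    % primary emf amplitude, about 7.8e-3 V
Gamma = w*mu0*sigma*s^2/4                 % about 2e-4
e_secondary = Gamma*e_peak                % of the order of a microvolt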


The situation, as we can see, gets worse as s increases, because the secondary emf available at the receiver scales as s² × 1/s³ = 1/s. For decametric devices, such as the EM34, the size of the coils is greatly increased to compensate for this scale effect.

2.3.1. Sizing the transmitter

Although it is clearly beneficial to increase the number of receiver turns (but be careful: the receiver also picks up noise and radio fields, especially VLF), we cannot apply the same idea to the transmitter, as we will show. Indeed, when designing such a device, we have to consider the power consumption in order to optimize the battery. In addition, ordinary electronics typically work between (–12, +12) or (–15, +15) V. So, if we try to send 1 A into the coil, we will spend some 30 W doing so (without even counting the efficiency of the transmitter), and a field source of reasonable size cannot keep up with this. It is nevertheless necessary to seek the maximum transmitted field H (or B) for the amount of power available to the transmitter electronics. The latter is set by the capacity of the power amplifier, but it is also limited by the power supply (and by the battery reserve capacity). In our case, we have a battery at the source (for example a 12 V, 4 Ah one) supplying a TRACO converter at ±15 V, which powers a power op amp, the LM675, capable of delivering 3 A into 4 Ω. However, we limit the output voltage of the op amp to ±10 V (10 V peak, about 7 V rms). As the load is essentially inductive (a priori more so than a loudspeaker, which relies on the acoustic power it must supply), we should not go below about 10 Ω for the transmission coil impedance. Using enamelled wire of a decent diameter (1 mm) makes the ohmic resistance R of the coil almost negligible. The impedance of the coil is thus:

ZL = R + jLω ≅ jLω ⇒ |ZL| = Lω

The inductance of a coil can be calculated using online tools such as http://www.66pacific.com/calculators/coil-inductance-calculator.aspx. For 1 mm diameter wire, let us wind 60 turns with 10 turns per layer, which corresponds to the screenshot from this excellent tool shown in Figure 2.15.
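The inductance can also be estimated by hand with Wheeler's classical approximation for a multilayer air-core coil. The sketch below is ours, not the website's method: the mean winding radius of 5 cm is an assumption (the exact inputs of the screenshot are not reproduced here), and the same lines then give the impedance, current and power figures derived in the rest of this subsection.

% Wheeler's approximation for a multilayer air-core coil (sketch, assumed geometry)
N = 60;              % 60 turns: 10 turns per layer, 6 layers of 1 mm wire
a = 0.05;            % assumed mean winding radius (m)
b = 10*1e-3;         % winding length: 10 turns of 1 mm wire (m)
c = 6*1e-3;          % winding depth: 6 layers of 1 mm wire (m)
L_uH = 31.5 * N^2 * a^2 / (6*a + 9*b + 10*c)   % about 630 uH, close to the 655 uH below
L = L_uH*1e-6;
f = 10e3; w = 2*pi*f;
ZL = L*w             % impedance magnitude, about 40 ohms
Upeak = 10;          % peak voltage delivered by the power op amp
Ipeak = Upeak/ZL     % about 0.25 A peak
Ppeak = Upeak^2/ZL   % about 2.5 W peak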


Figure 2.15. The aforementioned website offers useful software for calculating inductance

Thus, we get L = 655 μH = 6.55 × 10⁻⁴ H. At 10 kHz, our reference frequency, this gives an impedance: ZL = Lω = 6.55 × 10⁻⁴ × 2π × 10⁴ ≈ 41 Ω.


Let us calculate the peak current that will pass through the coil. This is:

Ipeak = Upeak/ZL = 10/41 = 0.24 A

This is the peak current to be supplied by the power op amp, and it is compatible with the TRACO’s capacity of 0.5 A in continuous operation. Of course, such a calculation is arrived at through trial and error. With a more powerful power supply module, the power could be increased, but we have to stop somewhere. The peak power used by the amp is:

Ppeak = U²peak/ZL = 100/41 = 2.4 W

The amp therefore needs to be equipped with a small heatsink. By pushing the optimization logic for the transmitted field, we find that the best option would be a single turn carrying a huge current. This is an expected result if we consider that the impedance of a coil is proportional to its inductance, which is itself proportional to the square of the number of turns: if we increase the number of turns, we increase the field by a factor N, but we simultaneously decrease it by a factor N², because the impedance of the coil reduces the current by as much. We can also deliberately opt for less current and power. For example, by opting for 400 turns of 0.5 mm enamelled wire on each coil, and also taking into account the ohmic resistance of the wire, the current is about 6 mA.

2.3.2. Electronics design

To carry out all this work, it is helpful to have a basic electronics laboratory, and an oscilloscope will prove extremely useful.

2.3.2.1. Transmitter

A diagram of a transmitter is shown in Figure 2.16.


Figure 2.16. Diagram of a transmitter

First, we have to generate a signal. We used a small, easy-to-find function generator module based on the conventional XR2206, set to generate a sinusoidal signal at about 10 kHz. One potentiometer adjusts the amplitude, and the other two are used for coarse and fine frequency adjustment. Be careful with the power supply, which should not exceed about 10 V; here, it is asymmetrical. Note that the symmetric DC converter has a floating ground, but a converter providing 0–10 V would have been perfectly suitable. There is no reason not to use another sine generator here (such as a Wien bridge oscillator, for example). The output is decoupled by a capacitor before reaching the non-inverting input of the LM675 power op amp. Its gain is only 1 + 47/10 = 5.7, and we then adjust the amplitude of the XR2206 to have 10 V at the output of the LM675. The latter is powered by an isolated DC-DC converter capable of supplying up to ½ A (or by two batteries with a mid-point). The shunt sends a reference to the receiver for synchronous detection of the signal transmitted by the subsoil. Its value depends on the chosen (or obtained) current. Sending up to 10 V to the receiver multiplier is convenient, but this introduces a certain resistance into


the circuit; one can also go down to 1 V and provide the receiver with a conditioning amplifier. In series with the coil, the current can be routed through a primary field compensation coil. This is optional and will be further discussed in the section on the receiver (section 2.3.2.2). If this option is not chosen, the continuity of the coil supply wire must obviously be ensured (as well as its return to ground). Using a coaxial cable probably limits some coupling.

2.3.2.2. Receiver

An electronic diagram of the receiver is shown in Figure 2.17.

Figure 2.17. Electronic diagram of a Slingram receiver


The signal from the receiver coil reaches an instrumentation amplifier (here, for example, an INA128P) with a gain of 107. This gain is suitable for an inter-coil spacing of 1–2 m. For a larger device, it can be adapted in proportion to the distance: for an inter-coil spacing of 5 m, the gain resistor can be replaced by one 10–20 times smaller, for a gain 10–20 times greater. The signal is then filtered with a second-order band-pass filter in the Rauch (multiple feedback) structure. Let us observe the response of this filter, as shown in Figure 2.18.

Figure 2.18. Amplitude and phase of the Rauch filter, with resistance of 22 kΩ and capacitance of 1 nF. Note the gain of ½ and, in particular, the zero phase at the characteristic frequency of the filter, here 10 kHz
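The zero-phase property discussed below can be visualized with the generic second-order band-pass response that the Rauch stage realizes. The sketch below assumes a center gain of ½ and f0 = 10 kHz, as in Figure 2.18; the quality factor Q is an arbitrary value, since the exact Q depends on the component ratios of the stage.

% Generic second-order band-pass: H(jw) = H0 / (1 + j*Q*(w/w0 - w0/w))
f0 = 10e3; w0 = 2*pi*f0;
H0 = 0.5;                  % center gain of 1/2, as in Figure 2.18
Q = 2;                     % assumed quality factor, for illustration only
f = logspace(3, 5, 500); w = 2*pi*f;
H = H0 ./ (1 + 1i*Q*(w/w0 - w0./w));
subplot(2,1,1); semilogx(f, abs(H)); xlabel('f (Hz)'); ylabel('|H|');
subplot(2,1,2); semilogx(f, angle(H)*180/pi); xlabel('f (Hz)'); ylabel('phase (deg)');
% The phase crosses zero exactly at f0, where the amplitude is stationary.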

Near its characteristic (or “central”) frequency, where the amplitude is stationary, the phase varies linearly and passes through zero: the filter does not dephase at the peak of its band. We will use these properties to adjust the zero of the device. After the filter comes another gain stage, also with an offset setting. The offset must be set to 0 at the output of the op amp, whose gain is high (1,000), in order to reduce the DC component at the output of the AD708 (which is a low-offset amp); this is done in the absence of transmission, or with the receiver input short-circuited. This can be adjusted once and for all with a trimmer, or transferred to a front panel potentiometer, which allows frequent checking (and gain adjustment if desired). A test point can be brought out to the front panel: the “voltmeter offset adjustment”, where a low-pass filter can be used to retain only the DC component at the output of the amp. The purpose of this setting is to avoid injecting a parasitic DC component into the multiplier,


which would come from the offset voltage of the AD708 or of another equivalent amp with a less favorable residual offset than the AD708 (see footnote 10). The amplified signal is then injected into an AD633 signal multiplier, where it is combined with the reference. The latter, which is set to 1 V (zero-to-peak) at the transmitter shunt, benefits from a gain stage before the multiplier (bottom left amp). This makes it possible to take full advantage of the dynamics of the multiplier, whose output voltage is one-tenth of the product of its two inputs. In practice, the transmitter will have to be set so as to have 20 V peak to peak at the multiplier input (so 2 V peak to peak at the transmitter shunt). The output of the multiplier (see its documentation, easily available on the Internet) is the product of the voltage between pins 1 and 2 and the voltage between pins 3 and 4. Let us denote these two voltages V12(t) and V34(t), the reference pins 2 and 4 being grounded. The output voltage of the AD633, on pin 7, is given by:

V(t) = V12(t) V34(t) / 10 + Z

The Z term is an offset that can be applied to pin 6; we chose not to apply it, but it can be used as indicated in the AD633 manual. The signal coming from the receiver, properly filtered and amplified, is thus multiplied by the signal coming from the reference. Let us specify how this “synchronous detection” works. Consider the transmission current, in phase with B at the transmitter and also with the shunt voltage, which is here V12(t); its temporal dependence is V12(t) = Vref cos(ωt). We know that the primary signal induced in the receiving coil is 90° out of phase with it. Depending on the direction of the receiving coil, this “primary” voltage, which comes directly from

10 The AD708 is given for a maximum “input offset voltage” of up to 30 μV. With a gain of 1,000, we get 30 mV at the output. Another “trick”, instead of the offset setting, would be to insert a second Rauch filter (identical to the first) between the gain amplifier and the multiplier. At the very least, the latter could also be reached via a capacitor, but it must then be checked that input 1 of the multiplier does not charge up with static electricity.


the receiving coil, will be ±VP sin(ωt)11. We have also seen that the secondary field is dephased by a further 90°, this time by induction in the ground (and then by induction in the receiving coil). This secondary contribution can thus be written ±VS cos(ωt). Our conclusion is that the secondary signal, as induced in the receiving coil, is in phase with the reference (or in phase opposition, i.e. shifted by 180°, depending on the orientation of the coil axis; the sign is checked, and hence the orientation of the coil, at the time of the first experiments). The principle of the multiplier is therefore to multiply V12(t) = Vref cos(ωt) (the reference) by the receiver signal, which looks like (signs apart) VP sin(ωt) + VS cos(ωt), so that at the output we get:

Vout(t) = Vref cos(ωt) [VP sin(ωt) + VS cos(ωt)] = Vref VP cos(ωt) sin(ωt) + Vref VS cos²(ωt)

We must not forget that there is a 1/10 factor, but it is compensated for by the gain of 10 in the reference circuit: by Vref, we mean “at the shunt output”. If we need a reminder of the trigonometry, it is easily found on the Internet, including the following formulas:

cos a sin b = ½ [sin(a + b) + sin(b − a)]  and  cos²a = ½ + ½ cos 2a

Expanding the first term of Vout, we see that it gives a signal in sin(2ωt), the sin(b − a) term vanishing because a = b = ωt and sin(0) = 0. The second term also carries the double frequency, as well as a DC component equal to:

U = ½ Vref VS

11 Although the coils all face the same way, there are two sign flips: one from the law of induction, and the other from the geometry of the field lines, which cross the transmitter and receiver coils in opposite directions. For the secondary field in a homogeneous medium, similar reasoning shows that the field at the receiver coil has the same direction as the primary field: it reinforces it.
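A minimal numerical check of this demodulation can be done in Octave: multiply the reference by the received signal and keep the DC component. The amplitudes below are arbitrary, chosen only to illustrate the principle.

% Synchronous detection: the DC output depends only on the in-phase (secondary) part
f = 10e3; w = 2*pi*f;
fs = 1e6;                     % simulation sampling rate (arbitrary)
t = 0:1/fs:0.01-1/fs;         % 10 ms, i.e. 100 whole periods
Vref = 1; VP = 1; VS = 1e-3;  % arbitrary primary and secondary amplitudes
ref = Vref*cos(w*t);
sig = VP*sin(w*t) + VS*cos(w*t);
Vout = ref .* sig;            % what the AD633 does (apart from its 1/10 factor)
U = mean(Vout)                % DC component: Vref*VS/2 = 5e-4; VP is rejected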


Thus, the continuous component at the output of the AD633 multiplier gives us the voltage VS that interests us, and only this: the primary field is eliminated! The double-frequency component at 2ω, here at 20 kHz, must be removed. This is very easy to do, even with a passive RC low-pass filter cutting at a few hertz: this is the role of the final low-pass filter. Moreover, together with this final low-pass filter, the synchronous detection acts as an excellent additional filter that only retains the desired frequency. A follower-mounted amplifier completes the assembly to provide a near-zero output impedance, and an additional gain of 10 is optional. However, the entire acquisition chain (filtering, amplification) must not introduce any unwanted phase shift: the amplified signal must accurately maintain its 90° phase shift relative to the current in the transmission coil. There will always be a small phase shift, if only because of the cascaded electronics, which are never perfect. This is where the trick of using a Rauch structure12 comes in: close to its peak, the amplitude is stationary, but the phase passes through zero. Consequently, a small residual phase shift due to the electronics can be corrected by a small frequency adjustment. This leads to the Slingram zero adjustment procedure in the absence of a conductive subsoil, which can be carried out by following the procedure described in section 2.2.3, based on the HCP/VCP response ratio, which must equal 2.

2.3.2.2.1. Note on primary field compensation

One difficulty with the Slingram is that the secondary field is very small compared to the primary field. Indeed, we saw in section 2.2.1 that:

Hs/Hp ≅ iωμ0σs²/4

where Hs and Hp are the secondary field (due to the soil response) and the primary field, respectively. Let us apply this to a realistic example. Let s = 2 m, and we get the following ratio:

12 Other band-pass filters would also do, as long as their phase cancels out at the characteristic frequency.


Hs/Hp ≈ 0.0008 ≤ 1/1,000

Thus, in the receiving coil, the secondary field, which is the one due to induction in the subsoil, is about one thousandth of the primary field. However, this “thousandth” (or less)13 is what we are looking for, and the electronics (in this case the multiplier) will have to make do with it. An instrumental idea that is very useful for measurements at low conductivity is to compensate for this primary field by canceling it out in one way or another. Obviously, we cannot do this at the transmission side, since the primary field remains the origin of the eddy currents that we want to provoke. A first solution is to acquire a signal in phase with the eddy currents and to reinject it by difference (subtraction), for example at the output of the first instrumentation amplifier (an additional op amp is then needed). Alternatively, a subtractive injection can be added at the minus input of the gain amp, where we already added the offset compensation; in this case, we must also add an adapter amp with an output resistance, connected to the minus input of this ½ AD708. To obtain such a signal, we just need to place a small pick-up coil above the transmitter coil and acquire its signal, which will have the correct phase (the same as the eddy currents). One terminal of this coil can be connected to the receiver ground, and the other can feed the adapter follower. We can also use an instrumentation amp with its two near-infinite impedance (+) inputs. A second solution is to divert the current of the transmitter coil through a small additional coil, partially overlaid on the receiving coil (with its orientation checked). This coil has a negligible effect on the primary field in the subsoil, but its proximity to, or even overlap with, the receiving coil will compensate for the primary field. The best way to do this is to make an elongated quadrangular coil, of a length similar to the coil diameter and typically one-tenth as wide, whose overlap with the receiving coil is adjusted so as to cancel out the total flux of the primary field (as measured with an oscilloscope at the output of the Rauch filter or of the gain amplifier, or by any other method). This solution requires a little mechanics, unless the compensation coil is permanently glued to the receiver coil.

13 Even with the most advanced professional devices, the realistic lower measurement limit is around 1 mS/m, provided the calibration has been done correctly.

2.3.2.2.2. Note on power supply

The gain to be produced at the receiver can be enormous, even reaching 1 million. Even with very weak coupling between transmission and reception, there is a risk of spontaneous oscillations that may ruin the whole thing. Such oscillations generally have a frequency very different from the operating frequency of the device, so the Rauch filter (and possibly a second one added before the multiplier) limits this risk. This risk is also what leads us to separate the power supplies, using DC converters (from Traco or Murata) for each of the units:
– a ±5 V Murata for the XR2206 transmitter generator (which does not support more than 12 V), or 0–10 V, knowing that its output is floating thanks to a capacitor;
– a Traco able to deliver a few watts to supply the transmission coil amplifier at ±15 V;
– a ±12 V Murata for the receiver.
The whole thing, with decoupling at the output of these power supplies, may be powered by a single 0–12 V source, but we have seen that, during use, variations in this voltage affect the output voltages, especially for the Murata devices. This affects the stability of the device. An improvement is to use slightly higher-voltage Murata converters (±15 V) followed by real regulators of type 78L12 and 79L12 for the positive and negative voltages, respectively (and a 78L10 for the XR2206).

2.3.3. Initial settings and calibration of the Slingram

We have seen that two settings are useful, perhaps even necessary:
1) an offset setting;
2) a compensation setting, particularly for measuring low conductivities, which can be carried out either by removing the transmitted field using a small coil connected to the transmitter, whose inverse voltage is


returned before the gain amplifier, or via a rectangular coil connected in series with the transmitter coil and used in opposition to the measuring coil to cancel out the primary field flux (in any case, this second solution is the simplest). We then just need to convert the reading of the voltmeter (whether a multimeter or an acquisition chain) into conductivity. To do this, we need to link the output voltage of the system, U = ½ Vref VS, to the ground conductivity. The voltage Vref depends on the sizing of the transmitter and of the shunt; the best thing to do is to measure it with an oscilloscope, once and for all, and set it to 1 V.

The voltage VS must be related to the voltage measured at the receiver coil, already expressed above, for the part induced by the subsoil alone. The peak voltage due to the primary field in the receiver coil (dropping the sine) is:

epeak = μ0 (1/s³) SR NR SE NE · I · ω

The voltage due to the secondary field alone is this very same quantity, multiplied by the ratio between the secondary and primary fields. Thus, we get:

esecondary = μ0 (1/s³) SR NR SE NE · I · ω × [ωμ0σs²/4] = (μ0²/(4s)) ω² SR NR SE NE I σ

This voltage undergoes the gains of the different stages: G1 for the input instrumentation amplifier (107 in our example), Gr = ½ for the filter and G2 = 10, 100 or 1,000 for the final stage (plus a possible additional gain of 10, which we do not take into account here, i.e. we assume it is 1). We do not count the 1/10 factor of the multiplier, which is compensated for by the gain of 10 in the reference circuit.


Thus, we get:

U = G1 Gr G2 (μ0²/(4s)) ω² SR NR SE NE I σ

and hence:

σ = U / [G1 Gr G2 (μ0²/(4s)) ω² SR NR SE NE I]
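As an illustration, this conversion can be scripted. The sketch below uses the example values quoted in this chapter (the 6 mA low-power option, 5 cm coils of 400 turns, s = 1 m) and an arbitrary example reading U; a real device should, of course, still be calibrated, as discussed next.

% Converting the output voltage U into a ground conductivity (sketch)
mu0 = 4*pi*1e-7;
f = 10e3; w = 2*pi*f;
G1 = 107; Gr = 1/2; G2 = 100;     % instrumentation amp, Rauch filter, final stage
SR = pi*0.05^2; NR = 400;         % receiver coil: 5 cm radius, 400 turns
SE = pi*0.05^2; NE = 400;         % transmitter coil: same geometry
I = 6e-3;                         % transmitter current (low-power option above)
s = 1;                            % coil spacing (m)
k = G1*Gr*G2 * mu0^2/(4*s) * w^2 * SR*NR*SE*NE * I;   % so that U = k*sigma
U = 0.005;                        % example reading (V), chosen arbitrarily
sigma = U/k                       % about 0.01 S/m, i.e. 10 mS/m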

That is a lot of coefficients for one calibration. Another method is to take a measurement on ground of otherwise known resistivity (determined, for example, by conventional electrical measurements). We need a homogeneous terrain over a depth of at least three times the spacing between the coils... We might as well go to a lake (with a plastic or wooden boat), since the conversion between the displayed voltage and the conductivity is then very simple. Once in the field, the device zero can be reset every few hours (by slightly playing with the frequency), as explained in section 2.2.3.

2.3.4. Data acquisition

We can use our trustworthy field notebook to record our measurements. We can also use almost any acquisition system, especially if the acquisition frequency is low. The system for electrical acquisition discussed in Volume 1 is very well suited with some small modifications, since there is only one value to measure. It can be beneficial to connect control points to check that the different components are not saturated. Electronic engineers will know how to do this.

3 Processing Geophysical Maps

In this chapter, we will discuss the main steps involved in processing a geophysical map. We do not intend to go very far from a technical point of view, but we do propose some practical and common-sense considerations. Volume 3 of this collection will look at more in-depth techniques, including modeling tools.

3.1. Introduction

The choice of data representation modes (here, we mean two-dimensional geophysics) is fundamental: a good representation is rich in information, whereas a poor representation risks missing some essential information. Although it may seem simple, this question of representation is actually quite complex, and one needs practice – and experience – to get it right. The most common problem is an “overwriting” of relevant data by less relevant data. Let us imagine the superposition of two or more anomalies, one of large amplitude and the other small, as shown in Figure 3.1, where we consider a profile (the issue is the same for a map): we see a loss of information. It is not very difficult to fix this, but it still requires some thought: let us assume that a finer reading of the map could reveal initially invisible anomalies, and let us therefore keep in mind that we could be missing something.


Figure 3.1. Superposition of small anomalies (A and C) and a large anomaly (B), such that they are divided into horizontal sections (dotted lines, which are contour lines) and colored with five colors between the contour lines. Small anomalies, which may well be the most relevant depending on the target sought, disappear completely in the representation

To remedy this, we can, for example, increase the number of contour levels, but this is not always without drawbacks. Another method, very effective in our case, is to “remove the regional”, in other words the “long spatial wavelengths” or “trends1” (even if the latter is not at all linear). Either we want to explore the “big” anomaly (and then all is well), or we want to focus on the small ones (depending on the targets to be explored), which then have to be detected; we would then have to adapt the representation and search the finer levels of the geophysical signal. But we still need to know that small anomalies are indeed there: if one is satisfied with a rushed representation, one may miss the target. A particular difficulty lies in the fact that the probe must be very accurately positioned when looking for small anomalies within large ones. Indeed, in the figure we see that a small positioning error2 where a strong anomaly exists will result in an amplitude measurement error. This is known as “positioning noise”. Contour lines are not the only way to represent data: Figure 3.2 shows three possible representations of the same map.

1 Terminology varies from community to community. The terms used by financiers on the stock exchange, statisticians, geophysicists, etc. are different, but through common sense we know what we are talking about. 2 In other words, the acquired data file shows that one is at a specific position (often a round number: 1 m, 2 m, etc.) while the sensor is actually offset by a few centimeters from these round values.



Figure 3.2. Resistivity map (extract) of the Dehlingen site. At the top, representation with contour lines; in the middle, shaded; at the bottom, in perspective. The perspective map is mainly used for making pretty report covers. The shaded map reveals details that are harder to make out on the contour map, and this is where most of the information really lies. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

The classic contour map allows us to clearly identify a clay environment, with resistivities below 30 Ω·m. The more resistive elements are the remains of a Roman villa, with a main body on the left and possibly an outbuilding on the right. Nothing is visible from the surface, not even from the air: the topsoil (here, meadows) completely masks the structures, but it is transparent to electric currents.


3.1.1. How to make a map

The data (.dat format) associate measurements with points of known coordinates. These are not necessarily on a regular grid, and the first step will be to homogenize them onto a regular grid (.grd format), as this greatly facilitates the work downstream. This will be discussed in the following section. It is only after obtaining this .grd file that we consider the representation, most often with a preference for contours. There is always a small amount of trial and error (or even a lot in some cases) before achieving a satisfactory visualization. .grd files normally fill a perfect rectangle, but the data depend on the constraints of the terrain. There is no recipe, and one has to remain flexible... In .grd format, the data fill exactly one rectangle, without gaps, so if measurements are missing, they are replaced by a recognizable arbitrary value, such as 1.70141e+038.

3.1.2. Map representation software packages

In the world of geophysics, Golden Software’s Surfer is the most popular. It is flexible, easy to use and powerful. I am not just saying this to win shares (and I do not have any), but because I really have tried many other software packages. Some software developed in laboratories can be interesting but is often difficult to obtain because it is not distributed online (notably “Wumap”, developed at Sorbonne University, France, in the “METIS” laboratory; this software is very suitable for geophysics since it includes specific functions such as trimming and, typically, magnetic processing functions such as reduction to the pole). Most representations in this book were made using Surfer. But at the end of the day, this software is not free: Surfer costs about 850 USD (for Wumap, we have to wait for a new version, which is still under development).


3.1.3. Free software

Here, we must mention GMT (http://gmt.soest.hawaii.edu/), which was initially developed under Unix for representations on a continental or global scale3. This software is very powerful, but getting started requires some personal (non-financial) investment. Octave, which is built to have the same syntax as Matlab and which we use in this book, allows some basic manipulations to be carried out; we will look at an example later. However, its contouring function is buggy4 and does not always yield a good result. We can still attain an adequate representation so long as we do not seek to draw contours sensu stricto. An Octave toolbox named Divand5 promises to allow many operations, but we have not tested it yet. For magnetism, let us not forget MagPick, developed by Geometrics. It is free. It not only allows reduction to the pole and continuations (see below), but also provides tools for interpreting anomalies (in other words, for identifying structures that can “explain” the anomalies). You will undoubtedly be able to find other software, too. Finally, various useful tools and codes for data processing are available from https://github.com/NicolasFlorsch/geophysics.

3.2. Preparing a regular grid

A geophysical map represents the magnitude of a parameter (for example resistivity, a component of the magnetic field, temperature, etc.) measured over a 2D space. The coordinates are usually denoted (x,y), preferably (but not necessarily) with perpendicular axes. Prospecting is adapted to the constraints of the terrain, and the axes only point east for the x-axis and north for the y-axis if the configuration of the terrain allows it. When this happens, we can ipso facto put north at the top of the map.

3 See also https://www.youtube.com/watch?v=PdTlCNzCkg4. 4 Notice to developers. 5 https://octave.sourceforge.io/divand/.


The same value cannot appear twice in the same place: there cannot be two temperatures or two resistivities at one point. This defines a function, here of two variables, which we denote z = f(x,y). When z is the altitude, this is a topographic map6. If the measurement is made with a Slingram, it is a conductivity map, and so on. In general, the data consist of a set (or table) of points of the form:

Point number   x    y    z
1              x1   y1   z1
2              x2   y2   z2
3              x3   y3   z3
...            ...  ...  ...
N              xN   yN   zN

There can be several columns after the z column. For example, in magnetism we measure three components, namely east, north and vertical:

Point number   x    y    E    N    V
1              x1   y1   E1   N1   V1
2              x2   y2   E2   N2   V2
3              x3   y3   E3   N3   V3
...            ...  ...  ...  ...  ...
N              xN   yN   EN   NN   VN

Since spreadsheets have row numbering, it is not unusual for the point-number column to be absent. The file may or may not have a more or less detailed header; in this case, the header consists of the first line. Such a file often has the extension “.dat” and can be in ASCII (perfectly readable with any word processor) or in binary (a specific program is then required to read it).

6 It is possible to have three altitudes at the same place: this is the case for an overhanging cliff. But altitude is a geometric parameter, not a physical parameter. Two different physical quantities cannot be in the same place. Mathematically speaking, one should therefore not say that altitude is always a function of (x,y).


Maps should be drawn from such files in order to obtain a representation of the data. This is usually done with a contour representation, as for a topographic map, in color or gray scale; this is what we will discuss in the following section. A specific program will draw these curves, and its task is greatly facilitated if, instead of randomly placed points, it works from a regular grid7. Hence the quasi-general rule of going from an irregular grid to a regular grid. A regular grid file usually contains the coordinates of the corners (lower left and upper right), the numbers of points Nx along x and Ny along y, and the values of the function (or functions), for example line by line. For a regular grid, the coordinates of each point do not need to be kept in the file, since they are determined by the grid. This is shown in Figure 3.3.
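As a concrete illustration of this structure, here is what a tiny grid might look like in Surfer's ASCII grid format (the "DSAA" variant); the values below are invented, and real files are more often stored in Surfer's binary variant:

DSAA
3 2
0 10
0 5
12.1 15.4
12.1 13.0 14.2
12.8 15.4 13.5

The first line is the format tag, then come the numbers of points along x and y, the x range, the y range, the z range, and finally the values, one grid row per line.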

Figure 3.3. Interpolation of irregular data (points) and representation with a color scale. This map was obtained using a small program written with Octave, described below. These data were in fact obtained by extracting points from a regular grid, for this educational purpose, so as to mimic an irregular set of coordinates. In the field, one always tries to measure on a regular grid, and “irregularities” are usually due to places that were inaccessible for one reason or another. The data used are part of the files found at https://github.com/NicolasFlorsch/geophysics. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

7 In the past, “iso-values” were drawn and colored by hand and not by ready-made programs. In this case too, a regular grid facilitated the work.


Several situations can occur for .dat files:
– the data are randomly distributed;
– the data are on a regular grid, but with gaps;
– the data are on a regular grid, but with unsatisfactory x and y steps;
– the data are already on a regular grid, with suitable steps (but in .dat format).
In the latter case, one simply needs to change the file format. The first three cases require some calculation to produce a regular interpolated grid. However, interpolation (here in two dimensions) is a complex problem: it combines a choice (according to which criteria do we interpolate?) and calculation algorithms. For example, a frequently taught technique, polynomial interpolation, would be catastrophic if applied to arbitrary data: admittedly, the interpolated function passes through the initial points, but it very often oscillates dramatically between them. Interpolation is truly a science in itself. There are dozens of ways to calculate an interpolation, and just as many possible criteria. How to choose? Sometimes the nature of the data helps to decide, by requiring that the interpolator (method + program) respect physical properties specific to the measured signal. For example, in electrical prospecting there may be discontinuities (a fault crossing, for example), but these cannot exist for magnetic data measured a few decimeters above the ground (all this will be shown later): an interpolator for a magnetic map must provide a smooth result. Ultimately, this must be considered case by case, often by eye and to the satisfaction of the geophysicist, who will decide on the validity of a method. Note that we distinguish between “exact” interpolators (which pass through the original points) and “smoothing” interpolators, which do not necessarily pass through the original points, precisely because of a certain degree of smoothing. Let us mention some classic interpolators, bearing in mind that there are dozens of them.


1) Distance-weighted average interpolation

The idea here is that the value to interpolate is influenced by its neighbors, particularly those in close proximity. There is a Wikipedia page about this method, to which we refer the reader8.

2) Linear interpolation based on Delaunay triangles9

From irregularly distributed points we can always construct triangles, and then carry out a linear interpolation between the vertices of the triangle that contains the point to interpolate. A balanced way to build these triangles is the Delaunay method.

3) The method of minimum (integral) curvature, or “spline”, adapted to two dimensions10

Let us start with the one-dimensional spline. The term “spline” refers to a wooden stick (a narrow board) that is forced through given points, which then allows a very smooth curve to be drawn between these points (see the illustration on the cited website). In physics, the deformation energy (the energy required to bend the board) is minimal: any other shape would require more work than the spontaneous shape that passes through the points. This energy is also proportional to the integral of the board’s curvature (the curvature being the inverse of the radius of curvature). 1D spline interpolation uses exactly this principle, and it can be shown that the solution is a set of cubic (polynomial) functions, each going from one point to the next. This can be refined by bringing an additional tension into play (stretching the board, as if it were being pulled from each end). In 2D, things get a little more complicated mathematically; many codes use the Smith and Wessel (1990) process, which can easily be found on the Internet11.

4) Kriging

The 2D signal is considered to be a random signal with certain statistical spatial properties. This approach comes from probability theory and can be summed up as follows: at a given point, the interpolated value is the most likely value given the values of the neighbors AND the statistical properties of correlation between neighbors. The Wikipedia page12 says a lot more. It reads, for example: “under suitable assumptions on the priors, kriging gives the best linear unbiased prediction of the intermediate values”. This statement, which is correct, has led many practitioners and theorists to declare that all other methods are bad, which is simply not true: an estimator need not always be “linear unbiased”. In magnetic prospecting, for example, kriging is a poor estimator because it does not provide smooth solutions, whereas this is a property expected of the magnetic field. For the latter, the spline is better, and an interpolation based directly on the properties of the magnetic field (the 1/r3 law of sources in the subsoil, for example) is better still13, although more complex to implement. It would not be wrong to say that there are as many interpolation methods as there are digitizers or maps. Interpolation, like many signal processing choices, is based on a case-by-case analysis and on the geophysicist’s experience. Let us compare the few methods mentioned above:
– Inverse distance. Advantage: very easy to calculate. Disadvantage: creates “eyes” around certain points.
– Delaunay. Advantage: simple enough to calculate. Disadvantage: the triangles are visible in the interpolation.
– Minimum curvature. Advantage: smooth result. Disadvantage: requires a complex calculation, with the numerical resolution of a partial differential equation; the function must be constrained where data are missing, to avoid large deviations; long calculation time.
– Kriging. Advantage: statistical meaning (not always relevant in geophysics, however). Disadvantage: requires statistical concepts, such as the variogram14; long calculation time.

8 https://en.wikipedia.org/wiki/Inverse_distance_weighting.
9 https://en.wikipedia.org/wiki/Delaunay_triangulation.
10 https://en.wikipedia.org/wiki/Spline_(mathematics).
11 We downloaded the original document from http://cosmos.hwr.arizona.edu/Technical/Smith-Wessel-1990.pdf.
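To experiment with some of these interpolators directly in Octave, the griddata function used in the program shown later in this chapter accepts several methods. The sketch below reuses the same data file; the method names are Octave's (not Surfer's), and the "v4" (biharmonic spline) method may not be available in every Octave version.

% Comparing a few of griddata's interpolation methods (sketch)
data = dlmread('irregular.dat', ',');           % same data file as in the program below
x = data(:,1); y = data(:,2); z = data(:,3);
[XI, YI] = meshgrid(linspace(min(x), max(x), 101), linspace(min(y), max(y), 101));
Zn = griddata(x, y, z, XI, YI, 'nearest');      % blocky
Zl = griddata(x, y, z, XI, YI, 'linear');       % Delaunay-based, triangles may show
Zs = griddata(x, y, z, XI, YI, 'v4');           % biharmonic spline, smooth but slow
subplot(1,3,1); pcolor(XI, YI, Zn); shading flat; title('nearest');
subplot(1,3,2); pcolor(XI, YI, Zl); shading flat; title('linear');
subplot(1,3,3); pcolor(XI, YI, Zs); shading flat; title('v4 (spline)');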

Let us compare these four methods graphically, as shown in Figure 3.4.

12 https://en.wikipedia.org/wiki/Kriging.
13 To say “better”, you need a criterion. Here the criterion would be: “sources of the magnetic field exist such that the calculated values coincide with the actual observed values, with only a small error”.
14 https://en.wikipedia.org/wiki/Variogram.


Figure 3.4. Comparison of four classical interpolators. This is a magnetic map made from a 1 m x 1 m grid, interpolated to 0.5 m x 0.25 m. The simplest method, inverse distance weighting, yields a poor result. Triangle-based interpolation introduces strong angularities. Kriging is much better, but requires an intermediate statistical analysis not described here (see the literature, particularly https://en.wikipedia.org/wiki/Kriging); small angular artifacts still remain, which are not realistic for a magnetic map that must, in essence, be very smooth. Finally, the “spline” or minimum integral curvature interpolation provides the best result. These maps were made with the commercial software Surfer. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

These maps were obtained using Surfer, from known data on a regular grid. These are the same data as in Figure 3.3 but plotted here from the original regular data. Let us look in detail at the small Octave program that allowed this representation to be achieved.


clear all
NX = 101; NY = 101;                   % here we decide to grid in 101 x 101 points
data = dlmread('irregular.dat', ','); % read data x,y,z with comma separator
x = data(:,1); y = data(:,2); z = data(:,3);
XMIN = floor(min(x)); XMAX = ceil(max(x));
YMIN = floor(min(y)); YMAX = ceil(max(y));
% calculating the grid
TX = linspace(XMIN, XMAX, NX); TY = linspace(YMIN, YMAX, NY);
% the lines above and below generate a grid
[XI, YI] = meshgrid(TX, TY);
ZI = griddata(x, y, z, XI, YI);       % interpolation of the data on the regular grid
pcolor(XI, YI, ZI), hold on           % draws the map with small rectangles
shading flat                          % avoids the contouring of small rectangles
colorbar                              % sets the color scale
axis('equal')                         % puts both axes on the same scale
toto = rainbow(64);                   % takes the rainbow scale
% but I reverse it in the following 3 lines: I prefer going from blue to red...
for i = 1:64
  rainbow2(i,:) = toto(65-i,:);
end
colormap(rainbow2)                    % choice of color scale
plot(x, y, '*k'), hold on             % puts a black * at the coordinates of the data
plot(x, y, 'ow'), hold off            % puts a white o at the coordinates of the data
xlabel('x')
ylabel('y')
title('Interpolation of magnetic data')
% all the following lines put the interpolated file in .grd format
max(max(ZI))
min(min(ZI))
for i = 1:NX
  for j = 1:NY
    if (isnan(ZI(i,j)) == 1)
      ZI(i,j) = 1.70141e38;           % Surfer "blank" value for missing nodes
    end
  end
end
grd_write(ZI, XMIN, XMAX, YMIN, YMAX, 'GRD.grd')  % or any other .grd name

The reader, even a non-programmer reader, will easily be able to adapt this code to their needs. Note the use of the grd_write routine, which was


written in a participative framework linked to Matlab (remember that Octave is compatible with Matlab syntax): https://fr.mathworks.com/matlabcentral/fileexchange/20880-surfer-grid-importexport. Note the adaptation made for the “missing values” (which often occur with irregular grids) in the seven lines of the program preceding the last line. Things are not always so simple with Octave, though, as it is less robust and less powerful than Matlab, whose syntax it follows as best it can. To check the colors of our geophysical maps, we recommend reading http://cresspahl.blogspot.fr/2012/03/expanded-control-of-octaves-colormap.html. Note that the website https://fr.mathworks.com/matlabcentral/fileexchange/20880-surfer-grid-importexport provides routines to exchange data between the Surfer format (used even without this software) and Matlab or Octave.

3.3. Representation with contour lines: an algorithm problem

Although determining the color of a pixel by interpolation is easy, constructing the contour lines sensu stricto is more difficult. Indeed, a contour line is a curve with the equation:

f(x,y) = Ck

with Ck being the kth level chosen by the operator. This equation can be put in differential form by writing df(x,y) = 0, which is the same as writing:

(∂f/∂x) dx + (∂f/∂y) dy = 0


Such a differential equation can be solved through methods such as Runge-Kutta, for example, but it remains a delicate process: on the one hand because the curves close on themselves (it is then not possible to obtain a function of the form y = g(x), or only piecewise, which must be managed), and on the other hand because f is not known continuously, but only on the grid of points. Another difficulty is that the curves stop at the edges of the area. This is certainly one reason why free software offering such a function is hard to find. One can cite, for example, http://www.galiander.ca/quikgrid/. Here is what this program gives after reading the irregular data file (Figure 3.5):

Figure 3.5. The QuikGrid software is very easy to use. The grid calculation is automated according to the resolution of the raw map (via a statistic on the points). The data are the same as for Figure 3.3 and are available at https://github.com/NicolasFlorsch/geophysics. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

Let us also mention http://surfit.sourceforge.net/. Programs that run online are also available, such as https://www.esurveycad.com/Online/generate_contour_map_help.


Surfer is the most attractive paid software. Less sophisticated software, such as http://3dfmaps.com/, is available for approximately 100 dollars.

3.4. Artifacts on maps

3.4.1. Profile effects

Profile effects appear as soon as unintended variations affect the measurements. They can affect all kinds of measurements, even point-by-point measurements, for the simple reason that measurements are always made along profiles (and all these profiles make up a map). An example of a map with profile effects is shown in Figure 3.6.

Figure 3.6. Example of profile effects (left). On the right is the processed map. Note that assembling four 25 m x 50 m panels leads to imperfections at the junctions. As these are Slingram measurements, the drift is mainly thermal (from the electronics and from deformation of the chassis) and is all the more inconvenient in the presence of low conductivity. The outlined rectangle is the one used to illustrate the profile effect elimination technique discussed below. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip


The processing must be carried out separately on each of the 25 m × 50 m panels, which were surveyed at different times. To remove these profile effects, we calculate the average of each profile, here acquired at constant x. Let us consider, for example, the panel for x = 25–50 and y = 0–50 (bottom right part). The mean is taken over y for each x value. Then, by a process chosen by the operator, the corresponding curve is filtered: a low-degree polynomial fit, a moving average or a median filter over 3 or 5 points. Figure 3.7 shows the average curve (say, M(x)) and the filtered curve (say, F(x)). The map is then processed so that each profile at a given x (say, A(x)) is replaced by the corrected profile C(x) = A(x) – M(x) + F(x). In other words, a smooth function is substituted for an irregular one.

Figure 3.7. The continuous curve shows the average of the profiles as a function of x (averaged over y) for the area contained within the rectangle on the map of Figure 3.6. The small circles show the result of a 5-point median filtering15. Any other smoothing process is valid; we just have to check the result

15 For five values a1, a2, a3, a4 and a5, this amounts to ordering these values and retaining the central one, which has two higher values and two lower values. For example, the median of (1,4,3,6,7) is 4.
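A sketch of this correction in Octave is given below. It assumes that the panel has been loaded as a matrix Z in which each column is one profile acquired at constant x; the function name and this layout are our own choices, not those of a particular package.

function Zc = remove_profile_effect(Z)
  % Profile-effect removal: C = A - M + F, as described above
  M = mean(Z, 1);                  % mean of each profile (average over y)
  F = M;
  n = numel(M);
  for k = 3:n-2                    % 5-point running median of the means
    F(k) = median(M(k-2:k+2));
  end                              % (the two end values on each side are kept as is)
  Zc = Z - repmat(M, size(Z,1), 1) + repmat(F, size(Z,1), 1);
end

Calling Zc = remove_profile_effect(Z) then replaces each profile A(x) by A(x) − M(x) + F(x).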


3.4.2. Chevron effect

This artifact was mentioned in Chapter 1, section 1.3.5. Suppose the profiles are run along y (with x constant). The simplest procedure is to average all the rising profiles on one side and all the descending profiles on the other. The cross-correlation of the two mean curves peaks at the offset. The positions are then corrected by translation along y, by half an offset downwards (toward decreasing y) for the rising profiles and by half an offset in the opposite direction for the descending profiles. As this operation is done map by map, it is quite possible to estimate the offset from the two average curves without going through the cross-correlation: this can be done simply by looking at these averages. We have provided a program on GitHub.

3.4.3. Regional

A very useful software package here is Magpick. It has been made available by the manufacturer Geometrics and can be found at https://sourceforge.net/projects/magpick/. It allows various operations to be carried out on a geophysical map (in .grd format): reduction to the pole, continuations and regional processing. It also allows contour lines to be drawn, in addition to filling the maps with color, and right-clicking on the map displays the color scale. “Regional” refers to the content of a map at long spatial wavelengths. Removing it is only relevant if one seeks to “detach” – or, more accurately, to separate – the small anomalies from the large ones, by which we mean the small from the large spatial wavelengths. There are many ways to achieve this: some go through the Fourier transform, others through a two-dimensional polynomial fit of rather low degree (rarely more than 3). Let us look at the example in Figure 3.8. The effect of removing the regional is an improved shape of the anomalies: the largest of them sees its negative pole close in on itself again, as it should in principle.


Figure 3.8. The raw map shows a “regional”, with overall lower values to the left and higher values to the right. On the bottom map, we removed a regional in the form of a polynomial of two variables (x,y) of degree 2 (therefore with 6 coefficients: P(x,y) = a + bx + cy + dx2 + ey2 + fxy), using the “background” function, which can be found in the “operations” tab of the Magpick program. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip
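The same kind of regional removal can be reproduced in Octave with a least-squares fit of the six-coefficient polynomial given in the caption. The sketch below assumes that XI, YI and ZI are the matrices of the regular grid obtained earlier; it is our own illustration, not Magpick's actual algorithm.

% Fit P(x,y) = a + b*x + c*y + d*x^2 + e*y^2 + f*x*y and subtract it
xv = XI(:); yv = YI(:); zv = ZI(:);
ok = ~isnan(zv);                                   % ignore blanked nodes, if any
A = [ones(sum(ok),1) xv(ok) yv(ok) xv(ok).^2 yv(ok).^2 xv(ok).*yv(ok)];
coef = A \ zv(ok);                                 % least-squares estimate of a..f
B = [ones(numel(xv),1) xv yv xv.^2 yv.^2 xv.*yv];
regional = reshape(B*coef, size(ZI));
Zres = ZI - regional;                              % map with the regional removed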

3.5. Reduction to the pole for magnetic maps

In section 1.3.5, on magnetism, we saw how an anomaly of a given type varies with latitude. Obviously, reading an anomaly at a pole is easier than reading an anomaly at an intermediate latitude. In particular, at the pole the peak of the anomaly lies directly above the dipole that generates it, which is not the case at other latitudes.


Actually, it turns out that the information carried by an anomaly at almost any latitude allows us to calculate what we would have measured had we been at the pole (however, this is not quite the case for an anomaly at the Equator, where a small part of the information is lost). We will not dive into the whole theory here, as it is quite math-heavy. Blakely’s (1995) book provides the relevant elements for all conventional operations and much more; it is rather addressed to geophysicist experts. However, let us reduce our anomaly to the pole using Magpick. To do this, we must indicate where north is. The program, whose reduction-to-the-pole window is shown in Figure 3.9, is imprecise on this point: north should be given relative to the (-x) axis, which here gives 180 + 45 = 225°.

Figure 3.9. Magpick menu for reduction to the pole. The inclination (here, 64°) and the north direction must be entered, as shown at the bottom of the figure


Figure 3.10 shows the map reduced to the pole.

Figure 3.10. Map reduced to the pole: it is no longer relevant to add the north arrow (there is no north at the North Pole). The maxima of the anomalies are repositioned slightly toward the north. The strong negative anomaly remains, suggesting that remanent magnetization is present. This area has since been excavated: both anomalies are due to 19th Century stoves buried about 1.5 m underground. For a color version of this figure, see www.iste.co.uk/florsch/geophysics2.zip

3.6. Other operations

Other map operations are sometimes useful, such as upward continuation (calculating what the field would have been if measured higher up) or downward continuation (which quickly becomes unstable). Other operations are in fashion, such as the “analytic signal” or “Euler deconvolution”. Although these are interesting to explore, one must know what to look for: one will not be able to “bring out” a signal that is not there in the first place.
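Upward continuation is easy to sketch in the Fourier domain: the spectrum of the gridded field is multiplied by exp(−|k|Δz). The function below is only a sketch (the name is ours); the grid must contain no blanks, and in practice it should be demeaned and padded to limit edge effects.

function Zup = upward_continuation(Z, dx, dy, dz)
  % Upward continuation of a gridded potential field by a height dz (meters)
  [ny, nx] = size(Z);
  ix = 0:nx-1; ix(ix >= nx/2) = ix(ix >= nx/2) - nx;   % FFT frequency ordering
  iy = 0:ny-1; iy(iy >= ny/2) = iy(iy >= ny/2) - ny;
  kx = 2*pi*ix/(nx*dx);
  ky = 2*pi*iy/(ny*dy);
  [KX, KY] = meshgrid(kx, ky);
  K = sqrt(KX.^2 + KY.^2);
  Zup = real(ifft2(fft2(Z) .* exp(-K*dz)));
end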

References

Further reading

BLAKELY R.J., Potential Theory in Gravity and Magnetic Applications, Cambridge University Press, Cambridge, UK, 1996.
BURGER H.R., SHEEHAN A.F., JONES C.H., Introduction to Applied Geophysics: Exploring the Shallow Subsurface, W. W. Norton & Company, New York, USA, 2006.
EVERETT M.E., Near-surface Applied Geophysics, Cambridge University Press, Cambridge, UK, 2013.
HINZE W.J., VON FRESE R.R.B., SAAD A.H.S., Gravity and Magnetic Exploration, Cambridge University Press, Cambridge, UK, 2013.
KIRSCH R., Groundwater Geophysics, Springer, Heidelberg, Germany, 2008.
MILSOM J., ERIKSEN A., Field Geophysics, 4th ed., Wiley, Oxford, UK, 2011.
REYNOLDS J.M., An Introduction to Applied and Environmental Geophysics, Wiley, Oxford, UK, 1997.
RUBIN Y., HUBBARD S., Hydrogeophysics, Springer, Heidelberg, Germany, 2005.
TELFORD W.M., GELDART L.P., SHERIFF R.E., Applied Geophysics, 2nd ed., Cambridge University Press, Cambridge, UK, 1990.
WITTEN A.J., Handbook of Geophysics and Archaeology, Routledge, Abingdon-on-Thames, UK, 2005.

Index

A
aliasing, 36, 59, 64
anomaly, 2, 17, 18, 22, 24–31, 35, 36, 38–43, 45–53, 56
apparent conductivity, 99–102, 106

C
calibration, 37, 57, 58, 85, 86, 89–92, 104, 107, 108, 125, 126, 128
contour lines, 130, 131, 141, 145
Curie's temperature, 13, 22

D, E
data acquisition, 128
Delaunay triangles, 137, 138
dipole, 8, 9, 13, 22–27, 45
distance-weighted average interpolation, 137
eddy currents, 93–96, 98, 100, 125
effect
  chevron, 145
  profile, 143, 144

F, G
fluxgate, 25, 26, 32, 37–42, 46, 47, 53–62, 69, 81, 84–86, 91
gradiometer, 15, 26, 31, 34, 41, 43, 47, 55, 56
grid, 132–142

H, I, K
hysteresis cycle, 19
interpolation, 135–141
inversion, 109, 111, 119
Kriging, 137–139

M, P
magnet, 1, 5, 7–10, 13, 14, 19, 23, 53
magnetic
  field, 2, 6–15, 18, 23, 24, 33, 34, 39, 55, 57–59, 62, 69, 76, 87, 89, 92
  induction, 11–13, 19, 21–23
  moment, 9, 11–13, 19, 21, 23, 26
  permeability, 11
  susceptibility, 11, 12, 17
magnetite, 1, 13, 16–18
magnetization
  remanent, 19, 21, 22, 148
magnetometer, 3, 8, 15, 24, 29, 35–37, 41, 42, 44, 48, 53–55, 59, 61, 81, 84, 85, 88, 90, 91
Maxwell, James Clerk
  –Ampère's equation, 96
  –Faraday's equation, 96
  right-hand rule, 7
minimum curve, 138
profile, 143, 144

R, S, T
receiver, 98, 114, 116, 119–123, 125–127
reduction to the pole, 132, 145–147
regional, 130, 145, 146
safety, 18
spline, 137–139
time variations, 33, 34, 37, 39, 42
transmitter, 95–98, 114–116, 118–122, 125–127