Theory of Sampling and Sampling Practice, Third Edition [3 ed.] 113847648X, 9781138476486

A step-by-step guide for anyone challenged by the many subtleties of sampling particulate materials.


English · 694 pages [727] · 2019


Table of contents:
Cover......Page 1
Half Title......Page 2
Title Page......Page 4
Copyright Page......Page 5
Table of Contents......Page 6
Foreword......Page 28
Preface......Page 30
The Author......Page 32
Introduction......Page 34
Historical Summary......Page 37
Subdivisions of the Text......Page 40
1.1 Basic Terms......Page 44
1.2.1 Latin Letters......Page 58
1.3 The Word Error versus the Word Uncertainty Controversy......Page 59
1.3.1 Introduction......Page 60
1.3.3 Jumping from Uncertainty to Error......Page 61
1.3.4 Sampling Correctness: The Mandatory Path to Predictable Uncertainty......Page 62
2.1 Structural Property......Page 64
2.2 Circumstantial Property......Page 65
2.3.2 The Consequences......Page 66
2.5 Relationship between Correctness and Accuracy......Page 68
2.7 Disadvantages and Risks of Controlling Accuracy......Page 69
2.8 Conclusions......Page 70
Part II: Fundamental Statistical Concepts Used in the Theory of Sampling......Page 72
3. Fundamental Statistical Concepts......Page 74
3.1 Notion of Probability......Page 76
3.2.1 Notion of Random Variable......Page 77
3.2.4 Graphic Representation of a Probability Law (Discrete Variable)......Page 78
3.4.1 The Arithmetic Average x of a Discrete Set Is Defined as......Page 80
3.4.8 Dispersion Parameters......Page 81
3.5.1 Permutations......Page 84
3.6 The Gaussian Model: The Normal Probability Distribution......Page 85
3.7 The Binomial Model......Page 86
3.8.2 Poisson Processes......Page 87
3.10 Random and Systematic Errors......Page 92
3.10.2 Notion of Accuracy......Page 93
3.10.4 Graphic Representation of the Notion of Accuracy and Precision......Page 94
3.11 Summation of Random and Systematic Errors......Page 97
4. A Logical Introduction to the Components of the Overall Estimation Error......Page 98
4.2 Distribution Heterogeneity......Page 99
4.3.2 Continuity or Discontinuity of the Selected Model Representing the Lot......Page 100
4.4 Heterogeneity of a Zero-Dimensional Lot......Page 101
4.5 Heterogeneity of a One-Dimensional Lot......Page 102
4.7 Heterogeneity of Three-Dimensional Lots......Page 103
4.9 Materialization of the Sampling Operation: The Increment Materialization Error......Page 104
4.11 The Total Sampling Error......Page 105
4.13 The Overall Estimation Error......Page 106
Part III: Heterogeneity and Homogeneity......Page 110
5. A Logical Introduction to the Notion of Heterogeneity......Page 112
5.1.1 Constitution Heterogeneity......Page 113
5.1.2 Distribution Heterogeneity......Page 114
5.2.1 Number of Dimensions of the Selected Model Intended to Represent a Lot......Page 115
5.2.2 Continuity or Discontinuity of the Selected Model......Page 116
5.3.1 Functional Relation......Page 117
5.4.2 One-Dimensional and Continuous Lots......Page 118
5.4.3 Two- and Three-Dimensional Lots......Page 119
6.1 Introduction......Page 120
6.2 Definitions and Basic Relationships......Page 121
6.3 Population of Specified Units: Population of Fragments......Page 122
6.3.1 Heterogeneity Carried by a Fragment within the Lot......Page 123
6.3.3 Variance of the Heterogeneities Carried by the Fragments of a Lot......Page 124
6.3.6 The Intrinsic Heterogeneity of the Fragments Making Up the Lot......Page 125
6.3.8 Respective Properties of CHL and IHL......Page 126
6.4.2 Heterogeneity Carried by a Fragment within a Group of Fragments......Page 127
6.4.4 Definition of the Distribution Heterogeneity of a Lot......Page 128
6.4.5 Relationship between Constitution and Distribution Heterogeneities......Page 129
6.4.7 Definition of the Distribution Homogeneity......Page 130
6.4.8 Natural Distribution Homogeneity within a Lot......Page 131
6.4.9 Different Kinds of Natural Distribution Homogeneity......Page 133
6.4.11 Maximum Distribution Heterogeneity within a Lot......Page 135
6.4.12 Definition and Properties of the Grouping Factor......Page 136
6.4.13 Definition and Properties of the Segregation Factor......Page 137
6.4.15 Effect of the Size Distribution of the Fragments on the Value of the Constitution Heterogeneity......Page 138
6.4.16 General Expression of the Distribution Heterogeneity......Page 139
6.4.17 Illustration of the Definition of Heterogeneity......Page 140
6.5 Transformation of a Set of Units with Two Descriptors into an Equivalent Set of Units with One Descriptor......Page 142
6.6 Practical Use of the Proposed Definitions of the Heterogeneity......Page 143
6.7 Practical Example of the Calculation of Heterogeneity Characteristics in a Counted Population......Page 144
7.1 Introduction......Page 148
7.2.1 Definition of the Heterogeneity Carried by the Unit Um......Page 149
7.2.2 Characterization of a Chronological Series......Page 150
7.4.2 Basic Definition of the Semi-Variogram......Page 151
7.5.1 Problem Associated with the Central Values of the Chronological Series......Page 153
7.5.3 Problem Associated with the Main Characteristics of the Processing Stream......Page 154
7.7 Definition of the Relative Variogram......Page 155
7.8 Relevance of the Chronological Order of the Units......Page 157
7.9.1 Description of the Heterogeneity of a One-Dimensional Lot in Terms of a Variogram......Page 160
7.9.3 The Long-Range Heterogeneity Fluctuation......Page 161
7.9.4 The Periodic Heterogeneity Fluctuation......Page 162
7.9.6 Properties of the Variogram before and beyond the Range......Page 163
7.9.7 Area of Influence of One Increment......Page 164
7.10 Auxiliary Functions of the Variogram......Page 165
7.10.1 The First Order Average Integral of the Variogram......Page 166
7.10.2 The Second Order Average Integral of the Variogram......Page 167
7.10.3 The Moving Average of the Variogram......Page 168
7.11.1 Definition of Error Generators......Page 169
7.12.1 Graphical Integration of a Variogram......Page 170
7.12.2 Significance of the Term V(0)......Page 172
7.12.3 Practical Estimation of the Term V(0) Using a Separate Experiment......Page 173
7.12.5 Point-by-Point Calculation of the Second Order Average Integral W’(j)......Page 175
7.12.6 Calculation of the Heterogeneity Fluctuation Error......Page 176
7.13.2 Investigation of the Continuous Term of V(j)......Page 177
7.14 Practical Interest of a Variographic Experiment in Quality Control......Page 181
7.14.2 Average Estimates versus Instantaneous Estimates......Page 183
7.15 Practical Applications of the Auxiliary Functions of the Variogram......Page 184
Part IV: Sampling Errors Introduced by Various Forms of Heterogeneity......Page 186
8.1 Theoretical Use of the One-Dimensional Model......Page 188
8.3.1 Definition of the Real Lot L......Page 189
8.3.2 Definition of an Imaginary Lot L’......Page 190
8.4 Characterization of the Heterogeneity of a One-Dimensional Lot......Page 191
8.6 Characterization of the Sample......Page 192
8.7 The Continuous Heterogeneity Fluctuation Error HFE......Page 193
8.8.3 Variance of HFE for Strict Random Sampling......Page 194
8.9 Components of the Continuous Heterogeneity Fluctuation Error HFE......Page 195
8.10.1 Properties of the Average m(HFE1)......Page 196
8.10.2 Properties of the Variance s2(HFE1)......Page 197
8.10.4 Minimization of the Variance s2(HFE1)......Page 198
8.10.5 The Two Components of the Error HFE1......Page 199
8.11 The Continuous Nonrandom Term HFE2......Page 200
8.11.1 Properties of the Average m(HFE2)......Page 202
8.11.2 Properties of the Variance s2(HFE2)......Page 203
8.11.3 Cancellation of the Variance s2(HFE2)......Page 204
8.11.4 Minimization of the Variance s2(HFE2)......Page 205
8.12.2 Frequency and Origin of Periodic Fluctuations......Page 206
8.12.3 From the Heterogeneity h3(t) to the Term HFE3......Page 207
8.12.4 Properties of the Moments of the Error HFE3......Page 208
8.12.5 Practical Recommendations......Page 210
9.2 Notations......Page 212
9.3.2 Distribution of the Frequency Πm of the Unit Um in the Set Z......Page 213
9.3.5 Weight MZ of the Set Z......Page 214
9.3.7 Distribution of the Total Weight ASk of the Component of Interest in the Sample Sk......Page 215
9.4.3 Distribution Law of aSk......Page 216
9.4.4 Moments of the Critical Content: The Theoretical Approach......Page 219
9.4.5 Moments of aSk and SSE......Page 221
9.6 The Correct Probabilistic Model......Page 222
9.7.1 Small Scale Heterogeneity......Page 223
9.7.2 Comparison of SSE from the Discrete Model with HFE1 from the Continuous Model......Page 224
9.7.3 Logical Analysis of the Discontinuous Heterogeneity Fluctuation Error HFE1......Page 226
10. The Fundamental Sampling Error......Page 230
10.1 Definition of the Fragment Shape Factor......Page 233
10.2 Estimation of the Fragment Shape Factor......Page 234
10.4 Definition of the Maximum Fragment Size......Page 235
10.5 Definition of the Mineralogical Factor......Page 236
10.5.2 Calculation of the Mineralogical Factor......Page 237
10.7.1 Calculation of the Liberation Factor through the Notion of Constitution Heterogeneity......Page 239
10.7.2 Calculation of the Liberation Factor Using the Notion of Liberation Size......Page 243
10.9 Definition of the Fundamental Sampling Error FSE......Page 244
11.1 Minimum and Maximum of the Fundamental Sampling Error......Page 246
11.2 Construction of Sampling Nomographs......Page 247
11.3 Revisiting the Determination of the Liberation Factor......Page 249
11.4.1 General Approach......Page 250
11.4.2 Example of General Method Applied to a Gold Deposit......Page 251
11.5 Cases Where the Material Has Been Crushed Close to the Liberation Size of the Constituent of Interest......Page 258
11.6 Cases Where a Constituent of Interest Is Associated with Another Major Mineral......Page 260
11.7 Sampling for Fragment Size Distribution Analysis......Page 261
11.8.1 The Total Allotted Variance......Page 264
11.8.2 A Logical, Economic Distribution of the Total Allotted Variance......Page 266
12.1 Introduction......Page 268
12.2 Cardinal Rule #1 in Sampling......Page 270
12.3 Cardinal Rule #2 in Sampling......Page 271
12.4 Cardinal Rule #3 in Sampling: Using a Logical Strategy......Page 273
12.4.3 The Mineralogist to the Rescue......Page 274
12.4.6 A Logical Flow Sheet to Perform Heterogeneity Tests......Page 275
12.4.7 Calculating the Necessary Sample Weight Ms Instead of the Variance of FSE......Page 276
13. The Grouping and Segregation Error......Page 278
13.1.2 Conditions for the Cancellation of the Variance s2(GSE)......Page 279
13.1.3 Conditions for the Minimization of the Variance s2(GSE)......Page 280
13.2 A Logical Solution to Problems Generated by the Variance of the Grouping and Segregation Error......Page 282
13.3 The Transient Nature of Segregation......Page 283
13.3.2 Segregation Introduced at the Discharge Point of a Laboratory Blender......Page 284
13.4 The Relativity of Segregation......Page 285
13.6 Segregation because of Fragment Size Heterogeneity......Page 286
13.7 Segregation because of Fragment Shape Heterogeneity......Page 287
13.9 Segregation because of Vibrations......Page 289
13.10 Other Causes for Segregation......Page 292
Part V: Integration of Visman and Ingamells’ Works into the Theory of Sampling......Page 294
14.2.1 A Simple, Pragmatic Observation......Page 296
14.2.2 The Poisson Model: The Law of Small Probabilities......Page 297
14.2.4 A Simple, but Useful Exercise......Page 298
14.2.5 Additivity of Poisson Processes......Page 299
14.3 Visman’s Sampling Equation......Page 300
14.4 A Very Useful Visman’s Experiment......Page 301
14.4.1 Discussion about the Experimental Variances......Page 303
14.5 Ingamells’ Most Probable Result......Page 305
14.6 Ingamells’ Gangue Concentration......Page 307
14.6.1 Discussion about the Low Background Content......Page 308
14.7 Ingamells’ Optimum Sample Mass......Page 309
14.8 Ingamells’ Minimum Sample Mass......Page 310
14.10.2 The Variance s2opt Taking into Account the Optimum Sample Mass......Page 311
14.11 Case Study: A Sampling Diagram in a Nickel-Cobalt Deposit......Page 312
14.11.3 Calculation of the Most Probable Result......Page 313
14.11.4 Calculation of Standard Deviation of the Fundamental Sampling Error......Page 314
14.11.5 Calculation of the Standard Deviation Taking into Account the Optimum Sample Mass......Page 315
14.11.7 Compositing Vertically......Page 316
15.2 Scope......Page 318
15.4.1 Mineralogical and Microscopic Observations......Page 319
15.4.4 Quantifying the Fundamental Sampling Error......Page 320
15.4.6 The Challenges of Reality......Page 321
15.5.3 Poisson Processes......Page 322
15.6 Case Study #1: Estimation of the Iron Content in High-Purity Ammonium Paratungstate......Page 323
15.6.1 Investigation of the Histogram......Page 325
15.6.2 Discussion of Acceptable Maximum for the Standard Deviation of the FSE......Page 327
15.6.3 Visman Sampling Equation......Page 328
15.6.4 The Most Probable Result......Page 329
15.7 Case Study #2: Poisson Processes in a Gold Deposit......Page 330
15.8 Summarizing All the Information in a Sampling Diagram......Page 334
15.9 How a Double Poisson Process May Take Place......Page 336
15.10.3 Case #3: Maximum Gold Particle Size between 200 and 1000 μm......Page 337
16.2 Discussions, Conclusions and Recommendations for Future Work......Page 338
16.2.2 Difficult Cases Where Good Prevention Is, in Appearance, Not Realistic......Page 339
16.3 Visman and Ingamells’ Works Help Justify an Augmented Gy’s Approach......Page 340
Part VI: The In-Situ Nugget Effect: A Major Component of the Random Term of a Variogram......Page 342
17.1 Summary......Page 344
17.4.1 A Special In-Situ Structural Case......Page 345
17.4.2 Nugget Effect and In-Situ Nugget Effect......Page 347
17.4.4 Components of the Variographic Random Variance......Page 348
17.5.1 Revisiting Gy’s Formulas......Page 350
17.5.2 Estimation of the Maximum Size of Mineral of Interest Particles or Their Cluster Equivalents......Page 352
17.6 The Mean of the In-Situ Nugget Effect and Local Biases......Page 355
17.7 The Low-Background Constituent of Interest Content......Page 356
17.7.1 Estimation of the Low Background Content of the Constituent of Interest......Page 357
17.7.2 The Most Probable Mineral of Interest Estimated Content......Page 358
17.8.1 Calculation of the Necessary Sample Mass......Page 359
17.9.1 Calculation of the Necessary Sample Mass......Page 360
17.10.1 Calculation of the Necessary Sample Mass......Page 361
17.11 Consequences of a Large In-Situ Nugget Effect......Page 362
17.13 Recommendations......Page 363
Introduction......Page 366
18. The Increment Materialization Error......Page 370
18.2.1 Purposive Sampling......Page 373
18.2.4 Authoritative Sampling......Page 374
18.3 Probabilistic Sampling of Movable Lots......Page 375
18.5 Analysis of the Increment Sampling Process......Page 376
18.6 Analysis of the Splitting Process......Page 378
18.7 Comparison of the Increment Process with the Splitting Process......Page 379
18.8 Introduction to a Group of Models of the Sampling Process......Page 380
18.9.1 Continuous Model of the Material to Be Sampled......Page 382
18.9.2 Degenerated Models of the Lot L......Page 383
18.9.3 The One-Dimensional Temporal Model of Flowing Streams......Page 384
18.9.4 Punctual, Extended, and Fragmental Functions......Page 385
18.9.6 Continuous Model of the Increment Sampling Process......Page 386
18.10 The Discrete Model of the Lot L......Page 387
18.11.2 Definition of the Model Extended Increment Ip......Page 388
18.11.3 Definition of the Actual Extended Increment I’v: Increment Delimitation Error IDE......Page 390
18.11.5 Definition of the Actual Fragmental Increment I'f: Increment Extraction Error IEE......Page 392
18.11.6 Definition of the Prepared Fragmental Increment I''f: Increment Preparation Error IPE......Page 393
18.11.7 Recapitulation of the Increment Materialization Error IME......Page 395
19.1.1 Definition of a Random Selection......Page 396
19.1.5 Probability for a Fragment Fi of the Lot L to Be Included in the Actual Fragmental Increment I'f......Page 397
19.2 Random Systematic Selection Mode......Page 398
19.3 Stratified Random Selection Mode......Page 399
19.5 Examples of Selection Modes Used in Practice......Page 401
20.1.1 A Critically Important Decision......Page 404
20.1.2 Beware of Paradigms......Page 405
20.1.3 Definition of Delimitation Correctness......Page 406
20.2.2 The Splitting Technique of Diamond Core Samples......Page 408
20.2.4 Sampling of Reverse Circulation Drilling Chips......Page 409
20.3 Recommendations for Ore Grade Control in Open Pit......Page 410
20.3.2 Drilling Several Benches at Once......Page 412
20.3.3 Delimitation Bias Introduced When Sampling the Blast-Hole Cuttings Pile Using a Tube......Page 413
20.3.4 Sampling the Blast-Hole Cuttings Pile Using a Radial Bucket......Page 414
20.3.5 Sampling the Blast-Hole Cuttings Pile Using Radial Channels......Page 415
20.3.7 Monitoring Blast-Holes Using Neutron Activation: The Future......Page 416
20.4.2 Channel Sampling in General......Page 419
20.4.4 Sampling at Draw-Points......Page 420
21.1.2 Taking Part of the Stream All the Time......Page 422
21.1.3 Taking Part of the Stream Part of the Time......Page 423
21.1.4 The Special Case of In-Stream Stationary Cutters......Page 426
21.1.5 Straight Path Cross-Stream Cutters......Page 428
21.1.6 Rotating-Path Cross-Stream Cutters......Page 434
21.1.8 Cross-Stream Flap Sampler......Page 436
21.1.10 Header Tanks......Page 439
21.1.11 Rotating Cutter Located under a Cone......Page 440
21.1.12 Rotating Sweepers......Page 443
22.1.2 Illustration of Correct and Incorrect Delimitation and Extraction......Page 446
22.1.3 Incorrect and Correct Use of the JIS Spoon......Page 447
22.1.5 The Use of a Correct Spoon for the Wrong Application......Page 448
22.2.1 Rotary Dividers......Page 449
22.3.1 True Splitting Devices – Riffle Splitters......Page 453
23.2.1 Drilling Orebodies with Diamond Coring Bits......Page 456
23.2.2 Drilling Orebodies with Tungsten Carbide Coring Bits......Page 458
23.2.3 Drilling Orebodies with Destructive Bits......Page 459
23.3 The Plucking Effect......Page 460
23.3.1 Modeling the Plucking Effect......Page 462
23.4.2 Recovery of the Former Sub-Drill......Page 464
23.4.5 Moving from Very Bad to Something Acceptable......Page 467
23.5 Channel Sampling of an Underground Face Development......Page 468
23.6.1 Sampling a Leaching Pad Using an Auger......Page 469
23.6.2 Sampling a Blasthole Pile Using an Auger......Page 470
24.2.1 Definitions and Notations......Page 472
24.2.2 Chronology of the Fragment F......Page 473
24.2.3 Chronology of the Leading Edge CL......Page 474
24.2.5 Collision between the Fragment F and the Cutter C......Page 475
24.2.6 Respective Positions of the Fragment F and the Leading Edge CL at the Instant tF3......Page 476
24.2.8 Respective Positions of the Fragment F and the Trailing Edge CT at the Instant tF3......Page 477
24.2.9 The Rebounding Rule and Definition of the Model Fragmental Increment......Page 478
24.3 Conditions of Extraction Correctness Relative to the Material to Be Sampled......Page 479
24.4 Conditions of Extraction Correctness Relative to the Cutter Characteristics......Page 481
24.4.3 Shapes of the Cutter Edges......Page 482
24.4.4 Problems with Inner Slopes at the Level of Cutter Edges......Page 484
24.4.7 Length of the Cutter Opening......Page 486
24.4.8 Inclination of the Cutter Opening......Page 487
24.4.9 Cutter Problems Associated with Overflowing......Page 490
24.4.10 The Appropriate Depth and General Design of the Cutter......Page 492
24.4.11 Width and Velocity of the Cutter: A Logical Qualitative Approach......Page 493
24.4.12 Cutter Width and Velocity – Rules of Extraction Correctness......Page 496
24.4.13 Critical Cutter Width W0......Page 497
24.4.14 The Peculiar Case of Very Large Flowrates for the Critical Cutter Width W0......Page 498
24.4.15 Critical Cutter Speed V0......Page 499
24.4.17 Guidelines for Rotating Cutters......Page 501
24.4.18 The Special Case of In-Stream Stationary Cutters and Probes......Page 504
25.1.1 Possibility of an Operating Bias......Page 506
25.1.3 Design of Riffle Splitters......Page 507
25.1.6 The Seven Cardinal Rules to Follow with Riffle Splitters......Page 508
25.3 Degenerated Splitting Devices – Reject Type Splitters......Page 509
25.4.1 Coning and Quartering......Page 511
25.4.2 Alternate Shoveling......Page 512
25.4.3 Fractional Shoveling......Page 513
25.4.4 Degenerated Splitting Methods......Page 514
25.5 The Special Case of a Rotating Cutter Travelling under a Deflecting Cone......Page 515
26. The Increment Preparation Errors and the Notion of Sample Integrity......Page 518
26.1.2 Contamination by Material Present in the Sampling Circuit......Page 519
26.1.4 Contamination by Corrosion......Page 521
26.2.1 Loss of Fines as Dust......Page 522
26.2.3 Loss of Specific Fractions of the Sample......Page 523
26.3.1 Errors Resulting from an Addition or Fixation......Page 524
26.4 Errors Resulting from the Alteration of the Physical Composition......Page 525
26.4.2 Subtraction or Destruction of a Critical Component......Page 526
26.6 New Technology Equipment to Favor......Page 527
26.7 Errors Resulting from Unintentional Mistakes......Page 528
26.9 Conclusions......Page 530
Part VIII: The Increment Weighting Error and the Weighing Error......Page 532
27.2.2 The Variance of IWE......Page 534
27.3.1 Sampling Systems Installed under the Discharge of a Flowing Stream......Page 535
27.3.3 Subsampling at the Laboratory......Page 536
27.4.1 Definition and Purpose......Page 537
27.4.3 Estimation of the Lot Mass and Discussion of Assumptions......Page 538
27.5.1 Important Factors Affecting the Choice of a Proportional Sampler......Page 539
27.5.2 Required Characteristics of a Proportional Sampler......Page 541
27.5.4 Reliability of Proportional Sampling......Page 542
27.6 Conclusion......Page 543
28.1 Comparing Two Weightometers with a Stockpile between Them......Page 544
28.3 Correct Installation of a Weightometer......Page 545
Part IX: Review of Some Notorious Sampling Problems......Page 550
29.2.2 Moisture in Materials of Mineral Origin......Page 552
29.3.1 Moisture Content of a Material as Received......Page 553
29.4 Determination of the Moisture Content for Itself......Page 554
29.5.1 The Chemical Analysis Cannot Be Performed on Wet Materials......Page 555
29.5.2 The Drying Step Cannot Be Performed on the Entire Lot......Page 556
29.5.5 Accuracy of the Estimation of the Tonnage of the Constituent of Interest A......Page 557
29.5.6 Carrying the Condition of Accuracy into Effect......Page 558
29.5.7 Classical Method: Two Primary Samples and Two Drying Stages......Page 559
29.5.8 The Simultaneous Drying Method......Page 561
29.5.9 Method of the Single Sample......Page 562
29.6 Typical Difficulties Encountered during the Selection of a Sampling Weight Base Line......Page 565
29.7 Correctness of the Sampling Equipment Used for the Moisture Determination......Page 567
29.8 Complementary Remarks and Conclusions......Page 568
30.1.1 Financial Peculiarities......Page 572
30.2 A Logical Approach......Page 573
30.3.1 Important Limitation......Page 574
30.3.2 Practical Examples......Page 575
30.3.4 Relation between the Total Weight of Gold in a Sample and the Weight of Its Largest Gold Particle......Page 577
30.3.5 Problems Attached to the Variability of Gold in Stream Sediments during Geochemical Reconnaissance......Page 578
30.3.6 Problem Associated with the Liberation of Gold during Sample Preparation......Page 580
30.3.7 Useful Sampling Nomographs......Page 581
30.4.2 Calculation of the Intrinsic Heterogeneity IHL......Page 584
30.4.3 Useful Sampling Nomograph......Page 586
30.4.4 Practical Examples......Page 587
30.5 Gold Is Associated with Another Major Mineral......Page 588
30.5.1 Practical Example......Page 589
30.6.1 Equation and Shape of a Poisson Distribution......Page 590
30.6.2 The Most Probable Result......Page 592
31.1 Introduction......Page 594
31.2 Key Questions and Issues Specifically Related to Sampling in the Environment......Page 595
31.3 A Logical Approach......Page 596
31.3.1 Structural Property of a Pollutant......Page 597
31.3.3 Interaction between Sampling Cost, Accuracy, Precision, and the Regulatory Threshold......Page 598
31.3.4 Standardization of a Sampling Strategy......Page 599
31.4.2 Errors Generated by the Sampling of One-Dimensional Wastes......Page 600
31.4.4 The Analytical Error......Page 601
31.5.2 Heterogeneity of a One-Dimensional Waste......Page 602
31.5.3 Heterogeneity of a Two- or Three-Dimensional Waste......Page 603
31.6.2 Zero-Dimensional Wastes......Page 605
31.6.3 One-Dimensional Wastes......Page 608
31.6.4 Two-Dimensional Wastes......Page 609
31.6.5 Three-Dimensional Wastes......Page 614
31.6.6 Areas That Would Benefit from the Implementation of a Variographic Experiment......Page 615
31.7 Implementation of Sampling Plans......Page 620
31.7.1 Notion of Sampling Correctness in the Environment......Page 621
31.8 Development and Use of Standard Reference Materials......Page 623
31.9 Conclusions and Recommendations......Page 624
31.9.2 Development of a Sampling Plan......Page 625
31.9.3 Correct Implementation of the Sampling Plan......Page 626
32.2 Cost of Representativeness......Page 628
32.4 Notion of Acceptable Cost......Page 629
32.5 Sampling of Three-Dimensional Lots......Page 630
32.6 Sampling of Two-Dimensional Lots......Page 631
32.8 Sampling of Zero-Dimensional Lots......Page 632
Part X: Chronostatistics......Page 634
33.2 Micromanagement of Process Variability......Page 636
33.3.1 Differences between Control Limits and Specification Limits......Page 637
33.3.3 The Old Ways of Doing Process Control......Page 638
33.3.4 The Superiority of Graphics......Page 640
34.2 Scope......Page 642
34.3.3 Measuring Heterogeneity Variability with the Variogram......Page 644
34.3.4 Extrapolation of the Variogram to Time or Distance Zero......Page 645
34.3.6 Understanding Clearly the Client’s Needs......Page 646
34.3.7 Calculation of the Short-Range Absolute Variogram......Page 647
34.3.8 Calculation of the Long-Range Absolute Variogram......Page 648
34.4.1 Testing the Capability of the Total Sampling, Measurement, and Sampling Interval......Page 649
34.4.2 Testing the Additional Contribution of a Cycle......Page 651
35.1.2 Calculation of the Short-Range Absolute Variogram......Page 654
35.1.3 Calculation of the Long-Range Absolute Variogram......Page 655
35.1.4 Testing the Capability of the Total Sampling, Measurement, and Sampling Interval......Page 657
35.1.5 Testing the Additional Contribution of a Cycle......Page 659
35.2 Case Study #2: Controlling g/L Solids of a Precipitated Chemical at the Underflow Exit of a Thickener......Page 660
35.2.3 Interpretation of the Experimental Absolute Variogram......Page 661
35.3 Conclusions......Page 665
Part XI: Homogenization......Page 668
36.1 Introduction......Page 670
36.2.2 Dynamic and Discontinuous Processes......Page 671
36.2.5 The Particular Case of Ball and Rod Mills......Page 673
37.2 Continuous and Discontinuous Fluctuations......Page 674
37.3.3 Assumption #3: Preparation of the Pile Following a Preestablished Mode......Page 675
37.3.5 Assumption #5: Constancy of the Speed of the Reclaiming System......Page 676
37.4 Parallel between the Reclaiming Phase and Systematic Sampling......Page 677
37.4.3 Properties of the Variance s2(HFE)......Page 679
37.5.1 Necessary Precautions during the Experiment......Page 681
37.5.3 Determination of the Parameter V1 of the Input Variogram......Page 682
37.5.5 Calculation of the Variogram of the Output Material......Page 684
37.6 Conclusions......Page 685
Part XII: Recommendations to Manufacturers of Sampling Equipment and to Engineering Firms......Page 688
38.1.1 Why Is It That Sampling Systems Don’t Work?......Page 690
38.1.2 Why Is It That Incorrect Sampling Systems Are Built?......Page 691
38.1.3 List of Typical Incorrect Sampling Systems......Page 692
38.1.4 Why Do Customers Still Support the Use of Incorrect Samplers?......Page 693
38.1.6 Can a Bias Test Credit an Incorrect Sampling System?......Page 694
38.1.10 Responsibility of Standards Committees on Sampling......Page 695
38.2.1 Can We Find Correct Sampling Systems on the Market?......Page 696
38.3 Integration of Correct Sampling Systems in All New Plant Feasibility Studies......Page 697
References......Page 700
Index......Page 710

Citation preview

Theory of Sampling and Sampling Practice


Theory of Sampling and Sampling Practice
Third Edition

Francis F. Pitard

CRC Press / Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2019 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works

Printed on acid-free paper

International Standard Book Number-13: 9781138476486 (Hardback)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data
Names: Pitard, Francis F., author.
Title: Theory of Sampling and Sampling Practice / Francis F. Pitard.
Other titles: Pierre Gy’s sampling theory and sampling practice.
Description: Third edition. | Boca Raton : Taylor & Francis, 2019. | Revised edition of: Pierre Gy’s sampling theory and sampling practice : heterogeneity, sampling correctness, and statistical process control / Francis F. Pitard. 2nd ed. c1993. | Includes bibliographical references.
Identifiers: LCCN 2018040960 | ISBN 9781138476486 (hardback : alk. paper) | ISBN 9781351105934 (ebook)
Subjects: LCSH: Ores–Sampling and estimation.
Classification: LCC TN560 .P56 2019 | DDC 622/.7–dc23
LC record available at https://lccn.loc.gov/2018040960

Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com

Contents Foreword . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvii Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxix The Author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxxi

Part I

Introduction and a Management Strategy . . . . . . . . . . . . . . . . 1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 Historical Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 Subdivisions of the Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

1. Definition of Basic Terms and Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 1.1 Basic Terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11 1.2 List of Notation and Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .25 1.2.1 Latin Letters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .25 1.2.2 Greek Letters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .26 1.3 The Word Error versus the Word Uncertainty Controvercy. . . . . . . . . . . . . .26 1.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .27 1.3.2 Going Back to Matheron and Gy’s Fundamentals. . . . . . . . . . . . .28 1.3.3 Jumping from Uncertainty to Error . . . . . . . . . . . . . . . . . . . . . . . . .28 1.3.4 Sampling Correctness: The Mandatory Path to Predictable Uncertainty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .29 2. A Management Strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 2.1 Structural Property . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .31 2.2 Circumstantial Property . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .32 2.3 Concepts of Primary and Secondary Properties . . . . . . . . . . . . . . . . . . . . . . .33 2.3.1 The Concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .33 2.3.2 The Consequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .33 2.4 The Primary Structural Property of Sampling Correctness . . . . . . . . . . . . . .35 2.5 Relationship between Correctness and Accuracy . . . . . . . . . . . . . . . . . . . . . .35 2.6 Practical Advantages of Controlling Sampling Correctness . . . . . . . . . . . . .36 2.7 Disadvantages and Risks of Controlling Accuracy. . . . . . . . . . . . . . . . . . . . .36 2.8 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .37

Part II

Fundamental Statistical Concepts Used in the Theory of Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

3. Fundamental Statistical Concepts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 3.1 Notion of Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .43

v

vi

Contents

3.2

3.3 3.4

3.5

3.6 3.7 3.8

3.9

3.10

3.11

Probability Law of a Random Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .44 3.2.1 Notion of Random Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .44 3.2.2 From a Random Variable to Its Probability Law . . . . . . . . . . . . . .45 3.2.3 Definition of a Probability Law . . . . . . . . . . . . . . . . . . . . . . . . . . . .45 3.2.4 Graphic Representation of a Probability Law (Discrete Variable) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .45 Dependence between Random Variables. . . . . . . . . . . . . . . . . . . . . . . . . . . . .47 Position Parameters and Characterization of a Probability Distribution . . .47 3.4.1 The Arithmetic Average x of a Discrete Set Is Defined as . . . . . .47 3.4.2 The Quadratic Average xQ of a Discrete Set Is Defined as . . . . . .48 3.4.3 The Geometric Average xG of a Discrete Set Is Defined as . . . . . .48 3.4.4 The Harmonic Average xH of a Discrete Set Is Defined as . . . . . .48 3.4.5 The Weighted Average xM of a Discrete Set Is Defined as . . . . . .48 3.4.6 The Median. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .48 3.4.7 The Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .48 3.4.8 Dispersion Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .48 3.4.9 The Pearson Variation Factor: The Relative Standard Deviation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .51 Permutations and Combinations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .51 3.5.1 Permutations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .51 3.5.2 Combinations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .52 The Gaussian Model: The Normal Probability Distribution . . . . . . . . . . . . .52 The Binomial Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .53 The Poisson Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .54 3.8.1 Limitations of Normal and Lognormal Statistical Models . . . . . .54 3.8.2 Poisson Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .54 Preventive Recommendations Relative to Sampling . . . . . . . . . . . . . . . . . . .59 3.9.1 The Grade aS of a Sample Obeys a Normal Distribution . . . . . . .59 3.9.2 The Grade aS of a Sample Obeys a Poisson Distribution . . . . . . .59 3.9.3 Capital Notion of Probabilistic Selection Process . . . . . . . . . . . . .59 Random and Systematic Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .59 3.10.1 Notion of Precision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .60 3.10.2 Notion of Accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .60 3.10.3 Notion of Representativeness. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .61 3.10.4 Graphic Representation of the Notion of Accuracy and Precision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .61 Summation of Random and Systematic Errors . . . . . . . . . . . . . . . . . . . . . . . .64

4. A Logical Introduction to the Components of the Overall Estimation Error . . . . 65 4.1 Constitution Heterogeneity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .66 4.2 Distribution Heterogeneity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .66 4.3 Classification of Lots. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .67 4.3.1 Number of Dimensions Characterizing a Lot. . . . . . . . . . . . . . . . .67 4.3.2 Continuity or Discontinuity of the Selected Model Representing the Lot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .67

Contents

4.4 4.5 4.6 4.7 4.8 4.9 4.10 4.11 4.12 4.13

Part III

vii

4.3.3 Order and Disorder of the Constituents of the Lot . . . . . . . . . . . .68 Heterogeneity of a Zero-Dimensional Lot . . . . . . . . . . . . . . . . . . . . . . . . . . . .68 Heterogeneity of a One-Dimensional Lot . . . . . . . . . . . . . . . . . . . . . . . . . . . .69 Heterogeneity of Two-Dimensional Lots . . . . . . . . . . . . . . . . . . . . . . . . . . . . .70 Heterogeneity of Three-Dimensional Lots . . . . . . . . . . . . . . . . . . . . . . . . . . . .70 Qualitative and Quantitative Components of the Heterogeneity Fluctuation Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .71 Materialization of the Sampling Operation: The Increment Materialization Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .71 The Sampling Selection Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .72 The Total Sampling Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .72 The Analytical Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .73 The Overall Estimation Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .73

Heterogeneity and Homogeneity . . . . . . . . . . . . . . . . . . . . . 77

5. A Logical Introduction to the Notion of Heterogeneity . . . . . . . . . . . . . . . . . . . . . . 79 5.1 Qualitative Analysis of the Duality Homogeneity versus Heterogeneity . .80 5.1.1 Constitution Heterogeneity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .80 5.1.2 Distribution Heterogeneity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .81 5.2 Classification of the Lots Submitted to Quality or Quantity Control . . . . . .82 5.2.1 Number of Dimensions of the Selected Model Intended to Represent a Lot. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .82 5.2.2 Continuity or Discontinuity of the Selected Model . . . . . . . . . . . .83 5.2.3 Order or Disorder of the Units Making Up a Lot . . . . . . . . . . . . .84 5.3 Functional, Random and Stochastic Relations. . . . . . . . . . . . . . . . . . . . . . . . .84 5.3.1 Functional Relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .84 5.3.2 Random Relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .85 5.3.3 Stochastic Relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .85 5.4 Caption of the Various Cases. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .85 5.4.1 Zero-Dimensional and Discontinuous Lots . . . . . . . . . . . . . . . . . .85 5.4.2 One-Dimensional and Continuous Lots . . . . . . . . . . . . . . . . . . . . .85 5.4.3 Two- and Three-Dimensional Lots . . . . . . . . . . . . . . . . . . . . . . . . .86 6. Heterogeneity of a Zero-Dimensional Lot: Constitution and Distribution Heterogeneities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87 6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .87 6.2 Definitions and Basic Relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .88 6.3 Population of Specified Units: Population of Fragments . . . . . . . . . . . . . . . .89 6.3.1 Heterogeneity Carried by a Fragment within the Lot . . . . . . . . . .90 6.3.2 Average of the Heterogeneities Carried by the Fragments of a Lot. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .91 6.3.3 Variance of the Heterogeneities Carried by the Fragments of a Lot. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .91

viii

Contents

Definition of the Constitution Heterogeneity of a Lot. . . . . . . . . . . .92 Constitution Heterogeneity of a Composited Lot . . . . . . . . . . . . .92 The Intrinsic Heterogeneity of the Fragments Making Up the Lot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .92 6.3.7 The Intrinsic Heterogeneity of a Composited Lot . . . . . . . . . . . . .93 6.3.8 Respective Properties of CHL and IHL . . . . . . . . . . . . . . . . . . . . . . .93 6.3.9 Constitution Homogeneity of a Lot . . . . . . . . . . . . . . . . . . . . . . . . .94 Population of Specified Units: Population of Groups of Fragments. . . . . . .94 6.4.1 Introduction to the Notion of Distribution Heterogeneity . . . . . . . .94 6.4.2 Heterogeneity Carried by a Fragment within a Group of Fragments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .94 6.4.3 Heterogeneity Carried by a Group of Fragments within the Lot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .95 6.4.4 Definition of the Distribution Heterogeneity of a Lot . . . . . . . . . . . .95 6.4.5 Relationship between Constitution and Distribution Heterogeneities. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .96 6.4.6 Definition of the Constitution Homogeneity . . . . . . . . . . . . . . . . .97 6.4.7 Definition of the Distribution Homogeneity . . . . . . . . . . . . . . . . .97 6.4.8 Natural Distribution Homogeneity within a Lot . . . . . . . . . . . . . .98 6.4.9 Different Kinds of Natural Distribution Homogeneity . . . . . . . .100 6.4.10 Experimental Verification of the Homogeneity . . . . . . . . . . . . . .102 6.4.11 Maximum Distribution Heterogeneity within a Lot . . . . . . . . . .102 6.4.12 Definition and Properties of the Grouping Factor . . . . . . . . . . . . .103 6.4.13 Definition and Properties of the Segregation Factor . . . . . . . . . . .104 6.4.14 Effect of the Observation Scale on the Value of the Distribution Heterogeneity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .105 6.4.15 Effect of the Size Distribution of the Fragments on the Value of the Constitution Heterogeneity . . . . . . . . . . . . . . . . . . .105 6.4.16 General Expression of the Distribution Heterogeneity . . . . . . . .106 6.4.17 Illustration of the Definition of Heterogeneity. . . . . . . . . . . . . . .107 Transformation of a Set of Units with Two Descriptors into an Equivalent Set of Units with One Descriptor. . . . . . . . . . . . . . . . . . . . . . . . .109 Practical Use of the Proposed Definitions of the Heterogeneity . . . . . . . . .110 Practical Example of the Calculation of Heterogeneity Characteristics in a Counted Population . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .111 6.3.4 6.3.5 6.3.6

6.4

6.5 6.6 6.7

7. Heterogeneity of a One-Dimensional Lot: Notion of Variography . . . . . . . . . . . 115 7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .115 7.2 Total Heterogeneity Supported by a One-Dimensional Lot . . . . . . . . . . . .116 7.2.1 Definition of the Heterogeneity Carried by the Unit Um . . . . . .116 7.2.2 Characterization of a Chronological Series. . . . . . . . . . . . . . . . . .117 7.3 Overall Characterization of the Heterogeneity of a One-Dimensional Lot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .118 7.4 Sequential Characterization of the Heterogneity of a One-Dimensional Lot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .118

Contents

7.5

7.6 7.7

7.8 7.9

7.10

7.11 7.12

7.13

7.14

ix

7.4.1 Order and Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .118 7.4.2 Basic Definition of the Semi-Variogram . . . . . . . . . . . . . . . . . . . .118 Actual Presentation of the Variogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .120 7.5.1 Problem Associated with the Central Values of the Chronological Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .120 7.5.2 Problem Associated with the Precision of the Variographic Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .121 7.5.3 Problem Associated with the Main Characteristics of the Processing Stream . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .121 Comparison between the Variogram of hm with the Variograms of the Two Descriptors am and Mm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .122 Definition of the Relative Variogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .122 7.7.1 Example #1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .124 7.7.2 Example #2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .124 Relevance of the Chronological Order of the Units . . . . . . . . . . . . . . . . . . .124 Modeling of the Experimental Variogram . . . . . . . . . . . . . . . . . . . . . . . . . . .127 7.9.1 Description of the Heterogeneity of a One-Dimensional Lot in Terms of a Variogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . .127 7.9.2 The Short-Range Heterogeneity Fluctuation . . . . . . . . . . . . . . . .128 7.9.3 The Long-Range Heterogeneity Fluctuation . . . . . . . . . . . . . . . .128 7.9.4 The Periodic Heterogeneity Fluctuation . . . . . . . . . . . . . . . . . . . .129 7.9.5 Properties of the Residual Component . . . . . . . . . . . . . . . . . . . . .130 7.9.6 Properties of the Variogram before and beyond the Range . . . .130 7.9.7 Area of Influence of One Increment . . . . . . . . . . . . . . . . . . . . . . .131 7.9.8 Stationarity of the Information Provided by a Variogram . . . . .132 Auxiliary Functions of the Variogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .132 7.10.1 The First Order Average Integral of the Variogram . . . . . . . . . .133 7.10.2 The Second Order Average Integral of the Variogram . . . . . . . .134 7.10.3 The Moving Average of the Variogram . . . . . . . . . . . . . . . . . . . .135 From Heterogeneity to the Continuous Heterogeneity Fluctuation Error.136 7.11.1 Definition of Errors Generators . . . . . . . . . . . . . . . . . . . . . . . . . . .136 Point-By-Point Interpretation of the Variogram . . . . . . . . . . . . . . . . . . . . . .137 7.12.1 Graphical Integration of a Variogram . . . . . . . . . . . . . . . . . . . . . .137 7.12.2 Significance of the Term V(0) . . . . . . . . . . . . . . . . . . . . . . . . . . . . .139 7.12.3 Practical Estimation of the Term V(0) Using a Separate Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .140 7.12.4 Point-by-Point Calculation of the First Order Average Integral W(j) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .142 7.12.5 Point-by-Point Calculation of the Second Order Average Integral W’(j) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .142 7.12.6 Calculation of the Heterogeneity Fluctuation Error . . . . . . . . . 
.143 Step-By-Step Interpretation of a Variogram . . . . . . . . . . . . . . . . . . . . . . . . .144 7.13.1 Investigation of the Random Term V(0) . . . . . . . . . . . . . . . . . . . .144 7.13.2 Investigation of the Continuous Term of V(j) . . . . . . . . . . . . . . .144 Practical Interest of a Variographic Experiment in Quality Control. . . . . .148

x

Contents

7.15

Part IV

7.14.1 Stability of a Variogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .150 7.14.2 Average Estimates versus Instantaneous Estimates . . . . . . . . . .150 Practical Applications of the Auxiliary Functions of the Variogram . . . . .151

Sampling Errors Introduced by Various Forms of Heterogeneity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153

8. Sampling of One-Dimensional Lots: The Continuous Model. . . . . . . . . . . . . . . . 155 8.1 Theoretical Use of the One-Dimensional Model . . . . . . . . . . . . . . . . . . . . . .155 8.2 The Analytical Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .156 8.3 The One-Dimensional Model: Definitions and Basic Notations . . . . . . . . .156 8.3.1 Definition of the Real Lot L. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .156 8.3.2 Definition of an Imaginary Lot L’ . . . . . . . . . . . . . . . . . . . . . . . . .157 8.4 Characterization of the Heterogeneity of a One-Dimensional Lot . . . . . . .158 8.5 Characterization of the Increment Sampling Process . . . . . . . . . . . . . . . . . .159 8.6 Characterization of the Sample . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .159 8.7 The Continuous Heterogeneity Fluctuation Error HFE . . . . . . . . . . . . . . . .160 8.8 Variance of the Continuous Heterogeneity Fluctuation Error HFE . . . . . .161 8.8.1 Variance of HFE for Random Systematic Sampling . . . . . . . . . .161 8.8.2 Variance of HFE for Stratified Random Sampling . . . . . . . . . . . .161 8.8.3 Variance of HFE for Strict Random Sampling . . . . . . . . . . . . . . .161 8.9 Components of the Continuous Heterogeneity Fluctuation Error HFE . . .162 8.9.1 Components of the Average m(HFE). . . . . . . . . . . . . . . . . . . . . . .163 8.9.2 Components of the Variance s2(HFE) . . . . . . . . . . . . . . . . . . . . . .163 8.10 The Discontinuous Random Term HFE1 . . . . . . . . . . . . . . . . . . . . . . . . . . . .163 8.10.1 Properties of the Average mðHFE1 Þ . . . . . . . . . . . . . . . . . . . . . . . .163 8.10.2 Properties of the Variance s2 ðHFE1 Þ . . . . . . . . . . . . . . . . . . . . . . .164 8.10.3 Mathematical Cancellation of the Variance s2 ðHFE1 Þ . . . . . . . . .165 8.10.4 Minimization of the Variance s2 ðHFE1 Þ. . . . . . . . . . . . . . . . . . . . .165 8.10.5 The Two Components of the Error HFE1 . . . . . . . . . . . . . . . . . . .166 8.11 The Continuous Nonrandom Term HFE2 . . . . . . . . . . . . . . . . . . . . . . . . . . .167 8.11.1 Properties of the Average mðHFE2 Þ . . . . . . . . . . . . . . . . . . . . . . . .169 8.11.2 Properties of the Variance s2 ðHFE2 Þ . . . . . . . . . . . . . . . . . . . . . . .170 8.11.3 Cancellation of the Variance s2 ðHFE2 Þ. . . . . . . . . . . . . . . . . . . . . .171 8.11.4 Minimization of the Variance s2 ðHFE2 Þ. . . . . . . . . . . . . . . . . . . . .172 8.12 The Periodic Term HFE3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .173 8.12.1 Limitations of the Variographic Analysis of the Periodic Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .173 8.12.2 Frequency and Origin of Periodic Fluctuations . . . . . . . . . . . . . .173 8.12.3 From the Heterogeneity h3(t) to the Term HFE3. . . . . . . . . . . . . .174 8.12.4 Properties of the Moments of the Error HFE3. . . . . . . . . . . . . . . .175 8.12.5 Practical Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .177

Contents

xi

9. Sampling of Zero-Dimensional Lots: The Discrete Model . . . 179
  9.1 Introduction . . . 179
  9.2 Notations . . . 179
  9.3 Distribution of the Random Variables Πm, NSk, aSk, and ASk . . . 180
    9.3.1 Recall of a Statistical Property . . . 180
    9.3.2 Distribution of the Frequency Πm of the Unit Um in the Set Z . . . 180
    9.3.3 Number of Units NZ in the Set Z . . . 181
    9.3.4 Distribution of the NSk Units in the Sample Sk . . . 181
    9.3.5 Weight MZ of the Set Z . . . 181
    9.3.6 Distribution of the Weight MSk of the Sample Sk . . . 182
    9.3.7 Distribution of the Total Weight ASk of the Component of Interest in the Sample Sk . . . 182
  9.4 Properties of the Sample Selection Error SSE: The General Case . . . 183
    9.4.1 Definition of the Sample Selection Error SSE . . . 183
    9.4.2 Relationship between the Properties of SSE and Those of aSk . . . 183
    9.4.3 Distribution Law of aSk . . . 183
    9.4.4 Moments of the Critical Content: The Theoretical Approach . . . 186
    9.4.5 Moments of aSk and SSE . . . 188
  9.5 Properties of the Sample Selection Error SSE: Correct Sampling . . . 189
    9.5.1 Consequences of the Selection Correctness Hypothesis . . . 189
    9.5.2 First Approximation . . . 189
  9.6 The Correct Probabilistic Model . . . 189
  9.7 The Components of the Sample Selection Error SSE . . . 190
    9.7.1 Small-Scale Heterogeneity . . . 190
    9.7.2 Comparison of SSE from the Discrete Model with HFE1 from the Continuous Model . . . 191
    9.7.3 Logical Analysis of the Discontinuous Heterogeneity Fluctuation Error HFE1 . . . 193

10. The Fundamental Sampling Error . . . 197
  10.1 Definition of the Fragment Shape Factor . . . 200
  10.2 Estimation of the Fragment Shape Factor . . . 201
  10.3 Definition of the Fragment Size Distribution Factor . . . 202
  10.4 Definition of the Maximum Fragment Size . . . 202
  10.5 Definition of the Mineralogical Factor . . . 203
    10.5.1 Cases Where Several Phases of the Mineral of Interest Are Present . . . 204
    10.5.2 Calculation of the Mineralogical Factor . . . 204
  10.6 Definition of the Liberation Factor . . . 206
  10.7 Calculation of the Liberation Factor . . . 206
    10.7.1 Calculation of the Liberation Factor through the Notion of Constitution Heterogeneity . . . 206
    10.7.2 Calculation of the Liberation Factor Using the Notion of Liberation Size . . . 210
  10.8 Calculation of the Intrinsic Heterogeneity IHL . . . 211
  10.9 Definition of the Fundamental Sampling Error FSE . . . 211

11. Minimizing the Fundamental Sampling Error in Sampling Protocols . . . 213
  11.1 Minimum and Maximum of the Fundamental Sampling Error . . . 213
  11.2 Construction of Sampling Nomographs . . . 214
  11.3 Revisiting the Determination of the Liberation Factor . . . 216
  11.4 Recommended Method for the Determination of IHL for Each Size Fraction of a Typical Fragment Size Distribution . . . 217
    11.4.1 General Approach . . . 217
    11.4.2 Example of the General Method Applied to a Gold Deposit . . . 218
  11.5 Cases Where the Material Has Been Crushed Close to the Liberation Size of the Constituent of Interest . . . 225
  11.6 Cases Where a Constituent of Interest Is Associated with Another Major Mineral . . . 227
  11.7 Sampling for Fragment Size Distribution Analysis . . . 228
  11.8 Important Guidelines . . . 231
    11.8.1 The Total Allotted Variance . . . 231
    11.8.2 A Logical, Economic Distribution of the Total Allotted Variance . . . 233

12. Other Approaches, a Strategy, and Cardinal Rules for the Estimation of the Variance of FSE . . . 235
  12.1 Introduction . . . 235
  12.2 Cardinal Rule #1 in Sampling . . . 237
  12.3 Cardinal Rule #2 in Sampling . . . 238
  12.4 Cardinal Rule #3 in Sampling: Using a Logical Strategy . . . 240
    12.4.1 The Mandatory Calibration of K and x . . . 241
    12.4.2 The Geologist to the Rescue . . . 241
    12.4.3 The Mineralogist to the Rescue . . . 241
    12.4.4 Representing the Coarsest Fragments . . . 242
    12.4.5 Representing the Coarsest Particles of the Constituent of Interest . . . 242
    12.4.6 A Logical Flow Sheet to Perform Heterogeneity Tests . . . 242
    12.4.7 Calculating the Necessary Sample Weight Ms Instead of the Variance of FSE . . . 243

13. The Grouping and Segregation Error . . . 245
  13.1 Minimization of the Grouping and Segregation Error . . . 246
    13.1.1 Conditions for the Cancellation of the Average m(GSE) . . . 246
    13.1.2 Conditions for the Cancellation of the Variance s²(GSE) . . . 246
    13.1.3 Conditions for the Minimization of the Variance s²(GSE) . . . 247
  13.2 A Logical Solution to Problems Generated by the Variance of the Grouping and Segregation Error . . . 249
  13.3 The Transient Nature of Segregation . . . 250
    13.3.1 Segregation Introduced by a Transfer from One Conveyor Belt to Another . . . 251
    13.3.2 Segregation Introduced at the Discharge Point of a Laboratory Blender . . . 251
  13.4 The Relativity of Segregation . . . 252
  13.5 Segregation because of Fragment Density Heterogeneity . . . 253
  13.6 Segregation because of Fragment Size Heterogeneity . . . 253
  13.7 Segregation because of Fragment Shape Heterogeneity . . . 254
  13.8 Segregation because of Air Turbulence . . . 256
  13.9 Segregation because of Vibrations . . . 256
  13.10 Other Causes for Segregation . . . 259

Part V: Integration of Visman and Ingamells's Works into the Theory of Sampling . . . 261

14. The Works of Visman and Ingamells Relevant to the Theory of Sampling . . . 263
  14.1 Scope . . . 263
  14.2 An Introduction to Poisson Processes . . . 263
    14.2.1 A Simple, Pragmatic Observation . . . 263
    14.2.2 The Poisson Model: The Law of Small Probabilities . . . 264
    14.2.3 Shape of a Poisson Distribution . . . 265
    14.2.4 A Simple, but Useful Exercise . . . 265
    14.2.5 Additivity of Poisson Processes . . . 266
    14.2.6 Programming Factorials . . . 267
  14.3 Visman's Sampling Equation . . . 267
  14.4 A Very Useful Visman Experiment . . . 268
    14.4.1 Discussion about the Experimental Variances . . . 270
    14.4.2 Case Where No Large Samples Are Available . . . 272
  14.5 Ingamells' Most Probable Result . . . 272
  14.6 Ingamells' Gangue Concentration . . . 274
    14.6.1 Discussion about the Low Background Content . . . 275
  14.7 Ingamells' Optimum Sample Mass . . . 276
  14.8 Ingamells' Minimum Sample Mass . . . 277
  14.9 The Link with Gy's Preventive Suggestions . . . 278
  14.10 Necessary Variances to Construct a Meaningful Sampling Diagram . . . 278
    14.10.1 The Variance of the Fundamental Sampling Error . . . 278
    14.10.2 The Variance s²opt Taking into Account the Optimum Sample Mass . . . 278
    14.10.3 The Variance of a Single Assay s²N=1 . . . 279
  14.11 Case Study: A Sampling Diagram in a Nickel-Cobalt Deposit . . . 279
    14.11.1 Compositing Horizontally . . . 280
    14.11.2 Calculation of the Low Background Content for Cobalt . . . 280
    14.11.3 Calculation of the Most Probable Result . . . 280
    14.11.4 Calculation of the Standard Deviation of the Fundamental Sampling Error . . . 281
    14.11.5 Calculation of the Standard Deviation Taking into Account the Optimum Sample Mass . . . 282
    14.11.6 Calculation of the Standard Deviation of a Single Assay sN=1 . . . 283
    14.11.7 Compositing Vertically . . . 283

15. Theoretical, Practical, and Economic Difficulties in Sampling for Trace Constituents . . . 285
  15.1 Summary . . . 285
  15.2 Scope . . . 285
  15.3 Industries That Should Be Concerned . . . 286
  15.4 A Logical Approach Suggested by the Theory of Sampling . . . 286
    15.4.1 Mineralogical and Microscopic Observations . . . 286
    15.4.2 Heterogeneity Tests . . . 287
    15.4.3 Respecting the Cardinal Rules of Sampling Correctness . . . 287
    15.4.4 Quantifying the Fundamental Sampling Error . . . 287
    15.4.5 Minimizing the Grouping and Segregation Error . . . 288
    15.4.6 The Challenges of Reality . . . 288
  15.5 Ingamells' Work to the Rescue . . . 289
    15.5.1 From Visman to Ingamells . . . 289
    15.5.2 Limitations of Normal and Lognormal Statistical Models . . . 289
    15.5.3 Poisson Processes . . . 289
  15.6 Case Study #1: Estimation of the Iron Content in High-Purity Ammonium Paratungstate . . . 290
    15.6.1 Investigation of the Histogram . . . 292
    15.6.2 Discussion of an Acceptable Maximum for the Standard Deviation of the FSE . . . 294
    15.6.3 Visman Sampling Equation . . . 295
    15.6.4 The Most Probable Result . . . 296
  15.7 Case Study #2: Poisson Processes in a Gold Deposit . . . 297
  15.8 Summarizing All the Information in a Sampling Diagram . . . 301
  15.9 How a Double Poisson Process May Take Place . . . 303
  15.10 Recommendations . . . 304
    15.10.1 Case #1: Maximum Gold Particle Size below 80 μm . . . 304
    15.10.2 Case #2: Maximum Gold Particle Size between 80 and 200 μm . . . 304
    15.10.3 Case #3: Maximum Gold Particle Size between 200 and 1000 μm . . . 304

16. From Links between Gy and Ingamells to a Sampling Strategy . . . 305
  16.1 Summary . . . 305
  16.2 Discussions, Conclusions, and Recommendations for Future Work . . . 305
    16.2.1 The Wisdom of Prevention for Due Diligence . . . 306
    16.2.2 Difficult Cases Where Good Prevention Is, in Appearance, Not Realistic . . . 306
    16.2.3 After-the-Fact Noncompliance with Due Diligence . . . 307
  16.3 Visman and Ingamells' Works Help Justify an Augmented Gy Approach . . . 307

Part VI: The In-Situ Nugget Effect: A Major Component of the Random Term of a Variogram . . . 309

17. The In-Situ Nugget Effect: A Transition between Geostatistics and the Theory of Sampling . . . 311
  17.1 Summary . . . 311
  17.2 Scope . . . 312
  17.3 Definitions and Notations . . . 312
  17.4 In-Situ Constitution Heterogeneity . . . 312
    17.4.1 A Special In-Situ Structural Case . . . 312
    17.4.2 Nugget Effect and In-Situ Nugget Effect . . . 314
    17.4.3 Estimation of the Variance of the Nugget Effect . . . 315
    17.4.4 Components of the Variographic Random Variance . . . 315
  17.5 Theoretical Approach . . . 317
    17.5.1 Revisiting Gy's Formulas . . . 317
    17.5.2 Estimation of the Maximum Size of Mineral of Interest Particles or Their Cluster Equivalents . . . 319
  17.6 The Mean of the In-Situ Nugget Effect and Local Biases . . . 322
  17.7 The Low-Background Constituent of Interest Content . . . 323
    17.7.1 Estimation of the Low Background Content of the Constituent of Interest . . . 324
    17.7.2 The Most Probable Mineral of Interest Estimated Content . . . 325
  17.8 Case Study #1: Arsenic Impurity in a Copper Deposit . . . 326
    17.8.1 Calculation of the Necessary Sample Mass . . . 326
  17.9 Case Study #2: Molybdenum in a Copper Deposit . . . 327
    17.9.1 Calculation of the Necessary Sample Mass . . . 327
  17.10 Case Study #3: Coarse Gold and Clustering of Fine Gold . . . 328
    17.10.1 Calculation of the Necessary Sample Mass . . . 328
  17.11 Consequences of a Large In-Situ Nugget Effect . . . 329
  17.12 Conclusions . . . 330
  17.13 Recommendations . . . 330

Part VII: The Capital Notion of Sampling Correctness . . . 333
  Introduction . . . 333

18. The Increment Materialization Error . . . 337
  18.1 Probabilistic and Nonprobabilistic Selecting Processes . . . 340
  18.2 Critical Review of Nonprobabilistic Selecting Processes . . . 340
    18.2.1 Purposive Sampling . . . 340
    18.2.2 Grab Sampling . . . 341
    18.2.3 Sampling with Thief Probes and Augers . . . 341
    18.2.4 Authoritative Sampling . . . 341
    18.2.5 Common Properties of Nonprobabilistic Selecting Processes . . . 342
  18.3 Probabilistic Sampling of Movable Lots . . . 342
  18.4 Probabilistic Sampling of Unmovable Lots . . . 343
  18.5 Analysis of the Increment Sampling Process . . . 343
  18.6 Analysis of the Splitting Process . . . 345
  18.7 Comparison of the Increment Process with the Splitting Process . . . 346
  18.8 Introduction to a Group of Models of the Sampling Process . . . 347
  18.9 The Continuous Model of the Lot L . . . 349
    18.9.1 Continuous Model of the Material to Be Sampled . . . 349
    18.9.2 Degenerated Models of the Lot L . . . 350
    18.9.3 The One-Dimensional Temporal Model of Flowing Streams . . . 351
    18.9.4 Punctual, Extended, and Fragmental Functions . . . 352
    18.9.5 Continuous Model of a Selection Process . . . 353
    18.9.6 Continuous Model of the Increment Sampling Process . . . 353
  18.10 The Discrete Model of the Lot L . . . 354
  18.11 The Increment Sampling Process of a Flowing Stream . . . 355
    18.11.1 Definition of the Punctual Increment Ip . . . 355
    18.11.2 Definition of the Model Extended Increment Iv . . . 355
    18.11.3 Definition of the Actual Extended Increment I'v: Increment Delimitation Error IDE . . . 357
    18.11.4 Definition of the Model Fragmental Increment If . . . 359
    18.11.5 Definition of the Actual Fragmental Increment I'f: Increment Extraction Error IEE . . . 359
    18.11.6 Definition of the Prepared Fragmental Increment I''f: Increment Preparation Error IPE . . . 360
    18.11.7 Recapitulation of the Increment Materialization Error IME . . . 362

19. Sampling Modes . . . 363
  19.1 Probabilistic Approach of the Delimitation and Extraction Processes . . . 363
    19.1.1 Definition of a Random Selection . . . 363
    19.1.2 Selection Probability of a Point t on the Time Axis . . . 364
    19.1.3 Probability for a Fragment Fi to Be Included in the Actual Extended Increment I'v and in the Model Fragmental Increment If . . . 364
    19.1.4 Probability for a Fragment Fj Belonging to the Model Fragmental Increment If to Be Included in the Actual Fragmental Increment I'f . . . 364
    19.1.5 Probability for a Fragment Fi of the Lot L to Be Included in the Actual Fragmental Increment I'f . . . 364
    19.1.6 Preparation of the Sample S . . . 365
  19.2 Random Systematic Selection Mode . . . 365
  19.3 Stratified Random Selection Mode . . . 366
  19.4 Random Selection Mode . . . 368
  19.5 Examples of Selection Modes Used in Practice . . . 368

20. The Increment Delimitation Error during Exploration, Mining, and Sampling Food and the Environment . . . 371
  20.1 Definition and Concept . . . 371
    20.1.1 A Critically Important Decision . . . 371
    20.1.2 Beware of Paradigms . . . 372
    20.1.3 Definition of Delimitation Correctness . . . 373
  20.2 Recommendations for Exploration Programs . . . 375
    20.2.1 Drilling Drifts and Surveys . . . 375
    20.2.2 The Splitting Technique of Diamond Core Samples . . . 375
    20.2.3 Selecting the Correct Diamond Core Length . . . 376
    20.2.4 Sampling of Reverse Circulation Drilling Chips . . . 376
    20.2.5 The Correctness of Drilling: Conclusion . . . 377
  20.3 Recommendations for Ore Grade Control in Open Pits . . . 377
    20.3.1 Delimitation Bias Introduced When Drilling Blast-Holes . . . 379
    20.3.2 Drilling Several Benches at Once . . . 379
    20.3.3 Delimitation Bias Introduced When Sampling the Blast-Hole Cuttings Pile Using a Tube . . . 380
    20.3.4 Sampling the Blast-Hole Cuttings Pile Using a Radial Bucket . . . 381
    20.3.5 Sampling the Blast-Hole Cuttings Pile Using Radial Channels . . . 382
    20.3.6 Trenching Soil, Clay, Laterite, or Saprolite . . . 383
    20.3.7 Monitoring Blast-Holes Using Neutron Activation: The Future . . . 383
  20.4 Sampling for Underground Ore Grade Control . . . 386
    20.4.1 Block Caving Problems . . . 386
    20.4.2 Channel Sampling in General . . . 386
    20.4.3 Channel Sampling of an Underground Face Development . . . 387
    20.4.4 Sampling at Draw-Points . . . 387

21. The Increment Delimitation Error at a Processing Plant . . . 389
  21.1 Options for Sampling a Flowing Stream . . . 389
    21.1.1 Taking the Whole Stream Part of the Time . . . 389
    21.1.2 Taking Part of the Stream All the Time . . . 389
    21.1.3 Taking Part of the Stream Part of the Time . . . 390
    21.1.4 The Special Case of In-Stream Stationary Cutters . . . 393
    21.1.5 Straight-Path Cross-Stream Cutters . . . 395
    21.1.6 Rotating-Path Cross-Stream Cutters . . . 401
    21.1.7 Shape of the Intercepted Stream . . . 403
    21.1.8 Cross-Stream Flap Sampler . . . 403
    21.1.9 Flexible Hose Samplers . . . 406
    21.1.10 Header Tanks . . . 406
    21.1.11 Rotating Cutter Located under a Cone . . . 407
    21.1.12 Rotating Sweepers . . . 410

22. The Increment Delimitation Error during Sampling at the Laboratory . . . 413
  22.1 The Use of Spatulas, Scoops, and Shovels . . . 413
    22.1.1 The Incorrect Use of a Well-Designed Spatula or Scoop . . . 413
    22.1.2 Illustration of Correct and Incorrect Delimitation and Extraction . . . 413
    22.1.3 Incorrect and Correct Use of the JIS Spoon . . . 414
    22.1.4 Correct and Incorrect Shovel, Spoon, and Spatula . . . 415
    22.1.5 The Use of a Correct Spoon for the Wrong Application . . . 415
  22.2 The Use of Rotary Dividers . . . 416
    22.2.1 Rotary Dividers . . . 416
  22.3 The Use of Riffle Splitters . . . 420
    22.3.1 True Splitting Devices – Riffle Splitters . . . 420

23. The Increment Extraction Error during Exploration and Mining . . . 423
  23.1 Definition and Concept . . . 423
  23.2 Review of Some Drilling Techniques . . . 423
    23.2.1 Drilling Orebodies with Diamond Coring Bits . . . 423
    23.2.2 Drilling Orebodies with Tungsten Carbide Coring Bits . . . 425
    23.2.3 Drilling Orebodies with Destructive Bits . . . 426
  23.3 The Plucking Effect . . . 427
    23.3.1 Modeling the Plucking Effect . . . 429
    23.3.2 Practical Exercises . . . 431
  23.4 Extraction Problems during Drilling and Sampling of Blastholes . . . 431
    23.4.1 Refluxing of Coarse Fragments during Blasthole Drilling . . . 431
    23.4.2 Recovery of the Former Sub-Drill . . . 431
    23.4.3 Upward Contamination during Drilling . . . 434
    23.4.4 Downward Contamination . . . 434
    23.4.5 Moving from Very Bad to Something Acceptable . . . 434
  23.5 Channel Sampling of an Underground Face Development . . . 435
  23.6 Drilling with Augers . . . 436
    23.6.1 Sampling a Leaching Pad Using an Auger . . . 436
    23.6.2 Sampling a Blasthole Pile Using an Auger . . . 437

24. The Increment Extraction Error during Sampling in a Processing Plant . . . 439
  24.1 Definition of the Extraction Correctness . . . 439
  24.2 Analysis of the Rebound of a Fragment on a Cutter Edge . . . 439
    24.2.1 Definitions and Notations . . . 439
    24.2.2 Chronology of the Fragment F . . . 440
    24.2.3 Chronology of the Leading Edge CL . . . 441
    24.2.4 Chronology of the Trailing Edge CT . . . 442
    24.2.5 Collision between the Fragment F and the Cutter C . . . 442
    24.2.6 Respective Positions of the Fragment F and the Leading Edge CL at the Instant tF3 . . . 443
    24.2.7 Collision between the Fragment F and the Leading Edge CL . . . 444
    24.2.8 Respective Positions of the Fragment F and the Trailing Edge CT at the Instant tF3 . . . 444
    24.2.9 The Rebounding Rule and Definition of the Model Fragmental Increment . . . 445
  24.3 Conditions of Extraction Correctness Relative to the Material to Be Sampled . . . 446
    24.3.1 Fragments Do Not Fall One by One . . . 448
    24.3.2 Fragments Are Spinning . . . 448
    24.3.3 Fragments Do Not Fall in a Vertical Plane Containing the Velocity Vector VC . . . 448
  24.4 Conditions of Extraction Correctness Relative to the Cutter Characteristics . . . 448
    24.4.1 Straightness of the Cutter Edges . . . 449
    24.4.2 Thickness of the Cutter Edges . . . 449
    24.4.3 Shapes of the Cutter Edges . . . 449
    24.4.4 Problems with Inner Slopes at the Level of Cutter Edges . . . 451
    24.4.5 Correct Outer Slopes of the Cutter Blades . . . 453
    24.4.6 Conditions for Designing a Correct Cutter . . . 453
    24.4.7 Length of the Cutter Opening . . . 453
    24.4.8 Inclination of the Cutter Opening . . . 454
    24.4.9 Cutter Problems Associated with Overflowing . . . 457
    24.4.10 The Appropriate Depth and General Design of the Cutter . . . 459
    24.4.11 Width and Velocity of the Cutter: A Logical Qualitative Approach . . . 460
    24.4.12 Cutter Width and Velocity – Rules of Extraction Correctness . . . 463
    24.4.13 Critical Cutter Width W0 . . . 464
    24.4.14 The Peculiar Case of Very Large Flowrates for the Critical Cutter Width W0 . . . 465
    24.4.15 Critical Cutter Speed V0 . . . 466
    24.4.16 Optimum Width and Optimum Cutter Speed . . . 468
    24.4.17 Guidelines for Rotating Cutters . . . 468
    24.4.18 The Special Case of In-Stream Stationary Cutters and Probes . . . 471

25. The Increment Extraction Error during Sampling at the Laboratory . . . 473
  25.1 True Splitting Devices – Riffle Splitters . . . 473
    25.1.1 Possibility of an Operating Bias . . . 473
    25.1.2 Correct Riffle Chute Width . . . 474
    25.1.3 Design of Riffle Splitters . . . 474
    25.1.4 Equity of Riffle Splitters . . . 475
    25.1.5 Feeding the Riffle Splitter in the Wrong Direction . . . 475
    25.1.6 The Seven Cardinal Rules to Follow with Riffle Splitters . . . 475
  25.2 True Splitting Devices – Sectorial Splitters . . . 476
  25.3 Degenerated Splitting Devices – Reject-Type Splitters . . . 476
  25.4 True Splitting Methods . . . 478
    25.4.1 Coning and Quartering . . . 478
    25.4.2 Alternate Shoveling . . . 479
    25.4.3 Fractional Shoveling . . . 480
    25.4.4 Degenerated Splitting Methods . . . 481
  25.5 The Special Case of a Rotating Cutter Travelling under a Deflecting Cone . . . 482

26. The Increment Preparation Errors and the Notion of Sample Integrity . . . 485
  26.1 Errors Resulting from Contamination . . . 486
    26.1.1 Contamination by Dust . . . 486
    26.1.2 Contamination by Material Present in the Sampling Circuit . . . 486
    26.1.3 Contamination by Abrasion . . . 488
    26.1.4 Contamination by Corrosion . . . 488
  26.2 Errors Resulting from Losses . . . 489
    26.2.1 Loss of Fines as Dust . . . 489
    26.2.2 Loss of Material Left in the Sampling and Preparation Circuit . . . 490
    26.2.3 Loss of Specific Fractions of the Sample . . . 490
  26.3 Errors Resulting from the Alteration of the Chemical Composition . . . 491
    26.3.1 Errors Resulting from an Addition or Fixation . . . 491
    26.3.2 Errors Resulting from Subtraction or Elimination . . . 492
  26.4 Errors Resulting from the Alteration of the Physical Composition . . . 492
    26.4.1 Addition or Creation of a Critical Component . . . 493
    26.4.2 Subtraction or Destruction of a Critical Component . . . 493
  26.5 Old Technology Equipment to Avoid . . . 494
  26.6 New Technology Equipment to Favor . . . 494
  26.7 Errors Resulting from Unintentional Mistakes . . . 495
  26.8 Errors Resulting from Fraud and Sabotage . . . 497
  26.9 Conclusions . . . 497

Part VIII: The Increment Weighting Error and the Weighing Error . . . 499

27. The Increment Weighting Error . . . 501
  27.1 Introduction . . . 501
  27.2 The Moments of IWE . . . 501
    27.2.1 The Mean of IWE . . . 501
    27.2.2 The Variance of IWE . . . 501
  27.3 Practical Experience on the Behavior of IWE . . . 502
    27.3.1 Sampling Systems Installed under the Discharge of a Flowing Stream . . . 502
    27.3.2 Sampling a Mineral Deposit . . . 503
    27.3.3 Subsampling at the Laboratory . . . 503
  27.4 Introduction to Proportional Sampling . . . 504
    27.4.1 Definition and Purpose . . . 504
    27.4.2 Time and Mass-Sampling Ratios . . . 505
    27.4.3 Estimation of the Lot Mass and Discussion of Assumptions . . . 505
  27.5 Practical Implementation of Proportional Sampling . . . 506
    27.5.1 Important Factors Affecting the Choice of a Proportional Sampler . . . 506
    27.5.2 Required Characteristics of a Proportional Sampler . . . 508
    27.5.3 Flow Regulation of the One-Dimensional Lot . . . 509
    27.5.4 Reliability of Proportional Sampling . . . 509
  27.6 Conclusion . . . 510

28. The Weighing Error . . . 511
  28.1 Comparing Two Weightometers with a Stockpile between Them . . . 511
  28.2 Comparing Two Weightometers with No Stockpile between Them . . . 512
  28.3 Correct Installation of a Weightometer . . . 512

Part IX: Review of Some Notorious Sampling Problems . . . 517

29. Sampling for the Determination of the Moisture Content . . . 519
  29.1 Introduction . . . 519
  29.2 Definition of Moisture . . . 519
    29.2.1 Moisture in Materials of Vegetable Origin . . . 519
    29.2.2 Moisture in Materials of Mineral Origin . . . 519
  29.3 Possible Definitions of the Moisture Content MLA = MLD · aLD = MLH(1 − H)aLD . . . 520
    29.3.1 Moisture Content of a Material as Received . . . 520
    29.3.2 Moisture Content of a Material after Drying . . . 521
  29.4 Determination of the Moisture Content for Itself . . . 521
  29.5 Determination of the Moisture Content for the Calculation of the Tonnage of a Critical Component . . . 522
    29.5.1 The Chemical Analysis Cannot Be Performed on Wet Materials . . . 522
    29.5.2 The Drying Step Cannot Be Performed on the Entire Lot . . . 523
    29.5.3 Conditions to Ensure Accuracy . . . 524
    29.5.4 Accuracy of the Estimation of the Dry Tonnage MLD . . . 524
    29.5.5 Accuracy of the Estimation of the Tonnage of the Constituent of Interest A . . . 524
    29.5.6 Carrying the Condition of Accuracy into Effect . . . 525
    29.5.7 Classical Method: Two Primary Samples and Two Drying Stages . . . 526
    29.5.8 The Simultaneous Drying Method . . . 528
    29.5.9 Method of the Single Sample . . . 529
  29.6 Typical Difficulties Encountered during the Selection of a Sampling Weight Base Line . . . 532
  29.7 Correctness of the Sampling Equipment Used for the Moisture Determination . . . 534
  29.8 Complementary Remarks and Conclusions . . . 535

30. Peculiarities about the Sampling of Precious Metals . . . 539
  30.1 Introduction . . . 539
    30.1.1 Financial Peculiarities . . . 539
    30.1.2 Theoretical Peculiarities . . . 540
    30.1.3 Practical Peculiarities . . . 540
  30.2 A Logical Approach . . . 540
  30.3 Gold Is Liberated . . . 541
    30.3.1 Important Limitation . . . 541
    30.3.2 Practical Examples . . . 542
    30.3.3 Study of an Alluvial Orebody Using a Classical Drilling Campaign . . . 544
    30.3.4 Relation between the Total Weight of Gold in a Sample and the Weight of Its Largest Gold Particle . . . 544
    30.3.5 Problems Attached to the Variability of Gold in Stream Sediments during Geochemical Reconnaissance . . . 545
    30.3.6 Problem Associated with the Liberation of Gold during Sample Preparation . . . 547
    30.3.7 Useful Sampling Nomographs . . . 548
  30.4 Gold Is Not Liberated . . . 551
    30.4.1 Three Important Assumptions . . . 551
    30.4.2 Calculation of the Intrinsic Heterogeneity IHL . . . 551
    30.4.3 Useful Sampling Nomograph . . . 553
    30.4.4 Practical Examples . . . 554
  30.5 Gold Is Associated with Another Major Mineral . . . 555
    30.5.1 Practical Example . . . 556
  30.6 Notion of Maximum Acceptable Fundamental Sampling Error . . . 557
    30.6.1 Equation and Shape of a Poisson Distribution . . . 557
    30.6.2 The Most Probable Result . . . 559

31. Sampling of Liquid and Solid Wastes and Sampling of the Environment . . . 561
  31.1 Introduction . . . 561
  31.2 Key Questions and Issues Specifically Related to Sampling in the Environment . . . 562
  31.3 A Logical Approach . . . 563
    31.3.1 Structural Property of a Pollutant . . . 564
    31.3.2 Structural Property of Sampling Correctness . . . 565
    31.3.3 Interaction between Sampling Cost, Accuracy, Precision, and the Regulatory Threshold . . . 565
    31.3.4 Standardization of a Sampling Strategy . . . 566
  31.4 The Components of the Overall Sampling Error . . . 567
    31.4.1 Errors Generated in the Sampling of Zero-Dimensional Wastes . . . 567
    31.4.2 Errors Generated by the Sampling of One-Dimensional Wastes . . . 567
    31.4.3 Errors Generated by Incorrect Sampling . . . 568
    31.4.4 The Analytical Error . . . 568
  31.5 Characterization of the Heterogeneity Carried by a Pollutant . . . 569
    31.5.1 Heterogeneity of a Zero-Dimensional Waste . . . 569
    31.5.2 Heterogeneity of a One-Dimensional Waste . . . 569
    31.5.3 Heterogeneity of a Two- or Three-Dimensional Waste . . . 570
  31.6 Development of an Appropriate Sampling Plan . . . 572
    31.6.1 Regulatory and Statistical Objectives . . . 572
    31.6.2 Zero-Dimensional Wastes . . . 572
    31.6.3 One-Dimensional Wastes . . . 575
    31.6.4 Two-Dimensional Wastes . . . 576
    31.6.5 Three-Dimensional Wastes . . . 581
    31.6.6 Areas That Would Benefit from the Implementation of a Variographic Experiment . . . 582
  31.7 Implementation of Sampling Plans . . . 587
    31.7.1 Notion of Sampling Correctness in the Environment . . . 588
  31.8 Development and Use of Standard Reference Materials . . . 590
  31.9 Conclusions and Recommendations . . . 591
    31.9.1 Characterization of the Various Kinds of Heterogeneity . . . 592
    31.9.2 Development of a Sampling Plan . . . 592
    31.9.3 Correct Implementation of the Sampling Plan . . . 593

32. Solvable and Unsolvable Sampling Problems . . . 595
  32.1 Definitions . . . 595
  32.2 Cost of Representativeness . . . 595
  32.3 Notion of Acceptable Representativeness . . . 596
  32.4 Notion of Acceptable Cost . . . 596
    32.4.1 Commercial Sampling . . . 597
    32.4.2 Technical Sampling . . . 597
    32.4.3 Environmental Sampling . . . 597
    32.4.4 Administrative or Internal Accounting Sampling . . . 597
  32.5 Sampling of Three-Dimensional Lots . . . 597
  32.6 Sampling of Two-Dimensional Lots . . . 598
  32.7 Sampling of One-Dimensional Lots . . . 599
    32.7.1 Stationary Lots . . . 599
    32.7.2 Flowing Streams . . . 599
  32.8 Sampling of Zero-Dimensional Lots . . . 599

Part X: Chronostatistics . . . 601

33. A Strategy to Take Better Advantage of Existing Chronological Data . . . 603
  33.1 Introduction . . . 603
  33.2 Micromanagement of Process Variability . . . 603
  33.3 The Significance of Control Limits . . . 604
    33.3.1 Differences between Control Limits and Specification Limits . . . 604
    33.3.2 Definition of the Control Chart . . . 605
    33.3.3 The Old Ways of Doing Process Control . . . 605
    33.3.4 The Superiority of Graphics . . . 607

34. The Use of the Variogram to Elaborate Meaningful Process Control Charts . . . 609
  34.1 Abstract . . . 609
  34.2 Scope . . . 609
  34.3 Search for a Variographic Approach . . . 611
    34.3.1 Selection of a Given Process Parameter of Interest . . . 611
    34.3.2 Heterogeneity Affecting the Given Process Parameter of Interest . . . 611
    34.3.3 Measuring Heterogeneity Variability with the Variogram . . . 611
    34.3.4 Extrapolation of the Variogram to Time or Distance Zero . . . 612
    34.3.5 Important Limitations for the Variogram . . . 613
    34.3.6 Understanding Clearly the Client's Needs . . . 613
    34.3.7 Calculation of the Short-Range Absolute Variogram . . . 614
    34.3.8 Calculation of the Long-Range Absolute Variogram . . . 615
  34.4 From Variography to Control Chart . . . 616
    34.4.1 Testing the Capability of the Total Sampling, Measurement, and Sampling Interval . . . 616
    34.4.2 Testing the Additional Contribution of a Cycle . . . 618

35. Case Studies Where Variography Is an Effective Tool to Discover and Quantify Structural Problems . . . 621
  35.1 Case Study #1: Controlling the Copper Content of Slag in a Copper Smelter . . . 621
    35.1.1 Understanding Clearly the Client's Needs . . . 621
    35.1.2 Calculation of the Short-Range Absolute Variogram . . . 621
    35.1.3 Calculation of the Long-Range Absolute Variogram . . . 622
    35.1.4 Testing the Capability of the Total Sampling, Measurement, and Sampling Interval . . . 624
    35.1.5 Testing the Additional Contribution of a Cycle . . . 626
  35.2 Case Study #2: Controlling G/L Solids of a Precipitated Chemical at the Underflow Exit of a Thickener . . . 627
    35.2.1 Understanding Clearly the Client's Needs . . . 628
    35.2.2 Calculation of the Absolute Variogram . . . 628
    35.2.3 Interpretation of the Experimental Absolute Variogram . . . 628
    35.2.4 Use of the Variogram Information in a Control Chart . . . 632
  35.3 Conclusions . . . 632

Part XI: Homogenization . . . 635

36. An Introduction to Homogenizing Processes . . . 637
  36.1 Introduction . . . 637
  36.2 Classification of Homogenization Processes . . . 638
    36.2.1 Stationary and Discontinuous Processes . . . 638
    36.2.2 Dynamic and Discontinuous Processes . . . 638
    36.2.3 Dynamic and Discontinuous Processes with Circulation in a Closed Circuit . . . 640
    36.2.4 Dynamic and Continuous Processes with Circulation in a Closed Circuit . . . 640
    36.2.5 The Particular Case of Ball and Rod Mills . . . 640

37. Bed-Blending Techniques . . . 641
  37.1 Introduction . . . 641
  37.2 Continuous and Discontinuous Fluctuations . . . 641
  37.3 Development of a Suitable Model . . . 642
    37.3.1 Assumption #1: Constancy of the Input Stream Flow Rate . . . 642
    37.3.2 Assumption #2: Constancy of the Stacker Speed . . . 642
    37.3.3 Assumption #3: Preparation of the Pile Following a Preestablished Mode . . . 642
    37.3.4 Assumption #4: Constancy of the Length of the Layers . . . 643
    37.3.5 Assumption #5: Constancy of the Speed of the Reclaiming System . . . 643
  37.4 Parallel between the Reclaiming Phase and Systematic Sampling . . . 644
    37.4.1 Conditions for the Nullity of the Mean m(HFE) of the Heterogeneity Fluctuation Error HFE . . . 646
    37.4.2 Consequences of the Nullity of the Mean m(HFE) . . . 646
    37.4.3 Properties of the Variance s²(HFE) . . . 646
    37.4.4 Sill of the Output Variogram . . . 648
  37.5 Testing the Effectiveness of a Homogenization Process . . . 648
    37.5.1 Necessary Precautions during the Experiment . . . 648
    37.5.2 Determination of the Punctual Variance of the Input Material . . . 649
    37.5.3 Determination of the Parameter V1 of the Input Variogram . . . 649
    37.5.4 Calculation of the Variogram of the Very Long Term . . . 651
    37.5.5 Calculation of the Variogram of the Output Material . . . 651
  37.6 Conclusions . . . 652

Part XII: Recommendations to Manufacturers of Sampling Equipment and to Engineering Firms . . . 655

38. Recommendations for the Design, Installation, and Maintenance of Sampling Systems . . . 657
  38.1 Attempt to Define the Nature of the Problem . . . 657
    38.1.1 Why Is It That Sampling Systems Don't Work? . . . 657
    38.1.2 Why Is It That Incorrect Sampling Systems Are Built? . . . 658
    38.1.3 List of Typical Incorrect Sampling Systems . . . 659
    38.1.4 Why Do Customers Still Support the Use of Incorrect Samplers? . . . 660
    38.1.5 How Can Incorrect Samplers Be Improved? . . . 661
    38.1.6 Can a Bias Test Credit an Incorrect Sampling System? . . . 661
    38.1.7 Are Incorrect Sampling Systems Good Enough for Process Control? . . . 662
    38.1.8 Is It Possible to Achieve Material Balance with Incorrect Sampling Systems? . . . 662
    38.1.9 The Special Case of Environmental Sampling . . . 662
    38.1.10 Responsibility of Standards Committees on Sampling . . . 662
    38.1.11 What Can a Variographic Experiment Tell about Incorrect Sampling Systems? . . . 663
  38.2 Correct Sampling Systems Can Go Wrong . . . 663
    38.2.1 Can We Find Correct Sampling Systems on the Market? . . . 663
    38.2.2 How Can a Correct Sampling System Be Improved? . . . 664
  38.3 Integration of Correct Sampling Systems in All New Plant Feasibility Studies . . . 664

References . . . 667
Index . . . 677

Foreword

Sampling Is One of the Basic Operations of the Human Mind. It Does Not Receive the Attention It Deserves.

Sampling is so deep-rooted in our habits that it usually takes place unconsciously. Visiting France for the first time, a foreign friend of mine, just after clearing immigration and customs, expressed his surprise at observing that most Frenchmen were born in the Antilles: without being aware of it, he was judging the French population ON A SAMPLE, AND A HEAVILY BIASED ONE AT THAT. For reasons unknown to me, a large proportion of French immigration and customs officers originates from Guadeloupe and Martinique; the group they constitute may not be regarded as a representative sample of the overall French population. God knows how many biased judgments suffer from such sampling errors!

Francis Pitard's book does not deal with the sampling of human populations, but it is exactly the same kind of error that is committed when sampling the contents of a coal or iron ore rail wagon by taking a couple of scoopfuls on top of the load, and in many other instances which will be reviewed in this book full of practical examples.

When Francis decided to include the new developments of my sampling theory in his courses and to publish a shortened, digested, English version of my latest French textbook (1988), I was very grateful to him, because there was a need for such a book just as there is a need for his courses. It is not an easy thing to develop a coherent mathematical theory, and I can witness to that. But it is a much more difficult task to present its results in simple words that the users will easily understand. Of this art, Francis is a master, and the success of his numerous courses and lectures witnesses to his rare talents, to his deep knowledge of the sampling theory, and to his practical experience in that field.

I am sure the American reader will appreciate the quality of the subtle alchemy that transmutes arduous demonstrations into simple explanations. I am very grateful to Francis for his efforts to make the new developments of the sampling theory available to the English-reading public. I wish his book the great success it deserves, and it is with great pleasure that I invite you to follow him in his exploration of the sampling theory.

A last word: Sampling is not gambling.

Dr. Pierre Maurice Gy (1924-2015)



Preface

The first edition of this book (1988) was devised as an adequate but simplified condensation of Dr. Pierre M. Gy’s work, suitable as a two-volume manual for the teaching of a short course. Despite a few minor mistakes, the first edition was a success and well received by engineers around the world. For example, the book helped to launch a series of short courses sponsored by Francis Pitard Sampling Consultants and the Australian Mineral Foundation around the Pacific Basin. Therefore, it was only natural that a second edition of the book would be in demand.

The second edition of this book (1993) stayed consistent with the spirit of the first book; however, collaboration with some practicing mathematicians, statisticians, and research scientists helped me to prepare a much better, more complete version. It was an overwhelming success, and this edition of the book remained a classic around the world for more than 25 years. The attendees of my courses learned from me, and I most certainly learned from them. It rapidly became clear that the Theory of Sampling needed to be integrated with the philosophies of Statistical Process Control, Six Sigma, and Total Quality Management. This was the primary objective of the additional chapters of the second edition. Therefore, the “motto” for the second edition was: there are no such things as reliable feasibility studies, unbiased ore grade control, accurate environmental assessments, or effective process control if you cannot identify and minimize the eight major sources of sampling variability. Furthermore, Total Quality Management cannot exist without effective Statistical Process Control, which itself cannot exist without a thorough understanding of the Theory of Sampling.

The World Conference on Sampling and Blending (WCSB) was created in 2003 and has taken place every two years on a different continent ever since. A huge amount of new knowledge was acquired over eight successful conferences. Sadly, however, a new textbook on the Theory of Sampling (TOS) was never created; only compilations of many papers were published. Therefore, today, the second edition of this book, though outdated, is still immensely successful around the world as a classic reference. This unfortunate gap fully justifies the creation of a third edition, which is long overdue.

This third edition could have been highly technical on Sampling and Statistical Process Control, but it would have missed its primary objective, which is to successfully explain why each of us, in the Mining Industry and other industries, at any given place of a process, is a customer and supplier of somebody else. Therefore, it is imperative for this third edition to provide high-level theoretical background, but also pragmatic solutions to the many problems we face every day. Because of the background of the author, the reader may complain about an overemphasis on the Mining Industry; however, TOS is universal and applies to many other industries, such as the Food Industry, the Chemical Industry, the Oil and Gas Industry, the Pharmaceutical Industry, and many more. This third edition of the textbook could have been a highly philosophical guideline about the many subtleties of Total Quality Management, but it would have missed its primary objective, which is to successfully explain why each of us, in all these industries, at any given place of a process, is the means by which objectives are met on target, within specifications, and within acceptable cost.


Therefore, what is special about this third edition of the book is that it seeks an equilibrium between how far someone must go into the theoretical and technical aspects of a given task to better understand, predict, control, and improve the conditions leading, at the end of the day, to a successful job. The theoretical aspect of our problems is an essential means of penetration. The technical aspect of our problems is a necessary implementing process. The pragmatic aspect of our problems is a necessary product of our creativity, and we are paid to be pragmatic without giving up too much on theoretical and technical necessities; this is how we become successful professionals. If we learn how to do this, we shall never lose sight of our purpose: every time we leave work, we shall feel happy and proud of having made the job of somebody else easier, better, and more effective. We made somebody else look good. That somebody else can be a boss, a subordinate, a coworker, or even someone from another department or another company.

Some readers familiar with my former editions may be tempted to say: “It is just another one of those books on sampling! We went through this before.” This third edition of the textbook is different, because a special effort has been made to present sampling and laboratory problems in their inescapable economic context. The main reason the Theory of Sampling was neglected in the past is the failure to place it in its economic context. As a result, many important executives around the world saw the TOS as an academic achievement with no obvious practical value. To correct this unfortunate situation, this third edition of the book is a proper blend from several important worlds:

1. The Management Method of W. Edwards Deming
2. The Sampling Theory of Dr. Pierre M. Gy
3. The undeniable touch from Geostatistics, with variography leading to Chronostatistics
4. The extraordinary competence of a famous analytical chemist and sampling expert, C.O. Ingamells
5. The works of J. Visman
6. The modern philosophies of Statistical Process Control and Six Sigma

Blending these different worlds was a daring endeavor on my part. I hope that the result will make sense to many of my clients. This third edition of the book should help set priorities to optimize operations and eliminate costly, incorrect practices accumulated through the years by tradition, mistaken beliefs, shortcomings of naïve common sense, and wrongdoings from practitioners and manufacturers with conflicts of interest. The third edition of the book is therefore the basic, essential tool to make sure due diligence in many modern applications is carried out in a sound and practical way.

Dr. Francis F. Pitard
President, Francis Pitard Sampling Consultants, LLC
Broomfield, Colorado, USA

The Author

Dr. Francis F. Pitard is a consulting expert in Sampling, Statistical Process Control (SPC), and Total Quality Management (TQM). He is President of Francis Pitard Sampling Consultants (www.fpscsampling.com) and Technical Director of Mineral Stats Inc. (MSI) in Broomfield, Colorado, USA. He provides consulting services in many countries. Dr. Pitard has six years of experience with the French Atomic Energy Commission and fifteen years with Amax Extractive R&D. He taught Sampling Theory, SPC, and TQM for the Continuing Education Offices of the Colorado School of Mines, the Australian Mineral Foundation, the Mining Department of the University of Chile, and the University of the Witwatersrand in South Africa. He has a Doctorate in Technologies from Aalborg University in Denmark. He is the author of 36 papers published over the last 40 years. He has an outstanding expertise in all aspects of sampling, accumulated during a 20-year association with C.O. Ingamells and Dr. Pierre M. Gy. He coauthored, with C.O. Ingamells, Applied Geochemical Analysis, Wiley Interscience Division, John Wiley and Sons, Inc., New York, 1986, a 733-page textbook.

He published two historical novels analyzing the origins of the Easter Island people. Dr. Pitard lived for a total of six years in the South Pacific and had a strong interest in Polynesian archeology and philosophy.

“Heirs of a Lost Race”, 2001. Published by AuthorHouse. ISBN: 0-7596-9472-9
“Rapa Nui Settlers – by choice and necessity”, 2009. Published by AuthorHouse. ISBN: 978-1-4389-5158-4 (e); ISBN: 978-1-4389-2940-8 (sc); ISBN: 978-1-4389-2942-2 (hc)

Dr. Pitard published a biography of a French Resistance hero: “From Normandy to the Hell of Ravensbruck: Life and escape from a concentration camp – The true story of 44667”, the true story of Dr. Pitard’s great aunt during WWII. Page Publishing, Inc., 2016. ISBN 978-1-68348-728-9 (Paperback); ISBN 978-1-68348-729-6 (Digital)

He is the author of an essay on Nuclear Physics, titled “The Theory of Vacuoles and Low-Energy Nuclear Reactions”, 2017. http://mediahead.ca/Francis_Pitard_LENR/


Dr. Pitard’s doctoral thesis: Pierre Gy’s Theory of Sampling and C.O. Ingamells’ Poisson Process Approach, pathways to representative sampling and appropriate industrial standards, Aalborg University, campus Esbjerg, Niels Bohrs Vej 8, DK-67 Esbjerg, Denmark, 2009.

Part I

Introduction and a Management Strategy

Introduction

Numerous articles and a few books had been written about the sampling of particulate materials before this third edition. Then, why select Pierre Gy’s theory and the works of Visman and Ingamells? As a Pierre Gy Gold Medalist, I want to bring my knowledge and experience on the Theory of Sampling (TOS) and contribute to making sure TOS grows in a rational way, in spite of its many detractors. Looking at comments made around the world, it is clear that many statisticians and empiricists promoting “Measurement of Uncertainty” (MU) strongly believe that TOS is something they can live without. Such antagonism is misplaced, unjustified, and very unfair. I also strongly believe that the MU promoters, who most of the time are more comfortable with J. Visman’s work, need TOS, and vice versa. In this third edition of my book, a special effort is made to integrate J. Visman’s and C. O. Ingamells’s works into the TOS and create a unified foundation that may help create better sampling standards.

I have been closely associated with Pierre Gy’s work since 1969. Gy’s books are meant for university professors and sampling practitioners, who should read them thoroughly, which is rarely the case; they read these books in what I call “in diagonal,” missing many critically important subtleties. For those who are in charge of solving sampling problems every day, finding the adequate answer to their questions in these books may prove difficult, which is why they give up and look for more pragmatic, empirical approaches. There was therefore an obvious need for a “digest” of Pierre Gy’s books and a long-overdue attempt to unify his work with J. Visman’s and C. O. Ingamells’s works.

I have also been closely associated with C. O. Ingamells’s work (J. Visman’s school of thought) since 1973. A large amount of that time was spent in appreciating the respective strengths and weaknesses of these authors. Meanwhile, I discovered to my great surprise the inability of engineers and even scientists to deal effectively and rationally with sampling problems in general.


It became natural for us, in close association with the founders of the World Conference on Sampling and Blending (WCSB), to believe that an aggressive education program was needed worldwide. My association with Pierre Gy, C. O. Ingamells, and WCSB members provided me with the fundamental basis of an elaborate educational program and quickly confirmed what I had already known for some time: the work of Dr. Pierre Gy on the sampling of particulate materials is unique and statistically correct. However, there are gaps in his approaches, and the works of J. Visman and C. O. Ingamells definitely provide a powerful way to fill these gaps.

Pierre Gy breaks the total uncertainty manifestations into eight sources of sampling variability, which he called sampling errors; they are the Fundamental Sampling Error (FSE), the Grouping and Segregation Error (GSE), the Increment Delimitation Error (IDE), the Increment Extraction Error (IEE), the Increment Weighting Error (IWE), and the Increment Preparation Error (IPE), plus a long-range error and a periodic error for dynamic measurements. Other sources of variability should be added, such as the laboratory analytical error and the in-situ nugget effect (e.g., for geologists and geochemists), which is indeed a problem for sampling nonbroken materials. MU experts and empiricists seem to resent such a classification, as they think the conventional statistical analysis of data alone is enough to detect sampling problems; in this, there is a firm belief that all variability can be modeled by two components: a systematic one (bias, acceptable or not) and a stochastic variance (precision, acceptable or not). However, and this is critically important, detection is not identification, and most certainly not cure; obviously something of great importance is missing. As a result, MU should welcome TOS, because it effectively pinpoints where problems are, and this is the cardinal issue that gives irrefutable solutions to minimize each source of excessive sampling variability.

As a quick reminder, TOS makes a clear distinction between Uncertainty (i.e., after all sources of sampling biases have been minimized to a negligible level, and after precision has been reduced to an acceptable level relative to preselected, logical objectives) and Error, when no attempts to minimize sampling correctness problems and unacceptable precision are made. The word Error was selected by Pierre Gy because at the time, in the early sixties, in an overwhelming number of cases, sampling incorrectness and excessive precision problems were the rule rather than the exception, which has nothing to do with uncertainty and everything to do with major mistakes. These definitions in TOS may bother some MU experts, but I do not think they are totally incompatible with their ways of thinking. Therefore, it would be beneficial, through positive thinking, to negotiate intelligently, within WCSB if possible. This third edition of my book is my contribution to such a critically important endeavor.
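Returning to the eight sources of sampling variability listed above: to a first approximation they are independent, so their variances, not their standard deviations, add up. The following minimal Python sketch (all numerical values are hypothetical, chosen only for illustration) shows how such an error budget immediately pinpoints the component that dominates the total variability, which is exactly the identification step that a single global MU figure cannot provide:

```python
import math

# Hypothetical relative standard deviations (as fractions of the true
# content) for each error component; in practice these must be
# estimated experimentally for a given material and sampling system.
error_components = {
    "FSE": 0.05,   # Fundamental Sampling Error
    "GSE": 0.03,   # Grouping and Segregation Error
    "IDE": 0.04,   # Increment Delimitation Error
    "IEE": 0.02,   # Increment Extraction Error
    "IWE": 0.01,   # Increment Weighting Error
    "IPE": 0.02,   # Increment Preparation Error
    "AE":  0.01,   # Analytical Error (not a sampling error)
}

# For independent errors, variances are additive.
total_variance = sum(s ** 2 for s in error_components.values())
total_rsd = math.sqrt(total_variance)
print(f"Total relative standard deviation: {total_rsd:.1%}")

# The component with the largest variance dominates: reducing a minor
# term has almost no effect until the dominant ones are addressed.
dominant = max(error_components, key=error_components.get)
print(f"Dominant component: {dominant}")
```

Run with these illustrative numbers, the budget shows that no improvement of the analytical method (AE) can rescue a protocol whose FSE and IDE are out of control.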
After 40 years of involvement in the Theory of Sampling, I owe my knowledge to many experts and many friends, some of them long gone; those who are gone did not die, they simply faded away in a superb intellectual glow. First, I thank these extraordinary beings, who I wish would still be here today, as they helped me so much.

Charles Oliver Ingamells taught me the wonderful world of Poisson processes, the classic chemical methods of analysis, and the secrets of preparing reliable Reference Materials. He broke many paradigms I floundered with, setting me free to be who I am today.

Dr. Michel David, a famous geostatistician with whom I shared many ideas on sampling, taught me the fundamentals of Geostatistics at the Colorado School of Mines (CSM) and encouraged me to teach sampling courses at the continuing education department of CSM: this was the very beginning of my success in 1987. Ever since, every year in October, I have taught a sampling course at CSM for the industry; the course has acquired a worldwide reputation.


My thanks to Dr. Pierre M. Gy are limitless. He was my true mentor, a good man and a good friend. From the very beginning, when I was introduced to his work, not only did I develop a passion for his massive creativity, but I also admired his ethics as a good, caring human being. It has always been my opinion that Pierre deserved far more from the academic world than what he was given over the years and, above all, should have been given the highest honors of the land from his government. What kind of man can be a master in mineral processing, mathematics, and statistics, and create the Theory of Sampling? His work must never die, and I take it as my personal mission that it never will.

Dr. Dominique François-Bongarçon, president of Agoratek International, was for many years my sampling-expert competitor, and we often crossed each other’s path in our consulting and teaching practices around the world. I consider Dominique a bright expert showing a well-balanced knowledge of the TOS and of Geostatistics. Dominique’s work is the best available today to offer a clear, essential link between the TOS and Geostatistics, and I would like to extend that link to Chronostatistics. I am thankful for his positive work, his remarkable creativity, and the class with which he is willing to help all his peers in a difficult and subtle domain (i.e., TOS) that has been overlooked by academicians for too long. He is a priceless, good-humored friend. He was a recipient of Pierre Gy’s Gold Medal in 2007.

Professor Pentti Minkkinen, professor emeritus at Lappeenranta University of Technology in Finland, is an inspiring friend and has a deep knowledge of Pierre Gy’s work. He is a formidable contributor to WCSB. He was a recipient of Pierre Gy’s Gold Medal in 2005.

Dr. Ralph Holmes, of the AusIMM, is Chief Research Scientist at CSIRO Mineral Resources in Australia. Ralph has been a huge contributor to WCSB ever since its creation in 2003 and has a huge influence on ISO standards committees for sampling. He was a recipient of Pierre Gy’s Gold Medal in 2015.

Dr. Richard Minnitt, professor at the School of Mining Engineering, University of the Witwatersrand, South Africa, told me once:

Francis, I took three sampling courses with you and I would have expected to hear the same routine over and over each time. It was never the case, and in every one of these courses I learned something new I wished I had known a long time ago.

There again, I was fascinated by a brilliant academician who would show me how difficult it is to reconcile academicians and industrial practitioners, sampling experts and statisticians, and statisticians with geostatisticians. Dick’s friendly ways of communicating with the rest of the world were for me an eye-opener to these difficulties, and I thank him immensely for this. He was a recipient of Pierre Gy’s Gold Medal in 2017.

Dr. Pedro Carrasco Castelli, chairman of the geological society in Chile and director of Geology at Codelco, was a longtime friend with whom I shared my knowledge from Gy and Ingamells for more than 15 years. Pedro’s immense knowledge of geology, geometallurgy, Geostatistics, and Chronostatistics was a source of inspiration for me and an incentive to find better ways to help people in the copper industry. His continuous guidance was a way for me to look at myself, reflect on findings, and do something about them to achieve constant improvement. He was a recipient of Pierre Gy’s Gold Medal in 2011, and unfortunately passed away the same year.


Kim H. Esbensen, PhD, is research professor (Geoscience Data Analysis and Sampling) at GEUS (National Geological Surveys of Denmark and Greenland), external chemometrics professor with the ACABS research group, Aalborg University, Denmark, and external professor (Process Analytical Technologies, or PAT) at Telemark Institute of Technology, Norway. He was professor extraordinaire at Stellenbosch University (Institute of Wine Biotechnology) in the period 2005–2010. Esbensen was awarded an honorary doctorate at Lappeenranta University of Technology (LUT) in 2005. In 2009 he was inducted into the Danish Academy of Technical Sciences (ATV). Since 2001, he has devoted most of his scientific and R&D efforts to the themes of representative sampling of heterogeneous systems and PAT, respectively. He is a member of the International Chemometrics Society and the international sampling community, as well as six other scientific organizations. Esbensen is a long-standing contributor to R&D in applied chemometrics, acoustic chemometrics, multivariate image analysis (MIA), representative sampling in science, technology, and industry, and process analytical technology (PAT), and has also contributed to applied biotechnology/bioenergy. He has given more than 300 lectures/courses, published 175+ papers, is the author of a leading chemometric textbook, and has four other didactic book chapters to his credit. He has served as opponent at 22 doctoral defences, has supervised ~50 MSc and 10 completed PhD degrees, and has 5 current PhDs. He has taught numerous courses in several university curricula, including chemometrics, the Theory of Sampling (TOS), PAT, and philosophy of science. He is chairman of the taskforce DS-Forum 205, responsible for writing the world’s first horizontal (matrix-independent) sampling standard, launched in CEN in September 2012. He was a recipient of Pierre Gy’s Gold Medal in 2013.

Pitard joined Professor Kim Esbensen to promote WCSB1 in Aalborg, Denmark, in 2003; WCSB2 in Brisbane, Australia, in 2005; WCSB3 in Porto Alegre, Brazil, in 2007; WCSB4 in Cape Town, South Africa, in 2009; WCSB5 in Santiago, Chile, in 2011; WCSB6 in Lima, Peru, in 2013; WCSB7 in Bordeaux, France, in 2015; and WCSB8 in Perth, Australia, in 2017. All together, we are also working as a team to bring the awareness of the Theory of Sampling to the general public all around the world, extending beyond the academic and industrial audiences. Without them, this third edition of my book could not exist.

Now, it is time to make a long list of historical references highly relevant to this third edition.

Historical Summary

From the early age of the industrial revolution, miners, geologists, metallurgists, chemists, regulators, and even investors have expressed concern about the representativeness of samples, which is essential for any kind of evaluation, quality assurance, and quality control program. Many “experts” developed their own rules of thumb by following their experience, common sense, and intuition. A few of these well-intentioned pioneers proposed some statistical models, being aware that the collection of a sample is a selection process ruled by probability laws and affected by many sources of error. Very few of these individuals had the will to undertake the task of writing what they found, and those who dared often covered only a small segment of the whole problem. From experience, these pioneers knew that the elaboration of a complete sampling theory would be enormously complicated, and they were discouraged by the challenge, or never had the time, the resources, or the will to do something about it.


There are those who believe that being a statistician is a necessary and sufficient condition to become a reliable expert in sampling: it is indeed a necessary condition, but it is not and never will be a sufficient condition. The sampling expert should also have a solid knowledge of mining, metallurgy, chemistry, geochemistry, physics, geostatistics, and engineering; basically, he or she should be a “Renaissance” man or woman, which is easier said than done. Only those having a broad scope over these fields may become reliable experts in sampling. These men and women do exist; we have plenty of them within WCSB. However, they are not necessarily interested in spending their lives meditating on a boring subject such as sampling, unless it becomes a passion.

For many years there was a desperate need for a sampling theory, and the industry contented itself with partial approaches without any logical connection between them. Half of the twentieth century went by without anyone assembling the basis of a coherent and complete sampling theory. Then, an unfortunate phenomenon happened: those in need of implementing standards learned to live without a theory and accumulated numerous empirical rules that are still plaguing many industries today.

Already now, and in the near future, many industries in need of high technology are going to put pressure on regulators for a rapid implementation of demanding quality standards. These industries need to rely on correct sampling systems and optimized sampling protocols. Environmental concerns become more and more acute, safety of food and pharmaceutical products is also of great concern, and everyone has the responsibility of doing his or her job in harmony and compliance with strict regulations; those in charge of regulatory agencies need to rely on correct sampling systems and reliable sampling protocols instead of relying on politicians and lawyers. International competition is making considerable changes in our daily lives: those having the responsibility of making feasibility evaluations often need to rely on high-quality data, and much of these data depend on correct sampling systems and reliable sampling protocols. The list could go on without exercising a great deal of imagination. The empiricist may ask:

Why is a complete and coherent sampling theory necessary? Why go through a great deal of difficulty to dissociate the overall estimation error into its components? Why not just perform an analysis of the total variance of available data and go from there? Why not just rely on bias tests? Etc.

The German philosopher Immanuel Kant, in his “Critique of Pure Reason,” could answer: “Things which exist in the world are real, but the human mind is needed to give them order and form, and to see the relationships between them. Only the human mind can surround them with space and time.” Closer to us, Albert Einstein could answer:

I am convinced that we can discover by means of purely mathematical constructions the concepts and the laws connecting them with each other, which furnish the key to the understanding of natural phenomena. Experience may suggest the appropriate mathematical concepts, but they certainly cannot be deduced from it.

As usual with this remarkable man, there is a deep and subtle philosophy behind this statement. So far, only one complete sampling theory has been published and taught; we will call it the Theory of Sampling (TOS). Before entering the details of this theory, it would be useful to list the names of those who should not be forgotten, because of the progressive ideas and inspiration they contributed to the author of this third edition.


From 1884 to 1909: D. W. Brunton [1–3]
1909: E. G. Bailey [4]
1928: B. Baule and A. Benedetti-Pichler [5]
1945: M. D. Hassialis et al. [6]
From 1964 to 1966: R. M. Becker [7]
1964: A. D. Wilson [8]
1967: A. W. Kleeman [9]
From 1957 to 1972: J. Visman [10–13]
From 1970 to 1986: C. O. Ingamells et al. [14–22]

From 1992 to 2017: D. François-Bongarçon [23–27]

Between 1965 and 1973, while Pierre Gy was preparing his general Theory of Sampling, the French mathematician G. Matheron was actively working on his regionalized variables; his findings had a tremendous impact on Pierre Gy’s theory [28–31]. At that time, it became obvious that two models were necessary to assemble all the pieces of the jigsaw puzzle:

• A continuous model, taking into account the continuous nature of the space or time variability of the lot’s characteristics, was suggested. This is where the notions introduced by the regionalized variables (variography) are going to be a powerful tool.
• A discrete model was also necessary, taking into account the discrete nature of the population of fragments submitted to the sampling operation, described by the small-scale fundamental notions of constitution and distribution heterogeneities.

The study of the short-range heterogeneity fluctuation term links both models together. At this stage, the problem was solved, and Pierre Gy’s Theory of Sampling rapidly took its final form and became the only theory of sampling of particulate materials to be accepted and undisputed worldwide. Over the years, small gaps would naturally be taken care of by various well-known authors, who will be referred to in due time in this third edition.

For those interested in perfecting their knowledge of the TOS, it would be most suitable, perhaps essential, to study the various publications of Pierre Gy in the chronological order in which they were created; doing so provides a unique insight into the enormous task that has been accomplished. The study of these publications shows many examples of application and important historical cases. Pierre Gy’s work can roughly be divided into several phases, as follows:

• The early phase, from 1953 to 1963 [32–41]
• The transition phase, from 1964 to 1967 [42–49]
• Assembling the pieces of the “jigsaw puzzle,” from 1967 to 1974 [50–56]
• First attempt at publishing the Theory of Sampling, from 1975 to 1979 [57–63]
• Second attempt at publishing the TOS, from 1979 to 1982 [64–72]
• Semifinal version of the TOS in 1982 [73]
• Continuing the work, from 1982 to 1988 [74–77]
• Final version of the TOS, from 1988 to 1999 [78–80]


There was no doubt that room for improvement existed; however, it would be extremely unwise to write a new theory while ignoring what had already been done. As far as this author is concerned, what was done by Pierre Gy is sacred and should be preserved in all its details. Pierre Gy’s TOS is characterized by an impeccable Cartesian approach that leaves no room for empiricists and represents a tremendous breakthrough in the field of quality assurance and quality control. This chapter would be incomplete without a reference to the following important documents, created during the WCSB conferences and Sampling conferences by the Pierre Gy Gold Medalists, recognized for their excellence in promoting the Theory of Sampling:

From 2003 to 2010: P. Carrasco [81–86]
From 2003 to 2017: P. Minkkinen [87–94]
From 2003 to 2017: D. François-Bongarçon [95–114]
From 2003 to 2017: F. F. Pitard [115–142]
From 2003 to 2017: K. H. Esbensen [143–152]
From 2003 to 2017: R. J. Holmes [153–162]
From 2007 to 2017: R. C. A. Minnitt [163–171]

Subdivisions of the Text

Part I: Introduction and a Management Strategy

Because the prevention of any misunderstanding is essential, Chapter 1 has been devoted entirely to the definition of basic terms and notations. Chapter 2 suggests a more effective management strategy based on a clear definition of structural problems (i.e., causes of problems) and circumstantial problems (i.e., effects generated by structural problems). The confusion between these two kinds of problems is causing alarming financial losses for many corporations around the world.

Part II: The Necessary Statistical Methodology

Chapter 3 is a refreshing presentation of important statistical concepts for the mining engineer, geologist, metallurgist, chemist, manufacturer of sampling equipment, regulator, or scientist who has had little exposure to statistics. A careful reading of this chapter is essential before entering the complex subject of sampling. Chapter 4 is a logical introduction to the components of the Overall Estimation Error OEE.

Part III: Heterogeneity and Homogeneity

Chapter 5 is a logical introduction to the notion of heterogeneity, because it is a primary structural property of all materials we want to sample. Chapter 6 is devoted to the heterogeneity of a zero-dimensional lot, where a good understanding of Constitution Heterogeneity CH and Distribution Heterogeneity DH is essential.


Chapter 7 is devoted to the heterogeneity of a one-dimensional lot, and the notion of variography, a critically important tool used throughout the book, is introduced.

Part IV: Sampling Errors Introduced by Various Forms of Heterogeneity

Chapter 8 is an in-depth study of one-dimensional lots and an introduction to the continuous model. Chapter 9 is an in-depth study of zero-dimensional lots and an introduction to the discrete model. Chapter 10 is devoted to the Fundamental Sampling Error FSE, which is the minimum error possible in a sampling process of particulate materials. If something goes wrong there, then the sampling process becomes useless and vastly misleading. Chapter 11 shows how to minimize the variance of the Fundamental Sampling Error FSE in sampling protocols and gives an introduction to sampling nomographs. Chapter 12 is devoted to sampling for particle size distribution analysis. This chapter has a strong connection to Chapters 10 and 11 in its critically important applications. Chapter 13 is an introduction to the Grouping and Segregation Error GSE and an in-depth study of segregation and its negative effects in sampling protocols and in processing plants.

Part V: Integration of Visman and Ingamells’s Works into the TOS

Chapter 14 is an introduction to Visman’s work. Chapter 15 is an introduction to Ingamells’s work. Chapter 16 is an in-depth demonstration that Visman’s and Ingamells’s works can be easily linked to the Theory of Sampling and actually add to it in a beneficial way.

Part VI: The Special Case of the In-Situ Nugget Effect

Chapter 17 is an introduction to a difficult problem in Geostatistics, arising when a constituent of interest clusters in such an excessive way that it becomes a sampling problem as well, although it is not the sampling of particulate material but the sampling of isolated clusters in unbroken materials.

Part VII: The Capital Notion of Sampling Correctness

Chapter 18 is an in-depth introduction to the Increment Materialization Error IME. Chapter 19 is a discussion of the various sampling modes that can be selected to implement a sampling protocol. Chapter 20 is an in-depth study of the Increment Delimitation Error IDE taking place during exploration and mining, and also when sampling the environment. Chapter 21 is a review of the Increment Delimitation Error IDE taking place within processes. Chapter 22 is a review of the Increment Delimitation Error IDE taking place at the laboratory. Chapter 23 is an in-depth study of the Increment Extraction Error IEE taking place during exploration and mining.


Chapter 24 is a review of the Increment Extraction Error IEE taking place within processes. Chapter 25 is a review of the Increment Extraction Error IEE taking place at the laboratory. Chapter 26 is devoted to the Increment Preparation Error IPE and the notion of sampling integrity, along with the many problems created by contamination, losses, alteration of physical and chemical properties, training of personnel, and possible fraud during the sampling of valuable commodities.

Part VIII: The Increment Weighting and Weighing Errors

Chapter 27 is an in-depth study of the Increment Weighting Error IWE, as good sampling systems and protocols must be reasonably proportional. Proportional sampling is the future of good sampling systems. Chapter 28 is a review of weighing systems, whose incorrectness is indirectly connected to sampling problems. They are often a major source of reconciliation problems, as people may not know exactly how much tonnage they process.

Part IX: Review of Some Notorious Sampling Problems

Chapter 29 is a review of a problem with a bad reputation in commercial sampling: sampling for the determination of moisture content. Companies often have severe disputes and even litigation on this subject. Chapter 30 addresses the many peculiarities of the sampling of precious metals and other very heavy minerals. Chapter 31 is a review of sampling practice for liquid and solid wastes and sampling of the environment. Chapter 32 is a discussion of solvable and unsolvable sampling problems.

Part X: Chronostatistics

Chapter 33 suggests a strategy to take better advantage of existing chronological data. Chapter 34 is devoted to the use of the variogram to elaborate meaningful process control charts. Chapter 35 shows case studies where variography was an effective tool to discover sources of problems.

Part XI: Homogenization

Chapter 36 is an introduction to homogenizing processes. Chapter 37 is devoted to bed-blending techniques.

Part XII: Recommendations to Manufacturers of Sampling Equipment and to Engineering Firms

Chapter 38 is a list of recommendations based on the contents of this third edition and will provide a quick checklist to users of the TOS.


1 Definition of Basic Terms and Symbols

The Theory of Sampling (TOS) has evolved considerably since the World Conference on Sampling and Blending was created in 2003. Numerous authors brought their own definitions to many fundamental terms, and notations were not always consistent. Because the right word sometimes does not exist in current sampling practice, or because various authors may disagree on the selection of such words and symbols, it is essential to place this chapter at the beginning of the book to facilitate the necessary work so kindly offered by many talented participants. Because specific notations are frequently associated with basic terms, it is natural to define them as well. For some important terms, and for those for which the definition is often controversial, we extend our discussion to make certain that the reader will pursue his or her study of TOS in an unambiguous and comfortable way. Reading this chapter carefully will most certainly prepare the reader for all the following chapters.

1.1 Basic Terms

Accuracy – The definition of accuracy is often controversial. It is incorrect to include the notion of precision and reproducibility within the notion of accuracy; consequently, the definitions given by most current dictionaries are misleading, confusing, and incorrect. Accuracy has been given several definitions, and statisticians and engineers disagree completely on those definitions. A sampling bias is defined as the mean m(ISE) of the sampling error (also referred to as the Increment Selection Error ISE). As far as sampling is concerned, a sample is said to be accurate when the absolute value of the bias |m(ISE)| is smaller than a certain standard of accuracy, m0(ISE), which is regarded as acceptable for a given purpose and clearly defined in preselected Data Quality Objectives (DQO). Basically, what is acceptable to someone may turn out to be unacceptable to somebody else, and it is an issue to clarify before a sampling campaign is implemented. Accuracy is a property of the mean of a given error exclusively. It is important to make a distinction between an accurate sample and an unbiased sample. An unbiased sample implies that the mean of the Increment Selection Error m(ISE) is equal to zero. Even when carried out in an ideal way, sampling is always biased due to the particulate structure of most materials. The bias is never strictly zero; it may be negligible, but it is always different from zero. We should speak of an accurate sample but not of an unbiased sample. In fact, we can forget about the unbiased sample because it is a limit case never encountered in sampling practice.


Analysis – Often, when we speak of analysis, we speak of the estimation or determination of the content of a constituent of interest; this is an incorrect practice. Analysis is the opposite of synthesis. It is the separation, partial or complete, of a material into its constituents. How these constituents are determined after their separation is an entirely different matter. When we talk of analysis, an analytical procedure, or an analytical error, we should talk of a determination, a procedure for a determination, or a determination error. We choose to go along with the word analysis, although it is not correct, because it is so widely misused in various industries. We may speak of chemical analysis, moisture analysis, particle size distribution analysis, percent solids analysis, or variance analysis in statistics.

Analytical Error AE – Error generated by nonoptimized operations such as assaying, moisture determination, particle size analysis, estimation of the percent solids of a slurry, and so on. This error does not include the last selection process from which the final analytical subsample is obtained; that stage shall be a part of the Total Sampling Error TSE.

Analytical Uncertainty AU – This refers to a certain amount of uncertainty left in the analytical process after all sources of potential analytical bias have been eliminated in a preventive way, and after the necessary tests have been performed to bring the uncertainty below or equal to an acceptable level dictated by Data Quality Objectives. It should be clearly understood that the use of the word uncertainty would be inappropriate when biases have not been minimized in a preventive way and when the amount of uncertainty is not compatible with the DQO, which is essentially an error by operators.

Average – Synonym of mean; however, the term mean is reserved for an infinite, or at least very large, population, and m(x) is called the true unknown mean of the distribution of a random variable x. The term average is reserved for the experimental estimate of m(x) when a limited amount of data is available, and is noted X.

Batch – Amount of material from which increments making up samples are selected. A batch of material shall have well-defined boundaries: the content of a bag, truck, railroad car, ship, and so on. A batch of material is often called a lot. In the TOS it is called the Lot L.

Bias B – A sampling bias is equal to the mean of the Increment Selection Error m(ISE) when this mean is different from zero. The relative sampling bias associated with the estimator as of the true unknown critical content aL of a lot L can be written:

B(as) = m(ISE) = [m(as) – aL] / aL

where as is the critical content of a sample S selected from the Lot L. A sampling bias is always introduced by incorrect sampling. In addition, we can state that correct sampling is not biased in a significant way. This leads to two fundamental notions:

• Sampling correctness is defined as a necessary structural primary property, as demonstrated by Matheron in 1967 [138] and later developed in this book, depending on specific conditions over which we have control, but independent of the sampling problem itself, over which we have no control.
• Sampling accuracy is defined as a circumstantial secondary property, depending on the respect of the conditions of sampling correctness, over which we have control, and on the intrinsic properties of the material to be sampled, over which we have no control.
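As a numerical illustration of this definition (a minimal sketch; the replicate assays and the reference content standing in for aL are hypothetical), the relative bias can be estimated as follows:

```python
# Relative bias B(as) = (m(as) - aL) / aL, estimated from replicate
# assays; in practice aL is never known exactly, so a certified
# reference value is used in its place. All values are hypothetical.
replicate_assays = [1.02, 0.98, 1.05, 1.01, 0.99]  # % copper, say
reference_content = 0.95                            # stands in for aL

mean_as = sum(replicate_assays) / len(replicate_assays)
relative_bias = (mean_as - reference_content) / reference_content
print(f"m(as) = {mean_as:.3f} % Cu, B(as) = {relative_bias:+.1%}")
```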


Circumstantial Property – A property depending on the sampling problem itself, over which we have no control. It is also a secondary property (e.g., accuracy is a circumstantial property).

Comminution – A crushing, grinding, or pulverizing stage that diminishes the fragment size d of a lot, sample, or increment.

Component – An elementary part or a constituent that can be separated and quantified by analysis. As far as sampling is concerned, there are several kinds of components:
• Chemical components: usually, assaying leads to the determination of the content of a chemical component; it can be an element, a molecule, or a mineral.
• Physical components: the domain occupied by the lot can be divided into a set of subdomains respectively occupied by a single physical component of the material. The TOS of particulate materials looks at the proportions of physical components. There are two categories of physical components:
  ○ Active physical components: the set of components the proportion of which is to be estimated by analysis; it could be a set of minerals, a set of particle size fractions, or a liquid such as water if the critical content is the moisture content.
  ○ Passive physical components: the set of components the proportion of which is irrelevant for a given analysis: for example, the interstitial air when estimating the moisture content of a solid material.
• Critical component: the chemical or physical component the proportion of which is to be estimated.

Composite Sample – A sample made by the reunion of several distinct subsamples. Composite samples are often prepared when it is not economically feasible to analyze a large quantity of individual samples. This should not be confused with a sample made of several increments, so let’s define both in an unambiguous way, because it has been a frequent issue in environmental or food sampling practices.
• A composite sample is the combination of various samples collected over a large space or a long time, to minimize the cost of assaying them all.
• A sample is the combination of one or several increments collected over a small space or a short time, to minimize the negative effect of small-scale segregation.
Of course, what is large and what is small remains subjective: sampling practitioners must use their judgment relative to well-defined DQO.

Constitution Heterogeneity CH – The constitution heterogeneity CHL of the lot L is the heterogeneity that is inherent to the composition of each fragment or particle making up the lot. The greater the difference in composition between each fragment, the greater the constitution heterogeneity. The constitution heterogeneity CH could also have been called the composition heterogeneity.


There also, the sampling practitioner must use his or her judgment as to how large the lot referred to is. As a rough guideline, we could refer to the area of influence of a collected sample (i.e., a typical diamond drilling intercept in the mineralized zone of a given geological unit, a 12-hour lot collected in a processing plant, a decision unit in the environmental or food industry, etc.). All these must be the object of careful discussion when setting logical DQO.

Content – Because it is always easier to deal with dimensionless relative values, we shall use the word content for the proportion of a given active component:

Content = Weight of a given active component / Weight of all active components

Consequently, we shall call critical content a the proportion of a critical component; the critical content of the lot L is denoted aL, the critical content of a sample S is denoted as, and so on.

Critical content aL = Weight of the critical component in the lot L / Weight of all active components in the lot L

Continuous Model – Because sampling is a selection process, what is called the continuous model of sampling should be called the continuous selection model. For simplicity, we call it the continuous model throughout the text. Continuity is a mathematical concept; however, matter is essentially discontinuous. When we look at a lot L using the continuous model, we voluntarily disregard all discontinuities within the lot L and look at the properties of every point X of the entire domain of L, irrespective of the fact that X may or may not belong to an active component.

Continuous Selection Error – The Continuous Selection Error, now preferably called the Heterogeneity Fluctuation Error HFE, is generated by the immaterial selection process and was regarded in older publications as the sum of two complementary errors. Using the new notations, they are:
• The Quality Fluctuation Error QFE: taking into consideration the fluctuation of the critical content a as a function of time t (it could be distance). QFE can be estimated by investigating the properties of the quality function a(t).
• The Quantity Fluctuation Error, preferably called the Increment Weighting Error IWE: taking into consideration the fluctuations of the flow rate μ of the active components as a function of time t if working in a plant. IWE can be estimated by investigating the properties of the quantity function μ(t).
This decomposition is not a must, as the Heterogeneity Fluctuation Error HFE is regarded as the result of the total heterogeneity contribution h. In other words, HFE contains both the qualitative term QFE and the quantitative term IWE.

Correct Selection – Sampling is correct when it gives all elements in the batch to be sampled a uniform probability P of being selected. When this condition is not fulfilled, the sampling selection necessarily becomes incorrect, and it is important to remember that incorrect sampling is always a form of gambling, responsible for transient biases:

Correct selection probability: P = constant

Furthermore, sampling is correct when all elements that do not belong to the batch to be sampled have a zero probability of being selected. Such a concept should not be taken lightly; and yet, every day, everywhere around the world, in areas where it can hurt most, the most elementary rules of sampling correctness listed in this book are transgressed, either because of ignorance or deliberately in the name of practicality.
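The consequence of a nonuniform selection probability is easy to demonstrate numerically. The sketch below (Python; the lot, its segregation with depth, and the probabilities are all hypothetical) compares a correct selection, where P is constant for every fragment, with an incorrect one that favors fragments near the top of the load:

```python
import random

random.seed(7)

# Hypothetical lot of 10,000 fragments; the critical content of each
# fragment trends with its depth in the load (segregation), so grades
# near the top differ systematically from the lot mean.
lot = [(depth, 1.0 + 0.5 * depth)
       for depth in (random.random() for _ in range(10_000))]
lot_mean = sum(grade for _, grade in lot) / len(lot)

def sample_mean(lot, prob):
    """Select each fragment with probability prob(fragment); return
    the mean grade of the resulting 'sample'."""
    picked = [frag[1] for frag in lot if random.random() < prob(frag)]
    return sum(picked) / len(picked)

# Correct selection: the same probability P for every fragment.
correct = sample_mean(lot, lambda frag: 0.05)
# Incorrect selection: fragments near the top of the load (small
# depth) are strongly favored -- the 'scoop on top' situation.
incorrect = sample_mean(lot, lambda frag: 0.10 if frag[0] < 0.2 else 0.01)

print(f"Lot mean:              {lot_mean:.3f}")
print(f"Correct sample mean:   {correct:.3f}")    # close to the lot mean
print(f"Incorrect sample mean: {incorrect:.3f}")  # systematically low
```

With correct selection, the sample mean fluctuates around the lot mean; with the incorrect probabilities, it is systematically shifted no matter how many fragments are taken: gambling, not sampling.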


Critical Component – The critical component is a chemical or physical component, the proportion of which is to be estimated. There are several kinds of critical components. There are as many sampling problems as there are critical components, and these problems must be solved independently from one another. The critical component presenting the most difficult problem will determine the parameters of the sampling process for all critical components.

Critical Content – The critical content is the proportion of a critical component, usually denoted aL for the true unknown critical content of the lot L, or as for the true unknown critical content of the sample S.

Deterministic Sample – A nonrandom sample; a sample selected for a given reason. The best example of a deterministic sample is a grab sample; a grab sample is usually deterministic and practically never probabilistic, and should more exactly be labelled a specimen, as it does not represent anything other than itself.

Discrete Model – The discrete model identifies the lot L to be sampled with a discrete set of Nu units Um (m = 1, 2, 3, . . ., Nu) that can be either individual particles or groups of particles. As far as sampling is concerned, a lot L is completely defined by the finite set of all groups of particles of critical content am and weight Mm. The discrete model is valid only if the interstitial fluid is a passive component.

Discrete Set – A discrete set is a collection of separate and distinct particles, or fragments, or units. Generally, a discrete set shall be defined in the sense that there are rules and properties that can be used to determine whether one particle, or fragment, or unit is or is not a part of this discrete set.

Distribution Heterogeneity DH – The distribution heterogeneity DHL of a lot L is the heterogeneity that is inherent to the manner in which separate and distinct particles, or fragments, or units are scattered or spread out within the lot L. The greater the difference in composition between each particle, or fragment, or unit, the greater the possible distribution heterogeneity.

Distribution Homogeneity – An unstable stage that can be reached if we successfully mix a segregated material. In practice, it is an unreachable limit where the amount of segregation would be zero.

Domain – A domain D is a closed volume, the boundaries of which contain a lot L, a group of units, or a group of fragments, or a group of particles. Such a domain can be a three-, two-, or one-dimensional geometrical space. A domain T is a closed fraction of a one-dimensional temporal space, a notion that is used in the study of flowing streams.

Dynamic Phenomenon – A phenomenon showing active changes. For example, sampling and homogenization are dynamic phenomena, contrary to heterogeneity, which is a static phenomenon at the particular time and location when and where sampling is performed.


Equitable – A sampling operation is said to be equitable when the Total Sampling Error TSE becomes a true random variable with a mean m(TSE) = 0, which involves the unconditional respect of the rules of sampling correctness. It should be emphasized that when these rules are not respected, there is no longer any valid theoretical analysis that can be performed, which has huge implications in sampling practice. An example would be a statistical test performed with data generated by nonprobabilistic samples, which is almost a standard in some industries and a flagrant misuse of statistics.

Errors – Because samples, and the lots from which samples are collected, are not strictly equivalent, there are errors involved in any estimation procedure. The validity of a sampling method can be estimated by the magnitude of the sampling errors involved, and it is essential to clearly differentiate all the various sampling errors from one another, because it is the key to a good understanding of the TOS. We shall also differentiate analytical and preparation errors from sampling errors. It is also important to differentiate the word error from the word uncertainty; because it is a controversial issue, a special section of this book is dedicated to such differentiation.

Estimate – The result a′s obtained from the analysis of a sample for a given constituent of interest is an estimate of the true unknown content as of the sample for the same constituent. The result a′s is always affected by various errors, and there are no exact measurements or exact analyses.

Estimator – By definition, the true content as of a sample is an estimator of the true content aL of the corresponding lot. Following a logical investigation, an estimate a′s can be selected to represent the true unknown content as of a sample and the true unknown content aL of a lot. The estimate a′sp can be selected to represent the true unknown content asp of a specimen; however, asp cannot and should not be selected to represent the true unknown content aL of the corresponding lot. In other words, the true content asp of a specimen is not an estimator of the true content aL of the corresponding lot, because the hypothesis of equality between asp and aL is not founded on any logical and scientific justification. This mistake is made every day in the name of practicality, and it is probably one of the reasons why so many people around the world, including some standards committees, resist TOS: because it creates an inconvenience.

Exact Selection – A selection, the error of which is identically zero, implies that the mean and the variance of an exact selection are both equal to zero. Therefore, an exact selection is characterized by a perfectly reproducible and unbiased mean. For all practical purposes, an exact selection is an inaccessible limit.

Extrapolation – An empirical calculation made from values of one or several variables, located beyond the range of this or these variables. There is no legitimate method making possible the prediction of a phenomenon beyond the known experimental domain; therefore, extrapolation should be performed with extreme caution, and only with the support of a theoretical analysis. A typical example of such a mistake is extrapolating the result obtained from a grab sample selected in a flowing stream by assuming it is representative of the entire stream; it is almost never the case.

Fragment – A broken piece of a solid that forms a compact unit belonging to the lot. During the selection process, this unit is assumed to be indivisible. As fragments become smaller, the term particle is preferred; however, both words have the same meaning, irrespective of size.


Fundamental Sampling Error FSE – The Fundamental Sampling Error results from the constitution heterogeneity CH of the sampled material. The adjective fundamental is justified by the fact that, of all sampling errors, FSE is the only one that can never cancel. It is the error that remains when the sampling operation is perfect. FSE is also the only sampling error that can be roughly estimated beforehand, which makes it convenient for preventively optimizing sampling protocols. However, contrary to some beliefs, it is not constant. Therefore, following appropriate tests, its order of magnitude can be predicted and corrective actions can be taken. Nevertheless, it is not a stationary property; for example, during exploration, and within a given geological unit, a large majority of samples may show a reasonable variance of FSE, yet a few samples may show a large variance because of local, unpredictable conditions. In other words, the prediction of the variance of FSE is possible, but it is not necessarily the panacea.

Granulometric Factor – The particle size distribution factor g. When calculating the variance of FSE, it is stated that it is proportional to the third power of the maximum particle size d, defined as the opening of the square mesh retaining about 5% oversize. All particles do not have the same size, which is taken into account by a correcting factor g called the granulometric factor. Therefore, there are as many values for g as there are definitions of d for the same material.

Generalization – One of the most frequent sources of problems generated by empirical approaches. For example, the sampling characteristics of a particular constituent are often generalized to the other constituents, which is nearly always a dangerous assumption, especially when the sampling characteristics of a major component, copper for example, are generalized to those of a minor component, such as gold.

Group of Particles – A subset of neighboring particles that belongs to the set of Fi particles making up the lot L.

Grouping and Segregation Error GSE – The Grouping and Segregation Error results from the distribution heterogeneity DH of the sampled material, and its variance is directly proportional to three factors:
• the constitution heterogeneity CH
• the grouping factor γ
• the segregation factor ξ.
Under normal conditions, this error has a nonzero mean, often negligible with major constituents; however, it could become important with liberated minor constituents.

Grouping Factor – The grouping factor γ is naturally introduced during the development of the notion of natural distribution heterogeneity. Increments or groups making up a sample are, most of the time, composed of many neighboring particles. In other words, each particle of the lot does not have the same chance of being part of the sample; only each group does, if and only if each group is truly selected at random; therefore, an error is generated that is taken into account by the grouping factor. The grouping factor is an increasing function of the average number of particles Nn making up each increment in a sample. This factor is equal to zero when each random increment is made up of only one particle.
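The FSE and granulometric factor entries above come together in Gy’s formula for the variance of FSE, developed in Chapter 10. As a preview, a commonly quoted form is s²(FSE) = f g c ℓ d³ (1/MS − 1/ML); the sketch below (Python; all factor values are hypothetical and must be estimated for the material at hand) illustrates how strongly the top particle size d and the sample mass MS drive this variance:

```python
import math

def fse_relative_variance(f, g, c, l, d, M_s, M_L):
    """A commonly quoted form of Gy's formula for the relative
    variance of FSE:  s2 = f * g * c * l * d**3 * (1/M_s - 1/M_L)
    f:  fragment shape factor (dimensionless)
    g:  granulometric factor (dimensionless)
    c:  mineralogical composition factor (g/cm^3)
    l:  liberation factor (dimensionless, 0 to 1)
    d:  top particle size in cm (square-mesh opening retaining ~5%)
    M_s, M_L: sample and lot masses in grams."""
    return f * g * c * l * d ** 3 * (1.0 / M_s - 1.0 / M_L)

# Hypothetical values, for illustration only:
s2 = fse_relative_variance(f=0.5, g=0.25, c=100.0, l=0.1,
                           d=1.0, M_s=1000.0, M_L=1.0e6)
print(f"Relative standard deviation of FSE: {math.sqrt(s2):.1%}")

# The d**3 dependence means crushing to half the top size divides the
# variance by about eight, which is why comminution stages alternate
# with sampling stages in a sound protocol.
```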


Heterogeneity – The importance of a clear definition of heterogeneity is considerable in the TOS; in fact, the entire Part III of this book is devoted to the notion of heterogeneity. Because homogeneity is defined as the condition of a lot under which all elements of the lot are exactly identical, heterogeneity is defined as the condition of a lot under which these elements are not strictly identical. Consequently, we say that homogeneity is the zero of heterogeneity and is an inaccessible limit. All particulate materials are essentially heterogeneous. There are several kinds of heterogeneity, and we should always specify which kind we are referring to. In a first analysis we can identify two different kinds of heterogeneity:
• the constitution heterogeneity CH
• the distribution heterogeneity DH.
These two terms are defined separately. Constitution and distribution heterogeneities are responsible for the total heterogeneity contribution h of a given fraction toward the corresponding lot L. This fraction can be a single fragment or a group of fragments such as an increment. The average heterogeneity contribution of all possible fractions within the lot is, by definition, nil. When we consider a lot as a sequential population, the total heterogeneity contribution h of a given fraction toward the corresponding lot L is the sum of only three major components:
• The short-range heterogeneity contribution h1 of h. It is a discontinuous term generated by the heterogeneity inside each fraction. Fluctuations of h1 are essentially random.
• The long-range heterogeneity contribution h2 of h. It is a continuous term generated by local trends. Fluctuations of h2 are essentially nonrandom.
• The periodic heterogeneity contribution h3 of h. It is a continuous term generated by handling, processing, reclaiming, and so on. Fluctuations of h3 are essentially nonrandom but also cyclic.
Finally, the following formula can be written:

h = h1 + h2 + h3

Heterogeneity Fluctuation Error HFE – The heterogeneity fluctuation error is a synonym of the continuous selection error CSE, which was the sum of the quality fluctuation error QFE and the increment weighting error IWE in former publications of the TOS; this is the reason we define CSE in this book, keeping in mind that the notation is HFE from now on:

HFE = QFE + IWE

In the present edition of the TOS, this decomposition is most of the time no longer a must, because we directly investigate the total heterogeneity contribution h of the material. The total heterogeneity fluctuation error HFE can be divided into three terms:
• HFE1: short-range heterogeneity fluctuation error generated by h1
• HFE2: long-range heterogeneity fluctuation error generated by h2


• HFE3: periodic heterogeneity fluctuation error generated by h3.

HFE = HFE1 + HFE2 + HFE3

Incorrect Selection – When one of the two conditions necessary to obtain a correct selection is not fulfilled, an incorrect selection is always generated (see the definition of a correct selection).

Increment – An increment I is a group of fragments, or particles, or molecules extracted from the lot L in a single operation of the sampling device. It is important to make a distinction between an increment and a sample, which is obtained by the reunion of several increments.

Increment Delimitation Error IDE – The increment delimitation error IDE results from an incorrect shape of the volume delimiting the increment, which renders nonuniform the probability for a given element of matter to fall between the boundaries of the model extended increment (i.e., the ideal increment). Most of the time, the mean of this error is different from zero, which makes it an important source of sampling bias. In other words, for IDE to have a mean near zero, the volume delimiting the increment must coincide with an ideal isotropic volume of observation and be located at random within the lot to be sampled, a condition easier said than done that must be clearly explained at great length in the TOS and unambiguously addressed by sampling standards.

Increment Extraction Error IEE – The increment extraction error IEE results from an incorrect extraction of the increment; it could have been called the increment recovery error, which is very familiar to geologists. The extraction is said to be correct if and only if the rule of the center of gravity is respected: all fragments, or particles, or molecules with their center of gravity inside the boundaries of the correctly delimited increment belong to the increment. In other words, the sampling tool coming in contact with the material to be sampled must under no circumstances become selective about what it takes. The mean of this error is most often different from zero, which makes it an important source of sampling bias. IEE must be clearly explained at great length in the TOS and unambiguously addressed by sampling standards.

Increment Materialization Error IME – The increment materialization error IME is the sum of four errors not accounted for by the continuous model: the increment delimitation error IDE, the increment extraction error IEE, the increment preparation error IPE, and the increment weighting error IWE:

IME = IDE + IEE + IPE + IWE

Contrary to former publications, in this version of the TOS we include IPE and IWE in IME because they are sources of severe sampling biases if precautions are not taken, just like IDE and IEE; it is a convenience only. It is understood that IPE is not part of the sample selection process; nevertheless, the biases it generates are additive to those generated by IDE, IEE, and IWE.

Increment Preparation Error IPE – The increment preparation error is the sum of various errors that are not part of the selective process; they are more like accidents, but they can be somewhat prevented if well understood. These errors can be introduced


either when each increment is further processed, or when the entire sample made of several increments is further processed, depending on the stage of the sampling process to which we refer. These errors can be introduced by contamination, by loss, by chemical or physical alteration, by human mistakes, or by fraud. The increment preparation error should not be confused with sampling errors that are part of the selection process. IPE must be clearly explained at great length in the TOS and unambiguously addressed by sampling standards.

Increment Weighting Error IWE – The increment weighting error results from the combination of incorrect IDE, IEE, and IPE, which makes it impossible for the sampling process to be reasonably proportional. Flow rates in a stream are not necessarily constant; therefore, the mass of the increments making up a sample must be proportional to the flow rate at the time they are taken. If sampling is correct, the increment weighting bias m(IWE) is usually negligible. Clearly, this is not the case when grab increments, manual or automated, with no respect for IDE and IEE, are taken in a stream to make up a sample.

Interpolation – An insertion of fictive values between known values representing two consecutive experimental points. Having, in advance, a certain idea of what he or she is trying to demonstrate, someone may often be tempted to alter the true meaning of some data by forcing them to fit a convenient model. Excessive interpolation is a common problem encountered when not enough experimental points are at our disposal.

Intrinsic Heterogeneity of a Lot IHL – The intrinsic heterogeneity of a lot L refers to the constant factor of the constitution heterogeneity CHL. Basically, it is the product of the constitution heterogeneity of the lot CHL and the average mass Mi of the fragments present in the lot:

IHL = CHL · Mi

Because Mi = ML/NF, where ML is the mass of the lot to be sampled and NF the number of fragments in the lot, for practical applications it is far more convenient to refer to IHL than to CHL, which is proportional to NF and impossible to estimate.
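A minimal numerical sketch of this relationship, with illustrative values only:

```python
# IHL = CHL * Mi, with Mi = ML / NF, as defined above.
# All numbers are illustrative assumptions.
ML = 2.0e6      # mass of the lot, g (2 tonnes)
NF = 4.0e8      # number of fragments in the lot
CHL = 4.0e3     # constitution heterogeneity (dimensionless, proportional to NF)

Mi = ML / NF    # average fragment mass, g
IHL = CHL * Mi  # intrinsic heterogeneity, in grams
print(f"Mi = {Mi:.1e} g, IHL = {IHL:.1f} g")

# IHL is what makes the variance of FSE practical to estimate,
# e.g., for a 500 g sample: var(FSE) = IHL * (1/Ms - 1/ML).
print(f"var(FSE) for a 500 g sample: {IHL * (1 / 500 - 1 / ML):.4f}")
```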

Liberation Factor – The mineralogical factor c assumes that the constituent of interest is completely liberated, which implies that we are dealing with a mixture of two pure constituents: the constituent of interest itself and the gangue. Of course, this is not often true, and the liberation factor ℓ is defined as the correcting factor taking into account the fact that the constituent of interest and the gangue are not perfectly separated from one another. The value of ℓ is very difficult to estimate and has been the object of controversies and also of huge mistakes. The knowledge of the mineralogist can help tremendously in such an endeavor. The last thing you want is to give ℓ a value chosen merely to fit a convenient model. The domain of ℓ is:

0 ≤ ℓ ≤ 1

Liberation Size – Often called the liberation diameter dℓ, it is defined as the particle size of a comminution that would ensure the near-complete liberation of the constituent of interest. Mineralogists and mineral processing engineers usually suggest that dℓ is


reached when the value of a screen opening retaining only 5% of the material, defined as d in the TOS, is such that 85% of the constituent of interest is liberated.

Lot – A batch of material from which increments are selected to make samples. A lot L of material shall have well-defined boundaries and can be represented by the content of a bag, a truck, a railroad car, a ship, a geological unit, and so on. Samples are used to estimate the content of various constituents of interest present in the lot.

Mass – The quantity of matter in an object as measured in its relation to inertia, which is determined by dividing the weight of the object by the acceleration due to gravity. From a practical standpoint, the mass is a measure of the quantity of matter present in an object in terms of its heaviness in our specific environment. For all practical purposes, mass is a synonym of weight in our earthly environment.

Mean – Mean, arithmetic mean, and average are synonymous terms for the numerical value m(x) obtained by dividing the sum of a set of xi measurements by the total number N of individual measurements in the set. For the sake of clarity, when speaking of a mean we recommend specifying which kind of mean we are referring to: arithmetic, quadratic, geometric, weighted, or harmonic. All these terms are defined within the textbook. Usually, when the word mean is used alone, without any specification, it is assumed that we are referring to the arithmetic mean.

Mean Square – The mean square of a set of xi measurements is nothing other than the square of their quadratic mean r(x):

r²(x) = m(x²) = (1/N) Σ xi² = m²(x) + σ²(x)    (sum over i = 1, …, N)

The mean square concept is useful as a measure of sample representativeness.

Mineralogical Factor – The mineralogical factor c could more exactly be called the mineral composition factor. Because the constituent of interest does not have to be a mineral, we could generalize and call c the constituent of interest composition factor. By definition, c is the maximum heterogeneity generated by the constituent of interest in the lot L. This maximum is reached when the constituent of interest is completely liberated.

Model – A model is a more or less idealized representation of a complex reality. A model should provide an undistorted and unbiased picture of reality, and it should be simple enough to allow straightforward mathematical processing. A model must link each of its subdivisions to the others, and all subdivisions must cover all relevant aspects of the problem and answer all questions. A model is not necessarily a panacea; nevertheless, it can be a very useful tool.

Nugget Effect and In-Situ Nugget Effect – In geostatistics, the variogram will not necessarily tend toward zero as the lag j between samples does. This means that if one goes back to sample the same site, the result will differ. This amount of variance is referred to as the nugget effect C0 of the spatial variographic function; however, in the TOS, which was born in the mineral processing world, it is defined as the nugget effect V(j=0) of the temporal variographic function. Many factors play a role in how large C0 and V(j=0) are.


The in-situ nugget effect INE is only one of these numerous factors. It is related only to the size of the particles of interest present in unbroken material, or the size of clusters made of particles of interest when such particles tend to concentrate within a very small volume. Again, we refer to unbroken material, which can be the mineral deposit itself, very large boulders, or fragments in run-of-mine material, when performing a visual inspection in an open pit or an underground operation, or when logging diamond core samples during exploration. It is convenient to differentiate C0 from INE, as they may lead to very different causes of sampling problems.

Overall Estimation Error (OEE) – The overall estimation error OEE is defined as the sum of the total sampling error TSE, which includes the Increment Selection Error ISE (i.e., the proper sampling error) and the Increment Preparation Error IPE, and of the analytical error AE. It is the difference between the final analytical estimate and the unknown true value:

OEE = TSE + AE

Particle Size Distribution Factor – See the definition of the granulometric factor.

Pearson's Coefficient – Pearson's coefficient is the relative standard deviation. To compare one experiment with another, it is often much easier to work with the relative, dimensionless standard deviation sR, defined as the ratio of the standard deviation s of a population of values to the arithmetic average x̄ of the same population. It is also called the coefficient of variation:

sR = s / x̄
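A quick sketch of this definition (the data are illustrative):

```python
# Pearson's coefficient (relative standard deviation): sR = s / x-bar.
import statistics

grades = [2.1, 1.8, 2.4, 2.0, 1.9, 2.6]    # e.g., replicate assays, % Cu
s = statistics.stdev(grades)                # experimental standard deviation s
x_bar = statistics.mean(grades)             # arithmetic average
sR = s / x_bar
print(f"sR = {sR:.3f}, i.e., {sR:.1%} relative standard deviation")
```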

Precision – Usually, the term precision is used to describe the reproducibility of estimates, regardless of whether they are biased. An estimate is precise, or reproducible, when the variance of the Sample Selection Error σ²(SSE) is not larger than a certain standard of precision σ₀²(SSE) regarded as acceptable for a given purpose, well defined by logical DQO. Precision is a property of the variance of a given error or uncertainty exclusively. The use of the word uncertainty is recommended when the precision matches the required DQO and the data are not biased.

Preparation – Preparation is a sequence of nonselective operations such as transfer, crushing, grinding, pulverizing, filtering, drying, mixing, homogenizing, and so on, carried out on a batch of matter to bring it into a convenient form for the next processing stage, which can be another sampling stage or the ultimate analysis. Preparation shall not be confused with sampling, which is the selection process itself.

Probabilistic Selection – A selection is said to be probabilistic when it is founded on the notion of selecting probability, which necessarily includes the intervention of some random element. A sampling process that does not involve some kind of random process is invalid and nondefensible. A probabilistic selection can be either correct or incorrect. It is correct when the selecting probability is uniformly distributed among all units making up the lot and nil for material that does not belong to the lot. A selection is incorrect when one of these conditions is not strictly fulfilled. A selection is said to be nonprobabilistic when it is not founded on the notion of selecting probability, such as grab sampling or sampling with in-stream or by-stream probes.


Probability – A probability is the ratio of the number of possible elementary outcomes of a certain event to the total number of elementary outcomes, provided that all elementary outcomes are equally likely to occur. For instance, if we toss a coin, the probability of showing one face instead of the other is equal to ½. Now, if we consider a lot of material, and if the probability of selecting a given elementary unit is a certainty, then the probability P of selecting this unit is equal to 1. If the probability of not selecting a given elementary unit is a certainty, then the probability P of selecting this unit is equal to zero. Thus, all probabilities Pi encountered may be written:

0 ≤ Pi ≤ 1

Purposive Selection – A selection is said to be purposive when it is founded on the sampling operator's choice to decide which units of the lot L should be retained as part of the sample. This type of selection is often used by the exploration geologist or by the environmental regulatory officer to fulfill a given purpose. A purposive sample is nothing but a specimen and should be labeled as such.

Random Variable – A random variable x is a quantity that can take one or several possible values, each of these having a given probability of occurrence.

Representativeness – The degree of representativeness r²(SSE) is the mean square of the Sample Selection Error SSE, which is itself the sum of the square of the mean m(SSE) and of the variance σ²(SSE) of the Sample Selection Error:

r²(SSE) = m²(SSE) + σ²(SSE)

Thus, a sample is said to be representative when the mean square r²(SSE) of the Sample Selection Error is not larger than a certain standard of representativeness r₀²(SSE) regarded as acceptable and to be clearly determined by logical DQO. Representativeness is the exclusive property that synthesizes both accuracy and precision.

Reproducibility – See the definition of precision.

Sample – A sample S is a part of the lot L obtained by the reunion of several increments and meant to represent the lot L in further operations. A sample is not just any part of the lot; its positioning, delimitation, and extraction must respect certain rules that the TOS intends to establish. Sampling is often carried out in progressive stages: a primary sample is extracted from the lot, then a secondary sample is extracted from the primary sample, and so on.

Sampling Constant C – In the old literature of Pierre M. Gy, the sampling constant C is the product of four factors: the mineralogical factor c, the fragment shape factor f, the granulometric factor g, and the liberation factor ℓ:

C = f g c ℓ

Sampling Constant K – In more recent literature, the sampling constant K needs to be calibrated by performing appropriate tests on a new material to be sampled. Such tests are critically important to minimize the variance of FSE.
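The identity behind representativeness is easy to verify numerically. The following minimal sketch simulates replicate sample estimates with an assumed small bias and spread (all values illustrative) and checks that the mean square of SSE equals the squared bias plus the variance:

```python
# r^2(SSE) = m^2(SSE) + sigma^2(SSE): accuracy and precision combined.
import random
import statistics

random.seed(1)
a_L = 2.00                                   # true (unknown) lot content, %
# Assumed replicate estimates: slightly biased (mean 2.06), spread 0.10.
estimates = [random.gauss(2.06, 0.10) for _ in range(10_000)]
sse = [(a - a_L) / a_L for a in estimates]   # relative selection errors

m = statistics.fmean(sse)                    # mean error (accuracy term)
var = statistics.pvariance(sse)              # variance (precision term)
r2 = statistics.fmean(e * e for e in sse)    # mean square, computed directly

print(f"m^2 + sigma^2 = {m * m + var:.6f}")
print(f"r^2 (direct)  = {r2:.6f}")           # identical, up to rounding
```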


Sampling Selection Error – The relative Sampling Selection Error SSE occurs when replacing the true unknown critical content aL of the lot L by its estimator as. Thus, by definition:

SSE = (as − aL) / aL

The Sampling Selection Error shall not be confused with preparation errors, which have nothing to do with the selection process.

Sampling Protocol – A record, presented as a flow sheet, of all sampling and preparation stages on which agreement has been reached after minimizing all the numerous sources of sampling error, with respect to the critically important notion of sampling correctness. The term sampling protocol is used in this book rather than sampling scheme.

Sampling Ratio – The sampling ratio τ is a random variable, the expected value of which is the selection probability. If the lot L is one-dimensional, such as a flowing stream, the sampling ratio τ can be defined as the proportion of the time during which the stream is diverted to extract the sample.

Sampling Scheme – A sampling scheme is a sequence of sampling and preparation stages carried out on the lot L and ending with the extraction of one or several samples to estimate the true unknown content aL of the lot. It is a synonym of sampling protocol.

Sampling Stage – A selective process implemented on a lot L or on a sample S to reduce its bulk without major alterations to its critical characteristics.

Segregation – A transient state of distribution heterogeneity due to gravity and to how the material is handled. Because gravity is a one-dimensional force, segregation in nature is the rule rather than the exception, and there is no such thing as constant segregation, because many factors may play a role.

Segregation Factor – The distribution heterogeneity of a critical constituent has a natural range between the minimum distribution heterogeneity, or distribution homogeneity, (DHL)min and the maximum distribution heterogeneity (DHL)max. (DHL)max is equal to the constitution heterogeneity CHL.
• For (DHL)min, the segregation factor ξ = 0.
• For (DHL)max, the segregation factor ξ = 1.
Thus, the segregation factor ξ localizes the degree of distribution heterogeneity of a critical constituent in a given material between a natural minimum and a maximum equal to the constitution heterogeneity of this material:

0 ≤ ξ ≤ 1

Selecting Conditions – Conditions controlling the selecting process, for example cutter speed and cutter width when using a cross-stream sampler.

Selecting Results – Results expressed in terms of the sampling error and its moments.
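Returning to the Sampling Ratio entry above, here is a minimal sketch of τ for a cross-stream sampler, computed directly from its definition as the proportion of time the stream is diverted; the cutter geometry and timing are illustrative assumptions:

```python
# Sampling ratio tau for a cross-stream cutter: the fraction of time
# the stream is diverted into the cutter. Illustrative values only.
W = 0.03     # cutter opening, m
V = 0.45     # cutter velocity across the stream, m/s
T = 120.0    # interval between two cuts, s

time_diverted = W / V        # seconds the stream is diverted per cut
tau = time_diverted / T      # expected sampling ratio
print(f"tau = {tau:.2e} (about {tau * 100:.3f}% of the stream)")
```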


1.2. List of Notation and Symbols
A special effort has been made throughout this textbook to keep notations consistent. When possible, these notations remain the same as those used in the original work of Pierre M. Gy, but evolution is inevitable, leading to necessary changes. Also, some chapters may use notations that have already been used for something else; when this happens, a special comment and explanation is provided to prevent unfortunate confusion.

1.2.1. Latin Letters

A: mass or weight of a critical component in a given unit
a: critical content
AE: analytical error
B: bias, usually associated with an estimate representing an estimator by proxy
b: coefficient of bias
C: cutter and cutter edges; also a sampling constant in older Gy publications
c: constituent of interest composition factor
CH: constitution heterogeneity
D: domain
d: diameter of a fragment. We shall not confuse this notation with the mathematical symbols used for differentials such as dx, dt, dz, etc.
E: interstitial fluid
e: error
F: fragment or particle
f: fragment or particle shape factor. We shall not confuse this notation with the mathematical symbol f(x) denoting a nonspecified function.
FSE: fundamental sampling error
G: group of fragments or particles
GSE: grouping and segregation error
H: heterogeneity of a fragment or group of fragments
HFE: continuous heterogeneity fluctuation error
I: increment
i: integer defined in context
IDE: increment delimitation error
IEE: increment extraction error
IPE: increment preparation error
IWE: increment weighting error
j: integer defined in context. In variography, it is used as the lag between samples.
k: integer defined in context
L: lot or batch or decision unit
ℓ: liberation factor of a constituent of interest in a batch of material; to prevent any confusion with the numeral one, this symbol shall be written as the script type ℓ throughout the text of this book.
M: mass or weight of active components in a given batch
m: mean of a random variable
N: number of elements in a set; in variography, it is the number of pairs of increments.


n: ratio of the actual cutter width to the minimum required cutter width
O: origin of a system of coordinates
OEE: overall estimation error
P: probability
Q: number of increments in a sample
QFE: quality fluctuation error
R: sampling reject
r: quadratic mean of a random variable
ra: subscript indicative of a random selection of the increments
S: sample
s: experimental estimate of a standard deviation, not to be confused with the true, unknown standard deviation σ of an infinite population
s0: purposely chosen standard deviation or tolerated standard deviation, to be dictated by DQO
SSE: sampling selection error (proper sampling error that does not include IPE)
st: subscript indicative of a stratified random selection of the increments
sy: subscript indicative of a systematic selection of the increments
T: time interval

1.2.2. Greek Letters

α: mark of a fragment size class
β: mark of a fragment density class
ρ: density of a fragment
θ: average number of constituent of interest grains per sample
σ: true unknown value of a standard deviation
γ: a most probable result in Ingamells' work
γ: a grouping factor
ξ: a segregation factor

1.3. The Word Error versus the Word Uncertainty Controversy
Observation from attending many conferences around the world reveals that many people constantly shift from the word error to the word uncertainty without a clear vision of the subtle difference between them. The same applies to a large swath of the scientific literature, in which these two concepts are unfortunately often used synonymously, which is a scientific flaw of the first order. In practice, as demonstrated in the Theory of Sampling, there are both sampling errors and sampling uncertainties. Sampling errors can be preventively minimized, while sampling uncertainty for a preselected sampling protocol is inevitable. Gy's work, supported by Matheron's critical reviews, demonstrates beyond any possible doubt that a reliable statement of sampling uncertainty cannot be achieved if sampling errors are not preventively minimized by applying very stringent rules that are too often ignored by sampling practitioners. A clarification, in line with the Theory of Sampling, is presented here to mitigate the confusion created by circles favoring the use of the word uncertainty over the use of the word error. The two words are very different in nature and scope, and both are necessary in their due place and time.

1.3.1. Introduction
Error: the difference between an observed or calculated value and a true value; variation in measurements, calculations, or observations of a quantity due to mistakes or to uncontrollable factors. It is the word "mistake" that bothers statisticians. Indeed, it is a mistake not to optimize sampling protocols according to acceptable Data Quality Objectives (DQO), and a grave mistake not to make sure sampling is correct, and that is the way it is.

Uncertainty: lack of sureness about someone or something; something that is not known beyond doubt; something not constant. Historically, statisticians prefer the word uncertainty.

The following words should be carefully remembered in all discussions involving the creation of sampling protocols and their practical implementations. Gy49 stated:

With the exception of homogeneous materials, which only exist in theory, the sampling of particulate materials is always an aleatory operation. There is always an uncertainty, regardless of how small it is, between the true, unknown content aL of the lot L and the true, unknown content aS of the sample S. A vocabulary difficulty needs to be mentioned: tradition has established the word error as common practice, though it implies a mistake that could have been prevented, while statisticians prefer the word uncertainty, which implies no responsibility. However, in practice, as demonstrated in the Theory of Sampling, there are both sampling errors, and sampling uncertainties. Sampling errors can easily be preventatively minimized, while sampling uncertainty for a preselected sampling protocol is inevitable. For the sake of simplicity, because the word uncertainty is not strong enough, the word error has been selected as current usage in the Theory of Sampling, making it very clear it does not necessarily imply a sense of culpability.

Gy's choice was especially justified for the Increment Delimitation Error, Increment Extraction Error, Increment Weighting Error, and Increment Preparation Error because, indeed, the magnitude of these errors is dictated by the ignorance, unwillingness, or negligence of operators, managers, and manufacturers who fail to make these errors negligible by following the rules of Sampling Correctness listed in the Theory of Sampling. For these errors, the word uncertainty would be totally inappropriate. Therefore, in any project, if management is duly diligent, the word error should not exist and only uncertainties remain; the problem is that we are living in a world very far from perfect, where the TOS is not yet mandatory knowledge for everyone in the business of creating an important database. Figure 1.1 may help in the following discussion. It is of the utmost importance that management be very clear on the fact that the variances of IDE, IEE, IWE, and IPE cannot be calculated unless you spend a fortune to do so properly. As a result, if someone ends up in the bottom box of Figure 1.1, subsequent statistical calculations become invalid. Of course, proponents of the Measurement Uncertainty (MU) approach would strongly disagree with this statement, and that is their privilege.


[Figure 1.1 near here. The figure is a decision chart: preventive actions to minimize FSE and GSE (sampling errors generating precision problems) and preventive actions to minimize IDE, IEE, IWE, and IPE (sampling errors generating accuracy problems) open the possible quantification of error and possible access to an acceptable level of residual uncertainty; no preventive action leads to a likely unacceptable level of error and presence of biases and, ultimately, to an unpredictable level of errors and a transgression of the most basic principles of due diligence: protocols and practices are invalid, therefore nondefensible.]
FIGURE 1.1 The preventive mission of the Theory of Sampling.

1.3.2. Going Back to Matheron and Gy's Fundamentals
Gy's work, inspired by Becker's and confirmed by Matheron's in-depth mathematical analysis, reveals beyond any possible doubt that the sampling process should either be equiprobable (collecting fragments one by one at random) or at the very least semi-equiprobable (collecting the increments making up one sample, one by one at random). In a nutshell, this clearly means that if there is no randomness involved in a sampling process, such a process is by definition invalid and cannot provide a representative sample; basically, the sampling process is not correct. The conclusion is that if a sampling process is not correct, it is an error of judgment from the sampling practitioner that necessarily leads to possible biases. In such cases it would be inappropriate to call the residual variability between replicate samples taken from the same lot under the same conditions a source of uncertainty. It is the opinion of this author that a sampling process that can be affected by nonrandom factors cannot and will not lead to a well-defined amount of uncertainty; it is definitely a source of gross, unfortunate errors.

1.3.3. Jumping from Uncertainty to Error
Figure 1.1 showed the preventive choice someone can make: to create a safe world of uncertainty rather than living in an unpredictable swamp of errors. Basically, it is a personal choice, but unfortunately many practitioners do not see it that way, as they have a very hard time embracing the Theory of Sampling. It should be clear by now that any sampling device that cannot provide equiprobability, or at least semi-equiprobability,
such as in-stream fixed cutters, by-stream or in-stream probes, cross-belt samplers missing the bottom part of a stream or pushing material from outside the ideal increment into the sample or losing part of the increment by bouncing because of excessive cutter velocity, manual grab samples, sampling systems strictly incapable of taking the entire stream, stationary on-line analyzers, and many more, leads to a loss of time and money and is not even worth any decent discussion. All these cases, so common in the industry, are the result of ignorance, arrogant denial, wishful thinking, and invalid testing. Basically, such devices are incapable of providing a reliable residual amount of uncertainty that would offer the possibility of performing statistics on solid, valid ground. All these incorrect sampling devices, declared good enough because of flawed statistics based on transient, circumstantial good luck, are domains where practitioners are making unacceptable mistakes, which is most definitely a grave error of judgment.

1.3.4. Sampling Correctness: The Mandatory Path to Predictable Uncertainty
The cardinal rules of sampling correctness are simple:
1. All fragments or groups of fragments must have the same chance of becoming part of the sample (IDE).
2. The sampling tool must not become selective about what it takes (IEE).
3. The mass of the collected increments must be proportional to the amount of material presented to the sampling tool (IWE).
4. The integrity of the increments must under no circumstances be altered (IPE).
Any deviation from one of these rules renders the sampling protocol invalid, wishful thinking, and misleading, and constitutes a direct, unacceptable departure from expected due diligence; in other words, it is a mistake leading to unpredictable errors in precision and accuracy. Basically, no statistical analyses are possible, and if you had to defend the incorrect sampling protocol in a court of law, a good sampling expert would make that protocol indefensible in five minutes. In other words, the rules of sampling correctness should under no circumstances be negotiable. If all these rules are strictly respected, with no compromise possible, you give yourself a chance to enter the domain of acceptable residual uncertainty, and the database offers the possibility of performing a valid, predictable statistical analysis. Anything less can be called administrative sampling, leading to statistical ceremonies, a total loss of time and money, and misleading information for top management. After it is clear that the sampling process is correct, we still have to address the Constitution Heterogeneity CH and the Distribution Heterogeneity DH. This is done by minimizing the variances of the Fundamental Sampling Error FSE and the Grouping and Segregation Error GSE, which is the objective of Part IV. Then, and only then, may we dare to talk about residual uncertainty. Basically, the use of the word uncertainty should be the ultimate reward for having eliminated what is obviously structurally wrong in sampling and analytical protocols. Anything less is an intolerable mistake, therefore an error, which naturally leads to Chapter 2.


2 A Management Strategy

Every day, many people struggle in exploration, ore grade control, process control, metallurgical accounting, commercial settlements, environmental monitoring, food safety evaluations, standards committees, government agencies, and many more domains, because of a massive confusion between problems they can control and problems they cannot control. As a result, management often selects a wrong strategy to control problems and optimize operations. The choice of a correct strategy is a necessity. The answer to this difficult problem is given by two concepts:
• Structural properties, which are the true causes of problems
• Circumstantial properties, which are the negative effects of undetected structural problems.
A clear understanding of the difference between the two concepts can save a lot of time and money for a given project.

2.1. Structural Property
A property is said to be structural when it necessarily results from a certain number of conditions that we can control or quantify, and that are assumed to be fulfilled. A few examples are:
• The heterogeneity of a constituent of interest in a material to sample (e.g., gold in a given geological unit, alumina in a given iron ore, GMO in a wheat shipment, mercury in a fish, etc.)
• A sampling protocol authoritatively selected by a management team
• The characteristics of a sampling device built by a manufacturer
• A process control procedure implemented by an operator
• The characteristics of a process unit
• A sampling interval selected for convenience
• A cutoff grade selected at the mine for a given constituent of interest
• A standard enforced by regulators
• A wrong temperature set to warm your home
• A child trying to study mathematics without a reliable textbook, etc.


There are millions of examples in our lives. A structural property is what it is: it is the structure with which someone operates. A structural property is always true as long as we operate with it. The effect depends on chance; it is the circumstance we endure. It is important to know that heterogeneity and sampling correctness have primary structural properties.

2.2. Circumstantial Property
A property is said to be circumstantial when it depends on conditions that we are not necessarily able to control; it is the conjuncture. A few examples are:
• A change in moisture content in a coal shipment
• A change in the stream flow rate feeding a SAG mill
• A change in the particle size distribution of an iron ore shipment
• A change in density between fragments
• A material balance going from acceptable to unacceptable
• A reconciliation between the mine and the plant going from good to bad
• Your blood pressure going from good to bad
• Your child losing ground in his understanding of mathematics
• Parents always fighting one another, and so on.
There are millions of examples in our lives. There is not much you can do about these undesirable effects. The only thing you can do is change the structural property that is creating all these problems in the first place. The conclusion is clear: a structural property will remain true unless you change it. For example:
• Change a poor sampling protocol
• Change a faulty sampling system
• Change an inappropriate analytical procedure
• Install a weightometer on a conveyor belt that is no longer than 100 meters
• Eliminate a wishful cutoff grade at the mine
• Give a reliable textbook to your child losing ground in mathematics, and explain to him how to use it, and so on.
Remember, a structural property can be relied upon. However, a circumstantial property cannot be relied upon and strictly depends on chance. Therefore, too much emphasis on solving the effects of a cause is often a loss of time and money; it is an exercise in futility that invariably leads to conflict and a sentiment of being useless. It is important to know that accuracy is a secondary, circumstantial property. For instance, it can happen that an incorrect sampling operation is also accurate; nevertheless, it is dangerous to rely or speculate on such an incorrect sampling operation. It is of the utmost importance to place emphasis on identifying structural properties. Invest resources in finding the cause of a problem instead of reacting to its effects; that way you will be more likely to live happily.


2.3. Concepts of Primary and Secondary Properties
A property is said to be primary when it depends on the structure alone; therefore, it is independent of circumstances. A property is said to be secondary when it depends on both the structure and the circumstances. There is a logical relationship between a primary structural property and a secondary circumstantial property, and a few examples may illustrate the point.

2.3.1. The Concept
Let's answer the first question by yes or no: does the primary structural property exist (see Figure 2.1)? Then answer the second question by yes or no: is the secondary property observed? The following very interesting outcomes take place. If the answer to the first question is yes, it is easy to conclude that a yes answer to the second question is a certainty, while a negative answer to the second question is impossible. It is an effective strategy to select yes for the first question and give it top priority. This is where top management should make sure that this choice is the only valid one in the way they try to solve problems every day. Unfortunately, most of the time, the answer to the first question is no. Then a dangerous set of events may take place: a positive answer to the second question is possible, while a negative answer to the same question is probable. In such a case, the management team sits between two chairs, and it is nearly impossible to predict what is going to happen. This invariably leads to the impossibility of eliminating problems, and it is ultimately a very costly course of action, with counterproductive and unnecessary meetings, finger-pointing, and many unhappy people.

2.3.2. The Consequences
In Figure 2.2, an example of a selected sampling system is illustrated. If the answer to the first question is yes, the answer to the second question is straightforward,

                                               1. Does the primary structural property exist?
                                                        Yes              No
2. Is the secondary property observed?    Yes           Certain          Possible
                                          No            Impossible       Probable

FIGURE 2.1 Logical relationship between one primary structural property and one secondary circumstantial property.


unambiguous, and reliable; this is where you want to be. If the answer to the first question is no, then the answer to the second question leads to an unwelcome situation where you will never know whether you can rely on this sampling system or not; it becomes a kind of gambling. In Figure 2.3, an example of a selected stacking/reclaiming facility used for bed blending is illustrated. If the answer to the first question is yes, the answer to the second question is straightforward, unambiguous, and reliable; this is where you want to

                                               1. Is the sampling device primarily correct?
                                                        Yes                    No
2. Is the collected sample secondarily    Yes           Certain                Possible
   accurate enough?                       No            Impossible             Probable

                                                        (This is where you     (This is a terrible
                                                        want to be.)           situation.)

FIGURE 2.2 Logical relationship between sampling correctness and sampling accuracy.

                                               1. Is the design of the stacking/reclaiming
                                               facility primarily correct?
                                                        Yes                    No
2. Is the feed going to the process       Yes           Certain                Possible
   secondarily constant?                  No            Impossible             Probable

                                                        (This is where you     (This is a terrible
                                                        want to be.)           situation.)

FIGURE 2.3 Example of a selected stacking/reclaiming system used for bed blending.


be. If the answer to the first question is no, then the answer to the second question leads to an unwelcome situation where you will never know if the feed to the plant is constant enough on any given day; it is a situation that can cost a fortune over the years, because it is unlikely that process control can perform well enough to keep a product reliably between the specifications preselected with a client. Clearly, the concepts of structural properties and circumstantial properties deserve deep thought on the part of the reader using this textbook. They show how critically important the contents of many of the following chapters are, and they lead to the series of cardinal rules used in the TOS that leave little room for compromise.

2.4. The Primary Structural Property of Sampling Correctness
Now a new problem comes forth, which is a problem of judgment: we have the choice between measuring the precision and accuracy of a given sampling protocol, or deciding upon the precision and correctness of a given sampling protocol. The difference between these two alternatives is of prime importance, as only the second alternative provides a safe solution. The reader should clearly appreciate this difference, because the choice of the right or wrong sampling strategy depends on it. In sampling, there are many technical errors, and we are already familiar with most of them (e.g., Fundamental Sampling Error [FSE], Grouping and Segregation Error [GSE], Heterogeneity Fluctuation Error [HFE], Increment Delimitation Error [IDE], Increment Extraction Error [IEE], Increment Preparation Error [IPE], etc.). We also know how to deal with them; but judgment errors are of equal importance, often have disastrous consequences, and we really do not know how to deal with them. Examples of such judgment mistakes are many, especially among people trying to quantify FSE using routine replicate samples: they call this exercise "the calibration of Pierre Gy's formula," a very debatable exercise mixing oranges and apples, typical of empirical approaches and a favorite of many statisticians.

2.5. Relationship between Correctness and Accuracy
If correctness is a structural property of the sampling process, it means that it is an intrinsic property of this process, as long as the integrity of the equipment is not damaged. Sampling correctness does not depend on circumstances external to the sampling process over which we have no control, such as the properties of the material to be sampled. Thus, we can state that sampling correctness is a primary quality of a sampling process over which we may have control. On the contrary, accuracy is a circumstantial property of a sampling process, and depends on various factors such as:
• Respect for the conditions of sampling correctness
• Properties of the material to be sampled over which we have no control.


Consequently, we can state that accuracy is a secondary property compared to correctness. We cannot directly control accuracy, but we can directly control correctness, which makes us wonder about the effectiveness of performing bias tests to control the validity of a sampling protocol or a sampling device. A correct sampling process is always accurate; however, an incorrect sampling process can be circumstantially accurate today, unacceptably biased in one direction tomorrow, and unacceptably biased in the other direction the day after tomorrow. Gambling on the accuracy of an incorrect sampling process is not recommended, unless your objective is to lose money. People have often told the author, "I know this sampling device is incorrect; however, it is accurate enough for my applications." Such a statement is very dangerous, because a bias in sampling is never constant, due to the transient nature of segregation. Basically, no one knows whether an incorrect sampling device is accurate enough, because no one knows what circumstances are going to show up. Several standards committees have made recommendations to control accuracy. We strongly disagree with them and recommend instead controlling correctness, which is the primary objective; then, and only then, acceptable accuracy may be the reward.

2.6. Practical Advantages of Controlling Sampling Correctness
The necessary conditions for selecting a correct sampling process are described in Part VII of this book. By definition, a correct sampling process gives all fragments or groups of fragments of a lot to be sampled an equal probability of being selected and becoming part of the sample, and a zero probability of selection to any material foreign to the lot. Fulfilling these conditions requires that true randomness be provided, and this will be discussed at length in Parts III and IV of this book. We know how to design, build, install, operate, and maintain a correct sampling station; in fact, it is the only thing we know how to do. If we were asked to design and build an accurate sampling station (secondary property), we would design and build a correct sampling station (primary property), because we know how to control the conditions of a sampling process, but we do not know how to control its results. We can indeed control the results only by using the TOS, which tells us that a correct sampling process is always accurate. Control of the correctness of a sampling station by a competent specialist consists of performing a critical inspection and a few simple measurements. This control is effective, unmistakable, quick, and cheap, and it does not disturb production. Above all, it not only detects the sources of incorrectness, but also indicates the necessary steps to be taken to render the sampling station correct. Therefore, it should be considered a nonnegotiable part of due diligence.

2.7. Disadvantages and Risks of Controlling Accuracy
An incorrect sampling process can become accurate when the heterogeneity of a material becomes circumstantially favorable. There is no apparent logic in generalizing such a transient phenomenon to the production of another hour, another day, or another material. The recommendation of controlling accuracy instead of correctness is


therefore a serious judgment error; furthermore, it is not logical, as it is nearly always cheaper to control correctness than accuracy. The experimental control of accuracy is a lengthy and costly process that always disturbs production, and it is not very well accepted by the people involved. Because segregation within a flowing stream, or any other lot to be sampled, is a transient phenomenon, any bias test can prove only one thing: that there is a bias. It cannot prove that there is no bias. If no bias is observed, the right statement is that the test is inconclusive; the difference involves judgment.

2.8. Conclusions
A good sampling strategy should provide the following chronological steps, which should be part of a required due diligence process:
• Study of the heterogeneity of the material of a given lot, whether a zero-dimensional, one-dimensional, or two-dimensional lot. We will explain later the reasons why the sampling of a three-dimensional lot is not an option.
• Optimization of the sampling protocols to minimize the Fundamental Sampling Error FSE, the Grouping and Segregation Error GSE, the long-range Heterogeneity Fluctuation Error HFE2, and the periodic Heterogeneity Fluctuation Error HFE3.
• Control of sampling correctness (i.e., choice of the sampling equipment and how to use it) to eliminate, or render negligible, the Increment Delimitation Error IDE, the Increment Extraction Error IEE, and the Increment Preparation Error IPE.
Please bear in mind that the control of accuracy is a dangerous approach that does not solve sampling problems, but can only look at them in a troubling, risky, and oversimplified way. At this point, the reader has all the necessary elements to proceed with his own judgment in using the following parts of this book as a valuable "tool" for making sound decisions on sampling processes. The control of sampling correctness is the only effective way to save time and money, to avoid serious shortcomings in quality control circles, and ultimately to fulfill the most basic requirements of due diligence.


Part II

Fundamental Statistical Concepts Used in the Theory of Sampling


3 Fundamental Statistical Concepts

The reader, particularly if he or she happens to be a statistician, may wonder if it is useful to introduce in this book a short course on probabilities (i.e., the study of random variables prior to experimentation) and especially on statistics (i.e., the study of random variables using data provided by experimentation). The answer is a definite yes, and the reason is simple: the author, who has given more than 500 short courses on sampling, and who has been in contact with many clients having sampling problems, found that most people involved with sampling in quality assurance and quality control circles have no basic knowledge of descriptive statistics, or long ago forgot this knowledge. Nevertheless, because much software is available and many articles cover the subject of sampling in a very short and superficial way, for better or worse, these people are using statistical concepts. Therefore, it is necessary to include this chapter on fundamental statistical concepts, so these wonderful tools are not misused.

The term "statistic" was used for the first time by the German professor Achenwall in 1748. In 1843, Cournot defined statistics as "a science having for objective the collection and coordination of numerous facts within a given category of events, thus obtaining quantified effects which are likely to be independent from happening only by accident." Statistical concepts are necessary for the development, understanding, and use of the TOS because they give information unpredictable in any other way; furthermore, they strongly link theory with reality. Statistical concepts are the basic "tools" of modern process and quality control programs. They prevent detrimental effects from accumulating dangerously for too long. They also make possible the anticipation of acceptable or unacceptable operating errors.

The concepts presented in this chapter have their limitations, which are often voluntarily or involuntarily forgotten by an operator, making the conclusions of his statistical evaluation not only naïve but also deprived of any scientific value. Let us imagine an operator who has collected two series of values from a certain experiment. He plots one series on the y-axis of a rectangular coordinate system and the other series on the corresponding x-axis. Thus, he is going to find a series of experimental points in the xy-plane, through which he draws a continuous line, obtains a graph, and calculates the equation of the graph. Meanwhile, more often than not, our operator is confronted with numerous problems:

• Interpolation: The temptation is great to allow the simplest continuous curve to fit inside the "area of influence" of each point; however, this curve may or may not be represented by a simple equation, except when it can be approximated by a straight line. For this straight line to be a reality, it may become convenient to change a few variables, forcing a phenomenon to obey a preconceived law that happens to be convenient for the operator. When intervals between points become


larger, interpolation may become dangerous because a large number of curves appear suitable, and the operator is likely to choose the curve that helps him prove his preconceived idea.

• Extrapolation: With the exception of the immediate vicinity of points corresponding to the limits of an experiment, the graph representing all points obtained by the operator should not be extrapolated beyond these limits, because there is neither a scientific nor a legitimate justification to do so. Similarly, for various reasons such as lack of time or funds, it is not uncommon to see statistical evaluations made on the analysis of two, three, or four samples, and decisions that should never have been made. It is dangerous to extrapolate the information obtained from the analysis of very few samples to an infinite population of potential samples, because there is neither a scientific nor a legitimate way to find out exactly what kind of probability distribution they belong to without extensive additional testing. This is especially true in the case of trace constituents. In fact, the common problem exposed here is a combination of extrapolation (i.e., the shape of a probability distribution defined beyond experimental points) and interpolation (i.e., the shape of a probability distribution defined between too few experimental points).

• Generalization: It is one of the most frequent sources of problems encountered with empirical approaches. For example, sampling characteristics obtained for a particular physical component are often generalized to other components, or sampling characteristics obtained for a particular physical component in one deposit are generalized to the same component in another deposit, which is extremely naïve. There are many such examples, even in forensic analyses. The author witnessed the study of a porphyry molybdenum deposit in which molybdenite occurrences were believed to be the same as those of an older mine of the same type several hundred miles away. Because preliminary mineralogical and sampling investigations were never performed, an accurate feasibility study of this molybdenum deposit could never be achieved, but it generated a huge exploration expense and a wrongly designed pilot plant for the company involved.

• Experimental Errors: All experiments are affected by errors (i.e., measurement errors, analytical errors, human errors, preparation errors, sampling errors, etc.). All these errors may be classified into several categories:
  ○ The error is a random variable, the average of which can be negligible (e.g., the average of the Fundamental Sampling Error [FSE] can often be negligible if we exclude the cases of low-level precious metals and trace constituents, where Poisson processes may take place, requiring a huge amount of data to correct the problem).
  ○ The error is not a purely random variable; its average is systematically biased in one direction (e.g., Increment Delimitation Error [IDE], Increment Extraction Error [IEE]).
  ○ The error is not a random variable, but an accidental error or blunder. These errors cannot be studied by statistics (e.g., Increment Preparation Errors [IPE]).
Without a clear understanding of the differences between these various categories of error, it is not possible to understand the complicated notion of heterogeneity and the capital notion of sampling correctness. Basically, our operator is completely lost.


• Interpretation of Experimental Results: Sampling plans, like those often recommended by regulatory agencies or standards committees, often involve a simple analysis of the variance of values obtained from an experiment without taking into consideration the many components responsible for this resulting variance. Indeed, it seems convenient to analyze a variance "after the facts," saving a great deal of effort in figuring out the numerous components responsible for the errors. Sometimes the operator will go even one step further and wishfully assume that these errors may, more or less, cancel each other. The problem is that some of these components are likely to be more important than others, and if our objective is to have control over a complex phenomenon, it is essential to know the law connecting each component to its sources. Furthermore, some of these components mainly affect precision, while others affect both accuracy and precision. Many experiments lead only to the observation of the resulting overall error, and there is no empirical way to analyze all the components of this error and their respective sources. Because variances are additive, the overall variance is often a very deformed image of its components; only the TOS has access to all these components and is capable of minimizing, by prevention, the strong impact of systematic errors on the total error. Experiments capable of effectively detecting one or several systematic biases are rare; they exist, but they must be selected and implemented with great precaution, such as the calibration of Pierre Gy's formula promoted by many practitioners.

3.1. Notion of Probability

When the probability P for a certain fragment of a lot to be selected is a certainty, by definition we write:

P = 1    (3.1)

When the probability P for a certain fragment of a lot not to be selected is a certainty, by definition we write:

P = 0    (3.2)

Thus, all probabilities P encountered are included in a closed interval between zero and one:

0 \leq P_i \leq 1    (3.3)

A given probability P may conveniently be expressed in percent; for example, when P = 0.1, it is strictly equivalent to write P = 10%. Note that when P = 1, it is equivalent to write P = 100%. Figure 3.1 illustrates a typical probability distribution.

Suppose the probability distribution of all possible outcomes of the critical content a_s of a sample collected in a lot L of true unknown critical content a_L is known, and the distribution is represented by the sketch shown in Figure 3.1.

FIGURE 3.1 Illustration of a typical probability distribution (probability density f(a_s), with the shaded area P between a_{s1} and a_{s2}).

We can then calculate any probability P for a_s to fall between two known or fixed limits a_{s1} and a_{s2} as follows:

P[a_{s1} \leq a_s \leq a_{s2}] = \int_{a_{s1}}^{a_{s2}} f(a_s) \, da_s    (3.4)

where da_s is an infinitesimal constant increment and P the proportion of the surface between a_{s1} and a_{s2} versus the total surface of the distribution. These notions become much clearer when we draw a sketch as illustrated in Figure 3.1; probabilities and statistics are often easier to understand by drawing a simple sketch. In the same way, if a_s is expressed as part of one, the equation representing the total probability distribution is:

P[0 \leq a_s \leq 1] = \int_{0}^{1} f(a_s) \, da_s = 1    (3.5)

which brings us back to the meaning of equation (3.1), where P = 1 when 100% of the surface area representing all the possible values taken by a_s is considered. Note that if a_s = 1 we are dealing with a pure constituent or mineral. There are numerous possible laws of distribution for a_s and these may or may not be known. We will take a quick look at some of these distribution laws; however, we will first define the probability law of a random variable in general.

3.2. Probability Law of a Random Variable

We shall learn to systematically express in a quantitative way not only the possible outcomes of a critical content such as a_s, but also the probability attached to these outcomes.

3.2.1. Notion of Random Variable

Suppose that when we determine the gold content a_s of samples selected from a lot of ore coming from the mine, we may obtain several categories of values for a_s, each of them being associated with a certain domain of probability, the surface of which can be measured:


Gold content a_s (g/t)    Probability
a_s < 1                   10%
1 \leq a_s < 5            75%
a_s \geq 5                15%

What we did is to associate the various possible values of a variable (e.g., the gold content of a sample) with the chances for these values to be observed. Not only are the various classes of possible grades identified, but their chances of occurring are also measured. The grade of a sample has been quantified by the notion of random variable; however, as we will see in due time, things may not be as simple, because phenomena may not turn out to be as random as we may conveniently imagine. Nevertheless, the notion of a random variable can be generalized to a large number of situations such as the weight of a sardine in a food processing plant, the ash content of a coal shipment, the measurement error of a spectrographic determination, and so on.

3.2.2. From a Random Variable to Its Probability Law

A random variable was defined by associating each of its possible values with a certain amount of probability. To find out the probability law of a random variable, we simply have to know each amount of probability associated with all the possible values of the random variable. As an example, this may be illustrated by the possible outcomes of the BTU (British Thermal Unit) content of a coal shipment as shown in Table 3.1.

3.2.3. Definition of a Probability Law

A probability law expresses the totality of the various possible outcomes of a random variable associated with their corresponding amount of probability, the sum of which, by definition, must be equal to 1:

\sum_{i=1}^{n} P_i = 1    (3.6)

3.2.4. Graphic Representation of a Probability Law (Discrete Variable)

Histograms are used to represent a probability law, either as a noncumulated amount of probability (Figure 3.2) or as a cumulated amount of probability (Figure 3.3). The histogram is the simplest statistical tool. The width of each interval is a matter of judgement.

TABLE 3.1
Probabilities associated with the BTU content of an imaginary coal shipment

Possible BTU contents    Corresponding probability (P_i in %)
From 8000 to 8500          5
From 8501 to 9000         10
From 9001 to 9500         20
From 9501 to 10,000       30
From 10,001 to 10,500     20
From 10,501 to 11,000     10
From 11,001 to 11,500      5


As data increase, the resolution of the histogram becomes better, as does the identification of the probability law associated with the data.

FIGURE 3.2 Typical noncumulated histogram (P_i in % versus BTU content).

FIGURE 3.3 Typical cumulated histogram (P_i in % versus BTU content).
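To make the construction of Figures 3.2 and 3.3 concrete, here is a minimal Python sketch (not from the original text) that builds the noncumulated and cumulated histograms from the class probabilities of Table 3.1 as reconstructed above; the text-based bar rendering is purely illustrative.

```python
# A minimal sketch of Figures 3.2 and 3.3, using the class probabilities of
# Table 3.1; the text rendering stands in for the book's drawn figures.
classes = ["8000-8500", "8501-9000", "9001-9500", "9501-10,000",
           "10,001-10,500", "10,501-11,000", "11,001-11,500"]
p_i = [5, 10, 20, 30, 20, 10, 5]  # noncumulated probabilities, in %

cumulated, total = [], 0
for p in p_i:
    total += p
    cumulated.append(total)  # the running sum builds the cumulated histogram

for c, p, cp in zip(classes, p_i, cumulated):
    # one '#' per percent gives a crude sideways noncumulated histogram
    print(f"{c:>14} | {'#' * p:<30} P_i = {p:2d}%  cumulated = {cp:3d}%")

assert total == 100  # equation (3.6): the probabilities must sum to one
```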


3.3. Dependence between Random Variables

Random variables may be independent from one another. The fact that constitution heterogeneity generates short-range quality fluctuations among increments selected on a conveyor belt at regular intervals has nothing to do with the fact that the same constitution heterogeneity will generate additional short-range fluctuations during subsequent sampling stages performed on the same increments. The sampling selection performed at one stage is completely independent of the sampling selection performed during other stages.

Random variables may be exclusive from one another. The fact that an analytical subsample contains the only coarse gold particle present in a sample excludes the possibility that another analytical subsample selected from the same sample will contain nearly as much gold; people go to litigation when they do not understand this.

Random variables may be nonexclusive from one another. The fact that a mineral shows important segregation taking place may or may not affect, to the first order, the amount of segregation generated by another mineral.

Random variables may be dependent on one another. The fact that a lot of material shows an important constitution heterogeneity for a given constituent will be accountable for the fact that the same constituent within the same material may be affected by an important segregation.

These few comments lead to the following three essential theorems that are useful to keep in mind:

1. If random variables are independent and exclusive, the averages of their respective probability distributions are additive.
2. If random variables are independent but not exclusive, the averages of their respective probability distributions are additive; however, we shall subtract the product of these averages from the sum.
3. If random variables are dependent, the averages of their respective probability distributions are multiplicative.

This seems simple at first; however, the notion of dependence or independence between variables can rapidly become extremely complex and require an in-depth study of probability laws, which is beyond our objectives.

3.4. Position Parameters and Characterization of a Probability Distribution

3.4.1. The Arithmetic Average x̄ of a Discrete Set Is Defined as

\bar{x} = \frac{1}{N}(x_1 + x_2 + x_3 + \ldots + x_N) = \frac{1}{N} \sum_{i=1}^{N} x_i    (3.7)


3.4.2. The Quadratic Average x_Q of a Discrete Set Is Defined as

x_Q = \sqrt{\frac{1}{N} \sum_{i=1}^{N} x_i^2}    (3.8)

3.4.3. The Geometric Average x_G of a Discrete Set Is Defined as

\log x_G = \frac{1}{N} \sum_{i=1}^{N} \log x_i    (3.9)

3.4.4. The Harmonic Average x_H of a Discrete Set Is Defined as

x_H = \frac{N}{\sum_{i=1}^{N} \frac{1}{x_i}}    (3.10)

The harmonic average is of great interest in Ingamells's work.

3.4.5. The Weighted Average x_M of a Discrete Set Is Defined as

x_M = \frac{1}{\sum_{i=1}^{N} M_i} \sum_{i=1}^{N} M_i x_i    (3.11)

3.4.6. The Median

The median is the value of a characteristic that splits the probability distribution into two sets of equal proportion or frequency; that is, 50% of the distribution is below the median and 50% is above.

3.4.7. The Mode

The mode γ is the value of a characteristic where the probability reaches a global or local maximum (i.e., it is the most probable result). The most probable result will be of great relevance in the work of Ingamells presented later in Part V.

3.4.8. Dispersion Parameters

The easiest way to measure the dispersion of values x_i is to compare them to their arithmetic average x̄, which can be done by measuring all the differences x_i - x̄. Because the average of all these differences equals zero, it is convenient to work with the square of the differences (x_i - x̄)^2. Thus, we can define the experimental estimate of the sample variance as follows:

s^2 = \frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})^2    (3.12)
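The position and dispersion parameters of equations (3.7) through (3.12) are easy to verify numerically. The following Python sketch, using a small hypothetical data set and hypothetical weights, is an illustration only:

```python
import math

# A minimal sketch of Sections 3.4.1 through 3.4.8 on invented data.
x = [2.0, 3.0, 5.0, 10.0]   # hypothetical assay values
M = [1.0, 1.0, 2.0, 4.0]    # hypothetical weights for the weighted average
N = len(x)

arithmetic = sum(x) / N                                  # equation (3.7)
quadratic  = math.sqrt(sum(v * v for v in x) / N)        # equation (3.8)
geometric  = math.exp(sum(math.log(v) for v in x) / N)   # equation (3.9)
harmonic   = N / sum(1.0 / v for v in x)                 # equation (3.10)
weighted   = sum(m * v for m, v in zip(M, x)) / sum(M)   # equation (3.11)
s2 = sum((v - arithmetic) ** 2 for v in x) / N           # equation (3.12)

# For positive, nonconstant data: harmonic < geometric < arithmetic < quadratic
assert harmonic < geometric < arithmetic < quadratic
print(arithmetic, quadratic, geometric, harmonic, weighted, s2)
```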


Figure 3.4 shows the dispersion parameters on a normal, Gaussian distribution, while Figure 3.5 shows the same parameters on a double Poisson distribution that will be later introduced in Ingamells’s work.

FIGURE 3.4 The normal distribution (arithmetic average, mode, and median at the same place; 50% of the area on each side).

FIGURE 3.5 The double Poisson distribution (global mode, local mode, median, and arithmetic average at different places).


In practice, we use the variance for calculations because only the variance has additive properties. The square root of the variance is defined as the experimental estimate s of the true unknown standard deviation σ:

s = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})^2}    (3.13)

It is important to note that for random variables standard deviations cannot be added, although their use is very popular; only variances can be added. We shall differentiate s, which is the sample standard deviation, from σ, which is the true, unknown population standard deviation. The calculation of s is based on the assumption that the population made of N units is an isolated population; however, this is often not the case. If this population is included inside another infinite or very large population made of the same units, the arithmetic average x̄ is no longer the true average and it has an error of its own; it becomes necessary to take this into consideration with the standard deviation s_x̄ of x̄ around the true unknown average. It is well known that s_x̄ can be expressed as follows:

s_{\bar{x}} = \frac{s_T}{\sqrt{N}}    (3.14)

where s_T is the total standard deviation. It is understood that x̄ is only one of the possible averages provided by populations made of N units. The total variance s_T^2 can then be written:

s_T^2 = s^2 + s_{\bar{x}}^2    (3.15)

Thus:

s_T^2 = \frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})^2 + \frac{s_T^2}{N}

s_T^2 \left( \frac{N-1}{N} \right) = \frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})^2

s_T = \sqrt{\frac{1}{N-1} \sum_{i=1}^{N} (x_i - \bar{x})^2}    (3.16)

This is the correct formula to use when we estimate the total dispersion of measurements performed on samples selected at random from a large population. As N becomes large, s approaches s_T. Here, we will always assume that s is equal to s_T to avoid the use of too many subscripts. In general, it is easier to use the variance, which is the square of the standard deviation, because only the variance has additive properties; this important statement should never be forgotten:


s_T^2 = \frac{1}{N-1} \sum_{i=1}^{N} (x_i - \bar{x})^2    (3.17)
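A quick simulation makes the practical difference between equation (3.12) and equation (3.17) visible. The sketch below is an illustration under stated assumptions: the large population is simulated as Gaussian and the sample size N = 5 is arbitrary.

```python
import random, statistics

# A minimal check that only the N-1 divisor of equation (3.17) estimates the
# variance of a larger population without a systematic deficit.
random.seed(7)
population = [random.gauss(100.0, 15.0) for _ in range(100_000)]
true_var = statistics.pvariance(population)

N, trials = 5, 20_000
biased = unbiased = 0.0
for _ in range(trials):
    sample = random.sample(population, N)
    m = sum(sample) / N
    ss = sum((v - m) ** 2 for v in sample)
    biased += ss / N          # equation (3.12): isolated population
    unbiased += ss / (N - 1)  # equation (3.17): sample from a larger population

print(true_var, biased / trials, unbiased / trials)
# the N divisor runs near (N-1)/N of true_var; the N-1 divisor near true_var
```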

We may define the true variance σ^2 of a continuous set in the same way:

\sigma^2 = \int_{-\infty}^{+\infty} (x - m)^2 f(x) \, dx    (3.18)

where m is the expected value or the mean or exact average of x.

3.4.9. The Pearson Variation Factor: The Relative Standard Deviation

As will be seen, it is often much easier to work with the relative standard deviation s_R, which is the ratio of the standard deviation s of a population to its arithmetic average x̄; s_R is also called the Pearson variation factor:

s_R = \frac{s}{\bar{x}}    (3.19)

In the same way we will use the relative, dimensionless variance:

s_R^2 = \frac{s^2}{\bar{x}^2}    (3.20)

The main advantage of working with relative values is that they are dimensionless, and results are easy to compare from one experiment to another. In this third edition of the book we use only the relative standard deviations s_R and σ_R unless specified otherwise. For simplification purposes, we will write s_R = s and σ_R = σ.

3.5. Permutations and Combinations

Certain statistical problems involve different arrangements for a set of objects, for which a counting process is necessary. Such a counting process is obtained by the notions of permutations and combinations. We will recall only the basic definitions and formulas used by such counting processes.

3.5.1. Permutations

A permutation PM of r objects in a set is an arrangement without possible duplication of r objects from this set. The permutation of n objects selected r at a time can be written as follows:

PM_r^n = \frac{n!}{(n-r)!}    (3.21)


If r = n, then:

PM_n^n = n!    (3.22)

Suppose that we have n objects in which n_1 is a subset of one kind, n_2 a subset of another kind, and so on:

n = n_1 + n_2 + n_3 + \ldots + n_K    (3.23)

Then the number of distinguishable permutations of n objects is:

\frac{n!}{n_1! \, n_2! \, n_3! \ldots n_K!}

3.5.2. Combinations

A combination C of r objects from a set made of n objects is a subset that contains r distinct objects and can be written as follows:

C_r^n = \frac{n!}{(n-r)! \, r!}    (3.24)
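The counting formulas (3.21) through (3.24) map directly onto the Python standard library, as the following minimal sketch shows; the values of n, r, and the subset sizes are arbitrary examples, not values from the text.

```python
import math

# A minimal sketch of equations (3.21)-(3.24).
n, r = 10, 3

pm = math.perm(n, r)                          # equation (3.21): n!/(n-r)!
assert pm == math.factorial(n) // math.factorial(n - r)

assert math.perm(n, n) == math.factorial(n)   # equation (3.22): r = n

# Distinguishable permutations of n objects split into subsets n1, n2, n3
subsets = [4, 3, 3]                           # hypothetical sizes, sum = n
distinct = math.factorial(sum(subsets))
for k in subsets:
    distinct //= math.factorial(k)

c = math.comb(n, r)                           # equation (3.24): n!/((n-r)! r!)
assert c == pm // math.factorial(r)
print(pm, distinct, c)
```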

3.6. The Gaussian Model: The Normal Probability Distribution

The Gaussian model is the most common model in use in conventional statistics. This model is characterized by a unique property: if large enough samples are taken from the set of the possible contents of a certain component, the distribution of the average values obtained from those samples will tend toward a bell-shaped Gaussian distribution. This property is called the central limit theorem. A random variable obeys a normal law of distribution when its probability density f(x) can be written as follows:

f(x) = \frac{1}{\sigma \sqrt{2\pi}} \, e^{-(x - \bar{x})^2 / 2\sigma^2}    (3.25)

where x̄ is the arithmetic average and σ is the true unknown standard deviation, in which the distances ±1σ from the average x̄ are defined as the x-coordinates of both inflexion points of the above function f(x). Most of the time, however, we only have access to the estimated standard deviation s. Then, investigating the normality of a distribution consists of comparing the characteristics of the estimated standard deviation s with those of an ideal model; for example, if we have:

P{-s < x - x̄ < +s} approaches 68%
P{-2s < x - x̄ < +2s} approaches 95%
P{-3s < x - x̄ < +3s} approaches 99.7%


FIGURE 3.6 Illustration of a typical standard normal distribution (arithmetic average, mode, and median at the same place; inflexion points at ±1s in the ideal model; about 34% of the area between the average and ±1s, 13.5% between ±1s and ±2s, and 2.5% beyond ±2s).

thus, the distribution is approximately normal. An illustration of a typical standard normal probability distribution is shown in Figure 3.6.
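As a hedged illustration of this rule of thumb, the following Python sketch compares the observed fractions within x̄ ± s, ± 2s, and ± 3s against the ideal model; the data are simulated Gaussian values standing in for replicate assays.

```python
import random

# A minimal sketch of the normality check quoted above, on simulated data.
random.seed(1)
data = [random.gauss(10.0, 2.0) for _ in range(10_000)]

N = len(data)
mean = sum(data) / N
s = (sum((v - mean) ** 2 for v in data) / (N - 1)) ** 0.5  # equation (3.17)

for k, target in ((1, 68.0), (2, 95.0), (3, 99.7)):
    inside = sum(1 for v in data if abs(v - mean) < k * s)
    print(f"within ±{k}s: {100.0 * inside / N:5.1f}%  (ideal model: {target}%)")
```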

3.7. The Binomial Model

The binomial distribution often occurs when two components made of completely liberated discrete fragments are mixed together. The shape of the distribution is dictated by the proportions of both components. A discrete random variable obeys a binomial law if its probability distribution can be written as follows:

P(x = r) = C_r^n \, p^r q^{n-r}    (3.26)

where p is the chance of selecting the component of interest, q is the chance of selecting the other components in which we are not interested, n is the number of trials, and r is the number of successes. This distribution has many practical applications and, as we will see, for trace constituents q approaches one and p becomes very small, which leads to the Poisson model. I cannot emphasize enough how critically important this model is; it is the cornerstone of all Ingamells's work.
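A minimal sketch of equation (3.26) for a trace constituent follows; the values of p and n are hypothetical and chosen only to show the strong skewness that announces the Poisson model.

```python
import math

# A minimal sketch of equation (3.26): p is the tiny chance that a selected
# fragment is a grain of interest, n the number of fragments in a subsample.
p = 0.0005           # chance of selecting the component of interest
q = 1.0 - p          # chance of selecting everything else (close to one)
n = 2000             # number of trials (fragments in the subsample)

def binomial_pmf(r: int) -> float:
    """P(x = r) = C(n, r) p^r q^(n-r), equation (3.26)."""
    return math.comb(n, r) * p**r * q**(n - r)

for r in range(5):
    print(f"P(x = {r}) = {binomial_pmf(r):.4f}")
# With p small and n large the distribution is strongly skewed: many
# subsamples contain no grain of interest at all.
```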


3.8. The Poisson Model

The Poisson model is a limit case of the binomial model.

3.8.1. Limitations of Normal and Lognormal Statistical Models

At one time, scientists became convinced that the Gaussian distribution was universally applicable, and an overwhelming majority of applications of statistical theory are based on this distribution. A common error has been to reject "outliers" that cannot be made to fit the Gaussian model or some modification of it, such as the popular lognormal model. The tendency, shown by some geostatisticians, has been to make the data fit a preconceived model instead of searching for a model that fits the data. On this issue, a Whittle quote172 later used and modified by Michel David173 was superb: "there are no mathematical models that can claim a divine right to represent a variogram." The variogram is not the target in this chapter; however, the philosophy remains the same. It is now apparent that outliers are often the most important data points in a given data set, and a good understanding of Poisson processes is a convenient tool to use.

3.8.2. Poisson Processes

Poisson and double Poisson processes explain why highly skewed distributions of assay values can occur. The grade and location of an individual point assay that follows a single or double Poisson distribution will have virtually no relationship, and it will be impossible to assign a grade other than the mean value to mineable small-size blocks. Similar difficulties can occur with the assessment of impurity contents in very valuable commodities.

3.8.2.1. The Single Poisson Process

The Poisson model is a limit case of the binomial model. It would be inappropriate to cover this long development in this chapter. Instead, we may spend some time exploring the profound difference between the binomial model and the Poisson model, which may be of interest to sampling practitioners. Some people may object to the following development, which can be found nearly everywhere in the literature; however, it is important to have an integrated document in our mission to be clearly understood. The following discussion is tailored to the industries mentioned earlier, so it is more palatable than what can be read in a statistics book. Most people are reluctant to use the Poisson model because they do not understand its concept. Applying equation (3.26) to a situation where the number of trials or small samples n is very large and the chance p of selecting one very small grain is very small, and after transposing equation (3.24) into equation (3.26), it becomes possible to write P for two successive values of x as follows:

P(r) = \frac{n!}{(n-r)! \, r!} \, p^r q^{n-r}    (3.27)


P(r-1) = \frac{n!}{(r-1)! \, (n-r+1)!} \, p^{r-1} q^{n-r+1}    (3.28)

After taking the ratio of equation (3.27) over equation (3.28) and simplifying:

\frac{P(r)}{P(r-1)} = \frac{n-r+1}{r} \cdot \frac{p}{q}    (3.29)

For convenience, let's multiply and divide this ratio by n:

\frac{n-r+1}{r} \cdot \frac{p}{q} = \frac{pn}{r} \cdot \frac{1 - (r-1)/n}{q}    (3.30)

But, for trace constituents and a sample mass too small by several orders of magnitude, n can be very large, q is almost equal to 1, and r is very small; then, in (3.30), the last factor is practically equal to 1:

\frac{1 - (r-1)/n}{q} = 1    (3.31)

Now let's write θ = pn and define θ as the hypothetical average number of mineral grains per sample; therefore:

\frac{P(r)}{P(r-1)} = \frac{pn}{r} = \frac{\theta}{r}    (3.32)

The objective is to find a formula with a probability law like P(r) = ..., so let's go back to the above ratio and successively make r = 1, r = 2, r = 3, ..., r = x:

\frac{P(r=1)}{P(r=1-1=0)} = \frac{\theta}{1}, therefore P(r=1) = \theta \cdot P(r=0)

\frac{P(r=2)}{P(r=2-1=1)} = \frac{\theta}{2}, therefore P(r=2) = \frac{\theta}{2} P(r=1) = \frac{\theta^2}{2} P(r=0)

\frac{P(r=3)}{P(r=3-1=2)} = \frac{\theta}{3}, therefore P(r=3) = \frac{\theta}{3} P(r=2) = \frac{\theta^3}{3 \cdot 2} P(r=0)


and so on. The values of the ratio can be generalized as follows:

P(x = r) = \frac{\theta^r}{r!} \, P(r = 0)    (3.33)

Probabilities may be summed as follows:

\sum_{r=0}^{\infty} P(x = r) = P(r=0) + P(r=1) + P(r=2) + \ldots + P(r=n) + \ldots    (3.34)

= P(r=0) + \theta P(r=0) + \frac{\theta^2}{2} P(r=0) + \frac{\theta^3}{2 \cdot 3} P(r=0) + \ldots + \frac{\theta^r}{r!} P(r=0) + \ldots = 1

= P(r=0) \left[ 1 + \theta + \frac{\theta^2}{2!} + \frac{\theta^3}{3!} + \ldots + \frac{\theta^r}{r!} + \ldots \right] = 1    (3.35)

where the second term is an exponential series with a limit value of e^θ:

e^{\theta} = 1 + \theta + \frac{\theta^2}{2!} + \frac{\theta^3}{3!} + \ldots + \frac{\theta^r}{r!} + \ldots    (3.36)

then:

P(r = 0) \, e^{\theta} = 1    (3.37)

or:

P(r = 0) = e^{-\theta}    (3.38)

consequently:

P(x = r) = \frac{\theta^r}{r!} \, e^{-\theta}    (3.39)
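A short numerical check (an illustration, not part of the original derivation) confirms that equation (3.39) is the limit of equation (3.26) when n is large, p is small, and θ = pn is held fixed:

```python
import math

# A minimal check that the Poisson law (3.39) approximates the binomial law
# (3.26) for large n and small p, with theta = pn fixed. Values hypothetical.
n, p = 2000, 0.0005
theta = p * n        # average number of grains per sample

for r in range(5):
    binom = math.comb(n, r) * p**r * (1.0 - p)**(n - r)        # equation (3.26)
    poisson = theta**r * math.exp(-theta) / math.factorial(r)  # equation (3.39)
    print(f"r = {r}: binomial {binom:.6f}  Poisson {poisson:.6f}")
```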

Because of its many applications, the author thought it was important to demystify this extremely important equation. If n is the number of trials, the variance of the Poisson distribution is θ = npq ≈ np because q is close to 1. The mean value of the Poisson distribution is θ = np.

3.8.2.2. The Double Poisson Process

When primary samples taken from the deposit contain the constituent of interest in a limited average number ν of discrete grains or clusters of such grains (i.e., P[y = n]), and they are subsampled in such a way that the subsamples also contain discrete grains of reduced size in a limited average number λ (i.e., P[x = r]), a double Poisson distribution of the assay values is likely.
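As an illustration of this compounding of two Poisson stages, the following sketch simulates the process under hypothetical parameters ν and f; the fraction of barren subsamples it produces can be compared with equation (3.54) derived below.

```python
import random, math

# A minimal simulation of the double Poisson process: the primary sample
# carries a Poisson number of grains with mean nu, and the subsample retains
# a Poisson number with mean n*f. Parameters are hypothetical.
random.seed(3)
nu, f = 2.0, 0.1          # primary mean grain count and subsampling ratio

def poisson_draw(mean: float) -> int:
    """Draw a Poisson variate by inversion of the cumulated distribution."""
    u, r, p = random.random(), 0, math.exp(-mean)
    c = p
    while u > c:
        r += 1
        p *= mean / r
        c += p
    return r

subsample_grains = []
for _ in range(50_000):
    n = poisson_draw(nu)                          # grains in primary, P(y = n)
    subsample_grains.append(poisson_draw(n * f))  # grains kept, P(x = r)

frac_zero = subsample_grains.count(0) / len(subsample_grains)
print(frac_zero, math.exp(nu * (math.exp(-f) - 1.0)))  # equation (3.54) below
```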


The probability P of r grains of mineral appearing in any subsample is determined by the sum of the probabilities of r grains being generated from samples with n grains (not to be confused with n trials as defined earlier). Let's define the ratio f:

f = \frac{\lambda}{\nu}    (3.40)

With λ = ν f, or λ = n f for each possibility, the equation for the resulting, compounded probability of the double Poisson distribution is:

P(x = r) = \sum_n P(y = n) \cdot P(x = r) = e^{-\nu} \sum_{n=0}^{\infty} \frac{\nu^n}{n!} \cdot \frac{(nf)^r e^{-nf}}{r!} = \frac{f^r e^{-\nu}}{r!} \sum_{n=0}^{\infty} \frac{n^r \nu^n e^{-nf}}{n!}    (3.41)

for r = 0, 1, 2, 3, ... This is the probability of obtaining a sample with r grains of the constituent of interest. The equation could be modified using improved Stirling approximations given later in Part V of this book.

In practice, one does not usually count grains; concentrations are measured. The conversion factor from number of grains to concentration (percent X, for example) is C, the contribution of a single average grain. Because the variance of a Poisson distribution is equal to the mean:

s = C \sqrt{\nu}    (3.42)

Therefore:

s^2 = C^2 \nu    (3.43)

But variances of random variables are additive, then:

s^2 = C^2 \nu + c^2 \lambda    (3.44)

The data available are usually assays in % metal, gram/ton, ppm, or ppb. They are related by the equation:

x_i = a_H + c \, r_i    (3.45)

where x_i is the assay value of a particular sample, in % for example; a_H is the low, more homogeneous background concentration, in % for example, which is easier to sample (for an in-depth definition and calculation of a_H refer to Part V); r_i is the number of mineral grains in the sample; c is the contribution of one grain to the assay, in % for example:

c = \frac{\text{weight of grain}}{\text{weight of sample}} \times 100\%    (3.46)

Thus, the probability of a sample having an assay value of x_i equals the probability of the sample having r_i grains when a_H is relatively constant.


The mean value of a set of assays can be shown to be:

\bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i = a_H + c\lambda    (3.47)

For a single Poisson distribution, this equation would be:

\bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i = a_H + C\nu    (3.48)

where x̄ is an estimator of the unknown average content a_L of the constituent of interest. Assuming sampling is correct, and for the sake of simplicity, in the following part of this chapter we substitute a_L for x̄. Then:

a_L = a_H + C\nu = a_H + c\lambda    (3.49)

a_L - a_H = C\nu = c\lambda    (3.50)

then:

\frac{\lambda}{\nu} = \frac{C}{c} = f    (3.51)

Substituting (3.51) in (3.44):

s^2 = C(a_L - a_H) + c(a_L - a_H) = fc(a_L - a_H) + c(a_L - a_H)    (3.52)

whence:

f = \frac{s^2 - c(a_L - a_H)}{c(a_L - a_H)}    (3.53)

The probability that there will be no difficult-to-sample grains of the constituent of interest in a randomly taken subsample is found by putting r = 0 in (3.41):

P(x = 0) = e^{-\nu} \sum_n \frac{\nu^n e^{-nf}}{n!} = e^{\nu (e^{-f} - 1)}    (3.54)

If a data set fits a double Poisson distribution, the parameters ν and λ of this distribution may be found from a reiterative process, as follows: make a preliminary low estimate of a_H; give c an arbitrary low value; calculate a preliminary value for f from (3.53), and for ν by rearranging (3.49):

\nu = \frac{a_L - a_H}{C} = \frac{a_L - a_H}{fc}    (3.55)

Substitute these preliminary estimates in (3.54), averaging the lowest P(x = 0) fraction of the data to obtain a new estimate of a_H. Increment c and repeat until a best fit is found.
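As an illustration only, the following Python sketch walks through this reiterative process; the assay list, the starting values, the increment of c, and the crude stopping rule are all hypothetical placeholders, and Part V gives the rigorous treatment.

```python
import math

# A hedged sketch of the reiterative process using equations (3.53)-(3.55)
# and (3.54), on an invented assay list with two Poisson "outliers".
assays = [0.8, 0.9, 1.0, 1.1, 0.9, 4.2, 0.8, 1.0, 6.5, 0.9]  # hypothetical %
N = len(assays)
a_L = sum(assays) / N
s2 = sum((x - a_L) ** 2 for x in assays) / (N - 1)

a_H = min(assays)   # preliminary low estimate of the background content
c = 0.05            # arbitrary low value of one grain's contribution
last = None
for _ in range(200):
    f = (s2 - c * (a_L - a_H)) / (c * (a_L - a_H))      # equation (3.53)
    if f <= 0:
        break                                           # c has grown too large
    nu = (a_L - a_H) / (f * c)                          # equation (3.55)
    p0 = math.exp(nu * (math.exp(-f) - 1.0))            # equation (3.54)
    low = sorted(assays)[: max(1, round(p0 * N))]       # lowest P(x=0) fraction
    a_H = sum(low) / len(low)                           # new background estimate
    last = (c, f, nu, p0, a_H)
    c += 0.05                                           # increment c and repeat

print("last iterate (c, f, nu, P(x=0), a_H):", last)
```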


3.9. Preventive Recommendations Relative to Sampling

3.9.1. The Grade aS of a Sample Obeys a Normal Distribution

The Theory of Sampling is a preventive tool that may be compared to the possession of an "insurance policy" against major financial losses. To prevent ambiguous difficulties, every time it is economically feasible, sampling should be conducted in such a way that the law of distribution of the grade a_S of a sample of particulate material can be assumed to be a normal distribution. It is not always feasible to ascertain that the grade a_S will follow a normal distribution. For trace constituents and precious metals, the grade a_S is likely to follow a Poisson distribution, and therefore special precautions will have to be taken using Ingamells's work.

3.9.2. The Grade aS of a Sample Obeys a Poisson Distribution

If enough data are at our disposal, it is not wrong for the grade a_S of a sample to obey a Poisson distribution, and the estimated average of such a distribution should be an unbiased estimator of the real mean a_L of the lot, assuming the sampling operation was handled correctly. When the constitution heterogeneity is large, however, and few data are at our disposal, the apparent average is found to slowly drift from the true average toward the mode of the distribution. The fewer the data, the stronger the drift. We should avoid such situations, or at least find a way to become perfectly aware of their occurrence, to prevent making decisions with grave economic consequences.

3.9.3. Capital Notion of Probabilistic Selection Process

An immediate consequence of these probability laws is that if we do not have a systematic sampling error, and only if the distribution of the grade a_S of a sample is normal, then a_S accepts for central value the grade a_L of the initial lot. A systematic sampling error is prevented if the following conditions are fulfilled:

• The entire lot is freely accessible to the sampling tool, so that each constituent of the lot has an equal chance of being part of the sample.
• The sampling scheme is impartial, so that each constituent has an equal chance of being part of the sample.
• The distribution of the grade a_S of the sample obeys a normal distribution, which is an optimistic assumption in the case of trace constituents, such as in environmental sampling, sampling for impurities, or sampling for precious metals.

To fulfill the most elementary requirements of due diligence, these conditions should not be negotiable.

3.10. Random and Systematic Errors

With the exception of accidental errors, such as Increment Preparation Errors (IPE), which affect the integrity of a sample, all other sampling errors are random variables


characterized by a given average, nil or not, and a given variance that is never nil. It is by exaggeration, often because it is convenient, that we speak of random errors with a nil average and a variance different from zero, or of systematic errors with a nil variance and an average different from zero. Actually, all errors such as the Fundamental Sampling Error (FSE), the Increment Delimitation Error (IDE), the Increment Extraction Error (IEE), and so on, have two components:

• a random component characterized by the variance only
• a nonrandom component characterized by the average only.

In fact, the variance and the average of an error are physically complementary, even if they are very different properties. Therefore, when several random variables such as FSE, IDE, IEE, and so on are independent in probability (it should be clear we are not talking about independent quantities, but about independent differences between quantities, which is not the same thing), they are cumulative, and it is perfectly justified to write the following relationships if these errors occur separately:

TSE = FSE + IDE + IEE + ...

where TSE is the Total Sampling Error. For the averages of these errors we can write:

m(TSE) = m(FSE) + m(IDE) + m(IEE) + ...

For the variances of these errors we can write:

s^2(TSE) = s^2(FSE) + s^2(IDE) + s^2(IEE) + ...

The above property of additivity is often well appreciated by those involved in sampling practices.

3.10.1. Notion of Precision

Precision should not be confused with accuracy, and it is wrong to include the notion of accuracy within the notion of precision. A sampling selection is said to be precise when the selection sampling error (SSE) is little dispersed around its average, regardless of whether this average, the mean m(SSE) of the selection sampling error, is zero or different from zero. In a similar way we could refer to the Increment Selection Error (ISE). Precision concerns measuring the variability of a sample around the average of the lot from which it has been collected. This measurement is generally expressed as the variance of the selection sampling error σ^2(SSE).

3.10.2. Notion of Accuracy

Accuracy is independent from precision, and a sampling selection is said to be accurate when the selection sampling error has a mean m(SSE) approaching zero. In other words,


the value taken by m(SSE) is the value of the sampling bias; comparing m(SSE) with σ^2(SSE) leads to the notion of representativeness.

3.10.3. Notion of Representativeness

A sampling selection is said to be representative when the mean square r^2(SSE) of the selection sampling error, which is itself the sum of the square of the mean m^2(SSE) and of the variance σ^2(SSE), is smaller than a certain standard of representativeness r_0^2(SSE) regarded as acceptable and to be clearly determined by logical Data Quality Objectives (DQO):

r^2(SSE) = m^2(SSE) + \sigma^2(SSE) \leq r_0^2(SSE)    (3.56)

Representativeness is the exclusive property that synthesizes both accuracy and precision. Now, if we are speaking about one error e, we shall refer to its mean square r^2(e), which necessarily encompasses its variance σ^2(e) and its squared mean m^2(e):

r^2(e) = m^2(e) + \sigma^2(e)    (3.57)
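A minimal sketch of equations (3.56) and (3.57), with hypothetical relative values for the bias, the variance, and the DQO standard:

```python
# The mean square of an error combines its squared mean (bias) and variance.
m_sse = 0.02         # mean of the selection sampling error (a 2% bias)
var_sse = 0.0009     # variance of the selection sampling error (s = 3%)
r0_squared = 0.0025  # standard of representativeness from the DQO (r0 = 5%)

r_squared = m_sse**2 + var_sse        # equation (3.56)
print(f"r2(SSE) = {r_squared:.6f}")
print("representative" if r_squared <= r0_squared else "not representative")
```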

3.10.4. Graphic Representation of the Notion of Accuracy and Precision

Because it is important to illustrate the qualities and defects of a sampling process, Figures 3.7 through 3.13 illustrate the representative points of an imaginary series of sampling selections. On the abscissa we report the true unknown values of the content a_L of a series of imaginary lots, and on the ordinate we report the true unknown values of the content a_S of each corresponding imaginary sample.

In each of the following figures, the true sample content is plotted against the true lot content.

FIGURE 3.7 Sampling is exact (unbiased and perfectly reproducible). This is a limit case never encountered.

FIGURE 3.8 Sampling is accurate and reproducible.

FIGURE 3.9 Sampling is not reproducible. We cannot say sampling is accurate since there is no correlation between the true sample content and the true lot content.

Examples shown in Figures 3.10–3.12 are typical of analytical biases. The example shown in Figure 3.13 is the best illustration of biases generated by IDE, IEE, and IPE. Basically, a sampling bias is rarely reproducible.

FIGURE 3.10 Sampling is biased and reproducible (absolute bias). As far as sampling is concerned, this case is not realistic.

FIGURE 3.11 Sampling is biased and reproducible (relative bias). As far as sampling is concerned, this case is not realistic.

FIGURE 3.12 Sampling is biased and reproducible (the bias follows an unknown law). As far as sampling is concerned, this case is not realistic.

FIGURE 3.13 Sampling is biased and not reproducible. This is the most frequent case in sampling.

3.11. Summation of Random and Systematic Errors

When the means m(e_i) of various sampling errors are different from zero, they are additive, and their respective signs should be taken into account during the summation; however, the standard deviations σ(e_i) of various sampling errors are not additive; only their corresponding variances σ^2(e_i) are. It is essential to remember that random errors add only through their squares. Sampling statistics, in a broad sense, have been presented in great detail in the complete works of W. G. Cochran.174–176 His monographs are highly recommended.
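The following minimal sketch illustrates this summation rule with hypothetical relative errors; it shows why adding standard deviations, instead of variances, overestimates the total:

```python
# Means of errors add with their signs; variances add; sigmas do not.
# The (mean, variance) pairs below are invented relative errors.
errors = {"FSE": (0.000, 0.0004), "IDE": (0.010, 0.0002), "IEE": (-0.004, 0.0001)}

m_total = sum(m for m, _ in errors.values())      # signed means are additive
var_total = sum(v for _, v in errors.values())    # variances are additive
sd_sum = sum(v**0.5 for _, v in errors.values())  # adding sigmas is WRONG

print(f"m(TSE)  = {m_total:+.4f}")
print(f"s2(TSE) = {var_total:.4f}, s(TSE) = {var_total**0.5:.4f}")
print(f"naive sum of sigmas = {sd_sum:.4f} (overestimates s(TSE))")
```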

4 A Logical Introduction to the Components of the Overall Estimation Error

Before introducing the various components of the Overall Estimation Error OEE, it is beneficial to proceed with an introduction to the fundamental notion of heterogeneity, to find out how it is defined, and to clearly differentiate this notion from the notion of homogeneity. It does not involve a great deal of research to find out that homogeneity is often a relative concept. If we look at a pile of fine sand from a distance, the pile may appear homogeneous; however, we know that as we come closer to the pile and finally look at it under a magnifying glass, we realize that the homogeneity was only an illusion. The only reality is a state of great heterogeneity when each individual grain is examined, showing differences in size, color, composition, shape, density, opacity, porosity, and so on. It is not long before one wonders whether homogeneity is only a limit case rarely encountered. We indeed live inside a heterogeneous world, and if we try to measure this heterogeneity, we intuitively find that the zero of heterogeneity is homogeneity, just a limit case. Even liquids that appear to be homogeneous are indeed heterogeneous if we consider particles, atoms, ions, and molecules.

Thus, there are two ways to look at homogeneity. It can be compared to a mathematical limit never encountered in our universe, and one may say that this is pushing the concept too far. This leads to a more practical way, which defines homogeneity as a relative state where all the constituents of a lot are apparently identical (e.g., a lot of calibrated marbles). Even in the second case, homogeneity remains a limit case, and we immediately face a difficulty that consists in clearly defining this limit. As we move from major to minor constituents, or constituents at the trace level, we will have to push down this limit further, and then return to the point where we have to face reality, which is essentially heterogeneous.

As far as sampling is concerned, we may as well forget about homogeneity and remove this word from our vocabulary, because it leads to misconceptions and dangerously optimistic, wishful assumptions. To state that a material is homogeneous is indeed almost always wishful thinking. Because the Theory of Sampling is a preventive "tool," we choose the safe hypothesis that we are only dealing with heterogeneity, and we intend to measure this heterogeneity, the zero of which is homogeneity. In our attempt to measure heterogeneity we will have to clearly differentiate two categories of heterogeneity:

• the Constitution Heterogeneity CH
• the Distribution Heterogeneity DH.


4.1. Constitution Heterogeneity

We shall call the Constitution Heterogeneity of a lot to be sampled (CH_L) the kind of heterogeneity we are confronted with when we consider the fundamental properties of the fragments present in that lot, looking at them one by one. By definition, the zero of Constitution Heterogeneity would be a lot made of strictly identical fragments in composition, shape, size, density, and so on. Thus, the Constitution Heterogeneity relative to the fragments of a given lot under a given state of comminution is an intrinsic property of the lot and cannot vary, unless we proceed with a comminution. We also say it is a primary structural property of the lot. Mixing and homogenizing have no influence on Constitution Heterogeneity.

A sample S selected from a lot L is affected by an error specifically related to the Constitution Heterogeneity CH_L of that lot. This error is defined as the Fundamental Sampling Error (FSE). For a given sample of weight M_S, we see that FSE is an incompressible minimum depending on factors such as mineral composition, liberation, shape, and fragment size distribution, which are intrinsic properties of a given lot. The Fundamental Sampling Error FSE is the only error that is never zero, and its importance may be secondary for major constituents; however, it often becomes primary for minor constituents and indeed overwhelming for trace constituents in high-purity materials, low-level precious metals, or in the environment, food, pharmaceutical products, and so on.

4.2. Distribution Heterogeneity

The notion of Constitution Heterogeneity shows differences between individual fragments. Now we may consider a lot as a set of groups of fragments, each group being made of a given number of neighboring fragments. By definition, we say that a lot has a homogeneous distribution when all groups or subsets of fragments we may select from the lot have the same average composition. If this is not the case, then the lot has a heterogeneous distribution. For each critical constituent, the respective Distribution Heterogeneity (DH_L) depends on three factors:

• the Constitution Heterogeneity CH_L
• the spatial distribution of the constituents (i.e., how they are segregated)
• the shape of the lot.

The shape of the lot is an important factor because its Distribution Heterogeneity is greatly affected by the omnipresent gravitational forces in our environment that create segregation. These gravitational forces introduce a strong anisotropy in the Distribution Heterogeneity of a lot, generating hybrids between the tridimensional distribution homogeneity and the tridimensional Distribution Heterogeneity, namely the two-dimensional distribution homogeneity, the one-dimensional distribution homogeneity, and the revolution distribution homogeneity. The concept of Distribution Heterogeneity is complex, which makes it necessary to consider several categories of lots.


4.3. Classification of Lots

The effective representation of a lot investigated for its heterogeneity is possible only if we characterize this lot according to certain criteria such as:

• the number of dimensions of the easiest model to represent the lot
• the continuity or discontinuity of the selected model
• the order or disorder of the constituents of the lot.

4.3.1. Number of Dimensions Characterizing a Lot

From a theoretical standpoint a lot always has three dimensions; however, in practice, one or even two of these dimensions can often be regarded as being of secondary importance. The fewer the dimensions, the easier the solution of the sampling problem associated with the lot; in fact, we find that only sampling problems generated by zero-, one-, and two-dimensional lots are economically solvable. We can encounter the following:

• Three-dimensional lots: the content of a ship, truck, railroad car, bag, jar, and so on, as long as one of these three-dimensional objects is considered as the whole lot. It can also be a compact solid such as a block inside a mineral deposit.
• Two-dimensional lots: a three-dimensional object in which the thickness becomes negligible because it is very small when compared to the two other dimensions (e.g., the seam of a coal deposit, a 2-meter slice of a mineral deposit, a copper cathode, etc.).
• One-dimensional lots: continuous and elongated piles, material travelling on a conveyor belt, flowing streams, and so on, or series of trucks, railroad cars, bags, jars, and so on, as long as these objects are considered as a set of nonrandom, discontinuous units making up the lot, the order of which is highly relevant.
• Zero-dimensional lots: the content of a series of trucks, railroad cars, bags, jars, and so on, as long as these objects are considered as a set of random, discontinuous units making up the lot. A zero-dimensional lot can be regarded as a suitable convention to describe a set of unarranged units. It can also be a one-dimensional lot for which the chronological order of the various units has been lost.

Perhaps there is a subtlety worth mentioning: the number of dimensions of a lot to be sampled may have nothing to do with the way it looks, but everything to do with the way we decide to sample it. This will be a huge issue, well addressed in Part VII of this book.

4.3.2. Continuity or Discontinuity of the Selected Model Representing the Lot

A lot may be represented by either a continuous or a discontinuous model, depending on whether we look at it from a distance or under a magnifying lens. The scale of the


heterogeneity we are interested in may dictate the kind of model we will consider; therefore, we can look at a lot in two different manners:

1. As a discrete and discontinuous set, if we are interested in fluctuations of the amount of heterogeneity introduced by the various fragments at a very short range.
2. As a continuous set, such as a flowing stream of material on a conveyor belt where the continuity is essentially a function of time, distance, or tons, if we are mainly interested in its long-range and periodic heterogeneity fluctuations.

4.3.3. Order and Disorder of the Constituents of the Lot

All the constituents or units of a lot may be looked at in two different ways:

1. As a statistical population of units, where adjacent units are statistically independent.
2. As a time, space, or tons series, where adjacent units are statistically correlated.

We often have a tendency to apply to time, space, or tons series results that are valid only for statistical populations. In sampling, we will see that it is essential to find out whether we are dealing with a strictly statistical population or with a population that has some degree of correlation involved. We defined homogeneity as the zero of heterogeneity. In the same way, we can define a statistical population as the zero of a correlated population. Furthermore, we may define perfect disorder as a limit case; indeed, we know factors that generate correlation, such as chronology and gravitational forces. In conclusion, we can say that there is no legitimate way to accept the hypothesis that the personality of a given constituent is totally uncorrelated with its position inside the lot. All populations encountered in sampling practice are hybrids: partially correlated and partially random populations of fragments or units.

4.4. Heterogeneity of a Zero-Dimensional Lot

In this category we include lots that are essentially statistical populations, and we already know that they are affected by two kinds of heterogeneity:

1. the Constitution Heterogeneity CH_L
2. the Distribution Heterogeneity DH_L.

The Distribution Heterogeneity is a direct consequence of the Constitution Heterogeneity. Without Constitution Heterogeneity, Distribution Heterogeneity cannot exist. The kind of Distribution Heterogeneity we are dealing with in a zero-dimensional lot is what could be defined as a small-scale Distribution Heterogeneity, which is the logical consequence of random fluctuations between the constitution of neighboring fragments. Such random fluctuations generate the Fundamental Sampling Error FSE already introduced, but they also give an opportunity for gravitational forces to proceed with a


rearrangement of the order between fragments, segregating families of fragments on the basis of their respective constitution. The greater the difference in constitution (e.g., composition, shape, size, density, etc.), the stronger the possible segregation. In due course, we will determine the exact relationship between Constitution Heterogeneity and Distribution Heterogeneity, and we will see that Distribution Heterogeneity is always smaller than or equal to the Constitution Heterogeneity, which is intuitive. Two factors are responsible for the sampling error introduced by Distribution Heterogeneity:

1. a segregation factor, which is a measure of spatial rearrangements
2. a grouping factor, which is a measure of the random selectivity.

Intuitively, we find that the Fundamental Sampling Error FSE is the minimum error generated when collecting a sample of a given weight. The minimum is reached only under one statistical condition: fragments making up the sample shall be collected strictly at random, and one by one. Of course, it does not happen that way in practice, which greatly bothers inexperienced empirical practitioners who wonder why the variability between their replicate samples in their kind of heterogeneity test does not agree with Gy's formula. When collecting an increment to make up a sample, this increment is likely to be made of many fragments. Then, statistically speaking, a sample is not made of strictly random fragments, but only of random groups of fragments; the difference is huge and often completely misunderstood. Consequently, we introduce an additional error; the larger the groups, the larger the error, and there is no way around it. This invariably leads to endless and counterproductive debates, and irrelevant theoretical developments that plant total chaos and confusion in the world of the TOS, unfortunately. We define the error introduced by Distribution Heterogeneity as the Grouping and Segregation Error, GSE, a vastly misunderstood error in sampling, even by some well-known sampling experts, which is worrisome. Such fundamental misunderstanding is magnified when a poor understanding of the true source of analytical errors takes place, which makes empirical approaches extremely vulnerable.

4.5. Heterogeneity of a One-Dimensional Lot

Industrial activities are characterized by a constant need to transport materials (e.g., ores, concentrates, coal, cereals, chemicals, etc.) from one location to another. The practical implementation of such activities necessarily generates long piles, running materials on conveyor belts, and streams with suspended solids, which are all defined as one-dimensional lots. The good news is that it is always possible to implement a correct sampling operation on one-dimensional lots. What we said of a zero-dimensional lot is still true for a one-dimensional lot, which will be affected by a certain Constitution Heterogeneity coupled with a transient term, which is Distribution Heterogeneity. However, a one-dimensional lot is almost always generated by chronological operations. Consequently, it will be affected by fluctuations that mainly reflect human activities at the mine, at the mill, at the processing or chemical plant, and so on. These are not intrinsic properties of the material making up the lot; instead, they are trends, and they lead to a new concept of heterogeneity that can be subdivided into two terms:


1. heterogeneity h2 introduced by long-range trends, which could be defined as a large-scale segregation
2. heterogeneity h3 introduced by cyclic phenomena, which are extremely frequent in a processing plant.

Consequently, we will define two new errors introduced by these types of heterogeneity:

1. the long-range Heterogeneity Fluctuation Error HFE2
2. the periodic Heterogeneity Fluctuation Error HFE3.

These two new errors are defined as the continuous components of the Overall Estimation Error, OEE. Therefore, in the case of a chronological series of units, we can define the heterogeneity introduced by random Constitution Heterogeneity as the small-scale heterogeneity h1, which introduces an error defined as the short-range Heterogeneity Fluctuation Error HFE1. We will show that we may write the following relationships, making the assumption that the Increment Weighting Error IWE is negligible:

HFE1 = FSE + GSE    (4.1)

h = h1 + h2 + h3    (4.2)

HFE = HFE1 + HFE2 + HFE3    (4.3)

where h is the total heterogeneity supported by a lot of any kind, and HFE is the total continuous Heterogeneity Fluctuation Error, which could also be called the integration error. We may see now that HFE1 will serve as a link between the continuous model and the discrete model. If a one-dimensional lot were considered as a zero-dimensional lot, h3 would cancel and h2 would become part of h1, which is obvious as the lot would be considered a random population. In Part VIII we will address potential problems with the Increment Weighting Error IWE; at this stage we make the assumption that it is negligible, which is not always the case if preventive precautions are not taken.

4.6. Heterogeneity of Two-Dimensional Lots

An orebody that is essentially three-dimensional can be divided into successive layers, several meters thick, and drilled correctly. In such a case, each successive layer can be defined as a two-dimensional lot. The same goes for the content of a truck, of a stockpile, of a copper cathode or anode, or an ingot, if we can drill them correctly through their entire thickness. The same heterogeneity components could apply in two different directions.

4.7. Heterogeneity of Three-Dimensional Lots

These lots can be compact orebodies, the heterogeneity of which has been successfully studied by geostatisticians, but this goes beyond our purpose, mostly confined to the sampling


of particulate materials. Small three-dimensional lots such as piles, trucks, railroad cars, and bags made of particulate materials could be sampled the same way as compact orebodies, and principles used in geostatistics would perfectly apply; however, the correct materialization of increments in such lots is nearly always impossible to implement. Furthermore, the geostatistical evaluation of small lots is certainly uneconomical. The logical conclusion is that correct sampling of three-dimensional lots is usually not feasible. It is worth mentioning that when grab samples are collected from a zero-, one-, two-, or three-dimensional lot, we are indeed treating the lot as three-dimensional, which is the worst-case scenario in sampling and an exercise in futility that is highly misleading, as it is impossible to perform a reliable and valid job. We will strongly insist on this in Part VII of this book, as it is a problem plaguing all industries.

4.8. Qualitative and Quantitative Components of the Heterogeneity Fluctuation Error

In earlier versions of the TOS, our attention was focused on the fluctuations of the critical content a around its average a_L, which generate a Quality Fluctuation Error QFE, and on the fluctuations of the flow rate of active and critical components, which generate the Increment Weighting Error IWE. We may choose not to dissociate qualitative and quantitative terms and look instead at their combined effects, which are completely described by the overall notion of heterogeneity. Experience has shown that it is relatively easy to regulate flow rates in such a way that the Increment Weighting Error becomes important only to the second order. Nevertheless, it is important to remember that IWE is not always negligible; therefore, we chose to give it more attention in this third edition of the book, so it is not forgotten when it can indeed become an issue. Accordingly, we may write:

HFE = QFE + IWE    (4.4)

4.9. Materialization of the Sampling Operation: The Increment Materialization Error

Thus far, we have identified errors generated by the heterogeneity supported by the material making up the lot in general; however, we looked at the lot as a continuous one-dimensional object, and we based our reasoning on the selection of imaginary points within the domain of interest. But the real points are made of fragments or groups of fragments, and the discrete nature of these units should be considered. The same reasoning can be made with the splitting process for taking samples from a zero-dimensional lot. The materialization of such groups of fragments provides the increments of a sample. The materialization is achieved by first implementing a correct increment delimitation, then a correct increment extraction (i.e., a correct recovery of the increment), which are error- and bias-generating operations if stringent conditions are not respected. These conditions will be reviewed in depth in Part VII. We define the


Increment Materialization Error IME as the sum of the Increment Delimitation Error IDE and the Increment Extraction Error IEE:

IME = IDE + IEE    (4.5)

Logically, Increment Preparation Errors could be included in the Increment Materialization Error, but we chose not to include them at this stage because of their particular nature and especially because they are accidental errors which cannot be statistically analyzed.

4.10. The Sampling Selection Error

In a proper sense, the Sampling Selection Error SSE is introduced by the bulk reduction of a lot after the selection of a series of increments, the materialization of which makes up one sample. It is important to note that we consider sampling as a combination of only two categories of operations: an immaterial selection process that leads to the Heterogeneity Fluctuation Error HFE, and a materialization process that leads to the Increment Materialization Error IME:

SSE = HFE + IME    (4.6)

This sum is defined as the Proper Sampling Error generated by only one sampling stage.

4.11. The Total Sampling Error

We have only looked at the selection process in general. Now, we shall consider all nonselection operations carried out on the lot and on the successive samples and their increments by various sampling stages. These nonselection operations, or at least some of them, are likely to be present between sampling stages, and we define them as "preparation stages." A preparation stage is an error-generating process that may be created by transfers from one place to another, comminution stages, screening operations, mixing attempts, drying, filtration, weighing, packaging, and so on. The generated errors, usually accidental errors, are defined globally as the Increment Preparation Error IPE. We define the Total Sampling Error TSE as the sum of the proper Sampling Selection Error SSE and the Increment Preparation Error IPE, generated for each sampling and corresponding preparation stage:

TSE = SSE + IPE    (4.7)

Consequently, if we have N sampling stages and preparation stages, N Total Sampling Errors will be generated, and we should write:

IPE = IPE_1 + IPE_2 + \ldots + IPE_n + \ldots + IPE_N    (4.8)

73

Introduction to the Overall Estimation Error

TSE ¼ TSE1 þ TSE2 þ … þ TSEn þ … þ TSEN

ð4:9Þ

Where n is the rank of a sampling or preparation stage (n = 1,2,3. . ..N). TSEn ¼ SSEn þ IPEn X ðSSEn þ IPEn Þ TSE ¼

ð4:10Þ ð4:11Þ

n

4.12. The Analytical Error

The Analytical Error AE is not part of the Total Sampling Error TSE; however, we shall not forget that it is always part of the Overall Estimation Error OEE. In this book, we do not intend to study the Analytical Error in detail; it was the subject of another book that I coauthored with C. O. Ingamells.22 We shall only remember that both sampling and analysis are error-generating processes, with the consequence that OEE is the sum of TSE and AE:

OEE = TSE + AE    (4.12)

An important remark is worth making: the selection, delimitation, and extraction of a tiny analytical subsample (e.g., 50 mg, 1 g, 2 g, etc.) is not part of the Analytical Error, but part of the Total Sampling Error. This makes it extremely difficult to quantify AE by itself. This is a difficulty that sampling practitioners, and even sampling experts unfamiliar with analytical chemistry, often fail to appreciate, which leads to variance analyses that are essentially flawed.

4.13. The Overall Estimation Error

At this stage, we have completed our review of all the components of the Overall Estimation Error. Each of these components needs further investigation, which is the object of the following chapters. Already, we can express the Overall Estimation Error OEE as follows:

OEE = AE + Σ_{n=1}^{N} (SSE_n + IPE_n)    (4.13)

Figure 4.1 illustrates the schematization of a complete estimation plan. We now know that the sources of error are many; in fact, relationship 4.13 can be vastly expanded into its basic components. Let us progress a few steps at a time:

OEE = AE + Σ_{n=1}^{N} (IPE_n + HFE_n + IME_n)    (4.14)

OEE = AE + Σ_{n=1}^{N} (IPE_n + HFE_1 + HFE_2 + HFE_3 + IDE_n + IEE_n)    (4.15)

OEE = AE + Σ_{n=1}^{N} (IPE_n + QFE_1 + QFE_2 + QFE_3 + IWE_1 + IWE_2 + IWE_3 + IDE_n + IEE_n)    (4.16)

OEE = AE + Σ_{n=1}^{N} (IPE_n + FSE_n + GSE_n + QFE_2 + QFE_3 + IWE_1 + IWE_2 + IWE_3 + IDE_n + IEE_n)    (4.17)
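The additivity in relations 4.11 to 4.13 is easy to mechanize. Below is a minimal sketch, not from the book, of how the stage variances would combine, under the usual assumption that the component errors are independent so that their variances are additive; every stage label and number in it is invented for illustration.

```python
# Minimal sketch (not from the book): combining the error variances of
# relations 4.11 and 4.13, assuming independent error components so that
# variances are additive. All stage names and values are illustrative.

stages = [
    # (label, relative variance of SSE_n, relative variance of IPE_n)
    ("primary sampling",       0.0025, 0.0004),
    ("secondary sampling",     0.0016, 0.0002),
    ("analytical subsampling", 0.0009, 0.0001),
]
var_AE = 0.0004  # assumed relative variance of the Analytical Error

var_TSE = sum(v_sse + v_ipe for _, v_sse, v_ipe in stages)  # relation 4.11
var_OEE = var_TSE + var_AE                                  # relation 4.13

print(f"s2(TSE) = {var_TSE:.4f}")
print(f"s2(OEE) = {var_OEE:.4f}")
print(f"relative standard deviation of OEE = {var_OEE ** 0.5:.2%}")
```

If the stages are not independent, covariance terms must be added; the sketch ignores them.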

[Figure 4.1 — Schematization of a complete estimation plan, resulting in the difference between a_L and a_R. The original lot L of content a_L passes through successive sampling stages (increment selection or splitting process: SSE_1, SSE_2, ..., SSE_N) and preparation stages (reunion of increments, comminution, packaging, etc.: IPE_1, IPE_2, ..., IPE_N), producing the primary prepared sample S_1 of content a_S1, the secondary prepared sample S_2 of content a_S2, and finally the analytical subsample S_N of content a_SN. The analysis generates the error AE, and the assay result a_R is affected by the Overall Estimation Error OEE.]


All of these terms are summarized and classified into their respective categories in Figure 4.2. This figure should serve as a reminder and as a guide as we progress through the following chapters. After reading this review, the reader can easily see that the simple analysis of a series of samples will not give any valuable information on what went wrong during the total sampling process. Many statisticians fond of measurement of uncertainty (MU) do not fully realize the magnitude of this difficulty. The total sampling process cannot be investigated backward or in retrospect, because there is neither a scientific nor a legitimate way to do so. The only logical approach, which should be part of basic due diligence requirements, is a thorough, preventive examination of the selection, materialization, and preparation processes; this is our only assurance of obtaining a precise and accurate enough (i.e., representative) estimate of the true, unknown critical content a_L of the lot.

[Figure 4.2 — Recapitulation of all the components of the Overall Estimation Error. Zero-dimensional lots: CH_L leads to FSE, DH_L leads to GSE, FSE + GSE = QFE_1, and HFE_1 = QFE_1 + IWE_1. One-dimensional lots: HFE_2 = QFE_2 + IWE_2 and HFE_3 = QFE_3 + IWE_3, with HFE_1 + HFE_2 + HFE_3 = HFE. IDE + IEE = IME. Two- or three-dimensional lots: extension of HFE to more dimensions. Over sampling stages 1, 2, 3, ..., N, HFE and IME combine into the Sample Selection Error SSE. Contamination errors, alteration errors, human errors, loss errors, and fraud all lead to IPE over preparation stages 1, 2, 3, ..., N. SSE + IPE = TSE, and TSE + AE = OEE. The two boxes with thick borders in the original figure mark the main bias generators: the materialization errors (IDE + IEE) and the preparation errors.]

Part III

Heterogeneity and Homogeneity


5 A Logical Introduction to the Notion of Heterogeneity

Heterogeneity is a primary structural property of all materials, and we cannot ignore it without taking considerable risks that are almost always economically detrimental to any research performed on these materials. There are many examples in industries using materials of mineral origin, such as metallurgy, cement, coal, glass, ceramics, nuclear energy, and so on. There are other examples, such as the agricultural, food, paper, chemical, and pharmaceutical industries, that use mineral, organic, vegetal, and synthetic materials. Without exception, all these industries, and the research programs associated with them, are affected and should be extremely concerned by the notion of heterogeneity.

The assessment of the quality of a product, and also of the feasibility of a process, depends on experimental estimates that always carry a certain amount of uncertainty. Two factors contribute to this uncertainty:
1. measurement or analytical errors
2. sampling errors.

All people involved in research, development, production, quality assurance, and quality control deal directly or indirectly, consciously or unconsciously, with sampling problems. These sampling problems exist because all materials are essentially heterogeneous. Heterogeneity is detrimental to all processes that are regulated in such a way that the optimum output is obtained for a given input material of composition X. The output deteriorates in quality and quantity as soon as the input raw material has a composition X' different from the X that has been set by stringent objectives. The real output becomes different from the optimum output, nearly always in a negative direction, resulting in a loss directly detrimental to the feasibility of the entire process. The prevention of such losses is the most serious problem of all production teams, which spend enormous amounts of energy and time to keep the process at optimum productivity in quantity and quality. To achieve this goal, they have two different strategies at their disposal:
1. acceptance of the heterogeneity, which generates enormous, and not always effective, quality assurance and quality control programs
2. elimination of the heterogeneity through a homogenization process such as bedblending.

The duality between "homogeneity" and "heterogeneity" is not simple, and it cannot be compared to the straightforward kind of duality observed between "black" and "white".


In a simplistic way, we may describe heterogeneity by a scalar or a function whose value is zero for a homogeneous material; homogeneity is therefore a limit case. The hypothesis made when we assume that a material is homogeneous is naïve and dangerous, because it allows anyone to solve all sampling problems associated with heterogeneity by vastly oversimplifying them. All particulate solids, either dry, wet, or suspended in water or air, that are dealt with in our industrial world are heterogeneous; furthermore, in a preliminary approach we shall distinguish two different forms of heterogeneity:
1. Constitution Heterogeneity, which is a property of the set of units under consideration, such as the set of fragments within a lot of particulate material
2. Distribution Heterogeneity, which depends on Constitution Heterogeneity but also, as its name suggests, on the spatial distribution of the units within the volume occupied by the lot.

Constitution Heterogeneity is an intrinsic property of the lot and cannot be altered without proceeding with a comminution. This is the reason we say Constitution Heterogeneity is a primary, structural property. Mixing has no effect on Constitution Heterogeneity. Consequently, and this is important and intuitive at this stage, the sampling error associated with it, which is FSE, cannot and will not be minimized unless we increase the mass of the collected sample. Distribution Heterogeneity, however, can be modified: it can be diminished by homogenization or mixing, or it can be increased by promoting segregation, voluntarily or involuntarily. This is the reason we say Distribution Heterogeneity is a secondary, circumstantial property. Consequently, and this is extremely important, the sampling error associated with it, which is GSE, cannot be calculated, as it is a transient phenomenon changing all the time. The only things we can do to minimize this error are to know FSE in the first place, then perhaps homogenize the material prior to sampling, and/or take as many small increments at random as practically possible.

The omnipresence of gravity always introduces an anisotropic factor; between the two extremes of distribution homogeneity and distribution heterogeneity, this results in three hybrid types of distribution, giving five states in all:
1. three-dimensional homogeneity
2. two-dimensional homogeneity
3. one-dimensional homogeneity
4. revolution homogeneity (often observed around a vertical axis)
5. three-dimensional heterogeneity.

5.1. Qualitative Analysis of the Duality Homogeneity versus Heterogeneity

5.1.1. Constitution Heterogeneity

In each particular case, it is possible to define the units of a lot whose main property is to be indivisible and unalterable, at least within a physical and/or chemical environment considered as stable.


For a particulate solid, the units are fragments or particles whose size distribution should remain constant. For a liquid, the units would be ions or molecules; from a theoretical standpoint, there is no difference between the constitution heterogeneities of these two media. We can state that a lot of material has a homogeneous constitution when all its units are strictly identical, and a heterogeneous constitution when its units are not strictly identical. The units can be either completely or partially identical, depending on whether we consider all the properties of these units or just some of them. For a set of units making up a lot, placed in a defined and stable physical and/or chemical environment, the Constitution Heterogeneity is a well-defined and constant property; therefore, mixing and homogenization have absolutely no effect on the units and the amount of Constitution Heterogeneity they carry.

5.1.2. Distribution Heterogeneity

Distribution Heterogeneity is the kind of heterogeneity that can be observed when we look at various groups of neighboring units, such as fragments, within the domain of a given lot. For Constitution Heterogeneity we look at differences between individual units such as fragments, whereas for Distribution Heterogeneity we look at differences between groups of neighboring units. We can state that a lot of material has a homogeneous distribution when all the groups of a given size that can be selected within the domain of the lot have strictly identical average composition; the lot has a heterogeneous distribution when these groups are not identical. For a set of given units making up a lot, the Distribution Heterogeneity depends on three factors:
• Constitution Heterogeneity
• the spatial distribution of the units
• the basic module of observation.

5.1.2.1. Constitution Heterogeneity

In the next chapter, we demonstrate that Distribution Heterogeneity cannot exist within a set of units unless these units first carry a certain amount of Constitution Heterogeneity. Furthermore, the greater the Constitution Heterogeneity, the greater the possible Distribution Heterogeneity. We also demonstrate that the amount of possible Distribution Heterogeneity cannot be larger than a certain maximum that is equal to the amount of Constitution Heterogeneity itself. This fact is not well known by most people involved with sampling problems and process control who are not familiar with the Theory of Sampling.

5.1.2.2. The Spatial Distribution of the Units

The spatial distribution of the units is a constant only in compact solids, where the basic units do not have the freedom to change places. This is the case for minerals inside a consolidated deposit, which is mainly the domain of geostatistics, where the concept of stationarity is critically important. However, as soon as this consolidated material is


mined, crushed, or pulverized, these minerals become either partially or totally free, depending on their degree of liberation. Consequently, these minerals can constantly change places toward a more stable configuration when submitted to gravitational forces or other forces that promote segregation or homogenization, as in mineral processing, for instance.

5.1.2.3. Basic Module of Observation

A basic module of observation is defined by its shape, size, and orientation. The characteristics of the basic module of observation depend on the kind of distribution we intend to describe or work with. An important remark should be emphasized: our observations, experiments, and analyses are always influenced by the omnipresence of gravity, which introduces some anisotropy in the distribution of the units we are interested in; as noted earlier, there are three anisotropic hybrids between distribution homogeneity and distribution heterogeneity. The five fundamental kinds of distribution are:
• three-dimensional distribution homogeneity
• two-dimensional distribution homogeneity
• one-dimensional distribution homogeneity
• revolution distribution homogeneity
• three-dimensional distribution heterogeneity.

These are discussed in more detail in Chapter 6, Section 6.4.9.

5.2. Classification of the Lots Submitted to Quality or Quantity Control

The mathematical modeling of the lots submitted to quality or quantity control makes their classification, according to various criteria, a necessity:
• the number of dimensions of the model they may be compared to
• the continuity or discontinuity of this selected model
• the order or disorder of their units.

5.2.1. Number of Dimensions of the Selected Model Intended to Represent a Lot

A lot of particulate material, or any other material, always occupies a three-dimensional space; however, it often happens that one or even two dimensions are important only to the second order, if and only if increments are correct. This is a convenient way to simplify our analysis of heterogeneity, because the fewer the dimensions of the selected model, the simpler the practical solution of the associated sampling problem. Thus, we obtain the following classification:


5.2.1.1. Three-Dimensional Lots

The lot has three dimensions, and none of them is important only to the second order. This is the case of most mineral deposits, and also of isolated truck loads, piles, bags, jars, and so on. Furthermore, if an operator collects a grab sample from material traveling on a conveyor belt, then, voluntarily or not, he is treating the traveling lot, which is the flowing stream, as three-dimensional; be aware that this is the worst possible scenario in sampling.

5.2.1.2. Two-Dimensional Lots

The lot is about as long as it is wide; however, its thickness is small and important only to the second order, if and only if increments represent the entire thickness of the lot at the place they are collected. This is the case of some flat ore deposits, and also of isolated piles or the contents of a truck, bag, or jar that have been flattened. Each increment must represent the entire thickness of the flattened lot; if the operator collects increments that do not represent the entire thickness, he is still treating the lot as three-dimensional, which is the worst possible scenario in sampling.

5.2.1.3. One-Dimensional Lots

The lot is very long; however, its width and thickness are small and important only to the second order, if and only if increments represent an entire slice across the lot at the place they are collected. If the operator takes a grab sample that does not represent an entire slice across the lot, he is still treating the lot as three-dimensional, which is the worst possible scenario in sampling.

5.2.1.4. Zero-Dimensional Lots

This is a convenient convention to describe lots made of a large number of units such as railroad cars, trucks, barrels, bags, jars, or even fragments. All units usually have the same nominal capacity and are more or less uniformly loaded. By definition, the various units form a population in which their order is nonexistent, or at least irrelevant. Samples or increments will consist of entire units selected at random, and this is the reason the number of dimensions becomes irrelevant.

5.2.2. Continuity or Discontinuity of the Selected Model

All materials are discontinuous or discrete in nature; for example, each atom or each molecule can be regarded as a discrete unit. You may remember this when looking at nonliberated gold particles in situ: they can indeed be considered as discrete particles. We will revisit this concept in Part VI, Chapter 17. As far as sampling is concerned, it is convenient to look at this notion of continuity or discontinuity in a different way. To illustrate this point, let us consider crushed ore placed on a moving conveyor belt. We can look at this one-dimensional lot using a discontinuous model as long as we are looking at the characteristics of each passing fragment, which are different from those of neighboring fragments. This one-dimensional lot, however, can also be looked at as a continuous flow of matter whose rate is a function of time or distance; therefore, this lot can be represented by a continuous model in which the whole material


making up the lot has been projected on an extension axis, either time or distance. Remember, each model is selected as a convenience and nothing else.

5.2.3. Order or Disorder of the Units Making Up a Lot

Let us consider a set of units. Experience shows that we can look at it in two different ways:
• as the whole population of these units, in which, by the definition of a population, the order of the various units is irrelevant
• as the chronological series made by these units when they are arranged following a certain pattern, in which the order of the various units is highly relevant.

Many misconceptions in sampling have their origin in the confusion between a population and a chronological series. Conventional statistics ignores chronological series and considers a lot of material essentially as a random population. The trouble is that, most of the time, a lot of material is not strictly a random population, which means that neighboring fragments are not totally independent from one another. Thus, it is essential to find out whether we are dealing with a population or a chronological series. Just as homogeneity is the zero of heterogeneity, a random population is the zero of a chronological series. Furthermore, there is no risk in applying results that are valid for a chronological series to a population, while the reverse is most certainly not true. Experience shows that perfect disorder is the exception, and order or partial order is the rule. This originates in the fact that our industrial activities are well framed in time and space, generating correlation over time and space, and also in the fact that gravity is omnipresent, generating segregation along a vertical axis during transportation or handling of particulate materials. It is essential to remember that there is often no legitimate way to assume the absence of correlation between the characteristics of one unit and its position within the lot. This is probably a very important concept to take into account in modern astronomy.

5.3. Functional, Random, and Stochastic Relations

These discussions about order or disorder, and the presence or absence of correlation, make us wonder what kind of relation can exist between two variables. Some properties of these relations are important in this book and must be reviewed. Let us consider two physical variables, X and Y, connected by a relation such as Y = f(X). As far as sampling is concerned, f(X) can belong to one of the three categories described in the following sections.

5.3.1. Functional Relation

This is the typical kind of function used in algebra, and its main mathematical property is to be essentially continuous except for, possibly, a finite number of particular points.


5.3.2. Random Relation

This is the kind of relation in which, for each value of X, the corresponding value of Y is selected at random within a given population. This relation is essentially discontinuous.

5.3.3. Stochastic Relation

A stochastic relation exists between X and Y when the relation combines two components:
• one functional and continuous
• the other random and discontinuous.

This means that a stochastic relation is a hybrid between a functional relation and a random relation. Most of the relations we deal with in sampling are stochastic in nature. For example, suppose we have a linear function such as Y = aX, and the relation between X and Y is measured by a correlation coefficient r. The three types of relations introduced earlier can be characterized by the values taken by the correlation coefficient r as follows:
• a functional relation with r = +1 or r = –1
• a random relation with r = 0
• a stochastic relation with 0 < r < +1 or –1 < r < 0.
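To make the three categories tangible, here is a small illustrative sketch, not from the book: it simulates a functional, a random, and a stochastic relation between X and Y, and computes the correlation coefficient r for each; the slope, noise level, and seed are arbitrary choices.

```python
# Illustrative sketch (not from the book): the three kinds of relations
# between X and Y, characterized by their correlation coefficient r.
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0.0, 10.0, 500)

y_functional = 2.0 * x                                      # r = +1
y_random     = rng.normal(5.0, 2.0, size=x.size)            # r close to 0
y_stochastic = 2.0 * x + rng.normal(0.0, 4.0, size=x.size)  # 0 < r < +1

for label, y in [("functional", y_functional),
                 ("random", y_random),
                 ("stochastic", y_stochastic)]:
    r = np.corrcoef(x, y)[0, 1]
    print(f"{label:>10}: r = {r:+.3f}")
```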

5.4. Caption of the Various Cases

5.4.1. Zero-Dimensional and Discontinuous Lots

Several very different cases belong to this model. By definition, these lots are populations made of units distributed at random, such as the fragments making up a lot of particulate material, or units like railroad cars, trucks, barrels, bags, jars, and so on, coming from a single source, or from different sources, whose order has been lost. In these two cases, the parameter of randomness characterizing these units has been arbitrarily assigned to them. These cases are studied in Chapter 6.

5.4.2. One-Dimensional and Continuous Lots

5.4.2.1. Common Problems Presented by Discontinuous and Continuous One-Dimensional Lots

These two cases seem very different; however, this is only an appearance, or an illusion. The first case refers to a finite chronological series of units; the second case refers to a lot made of a continuous stream of material, either stationary or moving. These cases have much in common from both a practical and a theoretical standpoint. For instance:


• To describe and characterize the correlation existing between units of a discrete series, which makes this chronological series a very different entity from the population made of the same units, we must use continuous functions whose continuity fills the existing gaps between the punctual units.
• To experimentally study the heterogeneity of a lot of material making up a one-dimensional continuous set, we must work with a finite number of quasi-punctual increments, whose chronology is no different from the chronological series studied in the first case.

5.4.2.2. One-Dimensional and Discontinuous Lots

This is a chronological series of transporting units such as railroad cars, trucks, barrels, bags, jars, and so on, originating from a single source, and whose order has been maintained. The parameter characterizing these units is their chronological order.

5.4.2.3. One-Dimensional and Continuous Lots

Two very different practical cases can be studied by using the same model:
• The relevant dimension is distance; this is the case of elongated piles, stopped conveyor belts, drilling cores, and so on.
• The relevant dimension is time; this is the case of moving conveyor belts, water or slurry streams, chutes, and so on.

These cases are reviewed in Chapter 7.

5.4.3. Two- and Three-Dimensional Lots

Two- and three-dimensional lots are encountered in mineral deposits and have been the object of considerable work by geostatisticians, which is beyond the objective of this book. We refer the reader to the previous work of Matheron,28–31 David,173 and Journel.177 Under the same models, we find lots of particulate material such as isolated piles, truck loads, railroad car loads, barrels, bags, and jars (for three-dimensional lots), or the same lots after their contents have been flattened (for two-dimensional lots). Mathematical models developed by geostatisticians can perfectly apply to these objects but, as far as sampling is concerned, they present very difficult problems that are generally impossible to solve in an economical way. This is discussed in Chapter 32.

6 Heterogeneity of a Zero-Dimensional Lot: Constitution and Distribution Heterogeneities

6.1. Introduction

The two types of heterogeneity investigated in this chapter are those encountered in a zero-dimensional lot. By definition, the various units of a zero-dimensional lot form a population in which their order is nonexistent, or at least considered irrelevant. The notion of heterogeneity is somewhat confusing in the minds of many people, and there are several reasons for this:
• There are several types of heterogeneity:
  ○ The heterogeneity consisting of a difference between the compositions of the different units of a lot, such as between fragments if we are talking about a lot of particulate material. This heterogeneity is defined as the Constitution Heterogeneity CH.
  ○ The heterogeneity consisting of a difference in the spatial distribution of the different units within the lot, such as the different fragments of a lot of particulate material. This is defined as the Distribution Heterogeneity DH. The physical characteristics of the fragments partly explain their positions: everyone knows that this kind of spatial heterogeneity is generated because fragments have different shapes, densities, and weights, to mention only a few of their intrinsic properties.
• Most authors speaking of heterogeneity never specify the type of heterogeneity they refer to, thus creating confusion for those trying hard to understand this complex concept. Not only is it essential to differentiate Constitution Heterogeneity from Distribution Heterogeneity, but it is also necessary to specify the characteristic of the unit taken into consideration:
  ○ Example 1: A lot of pure silica is finely ground. This material is homogeneous as far as its mineral constitution is concerned, and heterogeneous as far as its particle size distribution is concerned.
  ○ Example 2: A material feeding a flotation plant and containing molybdenite, galena, chalcopyrite, and so on, is screened to obtain a well-calibrated size fraction. The material of this size fraction is homogeneous as far as its particle size distribution is concerned, but still very heterogeneous as far as its mineral composition is concerned.


Our objective in this chapter is to clearly define and study these two important forms of heterogeneity associated with a given constituent of interest, and to link them to one another.

6.2. Definitions and Basic Relationships

In a complex subject such as heterogeneity, it does not take long before the reader encounters notation difficulties; for quick and easy reference, all notations pertaining to this chapter are summarized in Table 6.1. We mention again that in this chapter all units make up a population in which their order is not taken into consideration, which means that all units within the lot are strictly considered as a random population. On the contrary, in Chapter 7 the order of the units is taken into account, and the lot is considered as a chronological series. Now let us consider L as a given lot of material made of discrete units, N_u as the number of these units, and U_m as the current unit of L, with m = 1, 2, ..., N_u. By definition, in a zero-dimensional lot these units constitute a population in which there is no obvious order. These units can be fragments of particulate material (e.g., particles of minerals or solid chemicals, grains of a cereal, fruits, seeds, etc.), or groups of neighboring fragments, or transportation units (e.g., railroad cars, trucks, barrels, bags, jars, shovels, etc.). In all of these cases, as far as the heterogeneity carried by one component of interest is concerned, the material making up one unit U_m is completely defined by three parameters that can be called its descriptors:
M_m: the total weight of the active components in U_m
A_m: the weight of the component of interest (also called critical component) in U_m
a_m: the content of the component of interest (also called critical content) in U_m

TABLE 6.1
Notations for units and characteristics related to the concepts of Constitution and Distribution Heterogeneity

| Unit | Weight of active components | Weight of critical components | Critical content and number of units |
|---|---|---|---|
| L: lot of particulate material | M_L | A_L | a_L |
| U_m: nonspecified unit of L | M_m | A_m | a_m, m = 1, 2, ..., N_u |
| F_i: a fragment of the lot L | M_i | A_i | a_i, i = 1, 2, ..., N_F |
| F̄_i: average fragment of L | M̄_i | Ā_i | ā_i |
| G_n: a group of fragments of L | M_n | A_n | a_n, n = 1, 2, ..., N_G |
| Ḡ_n: average group of fragments of L | M̄_n | Ā_n | ā_n |
| F_nj: a fragment of the group G_n | M_nj | A_nj | a_nj, j = 1, 2, ..., N_n |
| F̄_nj: average fragment of group G_n | M̄_nj | Ā_nj | ā_nj |


These three descriptors are related as follows:

A_m = a_m M_m    (6.1)

The three parameters A, a, and M are related by one equality, and only two of them are sufficient to completely define the unit under consideration. Often, we choose the total weight M and the critical content a; thus, we define units with two descriptors. Now, let us suppose that one of these two parameters is practically constant in all the units of the population; then only one descriptor is necessary to completely define the unit under consideration. Thus, we define units with one descriptor. In the same manner, a lot L is completely defined by the three descriptors M_L, A_L, and a_L, which are themselves defined by the following relations:

M_L = Σ_m M_m    (6.2)

A_L = Σ_m A_m    (6.3)

a_L = A_L / M_L    (6.4)

6.3. Population of Specified Units: Population of Fragments

Thus far, we have spoken of unspecified units U_m, but often we choose units such as fragments of particulate material; the units U_m then become the fragments F_i, with i = 1, 2, ..., N_F. Referring now to the fragments F_i of the lot L, the three descriptors M_L, A_L, and a_L are defined by the following relations:

M_L = Σ_i M_i    (6.5)

A_L = Σ_i A_i    (6.6)

a_L = Σ_i A_i / Σ_i M_i = A_L / M_L    (6.7)

As we previously referred to fragments, we can also refer to groups of fragments G_n of a lot L, and write the following relations:

M_L = Σ_n M_n = Σ_i M_i = Σ_n Σ_j M_nj    (6.8)

A_L = Σ_n A_n = Σ_i A_i = Σ_n Σ_j A_nj    (6.9)

a_L = Σ_n Σ_j A_nj / Σ_n Σ_j M_nj = A_L / M_L    (6.10)

Sometimes it may be convenient to refer to the average fragment F̄_i of a lot, to the average fragment F̄_nj of a group of fragments G_n in a lot L, or to the average group of fragments Ḡ_n in a lot L; the three descriptors M_L, A_L, and a_L can then also be defined by the following relations:

M_L = M̄_i N_F = M̄_n N_G    (6.11)

A_L = Ā_i N_F = Ā_n N_G    (6.12)

a_L = A_L / M_L = Ā_i / M̄_i = ā_i = Ā_n / M̄_n = ā_n    (6.13)

6.3.1. Heterogeneity Carried by a Fragment within the Lot

We are mainly interested in the heterogeneity between, and not within, fragments. The notion of heterogeneity is directly related to a given critical component. Our objective is to characterize the heterogeneity of L concerning a critical component A whose proportion is the critical content a. Each fragment F_i making up the lot L is entirely defined by its weight M_i and critical content a_i, and also by the relation:

A_i = a_i M_i    (6.14)

The heterogeneity carried by a fragment F_i is defined only by reference to a given population, for example, the set of all fragments making up a lot L.

6.3.1.1. Fragment with Only One Descriptor

Let us consider a given critical component with a_i as the single descriptor, such as the critical content of a fragment. Also, let ā_i be the arithmetic average of a_i within the population L, and h_i the heterogeneity carried by the fragment F_i. Because a homogeneous material is obviously defined by a_i = a_L, it seems natural to admit that the heterogeneity h_i carried by one fragment is proportional to the difference a_i − a_L. It is commonly accepted that it is far more convenient to use relative and dimensionless characteristics rather than absolute ones; thus, h_i is taken proportional to (a_i − a_L)/a_L.

6.3.1.2. Fragment with Two Descriptors

We assume that, of these two parameters, one is both qualitative and quantitative, such as the critical content a_i, and the other one is essentially quantitative, such as the weight M_i. At the scale of the lot L, the fluctuation generated by one fragment F_i is an increasing function of its weight M_i; therefore, h_i is proportional to the product (a_i − a_L) M_i, and its relative and dimensionless equivalent is proportional to (a_i − a_L) M_i / (a_L M_L). Thus, the heterogeneity h_i carried by a fragment F_i within the lot L is defined as:

h_i = ((a_i − a_L)/a_L) · (M_i/M̄_i) = N_F (a_i − a_L) M_i / (a_L M_L)    (6.15)
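As a concrete illustration of relation 6.15, the following minimal sketch, not from the book, computes h_i for a tiny, made-up lot of five fragments; all weights and critical contents are invented for the example.

```python
# Minimal sketch (not from the book): heterogeneity h_i carried by each
# fragment of a small, made-up lot, following relation 6.15.
import numpy as np

M = np.array([2.1, 1.8, 2.4, 1.9, 2.0])       # fragment weights, g (illustrative)
a = np.array([0.05, 0.01, 0.12, 0.03, 0.07])  # critical contents (illustrative)

N_F = M.size
M_L = M.sum()                # weight of the lot
a_L = (a * M).sum() / M_L    # critical content of the lot, relation 6.7

h = N_F * (a - a_L) / a_L * M / M_L   # relation 6.15

print("h_i =", np.round(h, 4))
print("m(h_i) =", round(float(h.mean()), 12))  # relation 6.16: zero up to rounding
```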

At this stage, a few important remarks should be made:
• Under some conditions, the sampling variance is a simple function of the variance of h_i.
• The calculation of h_i involves only parameters such as a_i, M_i, and M̄_i, which are intrinsic properties of the material making up the lot L, independently of the size of the lot under consideration.
• Thanks to its relative definition, h_i is dimensionless, but it is essential to use the same units for the numerator and the denominator.
• We will see how a set of fragments with two descriptors can be modified into an equivalent set of fragments with only one descriptor, which would be nothing else than the heterogeneity h_i carried by one fragment F_i.

6.3.1.3. Additivity of the Heterogeneities Carried by Various Fragments

Starting from equation (6.15), we can easily demonstrate that the heterogeneity h_n carried by a group of fragments F_i is equal to the sum of the heterogeneities carried by all the fragments making up the group.

6.3.2. Average of the Heterogeneities Carried by the Fragments of a Lot

It is easily demonstrated that:

m(h_i) = Σ_i h_i / N_F = 0    (6.16)

Indeed, Σ_i (a_i − a_L) M_i = A_L − a_L M_L = 0.

6.3.3. Variance of the Heterogeneities Carried by the Fragments of a Lot

Taking into account our definition of the variance of a random variable, the variance s²(h_i) of the heterogeneities carried by the fragments of a lot can be expressed as follows:

s²(h_i) = (1/N_F) Σ_i h_i² = N_F Σ_i (a_i − a_L)² M_i² / (a_L² M_L²)    (6.17)

When all fragments have the same weight M_i, the fragments have only one descriptor, a_i; therefore, it is easy to see that s²(h_i) is equal to the relative variance of the content a_i of the fragments. With this variance we are able to define two distinctive values characterizing the heterogeneity of a lot L.


6.3.4. Definition of the Constitution Heterogeneity of a Lot

The Constitution Heterogeneity CH_L of the lot L is defined as the relative, dimensionless variance of the heterogeneities h_i carried by the N_F fragments F_i making up the lot:

CH_L = s²(h_i) = (1/N_F) Σ_i h_i² = N_F Σ_i (a_i − a_L)² M_i² / (a_L² M_L²)    (6.18)

Remarks: It is important to notice, for future important discussions in this chapter, that CH_L is a sum of squares; therefore, it is never negative. Also, as explained in Section 3.4.8, there are two ways to define a variance, one with N_F and another with N_F − 1. Relations 6.17 and 6.18 are based on the first definition, which means that we are dealing with the variance of a finite and isolated population of fragments. If the lot were a subset of a much larger population, it would have been necessary to use N_F − 1 in these relations.

6.3.5. Constitution Heterogeneity of a Composited Lot

Let us consider a lot made of the union of two sublots:
• the sublot L_1, made of N_F1 fragments characterized by the heterogeneities h_i1 and a variance CH_L1 calculated with the overall average content a_L of both sublots
• the sublot L_2, made of N_F2 fragments characterized by the heterogeneities h_i2 and a variance CH_L2 calculated with the overall average content a_L of both sublots.

It can easily be demonstrated that the total Constitution Heterogeneity of this lot is:

CH_L = (N_F1 CH_L1 + N_F2 CH_L2) / (N_F1 + N_F2)    (6.19)

Thus, by recurrence it can be demonstrated that the Constitution Heterogeneity of a composited lot is equal to the weighted average of the Constitution Heterogeneities of the respective sublots, regardless of the number of these sublots. Therefore, a lot obtained by the union of two or more sublots with the same Constitution Heterogeneity will also be characterized by that same Constitution Heterogeneity. Thus, CH_L is a characteristic of the material making up the lot, irrespective of the size of the lot.

6.3.6. The Intrinsic Heterogeneity of the Fragments Making Up the Lot

As defined by equation (6.18), the Constitution Heterogeneity CH_L is not easy to calculate in most of the real cases in which we are interested. Part of the reason is the difficulty, or impossibility, of estimating N_F, which is usually very large. In practice, we need to be able to calculate, in all cases, and at the cost of some approximations if necessary, a characteristic of the material making up the lot that is independent of the size of the lot (i.e., suppressing the need to estimate N_F). This can be done by multiplying CH_L by the term M_L/N_F, which is nothing more than the average weight M̄_i of a fragment. Therefore, we may define the Intrinsic Heterogeneity IH_L by the following extremely important relation:

IH_L = CH_L M_L / N_F = CH_L M̄_i = Σ_i ((a_i − a_L)²/a_L²) · (M_i²/M_L)    (6.20)

Equation (6.20), set in a larger font in the printed book, is critically important for all the work we are going to do on the Fundamental Sampling Error FSE and on the Grouping and Segregation Error GSE. Many practitioners refer to Gy's formula by using the wrong formula. If there is such a thing as a Gy's formula, equation (6.20) is the one, and no one describes it as such. Because CH_L is dimensionless, IH_L has the dimension of a weight. Now we are going to find out why IH_L deserves the name of Intrinsic Heterogeneity.
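To show how small the computational step is from relation 6.18 to relation 6.20, here is a minimal sketch, not from the book, on the same kind of tiny, made-up lot; the direct form in the comment is obtained by substituting 6.18 into 6.20.

```python
# Minimal sketch (not from the book): CH_L (relation 6.18) and the Intrinsic
# Heterogeneity IH_L (relation 6.20) of a small, made-up lot.
import numpy as np

M = np.array([2.1, 1.8, 2.4, 1.9, 2.0])       # fragment weights, g (illustrative)
a = np.array([0.05, 0.01, 0.12, 0.03, 0.07])  # critical contents (illustrative)

N_F = M.size
M_L = M.sum()
a_L = (a * M).sum() / M_L

CH_L = N_F * np.sum((a - a_L)**2 * M**2) / (a_L**2 * M_L**2)  # relation 6.18
IH_L = CH_L * M_L / N_F                                       # relation 6.20
# Equivalent direct form: IH_L = np.sum((a - a_L)**2 / a_L**2 * M**2 / M_L)

print(f"CH_L = {CH_L:.4f}  (dimensionless)")
print(f"IH_L = {IH_L:.4f} g (carries the dimension of a weight)")
```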

6.3.7. The Intrinsic Heterogeneity of a Composited Lot

Let us define a lot L obtained by the union of two sublots L_1 and L_2, respectively made of N_F1 and N_F2 fragments, characterized by the Constitution Heterogeneities CH_L1 and CH_L2 and by the same average fragment weight M̄_i. By multiplying both sides of equality 6.19 by M̄_i, we obtain:

IH_L = CH_L M̄_i = (N_F1 CH_L1 M̄_i + N_F2 CH_L2 M̄_i) / (N_F1 + N_F2)    (6.21)

IH_L = (N_F1 IH_L1 + N_F2 IH_L2) / (N_F1 + N_F2)    (6.22)

Thus, by recurrence it can be demonstrated that, if the respective weights of the fragments are equal or almost equal, the Intrinsic Heterogeneity of a composited lot is equal to the weighted average of the Intrinsic Heterogeneities of the sublots, regardless of the number of these sublots. Therefore, a lot obtained by the union of two or more sublots carrying the same Intrinsic Heterogeneity will also be characterized by that same Intrinsic Heterogeneity. This is the property that justifies the claim that IH_L is a constant, intrinsic characteristic of the material making up the lot, irrespective of the size of the lot. The hypothesis that the average weights M̄_i are almost equal implies that a comparison between various Intrinsic Heterogeneities can only be made for materials having a similar particle size distribution.

6.3.8. Respective Properties of CH_L and IH_L

CH_L and IH_L are two intrinsic parameters of the material under investigation, regardless of the size of the lots; however, they differ in two ways:
• CH_L is always defined, but can only be calculated when the number of fragments N_F is small enough to be counted. CH_L is more appropriate for a theoretical approach to the notion of heterogeneity.


• IH_L can always be calculated, regardless of the large number of fragments N_F, at the cost of some approximations that are reviewed in later chapters. Thus, IH_L is more appropriate for practical applications, such as the calculation of the variance of the Fundamental Sampling Errors, which are studied in later chapters.

6.3.9. Constitution Homogeneity of a Lot

If the definitions concerning Constitution Heterogeneity are consistent, the lot L should have a homogeneous constitution if the heterogeneity it carries is nil; therefore, CH_L = 0. If we consider equation (6.18), this happens when all the critical contents a_i of the fragments are equal to the average critical content a_L of the lot. The conclusion is that the definitions are indeed consistent.

6.4. Population of Specified Units: Population of Groups of Fragments

6.4.1. Introduction to the Notion of Distribution Heterogeneity

Let us consider again the lot L made of N_F fragments. Within this lot we are going to define an isotropic module of observation, such as a sphere, or a semi-isotropic one, such as a cube. We will scan the domain D_L of the lot L with this module to study the variability of the critical content of the module. This could also be done with a cylinder or a prism with a square section (i.e., volumes isotropic in two dimensions, such as a horizontal projection plane), or with a slice of constant thickness (i.e., a volume isotropic in one dimension, such as a horizontal projection axis perpendicular to the slice). We find that the larger the Distribution Heterogeneity, the larger the variation of the critical content within the module. The study of the zero-dimensional model with a given module of observation requires the following conditions:
• selection of a cubic module of observation
• definition of the content of the module as the entire set of fragments whose center of gravity falls inside the module; the reason for this is clarified in Part VII of this book
• positioning of the module in such a way that a population of disjoint and complementary subsets, with no empty spaces or overlaps, is obtained. The number N_G of subsets shall be a finite number, the subsets being defined as the groups G_n of fragments, with n = 1, 2, ..., N_G.

Now that we have defined a group, and before looking at the heterogeneity carried by a group of fragments, we should take a quick look at the heterogeneity carried by a fragment within a group of fragments.

6.4.2. Heterogeneity Carried by a Fragment within a Group of Fragments

The lot L is subdivided into a certain finite number of groups G_n, and inside each group G_n we may consider a certain fragment F_nj, with j = 1, 2, ..., N_n. By following the same


reasoning as in Section 6.3.1, the heterogeneity h_nj carried by a single fragment in a group that belongs to the lot L can be written as follows:

h_nj = ((a_nj − ā_nj)/ā_nj) · (M_nj/M̄_nj)    (6.23)

where M_nj is the weight of the fragment F_nj, a_nj is the critical content of the fragment F_nj, and ā_nj is the critical content of the average fragment, with ā_nj = a_n. Because M̄_nj = M_n/N_n, then:

h_nj = N_n ((a_nj − a_n)/a_n) · (M_nj/M_n)    (6.24)

where M_n is the weight of the active components of the group, and N_n is the number of fragments in the group G_n. Thus, it readily follows that:

m(h_nj) = Σ_j h_nj / N_n = 0    (6.25)

s²(h_nj) = (1/N_n) Σ_j h_nj² = N_n Σ_j (a_nj − a_n)² M_nj² / (a_n² M_n²)    (6.26)

6.4.3. Heterogeneity Carried by a Group of Fragments within the Lot

Considering each group of fragments G_n as a unit of the lot L, and following the same reasoning as in Section 6.3.1, the heterogeneity h_n carried by a single group within the lot L can be written:

h_n = ((a_n − a_L)/a_L) · (M_n/M̄_n) = N_G (a_n − a_L) M_n / (a_L M_L)    (6.27)

It is found that the heterogeneity h_n carried by a group of fragments G_n is nothing other than the heterogeneity h̄_nj carried by the average fragment of this group:

h_n = h̄_nj    (6.28)

6.4.4. Definition of the Distribution Heterogeneity of a Lot

As explained for CH_L in Section 6.3.4, the Distribution Heterogeneity DH_L of a lot is defined as the relative, dimensionless variance of the heterogeneities h_n carried by the N_G groups of fragments G_n making up the lot:

DH_L = s²(h_n) = (1/N_G) Σ_n h_n² = N_G Σ_n (a_n − a_L)² M_n² / (a_L² M_L²)    (6.29)
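As a concrete illustration of relation 6.29, here is a minimal sketch, not from the book, that computes DH_L from the weights M_n and critical contents a_n of a handful of groups; all numbers are invented for the example.

```python
# Minimal sketch (not from the book): Distribution Heterogeneity DH_L of
# relation 6.29, computed from group weights and group critical contents.
import numpy as np

M_n = np.array([10.2, 9.7, 10.5, 9.9, 10.1, 9.6])           # group weights, kg
a_n = np.array([0.031, 0.024, 0.045, 0.018, 0.036, 0.029])  # group contents

N_G = M_n.size
M_L = M_n.sum()
a_L = (a_n * M_n).sum() / M_L    # critical content of the lot

h_n = N_G * (a_n - a_L) / a_L * M_n / M_L   # relation 6.27
DH_L = np.mean(h_n**2)                      # relation 6.29

print(f"a_L  = {a_L:.4f}")
print(f"DH_L = {DH_L:.4f}")
```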


The parameter DH_L allows only an overall characterization of the Distribution Heterogeneity; the sequential characterization is taken into account only in the one-dimensional model presented in Chapter 7.

6.4.5. Relationship between Constitution and Distribution Heterogeneities

G_n is a subset of the population of fragments making up the lot L. By assimilating M_nj and M_i, which are identical by definition, we can write:

h_nj = ((a_nj − a_L)/a_L) · (M_nj/M̄_nj) = N_F (a_nj − a_L) M_nj / (a_L M_L)    (6.30)

We also have:

Σ_i h_i² = Σ_n Σ_j h_nj²    (6.31)

For convenience, we can write the following identity:

h_i ≡ h_nj ≡ (h_nj − h_n) + h_n    (6.32)

Let us square both sides of this identity:

h_i² = h_nj² = (h_nj − h_n)² + 2(h_nj − h_n)h_n + h_n²    (6.33)

If we sum equation (6.33) within the lot L made of N_F fragments, and admit that N_n is relatively constant because of the constant volume of the module of observation, we obtain:

Σ_i h_i² = Σ_n Σ_j h_nj² = Σ_n Σ_j (h_nj − h_n)² + 2 Σ_n h_n Σ_j (h_nj − h_n) + Σ_n N_n h_n²    (6.34)

Because of relation 6.28, it is obvious that the second term of the sum in relation 6.34 is nil:

Σ_i h_i² = Σ_n Σ_j h_nj² = Σ_n Σ_j (h_nj − h_n)² + Σ_n N_n h_n²    (6.35)

Now, to compare equation (6.35) with (6.18), we divide by N_F:

(1/N_F) Σ_i h_i² = (1/N_F) Σ_n Σ_j (h_nj − h_n)² + (1/N_G) Σ_n h_n²    (6.36)

or:

CH_L = CH̄_n + DH_L    (6.37)


where CH̄_n is nothing other than the average Constitution Heterogeneity of the different groups of fragments G_n taken into consideration in the lot L. All terms in equations (6.36) and (6.37) are sums of squares, which means they are either positive or nil; therefore, the following very important inequality is obtained:

CH_L ≥ DH_L ≥ 0    (6.38)

Conclusion: The Distribution Heterogeneity of a lot of particulate material is always smaller than, or at most equal to, its Constitution Heterogeneity.
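The decomposition 6.37 and the inequality 6.38 are easy to verify numerically. The sketch below, not from the book, uses equal-weight fragments, for which h_i reduces to (a_i − a_L)/a_L, split into equal-size groups; the distribution of the contents is an arbitrary choice.

```python
# Numeric sketch (not from the book): checking CH_L = CH̄_n + DH_L
# (relation 6.37) on made-up, equal-weight fragments in equal-size groups.
import numpy as np

rng = np.random.default_rng(3)
N_G, N_n = 8, 50                 # groups, and fragments per group

a = rng.lognormal(mean=-3.0, sigma=0.6, size=(N_G, N_n))  # critical contents
a_L = a.mean()                   # equal weights: a_L is the plain average

h = (a - a_L) / a_L              # h_i for equal-weight fragments
h_n = h.mean(axis=1)             # group heterogeneities, relation 6.28

CH_L   = np.mean(h**2)                     # relation 6.18
DH_L   = np.mean(h_n**2)                   # relation 6.29
CH_bar = np.mean((h - h_n[:, None])**2)    # average within-group CH

print(f"CH_L        = {CH_L:.5f}")
print(f"CH̄_n + DH_L = {CH_bar + DH_L:.5f}")       # matches CH_L exactly
print(f"CH_L >= DH_L >= 0: {CH_L >= DH_L >= 0}")  # inequality 6.38
```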

6.4.6. Definition of the Constitution Homogeneity

By definition, when the material making up the lot is homogeneous, each fragment has the same composition as the average composition of the lot; thus, the factor (a_i − a_L) becomes nil for all fragments. Therefore, equation (6.18) is equal to zero: CH_L = 0. Consequently:

CH_L = CH̄_n + DH_L = 0    (6.39)

Because none of the terms of this equation can be negative, we necessarily have:

CH_L = 0, implying that CH̄_n = 0 and DH_L = 0    (6.40)

Conclusion: With Constitution Homogeneity everything is homogeneous: the constitution of the lot, the constitution of the groups, and the distribution of the groups throughout the lot, which is logical. With a material of homogeneous constitution, we cannot observe or create any form of heterogeneity.

6.4.7. Definition of the Distribution Homogeneity

A lot L has a homogeneous distribution when its Distribution Heterogeneity is nil: DH_L = 0. Consequences: DH_L being the variance of all h_n, it can be nil only when the values of h_n are all equal to their average, which is itself nil. If we take the definition of h_n (i.e., equation (6.27)), and if we eliminate the unlikely mathematical solution of having all the weights M_n nil, as we should not forget that we are interested in real and not imaginary materials, the only possible solution of the condition DH_L = 0 is:

a_n = Σ_j a_nj M_nj / M_n = a_L    (6.41)

irrespective of n = 1, 2, ..., N_G. The set of the N_G equations (6.41) has two solutions:
• All the values of a_nj are equal to the overall average a_L. The material then has a rigorously homogeneous constitution.
• All the values of a_nj, without being strictly equal among themselves, would have the unusual and disturbing property that all the products a_nj M_nj satisfy equation (6.41). Of course, there is no reason for this condition to be fulfilled.

Conclusion: Under natural and practical conditions, the Distribution Heterogeneity is never nil; it is always strictly positive. The natural state of Distribution Heterogeneity is denoted (DH_L)_nat, and we can state:

(DH_L)_nat > 0    (6.42)

6.4.8. Natural Distribution Homogeneity within a Lot

We found that the Distribution Heterogeneity is never nil; thus, it is interesting to specify under which conditions it can be minimized, and what its minimum is. There are many ways to mix and homogenize a material carrying an important Distribution Heterogeneity, such as shovel mixing, sheet mixing, cube mixers, shakers, vee-blenders, mixing with impellers or blades, fluidized-bed mixing, and so on. All of these techniques and systems are based on the same principle, which is mixing. Let us consider a glass jar in which we introduce several alternating layers of white quartz and black magnetite. After mixing the contents of the jar on a shaker mixer for a few seconds, the homogenizing process takes place; the material seems less heterogeneous. Nevertheless, we still clearly see some white and black layers and patches. After mixing the contents for a long time, these layers and patches may slowly disappear, making room for a gray mixture of homogeneously distributed grains of quartz and magnetite. Let us, however, subdivide the contents of the jar into a given number of equal-size and complementary groups G_n of fragments, and determine their respective contents of quartz and magnetite. This can easily be performed with a small magnet. We find that the proportions of quartz and magnetite in each group G_n may be somewhat similar; nevertheless, they are never strictly equal, regardless of the time spent homogenizing the material. For each material there is a small amount of residual heterogeneity, or minimum Distribution Heterogeneity (DH_L)_min, never strictly zero, which is a characteristic of the material itself:

(DH_L)_min > 0    (6.43)

The minimum Distribution Heterogeneity (DH_L)_min is the logical consequence of three factors:
• the particulate nature of the material
• the Constitution Heterogeneity of the material
• the aleatory nature of the mixing process.

Theoretically, the homogenizing mechanism tries to delete any correlation between the position of a fragment and its personality, characterized by its size, weight, density, shape, and so on. All these fragment characteristics are going to be used by omnipresent gravity to maintain some degree of segregation, which is more stable, and thus some degree of Distribution Heterogeneity. In fact, (DH_L)_min can be


compared to an asymptotic limit that is always superior to zero, and this limit highly depends on the physical properties of the material. We can state that when (DH_L)_min is practically reached, all the fragments are randomly distributed within the domain D_L of the lot L. This does not mean that it is a stable state. Now, we should try to statistically interpret this random distribution and select a mathematical model for it. Let us consider a sublot L as a sample made of N_F fragments selected one by one at random from a very large lot L' having the same average properties as L. The number of fragments in L' is larger than N_F by several orders of magnitude (in fact, from a theoretical standpoint, L' should be considered infinite). Let us admit that the variance of h_i within L' is s². The hypothesis of a random distribution of the fragments between the various groups G_n of neighboring fragments is strictly equivalent to the hypothesis that these G_n groups are samples of N_n fragments from the lot L', selected one by one at random, therefore with a uniform probability of being selected. Under these conditions, it is a well-known result of the analysis of variance that:
• The quantity

Σ_n Σ_j h_nj² / s² = N_F CH_L / s²    (6.44)

follows a chi-squared distribution with N_F − 1 degrees of freedom.
• The quantity

Σ_n Σ_j (h_nj − h_n)² / s² = N_F CH̄_n / s²    (6.45)

follows a chi-squared distribution with N_F − N_G degrees of freedom.
• The quantity

Σ_n N_n h_n² / s² = N_F DH_L / s²    (6.46)

follows a chi-squared distribution with N_G − 1 degrees of freedom.

From these equalities, we can state that we have three independent unbiased estimators of the variance s²:
• the total variance:

s_t² = CH_L N_F / (N_F − 1)    (6.47)


• the variance between groups:

s_b² = (DH_L)_min N_F / (N_G − 1)    (6.48)

• the variance within groups, or residual variance:

s_r² = CH̄_n N_F / (N_F − N_G)    (6.49)

Looking at s_t² and s_b², we see that:
• s_t² and s_b² are both unbiased estimators of the same variance s²,
• CH_L and s_t² have a very definite value (an intrinsic property of the material),
• N_F is usually very large, and therefore s_t² is practically equal to CH_L, and
• the residual or minimum Distribution Heterogeneity is a random variable, the true unknown average of which satisfies:

m(s_b²) = s_t² = CH_L    (6.50)

Transposing equation (6.50) into (6.48), and assimilating the average m(s_b²) to s_b², we obtain:

(DH_L)_min = CH_L (N_G − 1) / (N_F − 1)    (6.51)

Conclusion: We have succeeded in expressing the average residual, or minimum, Distribution Heterogeneity as a function of the Constitution Heterogeneity CH_L and the two population sizes N_F and N_G.
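Relation 6.51 lends itself to a simple numerical check. The simulation below, not from the book, generates a set of fragment heterogeneities h_i directly (equal-weight fragments), repeatedly shuffles them at random into N_G equal-size groups, and compares the average observed DH_L with CH_L (N_G − 1)/(N_F − 1); all sizes and distributional choices are arbitrary.

```python
# Simulation sketch (not from the book): random mixing reaches, on average,
# the minimum Distribution Heterogeneity of relation 6.51.
import numpy as np

rng = np.random.default_rng(42)
N_F, N_G = 1200, 30            # fragments and groups (N_F divisible by N_G)

h = rng.normal(0.0, 0.5, N_F)  # fragment heterogeneities h_i (equal weights)
h -= h.mean()                  # enforce relation 6.16: m(h_i) = 0
CH_L = np.mean(h**2)           # relation 6.18

trials = 2000
DH = np.empty(trials)
for t in range(trials):
    rng.shuffle(h)                         # one random "homogenization"
    h_n = h.reshape(N_G, -1).mean(axis=1)  # group heterogeneities (6.28)
    DH[t] = np.mean(h_n**2)                # relation 6.29

print(f"average simulated DH_L   = {DH.mean():.6f}")
print(f"CH_L*(N_G - 1)/(N_F - 1) = {CH_L * (N_G - 1) / (N_F - 1):.6f}")
```

With these sizes, the two printed numbers should agree to within about one percent.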

6.4.9. Different Kinds of Natural Distribution Homogeneity

In Section 5.1.2, we showed that there are three anisotropic hybrids between the heterogeneity and the homogeneity of distribution. It is time to take a closer look at them.

6.4.9.1. Three-Dimensional Homogeneity

This is the only isotropic and nondegenerate kind of natural Distribution Homogeneity. It is the limit that can be approached with cube mixers, shaker mixers, vee-blenders, homogenizing silos, and so on. The state of homogenization reached with these systems is very unstable because of the omnipresence of gravity, which is a force having only one direction.

Conclusion: In this unstable state there is no heterogeneity in any direction, and the most appropriate module of observation is a sphere (strictly isotropic) or a cube (quasi-isotropic).


6.4.9.2. Two-Dimensional Distribution Homogeneity

This kind of homogeneity most often results from the degeneration of a three-dimensional Distribution Homogeneity under the action of gravitational forces. The main factors on which gravity works are the size, density, and shape of the fragments. We learned, particularly in mineral processing and sampling, that it is much easier to define the conditions promoting segregation than to define the conditions ensuring homogenization, and this can likely be generalized to many other industries. This is particularly true when the fragments have large differences in size, density, and shape, as is often the case. Thus, if gravity segregates the various classes of components, it introduces a certain amount of Distribution Heterogeneity along a vertical axis, while the distribution remains homogeneous in the horizontal plane. We can often observe a sizable correlation between the positions of fragments along a vertical axis and their personalities.

Conclusion: In this case we have a certain amount of heterogeneity in only one dimension, and the most appropriate module of observation is a vertical cylinder (isotropic in the horizontal projection) or a prism with a square section (quasi-isotropic).

6.4.9.3. One-Dimensional Distribution Homogeneity This case is different from the preceding case because it is never the result of natural causes, but the result of very elaborate techniques.77 While the two-dimensional homogeneity is the result of an undesirable degradation of a three-dimensional state of homogeneity, the one-dimensional homogeneity is the result of an intentional improvement of a less favorable distribution. In fact, a state of one-dimensional Distribution Homogeneity, such as the one achieved by bed-blending techniques66 often precedes a three-dimensional homogenization. This type of homogeneity is observed in elongated piles similar to those obtained in bed-blending systems, metallurgical products under the form of bars, and so on. Therefore, it is only in one dimension that the distribution is considered homogeneous. Conclusion: In this case we have a certain amount of heterogeneity in two dimensions and homogeneity in the third dimension, which is often a horizontal axis. The most appropriate module of observation is a vertical or inclined slice of constant thickness (isotropic in a space with one dimension).

6.4.9.4. One-Dimensional Revolution Distribution Homogeneity This type of distribution is observed at the discharge of a conveyor belt travelling at low speed with material fallig on a horizontal plane or inside a storage bin. It is also observed in the popular technique of coning and quartering during the often-ineffective phase of homogenization where the distribution is symmetrical around a vertical axis. Conclusion: The distribution is heterogeneous only in two dimensions and homogeneous in one dimension, and the appropriate module of observation is a “piece of pie” or radial cut.

102

Theory of Sampling and Sampling Practice

6.4.9.5. Three-Dimensional Distribution Heterogeneity This is the state of a lot of material in which none of the above kinds of heterogeneity can be observed. Conclusion: The distribution is heterogeneous in three dimensions, and the appropriate module of observation is a sphere or a cube. Under this hypothesis, the natural Distribution Heterogeneity ðDHL Þnat stays smaller than CHL but on average superior to the minimum Distribution Heterogeneity ðDHL Þmin : CHL  ðDHL Þnat  ðDHL Þmin 40

ð6:52Þ

6.4.10. Experimental Verification of the Homogeneity Theoretically, it is possible to select a series of groups Gn of fragments, such as the contents of cubes, prisms, slices, radial cuts, depending on the cases. Then we can estimate the content an and the weight Mn of these groups, and calculate the value of hn and its variance DHL ¼ s2 ðhn Þ. Thus, in some cases, we can estimate the Constitution Heterogeneity CHL and the minimum Distribution Heterogeneity ðDHL Þmin , and in a later section we will come back to this. Most of the time, if we proceed with a logical analysis of how the lot of material was generated, we can decide which of the most likely cases of homogeneity described above we are dealing. In the absence of any information, it is strongly recommended to retain the hypothesis that the distribution is heterogeneous in three dimensions, and any speculation on its homogeneity is a dangerous form of gambling.

6.4.11. Maximum Distribution Heterogeneity within a Lot We saw in equation (6.39) that C Hn is positive or equal to zero; therefore, the maximum value that Distribution Heterogeneity DHL may take is the Constitution Heterogeneity CHL itself: ðDHL Þmax ¼ CHL

ð6:53Þ

This obviously assumes that C Hn ¼ 0, implying that CHn = 0 irrespective of n. Thus, the maximum Distribution Heterogeneity is observed when all groups Gn have a homogeneous constitution, which necessitates one of several conditions; • The lot L carries a homogeneous constitution, therefore: DHL ¼ C Hn ¼ CHL ¼ 0

ð6:54Þ

• Each group Gn is identical to Fi, which means that all groups are made of only one fragment, therefore: DHL ¼ CHL 40

ð6:55Þ

103

Heterogeneity of a Zero-Dimensional Lot

• Each group Gn is made of several fragments, but all these fragments have the same critical content anj ¼ an , which implies that we simultaneously have: ○ A complete mineral liberation ○ A complete separation of the minerals, and ○ A grouping of minerals according to their composition.

6.4.12. Definition and Properties of the Grouping Factor It seems natural to define a Grouping Factor γ as the ratio of the total number of fragments NF to the total number of groups NG, but we define it even more exactly as follows: γ¼

ðNF  NG Þ ðNG  1Þ

ð6:56Þ

or: 1 NG  1 ¼ γ NF  NG

ð6:57Þ

Because NF and NG are very large numbers, and because they are very different, γ is itself very likely to be a large number, therefore, for convenience that will become clear soon, we rewrite equation (6.57) as follows: 1 NG  1 ffi 1 þ γ NF  1

ð6:58Þ

There is a small approximation there, but when NG is very small compared to NF, which is often the case, γ is close to the average number of fragments Nn in groups. Then, for all practical purposes we may admit that: γffi

NF ¼ Nn NG

ð6:59Þ

Conclusion: The order of magnitude of γ is the average number of fragments in the groups Gn which justifies the name grouping factor. Now, we can rewrite equation (6.51) in the following, convenient way: ðDHL Þmin ¼

NG  1 1 CHL ffi CHL NF  1 1þγ

ð6:60Þ

The grouping factor γ is a positive and decreasing function of the number of groups Gn, or an increasing function of the average number of fragments Nn per group, therefore an increasing function of the size of the groups. From this definition we can see that the grouping factor γ reaches a minimum equal to zero when the number of

104

Theory of Sampling and Sampling Practice

groups is equal to the number of fragments; thus, each group is made of only one fragment: γ ¼ 0 when NG ¼ NF

ð6:61Þ

However, γ becomes very large and the minimum Distribution Heterogeneity very small when the number of groups is approaching one, in fact: γ ¼ þ∞ and ðDHL Þmin ¼ 0 when NG ¼ 1

ð6:62Þ

Conclusion: It is the grouping factor that represents the size of the module of observation, which is an important factor in the definition of the Distribution Heterogeneity as described several times in Section 6.4.9.

6.4.13. Definition and Properties of the Segregation Factor At this stage it is important to recall equation (6.52) that shows the domain of the natural Distribution Heterogeneity ðDHL Þnat : CHL  ðDHL Þnat  ðDHL Þmin 40 Using equation (6.60), we can rewrite inequality 6.52 as follows: 1þγ 1 CHL  ðDHL Þnat  CHL 40 1þγ 1þγ

ð6:63Þ

Notice that the factor 1þγ 1þγ is equal to 1 and introduced only for the sake of convenience. Therefore, to obtain ðDHL Þnat we must multiply CHL by some factor between 1 and ð1 þ γÞ. We chose to write this factor ð1 þ ξγÞ, and we define ξ as the segregation factor: ðDHL Þnat ¼

1 þ ξγ CHL 1þγ

ð6:64Þ

Let us transpose this definition in the inequality 6.63 and divide all the terms by which gives: ð1 þ γÞ  ð1 þ ξγÞ  140

CHL ð1þγÞ

ð6:65Þ

Then, let us subtract 1 from all members and finally divide them by γ. We obtain: 1ξ0

ð6:66Þ

This defines the natural domain of segregation and we could write: 1  ξ nat  0

ð6:67Þ

105

Heterogeneity of a Zero-Dimensional Lot

When ξ approaches 0, the lot approaches the state of minimum Distribution Heterogeneity ðDHL Þmin . When ξ = 1, the lot is in a state of maximum Distribution Heterogeneity ðDHL Þmax . Finally, by transposing equation (6.57) into (6.64) we can write: ðDHL Þant ¼ ð1 þ ξγÞ

NG  1 CHL NF  1

ð6:68Þ

This equation will be useful to determine the general expression of the Distribution Heterogeneity. 6.4.14. Effect of the Observation Scale on the Value of the Distribution Heterogeneity The scale of observation is characterized by the size of the groups, therefore by the grouping factor γ. Table 6.2 summarizes the effective domains of NG, γ, and DHL. The first column represents the limit reached by groups made of only one fragment. Column 2 represents the natural domain of the Distribution Heterogeneity. The last column represents the limit case where the number of groups is equal to 1 and equals the entire lot. 6.4.15. Effect of the Size Distribution of the Fragments on the Value of the Constitution Heterogeneity The set LF of NF fragments making up the lot L may lose its integrity either because of clustering or aggregation, or because of voluntary or involuntary comminution of the fragments. 6.4.15.1. Aggregation Let us consider the set LF of NF fragments and the set of LG of NG groups of fragments, then assume that the fragments of each group are aggregated and form a new fragment. Thus, we obtain a new set LF of NF = NG new fragments. The Constitution Heterogeneity CHL’ of the new lot L’ is: CHL0 ¼ DHL  CHL

ð6:69Þ

Conclusion: An aggregation can only reduce the Constitution Heterogeneity of the material under investigation.

TABLE 6.2 Effect of the observation scale on NG, γ, and DHL Nn =1 (irrespective of n)

NF > Nn > 1

Nn = NF

NF



NG



1

0 CHL

 

γ ðDHL Þnat

 

+∞ 0

106

Theory of Sampling and Sampling Practice

6.4.15.2. Comminution We can follow the same reasoning in an opposite way and reach the following alternative conclusion concerning a comminution: Conclusion: A comminution can only increase the Constitution Heterogeneity of the material under investigation.

6.4.16. General Expression of the Distribution Heterogeneity By transposing equation (6.18) into equation (6.68) we obtain a very important expression for the Distribution Heterogeneity: DHL ¼ ð1  ξγÞðNG  1Þ

NF X ðai  aL Þ2 M2i  2 NF  1 i a2L ML

Usually we are dealing with a large number of fragments, therefore above equation can be simplified as follows: DHL ¼ ð1  ξγÞðNG  1Þ

X ðai  aL Þ2 M2  2i 2 a ML L i

ð6:70Þ NF NF 1

¼ 1, and the

ð6:71Þ

This formula is rich in information, and clearly shows that the Distribution Heterogeneity essentially depends on three factors: • The Constitution Heterogeneity CHL, which is an intrinsic characteristic of the population under investigation. • The grouping factor γ, which is a characteristic of the module or scale of observation. • The segregation factor ξ. which is a characteristic of the degree of heterogeneity of the distribution. In practice we never try to estimate these factors; their definition and introduction are only necessary for a logical analysis of the notion of heterogeneity. We still show in Chapter 13 that the sampling variance is proportional to DHL; therefore, it is essential to learn how to minimize the Distribution Heterogeneity. Equation (6.71) clearly shows that to minimize DHL, we have to minimize γ, ξ, and CHL: • γ is minimized by reducing the size of the groups, which in sampling will become increments. • ξ is minimized by homogenizing the material making up the lot. • CHL cannot be minimized; it is an intrinsic property of the lot in a particular state of comminution, and nothing can be done about it. Conclusion: The long theoretical development of this chapter is perfectly justified because it provides a clear understanding about the nature of heterogeneity, and it also provides solutions to sampling problems which are associated with heterogeneity.

Heterogeneity of a Zero-Dimensional Lot

107

6.4.17. Illustration of the Definition of Heterogeneity The definition of heterogeneity given so far can be easily illustrated with a series of simple pictures. Assuming we have a population of 64 fragments represented by 64 small squares in the plane of this two-dimensional page, we can represent a lot L by a large square of 8x8 adjacent small squares. Figure 6.1 shows 64 squares that are strictly identical, corresponding to the definition of a lot with a homogeneous constitution; furthermore, the distribution is strictly homogeneous. Any subset has exactly the same composition. This confirms the conclusions reached in Sections 6.4.5 and 6.4.6: without Constitution Heterogeneity, any form of Distribution Heterogeneity cannot exist. Figure 6.2 shows 16 horizontal and 16 vertical modules. The constitution is heterogeneous as there are four different components (i.e., centered black circles, centered white squares, centered black triangles, and centered white triangles). But, the distribution shows a form of homogeneity that can be defined as homogeneous modular distribution. If the modules are different from four adjacent horizontal or vertical squares, then some heterogeneity starts to show between the different groups. This type of structure is sometimes encountered in perfect mineral crystals. Figure 6.3 shows 16 centered white triangles, 16 centered black circles, 16 centered white squares, and 16 centered black triangles, however the 4 components are completely separated into four horizontal zones. The constitution is heterogeneous, and the Distribution Heterogeneity is at its maximum: the constituents are completely segregated from one another as horizontal layers. On the other hand, assuming the module of observation was a vertical volume made of 8 squares, we would be back to a homogeneous modular distribution, so it is interesting to notice that it highly depends on how we define the module of observation. This distribution also shows very well the dangers of grab sampling when the operator takes the sample where it is easy access, for example at the top of layered truck load.

FIGURE 6.1 Homogeneous constitution and distribution.

108

Theory of Sampling and Sampling Practice

FIGURE 6.2 Heterogeneous constitution but homogeneous modular distribution. All groups described by the module 4 squares x 1 square have strictly the same composition, either horizontally or vertically.

FIGURE 6.3 Heterogeneous constitution and maximum distribution heterogeneity (i.e., perfect segregation).

Figure 6.4 shows 16 centered white triangles, 16 centered black circles, 16 centered white squares, and 16 centered black triangles, as in Figures 6.2 and 6.3; however, the four components are distributed at random. It is the state of natural distribution homogeneity. For example, each square is selected at random, one by one, before being placed inside the frame of the lot. Such a random selection is equivalent to mixing or homogenization, and tends to suppress any correlation between position and

Heterogeneity of a Zero-Dimensional Lot

109

FIGURE 6.4 Heterogeneous constitution and natural distribution Homogeneity (i.e., no correlation between location and personality).

personality of the units. We can still detect a few small clusters which clearly demonstrate that this state of natural distribution homogeneity is far from perfect.

6.5. Transformation of a Set of Units with Two Descriptors into an Equivalent Set of Units with One Descriptor One of the difficulties encountered in the publications of Pierre Gy’s theory prior to the last modern version entitled Heterogeneity, Sampling, Homogenization77 is the fact that the unit Um was described with two parameters: its critical content am and its weight Mm. This difficulty was eliminated by replacing these two descriptors by a single descriptor, which is the heterogeneity hm carried by the unit. This is one of the major developments described in Pierre Gy’s most recent textbook.78 Let us consider: 1. A real lot L, with a critical content aL and a weight ML, made of a set of Nu real units Um characterized by the two parameters am and Mm, and by hm which already contains am and Mm,. We saw that we have: hm ¼

ðam  aL Þ Mm ðam  aL Þ Mm  ¼ Nu  ML aL aL Mm

ð6:72Þ

0 with 2. The imaginary lot L’ with a0L and M0L , made of a set of Nu0 imaginary units Um 0 0 0 0 am , Mm , and hm . The lot L’ and the units Um are derived from the lot L and the unit Um using the following equalities which define them:

110

Theory of Sampling and Sampling Practice

M0m ¼

Nu0 ¼ Nu : L0 has the same population as L:

ð6:73Þ

M0L ¼ ML : L0 as the same weight as L:

ð6:74Þ

a0L ¼ aL : L0 has the same critical content as L:

ð6:75Þ

ML 0 ¼ Mm : U m has the same weight as the average unit Um : Nu

0 carries the same amount of heterogeneity as Um : h0m ¼ hm : Um

ð6:76Þ

ð6:77Þ

Taking into account 6.72 and 6.77, we easily deduce: 

h0m

 a0m  aL ¼ ¼ hm aL

ð6:78Þ

For all practical purposes, its heterogeneity and its sampling, the unit Um has exactly 0 of uniform weight M0m ¼ Mm and of critical the same behavior as the imaginary unit Um 0 content am equal to: a0m ¼ ð1 þ hm ÞaL

ð6:79Þ

Conclusion: Because the set of hm is identical to the set of h0m , the lot L strictly has the same properties as the lot L’; therefore, it is perfectly defined by the set of values of the single descriptor hm. We transformed a set of units with two descriptors into an equivalent set of units with only one descriptor.

6.6. Practical Use of the Proposed Definitions of the Heterogeneity The definition given regarding the various kinds of heterogeneity was originally intended to solve problems encountered by the theory of sampling particulate materials. This subject will be developed in depth in the subsequent chapters. We believe, however, that these definitions can have various implications far beyond the sole purpose of sampling each time the heterogeneity of a raw material, high purity product, chemical or pharmaceutical product, food in general, silicon chips, supraconductors, and many others may become a problem. In conclusion, the study and understanding of heterogeneity is an inseparable factor from the progress of modern technology.

111

Heterogeneity of a Zero-Dimensional Lot

6.7. Practical Example of the Calculation of Heterogeneity Characteristics in a Counted Population As an illustration of the definition of the heterogeneity hm we give in Table 6.3 the contents am in percent molybdenite, the weights Mm in gram, and the heterogeneities hm TABLE 6.3 Calculation of hm for the molybdenite content of a series of increments taken in a rougher-feed circuit Increment m

am (%MoS2)

Mm (g)

hm (10–3)

Increment m

am (%MoS2)

Mm (g)

hm (10–3)

1

0.156

3241

-94.0

36

0.188

3778

2

0.152

3423

-121.4

37

0.182

3552

+85.3 +45.8

3

0.143

3436

-171.7

38

0.195

3612

+122.3

4

0.159

3372

-81.5

39

0.158

3327

-85.8

5

0.148

3529

-147.9

40

0.178

3687

+23.8

6

0.136

3300

-202.2

41

0.182

3452

+44.5

7

0.127

3122

-236.5

42

0.181

3687

+41.6

8 9

0.132 0.143

3007 3436

-203.6 -171.7

43 44

0.174 0.172

3281 3572

0 -11.5

10

0.158

3421

-88.2

45

0.165

3497

-50.7

11

0.161

3410

-71.5

46

0.181

3627

+40.9

12

0.165

3418

-49.6

47

0.177

3335

+16.1

13

0.172

3607

-11.6

48

0.183

3461

+50.2

14

0.175

3072

+5.0

49

0.189

3448

+83.4

15

0.159

3305

-79.9

50

0.188

3801

+85.8

16 17

0.164 0.183

3560 3291

-57.4 +47.7

51 52

0.184 0.184

3520 3730

+56.7 +60.1

18

0.174

3470

0

53

0.198

3651

+141.3

19

0.184

3520

+56.7

54

0.185

3907

+69.3

20

0.160

3830

-86.4

55

0.196

3524

+125.0

21

0.165

3607

-52.3

56

0.204

3721

+180.0

22

0.161

3922

-82.2

57

0.186

3656

+70.7

23

0.168

3530

-34.1

58

0.194

3918

+126.3

24 25

0.158 0.155

3718 3628

-95.9 -111.1

59 60

0.198 0.197

3530 3815

+136.6 +141.5

26

0.162

3772

-73.0

61

0.205

3820

+190.9

27

0.161

3225

-67.6

62

0.192

3640

+105.6

28

0.169

3815

-30.8

63

0.190

3791

+97.8

29

0.157

3570

-97.8

64

0.185

3507

+62.2

30

0.183

3717

+53.9

65

0.188

3889

+87.8

31

0.171

3492

-16.9

66

0.196

3505

+124.3

32 33

0.165 0.182

3747 3418

-54.4 +44.1

67 68

0.194 0.197

3929 3515

+126.7 +130.3

34

0.167

3801

-42.9

69

0.201

3739

+162.7

35

0.172

3617

-11.7

70

0.196

3743

+132.7

112

Theory of Sampling and Sampling Practice

FIGURE 6.5 The fluctuations of the critical content am :

FIGURE 6.6 The fluctuations of the active components weight Mm.

carried by 70 increments selected from the stream of a rougher feed in a flotation plant. We are going to assume for now that the 70 increments represent the totality of a lot L made of 70 units Um. We can verify that: • The average of the hm is negligible. • The variance of the hm, which is the Constitution Heterogeneity CHL of the 70 increments, is around 10.116 x 10–6 • The Intrinsic Heterogeneity IHL is about 36g.

Heterogeneity of a Zero-Dimensional Lot

113

FIGURE 6.7 The fluctuations of the heterogeneity hm.

Because of its definition, hm is a function of am and Mm and we may wonder which of these two descriptors is the most relevant. Experience proves that the most significant descriptor of hm is by far am ; this is well demonstrated in Figures 6.5 to 6.7. We easily observe that the fluctuations of hm follow exactly the fluctuations of am , and it seems that the fluctuations of Mm do not affect those of hm, or only to the second order. Furthermore, to make our point even clearer, let’s look at the respective relative, dimensionless standard deviations of am and Mm which are respectively 0.5919 and 0.0000164. Obviously, the variability of Mm is negligible by several orders of magnitude. This does not mean we should forget the fluctuations of Mm in all cases; we should wisely keep an eye on them and this is the objective of IWE in Chapter 27. In summary: • aL = 0.174% MoS2 • Mm = 3565g • CHL = 0.0101 • IHL = 36g

This page intentionally left blank

7 Heterogeneity of a One-Dimensional Lot Notion of Variography

7.1. Introduction There are three different kinds of one-dimensional lots: • A stream, stationary or moving, made of particulate material (e.g., ore, products, chemicals on a conveyor belt, slurries, or solids-in-gas suspensions in ducts, etc.) • A stream, stationary or moving, of liquid (e.g., water stream, river, liquid chemicals, etc.) • A stream, stationary or moving, made of a chronological set of discrete units (e.g., railroad cars, trucks, bags, jars, and also core samples from downhole drilling, etc.) These three apparently different cases are, in fact, identical from practical and theoretical standpoints. The description and characterization of the various forms of heterogeneity of these chronological lots (e.g., process trends, downhole trends, cycles) necessitate the use of continuous functions. Nevertheless, because no practical ways are known to do it, the experimental study of these forms of heterogeneity would have to be performed on a finite number of quasi-punctual increments for particulate materials, slurries, liquids, core samples completely or partially recovered, or on a finite number of units for railroad cars, trucks, bags, jars, and so on. Thus far, in Chapter 6, we have considered a lot of material strictly as a statistical population in which the intrinsic Constitution Heterogeneity and the associated Distribution Heterogeneity were the only two forms of heterogeneity; however, there are other forms of heterogeneity that generate nonrandom fluctuations within a lot of material. Geostatisticians have developed an effective means to quantify these nonrandom fluctuations in multidimensional ore deposits.28–31 It would be uneconomical and impractical to apply geostatistical principles to three- or even two-dimensional lots for the sole purpose of sampling; these principles applied to a one-dimensional lot could, however, in a relatively simple way, give valuable information to the quality control engineer. Fortunately, it is always possible to transform a three- or two-dimensional lot into a one-dimensional lot and, in fact, it is done all the time to facilitate handling, transportation, reclaiming, homogenizing, and so on. To simplify the content of this chapter, we assume that we are working with a moving one-dimensional stream of particulate material (e.g., ore travelling on a conveyor belt) equipped with a sampling station capable of selecting correctly delimited and extracted increments, which is a subject largely developed in Part VII of this book, at any interval of time we may choose. 115

116

Theory of Sampling and Sampling Practice

The use of the following principles with a liquid stream or a chronological set of discrete units does not present any practical difficulty; actually, it is often a simple matter of looking at the same problem occurring more or less continuously at a different scale. The study of a chronological set of units, and the quantification of all the forms of heterogeneity fluctuations carried out by this set, random or nonrandom, is often called variography. The most important and effective tool used in variography is called the relative semi-variogram, most often simply called the variogram.

7.2. Total Heterogeneity Supported by a One-Dimensional Lot Let us consider a one-dimensional lot L made of Nu discrete units (e.g., increments, railroad cars, trucks, bags, jars, etc.) arranged in chronological order, and Um one of these units with m=1.2. . ..,Nu. Such chronological series of units are very often encountered in practice, for example, the production of a plant packaged at the end of the continuous stream of a given product ready for commercialization; it can be a copper concentrate, olive oil, cat litter, beer, you name it. For each package, the chronological value of the subscript m becomes highly relevant, and should be carefully recorded. In this chapter, we do not consider the heterogeneity fluctuations within each unit that could be treated independently like we did in Chapter 6, but we are more interested by the heterogeneity fluctuations between units. The heterogeneity carried out by the material of each unit is entirely defined by the three following descriptors: • Mm: the weight of active components in Um • Am: the weight of the component of interest A in Um • am : The component of interest content, or critical content of Um It is important to remember that each component of interest will have its own following descriptors, respectively, Am, am , Bm, bm , Cm, cm , and so on. For each component three descriptors are related as follows: a¼

A M

ð7:1Þ

Therefore, only two of these descriptors need to be known to completely define a given unit. 7.2.1. Definition of the Heterogeneity Carried by the Unit Um As already defined in Section 6.3.1.2 for a zero-dimensional lot, the total heterogeneity carried by the unit Um of a one-dimensional lot can be expressed as follows: hm ¼

ðam  aL Þ Mm ðam  aL Þ Mm  ¼ Nu  ML aL aL Mm

ð7:2Þ

where Mm is the weight of the average unit Um . The study of a large number of chronological series leads to the conclusion that in many cases they are the result of

Heterogeneity of a One-Dimensional Lot

117

the accumulation, around the average value aL , of three kinds of fluctuations with independent causes. This is true for each of the three descriptors Am, am , and Mm, but it is also true for the heterogeneity hm, whose average is zero as shown in equation (6.16). These three independent kinds of fluctuations can be defined as follows: • A short-range term, mainly random, discontinuous at every instant, reflecting the random nature of Constitution Heterogeneity within one unit or one increment. For liquids, these fluctuations are usually negligible but not nil. This term is defined as the heterogeneity h1m carried by one unit. • A long-range term, nonrandom, mainly continuous, representing trends between units. This term is a nonrandom form of heterogeneity defined as h2m. It could also be defined as a large-scale segregation or as a regionalization by geostatisticians. • A cyclic term, continuous, such as cycles introduced by reclaiming operations or corrective actions to maintain a parameter between two known limits. Such a cycle is nearly always detrimental to quality, therefore to sampling, and is defined as the nonrandom and periodic form of heterogeneity h3m. Taking into account the fact that the heterogeneity of the constant aL is obviously zero, we can write: hm ¼ h1m þ h2m þ h3m

ð7:3Þ

Experience shows that these three forms of heterogeneity reflect mainly the fluctuations of the critical content am of the units or increments around the average content aL , and to a much lesser extent the fluctuations of the weight Mm of the units or increments around their average Mm ; therefore, we should direct our attention to the critical content assuming that the flow rate of the stream has been reasonably regulated and maintained relatively constant (i.e., fluctuations of ±20% in the flow rate are regarded as an acceptable maximum). This subject will be revisited in Chapter 27.

7.2.2. Characterization of a Chronological Series It is possible to characterize a chronological series in two different ways: • An overall characterization of the heterogeneity carried by the statistical population of the hm using a scalar such as the variance. The order of the units, even if known, is voluntarily considered as irrelevant. This approach brings us back to the problem studied in Chapter 6. • A sequential characterization of the heterogeneity carried by the chronological series of hm using a function such as the variogram. The order of the units described by the value of the subscript m is now highly relevant. We study this case in this chapter.

118

Theory of Sampling and Sampling Practice

7.3. Overall Characterization of the Heterogeneity of a One-Dimensional Lot If we voluntarily choose to ignore the chronological order of the units, we transform the chronological series of units into a statistical population of units that we can define as explained in Sections 6.3.4 and 6.3.6, respectively: • CHL: the overall Constitution Heterogeneity of the population of units Um, which should not be confused (if the units are made of several fragments) with the Constitution Heterogeneity of the population of fragments. • IHL: the Intrinsic Constitution Heterogeneity of the lot L. When looking at a chronological series that way, we may indeed miss valuable information; unfortunately, many people look at it that way.

7.4. Sequential Characterization of the Heterogneity of a One-Dimensional Lot 7.4.1. Order and Correlation Problems generated by the heterogeneity of auto-correlated series have been studied and solved by Matheron,28–31 David,173 Journel, and Huijbregts.177 The main tool proposed by these authors for the characterization of this auto-correlation is the variogram. We will see that the variogram can be effectively used to characterize the heterogeneity of a chronological series of units Um. In all prior publications of Pierre Gy,64,73 the variogram was applied to the three descriptors Am, am , and Mm, (see definitions in Section 7.2). In 1988 Pierre Gy’s publication77 and this publication it is suggested that the same analysis can be achieved in a much simpler way by studying the variogram of the heterogeneity hm carried out by the various units Um. In many cases, for all practical purposes, the study of the variogram of am is sufficient when the variability of the flow rate is well regulated. 7.4.2. Basic Definition of the Semi-Variogram Let us consider a chronological series of values for hm and define j as a given discrete interval (j = 1,2,3,. . .) separating two units whose heterogeneity will be compared; j is also called the lag between units. j¼

 min

ð7:4Þ

where θ is a given interval, and θmin the minimum interval under consideration; these intervals can be expressed in units such as second, minute, hour, meter, and so on, while j (i.e., the lag) is relative and dimensionless. A variogram is a plot of the average differences in a characteristic such as the values of the various hm, between pairs of units selected as a function of time or distance, where the pairs are chosen in whole-number multiples (e.g., every meter, 2m, 3m, etc., or every

119

Heterogeneity of a One-Dimensional Lot

second, 2s, 3s, etc.). When a variogram is calculated, units regularly spaced along a onedimensional lot are considered, and the measurement of a continuity index of their heterogeneity hm as a function of time or distance is determined. For monitoring purposes, the most natural way to compare two values such as h1 and h2 representing, respectively, the heterogeneity of unit U1 and the one of unit U2 separated by a given lag j, is to consider their difference: d ¼ h1  h2

ð7:5Þ

But, what is mostly relevant is the average difference between the heterogeneity of ðNu  jÞ pairs of units, a given lag j apart: d¼

 1 X hmþj  hm Nu  j m

ð7:6Þ

It is well known that such an average difference d tends toward zero, and to get around this difficulty it is convenient to define a new difference D, which is the average of the squared differences: D¼

2 1 X hmþj  hm Nu  j m

ð7:7Þ

m Bi +j = ss ec h to m r

This difference D is not exactly what we want to measure. As illustrated in Figure 7.1 the desired difference is d, which is the shortest distance AB between any point (hm+j, hm) and the bisector where hm+j = hm; it is the moment of inertia around the bisector, if you can imagine the picture in three dimensions. Obviously d = AB = BC = BD. From this figure we may write:

hm+j C

d

h

A

d B

d D

E

hm

FIGURE 7.1 Illustration of the desired difference between pairs of units relative to the bissector where hm+j = hj.

120

Theory of Sampling and Sampling Practice

AC ¼ AE  DE ¼ AD  DE ¼ hmþj  hm

ð7:8Þ

which leads to:  2 d2 ¼ hmþj  hm  d2   hmþj  hm pffiffiffi d¼ 2

ð7:9Þ ð7:10Þ

Now, we may rewrite equation (7.7) correctly: Dcorrect ¼ Vð jÞ ¼

1 2ðNu  jÞ

X

hmþj  hm

2

ð7:11Þ

m

The descriptor h has a relative and dimensionless value; therefore, the variogram V(j) is an absolute variogram; in that case the calculation of the relative variogram is meaningless because the average m(hm) is nil as shown in Section 6.3.2.

7.5. Actual Presentation of the Variogram In the majority of chronological series, the differences between values at two points hm and hm+1 can be expected to increase as the interval increases, so the variogram V(j) can be expected to increase. This is true for all variograms of observations that are positively autocorrelated. For negatively autocorrelated series the variogram will decrease. For a series that shows neither positive nor negative autocorrelation, but is in fact statistically independent, the variogram would be flat. (i.e., random). In the mineral industries, most series will have increasing variograms; truly decreasing variograms and flat variograms are rare. In practice, a variogram may appear flat due to a lack of closely spaced data. The variogram may also be flat when data is very closely spaced. The semi-variogram V(j) is calculated for each interval (e.g., j =1,2,3,. . .) by using Nu  j pairs of values for the heterogeneity hm, then we compute V(j=1), V(j=2), V(j=3), and so on, as illustrated in Figure 7.2. The variographic function can thus be calculated and an example of it is shown in Section 7.6 and Figure 7.3. Before going further, three important remarks should be made concerning the correct number of pairs of chronological units that can be used for the calculation of the variogram, and also how not to misuse the variogram. 7.5.1. Problem Associated with the Central Values of the Chronological Series Values of V(j) calculated for j > Nu/2 do not involve central values of the chronological series, and this should not be so; the number of pairs Nu – j should be larger than Nu/2 (i.e., j  Nu/2). Furthermore, if this condition is satisfied, there is also the fact that all central values of hm between m = j and m = Nu – j are taken into account twice while the others only once. Fortunately, experience proves that the

121

Heterogeneity of a One-Dimensional Lot

j=3

Increments

j=2

j=1

Nu - 1 pairs Nu - 2 pairs Nu - 3 pairs

FIGURE 7.2 Illustration of the schematic computation of a variogram.

effect of this problem on the resulting variographic function is of no concern as long as the number of pairs Nu – j is larger than Nu/2, which is an important rule to remember.

7.5.2. Problem Associated with the Precision of the Variographic Variance  2 The average of the squared differences hmþj  hm is known with sufficient precision only if the number of pairs Nu – j available is at least equal to 25. The rule of thumb is as follows: • For Nu – j > 30, the precision is very good. • For Nu – j =25, the precision is acceptable. • For Nu – j =20, the precision deteriorates dangerously. • For Nu – j 1 as follows: S0 j  12  1 j2 Vð0Þ þ W ð jÞ ¼ W0 ð jÞ 2 j W0 ð jÞ ¼

ð7:62Þ

ð7:63Þ

Yet, we already know that within the linear part of the variogram V( j) with a slope V1 we will obtain a linear first order average integral W( j) with a slope V1/2. Furthermore, because of equation (7.63) we may write:

V(j)

V(0)

Sill

0

1

2

3

4

5

j=T / Tmin

FIGURE 7.9 Typical variogram obtained for increments collected at very short intervals where the best estimate of V(0) has been chosen as the sill of the first three points (i.e., j =1,2, and 3).

142

Theory of Sampling and Sampling Practice

For j > 1 we have W ð j Þ ¼ W0 ð j Þ

ð7:64Þ

Because of equation (7.62) we may write: For j = 1 we have W0 ð1Þ ¼ Vð1Þ

ð7:65Þ

W ð0Þ5W0 ð0Þ

ð7:66Þ

By continuity: For j = 0 we should have

For this reason, the extrapolation toward j = 0 of the line representing the point W0 ð jÞ known for j  0 gives a slight overestimation W0 ð0Þ of V(0): W0 ð0Þ ¼ EST ½Vð0Þ

ð7:67Þ

7.12.4. Point-by-Point Calculation of the First Order Average Integral W(j) If we recall the definition of W(j) given in Section 7.10.1, we can write: W ð jÞ ¼

1 j

ðj

1 Vð j0 Þdj0 ¼ Sð jÞ j 0

ð7:68Þ

For j = 0 we have: S ð 0Þ ¼ 0

ð7:69Þ

For j > 0 we have: Sð jÞ ¼ Sð j  1Þ þ

Vð j  1Þ Vð jÞ þ 2 2

ð7:70Þ

V(0) is estimated by extrapolation as shown in Section 7.12.3.

7.12.5. Point-by-Point Calculation of the Second Order Average Integral W’(j) If we recall the definition of W’(j) given in Section 7.10.2, we can write: 2 W ð jÞ ¼ 2 j 0

Let us define the surface S’(j):

ðj 0

j0 Wðj0 Þdj0 ¼

2 0 0 Sðj Þdj j2

ð7:71Þ

143

Heterogeneity of a One-Dimensional Lot

0

ðj

Sðj0 Þdj0

ð7:72Þ

2 0 S ð jÞ j2

ð7:73Þ

S0 ð0Þ ¼ Sð0Þ ¼ 0

ð7:74Þ

S ð jÞ ¼

0

W 0 ð jÞ ¼ For j = 0 we have:

For j > 0 we have: S0 ð jÞ ¼ S0 ðj  1Þ þ

S ð j  1Þ S ð j Þ þ 2 2

ð7:75Þ

From this, we can deduce that for j = 0 we have the very important relationship: W 0 ð0Þ ¼ Wð0Þ ¼ Vð0Þ

ð7:76Þ

And for j > 0 we have: W 0 ð jÞ ¼

2 0 S ð jÞ j2

ð7:77Þ

7.12.6. Calculation of the Heterogeneity Fluctuation Error If we recall the definition given in Section 7.11.1, we can write:   j  W 0 ð jÞ ¼ Eð jÞsy Qs ðHFEÞsy ¼ 2W 2

ð7:78Þ

Qs2 ðHFEÞst ¼ W 0 ð jÞ ¼ Eð jÞst

ð7:79Þ

Qs2 ðHFEÞra ¼ Eð jÞra ¼ s2 ðhm Þ ¼ CHL ¼ Constant

ð7:80Þ

2

where Q is the number of increments in a sample collected during a routine quality control program, as an example. Therefore, these equations are directly applicable to the optimization of the mode of selection of the increments and their necessary number to meet a given allotted total sampling error. We must comment on equation (7.78); because we do not have access to numerical values of W(x) except when x is a positive integer, when j is odd we must do an approximation, therefore between j and j + 1 we do a linear interpolation of W(j). Under these conditions, if we define j0 as the sampling interval of interest, we may write:

144

Theory of Sampling and Sampling Practice

For j = 2 j0 we have :   j ¼ 2W ð j0 Þ 2W 2

ð7:81Þ

  j ¼ W ð j0 Þ þ W ð j0 þ 1Þ 2W 2

ð7:82Þ

For j = 2 j0 + 1 we have :

7.13. Step-By-Step Interpretation of a Variogram A variographic experiment was conducted to optimize a series of automated sampling stations in a molybdenum flotation plant, The material under investigation was a rougher tailing pulp.

7.13.1. Investigation of the Random Term V(0) A series of 60 increments were collected at θmin = 20s imtervals. All increments were thoroughly subsampled, prepared, and assayed for their MoS2 content. Results are summarized in Tables 7.4 and 7.5. Figure 7.10 shows the calculated relative variogram V(j), with respect to our dsicussions in Sections 7.5.1 and 7.5.2; we obtain only 30 points on the variogram. At first glance this variogram appears essentially flat, which was predictable as the data seems to be within the analytical variability and the material seems to show very little heterogeneity. The first two points of the first order average integral were used to estimate V(0) which is around 9.36x10–4. We can make a few remarks: • It is easy to see in Figure 7.10 that the first order average integral W(j) shown by the thick dark line, completely erased the residual random term, making the extrapolation to estimate V(0) very easy. • Someone may think the variogram looks cyclic. It is an illusion, and all the variability we see is almost exclusively random.

7.13.2. Investigation of the Continuous Term of V(j) A series of 80 increments were collected at θmin = 10-minute intervals; all increments were thoroughly subsampled, prepared, and assayed by strictly following the same protocols as those applied to increments collected for studying the random term V(0). Results are summarized in Table 7.6. With respect to our discussions in Sections 7.5.1 and 7.5.2, the calculated variogram provides only 40 points. The results of V(j) and W(j)sy are provided in Figure 7.11. Table 7.7 shows how we would organize our calculations for a clear interpretation of the variogram. Results from V(j), W(j) and W’(j) are reported in Figure 7.11.

145

Heterogeneity of a One-Dimensional Lot

TABLE 7.4 Investigation of the short-term molybdenum content fluctuations in a rougher tailing pulp of a flotation plsnt am (% MoS2)

Increment number m

am (% MoS2)

1

0.031

31

0.035

2

0.031

32

0.034

3

0.032

33

0.033

4

0.032

34

0.031

5 6

0.033 0.034

35 36

0.033 0.034

7

0.035

37

0.034

8

0.032

38

0.036

9

0.035

39

0.036

10

0.034

40

0.032

11

0.036

41

0.033

12

0.034

42

0.034

13 14

0.034 0.036

43 44

0.034 0.035

15

0.033

45

0.036

16

0.033

46

0.036

17

0.036

47

0.032

18

0.036

48

0.033

19

0.032

49

0.033

20

0.035

50

0.034

21 22

0.034 0.034

51 52

0.034 0.035

23

0.034

53

0.034

24

0.033

54

0.036

25

0.035

55

0.034

26

0.035

56

0.034

27

0.034

57

0.034

28

0.035

58

0.032

29 30

0.036 0.034

59 60

0.034 0.033

Increment number m

Now we can use Table 7.7 to calculate the continuous Heterogeneity Fluctuation Error generated by a sample collected, for example, every 8-hour shift, knowing that one increment was collected every 20 minutes (i.e., Q = 24): Case of a random systematic sampling interval: s2 ðHFEÞsy ¼

10x104 ¼ 0:42x104 24

146

Theory of Sampling and Sampling Practice

TABLE 7.5 Experimental variogram V(j) of the short-term molybdenite content fluctuations in a rougher-tail pulp of a flotation plant V(j) x 10–4

j

j

V(j) x 10-4

1

12

16

17

2

15

17

16

3

15

18

14

4

17

19

17

5

20

20

18

6

16

21

16

7 8

13 15

22 23

17 19

9

17

24

17

10

19

25

15

11

15

26

17

12

20

27

19

13

17

28

14

14

16

29

18

15

14

30

19

FIGURE 7.10 Graphic representation of the variogram V(j) of the short-term molybdenite content fluctuations in a roughertail pulp of a flotation plant.

147

Heterogeneity of a One-Dimensional Lot

TABLE 7.6 Investigation of the long-range nonrandom molybdenite content fluctuations in a rougher tail pulp of a flotation plant am (% MoS2)

Increment number m

am (% MoS2)

1

0.034

41

0.043

2

0.034

42

0.042

3

0.032

43

0.042

4

0.034

44

0.038

5 6

0.033 0.034

45 46

0.041 0.042

7

0.034

47

0.041

8

0.034

48

0.037

9

0.034

49

0.037

10

0.031

50

0.038

11

0.034

51

0.041

12

0.036

52

0.042

13 14

0.037 0.037

53 54

0.038 0.042

15

0.036

55

0.041

16

0.034

56

0.046

17

0.036

57

0.044

18

0.034

58

0.045

19

0.038

59

0.044

20

0.038

60

0.048

21 22

0.039 0.038

61 62

0.048 0.046

23

0.037

63

0.047

24

0.041

64

0.046

25

0.042

65

0.047

26

0.042

66

0.047

27

0.042

67

0.047

28

0.041

68

0.050

29 30

0.038 0.037

69 70

0.047 0.047

31

0.038

71

0.044

32

0.041

72

0.044

33

0.042

73

0.045

34

0.040

74

0.046

35

0.041

75

0.047

36

0.042

76

0.041

37 38

0.042 0.045

77 78

0.042 0.046

39

0.043

79

0.044

40

0.044

80

0.047

Increment number m

148

Theory of Sampling and Sampling Practice

FIGURE 7.11 Graphic representation of the variogram, V(j), which is the top thin line, the first order average integral W(j) which is the thick center line, and the second order average integral W’(j) which is the bottom dashed line.

Case of a stratified random sampling interval: s2 ðHFEÞst ¼

14x104 ¼ 0:58x104 24

Therefore, when using a random systematic sampling selection, the relative standard deviation of the continuous Heterigeneity Fluctuation Error is: sðHFEÞsy ¼ 0:65% relative If we were using a stratified random sampling interval, result will be: sðHFEÞst ¼ 0:76% relative Important remark: In the above calculation for the continuous Heretogeneity Fluctuation Error we made the assumption that the descriptor am was by far the dominant factor.

7.14. Practical Interest of a Variographic Experiment in Quality Control From a practical standpoint, the main interest in conducting a variographic experiment is to predict in advance the order of magnitude of the continuous Heterogeneity Fluctuation Error HFE or its components QFE and IWE as introduced is Section 4.8. We may wonder if it is permissible to extrapolate the conclusions reached by such an experiment performed one day to the next week, next month, or next year. The answer is complex, not obvious, and the author strategy is to use existing data produced every working shift (e.g., 8-hour shift), calculate the variogram, and see the evolution of V(0) and V(j=1) and judge from here if something is changing substantially that would trigger the necessity to perform another variographic experiment to insvestigate what it is that has been changing.

149

Heterogeneity of a One-Dimensional Lot

TABLE 7.7 Illustration of the organization of the necessary calculations for the interpretation of the long-range variogram (unit = x10-4) j

V(j)

S(j)

W(j)

S’(j)

W’(j)

2W(j/2)

E(j)sy

0 1

9.36

0

9.36

0

9.36

18.72

9.36

14

12

12

6

12

22

10

2

20

29

15

27

14

24

10

3

27

53

18

68

15

27

12

4

27

80

20

135

17

30

13

5

25

106

21

228

18

33

15

6 7

32 37

134 169

22 24

348 500

19 20

36 38

17 18

8

45

210

26

690

22

40

18

9

50

257

29

924

23

41

18

10

55

310

31

1208

24

42

18

11

57

366

33

1546

26

43

17

12

58

423

35

1941

27

44

17

13

58

481

37

2393

28

46

18

15 15

61 70

541 606

39 40

2904 3478

30 31

48 50

18 19

16

77

680

43

4121

32

52

20

17

79

758

45

4840

33

55

22

18

77

836

46

5637

35

58

23

19

79

914

48

6512

36

60

24

20

84

995

50

7467

37

62

25

21

87

1081

51

8505

39

64

25

22 23

91 94

1170 1262

53 55

9631 10,857

40 41

66 68

26 27

24

97

1358

57

12,157

42

70

28

25

101

1457

58

13,565

43

72

29

26

98

1556

60

15,072

45

74

29

27

99

1655

61

16,678

46

76

30

28

98

1753

63

18,382

47

78

31

29

108

1856

64

20,187

48

79

31

30 31

114 118

1967 2083

66 67

22,099 24,124

49 50

80 83

31 33

32

114

2199

69

26,265

51

86

35

33

115

2314

70

28,522

52

88

36

34

121

2432

72

30,895

53

90

37

35

125

2555

73

33,389

55

91

36

36

133

2684

75

36,009

56

92

36

37

132

2816

76

38,759

57

94

37

38 39

136 137

2950 3087

78 79

41,642 44,661

58 59

96 98

38 39

40

139

3225

81

47,817

60

100

40

150

Theory of Sampling and Sampling Practice

7.14.1. Stability of a Variogram In most of the projects we have been involved with, we found that a variogram is relatively stable with time and there are various reasons for this: • Ore or raw materials always coming from the same place often have the same composition over time. • Material submitted to unchanged processes, such as the same commiution stages, same flow rate, same fragment size distribution, same physical and chemical treatments, show similar properties over time. • The quest for uniform product quality and homogenization processes leads to stability over time. There are cases, however, where we cannot expect the variogram to remain stable, such as with materials coming suddenly from a different location, comminution stages that have been changed, or something within the process that has been modified. It is our recommendation to perform a 2-week variogram, every week, using existing data from every working shift, and keep an eye on the values of V(0), V(j=1), W(j=1) and W’(j=1) as a systematic quality control program. The variogram is the most powerful quality control tool you may imagine, and nobody uses it; perhaps it is because the information it provides is free! 7.14.2. Average Estimates versus Instantaneous Estimates Process control can be understood in two different ways: • We want an accurate and precise estimate of the overall average content aL of a certain constituent of interest to evaluate the market value of a shipment, the performance of an 8-hour working shift, or the consistency of the material balance of a process. In such cases it is important to minimize the continuous Heterogeneity Fluctuation Error HFE, and the variogram is the best tool at our disposal to calculate this error. The variogram is used to predict the contribution of HFE to the variance of the overall error affecting the estimate of aL . • We want an accurate and precise estimate of the content of a certain constituent of interest at a certain instant t to find out if something needs to be changed in the process. The interest of such an estimate is justified if several conditions are fulfilled: ○ The analysis of the sample can be performed immediately. ○ The short-term Heterogeneity Fluctuation Error HFE1 is known: s2 ðHFE1 Þ ¼

V ð0Þ Q

ð7:83Þ

where Q is the number of increments collected at very short intervals around the instant t. ○ The precision of the Analytical Error AE is known. ○ The range jR of the variogram, if any, is known. Sometimes a variogram may have several ranges.

Heterogeneity of a One-Dimensional Lot

151

Why do these conditions need to be fulfilled? ○ If the results are available long after the facts, why should a sample be taken in the first place? ○ If s2 ðHFE1 Þ is larger than the long-range fluctuation we want to correct, we may disturb the process when the best thing to do is to leave it alone. ○ If samples are collected at intervals beyond the range jR, many process disturbances may go undetected. Even in these cases we still see the advantage to proceed with a variographic experiment, which would tell whether the detected quality fluctuations are of concern.

7.15. Practical Applications of the Auxiliary Functions of the Variogram In this chapter we spent a lot of time to study the auxiliary functions of the variogram, then considerably simplified such study, because the applications are many, such as: • Extrapolating the variogram to V(0) with greater precision and accuracy • Smoothing out the residual random variability V4(j) • Smoothing out the cyclic variability • Calculating the variance s2 ðHFEÞ of the total Heterogeneity Fluctuation Error and its components, so essential to perform reliable metallurgical accounting • Controlling a process better by emphasizing the relevant variability • Predicting the performance of a bed-blending operation, so critical for the reliable functioning of a process.

This page intentionally left blank

Part IV

Sampling Errors Introduced by Various Forms of Heterogeneity

This page intentionally left blank

8 Sampling of One-Dimensional Lots The Continuous Model

This chapter should seal the important information we presented in Chapter 7 and place it in its proper context, and also expand on the various components of the Overall Estimation Error OEE already introduced in Chapter 4. The continuous model of a one-dimensional lot has very important industrial applications. The model applies to material moving on conveyor belts, pipes, ducts, streams, rivers, olive oil generated in a plant after the crushers, glass bottle thickness in a beer bottle factory, consistency of fragment size distribution in bags filled with cat litter material, and so on. It also applies to stationary, elongated piles, whose cross-section is relatively constant. It is an extremely versatile approach with only our imagination for limits. To this model of the lot corresponds a sampling model consisting of a given number of increments. Each of these increments are selected at random systematic, stratified random, or strict random intervals. The continuous model of sampling studied in this chapter is applicable to all objects that can be transformed into a one-dimensional model, regardless of their physical state, origin, or composition. The problem is approached exactly in the same manner for all the various components of interest, but it should be clearly understood that each of these components carries its own heterogeneity; therefore, each component needs its own solutions, then the most challenging component may ultimately rule the selected sampling protocol.

8.1. Theoretical Use of the One-Dimensional Model When we apply the model to punctual functions, the calculated error only involves the selection of the punctual increments, assuming that this selection is correct. The resulting error is called the continuous Heterogeneity Fluctuation Error HFE. When we apply the model to extended functions, the calculated Sampling Selection Error SSE involves the Heterogeneity Fluctuation Error and the Increment Delimitation Error IDE, which is an important error studied in Part VII. SSE ¼ HFE þ IDE

ð8:1Þ

When we apply the model to discrete functions, the calculated Sampling Selection Error (SSE) involves HFE, IDE, and a new error called the Increment Extraction Error (IEE), which is also studied in detail in Part VII; then we can write:

155

156

Theory of Sampling and Sampling Practice

SSE ¼ HFE þ IDE þ IEE

ð8:2Þ

Furthermore, when we apply the model to real functions represented by the real unknown critical content of the selected, extracted, and prepared increments the calculated Total Sampling Error TSE involves the sequence shown in equation (8.2) and an additional error called the Increment Preparation Error IPE, which is also studied in detail in Part VII; then we can write: TSE ¼ SSE þ IPE ¼ HFE þ IDE þ IEE þ IPE

ð8:3Þ

Remark: In due time, in Part VII, we will see that it is entirely up to us to make IDE + IEE + IPE completely negligible (i.e., almost zero) by respecting the rules of sampling correctness unconditionally, which is rarely done in practice in many industries.

8.2. The Analytical Error The Analytical Error AE comes in addition to all errors and we do not know how to eliminate it completely. The Analytical Error is beyond the scope of this book (see Ingamells and Pitard22); usually, it is much easier to detect, correct, and minimize AE than sampling errors. Important remark: The delimitation and extraction of the aliquot for assaying (e.g., 0.5-g subsample for a copper assay, a 30-g subsample for fire assay, etc.) is not part of the Analytical Error; it is indeed part of a sampling selection process. It is vital not to make that confusion in daily laboratory practice. We can say that the Overall Estimation Error OEE is equal to the sum of the Total Sampling Error TSE and the Analytical Error AE: OEE ¼ TSE þ AE

ð8:4Þ

If the conditions of sampling correctness are rigorously respected, which is a very optimistic assumption because of common malpractice in many industries, we can write the following equation: OEE ¼ HFE þ AE

ð8:5Þ

8.3. The One-Dimensional Model: Definitions and Basic Notations

8.3.1. Definition of the Real Lot L

Let us define:
L: a one-dimensional lot flowing from time t = 0 to time t = T_L
T_L: the total flowing time
H: a section across the stream corresponding to the increment collected by a correctly designed and maintained cross-stream sampler. In most figures shown in this book this section is illustrated by a surface across the stream; it could be called a sampling window.
M(t): the flow rate of the active components across H at the time t
M_L: the total weight of active components in the lot L

Then, by definition, we can write:

M_L = \int_0^{T_L} M(t)\,dt    (8.6)

\bar{M}(t): the average flow rate of the lot L. We have by definition:

\bar{M}(t) = \frac{M_L}{T_L}    (8.7)

a(t): the critical content, which is the proportion of the component of interest, of the slice of material flowing through H at the time t.
a_L: the average true critical content of the lot L. It is this value that the sequence sampling + analysis is estimating. We have:

a_L = \frac{\int_0^{T_L} a(t)\,M(t)\,dt}{\int_0^{T_L} M(t)\,dt}    (8.8)

h(t): the amount of heterogeneity carried by the material flowing through H between the times t and t + dt. By analogy with equation (6.15) we can write:

h(t) = \frac{[a(t) - a_L]\,M(t)\,dt}{a_L\,\bar{M}(t)\,dt} = \frac{[a(t) - a_L]}{a_L} \cdot \frac{M(t)}{\bar{M}(t)}    (8.9)

h_L: the amount of heterogeneity carried by the lot L. By definition, it should be nil:

h_L = \int_0^{T_L} h(t)\,dt = 0    (8.10)
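These definitions translate directly into computation once the stream is discretized. The following Python sketch is a minimal illustration, not from the book: the arrays `t`, `flow`, and `content` and all numerical values are hypothetical assumptions, and the integrals (8.6) through (8.10) are approximated by finite sums.

```python
import numpy as np

# Hypothetical one-dimensional lot: 24 h of flow, discretized in steps of dt.
dt = 0.1                                    # h
t = np.arange(0.0, 24.0, dt)                # T_L = 24 h
rng = np.random.default_rng(0)
flow = 100.0 + 10.0 * np.sin(2 * np.pi * t / 8.0)       # M(t), t/h
content = 1.0 + 0.05 * rng.standard_normal(t.size)      # a(t), e.g. % Cu

M_L = np.sum(flow * dt)                     # equation (8.6)
M_bar = M_L / (t.size * dt)                 # equation (8.7): M_L / T_L
a_L = np.sum(content * flow * dt) / M_L     # equation (8.8)
h = (content - a_L) / a_L * flow / M_bar    # equation (8.9)

print(f"a_L = {a_L:.4f}")
print(f"sum of h(t) dt = {np.sum(h * dt):.3e}")  # ~ 0, equation (8.10)
```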

8.3.2. Definition of an Imaginary Lot L'

Using the approach described in Section 6.5, we can define an imaginary lot L' directly derived from L. The lot L' has the following imposed properties:

T_{L'} = T_L    (8.11)

M_{L'} = M_L    (8.12)

a_{L'} = a_L    (8.13)

M'(t) = \bar{M}(t) = \text{constant}    (8.14)

h'(t) = h(t)    (8.15)

From equation (6.79) we can easily calculate a'(t):

a'(t) = [1 + h'(t)]\,a_{L'} = [1 + h(t)]\,a_L    (8.16)

with

a_{L'} = \frac{\int_0^{T_{L'}} a'(t)\,\bar{M}(t)\,dt}{\int_0^{T_{L'}} \bar{M}(t)\,dt} = \frac{1}{T_L} \int_0^{T_{L'}} a'(t)\,dt = a_L    (8.17)

Therefore, the lot L', characterized by a constant flow rate M'(t) = \bar{M}(t), carries a heterogeneity similar to that of the lot L, characterized by a potentially fluctuating flow rate M(t). Conclusion: it is strictly equivalent to sample the real lot L with its functions a(t) and M(t), or the imaginary lot L' with its functions a'(t) and M'(t) = constant. Obviously, for all practical purposes, we choose the second alternative. Thus, we can directly analyze the fluctuations of a'(t), or the equivalent fluctuations of h(t), which are related to those of a'(t) by equation (8.16).

8.4. Characterization of the Heterogeneity of a One-Dimensional Lot

In this book we separate the characterization of the heterogeneity of a lot to be sampled from the sampling selection performed on the same lot; the former has a huge influence on the magnitude of the errors generated by the latter. Nevertheless, we found it logical and advantageous not to mix the two concepts. Therefore, in this chapter, we want to find ways to calculate the Sampling Selection Error generated by the collection of Q random systematic, stratified random, or strict random increments, whose reunion is a sample. These Q increments are selected within a one-dimensional lot whose heterogeneity is characterized by:
• A variogram V(j) of the heterogeneity h(t), and its auxiliary functions, if we look at the long-term properties of the material
• A constant factor of heterogeneity, or intrinsic heterogeneity IH_L, if we look at the short-term properties of the material (i.e., the time necessary for the material to travel through a sampling window H).

Taking into account equation (8.16) we can write:

h(t) = \frac{a'(t) - a_L}{a_L}    (8.18)

The absolute fluctuations of h(t) are nothing other than the relative fluctuations of a'(t). Conclusion: the absolute variogram of h(t) is identical to the relative variogram of a'(t). Therefore, the quantitative and qualitative fluctuations of the material making up a one-dimensional lot can be entirely characterized by the absolute variogram of h(t).


8.5. Characterization of the Increment Sampling Process

The increment sampling process consists of representing the lot L by a set of increments (punctual in a first stage) located along the time axis, selected either at random systematic, at stratified random, or at strict random. Let us define:
Q: the number of increments selected within the lot L
I_q: the increment q = 1, 2, ..., Q
t_q: the time when I_q has been extracted
dt: the amount of time occupied by the sampling window necessary to collect I_q
a(t_q): the critical content at the time t_q
M(t_q): the flow rate at the time t_q. By definition, we have:

M_q = M(t_q)\,dt    (8.19)

\bar{M}_q: the weight of the average increment. By definition, we have:

\bar{M}_q = \bar{M}(t)\,dt = \text{constant}    (8.20)

h_q: the amount of heterogeneity carried by the increment I_q. By definition, we have:

h_q = \frac{a(t_q) - a_L}{a_L} \cdot \frac{M(t_q)}{\bar{M}(t)}    (8.21)

8.6. Characterization of the Sample

The sample S is obtained by the reunion of the Q increments I_q:

S = \sum_q I_q \quad \text{with} \quad q = 1, 2, \ldots, Q    (8.22)

Let us define a_S as the critical content of the punctual sample S. By definition, we have:

a_S = \frac{\sum_q a(t_q)\,M(t_q)\,dt}{\sum_q M(t_q)\,dt}    (8.23)

This relation can be written as follows:

a_S = \frac{\sum_q a'(t_q)\,\bar{M}(t_q)\,dt}{\sum_q \bar{M}(t_q)\,dt} = \frac{1}{Q} \sum_q a'(t_q)    (8.24)

We know, however, that a'(t_q) = (1 + h_q) a_L; then:

a_S = \left(1 + \frac{1}{Q} \sum_q h_q\right) a_L    (8.25)

8.7. The Continuous Heterogeneity Fluctuation Error HFE

The variance of the continuous Heterogeneity Fluctuation Error is the equivalent of the variance used in geostatistics.178 This variance can be calculated for the two most important cases encountered in sampling practice: the random systematic selection and the stratified random selection. Let us define:
HFE: the relative continuous Heterogeneity Fluctuation Error, resulting from representing the lot L by the sample S. By definition, we have:

HFE = \frac{a_S - a_L}{a_L} = \frac{1}{Q} \sum_q h_q    (8.26)

This equality is interesting because it expresses the continuous Heterogeneity Fluctuation Error as the arithmetic mean of all the amounts of heterogeneity carried by the increments.
(HFE)_sy: the continuous Heterogeneity Fluctuation Error generated by a random systematic selection (i.e., constant intervals but an initial starting point selected at random, which is a required condition to obtain a valid sampling protocol). Any sampling protocol that does not involve some randomness is, by definition, an invalid protocol that no reliable statistician can defend.
(HFE)_st: the continuous Heterogeneity Fluctuation Error generated by a stratified random selection (i.e., constant strata extent but random selection within each stratum).
These two selection modes are both probabilistic. The auto-correlation between increments and periodic fluctuations may bring nonrandom disturbances that are studied later in this chapter; nevertheless, we admit that if Q is large enough (e.g., Q ≥ 30) the continuous Heterogeneity Fluctuation Error HFE follows, more or less, a normal distribution. From equality 8.26 we deduce the mean m(HFE) of the continuous Heterogeneity Fluctuation Error:

m(HFE) = m\left(\frac{1}{Q} \sum_q h_q\right) = \frac{1}{Q} \sum_q m(h_q)    (8.27)

If all the increments are selected correctly (i.e., with a constant probability), we conclude that m(HFE) is identical to the mean of h(t), which is zero:

m(HFE) = \frac{1}{T_L} \int_0^{T_L} h(t)\,dt = 0    (8.28)


If we consider individual fragments as in a zero-dimensional lot, equation (8.28) is not strictly true.

8.8. Variance of the Continuous Heterogeneity Fluctuation Error HFE

Following the conclusion reached in Section 8.4, and because the continuous Heterogeneity Fluctuation Error HFE is defined as a relative error, the absolute variance of HFE can be expressed by the relative variogram of a'(t), the absolute variogram of h(t) and its auxiliary functions, and the error generator functions introduced in Section 7.11.1.

8.8.1. Variance of HFE for Random Systematic Sampling

A random systematic sampling scheme is entirely defined when we select the constant interval T_sy between the chronological increments. The number Q of increments I_q covering the space or time between 0 and T_L is equal to the ratio T_L/T_sy:

Q = \frac{T_L}{T_{sy}}    (8.29)

Matheron demonstrated that, by extending the value a(t_q) of each punctual increment to its respective zone of influence T_sy, the error generated can be written as follows:

s^2(HFE)_{sy} = \frac{T_{sy}}{T_L} \left[ 2W\!\left(\frac{T_{sy}}{2}\right) - W'(T_{sy}) \right] = \frac{E(T_{sy})_{sy}}{Q}    (8.30)

We stated, however, that a probabilistic sample should have its first increment collected at random, and not in the center of its zone of influence as written in equation (8.30); this random starting point is never farther from the center of the interval than T_sy/2. Thus, everything behaves as if the entire lot L within T_L were defined with a ±T_sy/2 fluctuation. This fluctuation becomes negligible when T_sy is very small compared to T_L (i.e., Q ≥ 30).

8.8.2. Variance of HFE for Stratified Random Sampling

A stratified random sampling scheme is entirely defined when we select the extent T_st of the constant strata within which the chronological increments are selected at random. Matheron demonstrated that, by extending the value a(t_q) of each punctual increment to its zone of influence T_st, the error generated can be written as follows:

s^2(HFE)_{st} = \frac{T_{st}}{T_L} W'(T_{st}) = \frac{E(T_{st})_{st}}{Q}    (8.31)

8.8.3. Variance of HFE for Strict Random Sampling

In practice we never use this scheme; however, it is interesting to remember that the variance of the error generated by a systematic scheme applied to a population of increments whose chronology has been lost is identical to the variance of the continuous Heterogeneity Fluctuation Error HFE generated by a random sampling scheme. Therefore, we may write:

s^2(HFE)_{ra} = \frac{1}{Q} E\!\left(\frac{T_L}{Q}\right)_{ra} = \frac{1}{Q} s^2(h_m)    (8.32)
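The three selection modes can also be compared empirically. The following sketch is an illustration rather than the author's procedure: it builds a synthetic heterogeneity record h(t) (all names and values are assumptions), repeatedly selects Q punctual increments under each mode, computes HFE as the mean of the selected h_q values per equation (8.26), and estimates s²(HFE) for each mode.

```python
import numpy as np

rng = np.random.default_rng(1)
n, Q = 2400, 30                    # discretized instants and increments
stratum = n // Q
h = 0.05 * np.sin(2 * np.pi * np.arange(n) / 400) \
    + 0.02 * rng.standard_normal(n)          # toy heterogeneity h(t)

def hfe(mode):
    if mode == "sy":               # random systematic: random start, fixed step
        start = rng.integers(stratum)
        idx = start + stratum * np.arange(Q)
    elif mode == "st":             # stratified random: one point per stratum
        idx = stratum * np.arange(Q) + rng.integers(stratum, size=Q)
    else:                          # strict random
        idx = rng.choice(n, size=Q, replace=False)
    return h[idx].mean()           # equation (8.26)

for mode in ("sy", "st", "ra"):
    errors = np.array([hfe(mode) for _ in range(5000)])
    print(mode, f"s2(HFE) = {errors.var():.2e}")
```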

8.9. Components of the Continuous Heterogeneity Fluctuation Error HFE

We saw that the fluctuations of h(t) essentially follow those of a(t). Then let us consider:
• The real flow with a critical content a(t) and a flow rate M(t)
• The imaginary flow with a critical content a'(t) and a flow rate M'(t):

M'(t) = \bar{M}(t) = \frac{M_L}{T_L} = \text{constant}    (8.33)

a'(t) = [1 + h(t)]\,a_L    (8.34)

The flow rate M'(t) being a constant, the error HFE applies to the critical content a'(t). The relative variogram of a'(t) is identical to the absolute variogram of h(t), and not much different from the variogram of a(t). We can divide a'(t), and the amount of heterogeneity h(t) it carries, into three parts, as already introduced in Section 7.2.1:

h(t) = h_1(t) + h_2(t) + h_3(t)    (8.35)

where h_1(t) is the short-term discontinuous component of h(t), h_2(t) is the long-term continuous component, and h_3(t) is the periodic component. In the same way, in Section 7.9.1, we showed that the variogram V(j) of the heterogeneity h(t) can be decomposed into a sum of four terms:

V(j) = V_1(j) + V_2(j) + V_3(j) + V_4(j)    (8.36)

where V_1(j) is the variogram of h_1(t) and represents the discontinuous term of V(j); it is the constant V_0. V_2(j) is the variogram of h_2(t) and represents the continuous term of V(j). V_3(j) is the variogram of h_3(t) and represents the periodic term of V(j). V_4(j) is the residual component of V(j), whose expected value is nil. All the components of the heterogeneity h(t) and all those of the variogram V(j) contribute to the total continuous Heterogeneity Fluctuation Error HFE, and we can write:

HFE = HFE_1 + HFE_2 + HFE_3 + HFE_4    (8.37)


The additivity of errors whose probabilities are independent leads to the additivity of their average and variance as well.

8.9.1. Components of the Average m(HFE)

By definition, we have:

m(HFE) = m(HFE_1) + m(HFE_2) + m(HFE_3)    (8.38)

There is no possible modeling of the term HFE_4; however, we saw that it quickly becomes negligible if N_u − j ≥ 25. If this condition is not fulfilled, HFE_4 can bring some unwelcome background noise detrimental to a good modeling of the variogram V(j). The average m(HFE) is nil, except in the case of zero-dimensional objects where we consider the heterogeneity between individual fragments; in this case m(HFE) approaches zero but is, in most cases, practically negligible.

8.9.2. Components of the Variance s²(HFE)

As soon as we defined the auxiliary functions W(j), W'(j), W''(j), and the error generators E(j)_sy and E(j)_st, we analyzed the components of the variogram V(j) of the heterogeneity h(t). We found that for each component of h(t) and of V(j) there is a homologous component of HFE. Taking into account the definition of the error generators, we can conclude that each component of the variogram generates a homologous component of the variance of HFE, and we can write:

s^2(HFE) = s^2(HFE_1) + s^2(HFE_2) + s^2(HFE_3)    (8.39)

In the next sections we study each of these components. The point-by-point interpretation of the variogram V(j) suggested at the end of Chapter 7 would have to be used each time we attempt to quickly estimate the total continuous Heterogeneity Fluctuation Error HFE. However, if we attempt to find out how each component HFE1, HFE2, and HFE3 is behaving, we would have to proceed to a logical analysis of these components, which we do in the following sections.

8.10. The Discontinuous Random Term HFE_1

8.10.1. Properties of the Average m(HFE_1)

If the sampling is performed correctly we should have:

m(HFE_1) = 0    (8.40)

Therefore, under this stringent condition only, the sampling of the component h_1(t) of h(t) is accurate, and this property is independent of the mode of selection of the increments (i.e., random systematic, stratified random, or strict random).


8.10.2. Properties of the Variance s²(HFE_1)

With respect to the definition of the error generators we can write:

s^2(HFE_1)_{SY} = s^2(HFE_1)_{ST} = s^2(HFE_1)_{RA} = \frac{E_1(j)}{Q} = \frac{V_0}{Q}    (8.41)

a relation in which E_1(j) represents any of the following:

E_1(j) = E_1(j)_{SY} = E_1(j)_{ST} = E_1(j)_{RA} = V_0 = W_0    (8.42)

The variance of interest s²(HFE_1) is totally independent of the mode of selection. It depends on two factors only:
• V_0, which is the value of the variogram V(j) when j = 0. It is a function of the state of the particular solid, the resulting Constitution Heterogeneity, and the size of the increments.
• Q, which is the number of increments in the sample.

For random systematic sampling we have:

Q = \frac{T_L}{T_{SY}}    (8.43)

For stratified random sampling we have:

Q = \frac{T_L}{T_{ST}}    (8.44)

For strict random sampling, the value of Q is an arbitrary choice. As we already know, we do not have direct access to the exact value of V_0. What we know is either found by extrapolation of V(j), or by an independent experiment also followed by a smaller scale extrapolation. Therefore, let us consider:
(V_0)_{EXP}, which is the experimental estimation of V_0.
V_A, which is the relative variance of the estimation of h(t). This variance includes the subsampling errors, the Increment Preparation Errors, the Increment Delimitation and Extraction Errors that are assumed to be nil if the rules of sampling correctness are respected, and the Analytical Error. We can write:

(V_0)_{EXP} = V_0 + V_A    (8.45)

From here, we may be dealing with two different cases:
• The variance V_A is negligible when compared to (V_0)_{EXP}. This is the case when we sample coarse materials by taking small increments; then we can directly work with the experimental estimation (V_0)_{EXP}:

V_0 \approx (V_0)_{EXP}    (8.46)

• The variance V_A is not negligible when compared to (V_0)_{EXP}. It is important to estimate V_A if possible, and to subtract it from (V_0)_{EXP}:

V_0 = (V_0)_{EXP} - V_A    (8.47)
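As a hedged numerical illustration of equations (8.45) through (8.47), with all values invented for the example: if a variographic experiment extrapolates to (V_0)_EXP and analytical duplicates give V_A, the corrected nugget and the resulting variance V_0/Q of HFE_1 follow directly.

```python
# Illustrative numbers only, not from the book.
V0_exp = 0.012      # extrapolated intercept of the experimental variogram
V_A = 0.004         # relative variance of the measurement of h(t)
Q = 30              # number of increments in the sample

V0 = V0_exp - V_A                   # equation (8.47)
s2_HFE1 = V0 / Q                    # equation (8.41)
print(f"V0 = {V0:.4f}, s2(HFE1) = {s2_HFE1:.2e}, "
      f"relative std = {s2_HFE1 ** 0.5:.3%}")
```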

8.10.3. Mathematical Cancellation of the Variance s²(HFE_1)

The variance s²(HFE_1) is nil when one of the two following conditions is fulfilled:
• The condition concerning the material making up the lot: V_0 = 0.
• The condition concerning the sampling operation: Q is infinite.
We know that (V_0)_{EXP}, V_A, and V_0 are always positive and never nil; even in the case of liquids, (V_0)_{EXP} and V_0 can be very small or negligible, but not nil. We also know that if Q is infinite, the weight of the sample becomes the weight of the entire lot; there is then no more sampling problem, nor any sampling at all. For all practical purposes, Q is a finite number. Conclusion: because of the particulate nature of solids, it is practically impossible to cancel the variance s²(HFE_1).

8.10.4. Minimization of the Variance s²(HFE_1)

Since we cannot cancel s²(HFE_1), we can at least try to minimize it by reducing V_0 or by increasing the number of increments Q.

8.10.4.1. Minimization of V_0

V_0 is a function of:
• the particulate state of the material: the smaller the fragments, the smaller the value of V_0. In general, we have no control over the size of the fragments at the primary sampling stage.
• the Constitution Heterogeneity of the material: for a particular state of comminution this heterogeneity depends on four factors over which we have no control:
○ a mineralogical factor c,
○ a liberation factor ℓ,
○ a fragment shape factor f, and
○ a particle or fragment size distribution factor g.
These four factors are defined and studied in detail in Chapter 10.
• the size of the increments: the larger the increments, the smaller the value of V_0. This is a point over which we have control; therefore, to minimize V_0 we will collect increments as heavy as practically and economically possible.


• the distribution of the fragments: the amount of segregation that can be observed at the scale of the increments is a factor. Theoretically, the segregation can be reduced by homogenization; however, it is almost always impractical and not economical; therefore, it is a problem over which we have no control.

8.10.4.2. Increasing the Number of Increments Q

We can minimize the variance s²(HFE_1) by collecting as many increments as practically and economically possible. This reduces the grouping factor that was introduced in Section 6.4.12. Conclusion: we should collect as many large increments as practically and economically possible to reduce s²(HFE_1). Thus, we always make a compromise between a good but expensive precision and a bad but cheap precision. It is also preferable to collect many small increments instead of a few large ones, provided that the rules of increment delimitation and extraction correctness are respected; this subject is described in detail in Part VII of this book. If these sampling correctness rules are not fulfilled unconditionally, we take the risk of performing inaccurate sampling, which is much worse than performing imprecise sampling. How does all of this apply to the sampling of liquids? This is a common question. For liquids, V_0 is often negligible, and what matters most is to perform a correct delimitation and extraction of the increments.

8.10.5. The Two Components of the Error HFE_1

It is at the level of the term h_1(t) and of the error HFE_1 that the two complementary sampling models (i.e., the continuous and discrete models) can be linked to one another. The logical analysis of the components of the error HFE_1, with respect to the notions expressed in Chapter 6, is developed in Chapter 9. We see that the error HFE_1, strongly related to the discontinuous nature of the material, can be dissociated into the sum of two components, assuming the Increment Weighting Error IWE is negligible:
• one connected to the Constitution Heterogeneity (i.e., the Fundamental Sampling Error FSE), and
• one connected to the Distribution Heterogeneity (i.e., the Grouping and Segregation Error GSE).
For ongoing clarification, let us recall:

HFE_1 = QFE_1 + IWE_1    (8.48)

Assuming IWE_1 is negligible, as is often the case when sampling is correct:

HFE_1 \approx QFE_1 = FSE + GSE    (8.49)


8.11. The Continuous Nonrandom Term HFE_2

As we saw in Chapter 7, there is a direct chronological and mathematical relationship between the continuous heterogeneity h_2(t) and the continuous Heterogeneity Fluctuation Error HFE_2. This chronology involves the following terms:
• The term h_2(t) of the heterogeneity h(t) carried by the lot L.
• The term V_2(j) of the variogram V(j). It is the variogram of h_2(t). Frequently it takes one of the following shapes:
○ Parabolic:

V_2(j) = V_1 j + V_2 j^2    (8.50)

○ Rectilinear:

V_2(j) = V_1 j \quad \text{when} \quad V_2 = 0    (8.51)

○ Flat:

V_2(j) = 0 \quad \text{when} \quad V_1 = V_2 = 0    (8.52)

• The term W_2(j) of the auxiliary function W(j), which often takes one of the following shapes:
○ Parabolic:

W_2(j) = \frac{V_1}{2} j + \frac{V_2}{3} j^2    (8.53)

○ Rectilinear:

W_2(j) = \frac{V_1}{2} j \quad \text{when} \quad V_2 = 0    (8.54)

○ Flat:

W_2(j) = 0 \quad \text{when} \quad V_1 = V_2 = 0    (8.55)

• The term W_2'(j) of the auxiliary function W'(j), which often takes one of the following shapes:
○ Parabolic:

W_2'(j) = \frac{V_1}{3} j + \frac{V_2}{6} j^2    (8.56)

○ Rectilinear:

W_2'(j) = \frac{V_1}{3} j \quad \text{when} \quad V_2 = 0    (8.57)

○ Flat:

W_2'(j) = 0 \quad \text{when} \quad V_1 = V_2 = 0    (8.58)

• The term E_2(j)_{SY} of the error generator E(j)_{SY}, which often takes one of the following shapes (note that in the systematic error generator the V_2 term cancels, which is why the parabolic and rectilinear cases coincide):
○ Parabolic:

E_2(j)_{SY} = \frac{V_1}{6} j    (8.59)

○ Rectilinear:

E_2(j)_{SY} = \frac{V_1}{6} j    (8.60)

○ Flat:

E_2(j)_{SY} = 0 \quad \text{when} \quad V_1 = V_2 = 0    (8.61)

• The term E_2(j)_{ST} of the error generator E(j)_{ST}, which often takes one of the following shapes:
○ Parabolic:

E_2(j)_{ST} = \frac{V_1}{3} j + \frac{V_2}{6} j^2    (8.62)

○ Rectilinear:

E_2(j)_{ST} = \frac{V_1}{3} j \quad \text{when} \quad V_2 = 0    (8.63)

○ Flat:

E_2(j)_{ST} = 0 \quad \text{when} \quad V_1 = V_2 = 0    (8.64)

• The term E_2(j)_{RA} of the error generator E(j)_{RA}, which cannot be separated from the term E_3(j)_{RA}. If we assume that E_3(j)_{RA} = 0, we have for all shapes:

E_2(j)_{RA} = V_R = \text{constant}    (8.65)

• The term s²(HFE_2)_{SY} of the total variance s²(HFE)_{SY} of the continuous Heterogeneity Fluctuation Error for a constant sampling interval T_{SY}. For all shapes of the variogram we have:

s^2(HFE_2)_{SY} = E_2(T_{SY})_{SY} \frac{T_{SY}}{T_L}    (8.66)

• The term s²(HFE_2)_{ST} of the total variance s²(HFE)_{ST} of the continuous Heterogeneity Fluctuation Error for constant sampling strata T_{ST}. For all shapes of the variogram we have:

s^2(HFE_2)_{ST} = E_2(T_{ST})_{ST} \frac{T_{ST}}{T_L}    (8.67)

• The term s²(HFE_2)_{RA} of the total variance s²(HFE)_{RA} of the continuous Heterogeneity Fluctuation Error for Q increments selected at random. For all shapes of the variogram we have:

s^2(HFE_2)_{RA} = \frac{V_R}{Q}    (8.68)

In all these relations, V0, V1, V2, and VR are the variographic parameters already defined in Chapter 7, and TL is the total extent of the one-dimensional lot, either expressed as time or distance.

8.11.1. Properties of the Average m(HFE_2)

Under the condition that the sampling selection is performed correctly (i.e., IDE + IEE + IPE + IWE = 0), the average m(HFE_2) is nil; furthermore, the fact that m(HFE) is nil is a structural property as defined in Chapter 4. Consequently, provided that we have a correct selection, all the components of HFE also have an average that is nil. It readily follows that:

m(HFE_2) = 0    (8.69)

This property of m(HFE_2) is independent of the mode of selection: random systematic, stratified random, or strict random.


8.11.2. Properties of the Variance s²(HFE_2)

The variance s²(HFE_2) is a function of the mode of selection of the increments, as explained in the following sections.

8.11.2.1. Random Systematic with an Interval T_SY

From equations (8.66) and (8.59) through (8.61), we can write for the following shapes of the variogram:
• Parabolic:

s^2(HFE_2)_{SY} = \frac{V_1 T_{SY}^2}{6 T_L}    (8.70)

• Rectilinear:

s^2(HFE_2)_{SY} = \frac{V_1 T_{SY}^2}{6 T_L}    (8.71)

• Flat:

s^2(HFE_2)_{SY} = 0    (8.72)

8.11.2.2. Stratified Random with Strata T_ST

• Parabolic:

s^2(HFE_2)_{ST} = \frac{V_1 T_{ST}^2}{3 T_L} + \frac{V_2 T_{ST}^3}{6 T_L}    (8.73)

• Rectilinear:

s^2(HFE_2)_{ST} = \frac{V_1 T_{ST}^2}{3 T_L} = \frac{V_1 T_{ST}}{3 Q}    (8.74)

• Flat:

s^2(HFE_2)_{ST} = 0    (8.75)

8.11.2.3. Strict Random with Q Increments

From equation (8.68) we can write, for all shapes of the variogram:

s^2(HFE_2)_{RA} = \frac{V_R}{Q}    (8.76)

From equation (7.31) we can calculate V_R as follows:

V_R = V_{R0} - V_0    (8.77)

where V_{R0} is the sill of the variogram.

8.11.3. Cancellation of the Variance s²(HFE_2)

8.11.3.1. Mathematical Cancellation

For random systematic and stratified random sampling, s²(HFE_2) can be zero under the following conditions:
• Conditions from the material making up the lot:

V_1 = V_2 = 0    (8.78)

• Conditions from the sampling operation:

T_{SY} \text{ or } T_{ST} = 0    (8.79)

For strict random sampling, s²(HFE_2) can cancel under the following conditions:
• Conditions from the material making up the lot:

V_R = 0, \text{ which implies that } V_{R0} = V_0    (8.80)

• Conditions from the sampling operation:

Q = +\infty    (8.81)

Now, we may review these conditions in more detail.

8.11.3.2. Conditions Concerning the Material Making Up the Lot

We should first review the variographic parameters included in the variance of the continuous Heterogeneity Fluctuation Error:
V_0: the intercept of the variogram with the ordinate axis (i.e., j = 0). It is a function of the size of the fragments, the constitution heterogeneity, the segregation, and the size of the increments. V_0 is always positive.
V_1: the first-order derivative at the origin of the variogram (i.e., the slope of the variogram in its linear part). This value shows how fast the heterogeneity can vary: the higher the slope, the larger the sampling error.
V_2: the second-order derivative at the origin of the variogram. It can be:
• positive, showing an acceleration of the variations of the heterogeneity,


• negative, showing a deceleration of the variations of the heterogeneity,
• nil when the variogram is linear.
We should notice that V_2 affects s²(HFE_2)_{ST} but not s²(HFE_2)_{SY}, as shown in Sections 8.11.2.1 and 8.11.2.2.
V_R: the sill of the continuous component of the variogram, or the variance of h_2(t).
V_{R0}: the sill of the total variogram, or the variance of h(t).
Bed blending techniques have the property of eliminating the terms V_1, V_2, and therefore s²(HFE_2). Contrary to common belief, bed blending has no effect on the term V_0 and on s²(HFE_1); this widespread blunder is an indication of how little people know about the Theory of Sampling. Bed blending should be used to prepare coal piles for coal-burning plants, because the reclamation of such piles provides a constant feed to the power generators. The same remark applies to the feed of cement plants, where the chemical composition is critically important.

8.11.3.3. Conditions Concerning the Sampling Operation

The condition necessary to eliminate T_SY or T_ST is to collect an infinite number of increments; in such a case the sample becomes identical to the lot and there is no sampling anymore. Conclusion: there is no practical method to cancel the variance s²(HFE_2), except for valuable materials such as precious metals, or for materials feeding plants whose variations can be smoothed out by bed blending techniques.

8.11.4. Minimization of the Variance s²(HFE_2)

It is very difficult, or even uneconomical, to cancel the variance s²(HFE_2); however, it is of primary importance to find ways to minimize it. We already know that s²(HFE_2) is:
• for random systematic sampling, a rapidly increasing function of the constant interval T_SY,
• for stratified random sampling, a very rapidly increasing function of the length of the constant strata T_ST,
• for strict random sampling, a decreasing function of the number of increments Q covering the total domain T_L of the one-dimensional lot L.
Therefore, we can minimize s²(HFE_2) by reducing the interval T_SY or the strata T_ST between increments (i.e., by increasing the number of increments Q whose reunion makes up the sample). Conclusion: for a costly but very precise solution, we will take as many increments as possible; for a cheap but not very precise solution, we will take only a few increments, but never fewer than 25. It becomes a matter of compromise and judgment between the number of increments Q we can afford and the variance s²(HFE_2) that is generated.
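This compromise can be tabulated from equations (8.70) through (8.76). The sketch below assumes a rectilinear variogram with hypothetical parameters V_1 and V_R (invented for the example) and prints the variance of HFE_2 for the three selection modes as the number of increments grows.

```python
# Hypothetical variographic parameters (rectilinear variogram).
V1, V_R, T_L = 2.0e-4, 0.02, 24.0   # slope per hour, sill minus V0, hours

print(" Q    T_s    s2_sy      s2_st      s2_ra")
for Q in (5, 10, 25, 50, 100):
    T_s = T_L / Q
    s2_sy = V1 * T_s ** 2 / (6 * T_L)     # equation (8.71)
    s2_st = V1 * T_s ** 2 / (3 * T_L)     # equation (8.74)
    s2_ra = V_R / Q                       # equation (8.76)
    print(f"{Q:3d}  {T_s:5.2f}  {s2_sy:.2e}  {s2_st:.2e}  {s2_ra:.2e}")
```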


8.12. The Periodic Term HFE_3

The analysis of the term HFE_3 is performed by modeling the variogram V(j). We saw that for all practical purposes this modeling is cumbersome and not a must; in addition, it can be a very difficult task in the presence of complex periodic phenomena. We strongly recommend a direct interpretation of the variogram by calculating the continuous Heterogeneity Fluctuation Error HFE. This was performed in Section 7.12, with the point-by-point interpretation of the variogram. Nevertheless, the following information may be of interest to the reader.

8.12.1. Limitations of the Variographic Analysis of the Periodic Function

In practice, the stability of the mean and the variance of the heterogeneity h_3(t) is only approximate; furthermore, this limitation often applies already to simple sinusoidal models. As far as the term h_3(t) is concerned, neither its amplitude nor its frequency is strictly constant. Even with these restrictions, the conclusion seems clear: the safest strategy is to use a stratified random selection; this is especially true when sampling for material balance.

8.12.2. Frequency and Origin of Periodic Fluctuations

Among the thousands of variographic experiments performed by Gy and the author, the majority of which took place in the mineral industry, about half of the variograms obtained showed periodic fluctuations. These fluctuations can vary greatly in amplitude from one process to another, and their periods range from a few seconds to several hours, sometimes several days. There are several origins for these periodic fluctuations.

8.12.2.1. Periodic Fluctuations from the Ore Deposit

According to geostatisticians this case is rare; nevertheless, it has been observed as a series of alternating rich and poor layers in a uranium mine. The author saw periodic phenomena introduced by the miner going away from a certain area in the open pit and coming back just under the same place to mine the next bench; such a cycle may have a period of several weeks. Since periodic phenomena are very frequent, their origin is obviously not structural to the deposit.

8.12.2.2. Periodic Fluctuations Created by the Organization of Human Activities

Throughout industry, human activities are most often organized in a cyclic manner: by the 8- or 12-hour working shift, by the day, or by the week. Often, within a large plant, some activities are organized in three 8-hour shifts, while other activities need only two working shifts or even one. Everyone knows that quality and output differ from a day shift to a night shift for many reasons. These cycles have a long period (e.g., 8, 12, or 24 hours) and are usually too long to be detrimental to random systematic sampling, though they can be detrimental to process control, the economic consequences of which are usually devastating.


8.12.2.3. Periodic Fluctuations Introduced by the Control Room

Modern transformation processes are based on automatically controlled systems, either mechanical or electronic devices, whose regulation is centralized in a control room. For example, a certain parameter is allowed to change only between two preestablished limits. When one limit is reached, a correction is made, which unfortunately is often an overcorrection; this sends the parameter to the other limit, and vice versa. These corrections introduce cycles with regular amplitude and variable periods. The period becomes smaller as the space between the two limits becomes narrower. The length of these periods is generally on the order of a few minutes, precisely the intervals generally accepted to collect increments at regular intervals. Operators need careful training not to fall into the bad habit of overcorrecting a process, the economic consequences of which are usually devastating.

8.12.2.4. Periodic Fluctuations Introduced by Pieces of Equipment

Certain pieces of equipment currently used in mineral processing are built in such a way that they progressively accumulate certain size or density fractions of the material until a natural limit is reached. Then the process reverses and this accumulation decreases until a low point is reached, then the process reverses again. This can be observed with mechanical classifiers, hydrocyclones, centrifugal pumps, thickeners, and so on. Reclaimers traveling from one extremity of an elongated pile to the other often introduce a cycle whose period is equal to the travel time from one end to the other, on the order of 5 to 15 minutes, precisely the intervals generally accepted to collect increments at regular intervals.

8.12.2.5. Periodic Fluctuations That Are Random Ghosts

Sometimes a variogram may look cyclic to someone and random to someone else with more experience. If in doubt, the recommendation is to observe the same variogram over another period of time, then decide whether the structure of the variogram is consistent. You do not want to claim that a variogram is cyclic when it is not.

8.12.3. From the Heterogeneity h_3(t) to the Term HFE_3

There is a chronological and mathematical relationship between the following terms:
• The term h_3(t) of the total heterogeneity h(t) carried by the one-dimensional lot. The simplest model representing h_3(t) is the sinusoidal model:

h_3(t) = h_3 \sin \frac{2\pi t}{P}    (8.82)

where P is the period. In a real situation the term h_3(t) may be represented by a far more complex model.
• The term V_3(j) of the variogram V(j). It is the variogram of the heterogeneity h_3(t):

V_3(j) = V_3 \left(1 - \cos \frac{2\pi j}{P}\right)    (8.83)

with

V_3 = \frac{h_3^2}{2}    (8.84)

• The term W_3(j) of the auxiliary function W(j):

W_3(j) = V_3 \left(1 - \frac{P}{2\pi j} \sin \frac{2\pi j}{P}\right)    (8.85)

It is a smoothed sine curve whose value approaches V_3.
• The term W_3'(j) of the auxiliary function W'(j):

W_3'(j) = V_3 \left[1 - \frac{P^2}{2\pi^2 j^2} \left(1 - \cos \frac{2\pi j}{P}\right)\right]    (8.86)

It is a very smoothed sine curve whose value rapidly approaches V3.
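Equations (8.83), (8.85), and (8.86) are easy to evaluate numerically. The short sketch below is illustrative only, with an assumed amplitude h_3 and period P; it shows how V_3(j) oscillates while W_3(j) and W_3'(j) dampen toward the sill V_3.

```python
import numpy as np

h3, P = 0.10, 60.0                      # assumed amplitude and period (min)
V3 = h3 ** 2 / 2                        # equation (8.84)
j = np.array([15.0, 30.0, 60.0, 120.0, 600.0])

V3_j = V3 * (1 - np.cos(2 * np.pi * j / P))                         # (8.83)
W3_j = V3 * (1 - P / (2 * np.pi * j) * np.sin(2 * np.pi * j / P))   # (8.85)
W3p_j = V3 * (1 - P ** 2 / (2 * np.pi ** 2 * j ** 2)
              * (1 - np.cos(2 * np.pi * j / P)))                    # (8.86)
for row in zip(j, V3_j, W3_j, W3p_j):
    print("j=%6.1f  V3(j)=%.5f  W3(j)=%.5f  W3'(j)=%.5f" % row)
```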

8.12.4. Properties of the Moments of the Error HFE_3

The study of these properties can be lengthy, and although the benefits we can get from it are mainly academic, these properties should be briefly summarized. The reader will almost always find it advantageous to proceed with the far more pragmatic point-by-point interpretation of the variogram described in Chapter 7. Among the critical factors influencing the error HFE_3 are the following ratios: T_L/T_SY or T_L/T_ST, T_L/P, and T_SY/P or T_ST/P. Here, T_SY is a random systematic and constant interval, T_ST is a constant stratum, and P is the period of the cyclic phenomenon. Let us define:
• Q_Ls, the quotient of the division of T_L by T_s (T_SY or T_ST), leaving a remainder T_Ls. By definition we have:

T_L = Q_{Ls} T_s + T_{Ls} \quad \text{with} \quad 0 \le T_{Ls} < T_s    (8.87)

• Q_LP, the quotient of the division of T_L by P, leaving a remainder T_LP. By definition we have:

T_L = Q_{LP} P + T_{LP} \quad \text{with} \quad 0 \le T_{LP} < P    (8.88)

• Q_sP, the quotient of the division of T_s (T_SY or T_ST) by P, leaving a remainder T_sP. By definition we have:

T_s = Q_{sP} P + T_{sP} \quad \text{with} \quad 0 \le T_{sP} < P    (8.89)

8.12.4.1. General Case

The three remainders T_Ls, T_LP, and T_sP are different from zero in practice. Because T_Ls is not nil, the number of increments Q is a random variable within ±1 unit; therefore, the sampling operation is biased, and only a random selection could suppress this bias. We can rank the variances as follows:
• If T_Ls is close to T_s/2:

s^2_{SY} < s^2_{ST} < s^2_{RA}    (8.90)

• If T_Ls is close to zero or T_s:

s^2_{ST} < s^2_{RA} < s^2_{SY}    (8.91)

8.12.4.2. Particular Cases

• T_Ls = 0, T_LP > 0, T_sP > 0: the number of increments Q is not a random variable. The sampling process is accurate regardless of the sampling mode. The variances can be ranked in the same way as in relations 8.90 and 8.91.
• T_Ls = 0, T_LP = 0, but T_sP > 0: we easily demonstrate that:

m(HFE_3) = 0    (8.92)

s^2(HFE_3)_{SY} = 0    (8.93)

This solution is never practical because the values of T_s and P are never known with enough precision; we cannot guarantee that the remainders T_Ls and T_LP will be nil.
• T_sP = 0 while T_Ls and T_LP are different from zero. If the selection is random systematic, then T_SY is a multiple of the period P, and Q increments do not bring more information than a single increment. The variance s²(HFE_3)_{SY} is independent of Q but strongly depends on the position of the selection within the sine curve. The maximum risk is reached when this position corresponds to a maximum or a minimum of the sine curve, which can be expressed as follows:

s^2(HFE_3)_{SY} = V_3 = \frac{h_3^2}{2}    (8.94)

This error is not a systematic error, but a large random error that cannot be smoothed out by increasing the number of increments Q. If we proceed with a stratified random selection, the length of the strata T_ST is a multiple of the period P; for example, T_ST = P. In this case the variance s²(HFE_3)_{ST} can be expressed as follows:

s^2(HFE_3)_{ST} = \frac{V_3}{Q} = \frac{h_3^2}{2Q}    (8.95)

8.12.5. Practical Recommendations

The objective is to determine and suppress the maximum risk. There are two cases:

8.12.5.1. The Period P and the Amplitude V_3 Are Unknown

We did not perform a variographic experiment, but because of the nature of the process we have good reasons to believe that periodic fluctuations are present. The safe solution is to implement a stratified random sampling selection.

8.12.5.2. The Period P and the Amplitude V_3 Are Known

Two cases may take place:
• The maximum variance V_3 is acceptable. The standard deviation s(HFE_3)_{SY} = \sqrt{V_3} is considered reasonable; therefore, we can implement a random systematic selection and forget about the periodic term.
• The maximum variance V_3 is not acceptable. The only practical and wise solution is to implement a stratified random sampling selection.
Conclusion: if there is a chance that periodic fluctuations are present within a one-dimensional lot, the safest and wisest solution is to implement a stratified random sampling selection. This is especially true for metallurgical accounting and material balance in a plant.
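The danger described by equations (8.94) and (8.95) can be reproduced with a few lines of simulation. In this hedged sketch (all names and values assumed), the sampling interval equals the period P: the systematic error variance stays near V_3 no matter how many increments are taken, while stratified random selection reduces it by 1/Q.

```python
import numpy as np

rng = np.random.default_rng(2)
P, Q = 100, 20                      # period (in instants) and increments
n = P * Q                           # lot covers exactly Q periods
h3 = 0.1
h = h3 * np.sin(2 * np.pi * np.arange(n) / P)   # purely cyclic h3(t)

def error(mode):
    if mode == "sy":                # interval = P: every increment in phase
        start = rng.integers(P)
        idx = start + P * np.arange(Q)
    else:                           # stratified random, strata of length P
        idx = P * np.arange(Q) + rng.integers(P, size=Q)
    return h[idx].mean()

for mode in ("sy", "st"):
    e = np.array([error(mode) for _ in range(5000)])
    print(mode, f"s2(HFE3) = {e.var():.2e}")
print("V3 =", h3 ** 2 / 2, " V3/Q =", h3 ** 2 / (2 * Q))
```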


9. Sampling of Zero-Dimensional Lots: The Discrete Model

9.1. Introduction

In Chapters 7 and 8 we looked at lots of particulate materials using a continuous model. This is equivalent to examining these lots with a wide-angle lens, without paying attention to small details. Now we want to look at these details; we want to examine these lots with a magnifying lens. The continuous model and the discrete model are complementary models. Earlier in this book we defined a zero-dimensional lot as a set of discrete units; therefore, there is an equivalence between the notions of discrete model and zero-dimensional lot. The discrete model of sampling we introduce in this chapter is applicable to all lots that can be reduced to a zero-dimensional model. This applies to ores, vegetables, fruits, cereals, chemicals, wastes, and any of their constituents (e.g., minerals, solids in pulps or slurries, foreign materials in cereals, particle size distributions, moisture content, precious metals, etc.). By definition a zero-dimensional lot is a set of N_U nonchronological units U_m, with m = 1, 2, 3, ..., N_U. These units represent:
• either isolated particles or fragments F_i with i = 1, 2, 3, ..., N_F,
• or groups of particles or fragments G_n with n = 1, 2, 3, ..., N_G.

9.2. Notations

Let us define:
L is a lot of particulate material whose descriptors are the weight M_L, the critical content a_L (the content of the component of interest), and the total weight A_L of the critical component.
N_L is the number of units making up the lot L.
U_m is a given unit of the lot L, with m = 1, 2, 3, ..., N_L. The descriptors of this unit are M_m, a_m, and A_m.
P_m is the selection probability of the unit U_m. For the time being we shall assume that the probability P_m is defined and constant for the unit U_m.

Now, let us repeat the same selecting step many times under exactly the same conditions: select the units making up the sample, measure the critical content (content of interest) in the sample, assuming our procedure is nondestructive, return the sample to the lot L prior to the next selecting step, and so on. We define:
K is the number of these steps (it should be a very large number).
S_k is the sample obtained after one selecting step of rank k, with k = 1, 2, 3, ..., K. The descriptors of S_k are M_Sk, a_Sk, and A_Sk.
N_Sk is the number of units making up the sample S_k.
Z is the set made by the reunion of the K samples. The descriptors of Z are M_Z, a_Z, and A_Z.
N_Z is the number of units making up the set Z, which should be a very large number.
Π_m is the frequency with which we observe the unit U_m in the set Z. It is a random function of the number K.

9.3. Distribution of the Random Variables Π_m, N_Sk, a_Sk, and A_Sk

9.3.1. Recall of a Statistical Property

Let us consider a set L of N_L statistically independent random variables x_m, a parameter p_m associated with each x_m, and the function x:

x = \sum_m p_m x_m    (9.1)

Under these conditions it has been demonstrated that:

m(x) = \sum_m p_m m(x_m)    (9.2)

and

s^2(x) = \sum_m p_m^2 s^2(x_m)    (9.3)

9.3.2. Distribution of the Frequency Π_m of the Unit U_m in the Set Z

By definition of the selection process, each unit is either selected, with a probability P_m, or not selected, with a probability (1 − P_m). This is typical of a binomial distribution, characterized by:
• Average:

m(\Pi_m) = K P_m    (9.4)

• Variance:

s^2(\Pi_m) = K P_m (1 - P_m)    (9.5)

Sampling of Zero-Dimensional Lots

9.3.3. Number of Units NZ in the set Z In the present model each unit is submitted to the selection process with a given probability Pm of being selected; obviously the number of units in the sample is not a constant, it is a random variable. We can compute the moments of this random variable and two cases are considered: • By definition NZ is the number of units in the large set Z, which is obtained by the reunion of all the samples SK: NZ ¼

X

NSk with k ¼ 1; 2; 3; . . . ; K

ð9:6Þ

k

Then:

mðNZ Þ ¼ KmðNSk Þ

ð9:7Þ

s2 ðNZ Þ ¼ Ks2 ðNSk Þ

ð9:8Þ

• In the same way we may consider NZ as the sum of the NL frequencies units Um in the set Z: NZ ¼

XY m

m

with m ¼ 1; 2; 3; . . . ; NL

Q

of the

m

ð9:9Þ

Then: X Y X m Pm ¼K m

ð9:10Þ

X Y X s2 Pm ð1  Pm Þ ¼K m

ð9:11Þ

mðNZ Þ ¼

m

s2 ðNZ Þ ¼

m

m

m

9.3.4. Distribution of the N_Sk Units in the Sample S_k

If we compare relation 9.7 with relation 9.10, then relation 9.8 with relation 9.11, after dividing each side of the equalities by K we obtain:

m(N_{Sk}) = \sum_m P_m    (9.12)

s^2(N_{Sk}) = \sum_m P_m (1 - P_m)    (9.13)

These relations define m(N_Sk) and s²(N_Sk).

9.3.5. Weight M_Z of the Set Z

The weight M_Z can be expressed in two different ways:

• As the sum of the K different weights M_Sk taken by the samples S_k:

M_Z = \sum_k M_{Sk} \quad \text{with} \quad k = 1, 2, 3, \ldots, K    (9.14)

Then:

m(M_Z) = K \, m(M_{Sk})    (9.15)

s^2(M_Z) = K \, s^2(M_{Sk})    (9.16)

• As the sum of the weights of the N_Z units representing the set Z. Because the unit U_m shows up in Z with a frequency Π_m, we obtain:

M_Z = \sum_m M_m \Pi_m \quad \text{with} \quad m = 1, 2, 3, \ldots, N_L    (9.17)

In this relation M_m is a numerical parameter and Π_m is a random variable; therefore, we can apply the results of Section 9.3.1 and write:

m(M_Z) = \sum_m M_m m(\Pi_m) = K \sum_m M_m P_m    (9.18)

s^2(M_Z) = \sum_m M_m^2 s^2(\Pi_m) = K \sum_m M_m^2 P_m (1 - P_m)    (9.19)

9.3.6. Distribution of the Weight M_Sk of the Sample S_k

If we compare relation 9.15 with relation 9.18, then relation 9.16 with relation 9.19, after dividing each side of the equalities by K we obtain:

m(M_{Sk}) = \sum_m M_m P_m    (9.20)

s^2(M_{Sk}) = \sum_m M_m^2 P_m (1 - P_m)    (9.21)

9.3.7. Distribution of the Total Weight A_Sk of the Component of Interest in the Sample S_k

We should remember that A_m = a_m M_m. By applying the same reasoning as that applied to the weight M_Sk, we obtain:

m(A_{Sk}) = \sum_m A_m P_m = \sum_m a_m M_m P_m    (9.22)

s^2(A_{Sk}) = \sum_m A_m^2 P_m (1 - P_m) = \sum_m a_m^2 M_m^2 P_m (1 - P_m)    (9.23)
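Because each unit enters S_k independently with probability P_m, relations 9.12, 9.13, and 9.20 through 9.23 can be checked by direct simulation. This sketch is illustrative only; the fragment population is invented. It draws many Bernoulli samples and compares the empirical moments of M_Sk and A_Sk with the formulas.

```python
import numpy as np

rng = np.random.default_rng(3)
N_L = 500
M = rng.lognormal(mean=0.0, sigma=0.8, size=N_L)   # unit weights M_m
a = np.abs(rng.normal(1.0, 0.3, size=N_L))         # unit contents a_m
Pm = np.full(N_L, 0.1)                             # correct selection
A = a * M

K = 20000
sel = rng.random((K, N_L)) < Pm                    # K independent samples
M_Sk, A_Sk = sel @ M, sel @ A

print("m(M_Sk):  sim %.3f  theory %.3f" % (M_Sk.mean(), np.sum(M * Pm)))
print("s2(M_Sk): sim %.3f  theory %.3f"
      % (M_Sk.var(), np.sum(M**2 * Pm * (1 - Pm))))
print("m(A_Sk):  sim %.3f  theory %.3f" % (A_Sk.mean(), np.sum(A * Pm)))
print("s2(A_Sk): sim %.3f  theory %.3f"
      % (A_Sk.var(), np.sum(A**2 * Pm * (1 - Pm))))
```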

9.4. Properties of the Sample Selection Error SSE: The General Case

9.4.1. Definition of the Sample Selection Error SSE

The final purpose of our model is to study the distribution of the content of interest (i.e., critical content) a_Sk of the sample S_k. By definition, a_Sk is the quotient of the weight A_Sk of the component of interest in the sample divided by the weight M_Sk of the sample S_k:

a_{Sk} = \frac{A_{Sk}}{M_{Sk}}    (9.24)

The relative Sample Selection Error SSE is generated by the substitution of the sample S_k for the original lot L:

SSE = \frac{a_{Sk} - a_L}{a_L}    (9.25)

9.4.2. Relationship between the Properties of SSE and Those of a_Sk

The true unknown critical content a_L being a constant, the distribution law of the Sample Selection Error SSE is identical to that of the sample content a_Sk:

m(SSE) = \frac{m(a_{Sk} - a_L)}{a_L}    (9.26)

s^2(SSE) = \frac{s^2(a_{Sk})}{a_L^2}    (9.27)

Now we are going to study the properties of the distribution of a_Sk.

9.4.3. Distribution Law of a_Sk

9.4.3.1. Important Remarks

Relation 9.24 represents a_Sk as the ratio of two random variables A_Sk and M_Sk. Geary180 and Bastien179 studied the properties of the quotient of two random variables, and both reached the following conclusions:
• In the general case, the quotient does not follow a simple law and it is impossible to simply express its moments as functions of A_Sk and M_Sk.
• However, the distribution of a_Sk tends toward a normal law when two conditions are fulfilled:
a. Both the numerator and the denominator are distributed according to a normal law.
b. The relative standard deviation of the denominator is small compared to one.


• When these two conditions are fulfilled, the average of the quotient is practically equal to the quotient of the averages; however, the expression of the variance is usually not simple.
Let us determine how the first two conditions can be fulfilled.

9.4.3.2. Normality of the Numerator and the Denominator

The central limit theorem of Laplace and Liapounoff states that the distribution of the characteristics of a sample (e.g., A_Sk and M_Sk) selected from a given population of units tends toward a normal distribution when the number N_Sk of units making up the sample increases and becomes large enough. Now, in the sampling of particulate materials, the question is: what is a large enough number? To prevent lengthy discussions about this important issue we assume, at least for now, that the number N_Sk of units in a sample is large enough to have A_Sk and M_Sk normally distributed. It is important, however, to remember the two following restrictions to this optimistic assumption:
• The critical content a_Sk should not approach zero, as it does with poor precious metals ores, with trace constituents in high purity materials, in fauna and flora, or in wastes. This is almost equivalent to saying that the sampling process should not introduce a Poisson process, and this alone justifies Part V of this book, dedicated to the works of Visman and especially Ingamells.
• The sampling selection should obey the rules of sampling correctness unconditionally, because if it does not, the skewness of the distribution can be amplified as a result of the selective bias; at this stage, the sampling process would become useless, invalid, and highly misleading.
For the time being, let us assume that both conditions are fulfilled. Cases where they are not fulfilled are largely discussed in Parts V, VI, and VII of this book.

9.4.3.3. Value of the Relative Standard Deviation of the Denominator M_Sk

We can express the relative variance u²(M_Sk) of the weight of the sample using relations 9.20 and 9.21:

u^2(M_{Sk}) = \frac{s^2(M_{Sk})}{m^2(M_{Sk})} = \frac{\sum_m M_m^2 P_m (1 - P_m)}{\left(\sum_m M_m P_m\right)^2}    (9.28)

If we call u_0² the maximum acceptable relative variance, then the necessary condition on the value of u²(M_Sk) is:

u^2(M_{Sk}) \le u_0^2    (9.29)

The set of the weights M_m of the various units U_m in the lot L is the part of the problem directly related to the characteristics of the material to be sampled. As far as the selecting probabilities P_m are concerned, we can admit that they are the product of two factors:

P_m = P \, P'_m    (9.30)

where P is the weighted average of the selecting probabilities. It gives an idea of the order of magnitude of P_m and is defined as follows:

P = \frac{\sum_m M_m P_m}{\sum_m M_m} = \frac{\sum_m M_m P_m}{M_L}    (9.31)

We can change the value of P as we wish. P'_m is the variability, or modulation around one, of the selecting probability P we have chosen to select one increment. It varies for each particular sampler and, by definition, if the selection performed by the sampler is correct we should have:

P'_m = 1 \quad \text{regardless of } m    (9.32)

For an incorrect selection, it readily follows that:

P'_m = \frac{P_m}{P} \ne 1    (9.33)

By transposing the value of P_m given by relation 9.30 into relation 9.28, then solving the inequality 9.29 for P, we obtain:

P \ge \frac{\sum_m M_m^2 P'_m}{\sum_m M_m^2 P'^2_m + u_0^2 \left(\sum_m M_m P'_m\right)^2}    (9.34)

This relation is useful only when P'_m = 1, which means the selection must be correct; it then simplifies to the following form:

P \ge \frac{\sum_m M_m^2}{\sum_m M_m^2 + u_0^2 M_L^2}    (9.35)

In practice, it is demonstrated that the condition exposed in relation 9.35 is not, by far, as stringent as the conditions required for maintaining a precise sampling protocol. In other words, the necessary conditions for correctness are not as difficult to implement as the conditions for good precision; it is more or less an attitude problem from those who refuse to learn the Theory of Sampling.
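Relation 9.35 gives a direct recipe for the smallest correct selection probability that keeps the relative standard deviation of the sample weight below a target u_0. A hedged sketch, with an invented fragment population:

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)   # fragment weights M_m
M_L = M.sum()
u0 = 0.01                                             # target: 1% relative std

P_min = np.sum(M**2) / (np.sum(M**2) + u0**2 * M_L**2)   # relation (9.35)
print(f"minimum correct selection probability P = {P_min:.4f}")

# Verification with relation (9.28), taking Pm = P_min for all fragments:
u2 = np.sum(M**2 * P_min * (1 - P_min)) / np.sum(M * P_min) ** 2
print(f"resulting u(M_Sk) = {np.sqrt(u2):.4%}")   # ~ u0
```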

9.4.3.4. Conclusions Concerning the Distribution Law of a_Sk

The quotient a_Sk usually follows a normal law of distribution, at least approximately. There are exceptions for constituents that are present at very low levels, such as


precious metals, trace constituents in high purity materials, in chemicals, in pharmaceutical products, in food, and in wastes. The smaller the content a_Sk of these trace constituents, the greater the skewness of the distribution of a_Sk, and the more the sampling practitioner should be alarmed and refer to Parts V and VI of this book to minimize severe misunderstandings and devastating consequences. The greater the skewness, the smaller the proportion of values of a_Sk that fall inside the conventional standard deviation.

9.4.4. Moments of the Critical Content: The Theoretical Approach

Let us define:
N, the average of the population of numbers N_Sk of units in the sample S_k:

N = m(N_{Sk}) = \sum_m P_m    (9.36)

M, the average of the population of weights M_Sk of the sample S_k:

M = m(M_{Sk}) = \sum_m M_m P_m    (9.37)

A, the average of the population of weights A_Sk of the critical component in the sample S_k:

A = m(A_{Sk}) = \sum_m A_m P_m = \sum_m a_m M_m P_m    (9.38)

a, the following quotient:

a = \frac{A}{M} = \frac{m(A_{Sk})}{m(M_{Sk})}    (9.39)

Let us also define the following auxiliary variables:
x, the relative deviation of the weight M_Sk from its average M, which can be written:

M_{Sk} = (1 + x) M    (9.40)

y, the relative deviation of the weight A_Sk of the critical component in the sample S_k from its average A, which can be written:

A_{Sk} = (1 + y) A    (9.41)

z, the relative deviation of the critical content a_Sk from its average a, which can be written:

a_{Sk} = (1 + z) a    (9.42)

Therefore, we can write:


a_{Sk} = \frac{A_{Sk}}{M_{Sk}} = \frac{(1 + y)}{(1 + x)}\,a = (1 + z)\,a    (9.43)

By dividing both sides by a we obtain:

1 + z = \frac{1 + y}{1 + x}    (9.44)

As already mentioned, the difficulty lies in the fact that the denominator (1 + x) is a random variable. But among the conditions of normality of the distribution of a_Sk we find that x should be small in comparison with 1, and we saw that in practice this is often the case. This suggests that we can develop the quotient 1/(1 + x) as a series and transform relation 9.44 as follows:

\frac{1}{1 + x} = 1 - x + x^2 - x^3 + \ldots + (-1)^n x^n + x^n \varepsilon(x)    (9.45)

1 + z = (1 + y)(1 - x + x^2 - x^3 + x^4 - \ldots)    (9.46)

z = (y - x) + (x^2 - xy) - (x^3 - x^2 y) + \ldots    (9.47)

The development of 1/(1 + x) is convergent when the absolute value of x is smaller than 1:

|x| < 1    (9.48)

which is obviously true because we already showed that x is small in comparison with 1. Likewise, y is smaller than 1:

|y| < 1    (9.49)

The conclusion is obvious: in relation 9.47 the term (x² − xy) is much smaller than the term (y − x); similarly, the term (x³ − x²y) is much smaller than the term (x² − xy), and so on. Thus, we can proceed with two important approximations:
• The first approximation is characterized by the subscript 1: z_1 = y − x, which gives:

(a_{Sk})_1 = (1 + z_1)\,a = (1 + y - x)\,a    (9.50)

Gy44,64,73 demonstrated that this first approximation is sufficient in many practical cases, except for trace constituents, for which the second approximation is suggested.
• The second approximation is characterized by the subscript 2: z_2 = y − x + x² − xy, which gives:

(a_{Sk})_2 = (1 + z_2)\,a = (1 + y - x + x^2 - xy)\,a    (9.51)
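A quick numerical check (deviation values assumed for the example) shows how good the two approximations are when x and y are small compared to one:

```python
# Illustrative deviations, small compared to 1.
x, y = 0.03, -0.02
z_exact = (1 + y) / (1 + x) - 1          # from relation (9.44)
z1 = y - x                               # first approximation, (9.50)
z2 = y - x + x**2 - x*y                  # second approximation, (9.51)
print(f"z exact = {z_exact:.6f}")
print(f"z1      = {z1:.6f}   error {z1 - z_exact:+.2e}")
print(f"z2      = {z2:.6f}   error {z2 - z_exact:+.2e}")
```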

For the time being we will use only the first approximation; the second approximation is discussed later in more detail in this book. There may be a way to avoid the use of the more complex second approximation if the selected sample mass is such that the introduction of a Poisson process is prevented, which is the objective of Part V.

9.4.5. Moments of a_Sk and SSE

Using the first approximation, Gy57 demonstrated in 1975 the following results. The averages are:

m(a_{Sk})_1 = a = \frac{\sum_m a_m M_m P_m}{\sum_m M_m P_m}    (9.52)

m(SSE)_1 = \frac{a - a_L}{a_L}    (9.53)

The variances are:

s^2(a_{Sk})_1 = \frac{\sum_m (a_m - a)^2 M_m^2 P_m (1 - P_m)}{\left(\sum_m M_m P_m\right)^2}    (9.54)

s^2(SSE)_1 = \frac{s^2(a_{Sk})_1}{a_L^2}    (9.55)

The long and tedious development leading to these results is beyond the objectives of this book; for the details, the reader can refer to the publication mentioned above. Remark concerning the accuracy of sampling: if we relate 9.52, defining a, to the relation expressing the critical content a_L of the lot, and if we consider relation 9.30, we obtain:

a = \frac{\sum_m a_m M_m P_m}{\sum_m M_m P_m} = \frac{\sum_m a_m M_m P'_m}{\sum_m M_m P'_m}    (9.56)

a_L = \frac{\sum_m a_m M_m}{\sum_m M_m}    (9.57)

We can conclude that if an incorrect sampling selection is characterized by a set of values P'_m, the real weight M_m is substituted by a fictitious weight equal to the product M_m P'_m, and in general we have:

a \ne a_L    (9.58)

In other words, the sampling process would be ineffective, invalid, and highly misleading, which emphasizes the critical importance of following the rules listed in Part VII.


9.5. Properties of the Sampling Selection Error SSE: Correct Sampling

9.5.1. Consequences of the Selection Correctness Hypothesis

The selection correctness is expressed as follows:

P_m = P = \text{constant, regardless of } m    (9.59)

9.5.2. First Approximation

If we transpose relation 9.59 into relation 9.56, we obtain for the averages:

m(a_{Sk})_1 = \frac{\sum_m a_m M_m}{\sum_m M_m} = a_L    (9.60)

m(SSE)_1 = 0    (9.61)

Conclusion: to a first approximation, correct sampling is strictly accurate. For the variances we obtain:

s^2(a_{Sk})_1 = \frac{1 - P}{P} \cdot \frac{\sum_m (a_m - a_L)^2 M_m^2}{\left(\sum_m M_m\right)^2}    (9.62)

s^2(SSE)_1 = \frac{1 - P}{P} \sum_m \frac{(a_m - a_L)^2}{a_L^2} \cdot \frac{M_m^2}{M_L^2}    (9.63)

Conclusion: As far as the variance is concerned, this first approximation is most of the time sufficient.
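Equations (9.62) and (9.63) can be evaluated directly on a fragment population and verified by simulation. In this sketch, an illustration under the correctness hypothesis with invented data, every fragment is selected with the same probability P:

```python
import numpy as np

rng = np.random.default_rng(5)
M = rng.lognormal(0.0, 0.8, size=2000)        # fragment weights
a = np.abs(rng.normal(1.0, 0.5, size=2000))   # fragment critical contents
P = 0.05
a_L = np.sum(a * M) / np.sum(M)

# Theoretical relative variance of SSE, relation (9.63)
s2_theory = (1 - P) / P * np.sum((a - a_L)**2 * M**2) / (a_L**2 * M.sum()**2)

# Monte Carlo check with equiprobable Bernoulli selection
K = 20000
sel = rng.random((K, M.size)) < P
a_S = (sel @ (a * M)) / (sel @ M)
s2_sim = np.var((a_S - a_L) / a_L)
print(f"s2(SSE): theory {s2_theory:.3e}  simulation {s2_sim:.3e}")
```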

9.6. The Correct Probabilistic Model

Let us define the lot L, with descriptors a_L and M_L, as a discrete set obtained by the reunion of N potential groups of fragments G_n, with n = 1, 2, 3, ..., N. These groups of fragments should be complementary and not overlapping. The descriptors of these groups are a_Gn, M_Gn, and h_Gn. Then we can write:

L = \sum_n G_n    (9.64)

If we assume that the average size of these groups is equal to the size of the real increments I_q collected by a sampler, with q = 1, 2, 3, ..., Q, then we can write:

T_L = N T_I    (9.65)

where T is the duration of an event. In the correct probabilistic model, we make the hypothesis that all units of the lot L are submitted to the sampling selection process with a constant probability P. From relations 9.61 and 9.62 we can write:

m_P(SSE)_1 = 0    (9.66)

s_P^2(SSE)_1 = \frac{1 - P}{P N^2} \sum_n \frac{(a_{Gn} - a_L)^2}{a_L^2} \cdot \frac{M_n^2}{\bar{M}^2} = \frac{1 - P}{P N} s^2(h_{Gn})    (9.67)

9.7. The Components of the Sampling Selection Error SSE

We should emphasize that we are making the assumption that the sampling selection is correct, which implies that the Increment Materialization, Increment Delimitation, Increment Extraction, and Increment Preparation Errors are nil; this, in turn, implies that the rules listed in Part VII are fulfilled unconditionally. In other words, these rules are not negotiable. For economic reasons, top management with no insight into the Theory of Sampling transgresses these most elementary principles most of the time, with devastating consequences on any attempt to perform reconciliation; the economic consequences are vastly misunderstood.

IME = IDE + IEE + IPE = 0    (9.68)

Therefore, the Total Sampling Error TSE is equal to the Sampling Selection Error SSE, which is itself equal to the continuous Heterogeneity Fluctuation Error HFE:

TSE = SSE = HFE    (9.69)

9.7.1. Small Scale Heterogeneity In the following section we compare the continuous model with the discrete model and find out if they are consistent. In the discrete model we consider only fluctuations between fragments and small groups of fragments such as increments. Under these conditions the differential dh2(t) and dh3(t) of the long-term heterogeneity fluctuations h2(t) and the periodic heterogeneity fluctuations h3(t) are practically negligible in comparison with the differential dh1(t) of the short-term heterogeneity fluctuation h1(t): dh2 ðtÞ þ dh3 ðtÞ  0

ð9:70Þ

dhðtÞ  dh1 ðtÞ

ð9:71Þ

From these approximations it readily follows that:

191

Sampling of Zero-Dimensional Lots

HFE2 þ HFE3  0

ð9:72Þ

TSE ¼ SSE ¼ HFE  HFE1

ð9:73Þ

Under these conditions, and these conditions only, we can compare the moments of the Sampling Selection Error SSE with those of the short-range Heterogeneity Fluctuation Error HFE1. 9.7.2. Comparison of SSE from the Discrete Model with HFE1 from the Continuous Model 9.7.2.1. Moments of the Error HFE1 in the Continuous Model For these moments we shall use the subscript c. The average is: mc ðSSEÞ ¼ mc ðHFE1 Þ ¼ 0

ð9:74Þ

V0 Q

ð9:75Þ

The variance is: s2c ðSSEÞ ¼ s2c ðHFE1 Þ ¼

Where V0 is the intercept of the variogram with the ordinate axis and Q is the number of increments in the sample representing the lot. We should note that correct sampling is strictly accurate in the case of the continuous model. 9.7.2.2. Moments of the Sampling Selection Error SSE in the Discrete Model For these moments we shall use the subscript D. We already showed that: L¼

X

Gn

ð9:76Þ

n

TL ¼ NTI

ð9:77Þ

Only in a first order approximation (subscript 1), the average is: mD ðSSEÞ1 ¼ 0

ð9:78Þ

In a second order approximation (subscript 2), the average is: mD ðSSEÞ2 ¼ 

  1  B X ðaGn  aL Þ M2n  2 B aL ML n

ð9:79Þ

where B is the sampling probability with B ¼ Q=N. The TOS shows that equi-probabilistic sampling is accurate, but only to the first order approximation. It also shows that in a second order approximation there is indeed a

192

Theory of Sampling and Sampling Practice

small deviation from a strictly accurate average, but experience is incapable to verify this because of the limited analytical precision. We will say that accuracy is good enough, except perhaps in the case of trace constituents. 9.7.2.3. Comparison of the Averages of Both Models Both averages are nil; however, in the discrete model it is nil only in a first order approximation due to the discrete nature of particulate materials. This difference between both models is intuitively logical because we know that they describe the material with different scales of observation: the continuous model with a wide-angle lens, and the discrete model with a magnifying lens. 9.7.2.4. Comparison of the Variances of Both Models In the continuous model we know that we have: s2c ðSSEÞ ¼

1 V0 Q

ð9:80Þ

Because of the isolation of h1(t) in relation 9.71, the variogram of h(t) is flat and V0 is   equal to the sill V00 of the variogram, which is equal to s2 hq , which is itself an estimator of s2 ðhGn Þ and of the distribution heterogeneity DHL defined in Chapter 6. Then:   V0 ¼ s2 hq ¼ s2 ðhGn Þ ¼ DHL and s2c ðSSEÞ ¼

1 DHL Q

ð9:81Þ

ð9:82Þ

Relation 9.67, however, could have been written as follows:  s2P ðSSEÞ1

¼

 1 1 2  s ðhGn Þ PN N

ð9:83Þ

where PN = Q. Therefore: 

 1 1 2  s ðhGn Þ ¼ Q N   1 1 2 sP ðSSEÞ1 ¼  DHL Q N

s2P ðSSEÞ1

ð9:84Þ ð9:85Þ

s2P ðSSEÞ1 was defined in the context of the discrete model; therefore, equation (9.85) can also be written:  s2P ðSSEÞ1

¼

s2D ðSSEÞ

¼

 1 1  DHL Q N

ð9:86Þ

193

Sampling of Zero-Dimensional Lots

In the continuous model, by definition, the number of potential increments N is infinite, therefore:  s2c ðSSEÞ

¼

 1 1 DHL  Q N1

ð9:87Þ

Conclusion: Both models lead to the same results, Therefore, they are consistent with one another and complementary. 9.7.3. Logical Analysis of the Discontinuous Heterogeneity Fluctuation Error HFE1 Because the average of HFE1 is very near zero, we concentrate the analysis on the variance s2 ðHFE1 Þ. 9.7.3.1. Influence of the Size of the Increments on the Variance s2 ðHFE1 Þ Let us consider the lot L as a set of fragments, then as a set of groups of fragments. • As a set of NFi fragments Fi submitted independently to the selection process with a constant probability P of being selected; the variance s2Fi ðHFE1 Þ is: s2Fi ðHFE1 Þ ¼

1P 1P DHL ¼ CHL PNFi PNFi

ð9:88Þ

where CHL is the value taken by DHL when the increments are made of only one fragment. • As a set of NGn groups of fragments Gn submitted independently to the selection process with a constant probability P of being selected; the variance s2Gn ðHFE1 Þ is: s2Gn ðHFE1 Þ ¼

1P DHL PNGn

ð9:89Þ

But: DHL ¼

ð1 þ Þ CHL ð1 þ  Þ

ð9:90Þ

Therefore: s2Gn ðHFE1 Þ ¼ ð1 þ Þ

NFi ðNGn  1Þ 2 s ðHFE1 Þ ðNFi  1ÞNGn Fi

ð9:91Þ

After simplification we obtain: s2Gn ðHFE1 Þ ¼ ð1 þ Þs2Fi ðHFE1 Þ Or:

ð9:92Þ

194

Theory of Sampling and Sampling Practice

s2Gn ðHFE1 Þ ¼ s2Fi ðHFE1 Þ þ s2Fi ðHFE1 Þ

ð9:93Þ

Because both factors  and  are positive or nil but never negative, the second term of relation 9.93 is never negative, then s2Fi ðHFE1 Þ is the minimum of s2Gn ðHFE1 Þ. A good comprehension of equation (9.93) is critically important for the following chapters. Conclusion 1: The most reproducible way to sample a lot of particulate material is to independently submit each fragment one by one to the selection process. Obviously, this is not practical, except in a heterogeneity test, but as soon as the size of the units (i.e., the size of the increments) increases so does the sampling variance. Conclusion 2: If a lot is sampled correctly with a constant probability P, the sampling variance can never be smaller than s2Fi ðHFE1 Þ. We will define this minimum as the variance of the Fundamental Sampling Error FSE. Important remark: We will never emphasize strongly enough that if P is not constant (i.e., the cardinal rules of sampling correctness listed in Part VII are not respected unconditionally), everything becomes ineffective, and the sampling process becomes invalid and highly misleading, with unpredictable economic consequences; we are no longer in a position to help people. This is a severe warning about the generation of massive, invisible financial losses generated by invalid data.

9.7.3.2. Definition of the Fundamental Sampling Error FSE We define the Fundamental Sampling Error FSE as the error generated under the following conditions: • When the implementation of a correct sampling selection process with a constant probability P is satisfied. • When it is applied to the population of the NFi fragments Fi. The moments of the Fundamental Sampling Error FSE were expressed in relations 9.61 and 9.63 in the general case of units Um. We can rewrite these relations for the fragments Fi as follows: mðFSEÞ  0 in a first order approximation " # 1  P X ðai  aL Þ2 M2i 1P 2 s ðFSEÞ ¼  2 ¼ CHL 2 P PNFi aL ML i

ð9:94Þ ð9:95Þ

Conclusion: The Fundamental Sampling Error FSE is the only sampling error that never cancels as soon as the selection probability is smaller than one, which is always the case in daily practice. If P = 1, then the sample is the entire lot itself; therefore, there is no sampling. In other words, the Fundamental Sampling Error is the minimum sampling error generated when all conditions have been optimized.

195

Sampling of Zero-Dimensional Lots

9.7.3.4. Definition of the Grouping and Segregation Error GSE We can define the Grouping and Segregation Error GSE as the complementary part of the Fundamental Sampling Error FSE within the discontinuous Heterogeneity Fluctuation Error HFE1. Then its moments are: The average is:

The variance is: or

mðGSEÞ ¼ mðHFE1 Þ  mðFSEÞ  0

ð9:96Þ

s2 ðGSEÞ ¼ s2 ðHFE1 Þ  s2 ðFSEÞ

ð9:97Þ

s2 ðGSEÞ ¼ s2 ðFSEÞ

ð9:98Þ

Conclusion 1: The Fundamental Sampling Error FSE is the consequence of the Constitution Heterogeneity CHL, while the Grouping and Segregation Error GSE is the consequence of the Distribution Heterogeneity DHL. There are between the variances s2 ðFSEÞ and s2 ðGSEÞ the same connections as between the heterogeneities CHL and DHL. Conclusion 2: We are now well equipped with critically important fundamentals to handle all the following chapters.

This page intentionally left blank

10 The Fundamental Sampling Error

In this chapter and Chapter 11, we make attempts at quantifying the variance of the Fundamental Sampling Error, leading to well-known Gy’s formulas. Of great importance is the objective, as many practitioners not well-acquainted with Gy’s work have criticized these formulas because they do not reflect reality observed between duplicate or replicate samples obtained by using splitting processes. First and foremost, the variance of FSE cannot and will not match the variance between duplicate or replicate samples because such variance includes the Grouping and Segregation Error GSE and the Analytical Error AE that is often extremely difficult to separate from the last subsampling error generated at the balance room of the laboratory. Important Remark: It is an extremely difficult exercise, and sometimes a misleading suggestion when some practitioners claim that Gy’s formulas need to be calibrated by looking at the variance between series of duplicate or replicate samples obtained by using splitting processes. If a problem shows up with the quantification of the variance of FSE, it has nothing to do with Gy’s formulas, but everything to do with the mineralogy, or other important characteristics, of the component of interest that we do not know well enough; there is a fundamental difference. Therefore, it is of paramount importance to know the material to be sampled extremely well before we make any attempt to quantify the variance of FSE. The so-called calibration of Gy’s formulas is too often an excuse to cover up our lack of understanding of the material to be sampled and compensate for the resulting mistakes. From a practical standpoint this chapter is very important because it deals with the useful implementation of formulas obtained in Chapters 6 and 9. We were then working in the abstract, and we already saw that the calculation of the constitution heterogeneity CHL involves a very large number of fragments. Especially when dealing with fine particles, it is impossible to economically estimate their number NF. This difficulty can be overcome by calculating the Intrinsic Heterogeneity IHL already introduced in Section 6.3.6 and equation (6.20) at the cost of a legitimate approximation. The approximation we are going to use is based on the fact that, often, there is a strong correlation between the critical content of a fragment and its density, while the correlation between the critical content of a fragment and its size is usually not as important. In other words, if we investigate a size-density fraction Lαβ of the lot L, each fragment of Lαβ is represented by the average fragment Fαβ characterized by: •

its volume: vα ¼ fα d3α

ð10:1Þ

197

198

Theory of Sampling and Sampling Practice

where fα is a fragment shape factor taking into account that fragments are not necessarily cubes. • its density: ρβ •

its weight: MFαβ ¼ vα ρβ ¼ fα d2α ρβ



ð10:2Þ

its critical content: aαβ

As a good approximation we can write: X

ðai  aL Þ2 M2i ¼

XX α

i

β

 2 Nαβ aαβ  aL M2Fαβ

ð10:3Þ

But: MLαβ Nαβ

MFαβ ¼

ð10:4Þ

Therefore: X

ðai  aL Þ2 M2i ¼

XX α

i

2 aαβ  aL MFαβ MLαβ

ð10:5Þ

β

Transposing equation (10.2) into equation (10.5), we obtain: X

ðai  aL Þ2 M2i ¼

X α

i



X β

 2 ρβ aαβ  aL MLαβ

ð10:6Þ

Depending on the case, we may use equations (10.3), (10.5), or (10.6) in our calculations: IHL ¼

X α



X β

 ρβ

aαβ  aL a2L

2 

MLαβ ML

ð10:7Þ

This is a valuable reference formula, because it is complete and still relatively simple. For convenience, we should summarize all the above notations in Table 10.1. Important remark: In practice, the variability of the volume of fragments is much more important than the variability of their density, which suggests a simplification by considering only the coarsest fractions. Furthermore, vα being proportional to the third power of the average diameter dα of the fragments of a fraction, it is obvious that the coarsest fraction has an overwhelming influence on the value of IHL, which is important to know to greatly simplify heterogeneity tests; then, it immediately follows:

199

The Fundamental Sampling Error

TABLE 10.1 Notations used in the calculation of the Intrinsic Heterogeneity IHL Characteristics Units

Average diameter

Weight

Critical content

Number of fragments

Average fragment

Average density

Size fraction of L: Lα



MLα





Fα of volume vα

_

Density fraction of Lα: Lαβ

_

MLαβ

aαβ

Nαβ

Fαβ of weight MFαβ

ρβ

Density fraction of L: Lβ

_

MLβ





_

ρβ

ðIHL Þα ¼ IHL ≈ ðIHL ÞαMAX ≈

X

 2 ρβ aαβ  aL MLαβ

ð10:8Þ

2 vαMAX X  ρβ aαMAXβ  aL MLαMAXβ 2 aL β

ð10:9Þ

vα 2 aL ML

β

Nevertheless, it is good practice and safer to consider several of the consecutive coarsest fractions, respectively α1, α2, α3,. . ., αk,. . ., αn, the finest size fraction being αn. Obviously, we should have: dα1 4dα2 4dα3 4…4dαk

ð10:10Þ

Then, we calculate, respectively ðIHL Þα1 , ðIHL Þα2 , and so on. As an excellent approximation we may write: IHL ¼ ðIHL Þα1 þ ðIHL Þα2 þ … þ ðIHL Þαk

ð10:11Þ

In many practical cases, we observe that ðIHL Þα1 is a reasonable estimate of IHL. The reason is simple: as an example, we will not argue if the sample mass should be 1000g, or 1200g, or even 1500g; we will most certainly argue if the necessary sample mass should be 5000g or 10000g, when it is 1000g only. The same remark must be kept in mind when thinking about the validity of the two following hypotheses. We shall not confuse the respective contributions ðIHL Þα1 , ðIHL Þα2 ,. . ., ðIHL Þαk to IHL, with the intrinsic heterogeneity IHLα of a given size fraction α, which is calculated as follows: IHLα ¼

2 vα X  ρβ aαβ  aα MLαβ aα Mα β

ð10:12Þ

Now, we can make the following two hypotheses: Hypothesis #1: Experience shows that the critical content aαβ usually varies much more from one density fraction to the next than from one size fraction to the next; therefore,

200

Theory of Sampling and Sampling Practice

all values of aαβ may be replaced by the average critical content aβ of the density fraction Lβ . The author never saw a case where this hypothesis was not valid. Hypothesis #2: The study of a large number of real cases shows that the proportions MLαβ little from one density fraction to the next; therefore, we may assume MLβ usually vary M Lα can be replaced by their average M that all values MLαβ ML . It is then legitimate to write: L β

MLαβ ≈

MLβ MLα ML

ð10:13Þ

Therefore, by transposing these hypotheses into equation (10.7) and separating the terms in α from those in β, we obtain the following much more pragmatic relation that will be heavily used in practice: P IHL ¼

α

vα MLα "X ML

β

 ρβ

aβ  aL a2L

2

MLβ  ML

# ð10:14Þ

For convenience, we call the first factor X and the second factor between brackets Y: IHL ¼ XY

ð10:15Þ

Now, we can find ways to estimate, respectively, X, which is relative to the size fractions, and Y, which is relative to the density fractions.

10.1. Definition of the Fragment Shape Factor The calculation of X implies that we know the volume vα of the average fragment in the fragment size fraction Lα. We already saw with equation (10.1) that vα is equal to the product fα d3α where fα is defined as a fragment shape factor that is nothing more than a correcting factor taking into account the fact that fragments are not perfect cubes; therefore, it could also be defined as a cubiform coefficient. •

If all fragments of the size fraction Lα are cubes, by definition fα= 1.



If all fragments are spheres of radius r = 0.5 (i.e., the diameter being equal to 1), the volume v of the spheres would be 43 πr3 ¼ 0:523 (i.e., the shape factor of a sphere is f = 0.523).

The fragment shape factor is dimensionless. Many experiments have shown that most minerals have a shape factor around 0.5 (e. g., for the average coal it is 0.446, for iron ore it is between 0.495 and 0,514, pure pyrite is 0470, quartz is 0.474, pure cassiterite is 0.530, ordinary gravel in your garden is 0.5, etc.) There are exceptions: •

Materials occurring as flakes such as mica, biotite, scheelite, and so on have a shape factor around 0.1.



Materials occurring as soft solids and submitted to mechanical stress, such as small gold nuggets, have a shape factor around 0.2.

201

The Fundamental Sampling Error



Acicular minerals occurring as needles, such as tourmaline, asbestos, serpentine, and so on have a shape factor superior to 1, and sometimes as large as 10. These minerals may create difficult sampling problems but fortunately they are not common.

10.2. Estimation of the Fragment Shape Factor For all practical purposes a quick examination of minerals under the microscope is enough to estimate the shape factor fα by comparing observations with the remarks made in Section 10.1; nevertheless, we can describe a more accurate way to estimate fα. The experimental estimation of the shape factor fα requires an accurate determination of the average size d of the fragments from Lα collected for the experiment, and two methods are suggested: •

We can calculate an average of d, we should call dα, if the collected fragments are from a size fraction located between two consecutive sieves with upper opening dα1 and lower opening dα2. The collected fragments for the experiment must be properly collected at random. Therefore, we may write: sffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 3 3 3 d α1 þ dα2 dα ¼ 2

ð10:16Þ

In this case d is similar to dα introduced earlier. Determine the weight M of these fragments and count them; their number is noted as p. If the fragments are selected at random, p does not need to be large for the experiment to be accurate (e.g., p = 30 should be sufficient), It immediately follows: fα ¼

M pd3α ρ

ð10:17Þ

ρ being the average density of the fragments. •

We can also take a sieve of opening dα1, screen some of the material through it for no more than 10 seconds. Clean the screen, collect the minus dα1 size fraction, and screen it again always using the same sieve, but this time for at least 10 minutes. A few fragments that went through the sieve the first time will never have a chance to pass through the same sieve again. Evidently, the size of these few fragments is very close to dα1. Determine the weight M of these p fragments and it immediately follows: fα1 ¼

M pd3α1 ρ

The estimate fα1 is very close to fα. Now, the factor X can be written:

ð10:18Þ

202

Theory of Sampling and Sampling Practice



X α

fα d3α

MLα ML

ð10:19Þ

Furthermore, f is found to be relatively constant from one size fraction to another; Therefore, we may write for all practical purposes: X¼f

X d3 MLα α

α

ð10:20Þ

ML

10.3. Definition of the Fragment Size Distribution Factor The factor X of equation (10.15) is weighted by the proportion For convenience we chose to write: X¼f

X d3 MLα α

α

ML

¼ fgd3

MLα ML

of the size fraction Lα.

ð10:21Þ

In this simple form d is the maximum fragment size, defined as the opening of a square mesh retaining no more than 5% oversize; g is a new correcting factor taking into account the fact that all fragments do not have the same size d, and it is a dimensionless parameter. g would equal to 1 if we were dealing with perfectly calibrated material (e.g., steel balls in a ball bearing), but in practice we mostly encounter the following values for g for the above definition of d: •

With noncalibrated material (e.g., material coming from a jaw crusher, or a pulverizer), g is around 0.25.



With calibrated material (e.g., between two consecutive sieve openings of a certain series), g is around 0.55. With naturally calibrated materials such as cereals (e.g., beans, wheat, oats, rice, etc.), g is around 0.75.



Important remark: The value of g highly depends on the convention taken to define the maximum fragment size d. For more information on this subject refer to Gy.49

10.4. Definition of the Maximum Fragment Size We chose to define the maximum fragment size d as the opening of a square mesh retaining no more than 5% oversize; nevertheless, it is easy to see that there are as many values for g as values to define d. Therefore, we shall explain why we chose this convention of 5%. A study was performed by Gy49 on 113 different size fraction analyses performed on many kinds of materials, which showed an interesting property: if we chose for d the value of the opening of a square mesh retaining no more than 5%

203

The Fundamental Sampling Error

oversize, the value of g for noncalibrated material varies very little on both sides of 0.25. The 113 curves representing g as a function of % oversize, have the property of concentrating around 5% oversize that corresponds to a value around 0.25 for g. Therefore, it was convenient and wise to choose a value for g that varies little from one size distribution to the next, making an evaluation of this correcting factor easy, quick, and simple. Most of these size analyses gave values for g between 0.20 and 0.30, which was most certainly not the case for other possible definitions of d. Furthermore, it is in our interest to refer to a value of d that is easy to quickly estimate by just looking at the material, and also easy to memorize. Obviously, this value is easier to appreciate when it comes closer to 0%. It was then logical to retain the 5% convention. Important remark: Practitioners using TOS must remember this 5% convention because other people such as metallurgists often use another convention traditional to them that is 20%. The confusion between both conventions can lead to large errors in the calculation of the variance of FSE. This confusion is often a problem when we ask people what is the maximum size d of the material they want to sample.

10.5. Definition of the Mineralogical Factor Now that we know how to estimate X in a simple way we will still have to estimate Y, which is a little more difficult: Y¼

X β

 ρβ

aβ  aL a2L

2 

MLβ ML

ð10:22Þ

This formula is a sum extended to all fractions. We should observe that each  the density 2 term of the sum is proportional to aβ  aL , and obviously, this quantity is minimum when the material is homogeneous, which is a limit case when the material is perfectly homogeneous and aβ is equal to aL irrespective of Lβ , therefore Y = 0. Y reaches a maximum when the mineral or constituent of interest is completely liberated. In this case each fraction Lβ is homogeneous: one density fraction is made of the pure constituent of interest, the other is made of pure gangue. If we assume that the material is made of only two liberated constituents, then we should call LM the density fraction containing the pure mineral of interest of density ρM , while the gangue fraction is called Lg of density ρg , therefore aM ¼ 1, ag ¼ 0, and MM ML ¼ a L: . By transposing these values in equation (10.22) we obtain: YMAX ¼ c ¼ ρM

ð1  aL Þ2 þ ρg ð1  aL Þ aL

ð10:23Þ

By definition, we call YMAX the mineralogical factor c. In a situation where the constituent of interest is dispersed into several different major mineral occurrences, things may become more complicated, but the problem can still be solved by using equation (10.22), especially if these different major mineral occurrences have similar densities.

204

Theory of Sampling and Sampling Practice

10.5.1. Cases Where Several Phases of the Mineral of Interest Are Present Nature is often more complex than ideal models we create to represent it. For example, it is not rare that the copper ore has several origins, even within a single geological unit. In such cases we may wonder what happens to the mineralogical factor and the liberation factor. It happens that these phases that are completely distinct may be completely liberated relative to each other. In some cases, these phases are partially associated with one another. But, such association usually exists only at the coarse scale. As the size d diminishes for finer stages of comminution these phases usually liberate relative to one another long before the copper minerals themselves liberate. In such cases it is advisable to take as a reference the liberation of the various phases instead of the liberation of the copper minerals themselves. The determination of the mineralogical factor is easy in both cases, while the estimation of the liberation factor is easier when considering the liberation of the various phases, as usually ‘ is easier to determine accurately as it comes closer to 1. However, to do this for ‘ it is necessary to modify the definition of the mineralogical factor c. This can be done by replacing the copper mineral A and its gangue by the various copper mineral phases present. In practice we can often assume that there is one copper mineral phase that is rich and plays the role of the copper mineral A and another copper mineral phase that is poor and plays the role of the gangue. If a1 and a2 , ρ1 and ρ2 are respectively the copper mineral contents and the densities of the phases rich and poor, it is possible to give the mineralogical factor the more general form as follows: c¼

ða1  aL ÞðaL  a2 Þ ½ða1  aL Þρ1 þ ðaL  a2 Þρ2  a2 ða1  a2 Þ

ð10:24Þ

If we make a1 = 1 and a2 = 0 we revert to the usual general case. Depending on which type of copper ore that is used to determine the liberation curves, we may want to consider such a transformation for the calculation of c prior to the calculation of ℓ. 10.5.2. Calculation of the Mineralogical Factor The purpose of sampling is to exactly estimate aL ; however, we always have an idea of the approximate value of aL , and if we do not we may proceed through appropriate iteration. aL is often expressed as a percentage; however, in formula 10.23 it should be expressed as a proportion of 1 (e.g., 10% is written 0.1, 1000 ppm is written 0.001, etc.), which is a convenient and simple convention that prevents the presence of too many factors in formulas. We shall also emphasize that aL is the proportion of a physical component, and not the proportion of a chemical element (e.g., % molybdenite, rather that % molybdenum). In many practical cases equation (10.23) can be simplified in two ways: •

Cases where aL 50:1 (i.e., smaller than 10%), which is a very common case: c¼



ρM aL

Cases where aL 40:9 (i.e., larger than 90%), such as for concentrates:

ð10:25Þ

205

The Fundamental Sampling Error

c ¼ ð1  aL Þρg

ð10:26Þ

10.5.2.1. Practical Example of a Coal Shipment We want to monitor the sulfur content of a coal shipment. The sulfur content is around 1.5%. The density of the coal is around 1.4. The sulfur mostly occurs as pyrite FeS2 of density 4.8. To calculate the mineralogical factor for FeS2 in this coal, we shall use the following values for each factor: ρM ¼ 4:8 ρg ¼ 1:4 aL ¼ 0:015x120 ¼ 0:028 (i.e., 2S converted to FeS2) 64 The calculation of c using equation (10.23) gives: c ¼ 4:8

ð1  0:028Þ2 þ 1:4ð1  0:028Þ ¼ 163 0:028

The calculation of c using the simplified equation (10.25) gives: 4:8 c ¼ 0:028 ¼ 171 which is considered close enough, and perhaps slightly conservative.

Because aL 510%, both values obtained for c are in good agreement.

10.5.2.2. Practical Example of a Molybdenum Trioxide Concentrate We want to accurately monitor the molybdenum content of a MoO3 shipment. The molybdenum content is around 61.00%. The density of MoO3 is 4.4, while the density of impurities is 2.7. To calculate the mineralogical factor for MoO3 in this type of concentrate, we shall use the following values for each factor: ρM ¼ 4:4 ρg ¼ 2:7 aL ¼ 0:61x 144 96 ¼ 0:92 (i.e., Mo converted to MoO3) The calculation of c using equation (10.23) gives: c ¼ 4:4

ð1  0:92Þ2 þ 2:7ð1  0:92Þ ¼ 0:25 0:92

The calculation of c using the simplified equation (10.26) gives: c ¼ 2:7ð1  0:92Þ ¼ 0:22 Because aL 40:9, both values obtained for c are valid and in good agreement.

206

Theory of Sampling and Sampling Practice

10.6. Definition of the Liberation Factor We defined the minimum of Y as zero and the maximum as c, and of course, the actual value of Y is, most of time, somewhere between: 0Yc

ð10:27Þ

This is the actual domain of Y, and to recall this property we can write:

with:

Y ¼ c‘

ð10:28Þ

0‘1

ð10:29Þ

We define ‘ as the liberation factor. ‘ = 0 when the material is perfectly homogeneous (i.e., when there is no liberation of the constituent of interest), and ‘ = 1 when the constituent of interest is completely liberated (i.e., when Y is maximum). ‘ is a dimensionless factor. Most materials can be classified according to their degree of heterogeneity, and a good scale of it is the liberation factor: •

Very heterogeneous material, ‘ = 0.8

• •

Heterogeneous material, ‘ = 0.4 Average material, ‘ = 0.2



Homogeneous material, ‘ = 0.1



Very homogeneous material, ‘ = 0.05

These classifications do not help very much and they lack reliability; furthermore, we see that ‘ varies too much to be able to attribute an average value to it. Therefore, two methods for its calculation are suggested.

10.7. Calculation of the Liberation Factor The estimation of the liberation factor ‘ is not always easy and, if overlooked, our future recommendations for an optimum sampling protocol could be misleading. There are at least two methods to calculate ‘, and the method we choose highly depends on the available information and equipment.

10.7.1. Calculation of the Liberation Factor through the Notion of Constitution Heterogeneity This method is by far the method of choice of the author because there is a logical mathematical derivation for it, therefore it is not based on empirical observations. We begin with equation (10.22). Let’s make several hypotheses that must be kept in mind to

207

The Fundamental Sampling Error

understand the limitations of the following recommended methods. Let’s also assume we are interested in the copper content of a lot to be sampled. First hypothesis: Following an analytical investigation, we suppose that the maximum copper content amax of the coarsest fragments of the lot is known. Second hypothesis: We suppose that all size fractions have roughly the same copper content aL, or at least they are within the same order of magnitude. Third hypothesis: We suppose that inside each fraction all of the copper is located in a sub fraction of copper content amax, density ρR , and relative weight: M aL ¼ ; ML amax

ð10:30Þ

while the remainder of the size fraction of density ρg , and relative weight 1  not contain any copper or very little. Then we can rewrite equations (10.22) and (10.23) as follows:   ðamax  aL Þ2 aL ð0  aL Þ2 aL þ ρg 1 Y ¼ ρR amax a2L amax a2L





aL amax

does

ð10:31Þ

which leads after simplification to: Y ¼ ρR

    a amax L  1 þ ρR  ρg 1 aL amax

ð10:32Þ

amax being usually much larger than aL , the second term may become negligible, therefore:  Y ¼ ρR

amax 1 aL

 ð10:33Þ

It would be convenient to suppress the factor ρR which is difficult to estimate. Let’s call V1 the volume occupied by copper and V2 the volume of the gangue. The average density ρ of their mixture can be expressed as follows: ρ¼

V1  ρ M þ V2  ρ g

ð10:34Þ

V1 þ V2

LÞ with ρM being the density of the copper mineral. But V1 ¼ ρaL and V2 ¼ ð1a ρ , then: M

ρ¼

ρM  ρg aL  ρg þ ρM ð1  aL Þ

or

ρM  ρg ρ

¼ aL  ρg þ ρM ð1  aL Þ

g

ð10:35Þ

By transposing 10.35 into the general equation (10.23) to calculate the mineralogical factor and after simplifications we obtain:

208

Theory of Sampling and Sampling Practice

c¼ therefore:

ð1  aL ÞρM  ρg aL  ρ

amax Y ρR aL  1 ‘ ¼ ¼ ð1a Þρ ρ L M g c

ð10:36Þ

ð10:37Þ

aL ρ

‘¼

ðamax  aL ÞρR  ρ ð1  aL ÞρM  ρg

ð10:38Þ

In practice we also know that ρM 4ρR 4ρ4ρg therefore: ρR  ρ ≈1 ρM  ρg

ð10:39Þ

We finally obtain the very practical formula: ‘¼

amax  aL 1  aL

ð10:40Þ

amax and aL should be expressed as a proportion of the copper mineral content (i.e., as part of one), and not as a metal content.

10.7.1.1. Recommended Method #1 for the Determination of amax for Each Size Fraction of a Typical Fragment Size Distribution 1. Collect a large composite sample representing a single geological unit, from leftover half core samples (e.g., fifty 6-kg samples). 2. Dry the composite. 3. Crush the composite to d = 2.54 cm. The definition of d is the size of a screen retaining no more than 5% of the material by weight. 4. Screen the entire composite using the following screens: 2.54 cm, 1.25 cm, 0.635 cm, 0.335 cm, 0.17 cm, 0.085 cm, 0.0 425cm, and 0.0212 cm. Below 0.0425 cm the method becomes awkward, but it can be done. 5. Wash, dry, and weigh each size fraction. 6. Spread the size fraction between 2.54 cm and 1.25 cm on a clean surface. 7. Using a portable X-ray machine select 10 fragments showing the highest copper content. Using a microscope, identify the main copper mineral to calculate the mineral content. Crush and grind them in a mortar, then assay them for their copper content. You obtain 10 copper results. Look at the distribution of the 10 results. Calculate the average of the 10 results and call the average an estimate of amax for d = 2.09 cm. Using formula 10.40 calculate ‘ for d = 2.09 cm. 8. Spread the size fraction between 1.25 cm and 0.635 cm on a clean surface.

The Fundamental Sampling Error

209

9. Using a portable X-ray machine select 10 fragments showing the highest copper content. Crush and grind them in a mortar, then assay them for their copper content. You obtain 10 copper results. Look at the distribution of the 10 results. Calculate the average of the 10 results and call the average an estimate of amax for d = 1.05 cm. Using formula 10.40 calculate ‘ for d = 1.05 cm. 10. Repeat the same process for the other size fractions between 0.635 cm and 0.335 cm, between 0.335 cm and 0.17 cm. 11. For the smaller size fraction, identify 10 zones where the copper content is high with the X-ray machine. At each of these zones collect a spoonful of fragments. Look at them under the microscope and estimate amax using proportion standard references from the mineralogist when you spot a fragment with high copper content. amax will be the average of your observation from the 10 spoonful subsamples you collected from each respective size fraction. Then, you can calculate ‘ using formula 10.40 for each respective size fraction.

10.7.1.2. Recommended Method #2 for the Determination of amax for the Top Size Fraction of a Typical Comminution Stage This method is longer but may be more accurate because of the limitation of hypothesis #2 under different conditions of comminution. 1. Collect a large composite sample representing a single geological unit, from leftover half core samples (e.g., fifty 6-kg samples). 2. Dry the composite. 3. Crush the composite to d = 2.54 cm. 4. Split the composite into 7 sublots. 5. Screen one sublot using 2.54 cm and 1.25 cm screens 6. Wash, dry, and weigh the size fraction. 7. Spread the size fraction between 2.54 cm and 1.25 cm on a clean surface. 8. Using a portable X-ray machine select 10 fragments showing the highest copper content. Using a microscope, identify the main copper mineral to calculate the mineral content. Crush and grind them in a mortar, then assay them for their copper content. You obtain 10 copper results. Look at the distribution of the 10 results. Calculate the average of the 10 results and call the average an estimate of amax for d = 2.09 cm. Using formula 10.40 calculate ‘ for d = 2.09 cm. 9. Crush the second sublot to d = 1.25 cm 10. Screen the sublot using 1.25 cm and 0.635 cm screens 11. Wash, dry, and weigh the size fraction. 12. Spread the size fraction between 1.25 cm and 0.635 cm on a clean surface. 13. Using a portable X-ray machine select 10 fragments showing the highest copper content. Crush and grind them in a mortar, then assay them for their copper content. You obtain 10 copper results. Look at the distribution of the 10 results. Calculate the average of the 10 results and call the average an estimate of amax for d = 1.05 cm. Using formula 10.40 calculate ‘ for d = 1.05 cm.

210

Theory of Sampling and Sampling Practice

14. Repeat the same process for the other size fractions between 0.635 cm and 0.335 cm, between 0.335 cm and 0.17 cm, by crushing another sublot appropriately each time. 15. For the smaller size fraction, after crushing a sublot appropriately, identify 10 zones where the copper content is high with the X-ray machine. At each of these zones collect a spoonful of fragments. Look at them under the microscope and estimate amax using proportion standard references from the mineralogist. amax will be the average of your observations from the 10 spoonful subsamples you collected from each respective size fraction. Then, you can calculate ‘ using formula 10.40 for each respective size fraction.

10.7.2. Calculation of the Liberation Factor Using the Notion of Liberation Size There is another way to calculate the liberation factor ‘ that may be more familiar to mineral processing engineers. We can estimate ‘ as a function of the liberation diameter, which is called d‘ . When the actual size d is smaller than the liberation size d‘ , the liberation is then complete and ‘ ¼ 1; however, when d4d‘ , the liberation is not complete and, a rule of thumb, it is shown that ‘ can often be estimated with the following formula: rffiffiffiffi d‘ ‘¼ d

ð10:41Þ

A more general and appropriate formula would be: ‘¼

 r d‘ d

ð10:42Þ

where r is a parameter to be estimated with microscopic and mineralogical observations. The author suggests extreme caution for the use of this popular method that has been responsible for huge mistakes and therefore controversies when it is abused by using very poor mineralogical information. This is another reason why the first method shown in the former section is far more reliable. When the constituent of interest is made of only one phase, the liberation factor ‘ varies regularly as a function of the fragment size d. In the case where d is much larger than d‘ , size fractions near d are homogeneous and ‘α varies little with dα , as soon as there is some pure gangue in the material, for example dα 5100d‘ , the liberation factor ‘α starts to increase rapidly as dα diminishes. Curves showing the value of ‘α as a function of dα vary widely from one physical constituent to another. Furthermore, the same physical constituent may show very different liberation curves from one material to another. The example of ash in coal is typical as shown by Gy’s experiments.49 Not only are there variations from one deposit to another, but also from one geological unit to another, which is another reason to be cautious about formulas 10.41 and 10.42

211

The Fundamental Sampling Error

10.8. Calculation of the Intrinsic Heterogeneity IHL After combining equalities 10.15, 10.21, and 10.28. we obtain: IHL ¼ XY ¼ fgc‘d3

ð10:43Þ

Practical example: We have a routine core sample of coal and we want to determine its ash content. We crush it to 95% minus 2.5cm. The average ash content is expected to be around 8%, and a mineralogical investigation shows that 2.5-cm fragments can occasionally contain up to 32% ash. The density of pure coal is 1.4 and the density of the schist is 2.2. To calculate the intrinsic heterogeneity, we shall use the following values: aL ¼ 0:08 aMAX ¼ 0:32 ρM = 2.2 (i.e., the constituent of interest is ash.) ρg ¼ 1:4 f ¼ 0:5 g = 0.25 (i.e., noncalibrated material) d ¼ 2:5cm ‘¼ c¼

0:32  0:08 ¼ 0:26 1  0:08

ð1  0:08Þ2 2:2 þ ð1  0:08Þ1:4 ¼ 24:6 0:08

IHL ¼ fgc‘d3 ¼ 0:5x0:25x24:6x0:26xð2:5Þ3 ¼ 12:5g We may wonder about the real meaning of such a number. This is what we try to comprehend in the following section.

10.9. Definition of the Fundamental Sampling Error FSE We shall define the Fundamental Sampling Error FSE as an error that occurs when the increment selection is correct as suggested in Part 7 of this book, and when the increments making up one sample are made of one single fragment collected at random. Obviously, it is a limit case, but it is known that this error is generated only by the Constitution Heterogeneity CHL. Gy has demonstrated in detail that the mean m(FSE) of the Fundamental Sampling Error is negligible for most cases, except for trace constituents when a Poisson process is inadvertently introduced because the sample mass is way too small, which is a subject studied in Part V of this book. Furthermore, Gy

212

Theory of Sampling and Sampling Practice

demonstrated that the variance s2FSE of the Fundamental Sampling Error FSE can be expressed as follows: s2FSE ≈ σ 2FSE ¼

1P 1P CHL ¼ IHL PNF PML

ð10:44Þ

Here, P is a selecting probability that shall remain constant if sampling is considered properly, and performed correctly, as suggested in Part 7 of this book: • •

Considered properly when the grade of each fragment follows a normal distribution accepting for central value the true average aL of the lot. Performed correctly when the fragments making up a sample are collected one by one at random from the lot.

For all practical purposes we will refer to the limited estimated variance s2FSE rather than the true unknown variance σ 2FSE from an infinite population. We also know that: MS ¼ PML

ð10:45Þ

By transposing equation (10.45) into equation (10.44) we obtain:  s2FSE

¼

 1 1 IHL  MS ML

ð10:46Þ

This formula is extremely practical for the optimization of sampling protocols. When ML is much larger than MS (i.e., at least 10 times larger), formula 10.46 can be simplified to: s2FSE ¼

IHL MS

ð10:47ÞÞ

This equation leads to an obvious discussion: if s2FSE = 1, then IHL = MS, and suggests the following interpretation of IHL: Assuming that all fragments could be selected and extracted at random one by one, the intrinsic heterogeneity IHL of the constitution heterogeneity is the weight of a sample, the grade aS of which is distributed around aL with a relative standard deviation expressed in percent s = ±100%.

11 Minimizing the Fundamental Sampling Error in Sampling Protocols

The Fundamental Sampling Error FSE is the only error that is never zero because it is generated by the intrinsic heterogeneity of a particulate solid; furthermore, through comminution processes, the smaller the largest fragment size becomes, the larger the liberation factor. Fortunately, the factor d3 is likely to decrease much faster than ‘ increases. Therefore, changing d through comminution can provide a means to minimize the variance of the Fundamental Sampling Error. In this chapter we review all the practical applications of formula 10.46, used to produce sampling protocols in accordance with a carefully determined objective, and to evaluate the precision of a protocol used by others.

11.1. Minimum and Maximum of the Fundamental Sampling Error Laboratories often receive series of samples in which the analyst knows generally nothing about the sampling procedure that was used to generate these samples. Let us assume that the weight of these samples is around MS and the top fragment size is d. How can the analyst estimate the variance of FSE generated during the collection of these samples? • The lot L had a very large weight ML compared to MS , then a maximum variance of FSE can be calculated: 

 1  0 IHL MS

s ðFSEÞMAX ¼ 2

ð11:1Þ

• The sample comes from the result of a splitting process using a riffle splitter, then a minimum variance of FSE can be calculated:  s2 ðFSEÞMIN ¼

 1 1 IHL  MS 2MS

ð11:2Þ

It is clear that we may know nothing about the Fundamental Sampling Errors generated during sampling stages carried out prior to the one generated by the last selection; 213

214

Theory of Sampling and Sampling Practice

however, we can calculate s2 ðFSEÞMAX and s2 ðFSEÞMIN . The variance of FSE generated by the last selection is somewhere between this minimum and maximum, which is a valuable indication, because if this minimum is already very large there is no chance for that sample to be representative of the initial lot. This quick test may allow a chemist, geologist, miner, or metallurgist to give meaningful advice about the minimum weight of a sample selected from a lot at a particular stage of comminution.

11.2. Construction of Sampling Nomographs In the early years of the Theory of Sampling, sampling nomographs were very useful tools to the engineer. With the progress of computers, it became easy to rely on quick calculations rather than nomographs. However, nothing can better show the logical steps of a sampling procedure than a sampling nomograph, and computers should be used to obtain these nomographs in a form and quality that never existed in the past. The value displayed by a calculator or a computer that represents the variance of FSE of an isolated sampling stage is far less meaningful than the same value displayed in its context, with the possibility of seeing at a glance not only the errors generated by the other sampling stages but also the alternatives. Therefore, the sampling nomograph is the best available tool to quickly estimate sampling errors, effectively design optimum sampling procedures, and find reasonable economical compromises. It is also proven that any of us have good visual memory, and the sampling nomograph prints facts on our mind in a unique way. In an ordinary sampling scenario we voluntarily change the values of d and MS. When we change d, the liberation factor ‘ also changes, and so does the product fgc‘ often called the sampling constant C. Either we select an average value of C, or we draw a curve that represents its variations that are directly proportional to the changes of ‘. In what follows, as d changes, we assume that the different values taken by C are known. When the weight of ML of the original lot is very large compared to the weight MS of the sample, the simplest way to estimate the variance s2 ðFSEÞ is by using the following formula: s2 ðFSEÞ ¼

Cd3 MS

ð11:3Þ

To place as much information as possible on the nomograph, we found it convenient to use logarithmic coordinates; therefore, it is convenient to express the above formula as a logarithm: log s2 ðFSEÞ ¼ log C þ 3 log d  log MS

ð11:4Þ

For a given fragment size d, which is by definition a constant for a given comminution stage carrying a given sampling constant C, the value of log s2 ðFSEÞ and the derivative of the function log s2 ðFSEÞ is equal to –1. This means that the line representing the variations of log s2 ðFSEÞ as a function of  log MS for a given fragment size d and a given sampling constant C, has a slope on the nomograph equal to –1.

Minimizing Sampling Error in Sampling Protocols

215

• Every lot L in a particular state of comminution of weight ML is represented on the graph by one and only one point (e.g., lower point shown by the thick broken line in Figure 11.1). The graph shows three values for ML for three different states of comminution. • Every sampling stage which does not involve a comminution (d = constant) is represented by a change of place along the straight line of slope –1, until a new point corresponding to the weight MS of the selected sample (e.g., upper point shown by the thick broken line in Figure 11.1). The graph shows three values for MS for three different states of comminution. • Every comminution or change of fragment size d (MS = constant), tends to lower the upper point until it intercepts a new straight line of slope –1 corresponding to the new particle size d, as illustrated by the vertical sections of the thick broken line shown in Figure 11.1. Every successive sampling and comminution stage is represented by a broken section, first moving from a large weight to a smaller weight, then moving from a larger value for d to a lower one. Figure 11.1 shows two of these successive sampling and comminution stages. The horizontal dashed line is an objective called safety line that must be selected by the practitioner. In this example it is set at ±15% relative. Figure 11.1 illustrates the following sampling and preparation protocol: • A lot weighing about 3 tons with a value of d around 1.5cm is sampled to provide a 20000-g sample: it is the primary sampling stage. • The 20000-g sample is crushed to 95% minus 0.3cm (d= 0.3cm): it is the primary sample preparation stage.

FIGURE 11.1 Illustration of a typical sampling nomograph expressing the variance of FSE as a function of sample or lot weights.

216

Theory of Sampling and Sampling Practice

• The new 20000-g lot is sampled to provide a 1000-g sample: it is the secondary sampling stage. • The 1000-g sample is pulverized to 95% minus 0.015cm (d=0.015cm): it is the secondary sample preparation stage. • The new 1000-g sublot is finally subsampled to provide a 30-g subsample used for fire assay to determine the gold content of the sample: this is the final tertiary sampling stage. It is of the utmost importance to notice that the variability associated with that 30-g subsample is not part of the analytical variability. It is important to note that equation (10.46) applies only to one segment with a slope –1, starting from a lot and ending with a sample. It does not apply to a segment starting for a large value of d and ending with a smaller value of d (i.e., a vertical segment in Figure 11.1). Therefore, the formula is valid only for a given stage of comminution where d remains constant. There are other ways to construct a sampling nomograph that we will present in later sections and later chapters.

11.3. Revisiting the Determination of the Liberation Factor We know from the Theory of Sampling the variance of the Fundamental Sampling Error can be estimated using the following formula:  s2FSM ¼

 1 1 f  g  c  ‘  d3  Ms M L

ð11:5Þ

We also know all factors except ‘ are relatively easy to estimate. Therefore, this would suggest the use of the following formula to calculate ‘ for each stage of comminution: ‘¼

s2FSM 1 1 3 Ms  ML f  g  c  d

ð11:6Þ

The method suggested highly resembles a typical Heterogeneity Test, except that it should be repeated for several stages of comminution. Important remark: In Gy’s earlier literature the intrinsic heterogeneity IHL was written as follows: IHL ¼ f  g  c  ‘  d3 ¼ C  d3

ð11:7Þ

The problem with this presentation was that C, which is the product of four factors, must be calculated every time the value of d changes since the liberation factor varies rapidly with the value of d. As a result, in the new literature it became a tradition, for practicality, to summarize the value of IHL as follows:

217

Minimizing Sampling Error in Sampling Protocols

IHL ¼ f  g  c  ‘  d3 ¼ K  dx

ð11:8Þ

In this new presentation, the liberation factor is assumed to follow an empirical model that can be provided by metallurgists or a reliable mineralogist following thorough microscopic observations, such as:  r d‘ ‘¼ d

ð11:9Þ

where d‘ is defined as the liberation size of the constituent of interest. In many cases when the constituent of interest is a single mineral, the exponent r is not far away from 0.5. But, as clearly shown by Gy49 (1967, Figure 5.5, page 77) for the liberation of ash in coals, and further demonstrated by François-Bongarçon96 for gold, r is not necessarily anywhere close to 0.5, especially when the constituent of interest is located in various minerals. Under such new conditions, equation (11.5) should be approximated as follows:  s2FSM ¼

 1 1 K  dx  Ms ML

ð11:10Þ

where:

and

K ¼ f  g  c  ðd‘ Þr

ð11:11Þ

x¼3r

ð11:12Þ

K and r then become the key factors to quantify in various experiments described by François-Bongarçon. The author of this book favors the approach using amax for the determination of the liberation factor, in which case x = 3; nevertheless, François-Bongarçon’s approach proved to be extremely useful in the mining industry.

11.4. Recommnded Method for the Determination of IHL, for Each Size Fraction of a Typical Fragment Size Distribution 11.4.1. General Approach The following method takes copper as an example, but applies to many other constituents of interest. 1. Collect a large composite sample representing a single geological unit, from leftover half core samples (e.g., 50 6-kg samples). 2. Dry the composite. 3. Crush the composite to d = 2.54 cm.

218

Theory of Sampling and Sampling Practice

4. Screen the entire composite using the following screens: 2.54 cm, 1.25 cm, 0.635 cm, 0.335 cm, 0.17 cm, 0.085 cm, 0.0425cm, and 0.0212 cm. Below 0.0425 cm the method becomes awkward, but it can be done. 5. Wash, dry, and weigh each size fraction. 6. Spread the size fraction between 2.54 cm and 1.25 cm on a clean surface. 7. Collect 50 samples made of 50 fragments collected one by one at stratified random. 8. Weigh each sample: the average weight of these samples is highly relevant in equation (11.6). Crush and pulverize each sample. Assay each sample for copper. Convert the copper contents into mineral contents using the main copper mineral. You obtain 50 copper mineral results. Look at the distribution of the 50 results. Calculate ‘ using formula 11.6 for d = 2.09 cm. 9. Spread the size fraction between 1.25 cm and 0.635 cm on a clean surface. 10. Collect 50 samples made of 50 fragments collected one by one at stratified random. 11. Weigh each sample: the average weight of these samples is highly relevant in equation (11.6). Crush and pulverize each sample. Assay each sample for copper. Convert the copper contents into mineral contents using the main copper mineral. You obtain 50 copper mineral results. Look at the distribution of the 50 results. Calculate ‘ using formula 11.6 for d = 1.05 cm. 12. Repeat the same process for the other size fractions between 0.635 cm and 0.335 cm, between 0.335 cm and 0.17 cm, and so on. 13. For smaller size fractions, use 50 one-gram samples made of ten stratified random 0.1-gram increments. Then, calculate ‘ using formula 11.6 for each respective size fraction; obviously, we lose a little on accuracy.

11.4.2. Example of General Method Applied to a Gold Deposit In any gold project, it is essential to perform a Heterogeneity Test to calculate sampling nomographs and optimize sampling protocols. The test should be performed on each type of major mineralization. A preliminary mineralogical study reveals that the test can be done on a 340-kg composite, using samples from the top size fraction, made of p = 50 fragments each, mainly because the gold is found to be relatively finely disseminated within a quartz matrix. The following case is a real case, not necessarily appropriate for every gold deposit. 1. The composite is made of material from 50 different locations within a single type of mineralization. Dry the composite overnight, at 110oC. After drying the composite weighs about 340 kg. Crush the entire composite to roughly –2.00 cm using a clean jaw crusher with opening adjusted to about 2 cm. 2. Screen the lot through 1.25-cm, 0.63-cm, 0.335-cm, 0.17-cm, 0.085-cm, 0.0425-cm, and 0.0212-cm screens. 3. Weigh each size fraction and record these weights. 4. Spread the –1.25 cm +0.63 cm fraction on a clean surface: the Heterogeneity Test is performed on this fraction, where d = 1.05 cm.

Minimizing Sampling Error in Sampling Protocols

219

5. From this fraction, collect 100 samples. Each sample is made of pi fragments selected one by one at random, up to nearest 50 grams. Number these samples from 1 to 100, weigh each of them, and record values for pi. Results for the sample weights are illustrated in Figure 11.2, where the arithmetic average is MS ¼ 50:2g. Results for the number of fragments are illustrated in Figure 11.3, where the arithmetic average is p = 51.7 fragments.

FIGURE 11.2 Plot of the 100 sample weights.

FIGURE 11.3 Plot of the number of fragments in the 100 samples.

220

Theory of Sampling and Sampling Practice

6. Pulverize each sample directly in an enclosed ring and puck pulverizer to about 95% minus 106 microns (Do not use a dusty plate pulverizer which is known to smear gold too much). 7. Assay each sample for gold by fire assay and gravimetric finish, using the entire sample. Atomic Absorption finish is not recommended for the test, because the most relevant assays are the ones showing high gold contents. Results are summarized in Table 11.1 and plotted in Figure 11.4. The arithmetic average of the 100 fire assays is 98.4 g/t and the relative, dimensionless variance is a close estimate of s2FSM ≈ 0:4278. 8. Crush all size fractions, and what is left from the –1.25 cm +0.63 cm fraction, to 95% minus 0.30 cm, and collect a split about 1000 grams from each size fraction. If the coarsest size fraction has less than 1000 grams, use the entire fraction. Pulverize each 1000-gram split to 95% – 106μm. Perform a screen fire assay using the entire sample and a 150μm screen. Weigh and assay the entire +150μm fraction, and perform two 50-gram fire assays on the –150μ fraction. Report all weights and assays, which are very relevant for the interpretation of the test. Results are summarized in Table 11.2. 9. From any rejects from the test, prepare a 10000-gram sample. Screen the entire composite sample on a 212-μm screen. There is always a substantial amount of material that does not grind well. Wash this coarse material. Separate the heavy minerals by panning. With the heavy concentrate, prepare polished sections and find the exact nature of the material that does not comminute well.

11.4.2.1. Advantages of the Recommended Approach

1. The test accumulates an enormous amount of information at a reasonable cost.
2. The Fundamental Sampling Error for the coarsest fragments (i.e., about 1 cm for blasthole chips, reverse circulation chips, or the product of a jaw crusher) is well known. It is indeed the primary sampling stage in ore grade control and exploration that is likely to introduce a large error.
3. The experiment uses 1-cm fragments; therefore the sample weight and K are the only contributors to the variance of the Fundamental Error, d³ being equal or very close to one. This gives an accurate estimate of the sampling constant K.
4. Differences in gold content between the various size fractions are well documented. For example, if the –212-µm material contains five times more gold than the coarse fragments, then any loss of fines during the implementation of the sampling protocol would introduce large Increment Extraction Errors and Increment Preparation Errors, which are all devastating nonconstant bias generators.
5. Screen fire assays performed on the 1000-g pulps, and especially on the 10000-g sample from the rejects, give valuable information about the behavior of gold particles after the material has been pulverized (i.e., delayed comminution).
6. Polished sections can provide the necessary information to correct the calculated nomograph if judged necessary.
7. The variance of the Analytical Error is usually negligible compared to the variance between the 100 hand-picked samples made of p fragments.

TABLE 11.1 Summary of results from the Heterogeneity Test

Sample | Weight (g) | p fragments | Gold (g/t) | Sample | Weight (g) | p fragments | Gold (g/t)
1 | 49.11 | 52 | 43.4 | 51 | 50.37 | 47 | 94.06
2 | 50.75 | 55 | 52.48 | 52 | 50.88 | 56 | 176.61
3 | 50.7 | 44 | 111.99 | 53 | 49.26 | 56 | 64.75
4 | 49.74 | 53 | 81.83 | 54 | 50.09 | 49 | 64.79
5 | 49.94 | 51 | 116.38 | 55 | 50.83 | 51 | 140.45
6 | 50.62 | 55 | 44.94 | 56 | 50.1 | 57 | 91.22
7 | 50.7 | 40 | 43.98 | 57 | 50.71 | 47 | 131.88
8 | 50.43 | 48 | 87.7 | 58 | 49.66 | 62 | 87.21
9 | 49.98 | 43 | 91.42 | 59 | 49.67 | 51 | 160.85
10 | 51.51 | 60 | 28.55 | 60 | 50.42 | 53 | 141.2
11 | 50.85 | 50 | 98.48 | 61 | 49.49 | 54 | 48.44
12 | 51.26 | 49 | 71.04 | 62 | 50.79 | 52 | 106.51
13 | 50.59 | 52 | 182.62 | 63 | 50.34 | 54 | 52.76
14 | 49.98 | 49 | 61.74 | 64 | 50.85 | 54 | 57.56
15 | 50.35 | 56 | 81.11 | 65 | 50.62 | 59 | 129.92
16 | 50.31 | 57 | 40.72 | 66 | 49.43 | 47 | 69.45
17 | 50.79 | 50 | 61.02 | 67 | 50.49 | 46 | 77.85
18 | 49.91 | 56 | 95.98 | 68 | 49.98 | 48 | 214.77
19 | 49.8 | 52 | 82.68 | 69 | 49.71 | 48 | 114.74
20 | 50.35 | 53 | 98.52 | 70 | 49.59 | 54 | 36.89
21 | 49.45 | 49 | 40.07 | 71 | 50.13 | 49 | 92.01
22 | 49.85 | 56 | 307.63 | 72 | 49.28 | 56 | 52.07
23 | 50.62 | 47 | 112.75 | 73 | 50.33 | 45 | 158.51
24 | 50.66 | 51 | 159.47 | 74 | 49.65 | 64 | 176.58
25 | 49.94 | 47 | 122.51 | 75 | 50.13 | 51 | 69.97
26 | 49.66 | 49 | 84.91 | 76 | 49.51 | 49 | 48.51
27 | 49.58 | 47 | 104.76 | 77 | 49.66 | 48 | 91.42
28 | 49.91 | 53 | 205.61 | 78 | 50.58 | 52 | 73.33
29 | 49.73 | 50 | 83.61 | 79 | 49.53 | 44 | 80.77
30 | 50.37 | 50 | 31.6 | 80 | 49.34 | 48 | 55.88
31 | 50.22 | 50 | 449.37 | 81 | 50.34 | 49 | 87.9
32 | 50.62 | 49 | 98.93 | 82 | 49.47 | 51 | 130.81
33 | 50.42 | 48 | 50.94 | 83 | 50.5 | 47 | 59.34
34 | 50.54 | 54 | 88.82 | 84 | 50.03 | 57 | 61.81
35 | 50.25 | 50 | 137.12 | 85 | 49.18 | 51 | 36.07
36 | 50.78 | 60 | 97.15 | 86 | 50.4 | 58 | 43.64
37 | 49.78 | 49 | 108.22 | 87 | 50.58 | 48 | 208.05
38 | 50.48 | 60 | 85.9 | 88 | 49.77 | 61 | 57.49
39 | 49.93 | 48 | 62.08 | 89 | 50.93 | 49 | 48.3
40 | 50.76 | 52 | 110.07 | 90 | 50.41 | 55 | 58.86
41 | 50.66 | 48 | 60.16 | 91 | 49.65 | 56 | 127.87
42 | 49.72 | 55 | 260.15 | 92 | 49.8 | 54 | 30.2
43 | 50.79 | 49 | 55.26 | 93 | 49.89 | 57 | 29.14
44 | 50.69 | 55 | 55.57 | 94 | 50.33 | 56 | 62.08
45 | 49.75 | 56 | 36.38 | 95 | 49.74 | 48 | 102.94
46 | 50.32 | 53 | 61.29 | 96 | 50.28 | 52 | 40.21
47 | 49.14 | 53 | 123.2 | 97 | 50.76 | 45 | 275.1
48 | 49.47 | 47 | 130.78 | 98 | 50.03 | 55 | 85.84
49 | 50.23 | 50 | 158.13 | 99 | 50.03 | 52 | 41.21
50 | 50.72 | 53 | 118.3 | 100 | 50.32 | 57 | 117.86

FIGURE 11.4 Plot of the 100 gold fire assays for the Heterogeneity Test.

8. The histogram of the 100 assays performed on p 1-cm fragments can provide a good estimate of the proportion of gold that is easy to sample (i.e., the Ingamells' low background gold content).
9. Since the p 1-cm fragments are collected one by one at random, at least in many cases, there is no Grouping and Segregation Error GSE included in the estimation of the sampling constant K.

11.4.2.2. Disadvantages of the Recommended Approach

1. The one hundred samples made of p 1-cm fragments often generate a few outliers. Some people see this as a problem. Actually, it translates the behavior of the gold quite well, and it could even lead to a good modeling of the Poisson distribution using the histogram of the 100 assays. One welcome effect, which is always true, is that these outliers minimize the negative effect of the Analytical Error.

TABLE 11.2 Testing the gold grade changes between fragment size fractions

Size fraction (cm) | % Split | Sample weight (g) | Weight (g) of +150 µm sent to lab | Gold content (g/t) of +150 µm | Gold content (g/t) of –150 µm | Weighted average gold content (g/t)
+1.25 | 10 | 1087 | 70.38 | 197.72 | 76.56 | 84.40
–1.25 +0.63 | 5 | 2624 | 18.29 | 1475.94 | 89.90 | 99.56
–0.63 +0.33 | 5 | 827 | 28.04 | 436.83 | 85.97 | 97.87
–0.33 +0.17 | 5 | 778 | 31.94 | 305.81 | 78.47 | 87.80
–0.17 +0.08 | 5 | 584 | 24.04 | 372.54 | 77.24 | 89.40
–0.08 +0.04 | 5 | 418 | 20.11 | 283.25 | 71.32 | 81.52
–0.04 +0.02 | 5 | 332 | 16.27 | 306.41 | 81.27 | 92.30
–0.0212 | 5 | 447 | 21.44 | 227.22 | 174.51 | 177.04

2. The test is performed on a calibrated size fraction and is not representative of the grade in the total sample. This leads to a particle size distribution factor of about 0.5 instead of 0.25, but we know that in our calculations. If the gold content of the finer size fractions is higher than that of the tested fraction, then the mineralogical factor used for the tested fraction would be slightly conservative; therefore the sampling constant K would also be conservative. In all cases, we know exactly what is happening.
3. The entire test is performed on a single composite. But the material to be sampled at the mine, and especially at the mill, is often made from many areas as well. At least we confine the composite to a single type of mineralization. There is no doubt the Heterogeneity Test should be performed on the other geological units as well.

11.4.2.3. Calculation of the Sampling Constant K

Because the fraction from which the 100 subsets were collected was made of a very large number of fragments, formula 11.5 can simplify to:

C = \frac{s_{FSM}^2 \cdot M_S}{d^x}    (11.13)

with:

C = f \cdot g \cdot c \cdot \ell

We know the denominator d^x is very close to 1, therefore the following estimated value of K is quite accurate regardless of x:

C = \frac{0.4278 \times 50.2}{1.05^3} = 18.6 \text{ g/cc}

K = C \sqrt{d} = 18.6 \times (1.05)^{0.5} = 19.1
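As a minimal sketch (not from the book), the same arithmetic in Python:

    # Measured during the Heterogeneity Test of Section 11.4.2
    s2_FSM = 0.4278   # relative, dimensionless variance of the 100 fire assays
    M_S = 50.2        # average sample mass, g
    d = 1.05          # top size of the tested fraction, cm

    C = s2_FSM * M_S / d**3   # equation (11.13) with x = 3
    K = C * d**0.5            # sampling constant K = C * sqrt(d)

    print(f"C = {C:.1f} g/cc, K = {K:.1f}")
    # Prints C = 18.6 g/cc and K = 19.0; the text's 19.1 comes from rounding C
    # to 18.6 before multiplying by sqrt(d).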

It is interesting to notice that if we had used the average number p = 51.7 fragments instead of the average sample mass MS = 50.2 g, an average density of 2.7, a fragment shape factor of about 0.5 for this kind of material, and a fragment size distribution factor of about 0.75, which is appropriate for fragments between two consecutive screen openings of a given series, we would have obtained almost the same result:

K = \frac{0.4278 \times 51.7 \times 2.7 \times 0.5 \times 0.75}{(1.05)^{3-0.5}} = 19.8

11.4.2.4. Calculation of the Exponent x

Clearly, results from all the tests described above do not provide the necessary information to accurately calculate the exponent x in formula 11.13. There is no doubt it should be x = 3, unless the liberation factor is modeled as a function of d itself. The following solutions, suggested many times in Gy's literature, are:

• Use the approximate, empirical formula ℓ = (dℓ/d)^0.5, using the liberation size dℓ for the liberation factor, unless better information based on solid metallurgical or mineralogical facts is available, or
• define a liberation curve, including the liberation size, using valuable information from mineral processing experiments performed to optimize the process, or, if not available yet,
• define a liberation curve, including the liberation size, using the necessary observations and tests from a logical, comprehensive, and complete mineralogical study, which should be performed anyway for the feasibility study of any new project.
• If any doubt persists, proceed with the method suggested in Section 10.7.1.2.

11.4.2.5. Calculation of a Sampling Nomograph

Using the following equation, it becomes easy to calculate a sampling nomograph like the one illustrated in Figure 11.5, then test the validity of an existing protocol, and suggest a better one if judged necessary:

s_{FSM}^2 = \left( \frac{1}{M_S} - \frac{1}{M_L} \right) K \cdot d^{2.5}    (11.14)

As clearly shown on the nomograph, the tested protocol (i.e., a 5-kg sample collected from a 750-kg blasthole pile, crushed to 95% minus 0.17 cm, then split to a 250-g sample, then pulverized to 95% minus 150 µm, and finally fire assayed using a 30-g analytical subsample) is a safe protocol with respect to FSE.
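A minimal Python sketch of such a check against equation (11.14); the top size d assumed for each stage in the stage list is an illustrative assumption, while K = 19.1 and the protocol masses come from the text:

    K = 19.1  # sampling constant calibrated in Section 11.4.2.3

    def fse_variance(m_s: float, m_l: float, d: float) -> float:
        """Equation (11.14): relative variance of FSE for one sampling stage
        (masses in grams, top size d in cm)."""
        return (1.0 / m_s - 1.0 / m_l) * K * d**2.5

    # (sample mass, lot mass, assumed top size d in cm) for each stage
    stages = [
        (5_000, 750_000, 1.05),  # 5-kg sample from a 750-kg blasthole pile
        (250, 5_000, 0.17),      # 250-g split after crushing to 95% minus 0.17 cm
        (30, 250, 0.015),        # 30-g fire assay after pulverizing to 150 um
    ]

    total = 0.0
    for i, (m_s, m_l, d) in enumerate(stages, start=1):
        s2 = fse_variance(m_s, m_l, d)
        total += s2
        print(f"stage {i}: s_FSE = {s2**0.5:.1%}")
    print(f"whole protocol: s_FSE = {total**0.5:.1%}")  # about 7%, a safe protocol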

FIGURE 11.5 Testing an existing sampling protocol on the nomograph.

11.4.2.6. Checking the Validity of the Calculated Value for K

A size fraction analysis was performed, the results of which are shown in Table 11.2. The gold content varies by a factor of 2 between size fractions. In this particular case, the variation is not severe enough to worry about a significant change in the value of the mineralogical factor c; therefore the calculated value for K should be quite reliable, and slightly conservative. Observed differences between size fractions should definitely alert us to potential, severe problems with delimitation, extraction, and preparation biases if the selected sampling equipment, and the way it is used, are not correct. It should not alert us about the validity of K, and we should explain the logic of this statement. The true value of IHL is the sum of all the partial values affecting every fragment size fraction:

IH_L = (IH_L)_{\alpha_1} + (IH_L)_{\alpha_2} + \ldots + (IH_L)_{\alpha_k}    (11.15)

Because of the overwhelming effect of the third power on the value of d³ in the coarse size fraction, even when the liberation factor increases for the smaller size fractions, the value of IHL is not much different from the value obtained for (IH_L)_{\alpha_1}.

Conclusion: Performing the recommended Heterogeneity Test only on the coarse size fraction presents absolutely no risk for the representativity of the test. It makes the test cheaper, faster, and by all means good enough. Furthermore, and this is a big advantage, we have no problem with the variance of GSE, a transient phenomenon that changes all the time and would otherwise mislead the calculation of the variance of FSE in a big way.

11.5. Cases Where the Material Has Been Crushed Close to the Liberation Size of the Constituent of Interest

For practical purposes the following material applies to gold; however, the logic presented in the theoretical developments applies to any other constituent of interest.

In cases where most of the gold reports to size fractions above 80 µm, it is likely that the fine pulverization performed with laboratory mills will substantially liberate the gold. In this category we may also include alluvial gold. Hypothesis #1 made earlier in Chapter 10 for the derivation of equation (10.14) is still valid, while hypothesis #2 may show a weakness. Therefore, we may rewrite equation (10.7) as follows:

IH_L = \sum_\alpha f_\alpha d_\alpha^3 \sum_\beta \rho_\beta \frac{(a_\beta - a_L)^2 M_{L\alpha\beta}}{a_L^2 M_L}    (11.16)

By definition we know the gold is liberated; therefore the shape factor is a function of the density class, thus:

IH_L = \sum_\alpha d_\alpha^3 \sum_\beta f_\beta \rho_\beta \frac{(a_\beta - a_L)^2 M_{L\alpha\beta}}{a_L^2 M_L}    (11.17)

Let's develop this relation for the density class ρ_g of the gangue and for the density class ρ_Au of the gold. Let's also call ε₁ the infinitesimal gold content of the gangue and ε₂ the infinitesimal weight of gold in the gangue:

IH_L = \sum_\alpha d_\alpha^3 \left[ f_g \rho_g \frac{(\varepsilon_1 - a_L)^2 \varepsilon_2}{a_L^2 M_L} + f_{Au} \rho_{Au} \frac{(a_{Au} - a_L)^2 M_{L\alpha Au}}{a_L^2 M_L} \right]    (11.18)

Obviously, the first term of the sum is negligible when compared to the second one. Furthermore, a_Au = 1 by definition, and a_L is usually very small. Therefore (11.18) simplifies as follows:

IH_L = \sum_\alpha d_\alpha^3 \, f_{Au} \frac{\rho_{Au} M_{L\alpha Au}}{a_L^2 M_L}    (11.19)

By definition:

\sum_\alpha \frac{M_{L\alpha Au}}{M_L} = a_L    (11.20)

Replacing the sum over size fractions by the equivalent particle size distribution factor g_Au, and substituting a_L using (11.20), we obtain the very useful simplified formula:

IH_L = f_{Au} \cdot g_{Au} \cdot d_{Au}^3 \cdot \frac{\rho_{Au}}{a_L}    (11.21)

If f_Au = 0.2, g_Au = 0.25, and ρ_Au = 16 (in practice, native gold is often alloyed with some other metals), useful sampling nomographs can be calculated with the following formula:

s_{FSM}^2 = \left( \frac{1}{M_s} - \frac{1}{M_L} \right) \frac{0.8}{a_L} d_{Au}^3    (11.22)

FIGURE 11.6 Nomograph for liberated gold for 700-µm and 150-µm gold particles, a grade of 1.2 g/t gold, and a targeted s_FSE of 15% relative.

Using this formula, convenient sampling nomographs can be calculated like the one illustrated in Figure 11.6. A gold deposit shows gold particles up to 700 µm in a given type of mineralization. After pulverizing the material very fine, most of the gold is liberated or very near liberation. We may wonder what the correct sample weight would be to perform assays with s_FSE = 0.15 (i.e., a tolerance of 15% relative). Assuming the large sample is screened on a 150-µm screen and the +150-µm fraction is fire assayed entirely, we may wonder how much of the –150-µm fraction should be used for fire assay. The expected gold content at the cutoff grade, where the maximum precision is required for ore grade control, is 1.2 g/t. A 10163-g sample is needed for the metallic screen assay, and a 100-g sample from the –150-µm fraction needs to be fire assayed. Assuming the mill feed is sampled after the grinding circuit, providing one 8-hour shift sample, if only a 30-g or 50-g fire assay is performed on that sample, the estimated gold content would be totally ludicrous. Managers, manufacturers of sampling equipment, engineering firms, and laboratory supervisors have the duty to understand, detect, and correct such problems. Similar derivations could be performed for other minerals of interest, and they lead to their own versions of equation (11.22). Such formulas have powerful applications in Chapter 12 and Part VI of this book.
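Both sample weights quoted in this paragraph can be verified with a minimal sketch of equation (11.22), assuming M_L is much larger than M_s (f_Au = 0.2, g_Au = 0.25, and ρ_Au = 16 as above):

    def required_mass(d_au: float, a_l: float, s_rel: float) -> float:
        """Solve equation (11.22) for M_s with M_L >> M_s.
        d_au: top size of the liberated gold (cm); a_l: grade as a mass
        fraction; s_rel: tolerated relative standard deviation of FSE."""
        return 0.8 * d_au**3 / (a_l * s_rel**2)  # 0.8 = f_Au * g_Au * rho_Au

    a_l = 1.2e-6  # 1.2 g/t written as a dimensionless mass fraction

    print(f"{required_mass(0.070, a_l, 0.15):,.0f} g")  # 700-um gold: about 10,163 g
    print(f"{required_mass(0.015, a_l, 0.15):,.0f} g")  # 150-um gold: about 100 g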

11.6. Cases Where a Constituent of Interest Is Associated with Another Major Mineral

Many minerals such as galena, sphalerite, chalcopyrite, pyrite, chromite, molybdenite, and so on often contain some precious metals such as gold, platinum, palladium, rhenium, or others. For example, for a molybdenite ore, we can assume that the rhenium grade varies little inside the host mineral. In fact, we may consider it relatively constant. Under these conditions, which need to be verified by a mineralogist, the rhenium content becomes somewhat irrelevant as far as sampling for rhenium is concerned. We should concentrate our attention on the mineral content alone (e.g., molybdenite). Then, we come back to the general cases initiated by formulas 11.5 and 11.6.

11.7. Sampling for Fragment Size Distribution Analysis

For many industries the determination of the fragment size distribution of a material is often one of the most important criteria for the following reasons:
• the assessment of the quality of a product,
• the assessment of the efficiency of a process,
• the compliance with the clauses of a commercial contract,
• the assessment of the accuracy of a sampling system, and
• the assessment of the representativeness of a sample.

Let's define the necessary notations for this section:
Lc: a size fraction of interest
aLc: the proportion of Lc in the lot L
MLc: the mass of the size fraction of interest
NLc: the number of fragments in Lc
Fj: a fragment of Lc
Mj: the mass of the fragment Fj
aFj: the critical content of Fj
FLc: the average fragment of the size fraction Lc
MFLc: the mass of FLc

Now it is possible to rearrange the general equation [2.35] to calculate IHL. Within Lc, a_i = a_Fj = 1; outside Lc, a_i = 0; therefore:

IH_L = \sum_j \frac{(1 - a_{Lc})^2 M_j^2}{a_{Lc}^2 M_L} + \sum_i \frac{M_i^2}{M_L} = \frac{(1 - a_{Lc})^2}{a_{Lc}} \sum_j \frac{M_j^2}{M_{Lc}} + \sum_i \frac{M_i^2}{M_L}    (11.23)

Now, let's introduce auxiliary parameters that have the dimension of a mass:

X_L = \sum_i \frac{M_i^2}{M_L}    (11.24)

X_{Lc} = \sum_j \frac{M_j^2}{M_{Lc}}    (11.25)

Now, assuming there is only one size fraction of main interest:

IH_L = \frac{(1 - a_{Lc})^2}{a_{Lc}} X_{Lc} + X_L    (11.26)

a²Lc is usually very small and we can eliminate it in a first approximation:

IH_L = \frac{1 - 2 a_{Lc}}{a_{Lc}} X_{Lc} + X_L    (11.27)

X_L and X_Lc should be expressed in practical terms, because M_i and M_j are not known. It is relatively easy to do this when we have at our disposal an existing fragment size distribution that has been performed in the past on a similar material. It is sufficient to substitute the fragments F_i and F_j of any given size fraction by the average fragment F_Lx of that size fraction. Then, more notations are needed:
NLx: the number of fragments in any class Lx
MFLx: the mass of the average fragment FLx
aLx: the proportion of Lx in the lot L

By definition:

M_{F_{Lx}} = \frac{M_{Lx}}{N_{Lx}}    (11.28)

Changing M_i for the average fragments:

X_L = \sum_i \frac{M_i^2}{M_L} = \sum_x \frac{N_{Lx} M_{F_{Lx}} M_{F_{Lx}}}{M_L} = \sum_x \frac{M_{F_{Lx}} M_{Lx}}{M_L}    (11.29)

But M_L = M_{Lx}/a_{Lx}; then:

X_L = \sum_x M_{F_{Lx}} a_{Lx}    (11.30)

In the same way:

X_{Lc} = \sum_j \frac{M_j^2}{M_{Lc}} = \frac{N_{Lc} M_{F_{Lc}} M_{F_{Lc}}}{M_{Lc}} = \frac{M_{Lc} M_{F_{Lc}}}{M_{Lc}} = M_{F_{Lc}}    (11.31)

IH_L = \frac{1 - 2 a_{Lc}}{a_{Lc}} M_{F_{Lc}} + \sum_x M_{F_{Lx}} a_{Lx}    (11.32)

All terms in this formula can either be calculated or estimated when we have at our disposal a rough idea of the actual size distribution. Proceed by iteration if necessary. The average mass of the fragments of a given size fraction can be estimated in two different ways:
• By direct measurement of the mass of a given number of fragments selected at random within the size fraction.
• By calculation using the volume V, shape factor f, and density ρ of the fragments, with the following two formulas:

M_{F_{Lc}} = V_{F_{Lc}} \cdot \rho = f \cdot \rho \cdot d_{F_{Lc}}^3    (11.33)

M_{F_{Lx}} = V_{F_{Lx}} \cdot \rho = f \cdot \rho \cdot d_{F_{Lx}}^3    (11.34)

d_FLc, and in the same manner d_FLx, can be estimated using the upper and lower openings of the two screens defining a size fraction, with the following formula:

d_{F_{Lc}} = \sqrt[3]{\frac{(\text{upper opening})^3 + (\text{lower opening})^3}{2}}    (11.35)
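A one-line helper for formula 11.35 (a minimal sketch):

    def average_fragment_size(upper_cm: float, lower_cm: float) -> float:
        """Formula (11.35): representative fragment size of a class bounded
        by two consecutive screen openings, in cm."""
        return ((upper_cm**3 + lower_cm**3) / 2.0) ** (1.0 / 3.0)

    print(f"{average_fragment_size(2.54, 1.25):.2f} cm")  # 2.09 cm, as in Section 11.4.1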

Finally, IHL can take its final, complete form:

IH_L = f \rho \left[ \left( \frac{1}{a_{Lc}} - 2 \right) d_{F_{Lc}}^3 + \sum_x d_{F_{Lx}}^3 a_{Lx} \right]    (11.36)

Now we may calculate the variance of the Fundamental Sampling Error:

s_{FSE}^2 = \left( \frac{1}{M_S} - \frac{1}{M_L} \right) f \rho \left[ \left( \frac{1}{a_{Lc}} - 2 \right) d_{F_{Lc}}^3 + \sum_x d_{F_{Lx}}^3 a_{Lx} \right]    (11.37)

This formula can often be simplified for many applications:
• if M_L > 10 M_S,
• if d_FLc is not much different from d, and
• if a_Lc is small, then:

s_{FSE}^2 = \frac{f \rho}{M_S} \left( \frac{1}{a_{Lc}} - 2 \right) d_{F_{Lc}}^3    (11.38)

Remember the limitations: if the coarse fragments represent a large proportion of the lot, for example more than 20%, the term \sum_x d_{F_{Lx}}^3 a_{Lx} should be kept. Furthermore, if a_Lc becomes too large (e.g., >0.30), then the approximation taken in formula 11.27 is no longer valid; these are rare cases.
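For illustration, a minimal sketch of the simplified formula 11.38; the numerical inputs below (shape factor, density, sample mass) are hypothetical, chosen only to show the mechanics:

    def fse_variance_size_fraction(m_s: float, f: float, rho: float,
                                   a_lc: float, d_flc: float) -> float:
        """Equation (11.38): relative variance of FSE when estimating the
        proportion a_lc of a coarse fraction of average fragment size d_flc (cm)
        from a sample of mass m_s (g), assuming M_L >> M_S and a small a_lc."""
        return (f * rho / m_s) * (1.0 / a_lc - 2.0) * d_flc**3

    # Hypothetical case: a 50-kg sample used to estimate a 5% proportion of
    # fragments around 2 cm in crushed ore (f = 0.5, rho = 2.7 g/cc assumed).
    s2 = fse_variance_size_fraction(m_s=50_000, f=0.5, rho=2.7, a_lc=0.05, d_flc=2.0)
    print(f"s_FSE = {s2**0.5:.1%}")  # about 6.2% relative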

11.8. Important Guidelines

A sampling protocol can involve several sampling stages, several sample preparation stages, and a final analytical stage, and they are all affected by a certain amount of uncertainty. In this section we make the optimistic assumption that sampling and analytical procedures are correct; this does not mean that bias-generating errors such as Increment Delimitation Errors, Increment Extraction Errors, Increment Preparation Errors, Increment Weighting Errors, and the final Analytical Error (AE) are zero. They most certainly are not; however, precautions were taken to hopefully make them very small. Regardless of precautions, these errors take place, and the small biases they generate, which are not reproducible, are responsible for a small inflation of precision problems; in many cases there is nothing much we can do about it. Furthermore, at each sampling stage there is a variance of the Grouping and Segregation Error (GSE) involved; we know this variance is not constant, and we also know that if precautions are taken it should not be very large. Nevertheless, there are several sampling stages, and these variances most certainly add up.

Now, the question is: after selecting a Total Allotted Variance s²Allotted, what proportion of that variance should be allotted to the total acceptable variance for residual errors due to slight incorrectness, GSE, and AE combined? Let's call this the Residual Variance s²Residual for convenience. The answer to this question is not easy, because most variances involved in the Residual Variance s²Residual due to all the errors mentioned above are not quantifiable in a practical way; the only thing we know how to do is to minimize their effect through preventive competence. Nevertheless, preventive competence falls short of satisfying Quality Control regulators, who may know very little about sampling errors and analytical errors. The following analysis may help us reach a reasonable consensus.

Furthermore, in this section the variance of the long-range Heterogeneity Fluctuation Error s²HFE2 is not taken into consideration, because it is part of a totally different issue, which requires the optimization of the sampling interval between increments (e.g., increments taken across a flowing stream to prepare a composite sample representing a working shift or a shipment of a commodity). Also, the variance of the periodic Heterogeneity Fluctuation Error s²HFE3 is not taken into consideration, because it too is part of a different issue, which requires the selection of an appropriate sampling mode in order to minimize the effect of this potential error.

11.8.1. The Total Allotted Variance

Assuming this discussion is well assimilated, it is now necessary to select a maximum value for the Total Allotted Variance s²Allotted involved in a sampling and subsampling protocol, so it becomes possible to use it as an absolute upper limit in a nomograph that may be used to optimize such a protocol. Such a limit is illustrated in Figure 11.1 by the thick horizontal dashed line; we could call it the safety line. First, it is generally accepted that introducing a Poisson process in a sampling protocol is not a good idea. This may start taking place when the standard deviation of the total sampling error exceeds about 35% relative;100 above such a value the distribution of assays from many replicate samples may start to be skewed toward lower values when looking at small contents, such as gold or base metals in ore, impurities in concentrates or other commodities, or pollutants in the environment. It would be a safe guideline to suggest an

absolute maximum at 32%, as the transition to a Poisson process is a slow transition in which there is no magical limit. Obviously, these values are not sacrosanct and are open to discussion; we will investigate them in detail in Part VI of this book.

11.8.1.1. The Geologist Looking for Gold or Copper

Precious metals, such as gold, have a bad reputation for creating extremely difficult sampling problems. Yet nobody, especially the exploration or grade control geologist, wants a Poisson process to enter a database, because of the devastating consequences.22 This suggests the selection of a Total Allotted Variance s²Allotted = (0.32)² = 0.1024 as the wise upper limit on a nomograph. It is important to mention that the geologist may not always be able to control the outcome of the gold assays when isolated coarse gold particles are present in the material to be sampled; what is important is for him or her to be aware of the problem, as there are ways to prevent it in an effective way.131

For base metals, which often consist of contents near or above 1%, the upper limit suggested for precious metals is not a wise limit, as it is much easier to perform a better sampling job. For base metals the value of the standard deviation of the total sampling error should not exceed about 20% relative, which is a Total Allotted Variance s²Allotted = (0.20)² = 0.040; this should be considered as an absolute upper limit in the nomograph.

11.8.1.2. The Metallurgist's Point of View

For the metallurgist in charge of metallurgical accounting, reconciliation with the mine, and routine material balance exercises, following the above guidelines would create a situation where it would be impossible to perform a reliable material balance on a given working shift, on a given day, or even on a given week, because of the additivity of the variances on the feed, tails, and concentrates. Averages from data acquired over a period of several weeks would have to be used to minimize the negative effects of additive uncertainties. For precious metals the value of the standard deviation of the total sampling error should not exceed about 10% relative, which is a Total Allotted Variance s²Allotted = (0.10)² = 0.010; this should be considered as an absolute upper limit in the nomograph. For base metals the value of the standard deviation of the total sampling error should not exceed about 5% relative, which is a Total Allotted Variance s²Allotted = (0.05)² = 0.0025; this should be considered as an absolute upper limit in the nomograph.

11.8.1.3. The Sales People's Point of View

Sales people in charge of delivering commodities to many clients around the world may think the above guidelines would create a situation where too much money would be at risk because of unacceptable levels of uncertainty. As a result, sampling protocols must be far more stringent. For precious metals the value of the standard deviation of the total sampling error should not exceed about 3% relative, or even less, which is a Total Allotted Variance s²Allotted = (0.03)² = 0.0009; this should be considered as an absolute upper limit in the nomograph.

For base metals the value of the standard deviation of the total sampling error should not exceed about 1% relative, or even less, which is a Total Allotted Variance s²Allotted = (0.01)² = 0.0001; this should be considered as an absolute upper limit in the nomograph. Some of the author's clients objected to these limits, saying that they are too high and unacceptable. My answer to them was that if they could indeed satisfy these limits, they may be doing better than a lot of people around the world still using invalid grab sampling for such important commodities, which is a flagrant disregard of due diligence.

11.8.1.4. The Regulator and Lawyer's Point of View

Sometimes, during litigation, the word uncertainty is not welcome, especially for people who think a generated number coming from a laboratory is the panacea, white or black, with nothing in between. These guidelines may give them a realistic perspective, although they may not be enough to educate them properly about heterogeneity and the variability it creates in sampling and analytical protocols.

11.8.2. A Logical, Economic Distribution of the Total Allotted Variance

Many practitioners allot an equal variance to all sampling stages; in appearance it seems a logical thing to do. However, it is a fact that sampling stages are more expensive when the size of the fragments in the material to be sampled increases and when the necessary sample mass increases. This would suggest allowing the primary sampling stage (i.e., Fundamental Error FSE1) half of the total allotted variance assigned to the Fundamental Sampling Errors, then 1/4 of it for the secondary sampling stage (i.e., Fundamental Error FSE2), then 1/8 of it for the tertiary sampling stage (i.e., Fundamental Error FSE3), and so on. Taking into account that the Residual Variance s²Residual is not anywhere close to zero, making the optimistic assumption that stringent precautions were taken to make sampling correct, to minimize the effects of segregation, and to minimize the Analytical Error, and also saving some allowance for the later contribution of the Heterogeneity Fluctuation Errors HFE2 and HFE3, it is a safe guideline to allot half of the Total Allotted Variance to the Residual Variance, leaving the other half for all the Fundamental Sampling Errors involved. Then, the logical and most economic sampling protocol should allot the variance for each sampling stage as follows, first taking the example of gold for an exploration and ore resources evaluation program.

11.8.2.1. Distribution of the Total Allotted Variance for Gold in Exploration and Grade Control

Total Allotted Variance: s²Allotted = (0.32)² = 0.1024, which is 32% total uncertainty.
Residual Variance: s²Residual = (0.32)²/2 = 0.0512, which is 22.6% uncertainty.
Variance of the Fundamental Sampling Error for the primary sampling stage: s²FSE1 = (0.32)²/4 = 0.0256, which is 16% uncertainty.

This 16% uncertainty has always been Gy's recommendation for the allotted variance of the Fundamental Sampling Error for gold at a primary sampling stage, so there was logic in that choice; unfortunately, the logic of such a decision is often lost under piles of literature.

Variance of the Fundamental Sampling Error for the secondary sampling stage: s²FSE2 = (0.32)²/8 = 0.0128, which is 11% uncertainty.
Variance of the Fundamental Sampling Error for the tertiary sampling stage: s²FSE3 = (0.32)²/16 = 0.0064, which is 8% uncertainty, and so on; the variance of FSE becomes more stringent as the sampling stage involved becomes cheaper.

11.8.2.2. Distribution of the Total Allotted Variance for Copper in Exploration and Grade Control

Total Allotted Variance: s²Allotted = (0.20)² = 0.040, which is 20% total uncertainty.
Residual Variance: s²Residual = (0.20)²/2 = 0.020, which is 14% uncertainty.
Variance of the Fundamental Sampling Error for the primary sampling stage: s²FSE1 = (0.20)²/4 = 0.010, which is 10% uncertainty. This is a reasonable guideline for most base metals.
Variance of the Fundamental Sampling Error for the secondary sampling stage: s²FSE2 = (0.20)²/8 = 0.005, which is 7% uncertainty.
Variance of the Fundamental Sampling Error for the tertiary sampling stage: s²FSE3 = (0.20)²/16 = 0.0025, which is 5% uncertainty, and so on.

Similar calculations can be made with the recommended allotted variances for environmental sampling, for material balance, and for the sampling of important commodities. For more details refer to Pitard.133
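The halving cascade of this section is easy to generalize to any total allotted standard deviation; a minimal Python sketch:

    def allotted_variances(s_total: float, n_stages: int = 3) -> dict:
        """Halving rule of Section 11.8.2: half of the total allotted variance
        to the residual variance, then 1/4, 1/8, 1/16, ... of the total to the
        FSE of the successive sampling stages."""
        s2 = s_total**2
        budget = {"Total": s2, "Residual": s2 / 2.0}
        for stage in range(1, n_stages + 1):
            budget[f"FSE{stage}"] = s2 / 2.0**(stage + 1)
        return budget

    for name, var in allotted_variances(0.32).items():  # gold in exploration
        print(f"{name:>8}: variance = {var:.4f}, uncertainty = {var**0.5:.1%}")
    # Reproduces 32%, 22.6%, 16%, 11.3%, and 8%; use 0.20 for copper.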

12 Other Approaches, a Strategy, and Cardinal Rules for the Estimation of the Variance of FSE

12.1. Introduction

The following material should not be perceived as a replacement for the method of choice to estimate the constants K and x in the formula suggested by François-Bongarçon to quantify the variance of the short-range Quality Fluctuation Error QFE1 affecting splitting processes in routine sampling and subsampling protocols. The suggested calibration was approved as a method of choice in a common publication by Pitard and François-Bongarçon130 and should remain so as far as sampling practitioners are concerned. However, Heterogeneity Tests are far from perfect, and no matter how careful practitioners are, there are pitfalls that can be prevented by paying attention to the arguments presented in this chapter. Therefore, the only objective is to suggest to geologists and mineralogists that early on they can provide valuable information by adding the necessary observations to the drilling log and by performing simple mineralogical tests. The added information can help to prevent great mistakes in the way exploration and grade control data are looked at. But first, some paradigms that are well accepted by sampling experts should be eliminated.

In the mind of many people a Poisson process cannot take place unless the particles of the constituent of interest (e.g., gold particles) are liberated from their rock matrix; nothing could be further from the truth. Such a belief is based on the idea that gold particles should be randomly distributed, but obviously they are not. There are plenty of geological explanations for in-situ gold particles to be distributed in a certain area of a small ore block (e.g., 15 x 15 x 15 meters). Therefore, someone may rightly object to using a Poisson model, which is the simplest possible and most random way in which we may explain why the gold particles are where they are. However, all this assumes some a priori knowledge of the regionalization within that little block. We may have some of that knowledge between blocks, but not necessarily within any given block. Therefore, before going any further, we must elaborate on the paradigm of being an observer, because the observer has no idea where the gold in that block is. He may know there is gold, but he does not know where. The resulting effect is that when he drills that block, and within that block there may be 1, 2, 3, or more clusters of gold particles somewhere, the location to drill chosen by the observer, who knows nothing ahead of time, is a random process of its own, even if the gold is not strictly distributed at random. So, the resulting gold content of that core, within that block, can be assimilated to a random process, not because of the way it is distributed in the deposit,

but because of what the observer is doing with the selected location and the selected basic volume of the support of observation as he becomes a participant; there is a subtle difference. It is exactly the same thing for coarse fragments in which the gold is not necessarily liberated. For the purist who rightly insists that random variables be defined by reference to an appropriate probability block, it is not much of a loss to take the Poisson model as a welcome tool to help us, especially when the observer is personally responsible for introducing the Poisson process in the first place (i.e., no a priori knowledge and an extremely small support volume). These are the main reasons why the author decided to add Parts 5 and 6 to this book, which are necessary to leave no stone unturned. For anyone who may have the desire to better understand what is meant by "Poisson process," Kingman's book (1993) is an excellent source.181

With this knowledge in mind it is not difficult to demonstrate that collected data for most trace constituents, and gold is one of them, are affected by a Poisson process of some kind, originally introduced because of the limited volume of the drilling support. This has huge implications for the collection of representative samples to perform Heterogeneity Tests and ultimately calibrate the constants K and x. The appropriate approach for conducting Heterogeneity Tests for major and some minor constituents is well established, and the objective of this chapter is not to question this. However, for low-grade gold deposits for example, the conventional approach may indeed work well for deposits with finely disseminated gold, but it may be misleading when gold particles are large (e.g., larger than a few hundred microns), or when fine gold or any other trace constituent clusters. The author witnessed many such cases, and clearly there is a need to suggest a strategy to make sure sampling practitioners do not reach over-optimistic conclusions. Again, without understanding how Poisson processes may take place, the following material may seem bizarre to the reader. In a letter criticizing François-Bongarçon's work, Smee and Stanley182 said:

"Gy's formula is based on and derivable from the binomial theorem. Consequently, Gy's formula doesn't apply to samples containing very low concentrations of elements contained in rare grains (e.g., Au, PGE, diamonds, etc.), where a Poisson relation is applicable. Our avoidance in referencing Gy stems directly from the fact that we consider samples containing nuggets to be a scenario that is inconsistent with Gy's approach."

This statement shows that sampling practitioners in the world of Measurement Uncertainty vastly misunderstand Gy's work and have no idea about the many subtleties of his propositions; therefore they are in no position to criticize those who apply his work in a wise and knowledgeable way. First, the Poisson model is a limit case of the Binomial Model used by Gy, and therefore a close "cousin," and Gy was perfectly aware of nugget problems. Nobody who is knowledgeable enough would use Gy's general formula to calculate the variance of FSE for a sample mass that is too small by several orders of magnitude. However, anyone can turn the formula around and calculate the necessary sample mass that is required to prevent the introduction of a Poisson process, a domain for which the formula is perfectly applicable. This is exactly what Gy always did, and it is what is suggested at the end of this chapter.

12.2. Cardinal Rule #1 in Sampling

Biases in sampling are the worst misfortune that may take place, and they were the driving force behind establishing the many rules of sampling correctness, so that the theoretical developments of equi-probable sampling made by Gy and Matheron could apply in practice. This led to the many advances to minimize the Increment Delimitation Error, Increment Extraction Error, Increment Preparation Errors, and Increment Weighting Error, which are the biggest contribution of Gy's theory by far, according to his own words. Is this sufficient to prevent sampling biases? The answer is no. For example, it is well known that the content of a constituent of interest may drastically change from one size fraction to another. Then plain logic suggests that the following Cardinal Rule in sampling should never be broken: a sample mass that is too small to represent all size fractions well cannot provide a sample representative of anything else; this has huge implications for any kind of Heterogeneity Test. Successive stages of sampling and subsampling may each require compliance with a preestablished limit that highly depends on the practitioner's objectives, as suggested in Chapter 11. But the most difficult size fraction to properly represent in the sample is obviously the one containing the largest fragments. This strongly suggests using the following formula, already presented in equation (11.37), to calculate the variance of the Fundamental Sampling Error, to make sure a given size fraction is well represented in collected samples:

s_{FSE}^2 = \left( \frac{1}{M_S} - \frac{1}{M_L} \right) f \rho \left[ \left( \frac{1}{a_{Lc}} - 2 \right) d_{F_{Lc}}^3 + \sum_x d_{F_{Lx}}^3 a_{Lx} \right]    (12.1)

Notations are listed in Section 11.7. We already saw that this formula can often be simplified for many applications:
• if M_L > 10 M_S,
• if d_FLc is not much different from d, defined as the size opening of a screen that would retain 5% of the material by weight, and
• if a_Lc is small, then:

s_{FSE}^2 = \frac{f \rho}{M_S} \left( \frac{1}{a_{Lc}} - 2 \right) d_{F_{Lc}}^3    (12.2)

and if d_FLc = d, so we can make sure the most difficult size fraction is well represented in the sample, with a_Lc = 0.05 by definition, we obtain:

s_{FSE}^2 = \frac{18 \cdot f \cdot \rho \cdot d^3}{M_S}    (12.3)

This convenient formula provides a filter to make sure the exponent x for d is not abused when it is used in a formula like the following one:

s_{QFE_1}^2 = \left( \frac{1}{M_S} - \frac{1}{M_L} \right) K \cdot d^x    (12.4)

where:

K = f \cdot g \cdot c \cdot (d_\ell)^r \quad \text{and} \quad x = 3 - r

K and x are the key factors to quantify in various experiments. If x < 3, clearly it is not an issue when the values of d are below 1 cm; however, it can indeed become an issue for large values of d, such as when sampling run-of-mine material.

Example of application: if run-of-mine material has a value of 10 cm for d, a roughly 1-ton sample is required to represent the coarsest fragments with an uncertainty of 15% (1s); it would be unfortunate to recommend a much smaller mass on the basis that x is much smaller than 3. Obviously, the value used for K has a big influence on the outcome of this discussion; indeed, if K is very high it is likely that there is no problem. The point is: be very cautious about this issue; a verification sketch using equation (12.3) follows.
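A minimal sketch of this filter built on equation (12.3); the shape factor f = 0.5 and density ρ = 2.7 g/cc are assumed typical values, as in Chapter 11:

    def mass_for_top_size(d_cm: float, s_rel: float,
                          f: float = 0.5, rho: float = 2.7) -> float:
        """Equation (12.3) solved for M_S: sample mass (g) needed so the
        coarsest 5% size fraction is represented within s_rel relative (1s)."""
        return 18.0 * f * rho * d_cm**3 / s_rel**2

    # Run-of-mine material with d = 10 cm and a 15% (1s) tolerance:
    print(f"{mass_for_top_size(10.0, 0.15) / 1e6:.2f} t")  # about 1.08 t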

12.3. Cardinal Rule #2 in Sampling

The size dM of the grains of the mineral of interest, liberated or not, must play an important role in the necessary sample mass. dM can also be a cluster equivalent when several of those grains are very close to one another within a core sample or within a large fragment. Gy corrected for this problem in an elegant way, not always well understood by practitioners, with his liberation factor. In other words, in his original formula with x = 3, both concepts d and dM were preserved; be aware that this is no longer the case with formula 12.4. Often, especially for trace constituents, it is difficult and impractical to determine the liberation factor with sufficient accuracy, and this makes some formulas vulnerable. An enormous literature has been written on this subject, the best of it by François-Bongarçon.183,184 However, it is not a must to use the conventional, favorite approach suggested by Gy's general and well-known formula. The following suggestion is pragmatic, accurate, and falls in line with Ingamells' approach; it is summarized in the three following statements:

• Use Gy's suggested approach for liberated gold when dM, which is dAu in formula 12.5, becomes the dominant factor; it can be generalized to many other components of interest:

s_{FSE}^2 = \left( \frac{1}{M_S} - \frac{1}{M_L} \right) \frac{f_{Au} \cdot g_{Au} \cdot \rho_{Au}}{a_L} d_{Au}^3    (12.5)

• Verify that the sample mass suggested by the generalized version of equation (12.4) is compatible with the mass necessary to represent all size fractions in the lot by using equation (12.3).

• The largest required sample mass for a pre-selected precision, obtained by equation (12.3) (i.e., using d) and equation (12.5) (i.e., using dAu), necessarily takes priority when deciding what the sampling protocol should be.

Equation (12.5) can be generalized by defining new notations:
fM: the shape factor of the constituent of interest
gM: the particle size distribution factor of the constituent of interest
ρM: the density of the constituent of interest
dM: the maximum size of the particles of the constituent of interest, liberated or not, or of clusters of such particles contained in a single fragment of the surrounding matrix; dM is defined as the size of a screen that would retain no more than 5% by weight of all the particles of the constituent of interest.

Thus, we obtain the very useful simplified formula:

IH_L = f_M \cdot g_M \cdot d_M^3 \cdot \frac{\rho_M}{a_L}    (12.6)

Useful sampling nomographs can be calculated with the following formula:

s_{FSE}^2 = \left( \frac{1}{M_S} - \frac{1}{M_L} \right) \frac{f_M \cdot g_M \cdot \rho_M \cdot d_M^3}{a_L}    (12.7)
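A minimal sketch of equation (12.7) applied to the delayed-comminution case mentioned in the next paragraph (a 300-µm gold particle surviving pulverization); the 1.2 g/t grade, the 1-kg pulp mass, and the factors f_M = 0.2, g_M = 0.25, ρ_M = 16 are assumptions for illustration:

    def fse_variance_constituent(m_s: float, m_l: float, d_m: float, a_l: float,
                                 f_m: float = 0.2, g_m: float = 0.25,
                                 rho_m: float = 16.0) -> float:
        """Equation (12.7): relative variance of FSE driven by the size d_m (cm)
        of the particles or clusters of the constituent of interest."""
        return (1.0 / m_s - 1.0 / m_l) * f_m * g_m * rho_m * d_m**3 / a_l

    # A 30-g fire-assay split from a 1-kg pulp still carrying 300-um gold at 1.2 g/t:
    s2 = fse_variance_constituent(m_s=30, m_l=1_000, d_m=0.030, a_l=1.2e-6)
    print(f"s_FSE = {s2**0.5:.0%}")  # about 76% relative: the 30-g subsample is inappropriate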

The great advantage of this approach is its accuracy and the ease of collecting the relevant and necessary information through microscopic observations; it should also somewhat reconcile Gy, Ingamells, and François-Bongarçon. In the event reconciliation is not possible, it should be a clear indication that some heterogeneity properties of the constituent of interest are still unknown and further investigation is needed. This debate naturally leads to Cardinal Rule #3.

Another advantage of equation (12.7) is for subsampling finely ground material, as some constituents do not comminute well, such as soft ones like gold, molybdenite, galena, and many more. Very hard minerals like chromite may show the same problem. For example, a sample pulverized to 99% minus 106 microns may still contain a 300-micron gold particle, making all other formulas weak and perhaps misleading.

As Pierre Gy said many times, especially when criticizing the work of Richard,185 when deciding what the exponent of d should be, and therefore the constant x, there is a confusion between FSE, QFE1, and even the Analytical Error AE, poorly defined by nonchemists and TOS experts. This confusion has been responsible for total chaos for over a century, and it still remains an issue today. The problems are:

1. For very fine material the variance of FSE rapidly becomes a negligible factor, unless unrecognized delayed comminution takes place for the constituent of interest.
2. The segregation error can be huge, as the constituent of interest is liberated and possibly of a very different density than the rest of the material.
3. Taking the optimistic assumption that analytical increments are taken perfectly at random (an absolute requisite for Gy's definition of GSE), which is rarely the case in the balance room of a laboratory, the variance of GSE can become small indeed;

however, it takes work, and an analytical chemist is not always willing to spend the necessary time to perform the task correctly. As a result, the segregation error, which is no longer GSE, may become vastly underestimated, because it no longer obeys the rules of sampling correctness set by the TOS.
4. The Analytical Error AE cannot be estimated by performing replicate assays that include the last FSE and last GSE. Let's assume the chemist takes a 30-g analytical subsample for fire assay; the taking of that sample has nothing to do with the Analytical Error, which includes fusion, cupellation, acid digestion of a bead, contamination, losses, spectrometer calibration, use of a precision balance, additive and proportional interferences, and so on. In other words, it is very hard, if not impossible in some cases, to appreciate what AE really is. Furthermore, AE is extremely operator-dependent. There is no such thing as a bad analytical method; there are only incompetent analysts who apply it under the wrong conditions.
5. There is no such thing as a segregation-free analysis when taking replicate samples in a given size fraction, as particles segregate even if they are all the same size. They will most certainly segregate because of density, shape, or electrostatic property differences, and so on.

All this is clearly summarized in the sketch illustrated in Figure 12.1, very familiar to Visman, Ingamells, and Gy through verbal conversations, and to many others who were wise enough to admit that what they measured with replicate samples or replicate assays may have nothing to do with the variance of FSE. It can be noticed as well that when segregation is mentioned in this figure, it is not necessarily referring to GSE as defined in Chapter 13; the subtle difference depends on what the operator may do. So, this strongly suggests calling variance sources by their respective names instead of calibrating x to compensate for things that are not clearly defined or understood. An example is appropriate: an operator shakes a laboratory pulp to collect a tiny analytical sample, then makes the assumption that there is no longer any significant segregation in the pulp, and finally takes one or two tiny increments with no respect for the TOS. The resulting variance, after guessing what the analytical variance should be and removing it, is found to be large. The operator puts the blame on a large variance of FSE, when it is clear that he was introducing a massive segregation variance because of the way he collected the increments. He was introducing a variance that has nothing to do with FSE, nor with GSE, because all the subtle principles clearly defined in Chapters 6 and 13 were completely ignored, therefore prohibiting the segregation variance from being the random one it should be.

12.4. Cardinal Rule #3 in Sampling: Using a Logical Strategy

The following integrated, iterative approach to estimate the variance of FSE, or of FSE + GSE, should be made a cardinal rule in sampling. Iteration is the word of wisdom in sampling. The following steps are not necessarily suggested in chronological order; rather, the steps can be taken simultaneously, which ultimately will provide confidence that no stone has been left unturned.

12.4.1. The Mandatory Calibration of K and x

The calibration of the constants K and x in equation (12.4), as suggested by François-Bongarçon, is a mandatory step that is nonnegotiable; please note the notations in that formula very carefully. Indeed, the use of the notation QFE1 is valid only if the operator has collected many increments in full compliance with sampling correctness, which is a very optimistic assumption, as experience proves. If not in full compliance, then the resulting variance is anyone's guess, because there is no longer any theoretical development possible, as demonstrated by Gy and Matheron. Such a calibration allows us to minimize the variance of the Fundamental Sampling Error; it also measures the leftover effect of the Grouping and Segregation Error, which depends on the equipment used to split samples at the sample preparation room and at the laboratory, and on the operator's training, which can be a huge factor. For the details of such a procedure the reader is referred to François-Bongarçon's publications (2000 and 2001).183,184

12.4.2. The Geologist to the Rescue

It is necessary to better log the properties of gold in each geological unit. With minor modifications the same list may easily apply to other constituents of interest in iron ore, in coal, in porphyry copper deposits, and others, gold being only a convenient example. For each core sample within substantial mineralization the following information should be carefully logged:

• Where is the gold?
• What are the associations of gold?
• How much gold is finely disseminated within sulfides, such as pyrite, or other minerals?
• How much gold is coarse and perhaps near other minerals?
• Are gold and pyrite or other mineral occurrences associated with narrow or large quartz veins? If so, are there several quartz events?
• Study the size distribution of gold particles. A good histogram is needed for each geological unit. After observing several thousand samples within mineralization it should be possible to roughly estimate the size dM above which only 5% of the gold can report.
• Equally important, study the size distribution of gold particle clusters; in other words, when you see one gold particle (measure it), how many more gold particles are in the immediate vicinity? For example, 10 or more within 100 cm³? After observing several thousand samples within mineralization it should be possible to roughly estimate the size dM above which only 5% of the gold can report as cluster equivalents.
• Etc.

12.4.3. The Mineralogist to the Rescue

Suarez and Carrasco186 demonstrated in an unambiguous way that careful mineralogical studies can provide valuable information to model the variability of the liberation factor as a function of the comminution stage. It is very unfortunate that such a study does not generate more interest.

FIGURE 12.1 Logical heterogeneity test flow sheet. (Flow sheet boxes: selection of a constituent of interest within a single geological unit; calculate the constants K and x to calibrate Gy's formula for the variance of QFE1; calculate the necessary sample mass to properly represent the largest size fractions; calculate the necessary sample mass to properly represent the largest particles or clusters of the constituent of interest; reconcile information from geologists and mineralogists.)

The same study suggests that the maximum content model

suggested many years ago by Gy is a very reliable model that was used all the time in a mineral processing research laboratory (Minemet); see Gy34,35 and Pitard (1993, which is the second edition of this book, and 2009127).

12.4.4. Representing the Coarsest Fragments

The selected sample or subsample mass must fairly represent the coarsest fragments. This task is easily done by using formula 12.3.

12.4.5. Representing the Coarsest Particles of the Constituent of Interest

The selected sample or subsample mass must fairly represent the largest particles of a given constituent of interest. This task is done by using formula 12.7. It is critically important for constituents showing delayed comminution; usually, soft minerals such as gold, galena, and molybdenite, and very hard minerals such as chromite, can show such a problem. As a good example, the coarse gold case shown by Pitard and Lyman134 clearly shows that a Heterogeneity Test performed using conventional 30-g fire assays would most likely have led to very misleading conclusions; the test is not the problem, but the completely inappropriate 30-g subsample is the issue; in other words, the operators would have used the wrong tools.

12.4.6. A Logical Flow Sheet to Perform Heterogeneity Tests

Figure 12.1 summarizes the necessary steps to perform a reliable Heterogeneity Test for various constituents of interest during exploration and grade control; the approach can

easily be extended to other materials in other industries. The reconciliation box has a very important mission in cases where conclusions are grossly different: a logical explanation must be found, which may lead to important decisions concerning the selection of fully optimized sampling and subsampling protocols.

12.4.7. Calculating the Necessary Sample Weight MS Instead of the Variance of FSE

To prevent any misunderstanding about possible Poisson processes introduced by using an insufficient sample weight MS, and to avoid any issue about the validity of Gy's formulas, anyone can turn the formulas around and calculate the necessary sample weight required to prevent the introduction of a Poisson process, and so remain in a domain for which the formulas are perfectly applicable. It is also good practice to calculate the necessary sample weight in accordance with the guidelines presented in Section 11.8.


13 The Grouping and Segregation Error

In Chapter 9 we defined the Grouping and Segregation Error GSE as the complementary part of the Fundamental Sampling Error FSE within the discontinuous Heterogeneity Fluctuation Error HFE1. It is important to recall its moments as follows. The average is:

m(GSE) = m(HFE_1) - m(FSE) \approx 0    (13.1)

The variance is:

s^2(GSE) = s^2(HFE_1) - s^2(FSE)    (13.2)

or

s^2(GSE) = \gamma \xi \, s^2(FSE)    (13.3)

where:
γ is a grouping factor, dimensionless and always positive; it characterizes the size of the increments making up a sample. In fact, γ is practically proportional to the size of the increments.
ξ is a segregation factor, dimensionless, whose value is always between 0 and 1; it characterizes the amount of segregation.

It is important to note that the factors γ and ξ cannot be dissociated from one another, and it is always their product that we have to take into account. In practice, it would be extremely difficult to estimate the product γξs²(FSE), and it is never done, mainly because of the transient nature of segregation, which changes all the time with particulate materials. The estimation of the variance s²(GSE) by difference, as suggested in equation (13.2), is not precise, especially when s²(GSE) is small compared to s²(HFE1) and s²(FSE).

It should be emphasized that the results shown in this chapter have been reached at the cost of some hypotheses clearly defined in Chapter 9; they are:

• The sampling selection is correct, as presented in detail in Part VII of this book:

IDE + IEE + IPE = 0    (13.4)

• which gives all fragments or groups of fragments an equal probability P of being part of the sample.

Under those very stringent conditions we chose to define the Sample Selection Error as the optimized Sample Selection Error SSE_OPT. Then, and then only, we may write:

s^2(SSE_{OPT}) = s^2_{Gn}(HFE_1) = \frac{1 - P}{P \, N_{Gn}} DH_L    (13.5)

with:

s^2_{Gn}(HFE_1) = s^2_{Fi}(HFE_1) + \gamma \xi \, s^2_{Fi}(HFE_1)    (13.6)

It is important to recall that in the above equations we make the assumption that the Increment Weighting Error IWE is zero, or at least negligible; then we can write:

s^2_{Gn}(HFE_1) \approx s^2_{Gn}(QFE_1)    (13.7)

13.1. Minimization of the Grouping and Segregation Error

13.1.1. Conditions for the Cancellation of the Average m(GSE)

As with the bias of FSE, the bias of GSE is different from zero; however, it is almost always negligible in a first-order approximation. We do not analyze this bias in this chapter.

13.1.2. Conditions for the Cancellation of the Variance s2(GSE)

Equation (13.3) shows that the variance s2(GSE) of the Grouping and Segregation Error is equal to the product of three factors; therefore, s2(GSE) will cancel if only one of these factors is zero.

13.1.2.1. Cancellation of s2(FSE)

This problem was studied in Chapters 10 and 11, where we reached the conclusion that the variance of FSE can never cancel, unless the sample is the entire lot, which is not the objective of sampling.

Conclusion: To cancel s2(GSE) we cannot rely on s2(FSE), which is always positive.

13.1.2.2. Cancellation of the Grouping Factor

In Section 6.4.12, we mathematically defined the grouping factor as follows:

$$\gamma = \frac{N_{Fi} - N_{Gn}}{N_{Gn} - 1} \qquad (13.8)$$

The value of γ reaches zero when $N_{Gn} = N_{Fi}$.

Conclusion: The grouping factor γ cancels when the number of increments $N_{Gn}$ in the sample is equal to the number of fragments $N_{Fi}$ in the sample. In other words, each


increment making up the sample is made of only one fragment selected at random within the lot. Obviously, such a requirement is not realistic for routine purposes; however, it is exactly what we did in the suggested Heterogeneity Test in Section 11.4.2, to eliminate the risk of GSE confusing the later calculation of the variance of FSE.

13.1.2.3. Cancellation of the Segregation Factor

For all practical purposes, in Section 6.4.13, we defined the domain of the segregation factor ξ as follows:

$$1 \geq \xi \geq 0 \qquad (13.9)$$

Thus, ξ is either positive or nil.

Conclusion: The segregation factor cancels when the distribution of the fragments within the lot reaches an unstable stage of natural distribution homogeneity $(DH_L)_{min}$, which is the limit stage reached by a three-dimensional homogenizing system that would work perfectly; this is not realistic in routine work.

13.1.3. Conditions for the Minimization of the Variance s2(GSE)

It is not practical, and often impossible, to find and implement the conditions that will cancel one of the three factors γ, ξ, and s2(FSE); however, we know how to minimize each of them, and this should be our strategy.

13.1.3.1. Minimizing the Variance s2(FSE) of the Fundamental Sampling Error

Conditions for minimizing s2(FSE) were studied in detail in Chapters 10 and 11 and will not be recalled here. We only state that it is a very critical subject with which the reader should spend some time, because of its important consequences in optimizing sampling protocols.

13.1.3.2. Minimizing the Grouping Factor

If we look at the definition of γ in equation (13.8) we find that γ increases uniformly as the size of the increments increases. In other words, the larger the number of fragments in an increment, the larger the grouping factor γ, and if $N_{Gn}$ becomes equal to 1, the variance s2(GSE) becomes infinite. We could think that there is a contradiction here, because s2(GSE) becomes very large when the size of the increment approaches the size of the lot (i.e., increments of an infinite number of fragments), which should not be so by definition. This is because the simplified equation (9.91) is applicable only if $N_{Gn}$ is large compared to 1. If this condition is not fulfilled, the true value of γ should be calculated as follows:

$$\gamma = \frac{N_{Fi} - N_{Gn}}{N_{Gn} - 1} \cdot \frac{N_{Fi}(N_{Gn} - 1)}{(N_{Fi} - 1)N_{Gn}} = \frac{N_{Fi} - N_{Gn}}{N_{Gn}} \cdot \frac{N_{Fi}}{N_{Fi} - 1} \qquad (13.10)$$

With this complete formula, when $N_{Gn}$ approaches 1, γ approaches $N_{Fi}$, which is logical.


Conclusion: We can minimize the grouping factor γ by taking as many and as small increments as practically possible, assuming that IDE, IEE, and IPE are negligible when the increments are collected, which is a huge and important restriction. We cannot reduce the increment size as much as we would like, because we should never forget the rules of sampling correctness described in Part VII of this book. Therefore, there is a minimum increment weight under which we are not allowed to go, which restricts the possibilities of reducing the grouping factor. Awareness of this limitation is essential to select an effective compromise. As a rule of thumb based on numerous experiments, a sample should be made of at least 30 increments to substantially minimize the uncertainty generated by GSE; the simulation sketch after the following list illustrates the effect. Also, our experience shows that the grouping factor is by far the easiest factor to minimize in our attempt to diminish the variance of GSE. The means by which the grouping factor is minimized have important applications when we proceed to the critical inspection of a splitting process, for example:

• Coning and quartering is not effective in reducing the grouping factor (i.e., only two potential increments).
• Fractional shoveling is very effective in reducing the grouping factor (i.e., we can easily increase the number of increments to 25, 30, 50, or even more).
• When selecting a riffle splitter, it is advantageous to choose one with as many riffles as possible.
• Selecting a rotary divider is highly recommended because of the possibility of collecting a very large number of correct increments.
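The small simulation below is not part of Gy's derivations; it is a hypothetical sketch of why many small increments beat few large ones on a segregated lot. The lognormal grades, fragment counts, and sample sizes are invented for illustration only.

```python
# Illustrative simulation: replicate samples of constant total size are built
# from N increments taken from a fully segregated (sorted) lot; the spread of
# replicate sample grades shrinks as N grows, the practical effect of
# minimizing the grouping factor.
import numpy as np

rng = np.random.default_rng(7)
lot = np.sort(rng.lognormal(sigma=1.0, size=100_000))  # segregated fragment grades

def sample_grade(n_increments, fragments_per_sample=600):
    size = fragments_per_sample // n_increments         # fragments per increment
    starts = rng.integers(0, len(lot) - size, n_increments)
    return np.concatenate([lot[s:s + size] for s in starts]).mean()

for n in (1, 2, 10, 30, 60):
    replicates = [sample_grade(n) for _ in range(500)]
    print(f"N = {n:2d} increments -> s(sample grade) = {np.std(replicates):.4f}")
```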

13.1.3.3. Minimizing the Segregation Factor

The minimization of the segregation factor ξ is much more difficult to achieve than commonly accepted. Because of the omnipresence of gravity, the mother of segregation, which generates a unidirectional force, the state of segregation or distribution heterogeneity is the rule within our environment rather than the exception. The mechanisms through which segregation takes place are many; some are obvious to our observation, some are more difficult to detect. The minimum distribution heterogeneity $(DH_L)_{min}$ defined in Section 5.4.8 is a very unstable state that can easily be disturbed toward a state intermediate between $(DH_L)_{min}$ and the maximum distribution heterogeneity $(DH_L)_{max}$, through operations such as handling, reclaiming, discharging, transporting, flowing, falling, and so on. In fact, the segregation factor ξ = 0 when we reach $(DH_L)_{min}$ and ξ = 1 when we reach $(DH_L)_{max}$, as explained in Section 6.4.13. The only method at our disposal to reduce the segregation factor is to homogenize the material prior to sampling, which is not always practical or economical because of the large volume involved; furthermore, the homogenizing process is often wishful thinking and there is absolutely no guarantee of success. Homogenizers are usually limited to the size of laboratory equipment and cannot handle more than a few hundred kilograms in a single run. Also, to homogenize large lots, the amount of energy that is necessary can be enormous. If we could achieve a three-dimensional homogenization of a large lot, and if the resulting state of distribution homogeneity could be regarded as stable, this would be a good way of reducing the segregation factor. However, two remarks should be made:


• Most of the time, the energy required to homogenize a large lot in a three-dimensional space makes the operation unfeasible.
• The kind of homogeneity that we could achieve is always unstable and is rapidly transformed into a two-dimensional homogeneity.

Therefore, we must observe that, with the exception of small lots (i.e., laboratory-size lots), we can rarely achieve the necessary homogenization. Even on a small laboratory scale, achieving distribution homogeneity is often wishful thinking; in fact, many recommended homogenization procedures actually achieve the contrary, such as homogenizing material on top of a mat by rolling the mat from opposite corners. This common, popular rolling method is actually the best way to segregate the material.

In many instances, it is essential to understand where and how segregation mechanisms occur; this is important not only for the sampling operation but also for process control. A good understanding of these mechanisms underlines the overwhelming importance that the increment delimitation has in the sampling process. In fact, we can already state that if, and only if, the increment delimitation is correct, a correctly extracted increment is an unbiased estimator, regardless of the state of segregation of the lot. On the contrary, if the increment delimitation is not correct, even a correctly extracted increment is a useless and biased estimator; furthermore, the bias is directly proportional to the transient distribution heterogeneity, therefore the bias is never constant and will remain unknown.

Conclusion: As we already stated in Section 6.4.8, $DH_L$ is always greater than zero; we can therefore also state that an incorrect increment delimitation always introduces a bias in the sampling operation, and that bias has the following properties:

• The bias is never constant.
• The magnitude of the bias is unpredictable.
• The bias cannot be quantified by conventional bias tests.

13.2. A Logical Solution to Problems Generated by the Variance of the Grouping and Segregation Error

The Grouping and Segregation Error GSE has a unique position among the errors making up the Total Sampling Error TSE. We know how to completely cancel errors such as IDE, IEE, and IPE, and it is entirely up to us to do so. We know how to estimate and optimize, using a variographic experiment, errors such as HFE2 and HFE3. We know how to estimate the Fundamental Sampling Error FSE. But, unfortunately, we do not know how to estimate the complementary part of FSE, which is GSE. Taking into account the large number of variographic experiments he performed between 1960 and 1988, Gy reached the pragmatic conclusion that the product γξ, to which the variance $s^2(GSE)$ is proportional, stays slightly smaller than 1. We do not introduce an important uncertainty in the estimation of $s^2(GSE)$ if we assume that γξ ≈ 1. There is a catch to this statement, due mainly to human nature, that we will analyze shortly.


Observations from past experiments (remember, an experiment has nothing to do with routine practices) show:

$$\gamma \xi \leq 1 \quad \text{and} \quad (1 + \gamma \xi) \leq 2 \qquad (13.11)$$

Let's make it a conservative estimate and write:

$$\gamma \xi = 1 \quad \text{and} \quad (1 + \gamma \xi) = 2 \qquad (13.12)$$

In the absence of a variographic experiment, this last formula allows us to estimate the variance $s^2(HFE_1)$ of the discontinuous heterogeneity fluctuations, making the optimistic assumption that the variance of the Analytical Error AE is negligible, as follows:

$$s^2(HFE_1) = s^2(FSE) + s^2(GSE) = (1 + \gamma \xi)s^2(FSE) \leq 2s^2(FSE) \qquad (13.13)$$

Conclusion: When optimizing a sampling protocol, always assume that the variance $s^2(GSE)$ of the Grouping and Segregation Error is equal to the variance $s^2(FSE)$ of the Fundamental Sampling Error. This practice allots $2s^2(FSE)$ for $s^2(HFE_1)$.

Important remark: we demonstrated in Chapter 6 that the distribution heterogeneity $DH_L$ cannot be larger than the constitution heterogeneity $CH_L$. Because of this, we may conclude that the variance $s^2(GSE)$ cannot be larger than the variance $s^2(FSE)$. Yet, experience demonstrates very well that the variance attributed to segregation may in many cases look overwhelming. Let's explain the reasons for such an apparent contradiction:

• Claims that increments are really collected at random are often not true; in reality the sampling probability P is all over the place (e.g., going fishing in a little bag to collect the 1-g analytical subsample).
• Pseudo-statements are made about sampling correctness when, in reality, IDE, IEE, and IPE flaws are obviously present.
• An unrecognized Poisson process is present, invalidating the calculations of the variance of FSE (e.g., assaying 30 g for gold when the grade is less than 1 g/t in the presence of visible gold).
• The variance of the Analytical Error is erratic and underestimated, and so on.

In all these cases and many more, the statements leading to equations (13.11), (13.12), and (13.13) are no longer applicable, and the observed variability that we may attribute to the variance of GSE may have nothing to do with GSE.

13.3. The Transient Nature of Segregation

The moments of the Fundamental Sampling Error FSE are relatively easy to calculate because the constitution heterogeneity depends on characteristics that are easily accessible, such as fragment size and fragment composition. These characteristics are intrinsic properties of particulate materials. In the case of distribution heterogeneity, things are different; the moments of the Grouping and Segregation Error change constantly,


depending on the mechanical handling of the material. For particulate materials, segregation is a transient phenomenon, and we saw that $DH_L$ varies between a natural minimum $(DH_L)_{min}$ and a maximum $(DH_L)_{max}$ from one instant to another. Attempts to calculate the moments of this error are perfect exercises in futility and would not aid our quest to perform precise and accurate (i.e., representative) sampling.

13.3.1. Segregation Introduced by a Transfer from One Conveyor Belt to Another

The transfer of dry particulate material from one conveyor belt to another, a technique often used to change the flow direction by a 90° angle, illustrates very well the transient nature of segregation. As shown in Figure 13.1, the material may be distributed homogeneously on the first conveyor belt (i.e., near $(DH_L)_{min}$), then become highly segregated as it reaches the second conveyor belt (i.e., near $(DH_L)_{max}$). The segregation introduced by the discharging belt may vary considerably depending on the speed of the belt.

13.3.2. Segregation Introduced at the Discharge Point of a Laboratory Blender

Consider a V-blender containing a layer of black chromite particles (density about 5.5) and, on top of it, a layer of white quartz sand (density around 2.6). Such a distribution is perfectly heterogeneous. If we operate the blender for one hour, we obtain a mix that seems at first glance to be uniformly gray. The material has been homogenized. An operator may take this for granted and collect an increment directly under the discharge point of the blender. Such an operation is risky because, as the material flows out of the blender, it strongly segregates again, and the increment may very well have proportions of quartz and chromite that are nothing close to the true proportions. Because the chromite is black and the quartz sand is white, it is very easy to see the rapid evolution of the segregation, and it is strongly recommended to perform such an exercise when training new sample preparation operators. Figure 13.2 shows a typical V-blender.

In conclusion, never speculate about the distribution homogeneity of a material; a single operation may transform the very unstable state of homogeneity into a surprisingly segregated state, while another operation may succeed in temporarily rehomogenizing the material.

FIGURE 13.1 Transfer from one conveyor belt to another introduces segregation, the larger fragments having less resistance from air drag. (Panel labels: homogeneously distributed stream; segregated stream.)


FIGURE 13.2 Example of a V-blender used to achieve distribution homogeneity. (Annotations: never fill the blender above the level of the axis of rotation; segregation is likely to reoccur at the discharge point.)


13.4. The Relativity of Segregation

The kind of segregation we consider with particulate materials should not be confused with the large-scale segregation of consolidated orebodies. The segregation of compact solids is fixed if we compare the small instant of our observation to geological times. The segregation of a constituent within a compact solid is a complex phenomenon to study, and geostatisticians have solved this problem in a very elegant way with the structural term of their regionalized variables. The state of distribution homogeneity or heterogeneity often largely depends on the scale to which we refer. For example, let's assume gold is finely disseminated in quartz veins in a gold ore deposit. If we sample the deposit for quartz veins, the veins may be highly segregated in some areas of the deposit. But, on a smaller scale, if we sample a given quartz vein for gold, the gold may not be nearly as segregated.


Conclusion: there is no such thing as a segregation constant. There are two reasons for that:

• As we move a module of observation, segregation within that module may change, regardless of whether the material is a compact solid or not.
• In the case of particulate solids, the state of distribution heterogeneity is a transient quantity that may drastically change from one instant to another, or from one point to another.

13.5. Segregation because of Fragment Density Heterogeneity

Let us consider a size fraction Lα from a lot of material made of two different liberated constituents: one constituent is white quartz of density 2.7, the other is black chromite of density 4.5. The difference in color makes it easy for the observer to see the segregation taking place in what follows. Now, suppose this broken-up material travels on a conveyor belt. Obviously, both density fractions Lαβ travel at the same speed at the discharge point of the belt. The chromite fragments, having more momentum (i.e., same velocity but higher mass), segregate farther away from the discharge point than the quartz, as illustrated in Figure 13.3.

FIGURE 13.3 Segregation of dense fragments from lighter ones in the same size fraction at the discharge of a conveyor belt. (Panel labels: stream; small-density material; high-density material.)

13.6. Segregation because of Fragment Size Heterogeneity

Fine particles of a density class Lβ can segregate from coarser fragments of the same density class. At the discharge of a conveyor belt, the coarse fragments, having a larger mass and therefore encountering proportionally less resistance from air, segregate ahead of the discharge point, while the fine particles, strongly affected by air, fall just under the discharge point, as illustrated in Figure 13.4.


FIGURE 13.4 Segregation of the fine particles from larger fragments in the same density class, at the discharge of the conveyor belt.

FIGURE 13.5 Fines concentrate in the center of the pile, while coarse fragments roll down on the outside part of the pile. (Panel label: slow stream.)

If the speed of the stream is substantially reduced, we may see a circular type of segregation: the fine particles sift through the larger fragments and agglomerate in the upper part of the pile, while the large fragments roll down over the fines and segregate at the bottom of the pile, as shown in Figure 13.5. The phenomenon is amplified as the pile becomes steeper.

13.7. Segregation because of Fragment Shape Heterogeneity

Fragments with sharp angles have a different rate of friction and angle of repose than fragments with round and smooth edges. Suppose we have a mixture of two


liberated constituents, one occurring as cubes, the other occurring as spheres, which is an extreme case to illustrate our point. Both classes of fragments have the same size and the same density. At the discharge point of a slow-moving conveyor belt, cubes segregate in the middle of the pile, while spheres segregate around the pile as illustrated in Figure 13.6.

FIGURE 13.6 Segregation generated by different angles of repose. (Panel label: slow stream.)

FIGURE 13.7 Segregation generated by different friction rates on a chute. (Panel labels: falling stream in a chute; angular, or light; round, or heavy.)


At the discharge point of a chute, as shown in Figure 13.7, the two classes, angular and round, are likely to behave differently: spheres gain momentum by rolling down the chute, while cubes adhere to the bottom wall of the chute because of the strong friction generated by their sharp angles. Accordingly, cubes segregate under the discharge point, while spheres segregate ahead of the discharge point.

13.8. Segregation because of Air Turbulence

The trajectory of fine particles can be strongly affected by air turbulence, generating large Increment Delimitation, Extraction, and Preparation Errors responsible for unacceptable biases. This type of segregation is encountered when fines travel at too high a speed on a conveyor belt or inside a duct. It is also encountered with sampling cutters moving too fast, creating an air cushion around them, which may prevent fines from properly entering the cutter. The result is that coarse fractions have a higher probability of being selected than fine fractions, leading to an unacceptable deviation from the rules of sampling correctness addressed in Part VII of this book. Solutions to this problem are:

• Do not use a belt conveyor to feed dry fine materials to a sampler, unless it travels very slowly and the chute to the cutter opening is very short.
• Use screw feeders discharging the fine material 1 or 2 cm above the plane generated by the cutter opening.
• Use a slow cutter speed.

The same problem may be encountered when riffle splitting fine and dry material by means of a Jones riffle splitter. Often, a large portion of the fine particles is lost, which generates a deviation from correctness. There are two solutions to this difficulty:

• Feed the riffle splitter very slowly.
• Use a riffle splitter equipped with door panels on both sides, holding the fine particles tightly. An example of such a splitter is the riffle-type sampler described in the American Society for Testing and Materials (ASTM) standard D 2013.

13.9. Segregation because of Vibrations

Shaking tables used to separate minerals of different densities are well known in mineral processing. The same segregation effect often occurs naturally when a mixture of dry particulate material is submitted to regular vibrations. For example, a well-prepared laboratory sample may have its constituents homogeneously distributed when leaving the sample preparation room, while the same sample may become highly segregated after several hours of transportation inside a car. Laboratory vibro-spatulas commonly


used to weigh the final analytical subsample may introduce segregation that annihilates all previous efforts to obtain a representative sample. Again taking our example of a mixture of well-calibrated fine quartz and chromite, it can easily be demonstrated that the homogeneously distributed mixture strongly segregates into two phases at the discharge point of the vibro-spatula. If the material is released on a sheet of paper as the vibrating spatula travels slowly across the sheet, we may observe a segregated linear pile made of pure quartz at first, then a mix of quartz and chromite in the middle, then pure chromite at the end, as illustrated in Figure 13.8. This segregation mechanism can generate a significant bias when the analyst is not aware of it; most of the time the analyst cannot see the different minerals, as they all look the same in the fine powder. In Figure 13.8, assume all the material on the sheet weighs about 3 grams, and then imagine that the analyst needs only 1 gram for the assay of the chromite content; the bias would be enormous. Furthermore, this phenomenon may ruin a Standard Reference Material (SRM) that has been used several times. It is common to take SRMs' accuracy for granted; however, they are not immune from segregation, and it is not difficult to imagine the consequences when we know that SRMs are often used for calibration in modern instrumental analysis.

Rotary Sample Dividers (RSD) equipped with a stationary vibro-feeder and many adjacent buckets, as illustrated in Figure 13.9, are becoming very popular because of the possibility of collecting a very large number of increments in a short time. Because particles or fragments bounce in a perfectly symmetrical way from one bucket to the next, the rotation speed of such a unit can be as high as 45 cm/s at the discharge point to the radial buckets. However, be careful with a reject-type RSD equipped with only two radial buckets, one for the sample and one for a quality control duplicate, as illustrated in Figure 13.10: the bouncing of particles or fragments off the edges of a rotating bucket can become an issue, and the recommended speed is only 30 cm/s. In modern laboratories, these sample dividers are highly recommended as they provide an excellent way to minimize the variance of GSE.

FIGURE 13.8 Segregation introduced by a laboratory vibro-spatula. (Panel labels: trajectory of the vibrating spatula; material trajectory; almost pure quartz; mixture quartz-chromite; almost pure chromite.)


FIGURE 13.9 Stationary feeder and rotating divider equipped with many adjacent buckets.

FIGURE 13.10 Rotary Sample Divider (RSD) equipped with only two rotating buckets, called the reject type. (Panel label: non-adjacent rotating buckets.)


13.10. Other Causes for Segregation

The causes of segregation described above, taking place within a lot of particulate material either on a large scale at the plant or on a small scale at the laboratory, are causes easy to observe. There are many other causes that are not reviewed in this book, because the subject is vast. Examples are:

• Segregation occurs because of different magnetic properties between fragments. Some fragments containing magnetite, iron, cobalt, nickel, or gadolinium can be strongly magnetic (i.e., ferromagnetic constituents). Segregation may also occur under special conditions with constituents having weak magnetic properties (i.e., diamagnetic constituents).
• Segregation occurs because of the electrostatic properties of some liberated constituents occurring as tiny flakes, such as biotite, scheelite, etc. This kind of segregation can be devastating for the analyst who tries to collect a subsample from a plastic bag for potassium or tungsten determination.
• Segregation occurs because of the different moisture contents of liberated constituents adsorbing water in very distinctive amounts.
• Segregation occurs because some liberated constituents adhere differently to the walls of storage bins.

Mitchell,187 Johanson,188 Williams,189 and Minnitt170 are authors of interesting articles on segregation.


Part V

Integration of Visman and Ingamells’s Works into the Theory of Sampling


14 The Works of Visman and Ingamells Relevant to the Theory of Sampling

14.1. Scope

The objective of this chapter is not to compare or debate the works of Visman10–13 and Ingamells14–22 against Gy's work; there is no doubt such a document would be futile, because Gy's achievement is well established as a flawless Theory of Sampling. Rather, it is important to emphasize the parts of their work that may be beneficial and provide relevant, harmonious additions in some areas, and indeed there are many possibilities. Such additions may enhance our capabilities to predict sampling difficulties by using a stronger strategy, and to design better sampling experiments that would allow us to further understand the heterogeneity of minor and trace constituents.

14.2. An Introduction to Poisson Processes

Minor and trace constituents are the key words in many industries where their accurate determination is of paramount importance. The Theory of Sampling would be incomplete without guidance on what we can do when Poisson processes are almost inevitable, when people are unaware of such possibilities, or when people are in denial that they are indeed a possible event; there are numerous examples, such as quantifying trace amounts of constituents in pharmaceutical products, in high-purity materials, in the environment, in genetically modified organisms, in precious metals exploration, etc. This is where the work of Ingamells is priceless; his entire work is based on Visman's work and Poisson statistics. The author should make the emphasis very clear, because it is crucial: without a good understanding of Poisson processes there is no possible in-depth understanding of the TOS, because too many subtleties become elusive; this has escaped the attention of most sampling practitioners around the world and it needs to be corrected.

14.2.1. A Simple, Pragmatic Observation

Figure 14.1 represents the position of a few gold particles in a piece of diamond core drilling. At first sight there is no obvious pattern, and the particles seem distributed at random within that small volume. We could easily imagine many other examples. A Poisson process is a model of such a phenomenon, which uses the theory of probability to


FIGURE 14.1 Simplistic illustration of a few gold particles within a piece of HQ-diameter diamond core drilling sample.

describe this sort of highly random behavior; a Poisson process takes place when the probability of including one or several gold particles in the sample is small. Some people may object to this model, arguing that if we look at the same volume within a larger area, trends or local clusters with a logical pattern may slowly appear. The point is that a sample is too often a miniature amount of material, and at such a small scale the observed features, such as why this particle is there rather than here, or why it is bigger than the other one, show statistical independence; and yes, in practice this assumption of independence is satisfied only approximately. The Poisson model is simply the most random possible model to describe a phenomenon where the collected sample is obviously one or several orders of magnitude too small to contain a sufficient, statistically significant number of gold particles.

14.2.2. The Poisson Model: The Law of Small Probabilities

The Poisson model is a limit case of the binomial model where the proportion p of the constituent of interest is very small (e.g., a fraction of 1%, ppm, or ppb), while the proportion q = 1 − p of the material surrounding the constituent of interest is practically 1. We will expand on this concept later in this chapter and in Chapter 25. Experience shows that such a constituent may occur as rare, tiny grains, often relatively pure though they do not have to be, and they may or may not be liberated. As the sample becomes too small, the probability of having one grain, or a sufficient number of them, in one selected sample diminishes drastically; furthermore, and this is where it becomes confusing to many, when one grain is present, the estimate of $a_L$ becomes so high that it is often considered an outlier by the inexperienced practitioner, while it is the most important finding and should indeed raise the practitioner's attention. It is rather amusing that people get so upset by one high value when they pay no attention to the many that are underestimated; people are easily taken prisoner by paradigms, such as the better-known normal or lognormal distributions.


Some people may object to using Poisson statistics to model sampling errors when the constituent of interest is not liberated, as they think in terms of discrete particles. Liberation does not have to occur for Poisson processes to take place; they are solely the result of the selected sample mass being too small by one or several orders of magnitude. The remark can also be generalized to the In-Situ Nugget Effect, when the basic module of observation is too small by one or several orders of magnitude to fairly represent the true, unknown grade at any given point; this is further explored in Part VI of this book. When using the Poisson model, it is irrelevant to argue about whether a sample mass should be 1500 g, for example, rather than 1000 g; we should instead look at why the sample mass is 1000 g when it should be 100,000 g. We no longer have to look at too many details; we must focus on the important ones that can make a staggering difference. Experience proves again and again that, in such a search, it is the maximum size of the nuggets, clusters, or nearly pure constituent grains, liberated or not, that can make a tremendous difference.

Let's call $p(x = r)$ the probability of r low-frequency isolated coarse grains, liberated or not, appearing in a sample, and θ the average number of these grains per sample:

$$p(x = r) = \frac{\theta^r e^{-\theta}}{r!} \qquad (14.1)$$

with r = 0, 1, 2, 3, . . . The mean value of the Poisson distribution is $\theta = n \cdot p$, and its variance is $n \cdot p \cdot q \approx n \cdot p = \theta$, since q is close to 1.

14.2.3. Shape of a Poisson Distribution

It takes a value of θ = 6 for the Poisson distribution to begin to vaguely look like a normal distribution. When θ = 1 there is the same probability of having r = 0 or r = 1. When θ becomes much smaller than one, the mode of the distribution becomes skewed toward low grades in a drastic way (i.e., most of the time we completely miss what we are looking for).

14.2.4. A Simple, but Useful Exercise

Ten 1000-g samples collected from a large lot, and ground to minus 500 µm, were submitted to a gravity concentration for the coarse gold. The numbers of coarse gold particles found in the samples were 2, 1, 0, 0, 2, 1, 3, 0, 1, and 2, respectively.

1. Calculate the average number of coarse gold particles per sample:

$$\theta = (2 + 1 + 0 + 0 + 2 + 1 + 3 + 0 + 1 + 2)/10 = 1.2$$

2. Calculate the probability of occurrence of 0, 1, 2, 3, 4, 5, and 6 coarse gold particles in any 1000-g sample selected at random from the same lot. Using equation (14.1), the following results are obtained:

$$p(x = 0) = (2.718)^{-1.2} \frac{(1.2)^0}{0!} = 0.301$$

$$p(x = 1) = (2.718)^{-1.2} \frac{(1.2)^1}{1!} = 0.361$$

$$p(x = 2) = (2.718)^{-1.2} \frac{(1.2)^2}{2!} = 0.217$$

$$p(x = 3) = (2.718)^{-1.2} \frac{(1.2)^3}{3!} = 0.087$$

$$p(x = 4) = (2.718)^{-1.2} \frac{(1.2)^4}{4!} = 0.026$$

$$p(x = 5) = (2.718)^{-1.2} \frac{(1.2)^5}{5!} = 0.006$$

$$p(x = 6) = (2.718)^{-1.2} \frac{(1.2)^6}{6!} = 0.001$$

3. If one of these coarse gold particles on average makes a contribution of about 0.5 g/t, estimate the variability of typical 1000-g samples using the above distribution:

• 30.1% of the samples would show 0 g/t gold
• 36.1% of the samples would show 0.5 g/t
• 21.7% of the samples would show 1.0 g/t
• 8.7% of the samples would show 1.5 g/t
• 2.6% of the samples would show 2.0 g/t
• 0.6% of the samples would show 2.5 g/t
• 0.1% of the samples would show 3.0 g/t

It is interesting to notice that, with θ = 1.2, there is a much greater chance of underestimating the gold content than of overestimating it. This has huge implications in sampling, especially in above-cutoff grade control when too few samples are used to make an economic selection.
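The exercise is easy to check numerically; the short sketch below reproduces the table using scipy's Poisson distribution (any equivalent implementation of equation (14.1) would do).

```python
# Sketch: reproduce the exercise with scipy; probabilities follow equation (14.1).
from scipy.stats import poisson

counts = [2, 1, 0, 0, 2, 1, 3, 0, 1, 2]   # coarse gold particles per sample
theta = sum(counts) / len(counts)          # 1.2 particles per 1000-g sample

for r in range(7):
    # each particle contributes ~0.5 g/t, so r particles -> 0.5*r g/t
    print(f"p(x = {r}) = {poisson.pmf(r, theta):.3f} -> assay {0.5 * r:.1f} g/t")
```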

14.2.5. Additivity of Poisson Processes

Someone may think the above is already a bad case; it is nothing compared to what follows. When the samples taken contain discrete grains of the constituent of interest, and they are subsampled in such a way that the subsamples also contain discrete grains of reduced size, a double Poisson distribution of the assay values is likely to take place. This phenomenon is very common in sampling, and people may work for years on such a case and never see a thing. The most important feature of the Poisson distribution is indeed its additivity.


If each primary sample contains a limited average number μ of grains of the constituent of interest, and the subsamples they generate also contain a limited average number θ of reduced grains of the constituent of interest, the distribution of assay values is doubly Poisson:

$$P[x = r; y = n] = P[x = r] \cdot P[y = n] = \frac{\theta^r e^{-\theta}}{r!} \cdot \frac{\mu^n e^{-\mu}}{n!} \qquad (14.2)$$

The sum x + y has the distribution $P[\theta + \mu]$.

14.2.6. Programming Factorials

Difficulties may arise when programming factorials. The improved Stirling approximation is convenient to use:

$$n! \approx \left(\frac{n}{e}\right)^n \sqrt{\frac{\pi(6n + 1)}{3}} \qquad (14.3)$$
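A minimal sketch of equation (14.3) follows; the comparison against math.factorial is only a sanity check, and the function name is an assumption of this example.

```python
# Sketch of equation (14.3): improved Stirling (Gosper) approximation of n!,
# handy when programming Poisson terms without large-integer arithmetic.
import math

def factorial_approx(n):
    return (n / math.e) ** n * math.sqrt(math.pi * (6 * n + 1) / 3)

for n in (5, 10, 20):
    approx, exact = factorial_approx(n), math.factorial(n)
    print(f"n = {n:2d}: approx = {approx:.6g}, "
          f"relative error = {(approx - exact) / exact:+.2e}")
```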

This was a short introduction to Poisson processes, which are very useful to model the reality shown by selected samples that are too small by one or several orders of magnitude to maintain the variance of FSE in its normal domain of applicability. But, before going further in depth on this subject, it is essential to introduce the works of J. Visman and especially his follower C. O. Ingamells.

14.3. Visman's Sampling Equation

Visman's sampling constants A and B13 are useful when the material to be sampled is not well mixed (i.e., mineral grains are not thoroughly randomly distributed throughout the lot to be sampled). The sampling equation is:

$$S^2 = \frac{s^2}{N} = \frac{A}{M_S} + \frac{B}{N} \qquad (14.4)$$

where S is the uncertainty in the average of N assays on samples of individual mass $\overline{M}_s = M_S / N$. This equation, on a smaller scale of course, has almost the same significance as a similar, well-known equation suggested by D. G. Krige in the early days of Geostatistics:190

$$\sigma^2\left(\frac{v}{V}\right) = \sigma^2\left(\frac{v}{V'}\right) + \sigma^2\left(\frac{V'}{V}\right) \qquad (14.5)$$

where the variance of samples of volume v taken in the ore deposit of volume V is the sum of the variance of samples of volume v taken in a given ore block of volume V' and the variance of the ore blocks V' in the corresponding deposit of volume V. The first term can be considered, in a first approximation, as a random term where the volume v taken from V' can make a difference. The second term can be considered as a


nonrandom term where the number of well-defined blocks V' taken from V makes the difference.

Immediately, for the sake of consistency, we should define and translate all these new terms into their equivalents in TOS. A is the Visman homogeneity constant. It is Gy's Intrinsic Constitution Heterogeneity $IH_L$ multiplied by the square of the average content of the lot, with $a_L$ expressed in appropriate units, or as a proportion, part of 1. So, from now on, let's remain cautious, as in Gy's theory contents such as $a_L$ are expressed as a proportion, part of one. There is nothing wrong in expressing the $a_L$ content in %, g/t, ppm, or ppb, as long as these units are properly kept in mind where appropriate when calculating a relative, dimensionless variance or a term like $IH_L$:

$$A = IH_L \cdot a_L^2 \qquad (14.6)$$

B is the Visman segregation constant. It should be well understood that it is a constant only under temporary, specific conditions, as segregation is a transient phenomenon changing all the time. It is related to the variance $s^2_{GSE}$ of Gy's Grouping and Segregation Error as follows:

$$B = \left(S^2 - \frac{A}{M_S}\right)N = s^2_{GSE} \cdot a_L^2 \cdot N = s^2_{SE} \cdot a_L^2 \qquad (14.7)$$

N being the number of increments collected in a sample, and $s^2_{SE}$ being the segregation variance if only one increment were collected. N can also consist of different samples collected in a lot. The weakness of the Visman and Ingamells approaches is that they do not emphasize the irrefutable fact that the variance $s^2_{SE}$ describes a transient phenomenon that can change all the time as the lot is transferred from one place to another; for this reason it was concluded in Gy's theory that the quantification of this variance is somewhat an exercise in futility. Understanding its behavior, however, can lead to beneficial strategies.

14.4. A Very Useful Visman's Experiment

Valuable experiments can be made to quantify the Intrinsic Constitution Heterogeneity and also the variability due to local segregation. Such segregation may be measured on a small scale (i.e., within a few kilograms of particulate material) or on a larger scale (i.e., within a mineral deposit). Visman suggested that two series of samples, properly subsampled and assayed (i.e., subsampling and assaying uncertainties must be of the second order of magnitude), be collected: one series uses small samples of mass $M_{S1}$ and the other uses large samples of mass $M_{S2}$. As a result, variances $s_1^2$ from the small samples and $s_2^2$ from the large samples are obtained, and from equations (10.46), (13.3), (14.4), (14.6), and (14.7), with N = 1, we obtain the following relationships. From the series of small samples, assumed to be collected at stratified random as suggested in Figure 14.2, and recalling the nonnegotiable condition already stated in equation (9.59):


FIGURE 14.2 A first series of small samples collected at stratified random.

$$\frac{s_1^2}{a_L^2} = s^2_{FSE1} + s^2_{GSE1} = \left(\frac{1}{M_{S1}} - \frac{1}{M_L}\right) IH_L + s^2_{FSE1} \xi \gamma \qquad (14.8)$$

Assuming the mass of the lot is at least ten times larger than the mass of the sample, equation (14.8) can be simplified to the following form:

$$s_1^2 = \left(s^2_{FSE1} + s^2_{GSE1}\right) a_L^2 = \frac{IH_L \cdot a_L^2}{M_{S1}} + \frac{IH_L \cdot a_L^2 \cdot \xi \gamma}{M_{S1}} \qquad (14.9)$$

In a similar way, from the series of large samples, as shown in Figure 14.3, we obtain:

$$s_2^2 = \left(s^2_{FSE2} + s^2_{GSE2}\right) a_L^2 = \frac{IH_L \cdot a_L^2}{M_{S2}} + \frac{IH_L \cdot a_L^2 \cdot \xi \gamma}{M_{S2}} \qquad (14.10)$$

$$s_1^2 - s_2^2 = \frac{IH_L \cdot a_L^2}{M_{S1}}(1 + \xi \gamma) - \frac{IH_L \cdot a_L^2}{M_{S2}}(1 + \xi \gamma) = \frac{M_{S2} \cdot IH_L \cdot a_L^2 - M_{S1} \cdot IH_L \cdot a_L^2}{M_{S1} \cdot M_{S2}}(1 + \xi \gamma) \qquad (14.11)$$

$$s_1^2 - s_2^2 = (1 + \xi \gamma) \frac{IH_L (M_{S2} - M_{S1}) a_L^2}{M_{S1} \cdot M_{S2}} \qquad (14.12)$$


FIGURE 14.3 A second series of large samples collected at stratified random.

$$(1 + \xi \gamma) IH_L \cdot a_L^2 = \frac{(s_1^2 - s_2^2)(M_{S1} \cdot M_{S2})}{M_{S2} - M_{S1}} \qquad (14.13)$$

If the material is not segregated, then ξ = 0; or, if increments are made of one fragment selected at random, then γ = 0. In either case we obtain the well-known, valuable Visman relationship:

$$IH_L = \frac{(s_1^2 - s_2^2)(M_{S1} \cdot M_{S2})}{a_L^2 (M_{S2} - M_{S1})} \qquad (14.14)$$

The purpose of this development was to show that everything is consistent between Gy and Visman, which is important to know for the following applications.
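As a minimal sketch of equation (14.14), the function below estimates $IH_L$ from two series of assays; the input arrays, masses, and units are hypothetical placeholders, not data from the book.

```python
# Sketch of Visman's experiment, equation (14.14): two series of samples of
# masses M_S1 (small) and M_S2 (large), assayed in the same units as a_L.
import numpy as np

def visman_ihl(assays_small, m_s1, assays_large, m_s2):
    s1_sq = np.var(assays_small, ddof=1)
    s2_sq = np.var(assays_large, ddof=1)
    a_l = np.mean(np.concatenate([assays_small, assays_large]))  # estimates a_L
    return (s1_sq - s2_sq) * m_s1 * m_s2 / (a_l**2 * (m_s2 - m_s1))  # grams

small = np.array([0.9, 2.1, 0.4, 1.6, 0.2, 3.0, 0.8, 1.4])  # hypothetical %, 100 g
large = np.array([1.2, 1.4, 1.1, 1.5, 1.3, 1.2, 1.4, 1.3])  # hypothetical %, 2000 g
print(f"IH_L ~ {visman_ihl(small, 100.0, large, 2000.0):.0f} g")
```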

14.4.1. Discussion about the Experimental Variances

Several scenarios may develop during such an experiment; let's discuss them briefly.

14.4.1.1. Case #1: $s_1^2 \approx s_2^2$ and It Is a Large Variance

This case is an indication that the sample mass is not the most important issue. The constituent of interest is segregated on a large scale, and collecting many small samples is a better strategy than collecting a few large samples.


14.4.1.2. Case #2: $s_1^2 \approx s_2^2$ and It Is a Small Variance

This case is an indication that the constituent of interest carries very little heterogeneity on a small scale and on a large scale. In such a rare case, it is not difficult to perform good sampling with just a few small samples.

14.4.1.3. Case #3: $s_1^2 > s_2^2$

This case is an indication that the sample mass plays an important role. Figure 14.4 illustrates a real case with isolated iron salt impurity grains in an ammonium paratungstate pure product. A Poisson process affects the small samples. It is recommended to select an optimum sample mass $M_{Sopt}$, defined later in Section 14.7, which is a compromise between the sample mass necessary to minimize the variance of the Fundamental Sampling Error FSE and the total number of samples N necessary to account for the segregation in the lot to be sampled. Three cases may develop:

1. If $M_{Sopt} < M_{S1}$, then both series of samples are valid.
2. If $M_{S1} < M_{Sopt} < M_{S2}$, then only the large samples are valid.
3. If $M_{Sopt} > M_{S2}$, then both series of samples are invalid; we must iterate the sampling effort using a sample mass $M_{Sopt}$.

14.4.1.4. Case #4: $s_1^2 < s_2^2$

This case is puzzling to many people. It is a very important case that should raise suspicion. The content of the constituent of interest may be much larger than usually recognized, but it is difficult to prove this because the samples are too small by one or

FIGURE 14.4 A case where a Poisson process affects the small samples. (Panel labels: small sample; large sample; isolated iron impurity particle.)


several orders of magnitude: a Poisson process has been taking place for both series of samples. Figure 14.5 shows a real case where such a process took place when sampling gold.

14.4.2. Case Where No Large Samples Are Available

It is not always possible to collect and assay large samples. In such cases it is possible to create artificial large samples by compositing assays from small samples, if we have enough of them. Obviously, such an approach is a simulation of reality and the test is somewhat less powerful, though it is still possible to retrieve valuable information about the kinds of heterogeneity carried by a given constituent in the lot to be sampled. If Q is the compositing factor, with $M_{S2} = Q \cdot M_{S1}$, we may write:

$$IH_L = \frac{(s_1^2 - s_2^2)(M_{S1} \cdot Q \cdot M_{S1})}{a_L^2 (Q M_{S1} - M_{S1})} = \frac{(s_1^2 - s_2^2) Q \cdot M_{S1}^2}{a_L^2 \cdot M_{S1} \cdot Q \left(1 - \frac{1}{Q}\right)} \qquad (14.15)$$

$$IH_L = \frac{(s_1^2 - s_2^2) M_{S1}}{a_L^2 \left(1 - \frac{1}{Q}\right)} \qquad (14.16)$$
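A minimal sketch of equation (14.16) follows; the arithmetic compositing of Q small-sample assays into artificial large samples is done exactly as described above, and the function name is an assumption of this example.

```python
# Sketch of equations (14.15)-(14.16): simulate large samples of mass Q*M_S1
# by averaging groups of Q small-sample assays, then estimate IH_L.
import numpy as np

def ihl_from_composites(assays, m_s1, q):
    n = (len(assays) // q) * q
    composites = assays[:n].reshape(-1, q).mean(axis=1)  # artificial large samples
    s1_sq = np.var(assays, ddof=1)
    s2_sq = np.var(composites, ddof=1)
    a_l = assays[:n].mean()
    return (s1_sq - s2_sq) * m_s1 / (a_l**2 * (1.0 - 1.0 / q))  # equation (14.16)
```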

14.5. Ingamells' Most Probable Result

When samples have a much too small mass and contain only a few isolated large grains of the constituent of interest, liberated or not, the distribution of these grains in a large number of samples is given by the Poisson probability function:

FIGURE 14.5 A case where a Poisson process affects both series of samples. (Panel label: coarse gold particle.)


$$P(x = r) = \frac{\theta^r e^{-\theta}}{r!} \qquad (14.17)$$

with r = 0, 1, 2, 3, . . ., where $P(x = r)$ is the probability of r low-frequency isolated coarse grains appearing in a sample, liberated or not, and θ is the average number of these grains per sample. It is important to note that the highest probability occurs when r is less than θ, except when θ is an integer, in which case $P(x = \theta) = P(x = \theta - 1)$. Therefore, the most probable number of low-frequency grains is always less than the average number. With real samples, in which there is usually a spectrum of grain sizes, the most probable assay is always less than the true value. If we call the most probable result of an assay γ, then:

$$\gamma = a_L - f(\theta) \qquad (14.18)$$

where $f(\theta)$ is a function of the average number of low-frequency grains; this function must approach zero as θ increases and must approach $(a_L - a_H)$ as θ decreases. We define $a_H$ as a relatively homogeneous low background content that is easy to sample. So, as θ becomes large, $\gamma \rightarrow a_L$, and as θ becomes small, $\gamma \rightarrow a_H$.

Remark: We should not confuse the most probable result γ with the grouping factor γ, which is a component of the variance of GSE; we apologize for the use of the same notation, but when reading the text it should be straightforward to tell to which one we refer.

Ingamells and Switzer15,16 found that the following semi-empirical function meets these requirements:

$$f(\theta) = \frac{a_L - a_H}{2\theta + 1} \quad \text{and} \quad \gamma = a_L - \frac{a_L - a_H}{2\theta + 1} \qquad (14.19)$$

Let's define a few terms from Ingamells: $a_D$ is the proportion of the constituent of interest that is difficult to sample; then, by definition:

$$a_D = a_L - a_H \qquad (14.20)$$

The total mass of the constituent of interest in the sample is $M_S \cdot a_L$. The mass of the constituent of interest in the proportion easy to sample is $M_S \cdot a_H$. The mass of the constituent of interest in the proportion difficult to sample is $M_S \cdot a_D$. From the binomial model it follows that:

$$a_L = p \cdot a_H + q \cdot a_D = (1 - q)a_H + q \cdot a_D = a_H - a_H \cdot q + a_D \cdot q = a_H + q(a_D - a_H)$$

Then:

$$a_L - a_H = q(a_D - a_H)$$

$$q = \frac{a_L - a_H}{a_D - a_H} \qquad (14.21)$$

Then θ can be calculated as follows:


$$\theta = \frac{a_L - a_H}{a_D - a_H} \cdot \frac{M_S}{d_M^3 \rho_M} \qquad (14.22)$$

But, from Ingamells and Pitard22:

$$d_M^3 \rho_M = \frac{A}{(a_L - a_H)(a_D - a_H)} = \frac{IH_L \cdot a_L^2}{(a_L - a_H)(a_D - a_H)} \qquad (14.23)$$

Then, by transposing into (14.22) and simplifying:

$$\theta = \frac{(a_L - a_H)^2 M_S}{IH_L \cdot a_L^2} \qquad (14.24)$$

By transposing into (14.19), and assuming the material is not segregated (i.e., ξ = 0):

$$\gamma = a_L - \frac{(a_L - a_H) IH_L \cdot a_L^2}{2(a_L - a_H)^2 M_S + IH_L \cdot a_L^2}$$

$$\gamma = \frac{2a_L(a_L - a_H)^2 M_S + a_H \cdot IH_L \cdot a_L^2}{2(a_L - a_H)^2 M_S + IH_L \cdot a_L^2} \qquad (14.25)$$
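A minimal sketch of equation (14.25) follows, with purely illustrative parameter values, showing how the most probable result climbs from $a_H$ toward $a_L$ as the sample mass increases.

```python
# Sketch of equation (14.25): most probable assay of a sample of mass M_S (g);
# a_L and a_H in any consistent content units, IH_L in grams. Values illustrative.
def most_probable_result(m_s, ih_l, a_l, a_h):
    k = 2.0 * (a_l - a_h) ** 2 * m_s
    return (a_l * k + a_h * ih_l * a_l**2) / (k + ih_l * a_l**2)

for m_s in (10, 100, 1_000, 10_000, 100_000):
    g = most_probable_result(m_s, ih_l=50_000.0, a_l=0.30, a_h=0.03)
    print(f"M_S = {m_s:>7,} g -> gamma = {g:.3f}")
```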

14.6. Ingamells' Gangue Concentration

Many of the equations developed so far depend on the estimation of the more homogeneous low background content easy to sample, $a_H$, which is equivalent to saying that we need to estimate $a_D$, the proportion of the constituent of interest that is difficult to sample. With real data the estimation of $a_H$ is difficult. However, it is relatively easy to make such a calculation by using the most probable result of an assay γ, looking at the modes $\gamma_1$ and $\gamma_2$, or the harmonic means $h_1$ and $h_2$, of the data distributions of the small and large samples. The harmonic mean is calculated as follows:

$$h_1 = \frac{N}{\sum_i \frac{1}{x_i}} \qquad (14.26)$$

where N is the number of samples. From equation (14.25) we may write:

$$h_1 = \gamma_1 = \frac{2a_L(a_L - a_H)^2 M_{S1} + a_H \cdot IH_L \cdot a_L^2}{2(a_L - a_H)^2 M_{S1} + IH_L \cdot a_L^2} \qquad (14.27)$$


$$h_2 = \gamma_2 = \frac{2a_L(a_L - a_H)^2 M_{S2} + a_H \cdot IH_L \cdot a_L^2}{2(a_L - a_H)^2 M_{S2} + IH_L \cdot a_L^2} \qquad (14.28)$$

whence:

$$2\gamma_1(a_L - a_H)^2 M_{S1} + \gamma_1 \cdot IH_L \cdot a_L^2 = 2a_L(a_L - a_H)^2 M_{S1} + a_H \cdot IH_L \cdot a_L^2$$

$$2M_{S1}(a_L - a_H)^2(a_L - \gamma_1) = IH_L \cdot a_L^2(\gamma_1 - a_H) \qquad (14.29)$$

$$2\gamma_2(a_L - a_H)^2 M_{S2} + \gamma_2 \cdot IH_L \cdot a_L^2 = 2a_L(a_L - a_H)^2 M_{S2} + a_H \cdot IH_L \cdot a_L^2$$

$$2M_{S2}(a_L - a_H)^2(a_L - \gamma_2) = IH_L \cdot a_L^2(\gamma_2 - a_H) \qquad (14.30)$$

Multiplying (14.29) by $(a_L - \gamma_2)M_{S2}$ and (14.30) by $(a_L - \gamma_1)M_{S1}$:

$$2M_{S1}(a_L - a_H)^2(a_L - \gamma_1)(a_L - \gamma_2)M_{S2} = IH_L \cdot a_L^2(\gamma_1 - a_H)(a_L - \gamma_2)M_{S2} \qquad (14.31)$$

$$2M_{S2}(a_L - a_H)^2(a_L - \gamma_2)(a_L - \gamma_1)M_{S1} = IH_L \cdot a_L^2(\gamma_2 - a_H)(a_L - \gamma_1)M_{S1} \qquad (14.32)$$

Subtracting (14.32) from (14.31) gives:

$$IH_L \cdot a_L^2(\gamma_1 - a_H)(a_L - \gamma_2)M_{S2} = IH_L \cdot a_L^2(\gamma_2 - a_H)(a_L - \gamma_1)M_{S1}$$

$$M_{S2}(a_L \gamma_1 - \gamma_1\gamma_2 - a_L a_H + \gamma_2 a_H) = M_{S1}(a_L \gamma_2 - \gamma_1\gamma_2 - a_L a_H + \gamma_1 a_H)$$

$$a_H(a_L M_{S2} - M_{S2}\gamma_2 - a_L M_{S1} + M_{S1}\gamma_1) = \gamma_1 M_{S2}(a_L - \gamma_2) - \gamma_2 M_{S1}(a_L - \gamma_1)$$

$$a_H = \frac{\gamma_1 M_{S2}(a_L - \gamma_2) - \gamma_2 M_{S1}(a_L - \gamma_1)}{M_{S2}(a_L - \gamma_2) - M_{S1}(a_L - \gamma_1)} \qquad (14.33)$$
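A minimal sketch of equations (14.26) and (14.33) follows; the harmonic means stand in for the modes $\gamma_1$ and $\gamma_2$, as suggested above, and the function names are assumptions of this example.

```python
# Sketch of equations (14.26) and (14.33): estimate the low background content
# a_H from the harmonic means of the small- and large-sample distributions.
def harmonic_mean(xs):
    return len(xs) / sum(1.0 / x for x in xs)   # equation (14.26); all xs > 0

def low_background_content(gamma1, m_s1, gamma2, m_s2, a_l):
    num = gamma1 * m_s2 * (a_l - gamma2) - gamma2 * m_s1 * (a_l - gamma1)
    den = m_s2 * (a_l - gamma2) - m_s1 * (a_l - gamma1)
    return num / den                            # equation (14.33)
```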

14.6.1. Discussion about the Low Background Content

The low background content that is easy to sample, $a_H$, can be a parameter of great interest. It is a variable in its own right in a mineral deposit, in a high-purity product, or in the environment. Such variability should be the object of more research, as it can have geometallurgical and economic implications for a project. For example, if a deposit is difficult to sample for its gold content, it would be critically important to find out what proportion of the gold is difficult to sample. If it is 2/3 of the gold, the geologist may have a serious problem. If it is only 5% of the gold, it may not be too much of a problem. Also, in a flotation process for molybdenum, if 1/3 of the molybdenum that is easy to sample cannot be liberated at the mill, then the molybdenum recovery is likely to be poor, which may have serious economic consequences. On the other hand, if only 5% of the molybdenum cannot be liberated, it may not be too much of a problem.


14.7. Ingamells' Optimum Sample Mass

Ingamells' optimum sample mass can be described as a compromise between the sample mass necessary to minimize the variance of FSE and the number of samples N necessary to minimize the effect of local segregation on the overall estimation. When calculating $IH_L$, sample mass alone is the issue. When minimizing the effect of segregation, the number of increments in a composite sample, or the number of samples collected over a certain target, is the issue. Therefore, the uncertainty of a selected sampling protocol depends both on the total weight of the samples taken and on their number. There is evidently an optimum individual increment mass or, depending on your objectives, an optimum individual sample mass, which will give the most information at the lowest cost. The cost of a sampling campaign may be broken into a cost related to the total number of samples taken and a cost related to the total mass of samples taken, plus a fixed cost related to logistics:

$$\$ = P \cdot M_S + Q \cdot N + F = P \cdot N \cdot \overline{M}_s + Q \cdot N + F \qquad (14.34)$$

P is the cost per gram of sample, Q is the cost per sample (e.g., subsampling and assaying), and F is the fixed cost related to local logistics. From equations (14.4), (14.6), and (14.7) we may write:

$$S^2 = \frac{IH_L \cdot a_L^2}{N \cdot \overline{M}_s} + \frac{s^2_{SE} \cdot a_L^2}{N} = \frac{IH_L \cdot a_L^2 + s^2_{SE} \cdot a_L^2 \cdot \overline{M}_s}{N \cdot \overline{M}_s}$$

Solving for N:

$$N = \frac{IH_L \cdot a_L^2 + s^2_{SE} \cdot a_L^2 \cdot \overline{M}_s}{S^2 \cdot \overline{M}_s} \qquad (14.35)$$

Transposing (14.35) into (14.34):

$$\$ = \frac{P(IH_L \cdot a_L^2 + s^2_{SE} \cdot a_L^2 \cdot \overline{M}_s)}{S^2} + \frac{Q(IH_L \cdot a_L^2 + s^2_{SE} \cdot a_L^2 \cdot \overline{M}_s)}{S^2 \cdot \overline{M}_s} + F \qquad (14.36)$$

$$\$ = \frac{P \cdot IH_L \cdot a_L^2}{S^2} + \frac{P \cdot s^2_{SE} \cdot a_L^2 \cdot \overline{M}_s}{S^2} + \frac{Q \cdot IH_L \cdot a_L^2}{S^2 \cdot \overline{M}_s} + \frac{Q \cdot s^2_{SE} \cdot a_L^2}{S^2} + F \qquad (14.37)$$

Differentiating (14.37) with respect to $\overline{M}_s$:

$$\frac{d\$}{d\overline{M}_s} = \frac{P \cdot s^2_{SE} \cdot a_L^2}{S^2} - \frac{IH_L \cdot a_L^2 \cdot Q}{S^2 \cdot \overline{M}_s^2} \qquad (14.38)$$

The minimum cost is achieved when $\frac{d\$}{d\overline{M}_s} = 0$, i.e., when:


$$\frac{P \cdot s^2_{SE} \cdot a_L^2}{S^2} = \frac{IH_L \cdot a_L^2 \cdot Q}{S^2 \cdot \overline{M}_s^2} \quad \text{or} \quad \overline{M}_s^2 = \frac{IH_L \cdot Q}{P \cdot s^2_{SE}}$$

$$\overline{M}_s = \sqrt{\frac{IH_L \cdot Q}{s^2_{SE} \cdot P}} \qquad (14.39)$$

ð14:39Þ

Substituting 14.39 in 14.37 gives the minimum cost for any desired variance S2:

$min ¼

P  IHL  S2

a2L

qffiffiffiffiffiffiffiffiffiffiffiffiffi pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 2 s2SE  P Q  s2  a2 Q  IH  a L L IHL  Q P  L SE p ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi qffiffiffiffiffiffiffiffiffiffiffiffiffiffi þ þF þ þ 2 2 IH  Q S S 2 2 L S SSE  P s2SE

2

$min

P  IHL  a2L aL ¼ þ S2

a2L

qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi IHL  Q  s2SE  P S2

þ

a2L

qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi IHL  Q  s2SE  P

þ

Q  s2SE  a2L þF S2

S2 qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi Q  s2SE  a2L P  IHL  a2L 2a2L 2 Pþ IH þ  Q  s þF $min ¼ L SE S2 S2 S2 sffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi P  IHL  a2L Q  s2SE  a2L 2a2L IHL Q $min ¼ þ þ 2  þF S2 S2 S s2SE P

ð14:40Þ

Q IHL In a sampling campaign, if the sample mass is chosen such that ¼ 2 , the campaign P will be optimized by taking samples with an optimum mass MSopt : sSE MSopt ¼

IHL s2SE

ð14:41Þ
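A minimal sketch of equations (14.39) and (14.41) follows; the cost figures and heterogeneity values are hypothetical.

```python
# Sketch of equations (14.39) and (14.41): cost-optimal individual sample mass
# and Ingamells' optimum sample mass. All inputs are hypothetical.
import math

def mass_for_minimum_cost(ih_l, s_se_sq, p_per_gram, q_per_sample):
    return math.sqrt(ih_l * q_per_sample / (s_se_sq * p_per_gram))  # eq. (14.39)

def optimum_sample_mass(ih_l, s_se_sq):
    return ih_l / s_se_sq                                           # eq. (14.41)

# IH_L = 500 g, s2_SE = 0.01, $0.02 per gram collected, $25 per assay:
print(f"{mass_for_minimum_cost(500.0, 0.01, 0.02, 25.0):,.0f} g")   # ~7,900 g
print(f"{optimum_sample_mass(500.0, 0.01):,.0f} g")                 # 50,000 g
```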

14.8. Ingamells' Minimum Sample Mass

There is a Minimum Sample Mass $M_{S\,min}$ required to include at least one particle of the constituent of interest about 50% of the time in the collected sample, which happens when r = 1 in equation (14.1); Ingamells and Pitard22 show that it can be calculated as follows:

$$M_{S\,min} = \frac{IH_L \cdot a_L^2}{(a_L - a_H)^2} \qquad (14.42)$$

For replicate samples to provide a normally distributed population, the recommended sample mass $M_{Srec}$ should be at least six times larger than $M_{S\,min}$. Gy suggests a more stringent condition to make sure that there is no Poisson skewness affecting the database, with a recommended mass about 40 times the mass $M_{S\,min}$. At this point there is an important issue to address.


14.9. The Link with Gy's Preventive Suggestions

In a subtle way, Ingamells' work confirms that Gy's suggested equations for calculating the variance of the Fundamental Sampling Error should be used in the domain of normal distributions. As a result, it is a much better strategy to calculate the mass of a sample for a given allotted variance of the Fundamental Sampling Error, instead of calculating the variance for a given sample mass; this would eliminate the vast misuse of the suggested formulas in domains where they do not belong, as clearly addressed in Gy's theory concerning the properties of ratios of random variables like the one shown in equation (9.24).

14.10. Necessary Variances to Construct a Meaningful Sampling Diagram

In the following equations it is assumed that the mass of the sample is always of the second order of magnitude relative to the mass of the lot. More complex formulas could be obtained by taking the mass of the lot into account.

14.10.1. The Variance of the Fundamental Sampling Error

It is convenient to express the standard deviation of FSE in the proper units, which is directly derived from equation (14.6):

$$s_{FSE} \cdot a_L = \sqrt{\frac{IH_L \cdot a_L^2}{M_s}} \qquad (14.43)$$

14.10.2. The Variance s2opt Taking into Account the Optimum Sample Mass

From equations (14.4), (14.6), and (14.7), we may write:

$$s^2_{opt} = \frac{IH_L \cdot a_L^2}{M_s} + \frac{s^2_{SE} \cdot a_L^2}{N} \qquad (14.44)$$

But, from equation (14.41) we know that:

$$s^2_{SE} = \frac{IH_L}{M_{Sopt}}$$

It follows, with $M_S = N \cdot M_{Sopt}$:

$$s^2_{opt} = \frac{IH_L \cdot a_L^2}{N \cdot M_{Sopt}} + \frac{IH_L \cdot a_L^2}{N \cdot M_{Sopt}} \qquad (14.45)$$


$$s_{opt} = \sqrt{\frac{2 IH_L \cdot a_L^2}{M_S}} \qquad (14.46)$$

14.10.3. The Variance of a Single Assay s2N=1

From equation (14.4), making N = 1, it readily follows:

$$s_{N=1} = \sqrt{\frac{IH_L \cdot a_L^2}{M_s} + s^2_{SE} \cdot a_L^2} \qquad (14.47)$$

This standard deviation gives an idea of the segregation uncertainty affecting a single sample as the sample mass increases. A high value of $s_{N=1}$ is an indication that many samples need to be collected to assess the variability across the lot to be sampled; it is a valuable indicator for selecting the right sampling strategy (i.e., many small samples versus a few large samples).
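The three standard deviations above are what a sampling diagram plots against sample mass; the sketch below simply evaluates them for illustrative values of $IH_L$, $a_L$, and $s^2_{SE}$, which are assumptions of this example, not data from the case study that follows.

```python
# Sketch: evaluate equations (14.43), (14.46), and (14.47) over a range of
# sample masses, as a sampling diagram would; parameter values illustrative.
import math

ih_l, a_l, s_se_sq = 50_000.0, 0.25, 0.02   # g, % cobalt, relative variance

for m_s in (1_000, 7_500, 20_000, 100_000, 500_000):
    s_fse = math.sqrt(ih_l * a_l**2 / m_s)                    # eq. (14.43)
    s_opt = math.sqrt(2 * ih_l * a_l**2 / m_s)                # eq. (14.46)
    s_n1 = math.sqrt(ih_l * a_l**2 / m_s + s_se_sq * a_l**2)  # eq. (14.47)
    print(f"M_S = {m_s:>7,} g: s_FSE = {s_fse:.4f}, "
          f"s_opt = {s_opt:.4f}, s_N=1 = {s_n1:.4f}")
```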

14.11. Case Study: A Sampling Diagram in a Nickel-Cobalt Deposit

A lateritic nickel-cobalt deposit is drilled and assayed for its cobalt content, which is an important by-product for the project. The drilling technique uses HQ-diameter tubing with a tungsten carbide drill-head, and the drilling is performed dry. Each sample is 1 meter long and weighs about 7500 grams. Some holes show very little cobalt, but others show attractive grade intercepts. Geologists were tempted to believe there were some areas much richer in cobalt than others. Table 14.1 shows 12 of these consecutive holes (there were many more). Each hole shows 12 consecutive samples (there were many more).

TABLE 14.1 Twelve holes in a lateritic nickel-cobalt deposit. The horizontal axis shows consecutive holes, while the vertical axis shows consecutive 1-meter samples. Assays are expressed in % cobalt.

0.03 0.10 1.07 0.64 0.34 0.14 0.09 0.16 0.21 0.20 0.28 0.22
0.07 0.20 0.16 0.24 0.20 0.24 0.25 0.36 0.73 2.42 0.81 0.53
0.02 0.02 0.03 0.41 0.31 0.46 0.29 0.33 0.28 0.41 0.35 0.11
0.09 0.04 0.04 0.03 0.09 0.08 0.09 0.12 0.50 0.28 0.09 0.47
0.02 0.03 0.05 0.28 0.23 0.33 1.01 0.17 0.10 0.07 0.03 0.08
0.11 0.22 0.21 0.24 0.21 0.20 0.20 0.20 0.21 0.18 0.14 0.13
0.05 0.04 0.04 0.03 0.03 0.04 0.04 0.03 0.05 0.10 0.16 0.12
0.02 0.02 0.01 0.03 0.01 0.02 0.06 0.05 0.08 0.17 0.35 0.28
0.02 0.02 0.03 0.03 0.05 0.03 0.02 0.03 0.03 0.08 0.09 0.05
0.02 0.02 0.03 0.02 0.08 0.14 0.12 0.30 1.34 1.04 0.50 0.27
0.02 0.02 0.02 0.02 0.02 0.02 0.04 0.07 0.12 0.16 0.30 0.43
0.20 0.26 0.17 0.12 0.12 0.10 0.22 0.23 0.27 0.29 0.22 0.18


A mining test performed in the same area shown in Table 14.1, followed by pilot-plant pressure sulfuric leaching, revealed that the cobalt content was almost the same everywhere and slightly higher than expected. Retrospectively, looking at these existing exploration data, we may discover what went wrong.

14.11.1. Compositing Horizontally with Q = 12

We obtain 12 horizontal composites.

14.11.2. Calculation of the Low Background Content for Cobalt

The harmonic means are used with formula (14.33). Figure 14.6 shows the histogram of increasing cobalt contents; the calculated value of the Low Background Content aH is 0.03%, and this value is used in the calculation of the sampling diagrams. L is the Ingamells nomenclature used in the software:

a_H = L = 0.03%

14.11.3. Calculation of the Most Probable Result

The most probable result of an assay γ is calculated using formula (14.25) and shown in Figure 14.7. We may observe that a single 7500-g sample gives a most probable result halfway between the estimated overall average content and the estimated Low Background cobalt content. It takes the averaging of 73 samples to eliminate the Poisson skewness. A comment about the Ingamells nomenclature: we could use either the TOS nomenclature for unknown values (i.e., γ, aL, aH) or the corresponding Ingamells estimated values (i.e., Y, X, L).

FIGURE 14.6 Calculation of the Low Background Content.


14.11.4. Calculation of the Standard Deviation of the Fundamental Sampling Error

The calculation of the standard deviation of the Fundamental Sampling Error is performed using equation (14.43), and it is shown in Figure 14.8. Using equation (14.42) we may observe that the calculated Minimum Sample Mass MSmin is about 20 kg,

FIGURE 14.7 Calculation of the Most Probable Result γ, illustrated as the dashed line. The low background content is the lower solid line. The overall arithmetic average is the upper solid line.

FIGURE 14.8 Calculation of the standard deviation sFSE · aL, illustrated as the dashed curves.


which is much larger than the basic drilling support set at 7 kg. Furthermore, following the guideline given in Section 11.8.2, it would be unwise to tolerate a relative standard deviation of FSE larger than 16%, which leads to the collection of a 500-kg sample (71 × 7 kg); such a sample can only be obtained by averaging neighboring samples until the required mass is reached. Nevertheless, if no such averaging is performed, the Poisson process will most certainly create very unfortunate illusions, and this is exactly what happened during the exploration and the preparation of the geological block model. The Optimum Sample Mass MSopt is calculated using equation (14.41). It is a compromise between FSE (necessary sample mass) and GSE (necessary number of samples to account for field segregation). In Figure 14.8 it is shown to be about 100 kg. Conclusion: it would be extremely unwise to collect samples smaller than 100 kg, which confirms that drilling with reverse circulation or large-diameter percussion into relatively soft laterites would have been a much better option.
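A back-of-the-envelope check of the "71 cores of 7 kg" figure can be scripted in a few lines. This is a sketch only: the value of IHL below is an assumed input chosen to reproduce the order of magnitude discussed above, not a value given in the book.

```python
import math

IHL = 12700.0                    # grams (assumed, not from the book)
s_rel = 0.16                     # tolerated relative standard deviation of FSE

M_required = IHL / s_rel**2      # required mass in grams, M_s = IHL / s_rel^2
cores = math.ceil(M_required / 7000.0)  # number of 7-kg core samples to composite
print(f"required mass = {M_required/1000:.0f} kg, i.e. {cores} cores of 7 kg")
```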

14.11.5. Calculation of the Standard Deviation Taking into Account the Optimum Sample Mass

The calculation of the standard deviation sopt is performed using equation (14.46), and it is shown in Figure 14.9. As suggested in the graphic, sopt has no significance below the Optimum Sample Mass, which is about 100 kg when using equation (14.41). In other words, it is practically impossible to draw a logical geological block model using the information from 7-kg samples, unless a massive field moving average is used, which, in turn, results in the loss of small-scale definition.

FIGURE 14.9 Calculation of the standard deviation sopt, illustrated as the dashed curves.


14.11.6. Calculation of the Standard Deviation of a Single Assay sN=1

The calculation of the standard deviation sN=1 is performed using equation (14.47), and it is shown in Figure 14.10. The spread of sN=1 as the sample mass becomes very large gives an indication of the amount of large-scale segregation present in the field in the area under study. This is the domain where Geostatistics should perform very well. Now the Ingamells sampling diagram is complete, and as presented it is reasonably consistent with the TOS. We may debate the accuracy of the information; nevertheless, it is fairly obvious that the sampling diagram contains an enormous amount of valuable information, and it is the general message that counts when selecting a reasonable field sampling strategy.

14.11.7. Compositing Vertically with Q = 12

We obtain 12 vertical composites. The steps performed for horizontal compositing are repeated, and a new, complete sampling diagram is obtained, illustrated in Figure 14.11. Observations are substantially different from what is observed in the horizontal diagram illustrated in Figure 14.10. Results are summarized in Table 14.2. The main conclusions and recommendations are:
• The 1-meter HQ-diameter support with a mass of 7 kg is much too small. It is recommended to drill with a larger diameter.
• It is absolutely necessary to composite 7-kg individual samples until the recommended sample mass is reached.
• There is less large-scale segregation as a function of depth than as a function of lateral distance, explaining the smaller Optimum Sample Mass for vertical compositing. Because holes are 25 meters apart, this conclusion was predictable.

FIGURE 14.10 Calculation of the standard deviation sN=1, illustrated as the dashed curves.


FIGURE 14.11 Complete sampling diagram by compositing vertically.

TABLE 14.2
Summary of observations from vertical and horizontal sampling diagrams

Parameters                                              Vertical compositing    Horizontal compositing
aH                                                      0.03%                   0.03%
MSmin                                                   20 kg                   20 kg
MSrec                                                   150 kg                  150 kg
MSopt                                                   40 kg                   100 kg
Number of 7-kg samples to composite for 16% uncertainty 71                      71
sN=1 for very large samples                             From 0.13 to 0.28% Co   From 0.09 to 0.31% Co

• A massive Poisson process is affecting the database; therefore many areas unfairly look unattractive, which may result in an underestimation of the ore reserves. Samples with the optimum sample mass should be collected.
• A few blocks may appear to have extremely high cobalt contents when this is not true.
A minimal sketch of the compositing step described above follows.
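The following Python sketch illustrates the horizontal and vertical compositing of Sections 14.11.1 and 14.11.7. It is a sketch only: the lognormal placeholder array stands in for the 12 × 12 grid of Table 14.1, and its parameters are arbitrary assumptions.

```python
import numpy as np

# grid: rows = consecutive 1-m samples down the hole, columns = consecutive holes.
rng = np.random.default_rng(0)
grid = rng.lognormal(mean=-1.8, sigma=0.8, size=(12, 12))  # placeholder % Co values

horizontal = grid.mean(axis=1)   # 12 composites, one per depth (across holes)
vertical = grid.mean(axis=0)     # 12 composites, one per hole (down the hole)

print("horizontal composites: mean %.3f, s %.3f" % (horizontal.mean(), horizontal.std(ddof=1)))
print("vertical composites:   mean %.3f, s %.3f" % (vertical.mean(), vertical.std(ddof=1)))
```

Comparing the spread of the two sets of composites is what reveals the difference in large-scale segregation between the lateral and vertical directions summarized in Table 14.2.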

15 Theoretical, Practical, and Economic Difficulties in Sampling for Trace Constituents

15.1. Summary

Many industries base their decisions on the assaying of tiny analytical subsamples. The problem is that, most of the time, several sampling and subsampling stages are required before the laboratory delivers its ultimate assays using advanced chemical and physical methods of analysis. As long as each sampling and subsampling stage is the object of due diligence using the Theory of Sampling, it is likely that the integrity of the sought-after information has not been altered and the generated database is still capable of fulfilling its informative mission. Unfortunately, more often than not, unawareness of the basic properties of heterogeneous materials, combined with unawareness of the stringent requirements listed in the Theory of Sampling, leads to massive discrepancies between the expensive outcome of a long chain of sampling and analytical custody, and reality. No areas are more vulnerable to such misfortune than sampling and assaying for trace amounts of constituents of interest in the environment, in high-purity materials, in precious metals exploration, in the food chain, and in chemical and pharmaceutical products. Without the preventive suggestions of the Theory of Sampling, serious difficulties may arise when making Gaussian approximations or even lognormal manipulations in the subsequent interpretations. A complementary understanding of Poisson processes, injected into the Theory of Sampling, may greatly help the practitioner understand structural sampling problems and prevent unfortunate mistakes from being repeated over and over until a crisis is reached. This chapter presents an overview of the theoretical, practical, and economic difficulties, often vastly underestimated, in the search to quantify trace amounts of valuable or unwelcome components.

15.2. Scope

The heterogeneity of trace constituents in lots to be sampled for the determination of their contents has been the object of extensive work by many authors in the past. The scope of this chapter is to focus attention on the works done by Gy,72 Ingamells, and Pitard.22 Links between the works of these authors are investigated, and an up-to-date strategy to resolve sampling difficulties is suggested. The challenge is to provide


adequate, realistic sample and subsample masses at all sampling and subsampling stages, all the way to the balance room at the assaying laboratory. More often than not, meeting the basic Theory of Sampling (TOS) requirements to keep the variance of the Fundamental Sampling Error (FSE) within reasonable limits is beyond economic reach, or at least appears to be. Therefore, when these difficulties are ignored for practical reasons, awareness becomes the only tool at our disposal to show the possible consequences. Such awareness must be properly managed, which is the primary objective of this chapter.

15.3. Industries that Should Be Concerned

Regardless of what the constituent of interest is in a material to be sampled, it always carries a certain amount of heterogeneity. Many industries are concerned about such a structural property. Some industries using materials of mineral origin, such as metallurgy, cement, coal, glass, ceramics, uranium, and so on, are challenged every day to quantify the contents of critically important constituents. These difficulties reach a paroxysm when these constituents are present in trace amounts. There are many other similar examples in the agricultural, food, paper, chemical, and pharmaceutical industries. There is another stunning example in sampling for trace constituents in the environment: companies struggling to meet regulatory requirements have great concerns about their capability to collect representative samples that will be assayed for trace constituents. All these examples are just the tip of the iceberg in our world today.

15.4. A Logical Approach Suggested by the Theory of Sampling

The Theory of Sampling is, by definition, a preventive tool for people working in industry to find ways to minimize the negative effects of the heterogeneity carried by critically important components. Such heterogeneity generates variability in samples, and therefore variability in the data later created from them. The following steps are essential for the definition of a logical and successful sampling protocol. The discussion is limited to the sampling of zero-dimensional, movable lots. For one-dimensional lots the reader is referred to chronostatistics in a later chapter; for two- and three-dimensional lots the reader is referred to more in-depth reading on geostatistics.

15.4.1. Mineralogical and Microscopic Observations

At the early stage of any sampling project it is mandatory to proceed with a thorough mineralogical or microscopic study that may show how a given trace constituent behaves in the material to be sampled. The conclusions of such a study may not be stationary in distance or time; nevertheless, they give an idea of the direction one may take when reaching the point where an experiment must be designed to measure the typical heterogeneity of the constituent of interest. These important studies must remain well focused. For example, in the gold industry it is not rare to see a mineralogical study of the gold performed for a given ore in a given mining project; then the final report may consist of 49 pages elaborating on the many minerals present in the


ore, and only one page for gold, which is by far the most relevant constituent. Well-focused substance should be the essence.

15.4.2. Heterogeneity Tests

Many versions of Heterogeneity Tests have been suggested by many authors. For example, Gy suggested about three versions, François-Bongarçon suggested at least two, Pitard suggested several versions, Visman suggested one, and Ingamells suggested several; all these authors are well referenced in this book. They all have something in common: they are usually tailored to a well-focused objective, and they all have their merits within that context. It is important to refer to François-Bongarçon's works because of his well-documented approaches. It is the view of this author that, for trace constituents, the experiments suggested by Visman and Ingamells provide the necessary information to make important decisions about sampling protocols, the interpretation of the experimental results, and the interpretation of future data collected in similar materials; this is especially true for finding ways to circumvent nearly unsolvable sampling problems caused by the economic impact of ideal sampling protocols.

15.4.3. Respecting the Cardinal Rules of Sampling Correctness

Let's be very clear on a critically important issue: if any sampling protocol or any sampling system does not obey the cardinal rules of sampling correctness listed in Part VII of this book, then minimized sampling errors leading to an acceptable level of uncertainty no longer exist within a reachable domain. In other words, if Increment Delimitation Errors (IDE), Increment Extraction Errors (IEE), Increment Weighting Errors (IWE), and Increment Preparation Errors (IPE) are not addressed in such a way that their mean remains close to zero, we slowly leave the domain of sampling and enter the domain of gambling. In this chapter the assumption is made that the mean of these bias-generating errors is close to zero. In the eventuality that anyone bypasses sampling correctness for some practical reason, solutions no longer reside in the world of wisdom and the generated data are simply unethical. It is rather baffling that many standards committees on sampling are still at odds with the rules of sampling correctness, revealing a need for more education on TOS at universities and colleges.

15.4.4. Quantifying the Fundamental Sampling Error

Enormous amounts of work have been done by Gy, François-Bongarçon, and Pitard on the many ways to calculate the variance of the Fundamental Sampling Error. For the record, the Theory of Sampling offers very different approaches and formulas for the following cases:
1. The old, classic parametric approach where shape factor, particle size distribution factor, mineralogical factor, and liberation factor must be estimated.
2. A more scientific approach involving the global determination of the constant factor of constitution heterogeneity.
3. A totally different approach focusing on the size, shape, and size distribution of the liberated, nonliberated, or even in-situ grains of a certain constituent of interest.
4. A special case where the emphasis of sampling is on the determination of the size distribution of a material.


15.4.5. Minimizing the Grouping and Segregation Error

The Grouping and Segregation Error GSE is completely defined by its mean and variance:

mean(GSE) ≈ 0

s_{GSE}^2 = \gamma \, \xi \, s_{FSE}^2

Because the variance of GSE is the product of three factors, the cancellation of only one factor would eliminate GSE:
• It is not possible to cancel the variance of FSE unless the sample is the entire lot, which is not the objective of sampling. However, it should be minimized, and we know how to do this.
• It is not possible to cancel the grouping factor γ unless we collect a sample made of one-fragment increments taken at random one at a time. This is not practical; however, it was done in a method recommended in this book for the experimental determination of IHL. In a routine sampling protocol, the right strategy is to collect as many small increments as practically possible, so that γ is drastically minimized; this proves to be by far the most effective way to minimize the variance of GSE (see the simulation sketch after this list).
• It is not possible to cancel the segregation factor ξ, which is the result of transient segregation. All homogenizing processes have their weaknesses and are often wishful thinking; this proves to be the most ineffective way to minimize the variance of GSE.
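The effect of the grouping factor can be illustrated with a toy simulation. This sketch, and its artificial, sorted lognormal "lot", are illustrative assumptions rather than material from the book: for a constant total sample mass, a sample built from many small random increments fluctuates far less than a single-increment sample taken from a segregated lot.

```python
import numpy as np

rng = np.random.default_rng(7)
lot = np.sort(rng.lognormal(0, 1, 100_000))      # strongly segregated lot (sorted grades)

def sample_mean(n_increments, frags_per_increment):
    """One sample = n_increments random contiguous groups of fragments."""
    starts = rng.integers(0, len(lot) - frags_per_increment, n_increments)
    return np.mean([lot[s:s + frags_per_increment].mean() for s in starts])

total = 512  # total fragments per sample, held constant
for k in (1, 8, 64):
    means = [sample_mean(k, total // k) for _ in range(500)]
    print(f"{k:3d} increments: s of sample mean = {np.std(means):.4f}")
```

The printed standard deviation shrinks sharply as the number of increments grows, which is exactly the mechanism behind the recommendation to collect many small increments.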

15.4.6. The Challenges of Reality

Reality often shows that there is an abysmal difference between what Gy's theory suggests and the protocols actually implemented to account for the content of trace constituents, and we should understand the reasons for such an unfortunate shortcoming. There could be several:
• Requirements from Gy's theory are dismissed as impractical and too expensive.
• The TOS is not understood.
• Protocols are based on someone else's past experience.
• Top management does not understand the link between hidden cost and sampling.
• Normal or lognormal statistics are applied within domains where they do not belong.
• Poisson processes are vastly misunderstood and ignored.
• People have a naïve definition of what an outlier is.
Clearly, we need a different approach, and this is where the work of Ingamells can greatly help. Ingamells can help us better understand the behavior of bad data, so that management can be convinced that, after all, Gy's preventive approach is best, even if it seems expensive at first glance. In this statement there is a political and psychological subtlety that has created barriers for the TOS for many years, and breaking this barrier is the entire essence of this chapter.


15.5. Ingamells' Work to the Rescue

15.5.1. From Visman to Ingamells

Most of the valuable work of Ingamells is based on Visman's sampling theory. What is most relevant is Ingamells' work on Poisson distributions, which can be used as a convenient tool to show the risks involved when the variance of FSE is out of control. It cannot be emphasized strongly enough that the invasion of any database by Poisson processes can have truly catastrophic economic consequences in any project: exploration, feasibility, processing, environmental, medical, and legal as well. Again, let's make it very clear: any database invaded by a Poisson process because of the sampling and subsampling procedures that were used is a direct, flagrant departure from due diligence practices in any project. Yet sometimes we don't have the luxury of a choice, such as in the sampling of diamonds; then awareness is of the essence and can weigh heavily in the decision-making process.

15.5.2. Limitations of Normal and Lognormal Statistical Models

At one time, scientists became convinced that the Gaussian distribution was universally applicable, and an overwhelming majority of applications of statistical theory are based on this distribution. A common error has been to reject "outliers" that cannot be made to fit the Gaussian model or some modification of it, such as the popular lognormal model. The tendency, followed by some geostatisticians, has been to make the data fit a preconceived model instead of searching for a model that fits the data. On this issue, a Whittle quote172 later used and modified by Michel David173 was superb: "there are no mathematical models that can claim a divine right to represent a variogram." The variogram is not the target in this chapter; however, the philosophy remains the same. It is now apparent that outliers are often the most important data points in a given data set, and a good understanding of Poisson processes is a convenient tool to use.

15.5.3. Poisson Processes

Poisson and double Poisson processes explain why highly skewed distributions of assay values can occur. The grade and location of an individual point assay that follows a single or double Poisson distribution will have virtually no relationship with the surrounding ore, and it will be impossible to assign a grade other than the mean value to mineable small-size blocks; this assumes that the mean value is calculated with no elimination of outliers. Similar difficulties can occur with the assessment of impurity contents in very valuable commodities.

15.5.3.1. The Single Poisson Process

The Poisson model is a limit case of the binomial model and was explored to a great extent in Chapter 3. It is important to recall equation (3.39):

P(x = r) = \frac{\theta^r}{r!} e^{-\theta}    (15.1)


where r is the possible number of grains of the constituent of interest present in the sample, and θ is the hypothetical average number of such grains present in many samples. If n is the number of trials, the variance of the Poisson distribution is θ = npq ≈ np, since q is close to 1. The mean value of the Poisson distribution is θ = np.

15.5.3.2. The Double Poisson Process

When primary samples taken from the deposit contain the constituent of interest in a limited average number μ of discrete grains or clusters of such grains (i.e., P[y = n]), and they are subsampled in such a way that the subsamples also contain discrete grains of reduced size in a limited average number θ (i.e., P[x = r]), a double Poisson distribution of the assay values is likely. The probability P of r grains of mineral appearing in any subsample is determined by the sum of the probabilities of r grains being generated from samples with n grains (not to be confused with n trials as defined earlier). Let's define the ratio f:

f = \frac{\theta}{\mu}    (15.2)

With θ = nf for each possibility, the equation for the resulting, compounded probability of the double Poisson distribution is:

P(x = r) = \sum_n P(y = n) \, P(x = r \mid y = n) = \sum_{n=0}^{\infty} \frac{\mu^n e^{-\mu}}{n!} \cdot \frac{(nf)^r e^{-nf}}{r!} = \frac{f^r e^{-\mu}}{r!} \sum_{n=0}^{\infty} \frac{\mu^n e^{-nf} n^r}{n!}    (15.3)

for r = 0, 1, 2, 3, . . . This is the probability of obtaining a sample with r grains of the constituent of interest. Again, the equation could be modified using the improved Stirling approximation given by the following formula:

n! = \left( \frac{n}{e} \right)^n \sqrt{\frac{\pi (6n + 1)}{3}}    (15.4)

Recommendation: before going any further, it is strongly suggested that the reader go back to Section 3.8.2 and study it very carefully, so that the subtleties behind the following case studies will be better understood.
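As a minimal numerical sketch of the two models (assuming Python; the values μ = 2 and f = 0.5 are arbitrary illustrations, not from the book), equations (15.1) and (15.3) can be evaluated with the infinite sum truncated:

```python
import math

def poisson(r, theta):
    """Single Poisson probability, equation (15.1)."""
    return theta**r * math.exp(-theta) / math.factorial(r)

def double_poisson(r, mu, f, n_max=100):
    """Double Poisson probability, equation (15.3): a Poisson number n of
    coarse grains in the primary sample, each subsample then Poisson with
    mean n*f; the infinite sum is truncated at n_max terms."""
    weight = math.exp(-mu)              # P(y = 0)
    total = weight * poisson(r, 0.0)
    for n in range(1, n_max):
        weight *= mu / n                # builds P(y = n) iteratively
        total += weight * poisson(r, n * f)
    return total

for r in range(5):
    print(r, round(poisson(r, 1.0), 5), round(double_poisson(r, 2.0, 0.5), 5))
```

The double Poisson distribution is visibly more clustered near zero than a single Poisson with the same overall mean, which is the mathematical signature of the compounded nugget problem discussed below.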

15.6. Case Study #1: Estimation of the Iron Content in High-Purity Ammonium Paratungstate

The following case study involves a single-stage Poisson process, and the economic consequences can already be staggering because of the nonrepresentative assessment of the impurity content of an extremely valuable high-purity material. It should be emphasized that the analytical protocol that was used was categorized as fast, cheap, and convenient; in other words, it was called a cost-effective analytical method. There are probably thousands of similar cases in many industries, a consequence of economists not communicating enough with knowledgeable technical staff.

A shipment of very valuable high-purity ammonium paratungstate, used in the fabrication of tungsten coils in light bulbs, was assayed by an unspecified supplier to contain about 10 ppm iron. The contractual limit was that no shipment should contain more than 15 ppm iron. The client's estimates using large assay samples were much higher than the supplier's estimates using tiny 1-gram assay samples. The maximum particle size of the product was 150 µm. To resolve the dispute, a 5000-g lot was assayed 80 times using the standard 1-g assay sample weight used at the supplier's laboratory. Table 15.1 shows all the assay values generated for this experiment. A summary of results is as follows:

The estimated average x̄ ≈ aL of the 80 assays was 21 ppm.
The absolute variance s² = 378 ppm².
The relative, dimensionless variance sR² = 0.86.
The absolute standard deviation s = 19 ppm.
The relative, dimensionless standard deviation sR = 0.93, or 93%.

It should be emphasized that these variances and standard deviations were calculated using conventional formulas, which may not be a very good thing to do when a Poisson process is taking place.

TABLE 15.1
Summary of 80 replicate iron assays in high-purity ammonium paratungstate

Sample  ppm Fe    Sample  ppm Fe    Sample  ppm Fe    Sample  ppm Fe
  1       4         21      44        41       5        61      28
  2      20         22      21        42      31        62       4
  3      21         23      21        43      19        63      21
  4      31         24      18        44       6        64      29
  5      16         25      21        45      18        65      20
  6      16         26       4        46      18        66      35
  7      14         27      17        47       4        67      19
  8      12         28      32        48       4        68      48
  9       4         29       7        49       5        69       4
 10       9         30      18        50       4        70      14
 11      36         31      20        51      19        71       8
 12      32         32      32        52       6        72       6
 13      31         33       4        53      44        73     115
 14       4         34      19        54      74        74       4
 15      22         35      32        55      16        75       9
 16       4         36       4        56       4        76      13
 17       4         37      64        57      33        77      26
 18      19         38       7        58       4        78      32
 19      48         39      48        59      34        79       4
 20      68         40      18        60      64        80      12


From the TOS we can write the following relationship:

\frac{s_1^2}{a_L^2} = s_{FSE_1}^2 + s_{GSE_1}^2 = \left( \frac{1}{M_{S_1}} - \frac{1}{M_L} \right) IH_L + s_{FSE_1}^2 \, \gamma \xi    (15.5)

All terms are well defined in the TOS. The subscript 1 refers to the only information we have, from a small sample weighing 1 g. The effect of ML is negligible, as it is very large relative to MS. We do not know the value of the variance s2GSE1 of the Grouping and Segregation Error; however, the material is well calibrated and there is no reason for much segregation to take place; furthermore, the isolated grains containing a high iron content have about the same density as the other grains because their composition is mainly ammonium paratungstate anyway. Therefore, we may assume in this particular case that s2FSE1 is much larger than s2GSE1 if each 1-g sample is made of several random increments, so the value of IHL we may calculate is only slightly pessimistic. We may write:

0.86 - s_{GSE_1}^2 \approx IH_L    (15.6)

We may therefore assume that IHL ≈ 0.86 g. If the tolerated relative standard deviation of FSE is 16%, we may calculate the optimum necessary sample mass MS:

M_S = \frac{0.86}{(0.16)^2} = 34 \text{ g}    (15.7)

Obviously, this is a long way from the 1 gram that was used for practical reasons. This mass of 34 grams is the minimum sample mass that will ensure a normal distribution of the assay results. Another parameter that can be obtained is the low background content aH, which is probably around 4 ppm, as suggested by the histogram in Figure 15.1. This high-frequency low value may sometimes represent only the detection limit of the analytical method; therefore, we should remain cautious when defining the true low background content of a product for a given impurity.
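The calculation chain of this section is short enough to script directly from the quoted summary statistics. This is a sketch using the numbers given above rather than recomputing from Table 15.1:

```python
# Using the summary statistics of the 80 replicate assays quoted in the text.
a_L = 21.0        # ppm, estimated average of the 80 assays
s2_rel = 0.86     # relative, dimensionless variance
M_S1 = 1.0        # grams, mass of each assay sample

IHL = s2_rel * M_S1          # equation (15.6), neglecting the GSE term
M_s_16 = IHL / 0.16**2       # mass for a 16% relative standard deviation of FSE
print(f"IHL = {IHL:.2f} g, required sample mass = {M_s_16:.0f} g")
```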

15.6.1. Investigation of the Histogram

Figure 15.1 illustrates the histogram of the N = 80 assays shown in Table 15.1. In this histogram it is clear that the frequency of a given result reaches a maximum at regular intervals, suggesting that we may class the data into various zones: zone A, with 27 samples showing zero grains of the iron impurity; zone B, with 29 samples showing 1 grain; zone C, with 14 samples showing 2 grains; zone D, with 5 samples showing 3 grains; zone E, with 3 samples showing 4 grains; zone F, with 1 sample showing 5 grains; zone G, with 6 grains, shows no event; finally, zone H, with 7 grains, shows one event, which may be an anomaly in the model of the distribution. The set of results appears Poisson distributed, and a characteristic of the Poisson distribution is that the variance is equal to the mean:

s^2 = a_L = 21 \text{ ppm} = 1.18 \text{ grains/sample} = \theta    (15.8)

ð15:8Þ



FIGURE 15.1 Histogram of eighty 1-gram assays for iron in ammonium paratungstate.

The assumption that aH = 4 ppm needs to be checked. The probability that the lowest assay value represents aH can be calculated. If the average number θ of impurity grains per sample is small, there is a probability that the lowest assays represent aH. The probability that a single collected sample will have zero grains is:

P(x = 0) = \frac{\theta^0 e^{-\theta}}{0!} = e^{-\theta} = e^{-1.18}    (15.9)

If we call P(x = 0) the probability of the success of obtaining no grain with the impurity, then the probability Px of n successes in N trials is given by the binomial model:

P_x = \frac{N!}{n!(N - n)!} P^n (1 - P)^{N - n}    (15.10)

where P is the probability of having a sample with no grain containing the impurity when only one sample is selected, and (1 − P) is the probability of having at least one grain when only one sample is collected; then the probability of no success, P(x ≠ 0), with N samples is:

P(x \neq 0) = \frac{N!}{0!(N - 0)!} \left( e^{-\theta} \right)^0 \left( 1 - e^{-\theta} \right)^{N - 0} = \left( 1 - e^{-\theta} \right)^N    (15.11)

Relation (15.11) gives the probability that none of the N samples is free from low-frequency impurity grains. The probability that the lowest assay value represents aH is:

P(x = 0) = 1 - \left( 1 - e^{-\theta} \right)^N = 0.9999    (15.12)


Assuming that aH is not the analytical detection limit, we can be sure that the lowest assay represents aH. Having found the value θ = 1.18, we may calculate the Poisson probabilities for samples located in each zone illustrated in Figure 15.1. Then, by multiplying each probability by 80, we may compare the calculated distribution with the observed distribution. Results are summarized in Table 15.2. The observed distribution is very close to the calculated distribution if we exclude the very high result of 115 ppm, which should not have appeared with only 80 samples. A characteristic of the Poisson distribution is that the variance s² of the assays, counted in grains, is equal to the average:

s^2 = \bar{x} \approx a_L = \theta    (15.13)

or

s = \sqrt{\theta}    (15.14)

But in practice the number of grains is not used; instead, concentrations such as %, g/t, ppm, or ppb are used. Let's call C the conversion factor and rewrite (15.14) properly:

s = C \sqrt{\theta}    (15.15)

Thus, we may calculate the contribution C of a single average impurity grain to a single iron assay:

C = \frac{21}{\sqrt{1.18}} = 19 \text{ ppm}    (15.16)

15.6.2. Discussion of an Acceptable Maximum for the Standard Deviation of the FSE

Ingamells suggested that a minimum of six grains of the impurity, or clusters of this impurity, should be present in a sample for the analysis of this sample to be meaningful. The objective of such a recommendation is to eliminate the Poisson process from damaging the database.

TABLE 15.2
Comparison of the calculated distribution with the observed distribution

r       Poisson probability for θ = 1.18    Calculated distribution    Observed distribution
0       0.307                               25                         27
1       0.363                               29                         29
2       0.213                               17                         14
3       0.084                               7                          5
4       0.025                               2                          3
5       0.006                               0                          1
6       0.001                               0                          0
7       0.0002                              0                          1
Total   0.999                               80                         80


If a 1-g sample contains an average of θ = 1.18 grains, then the minimum recommended sample mass is around 5 g. Using this mass and the value of IHL obtained earlier, we may write:

s_{FSE}^2 \leq \frac{0.86}{5} = 0.17    (15.17)

s_{FSE} \leq 0.41, \text{ or } 41\% \text{ relative}    (15.18)

But, following Gy's recommendations, a 34-gram sample is needed to achieve a 16% relative standard deviation; such a sample would contain about 41 grains. Now, how can 5 g leading to 41% relative and 34 g leading to 16% relative be reconciled? To further discuss this difference, let's construct the useful Ingamells sampling diagram. With the set of data given in Table 15.1, a set of artificial, large 10-g samples made of Q = 10 small 1-gram samples can be created; they are shown in Table 15.3.

15.6.3. Visman Sampling Equation

With this information the Visman sampling constants A and B can be calculated:

S^2 = \frac{s^2}{N} = \frac{A}{M_s} + \frac{B}{N}    (15.19)

where S is the uncertainty in the average of N = 80 assays on samples of individual mass M̄S = Ms/N = 1 g. A is the Visman homogeneity constant; it is Gy's Intrinsic Constitution Heterogeneity IHL multiplied by the square of the average content of the lot. From the variances and Visman's equation we obtain:

A = IH_L \, a_L^2 = \frac{\left( s_1^2 - s_2^2 \right) M_{S_1}}{1 - \frac{1}{Q}} = 379{,}773 \text{ mg} \cdot \text{ppm}^2

From Gy's approach we suggested earlier:

A = IH_L \, a_L^2 = 0.86 \, (21.475)^2 = 396{,}611 \text{ mg} \cdot \text{ppm}^2

These numbers are very close and within the precision of the variances; therefore, there is no room left to calculate an amount of segregation for iron in the lot. It is wise to assume that B, the Visman segregation constant, is:

B = s_1^2 - \frac{A}{\bar{M}_S} = s_{GSE}^2 \, a_L^2 \, N = s_{SE}^2 \, a_L^2 \approx 0

This confirms the opinion that iron in calibrated ammonium paratungstate grains has no reason whatsoever to segregate in a substantial way, as the iron sits within the ammonium paratungstate salt anyway, and all the observed variability is due to the variance of FSE.
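A sketch of this Visman calculation, using the composite values of Table 15.3 for the large-sample variance (the small-sample variance is the 378 ppm² quoted earlier):

```python
import numpy as np

composites = np.array([15, 27, 20, 24, 11, 30, 22, 23])  # ppm Fe, Table 15.3

s2_small = 378.0                       # ppm^2, variance of the eighty 1-g assays
s2_large = composites.var(ddof=1)      # ppm^2, variance of the 10-g composites
M_S1, Q = 1000.0, 10                   # small-sample mass in mg, compositing factor

A = (s2_small - s2_large) * M_S1 / (1 - 1 / Q)   # homogeneity constant, mg * ppm^2
B = s2_small - A / M_S1                          # segregation constant, ppm^2
print(f"A = {A:,.0f} mg*ppm^2, B = {B:.1f} ppm^2 (practically zero)")
```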


TABLE 15.3
Iron content of artificial large samples of mass equal to 10 g

N sample number    Composited small samples    Iron content in large samples
1                  1–10                        15
2                  11–20                       27
3                  21–30                       20
4                  31–40                       24
5                  41–50                       11
6                  51–60                       30
7                  61–70                       22
8                  71–80                       23

15.6.4. The Most Probable Result

The Most Probable Result γ for the assaying of iron as a function of the analytical sample mass MS is calculated with Ingamells' equation (14.25). Also, the Low Background Content aH can be calculated with equation (14.33), or estimated in a more pragmatic way with a histogram, as illustrated in Figure 15.2. For more discussion on this equation, refer to Section 14.5. Values of γ are illustrated in Figure 15.3; γ basically represents the location of the mode of the probability distribution relative to the expected arithmetic average aL. A careful study of the γ curve in Figure 15.3 (i.e., the dashed curve) is the key to completing our discussion of the difference between Ingamells' recommendation and Gy's recommendation for a suggested maximum value of the standard deviation of FSE. It can be observed that the mass recommended by Ingamells (i.e., 6 grains in the sample), or sFSE = ±41%

FIGURE 15.2 Low Background iron content aH, which is L in Ingamells' notation.


FIGURE 15.3 Illustration of the Ingamells’ sampling diagram for ammonium paratungstate.

relative, leads to a location of the mode still substantially below the expected arithmetic average aL. This is not the case with the necessary sample mass of 34 grams (i.e., 34 × MS) required to obtain sFSE = 16% as recommended by Gy. In other words, Gy's recommendation was simply more conservative and less subject to ambiguity.

15.7. Case Study #2: Poisson Processes in a Gold Deposit

The following real case involves only a single Poisson process, and it is a very common case in the world of gold exploration and mining. It is important to show readers how devastating a single Poisson process invading a database can be. The other objective of presenting this case is to demonstrate that cutting grades or eliminating so-called outliers is sometimes completely unjustified, unless a laboratory manager recognizes that his staff indeed made a mistake. This case took place during the exploration of a gold deposit where some visible gold was observed on some core samples during the routine logging procedure. Most of the gold is in large quartz veins. Samples sent to the laboratory are 1-meter half NQ core samples weighing on average 2.35 kg, and 30-g fire assays are routinely performed. As a test, a full 1-meter core containing a few visible gold particles and weighing 3840 g was crushed and pulverized to 95% minus 106 µm; then N = 128 replicate 30-g fire assays were performed until complete extinction of the sample. Table 15.4 shows the results. It is important to emphasize that the work was performed by well-trained professionals in a mineral processing research facility. It was extremely entertaining to observe the various gold beads following cupellation. Because a gravimetric finish was involved, there is absolutely no doubt about how much gold was present in that core sample, making the calculation of the overall content


absolutely unambiguous, which gave enormous support to our opinion that cutting high grades during exploration or grade control is an extremely unfortunate practice; actually, it is fair to say that it is a monumental mistake, resulting from a complete misunderstanding of Poisson processes. Then there is another problem: many practitioners would not hesitate to apply a lognormal model to the distribution of data illustrated in Table 15.4, which is another monumental mistake. The distribution follows a Poisson model and nothing else. In a Poisson distribution, every datum, regardless of how high it gets, is an unbiased estimator of the true average; therefore, the method suggested by Sichel191 for the estimation of the mean of a lognormal distribution model does not apply in this case study and would be misleading.

The data were grouped into 16 groups of 8, as shown in Table 15.5 (compositing factor Q = 8). Each group was averaged. It was assumed that the data were supplied in random order; in other words, no rearrangement of the data was made. Values for the 16 groups are shown in Table 15.5. The overall variance of the small samples (i.e., 30-g fire assays) is 57.28 (g/t)². The standard deviation s1 of the small samples is 7.55 g/t. The standard deviation s2 of the large samples (i.e., 240-g composites) is 1.98 g/t. The mode of each distribution, small and large samples, was calculated using the harmonic mean γ:

\gamma = \frac{N}{\sum_i \frac{1}{x_i}}    (15.20)

TABLE 15.4
Replicate 30-g fire assays from a single 1-meter NQ core sample. Results expressed as g/t gold.

0.524   1.722   0.436   0.664   0.611   0.546   0.49    53.826
0.574   0.528   0.557   0.557   0.527   0.529   0.49    0.519
9.739   1.052   6.852   0.668   1.052   0.677   0.468   0.831
0.948   0.742   0.588   0.555   0.744   0.188   1.6     0.725
1.591   0.8     1.4     0.677   0.67    0.749   1.817   0.846
1.2     0.64    0.587   1.078   0.769   22      0.983   2.383
2.609   1.235   2.017   0.857   1.765   0.833   0.913   3.583
0.703   0.664   0.688   1.417   1.591   0.715   7.235   0.734
0.505   0.492   1.278   0.698   0.669   1.261   0.527   0.794
3.017   0.685   0.674   0.655   1.296   2.765   0.983   0.785
1.374   1.191   1.27    0.549   0.672   0.587   2.122   1.374
0.693   0.57    0.896   1.052   0.609   1.339   0.592   0.541
0.668   0.568   0.584   2.852   0.681   0.652   0.36    0.948
1.339   0.597   0.67    0.589   0.703   0.597   0.53    0.59
0.538   0.503   1.278   16.696  0.522   2.417   0.704   0.666
0.739   0.636   0.638   0.641   1.113   63.043  1.722   2.174


TABLE 15.5
Values in g/t gold of the 16 groups of 8 fire assays

Group number    Average gold content in g/t
 1              7.35
 2              0.54
 3              2.67
 4              0.76
 5              1.07
 6              3.71
 7              1.73
 8              1.72
 9              0.78
10              1.36
11              1.14
12              0.79
13              0.91
14              0.70
15              2.92
16              8.84
Overall average gold content (confirmed by the total weight of gold beads): 2.311

The Visman constant A, measuring the Intrinsic Heterogeneity, was calculated:

IH_L = \frac{\left( s_1^2 - s_2^2 \right) \left( M_{S_1} \cdot M_{S_2} \right)}{a_L^2 \left( M_{S_2} - M_{S_1} \right)}    (15.21)

where MS1 is the mass in grams of the small fire assay samples, MS2 is the mass in grams of the large composite samples, aL is the overall average gold content, and IHL is Gy's Intrinsic Heterogeneity,1 with:

A = IH_L \cdot a_L^2 = 1700 \text{ grams} \, (\text{g/t})^2    (15.22)

Presented that way, the reader may realize there is common ground between Visman's work and Gy's work. Similarly, the Visman constant B, which is constant at a particular instant as long as the lot is not reshaped in any way, was calculated:

B = s_1^2 - \frac{A}{\bar{M}_S} = s_{GSE}^2 \, a_L^2 \, N = s_{SE}^2 \, a_L^2 = 0.33 \, (\text{g/t})^2    (15.23)

where s2GSE would be Gy's Grouping and Segregation variance, and s2SE the segregation variance in the lot at the time the analyst collected the 30-g fire assay samples.


The preliminary conclusion is that, at this instant, the segregation was moderate and the total variance was mainly due to the random nugget effect. The average Low Background gold content aH was calculated using the harmonic means γ1 of the 30-g fire assays and γ2 of the composites; the result is illustrated in Figure 15.4:

a_H = \frac{\gamma_1 M_{S_2} (a_L - \gamma_2) - \gamma_2 M_{S_1} (a_L - \gamma_1)}{M_{S_2} (a_L - \gamma_2) - M_{S_1} (a_L - \gamma_1)} = 0.699 \text{ g/t}    (15.24)

The running average of aH reaches 0.699 g/t at roughly the 63rd sample, which means that 63 of the lowest gold assays among the 128 assays belong to the Low Background aH category:

P_o = \frac{63}{128} = 0.492    (15.26)

The average number θ of coarse gold particles per 30-g fire assay is:

\theta = \left( \frac{a_L - a_H}{s_1} \right)^2 = \left( \frac{2.311 - 0.699}{7.55} \right)^2 = 0.0456    (15.27)

There is one coarse gold particle per 1/0.0456 samples, that is, one per 22 samples, or a total of 6 coarse gold particles in the entire 1-meter NQ core sample.

Calculation of a Poisson distribution with θ = 0.0456: the following calculations were performed using a gamma function and the improved Stirling approximation for r!:

P(x = r) = \frac{e^{-\theta} \theta^r}{\left( \frac{r}{e} \right)^r \sqrt{\frac{\pi (6r + 1)}{3}}}    (15.28)

The probability P(x = 0) of zero coarse gold particles appearing in a random 30-g fire assay is:

P(x = 0) = 0.9558 (95.6% of the time)

The probability P(x = 1) of one coarse gold particle appearing in a random 30-g fire assay is:

P(x = 1) = 0.04337 (4.3% of the time)

P(x = 2) = 0.0009775 (about 0.1% of the time for 2 particles)

P(x = 3) = 0.0000147

P(x = 4) = 0.000000166


FIGURE 15.4 Calculation of the Low Background Content aH.

Multiplying these probabilities by 128 fire assays gives:
• fire assays with 0 coarse gold particles: 122
• fire assays with 1 coarse gold particle: 6
• fire assays with 2 coarse gold particles: none

So, the 6 highest data represent 30-g fire assays containing only one coarse gold particle each. Preliminary conclusions:
• At this stage it is very clear that the exploration geologist would be greatly misled by the data he generated.
• Because the exercise was done in such a way that the entire mass of the core was assayed, it is also very clear that the arithmetic average of the 128 assays leads to the true unknown content of the core. Cutting the few high grades, considering them outliers, or considering them lognormally distributed would have been a devastating mistake; this is food for thought, since such practices are widely used in the mining industry because very few people understand heterogeneity and, consequently, the TOS.
A short sketch of these calculations follows.
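These counts are easily reproduced from the three numbers quoted in the text (a minimal sketch, assuming Python):

```python
import math

a_L, a_H, s1, N = 2.311, 0.699, 7.55, 128   # values quoted in the text

theta = ((a_L - a_H) / s1) ** 2             # equation (15.27)
print(f"theta = {theta:.4f} coarse gold particles per 30-g assay")
print(f"about one coarse particle per {1/theta:.0f} assays")

for r in range(3):
    p = theta**r * math.exp(-theta) / math.factorial(r)
    print(f"assays with {r} coarse particle(s): {p * N:.0f}")
```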

15.8. Summarizing All the Information in a Sampling Diagram

The above information can be summarized in a powerful diagram called the Ingamells sampling diagram; it is displayed in Figure 15.5.


FIGURE 15.5 Complete Ingamells’ sampling diagram.

• A horizontal line shows the overall average gold content aL = 2.31 g/t of the 128 30-g fire assays.
• Below it, another horizontal line shows the Low Background gold content aH = 0.699 g/t.
• The first vertical line at the left of the diagram shows the basic fire assay sample mass MS = 30 g.
• The curved dashed line γ, between the overall average and the low background content, shows the most probable value of a gold assay drifting away from the overall average as the assayed sample mass becomes smaller and smaller, assuming an infinite amount of material was available. Ultimately, for a 30-g fire assay, the value of γ gets very close to the background content aH = 0.699 g/t.
• The set of two solid curves, symmetric around the overall average, represents the uncertainty sFSE generated by the random Intrinsic Heterogeneity as a function of assayed mass. The curves are extended as if a larger amount of material had been available.
• The dotted lines showing the uncertainty sN=1 give an idea of the amount of segregation observed if a single sample is collected. These lines, too, are extended as if a larger amount of material had been available.
• The dashed lines starting at about a 5000-g mass show the uncertainty of assays if an Optimum Sample Mass MSopt is used. In this case these lines are of little interest, since segregation is obviously not the main problem; the ridiculously low sample mass is.
• The second vertical line represents the Minimum Sample Mass MSmin = 700 g to assay, which includes one average coarse gold particle 50% of the time and none 50% of the time.


• The third vertical line represents the necessary sample mass to assay (414 MS = 414 × 30 = 12,420 g) to obtain an uncertainty of 16% as recommended; this is definitely where the problem is: 30 g versus 12,420 g.
This graphic has its limitations and is certainly not a panacea; however, it can give an excellent perspective of a bad situation to a geologist in due time, before the exploration goes too far without a better optimized strategy.

15.9. How a Double Poisson Process May Take Place

If the above problem was not enough, in reality it can get much worse. When a single Poisson process takes place as described so far, it is very likely another similar process took place as the drill core samples were collected. Indeed, looking at many 1-meter NQ core samples as geologists were doing the logging, it was not rare to observe clustered visible gold. Within a small volume the size of a man's fist, it was not rare to see up to four or five coarse gold particles. If it had been possible to know how many of these particles were hiding inside the core, not visible, it is not unreasonable to think that a cluster of six or seven coarse gold particles could have been observed. The compounding effect of two nearly similar Poisson processes can totally ruin an existing exploration drilling program, and there are no statistics that can logically correct this problem.

Conclusions: This case is not rare in the world of gold mining. It is a wake-up call for economists who believe that information generated by NQ half-core drilling (cutting the core sample in halves greatly aggravates a problem that was already severe) followed by a 30-g fire assay is an industry standard. It is also a demonstration that the practice of always cutting high grades during exploration should be revisited and replaced by far more in-depth investigations. Furthermore, it was shown that isolated outliers should be treated with respect, as they are often an indication that a wrong grade control strategy is used. Even more, it shows that outliers showing up in the Final Tails of a CIL/CIP circuit are extremely valuable data, and their elimination is an irresponsible act in most cases: in such a circuit all the fine gold that is easy to sample has been leached; the only remaining gold is the gold that is likely to be extremely difficult to sample because of an insufficient subsample assaying mass. In the case of the 3840-g 1-meter NQ core sample, not only was the diameter of the core too small in the first place, but the fact that it was routinely cut in halves was not a very good idea either. It should be noted that a full 1-meter HQ core sample weighing about 8.3 kg would allow the geologist to move from very bad to bad, which is an improvement. The killing factor is definitely the 30-g fire assay. Some people would argue that if the material had been pulverized for 5 minutes or more in a large puck pulverizer, the coarse gold particles would not exist anymore; this is precisely the point: they indeed slowly disappear for good by smearing on the walls and puck of the pulverizer, which is obviously a much worse problem. Last but not least, this was an example for gold, but when investigating the content of trace constituents at the ppm or even ppb level, it is very likely that similar situations occur, and this problem is often vastly overlooked, with all its economic consequences.


15.10. Recommendations

In any gold project showing visible gold, do only a limited amount of diamond core drilling, just enough to define a good geological model. As far as the definition of the gold grade is concerned, it is definitely better to use much larger samples obtained by face-return Reverse Circulation drilling. The advantage of Reverse Circulation drilling is twofold:
1. The mass of the sample is larger by one order of magnitude.
2. The gold clusters are dismantled, and the gold particles, liberated or not, are redistributed in the entire sample mass, as they should be.
The use of face-return Reverse Circulation is emphasized as the only valid Reverse Circulation drilling technique (i.e., RC) for gold exploration, because of the much more limited possibility of down-hole contamination and the better recovery. It is also important to mention that the receiving cyclone should have an efficiency superior to 99% recovery; old cyclones losing too many fines are out of the question. Let's review three possible cases:

15.10.1. Case #1: Maximum Gold Particle Size below 80 µm

In this case conventional sample preparation is sufficient. Automated sample preparation facilities are highly recommended.

15.10.2. Case #2: Maximum Gold Particle Size between 80 and 200 µm

Crush the sample to minus 1.7 mm. Split a 1000-g sample using a rotary sample divider. Pulverize the 1000-g sample to about 95% minus 106 µm. Process the 1000-g sample in a gravity gold concentrator. Fire assay the entire concentrate. A 10% split of the tailing is obtained, which must be dried and assayed using a 30-g fire assay.

15.10.3. Case #3: Maximum Gold Particle Size between 200 and 1000 µm

Crush the sample to minus 1.7 mm. Split a 5000- to 10,000-g sample using a rotary sample divider. Pulverize the 5000- to 10,000-g sample to about 95% minus 106 µm. Process the entire sample in a gravity gold concentrator. Fire assay the entire concentrate. A 5% split of the tailing is obtained, which must be dried and assayed using a 30-g fire assay.

16 From Links between Gy and Ingamells to a Sampling Strategy

16.1. Summary

The links between Gy and Ingamells are straightforward. In a first, preventive approach, Gy offers many tools to optimize sampling protocols and to implement them in a correct and logical way. Gy's approach was to prevent the presence of a Poisson process at all cost. Once, the author asked Pierre Gy: "Pierre, what happens if the presence of a Poisson process is almost inevitable?" The answer was quick and simple: "Francis, we cannot help them!" In a second approach, Ingamells offers ways to quantify and identify the sources of variability in an existing database. Furthermore, when no preventive approach as suggested by Gy has been implemented, the study of existing data as suggested by Ingamells will most certainly demonstrate how devastating such an adventurous shortcoming can be. In other words, by using the works of Gy and Ingamells simultaneously, any auditor can proceed with confidence to determine whether the involved company is duly diligent in its practices. Ingamells' motto was: "If the presence of a Poisson process is inevitable, the very least you can do is understand it, and be very clear about the implications, and especially the economic consequences." All this may also have devastating consequences in the environmental sampling of low-content pollutants, because a Poisson process would declare many areas safe when it is not true, and also declare a few areas highly polluted when it is not true either. Furthermore, this statement is also true for the safety of food and pharmaceutical products; this alone is food for thought.

16.2. Discussions, Conclusions, and Recommendations for Future Work

TOS stands on a very solid theoretical foundation, and its applications are universal. However, as clearly shown in Chapters 14 and 15, sampling practitioners using the TOS must be careful not to enter the domain of Poisson processes, which very quickly becomes dangerous and is too often misunderstood. In such a domain, TOS can only tell the practitioner that the database that will be generated is almost useless. Therefore, this chapter suggests the following due diligence strategy:


16.2.1. The Wisdom of Prevention for Due Diligence

Pierre Gy's Theory of Sampling is totally preventive in nature. TOS instigates a categorical imperative: sampling processes, sampling procedures, and sampling equipment must be unbiased. There is absolutely no room for any compromise in this matter. One could call this Pierre Gy's paradigm.

16.2.1.1. Step #1

Always make sure the implementation of the sampling protocol that is going to be used is flawless with respect to the fundamental principles of sampling correctness, as suggested in Part VII of this book. This is best achieved by eliminating the biases generated by IDE, IEE, IWE, and IPE. If this condition is not fulfilled, sampling becomes an exercise in futility. Nobody can achieve due diligence in sampling without an unconditional commitment to respect the rules of sampling correctness; nobody can ignore the economic implications of this statement.

16.2.1.2. Step #2

Perform a thorough microscopic investigation of the material to be sampled to quantify the appropriate sample mass with respect to well-defined Data Quality Objectives. Many of Gy's approaches to minimizing the variance of FSE, detailed in Part IV of this book, provide a basis to start a project correctly.

16.2.1.3. Step #3

Minimize the variance of GSE by taking the three preventive actions that minimize the three factors involved in this variance, as suggested in Chapter 13, and especially Section 13.1.

16.2.1.4. Step #4

Optimize field or plant sampling frequency using either Geostatistics or Chronostatistics, as suggested in Part X of this book. The TOS covers that subject to a great extent.

16.2.1.5. Step #5

Using the existing data generated according to steps 1–4, verify that due diligence was applied.

16.2.1.6. Step #6

Using the existing data generated according to steps 1–4, calculate the low background content aH, easy to sample, of the constituent of interest. This information may have important geometallurgical and economic implications.

16.2.2. Difficult Cases Where Good Prevention Is, in Appearance, Not Realistic

16.2.2.1. Step #1

Always respect the fundamental rules of sampling correctness as explained earlier. This step is not negotiable.


16.2.2.2. Step #2

Perform a thorough microscopic investigation of the material to be sampled to quantify the appropriate sample mass with respect to well-defined Data Quality Objectives. Proceed with a Visman experiment, calculate the Ingamells parameters, and draw an informative Ingamells sampling diagram.

16.2.2.3. Step #3

Executive managers must review the information and make a feasibility study to allocate much more money to a wiser and necessary approach using Gy's requirement to minimize the variance of FSE. Someone may ask: why not start with this in the first place? The answer is an economic one, and the Visman experiment and the Ingamells sampling diagram provide all the necessary information to perform the necessary feasibility study at a minimum cost.

16.2.3. After-the-Fact Noncompliance with Due Diligence

16.2.3.1. Step #1

Following an audit, make a full assessment of due diligence with respect to sampling correctness. If noncompliance is found, everything stops right there, as the generated data are not statistically valid and totally useless. If compliance with respect to sampling correctness is found, proceed with the following steps.

16.2.3.2. Step #2

Using existing data, proceed with the Visman experiment and calculate the Ingamells sampling diagram. Determine what is wrong with the existing data and outline the consequences.

16.2.3.3. Step #3

Executive managers must review the information and make a feasibility study to allocate much more money to a wiser and necessary approach using Gy's requirement to minimize the variance of FSE, and to comply with sampling correctness if this has been found to be the major issue.

16.3. Visman's and Ingamells' Works Help Justify an Augmented Gy Approach

The suggestion of this strategy is simple: use Visman experiments and Ingamells sampling diagrams as a selling tool to justify a wiser approach as suggested by the existing TOS. In 40 years of experience as a consultant, there have been many occasions when an important company executive would say, "give me the facts, then I will find the necessary funds to fix the problems." Using existing data, it is very compelling to use the approach suggested in this chapter to demonstrate the absolute necessity of always being preventive using TOS. For more information on the economic consequences of a lack of diligence with correct sampling, it is beneficial to read the Carrasco case studies.81–83


Part VI

The In-Situ Nugget Effect: A Major Component of the Random Term of a Variogram


17 The In-Situ Nugget Effect: A Transition between Geostatistics and the Theory of Sampling

17.1. Summary

Exploration and ore grade control programs often face great difficulties with minerals occurring as isolated nuggets, clusters of particles, or clusters of veinlets, leading to rapid changes over a very short distance. The random term of a variogram is made of many components, and the variance of the In-Situ Nugget Effect is only one of them. The original Theory of Sampling, created for sampling particulate materials, did not address the uncertainty created by the selection of a certain mass for an unbroken module of observation during diamond core, reverse circulation, or blasthole drilling. If such uncertainty is not well addressed, it can become a major source of sampling error. It is therefore critically important to include this new error in the Theory of Sampling, where it belongs, as it most certainly has an impact on the selection of a drilling diameter. In this chapter we explore the properties of this error, how it occurs, how to quantify it, and what we can do to minimize its negative effects. This error should not be confused with what geostatisticians refer to as the Nugget Effect in a variogram, which includes the In-Situ Nugget Effect, Fundamental Sampling Errors, Grouping and Segregation Errors, and the random component of Increment Delimitation Errors, Increment Extraction Errors, Increment Preparation Errors, Increment Weighting Errors, and Analytical Errors. It is of the utmost importance to clearly differentiate all these errors, so pragmatic solutions can be found to create a reliable database for geologists, miners, environmentalists, and geostatisticians. Unless such differentiation is effectively implemented, the true properties of the In-Situ Nugget Effect will remain elusive and a source of major frustration for economists assessing the feasibility of new projects, or executives in charge of optimizing existing ones. Solutions are suggested, opening the door to new research and a more in-depth exploration of a problem that can easily undermine the economics of new projects, the selection of an economic ore grade cutoff, and the long-term performance of an entire mining operation. The In-Situ Nugget Effect also affects the reliability of environmental assessments and the behavior of impurities in high purity products used in advanced technology.


17.2. Scope

The Nugget Effect, better referred to as the random variance shown in a variogram, has been the object of many studies by sampling experts and geostatisticians.26,127 However, this random variance is made of many components, as illustrated later in Figure 17.5. The scope of this chapter is to focus attention on only one component, called the In-Situ Nugget Effect (INE). The Theory of Sampling, which was developed exclusively for particulate materials, says nothing about the In-Situ Nugget Effect taking place before the material has been broken up into small fragments. Ingamells addresses the problem but does not provide material specific enough to include it in the Theory of Sampling in an unambiguous way. The In-Situ Nugget Effect has great implications for mineral economics. Therefore, the following material provides the necessary knowledge to quantify its variance and to provide early warnings to economists in charge of feasibility studies.

17.3. Definitions and Notations

The length of this chapter being limited, the reader is referred to the beginning of the book for definitions and notations. However, a few new definitions and notations need to be introduced. The notation for the In-Situ Nugget Effect, which is a new error, is INE. The maximum size of a nugget (e.g., gold), defined as the size opening of a screen that would retain no more than 5% of the total gold content of the investigated material, is denoted dm. The Low Background Content aH is defined as the amount of the constituent of interest that is finely disseminated in the material and easy to sample. The term Nugget Effect was first introduced in the mining industry when studying gold deposits containing large gold nuggets. It is well known that gold nuggets do not comminute because of the high malleability of gold. This is true for other precious metals, native copper, and soft minerals such as galena or molybdenite. In this chapter, the concept of Nugget Effect is vastly expanded to similar cases created by micro-concentrations of minor constituents or impurities, such as arsenic or antimony in some copper deposits, or PCB concentrations in soil where environmental sampling is requested.

17.4. In-Situ Constitution Heterogeneity

17.4.1. A Special In-Situ Structural Case

Often, geologists and even geostatisticians explain an isolated high grade (e.g., gold) as the intersection by the drilling machine of a high-grade zone that must show some kind of continuity. Such a case does indeed exist, and in it, it is intuitive that the sample mass (i.e., drilling diameter) should make no difference. The concept is illustrated in Figure 17.1. This case is not the object of the discussions presented in this chapter and does not present any theoretical difficulty. However, if veins are affected by isolated, limited, random enlargements, as illustrated in Figure 17.2, then a significant In-Situ Nugget Effect takes place.


FIGURE 17.1 Illustration of a structural case where the drilling diameter is irrelevant.

FIGURE 17.2 Illustration of a structural case combined with In-Situ Nugget Effect.


It is critically important to distinguish the two cases clearly. Another conceptual view of the real In-Situ Nugget Effect is given in Figure 17.3, relative to various diamond core drilling diameters: a small diameter may often show no gold, while a much larger diameter may almost always show gold.

17.4.2. Nugget Effect and In-Situ Nugget Effect

Difficulties arise when the isolated high grade is the result of intercepting a large nugget, a cluster of small nuggets, a cluster of tiny, discontinuous veins, or lumps, leading to unpredictable, rapid changes over a very short distance. Examples of this are very common with precious metals and with other minerals containing arsenic, copper, molybdenum, or chromium. But they are also common in many other cases, such as alumina and silica in iron ore, PCB concentrations in soil, or lead and mercury contamination. The In-Situ Nugget Effect is a troublesome phenomenon resulting in a large error in the estimation process. Because the error is usually very large, it also introduces skewness in the random process, making prediction of grade by interpolation for some areas almost impossible, and most certainly misleading. In this chapter INE is called an error. Someone may object and call it an uncertainty. If the parties involved are fully aware of how INE takes place, because a thorough investigation was followed by the deliberate choice of an optimized mass for the module of observation, the word uncertainty is indeed more appropriate. However, because in a large majority of cases the parties involved were not aware of the subtle nature of INE, did not investigate it in depth, and did not optimize the necessary mass of the basic module of observation, the word error is more appropriate. The author adopted the convention of calling INE an error; however, the reader must keep the above discussion in mind.

FIGURE 17.3 Illustration of the In-Situ Nugget Effect relative to different diamond core drilling diameters.


17.4.3. Estimation of the Variance of the Nugget Effect

Continuity refers to the smoothness of the transition between very closely spaced samples.178 Such continuity is measured by the rate at which V[j] grows near the origin of the variogram for a given sampling lag j, as illustrated in Figure 17.4. It is intuitive that V[j] should be zero at the origin j=0 of the variogram; it is rarely the case in practice. V[j=0], simplified to V[0] in the text, is called the Nugget Effect by geostatisticians, and the variance of INE is only part of it, provided none of the high, isolated grades have been cut. A numerical sketch of the estimation of V[0] is given after Section 17.4.4.

17.4.4. Components of the Variographic Random Variance

Again, for readers not familiar with the Theory of Sampling there is a subtlety: the variance of the In-Situ Nugget Effect INE is not the Nugget Effect V[0] shown in Figure 17.4; it is only part of it. It is critically important to keep in mind that the Nugget Effect has many components and should correctly be called the random term of a variogram. Figure 17.5 shows a list of these components under the assumption that a complete drilling/sampling/analytical protocol has n sampling stages. The Nugget Effect V[0] includes the variances of the following three major components.

17.4.4.1. Errors Introduced by the Selected Sampling Protocol

They include:
• The selection of a basic field module of observation characterized by a given volume that is fixed for a given drilling technique. This leads to the In-Situ Nugget Effect characterized by its variance $S^2_{INE}$. This variance alone is the object of the study in this chapter.
• The selection of n sampling stages characterized by a given state of comminution and an optimum sample mass. This leads to the sum of the variances $\sum_n S^2_{FSE_n}$ of the Fundamental Sampling Errors.

FIGURE 17.4 Illustration of the Nugget Effect V[j=0] on a variogram (V[j] versus lag j): in theory, V[j=0] = 0; in practice, V[j=0] > 0.


• The selection of n sampling stages characterized by a strategy selected to minimize the effects of segregation. This leads to the sum of the variances $\sum_n S^2_{GSE_n}$ of the Grouping and Segregation Errors.

The sum of all these variances leads to the variance $S^2_{QFE_1}$ of the entire protocol, illustrated in Figure 17.5.

17.4.4.2. Four Major Sampling Errors Introduced during the Practical Implementation of the Sampling Protocol at the n Sampling Stages

• The Increment Delimitation Errors (IDEn+)
• The Increment Extraction Errors (IEEn+)
• The Increment Preparation Errors (IPEn+)
• The Increment Weighting Errors (IWEn).

It is important to mention that IDE, IEE, and IPE can take place in several ways for a single sampling stage, which explains the + sign in the above notations. Therefore, along a sampling protocol characterized by n sampling stages and the actual taking of the basic module of observation, IDE, IEE, and IPE can take place more than n+1 times. These errors are dangerous: they are the sampling bias generators, and this is the reason we strongly emphasize an unconditional commitment to sampling correctness, carefully studied in Part VII.

17.4.4.3. The Analytical Error AE

The Analytical Error, which itself can be the sum of many errors, as shown by Ingamells and Pitard,22 is a separate concern; nevertheless, it is part of V[0].
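Because all these components add up inside V[0], the practical estimation of V[0] reduces to computing the experimental relative variogram of a series of chronological assays and extrapolating it back to the origin. The following Python sketch is illustrative only: the function names and the straight-line extrapolation over the first three lags are choices made for this example, not prescriptions from the TOS literature.

```python
import numpy as np

def relative_variogram(assays, max_lag):
    """Experimental relative variogram V[j] of a series of assays.

    V[j] = sum((h[i+j] - h[i])^2) / (2 * (n - j)), where h[i] is the
    relative deviation (a[i] - mean) / mean, so V[j] is dimensionless.
    """
    a = np.asarray(assays, dtype=float)
    h = (a - a.mean()) / a.mean()
    n = len(h)
    return np.array([np.sum((h[j:] - h[:-j]) ** 2) / (2.0 * (n - j))
                     for j in range(1, max_lag + 1)])

def extrapolate_v0(v, lags_used=3):
    """Crude Nugget Effect estimate: intercept at j = 0 of a straight
    line fitted through the first few experimental variogram points."""
    lags = np.arange(1, lags_used + 1)
    slope, intercept = np.polyfit(lags, v[:lags_used], 1)
    return max(intercept, 0.0)  # a variance cannot be negative

# Fictitious example: 100 hourly assays with a cycle plus random noise;
# the noise plays the role of the many components bundled inside V[0].
rng = np.random.default_rng(7)
assays = 1.2 + 0.1 * np.sin(np.arange(100) / 8.0) + rng.normal(0, 0.05, 100)
v = relative_variogram(assays, max_lag=10)
print("V[0] estimate:", extrapolate_v0(v))
```

Note that V[0] estimated this way bundles together every component listed above; the variogram alone cannot separate $S^2_{INE}$ from the others.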

FIGURE 17.5 The many components of the Nugget Effect V[0]: $S^2_{INE}$; $\sum_n S^2_{FSE_n}$ and $\sum_n S^2_{GSE_n}$, which together make up $S^2_{QFE_1}$; $\sum_n S^2_{IDE_n}$; $\sum_n S^2_{IEE_n}$; $\sum_n S^2_{IWE_n}$; $\sum_n S^2_{IPE_n}$; and $S^2_{AE}$.


17.5. Theoretical Approach

17.5.1. Revisiting Gy’s Formulas

In the Theory of Sampling it is demonstrated that, for a liberated constituent of interest, the surrounding gangue is practically irrelevant to the calculation of the necessary mass of a sample, as explained in Section 11.5 of Part IV. A similar approach can be used to calculate the necessary mass of the in-situ observation module, or typical sample mass. The difficulty is the fact that there are no fragments: in the following analysis, the role of the fragments is played by pure particles of the constituent of interest, or cluster equivalents of these particles, and by imaginary tiny volumes of gangue with about the same size distribution as the constituent of interest particles or their cluster equivalents. A cluster equivalent is defined in Figure 17.7. The Constitution Heterogeneity of an in-situ, undisturbed, and unbroken lot (CHL) can be expressed as follows:

$$ CH_L = N_F \sum_i \frac{(a_i - a_L)^2}{a_L^2} \, \frac{M_i^2}{M_L^2} \qquad (17.1) $$

where NF is the total number of constituent of interest particles or cluster equivalents, and of surrounding tiny volumes of gangue of about the same size; ai is the constituent of interest content of its own particles or cluster equivalents, which is one by definition, and of the surrounding tiny volumes of gangue of about the same size, which is zero by definition; aL is the unknown true content of the constituent of interest in the lot; Mi is the mass of one constituent of interest particle or cluster equivalent, or of one surrounding imaginary tiny volume of gangue of about the same size; and ML is the mass of the lot. All masses are expressed in grams. All contents are dimensionless, expressed as part of one. CHL is expressed as a relative, dimensionless variance. To eliminate the task of estimating NF, a new, more pragmatic term IHL, called the Intrinsic Constitution Heterogeneity, is defined:

$$ IH_L = CH_L \, \frac{M_L}{N_F} = CH_L \, \overline{M_i} = \sum_i \frac{(a_i - a_L)^2}{a_L^2} \, \frac{M_i^2}{M_L} \qquad (17.2) $$
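As a numerical companion to equation (17.2), the following Python sketch evaluates CHL and IHL for a small, fictitious population of units (here, pure mineral grains plus barren gangue volumes); every number in the example is invented for illustration.

```python
import numpy as np

def chl_ihl(contents, masses):
    """Constitution Heterogeneity CHL and Intrinsic Constitution
    Heterogeneity IHL of a lot described unit by unit, per equations
    (17.1) and (17.2).

    contents -- a_i, content of each unit, as part of one
    masses   -- M_i, mass of each unit, in grams
    """
    a = np.asarray(contents, dtype=float)
    m = np.asarray(masses, dtype=float)
    ml = m.sum()                      # M_L, mass of the lot
    al = np.dot(a, m) / ml            # a_L, true content of the lot
    nf = len(a)                       # N_F, number of units
    chl = nf * np.sum(((a - al) ** 2 / al ** 2) * (m ** 2 / ml ** 2))
    ihl = chl * ml / nf               # IHL = CHL * average unit mass
    return chl, ihl

# Three pure mineral grains (a_i = 1) lost in a barren gangue (a_i = 0)
contents = [1.0, 1.0, 1.0] + [0.0] * 997
masses = [0.002, 0.003, 0.001] + [0.005] * 997
chl, ihl = chl_ihl(contents, masses)
print(f"CHL = {chl:.1f} (dimensionless), IHL = {ihl:.4f} g")
```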

The estimation of IHL can be greatly simplified if the lot is arranged as a sum of size fractions and density fractions. It is the same lot. Now, IHL becomes:

$$ IH_L = \sum_\alpha \sum_\beta V_\alpha \, \rho_\beta \, \frac{(a_{\alpha\beta} - a_L)^2}{a_L^2} \, \frac{M_{L\alpha\beta}}{M_L} \qquad (17.3) $$

where Vα is the average volume of one constituent of interest particle, or cluster equivalent, or imaginary tiny volume of gangue in a given size fraction α; ρβ is the density of a given density class β; aαβ is the constituent of interest content of one size-density fraction; and MLαβ is the weight of one size-density fraction. An approximation can be made with the two hypotheses already mentioned in Chapter 10, which leads to a friendlier approach.


" IHL ¼

X α

MLα Vα ML

#" X β

#

2 aβ  aL MLβ ρβ aL 2 ML

(17:4)

In situ, where the material is unbroken and undisturbed, there are no particles or cluster equivalents of any relevance except those of the constituent of interest. Hypothesis 1 made earlier is still valid, while hypothesis 2 is necessarily true by definition of the imaginary volumes of gangue introduced earlier. We may rewrite equation (17.4) as follows:

$$ IH_L = \sum_\alpha f_\alpha \, d_\alpha^3 \sum_\beta \rho_\beta \, \frac{(a_\beta - a_L)^2}{a_L^2} \, \frac{M_{L\beta}}{M_L} \qquad (17.5) $$

Let’s develop this relation for the density class ρg of the gangue and for the density class ρm of the constituent of interest. Let’s also call ε1 the infinitesimal constituent of interest content of the gangue and ε2 the infinitesimal weight of constituent of interest in the gangue:

$$ IH_L = \sum_\alpha d_\alpha^3 \left[ f_g \, \rho_g \, \frac{(\varepsilon_1 - a_L)^2 \, \varepsilon_2}{a_L^2 \, M_L} + f_m \, \rho_m \, \frac{(a_m - a_L)^2 \, M_{Lm}}{a_L^2 \, M_L} \right] \qquad (17.6) $$

Obviously, the first term of the sum is negligible when compared to the second one. Furthermore, am = 1 by definition, and aL is usually very small. Therefore, equation (17.6) simplifies as follows:

$$ IH_L = \sum_\alpha d_\alpha^3 \, f_m \, \frac{\rho_m}{a_L^2} \, \frac{M_{Lm}}{M_L} \qquad (17.7) $$

Also, by definition:

$$ \sum_\alpha \frac{M_{Lm\alpha}}{M_L} = a_L \qquad (17.8) $$

Thus, we obtain the very useful simplified formula:

$$ IH_L = f_m \, g_m \, d_m^3 \, \frac{\rho_m}{a_L} \qquad (17.9) $$

where fm is a particle or cluster equivalent shape factor, and gm is the usual size distribution factor taking into account that the constituent of interest particles, or cluster equivalents, do not all have the same size; usually gm = 0.25. The size dm, as shown in Figure 17.6 and Figure 17.7, is defined as the size opening of a screen that would retain no more than 5% of the constituent of interest, as applied to particles or to cluster equivalents. Now, the variance $S^2_{INE}$ of the In-Situ Nugget Effect can be calculated as follows:




$$ S^2_{INE} = \left( \frac{1}{M_S} - \frac{1}{M_L} \right) \frac{f_m \, g_m \, \rho_m \, d_m^3}{a_L} \qquad (17.10) $$

However, the mass ML of the lot is often almost infinite relative to the mass MS of the sample (i.e., basic module of observation) that will be extracted from the ground. As a result, equation (17.10) can be simplified to its final form:

$$ S^2_{INE} = \frac{f_m \, g_m \, \rho_m \, d_m^3}{a_L \, M_S} \qquad (17.11) $$

For gold, fm = 0.2, gm = 0.25, and ρm = 16 (in practice, native gold often alloys with some other metals), so useful sampling nomographs can be calculated with the following formula:

$$ S^2_{INE} = \frac{0.8 \, d_{Au}^3}{a_L \, M_S} \qquad (17.12) $$

Similar derivations can be performed for other minerals of interest, leading to their own versions of equation (17.12). It is interesting to notice that we obtain the same formula as the one used for liberated minerals, where the gangue content is irrelevant: it was intuitive that it should happen that way, but it is now proven. A little subtlety: in equation (17.12) it would be incorrect to state that the liberation factor is 1; there is no liberation factor. By using these formulas, valuable nomographs can be calculated to optimize sampling protocols or to learn about the limitations of conventional exploration.
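To make the use of equations (17.11) and (17.12) concrete, here is a minimal Python sketch; the function names are invented for this example, and the final line simply checks one point of the gold formula.

```python
def s2_ine(fm, gm, rho_m, dm_cm, a_l, ms_g):
    """Relative variance of the In-Situ Nugget Effect, equation (17.11).

    fm, gm -- shape and size distribution factors (dimensionless)
    rho_m  -- density of the mineral of interest (g/cm3)
    dm_cm  -- maximum nugget or cluster equivalent size d_m (cm)
    a_l    -- expected content a_L, as part of one
    ms_g   -- sample mass M_S (g)
    """
    return fm * gm * rho_m * dm_cm ** 3 / (a_l * ms_g)

def minimum_sample_mass(fm, gm, rho_m, dm_cm, a_l, s_ine):
    """Minimum M_S (g) so that the relative standard deviation of INE
    does not exceed s_ine; equation (17.11) solved for M_S."""
    return fm * gm * rho_m * dm_cm ** 3 / (a_l * s_ine ** 2)

# Gold, equation (17.12): fm * gm * rho_m = 0.2 * 0.25 * 16 = 0.8.
# A 25-kg sample, 1.5 g/t gold, d_m = 0.1 cm: relative SD of about 15%.
print(s2_ine(0.2, 0.25, 16.0, 0.1, 1.5e-6, 25000.0) ** 0.5)
```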

17.5.2. Estimation of the Maximum Size of Mineral of Interest Particles or Their Cluster Equivalents

The great difficulty in estimating the variance of the In-Situ Nugget Effect INE is to have a precise and accurate idea of the maximum size dm of the mineral of interest particles or their cluster equivalents. There are two different cases.

17.5.2.1. Particles of the Mineral of Interest Do Not Cluster

This is the easy case. Particles of the mineral of interest are disseminated everywhere within a nonmineralized gangue, such as gold in a quartz vein, as illustrated in Figure 17.6. These particles are distributed in such a way that a selected module of observation (e.g., a 2-meter HQ diameter core sample) has the maximum chance of intercepting one or several particles within an ore block where the presence of the mineral of interest is expected. In this case the estimation of dm is relatively simple: collect a 300-kg composite sample made of at least fifty 6-kg increments collected at random within a single, potentially mineralized geological unit. The 300-kg mass of this composite is only indicative and can be optimized using the existing knowledge of the geologist: iteration of knowledge is the essence of exploration. Crush the composite to about 95% minus 1.25 cm. Screen the material on a 0.17-cm screen. Wash the +0.17-cm fraction. Search for liberated particles of the mineral of interest in this coarse fraction. Recombine the +0.17-cm material with the –0.17-cm fraction. Repeat the same exercise after crushing the composite to 95% minus 0.17 cm, using a 0.0425-cm screen, and so on, until the size dm is estimated according to its definition given earlier.


FIGURE 17.6 Solid nonclustering gold nuggets in a large quartz vein. (Annotations: dm is the screen opening with 95% passing; the imaginary gangue volumes have dgangue = dm by definition.)


17.5.2.2. Particles of the Mineral of Interest Do Cluster

This case is far more complex, difficult to identify, and elusive to quantify, such as clustering gold particles in a quartz vein, as illustrated in Figure 17.7. Unfortunately, it is a frequent case, vastly underestimated and overlooked in most mining projects. Left unrecognized, the problem generated by the clustering effect may affect the economics of an entire project in stunning ways, for example:

• A gold deposit can be vastly underestimated, especially for its good ore tonnage resources.
• An impurity such as alumina in iron ore can be vastly underestimated, creating havoc with customers.
• An impurity such as arsenic in copper ore can be vastly underestimated, leading to very expensive penalties with smelters and possibly environmental litigation.


FIGURE 17.7 Clustering gold nuggets in a large quartz vein. (Annotations: dm is the size of a cluster equivalent, with 95% passing a certain screen opening; the imaginary gangue volumes have dgangue = dm by definition.)

• A by-product such as molybdenite in copper ore can be vastly underestimated, leading to lost economic opportunities and to the mismanagement of natural resources, and therefore to possible governmental penalties.

Isolated, unexplained high grades are the red flag, yet many people refuse to read the important message and are tempted to dismiss them as outliers: this is probably the greatest mistake ever made in the name of practicality during exploration, ore grade control, and ore reserve estimation. Careful diamond core observation can provide early clues about the magnitude of the clustering effect. It should be emphasized that it is impossible to quantify the clustering effect when using destructive Reverse Circulation exploration drilling; the problem is so severe that it could invalidate this exploration technique in many cases, as critically important problems will never be identified for the feasibility study. The observation of many diamond core samples can lead to the calculation of the size dm of an imaginary cube that could contain all the neighboring particles of the constituent of interest, as illustrated in Figure 17.7; dm would be defined, again, as the size opening of a screen that would retain no more than 5% of all the cluster equivalents observed in a given geological unit.


FIGURE 17.8 Estimation of the size dm of a cluster equivalent cube.

As anyone may notice by inspecting Figure 17.7, the smaller the drilling diameter, the more likely it is that someone will underestimate the size dm. It is the author’s experience that the visual inspection of blasted material or of walls in an open pit leads to a more realistic estimate of dm. Nevertheless, everywhere a cluster is noticed in a diamond core sample, isolate a 5-, 10-, or 15-cm block, or whatever is needed to cover what appears to be a cluster. Measure its volume and mass. Crush, pulverize, and assay it separately, which will provide the necessary information to calculate an estimated value for dm. The concept of cluster equivalents is further illustrated in Figure 17.8.
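As a crude numerical companion to this procedure (the arithmetic below is an assumption of this sketch, not a formula given in the text): once an isolated block has been weighed and assayed, the mass of contained mineral follows from the block mass and grade, and the side of the equivalent cube follows from the mineral density.

```python
def cluster_equivalent_dm(block_mass_g, block_grade, rho_mineral):
    """Side d_m (cm) of the imaginary cube that would hold all the
    mineral found in an isolated block cut around a cluster.

    block_mass_g -- mass of the block (g)
    block_grade  -- assayed content of the block, as part of one
    rho_mineral  -- density of the mineral of interest (g/cm3)
    """
    mineral_mass = block_mass_g * block_grade   # grams of mineral
    volume = mineral_mass / rho_mineral         # cm3 of mineral
    return volume ** (1.0 / 3.0)                # side of equivalent cube

# A 2000-g block of core cut around a visible cluster, assaying
# 250 g/t gold (250e-6 as part of one), gold density about 16 g/cm3:
print(f"dm = {cluster_equivalent_dm(2000.0, 250e-6, 16.0):.2f} cm")
```

The value of dm retained for equation (17.11) is then the screen opening that would retain no more than 5% of all the cluster equivalents estimated this way in a given geological unit.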

17.6. The Mean of the In-Situ Nugget Effect and Local Biases

The importance of the following discussion cannot be emphasized strongly enough, as it has huge economic implications. Exactly like the Fundamental Sampling Error and the Grouping and Segregation Error, the In-Situ Nugget Effect has a mean equal to zero, and should therefore introduce no bias in the long run in a very large database. This statistical reality, common to random variables, is too often an invitation to consider these sources of uncertainty as benign and of limited economic consequence in a mining project; nothing could be further from reality. First, let’s not forget that when random sources of uncertainty are pushed beyond their normally distributed limits because the sample mass is one or several orders of magnitude too small, they often generate data highly skewed toward low grades (i.e., a Poisson process). Second, at any given point of a deposit we must often determine whether the material is ore or waste on the basis of one or, at most, very few data. Therefore, at any given point, we don’t have the luxury of assuming that our decision is centered on an uncertainty mean equal to zero. Indeed, when it takes tens, hundreds, or even thousands of samples replicated at any given place to correct for the skewness toward low grades, chances are very high that most of the time we have a significant bias toward a low grade, and only on rare occasions a huge bias toward a high grade. Common statistical techniques, such as averaging or kriging, can correct for such an anomaly, but often, with trace elements, they go only part of the way. Conclusion: any time a large In-Situ Nugget Effect can be identified, the few data available at any given place of the deposit are far too weak to create an unbiased estimate of the true local unknown content accompanied by a reliable and predictable uncertainty statement. Anyone failing to understand this subtlety, generated by too few, too small samples, is heading for severe economic difficulties during exploitation, usually translated into worrisome reconciliation problems.


17.7. The Low-Background Constituent of Interest Content

The constituent of interest may be distributed in situ in several ways, which brings an additional difficulty to address. A certain proportion of that constituent may be finely disseminated everywhere within the mineralization and be relatively easy to sample: this proportion is defined as the Low Background Content aH. As a result, if aL is the expected average content of the constituent of interest, only the amount aL − aH is difficult to sample and affected by the In-Situ Nugget Effect. Figure 17.9 illustrates the problem: circles represent the drilling diameter; little dots represent the low background gold content aH; large gold nuggets represent aL − aH. The circles rarely have a chance to account for large gold nuggets or clusters in the ore blocks where they are supposed to be accounted for, while they have no problem accounting for the low background content.

FIGURE 17.9 Illustration of the Low Background Content aH.


17.7.1. Estimation of the Low Background Content of the Constituent of Interest

Ingamells and Pitard22 suggested several approximate ways to estimate aH, which are further explored in Chapters 14 and 15. The following methods are an over-simplification of nature and have their limitations.

17.7.1.1. Method #1

A quick method is to look at inflexion points in the histogram of many assays generated by many small samples collected from a large composite representing a given geological unit, as illustrated in Figure 17.10, which shows the histogram of assays generated for a Heterogeneity Test. The value of aH is estimated by the lower inflexion point, around 1.40 g/t gold. The second inflexion point, around 3.8 g/t gold, which is more ambiguous, shows the higher frequency of a given size of gold particles.

17.7.1.2. Method #2

This method is more complex but more accurate. It is easy to implement when two series of samples with different masses MS1 and MS2 are likely to yield assays with two differently skewed distributions. As the sample mass diminishes, assay results become distributed less symmetrically: the mean aL and the mode γ move farther apart. Formula 14.33 suggested in Chapter 14 is used with two different masses MS1 and MS2 and two different modes γ1 and γ2. The estimation of γ1 and γ2 is made either by a visual inspection of the respective histograms of results from small samples and large samples, as shown in Figure 17.11, or by calculating the harmonic means of these results.

FIGURE 17.10 Estimation of aH: searching for inflexion points on the histogram of assays from small samples.
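A rough numerical reading of Method #1 can be automated. The sketch below is only illustrative: the kernel density smoothing and the search for the first curvature sign change above the mode are choices made for this example, not part of Ingamells’ procedure, and the simulated assays are fictitious.

```python
import numpy as np
from scipy.stats import gaussian_kde

def first_inflexion_after_mode(assays, grid_points=200):
    """Crude a_H estimate per Method #1: smooth the assay histogram with
    a kernel density estimate, find its mode, then return the first
    point above the mode where the curvature changes sign."""
    x = np.linspace(min(assays), max(assays), grid_points)
    density = gaussian_kde(assays)(x)
    mode_i = int(np.argmax(density))
    d2 = np.gradient(np.gradient(density, x), x)   # numerical curvature
    for i in range(mode_i + 1, grid_points - 1):
        if d2[i] * d2[i + 1] < 0:                  # sign change: inflexion
            return x[i]
    return None

# Fictitious Heterogeneity Test assays (g/t gold): a well-sampled low
# background near 1.1 g/t plus a skewed tail from rare coarse gold.
rng = np.random.default_rng(3)
assays = np.concatenate([rng.normal(1.1, 0.2, 400),
                         rng.gamma(2.0, 2.0, 40) + 1.0])
print("a_H estimate (g/t):", first_inflexion_after_mode(assays))
```

On real data the result remains indicative only; the visual inspection of the histogram recommended in the text should always accompany such a calculation.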


17.7.2. The Most Probable Mineral of Interest Estimated Content

The Most Probable Result γ deserves a careful discussion. As illustrated in Figure 17.9 for gold, if the sample mass is too small by one or several orders of magnitude, the likelihood of estimating more than the low background gold content is very small. However, as the sample mass increases, as illustrated in Figures 17.11 and 17.12, the chance of including some coarse gold in the sample may drastically increase.

FIGURE 17.11 Estimation of the modes (i.e., most probable results) γ1 and γ2. (Axes: frequency versus g/t gold; aH is marked near the lower mode.)

FIGURE 17.12 As the sample mass increases, the Most Probable Result slowly converges toward the average expected gold content.


At one point, the sample mass becomes sufficient for the Most Probable Result γ to be near the expected average gold content aL. Formula 14.25 suggested in Chapter 14 is used to estimate the Most Probable Result γ as a function of the sample mass MS. In this equation, the sample mass MS refers only to the mass of a core sample, or module of field observation, assuming there are no further sources of uncertainty during subsampling and assaying. In other words, the focus is strictly on the basic module of observation in the field. An example is illustrated in Figure 14.7.

17.8. Case Study #1: Arsenic Impurity in a Copper Deposit

A porphyry copper deposit in Chile is drilled for its copper and arsenic contents. For the flotation process to deliver copper concentrates within regulatory arsenic specifications for the smelters, the mine must either eliminate or carefully blend any ore with an arsenic content higher than 100 ppm. The exploration program uses half 2-meter NQ diameter diamond core drilling samples. It is not rare to observe a cluster of arsenic minerals or neighboring tiny veins within a volume of less than 1 liter, more or less equivalent to an imaginary cube with each side dm about 5 cm.

17.8.1. Calculation of the Necessary Sample Mass

As an example, a reasonable relative standard deviation SINE for the In-Situ Nugget Effect should be no larger than 15% (i.e., SINE = 0.15). Equation (17.11) can be used with fm = 1 because an imaginary cube is considered, gm = 0.25 since all clusters are not equal in size, and ρAs = 4.5 for the average density of arsenic minerals. Since the dominant arsenic mineral is not known, it is recommended to use 100 ppm, expressed as part of one, for aL:

$$ M_S = \frac{1 \times 0.25 \times 4.5 \times 5^3}{0.000100 \times (0.15)^2} = 62{,}500{,}000 \text{ g, or about 63 tons} $$

A 63-ton sample mass is not practical to use. In a case like this, it is the message that counts. The message is that if a half 2-meter NQ diameter diamond core drilling sample weighs on average about 5000 g, then the arsenic content of a given area within a given geological unit, assumed to be around 100 ppm, will be known within 100 ± 15 ppm only after averaging N = 62,500,000/5000 = 12,500 samples. Such a conclusion is bad news for a project, because it clearly means only the content of very large zones can be known. It also means it would be a great mistake to make any attempt to select what is low arsenic from what is high arsenic. Such a selection would most certainly lead to a stunning failure. Figure 17.13 shows the calculated nomograph for arsenic. The X-axis shows the estimated size of the largest cluster equivalents of arsenic minerals. The Y-axis shows the minimum sample mass to be extracted from the ground to ensure a relative standard deviation %SINE of the In-Situ Nugget Effect no larger than 15%. The nomograph applies to this value of %SINE alone. Each diagonal line represents an expected average arsenic content expressed as part of one (e.g., 100 ppm is written 1.0x10-4). A solid line shows the minimum sample mass if the size of a cluster equivalent is 5 cm for 100 ppm arsenic, and a dashed line shows the minimum sample mass if the size of a cluster equivalent is 1 cm for 100 ppm arsenic.


FIGURE 17.13 Nomograph showing the necessary sample mass as a function of the arsenic mineral cluster equivalent size for three given arsenic contents.

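The arithmetic of this case study can be reproduced with the hypothetical `minimum_sample_mass` helper sketched in Section 17.5.1; the numbers below are those of the text.

```python
# Arsenic clusters: fm = 1 (imaginary cube), gm = 0.25, rho = 4.5 g/cm3,
# dm = 5 cm, a_L = 100 ppm = 1.0e-4, target relative SD of INE = 15%.
ms = minimum_sample_mass(fm=1.0, gm=0.25, rho_m=4.5, dm_cm=5.0,
                         a_l=1.0e-4, s_ine=0.15)
n_cores = ms / 5000.0   # half 2-m NQ core samples of about 5000 g each
print(f"M_S = {ms:,.0f} g ({ms / 1e6:.1f} t), N = {n_cores:,.0f} cores")
```

This prints M_S = 62,500,000 g (62.5 t) and N = 12,500 cores, matching the conclusion above.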

17.9. Case Study #2: Molybdenum in a Copper Deposit

A porphyry copper deposit in Chile is drilled to estimate its molybdenite content. The expected average molybdenite content is 0.040%. The exploration program uses half 2-meter NQ diameter diamond core drilling samples. It is not rare to observe a cluster of molybdenite grains or neighboring tiny veins within a tiny volume of less than 1 liter, more or less equivalent to an imaginary cube with each side dm about 1 cm.

17.9.1. Calculation of the Necessary Sample Mass

A reasonable relative standard deviation SINE for the In-Situ Nugget Effect should be no larger than 15% (i.e., SINE = 0.15). Equation (17.11) can be used with fm = 1 since an imaginary cube is considered, gm = 0.25 because all clusters are not equal in size, and ρMo = 4.7 for the average density of molybdenite. The expected molybdenite average content of 0.040% is written as a proportion of one, aL = 0.0004:

$$ M_S = \frac{1 \times 0.25 \times 4.7 \times 1^3}{0.0004 \times (0.15)^2} = 130{,}555 \text{ g, or about 131 kg} $$

A 131-kg sample mass is not practical to use. In a case like this, it is the message that counts. The message is that if a half 2-meter NQ diameter diamond core drilling sample weighs on average about 5000 g, then the molybdenite content of a given area within a given geological unit, assumed to be around 0.040%, will be known within ± 0.006% only after averaging N = 130,555/5000 ≈ 26 samples. Such a conclusion is important for the ore grade control engineer, as he must calculate a Moving Average involving at least 26 neighboring molybdenite results if ore misclassification is to be minimized. It also means that grade selectivity on too small a scale is likely to be a failure.


FIGURE 17.14 Nomograph showing the necessary sample mass as a function of the molybdenite cluster equivalent size for three given molybdenite contents.

Figure 17.14 shows the calculated nomograph for molybdenite. It is interesting to notice that a 3.5-kg sample, shown by the dashed line, cannot represent a cluster equivalent size larger than 0.3 cm.
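Nomographs such as Figures 17.13 through 17.15 are straight lines on log-log axes, since MS grows as the cube of dm at fixed aL and %SINE. A hedged matplotlib sketch follows; the axis ranges and the three contents plotted are choices made for this example, not the book’s.

```python
import numpy as np
import matplotlib.pyplot as plt

dm = np.logspace(-2, 1, 50)              # cluster equivalent size d_m, cm
for a_l in (1.0e-3, 4.0e-4, 1.0e-4):     # three molybdenite contents
    # Equation (17.11) solved for M_S, with fm = 1, gm = 0.25,
    # rho = 4.7 g/cm3, and a target relative SD of INE of 15%.
    ms = 1.0 * 0.25 * 4.7 * dm ** 3 / (a_l * 0.15 ** 2)
    plt.loglog(dm, ms, label=f"a_L = {a_l:.0e}")
plt.xlabel("cluster equivalent size d_m (cm)")
plt.ylabel("minimum sample mass M_S (g) for %S_INE = 15%")
plt.legend()
plt.show()
```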

17.10. Case Study #3: Coarse Gold and Clustering of Fine Gold

A coarse gold deposit in Brazil is drilled to estimate its natural resources. The expected average gold content is 1.50 g/t. The exploration program uses full 3-meter HQ diameter diamond core drilling samples. Visible gold is not rare. On some occasions, within a small volume no larger than 1 liter, several gold particles can be observed that add up to an equivalent single cubic particle with each side about 0.25 cm.

17.10.1. Calculation of the Necessary Sample Mass

A reasonable relative standard deviation SINE for the In-Situ Nugget Effect should be no larger than 15% (i.e., SINE = 0.15). Equation (17.11) can be used with fm = 1 since an imaginary cube is considered, gm = 0.25 because all clusters are not equal in size, and ρAu = 16 for the average density of gold. A larger density for gold is not realistic, as it is almost always alloyed with something else.


FIGURE 17.15 Nomograph showing the necessary sample mass as a function of the gold cluster equivalent size for three given gold contents.

The expected average content of 1.5 g/t is written as a proportion of one, aL = 0.0000015:

$$ M_S = \frac{1 \times 0.25 \times 16 \times 0.25^3}{0.0000015 \times (0.15)^2} = 1{,}851{,}851 \text{ g, or about 1.9 tons} $$

A 1.9-ton sample mass is not practical to use. In a case like this, it is the message that counts. The message is that if a full 3-meter HQ diameter diamond core drilling sample weighs on average about 25,000 g, then the gold content of a given area within a given geological unit, assumed to be around 1.5 g/t, will be known within ± 0.23 g/t only after averaging N = 1,851,851/25,000 ≈ 74 samples. Such a conclusion is important for the ore grade control engineer, as he must calculate a Moving Average involving at least 74 neighboring gold results if ore misclassification is to be minimized. It also means that grade selectivity on too small a scale is likely to be a failure. Figure 17.15 shows the calculated nomograph for gold. It is interesting to notice that a 25,000-g sample cannot represent a cluster equivalent size larger than 0.06 cm.

17.11. Consequences of a Large In-Situ Nugget Effect

The magnitude of the variance of the In-Situ Nugget Effect for a given sample mass extracted from the ground has great economic consequences for a project, from the exploration stage all the way through the life of the mining operation. If left unresolved, in due time several consequences are likely to take place:


• The mathematical modeling of the variogram to perform Geostatistics becomes an exercise in futility, because the random variability is overwhelming and introduces misleading skewness in the gold grade distribution. A few very high grades are created in this process, and people don’t know what to do with them.
• Kriging techniques can no longer work well, as they depend on a good mathematical modeling of the variogram with a reasonably small V[0].
• Unaware practitioners are likely to cut high grades because they don’t understand the subtle message behind them; then, of course, the variogram starts to make sense to them, but they are assessing only the resource that is easy to sample.
• Conciliation problems between the geological model, the mine ore grade control prediction, and the final truth reaching the plant become exacerbated through the use of wishful-thinking economic cutoff grades.
• The optimum recovery of natural resources becomes an elusive task, the economic consequences of which are staggering, only because the minimum sample mass required was never properly addressed.

17.12. Conclusions

The In-Situ Nugget Effect is a serious matter that does not receive the attention it deserves at an early stage of an exploration program, or during the actual mining with the selected ore grade control strategies. The main reason for this shortcoming is that most people are incapable of breaking a complex problem apart into its basic components, as recommended by the Theory of Sampling and as illustrated in Figure 17.5: the Theory of Sampling is not taught at most universities, so they were never told. Unless each component is looked at independently, one at a time, and the In-Situ Nugget Effect is only one of them, such basic sampling problems may remain elusive and unresolved for the lifetime of a project. This shortcoming should be the object of careful investigation by the stakeholders who determine the market value of the shares of a mining company: a good understanding of the In-Situ Nugget Effect can make the difference between economic failure and success for the mining company involved.

17.13. Recommendations

Solutions to problems generated by an excessive variance of the In-Situ Nugget Effect are not simple and often require expensive testing and costly strategies. There is no doubt the V[0] term of a variogram influences the kriging process in Geostatistics. If the support mass is too small by several orders of magnitude, then the kriging variance may be at work only for the low background content in many ore blocks. Therefore, if ignored, the negative consequences are inescapable and ultimately far more expensive for the company. The following list of basic recommendations may help resolve these difficulties:


• The selected drilling diameter must be as large as possible to approach the basic sample mass requirements. Address this issue carefully, as early as possible, during a drilling program.
• The shortcomings of the mass of the basic module of observation must be quantified, leading to an indicative number of modules that must be averaged to equal the basic sample mass requirements.
• Good geology can be the saving grace in areas showing a large In-Situ Nugget Effect: it is critically important to rely on good indicators that may be friendlier to sample, and also on a good definition of natural ore boundaries. Sufficient diamond core drilling must be performed early enough to obtain a robust geological model and to be able to quantify the difficulties created by the In-Situ Nugget Effect.
• For the most important constituents, especially the ones leading to large revenues, it is recommended to select cutoff grades below or equal to the Low Background Content aH. The reason is that up to aH there are no significant consequences from the In-Situ Nugget Effect, as the material is easy to sample.
• Any time the selected cutoff grade must be above aH, it is strongly recommended to build a low-grade stockpile to buffer the many mistakes that are made with the ore grade control strategy.
• Under no circumstances should anyone cut high grades to perform a feasibility study. This practice is the result of an inadequate understanding of the true properties of the In-Situ Nugget Effect. Instead, learn to live with the high grades and make sure the subtle message they carry is understood.
• Conciliation problems are expected. There is no such thing as a perfect conciliation. The In-Situ Nugget Effect generates large precision ellipses and, as a result, reconciliation problems are inescapable. This can be the object of careful simulations and therefore can be, and should be, predicted in feasibility studies.


Part VII

The Capital Notion of Sampling Correctness

Introduction

The previous parts of this book, somewhat theoretical, introduced the notion of heterogeneity, which was then characterized and quantified. Emphasis was placed on the fact that heterogeneity exists independently of the sampling process and deserves to be studied on its own. Thus, as only one of the numerous consequences of heterogeneity, the continuous sampling selection error, now called the Heterogeneity Fluctuation Error HFE, was reviewed in detail, and we found that it was made of three major components:

• HFE1: the Short-Range Heterogeneity Fluctuation Error,
• HFE2: the Long-Range Heterogeneity Fluctuation Error, and
• HFE3: the Periodic Heterogeneity Fluctuation Error.

HFE = HFE1 + HFE2 + HFE3

Each of these components has been reviewed and, now, we know how to minimize them, or how to calculate their effect. There are more independent components of the Total Sampling Error that should now be reviewed, and we cannot emphasize their importance strongly enough, as they are going to be the major sources of bias in sampling practice. This subject, probably the major contribution of Gy’s and Matheron’s works, clearly demonstrates beyond any possible doubt that if the sampling process no longer involves a true random component, and fails to minimize all nonrandom selective and nonselective components, the consequences can be devastating and rapidly lead to nonrepresentative and invalid sampling practices. The new sampling errors are:


• IDE: the Increment Delimitation Error
• IEE: the Increment Extraction Error
• IPE: the Increment Preparation Error

Their sum is defined as the Increment Materialization Error IME:

IME = IDE + IEE + IPE

These three new errors are not well known by the public, and they are often unappreciated and neglected, voluntarily or involuntarily, by manufacturers of sampling equipment. Nevertheless, thorough inspection of thousands of sampling devices distributed worldwide among modern facilities (e.g., mines, mills, smelters, power plants, food processing plants, and so on) shows that a worrisome majority of the sampling devices introduce one or several of these new errors. These new errors may take place because of a faulty design, bad engineering, bad installation, poor preventive maintenance, or total neglect. When we consider sampling devices used for sampling the environment, waste piles, treated water, cereals, GMO, high purity chemicals, pharmaceutical products, electronic components, and so on, IDE, IEE, and IPE become so overwhelming that we may wonder what the driving force is that leads to their concealment. This is the major reason why the World Conference on Sampling and Blending (WCSB) was created in 2003, so academics can properly teach TOS and manufacturers and engineering firms can correct their shortcomings. Our analysis of many actual cases leads to the following conclusion: for many people, the hypothesis that all materials are heterogeneous is an embarrassing one, because they do not really know how to deal with these difficulties. Therefore, it becomes convenient for them to adopt an approach more in line with the “real world”:

• Materials, whatever they are, are often homogeneous.
• Thus, any part extracted at random has a composition representing the whole lot.
• Then there is nothing wrong in retaining as a sample the fraction of the lot that is easily accessible; this does not require a lot of training, and it is fast and cheap.

This practical attitude, coming from total ignorance, has been responsible for massive financial losses, and even lawsuits, in many industries. Case studies are presented in this book. Indeed, such inference has led to the design, construction, and marketing of systems that transgress the most elementary rules of sampling correctness. In the Theory of Sampling, we made a point to follow a more logical and prudent approach:

• All materials from the “real world” are heterogeneous, and this is a reality that we should not ignore if we think in terms of sampling and its economic impact.
• It is completely out of the question to assume that any part of a lot has a composition similar to the entire lot.
• Thus, the fraction of the lot that is easily accessible cannot be representative of the entire lot.


• It is a necessity to comply with the rules of sampling correctness, so that the sum of IDE, IEE, and IPE is negligible.

We strongly emphasize that the most essential quality of a sampling system is the correctness of all its components. This quality shall give all the constituents of the lot to be sampled an equal probability P of being selected, which leads to the notion of equi-probabilistic sampling. The nonrespect of this fundamental rule always results in a sampling bias that cannot be accounted for and totally ruins our previous efforts to minimize the Heterogeneity Fluctuation Error HFE. Sampling correctness is a primary structural property of a sampling system and is the most effective warrant of sampling accuracy; therefore, we shall give it the attention it deserves. Chapter 18 is an introduction to the fundamental components of the sample materialization and the errors attached to it. Chapter 19 is an introduction to the various sampling modes, which are an intrinsic part of the notion of sampling correctness.


18 The Increment Materialization Error

The selection of an appropriate sampling protocol, and the small-scale irrelevant variability it generates, which is the result of the small-scale heterogeneity h1 carried by the constituent of interest in the material to be sampled, is not a simple matter; it involves a necessary theoretical aspect many readers may find quite unfriendly. The sum $S^2_{HFE_1}$ of the variances of the major components of this small-scale Heterogeneity Fluctuation Error HFE1 is part of the variance V[0] that was introduced in Variography. These variances can be grouped around V[0], as in Figure 18.1, to illustrate in a simple way that they all have an important role to play, and it would be a great mistake to overlook any of them. Any source of inflation of V[0] should be a great concern. The reader may rightly wonder why the variance of the Increment Materialization Error can inflate V[0], which is supposed to be a random component. The reason is simple: sampling biases are never constant, because they are introduced by some kind of segregation, which is usually an unpredictable, transient phenomenon. So, the bias has a precision problem of its own, and the variogram sees it, which inflates V[0] for no apparent reason. Then, the large-scale relevant variability was analyzed, taking into account the large-scale heterogeneities h2 and h3. The reason it is called the relevant variability is precisely because it is the one we try to measure and control in the first place. Two new components were defined as the long-range Heterogeneity Fluctuation Error HFE2 and the periodic Heterogeneity Fluctuation Error HFE3, respectively the resulting effects of h2 and h3. The variance of these two heterogeneity components is highly relevant for the preparation of composite samples supposed to represent the average property or mineral content of a large lot, such as in metallurgical accounting. It is also highly relevant when each sample is supposed to represent the average property or mineral content at a given time, or a given place, within a large lot, such as during exploration, ore grade control, or process control. The variance of the total variability is called the variance of the continuous Heterogeneity Fluctuation Error HFE:

$$ S^2_{HFE} = S^2_{HFE_1} + S^2_{HFE_2} + S^2_{HFE_3} \qquad (18.1) $$

Now that nearly everything is known to minimize the irrelevant variability introduced by a sampling protocol, and to measure process trends and cycles, it is still necessary to find out what happens during the practical implementation of that protocol. During this critically important phase, new sources of irrelevant variability are taking place, and perhaps the most dangerous ones. There are four independent components that should be carefully reviewed in Chapters 18 through 26 of Part VII, and Chapter 27 of Part VIII, which were defined at the beginning of the book in the definition of basic terms and symbols section:

1. the Increment Delimitation Error (IDE)
2. the Increment Extraction Error (IEE)


FIGURE 18.1 The components of the small-scale, irrelevant variability V[0]: $S^2_{INE}$; $S^2_{HFE_1}$, made of $\sum_n S^2_{FSE_n}$ and $\sum_n S^2_{GSE_n}$; $S^2_{IME}$; and $S^2_{AE}$.

3. the Increment Weighting Error (IWE)
4. the Increment Preparation Errors (IPE)

The sum of the variances of IDE, IEE, and IWE is called the variance of the Increment Materialization Error IME. Readers familiar with the TOS may notice the author chose to include the Increment Weighting Error in the family making up the Increment Materialization Error. The reason is that IWE is the direct effect of how the sampling system collects increments: if the sampling system is not proportional, IWE is likely to be significant, whereas if the sampling system is proportional, IWE may be negligible. It should be clearly understood that the flow rate fluctuations of a stream, which are part of the large-scale variability, may also greatly affect IWE; this is precisely the point, and the sampling system should be designed to take this into account during the materialization phase. In the chapter devoted to IWE, the long-term heterogeneity fluctuation of the flow rate will be addressed. The variance of the Increment Preparation Error IPE is a separate component introduced between sampling stages. It is convenient to include IPE in the Increment Materialization Error, grouping together all the errors generated by the practical implementation of a sampling protocol, errors that are capable of introducing biases. The variance $S^2_{IME}$ of the Increment Materialization Error IME can be written as illustrated in Figure 18.2. The components of the Increment Materialization Error are not well known by manufacturers of sampling equipment, or perhaps they do not clearly differentiate these four completely independent errors. This problem is also perpetuated by standards on sampling, some of which are obsolete, and by a reluctance to endorse the TOS. Furthermore, these standards usually do not address strongly enough the difference between errors generated by the selective process (i.e., IDE, IEE, IWE) and those generated by the nonselective process (i.e., IPE). The key to understanding the problems listed in Part VII and Part VIII can be summarized by the following statement: all the constituents of the lot to be sampled must be given an equal probability P of being selected and preserved as part of the sample, which leads to the notion of equi-probabilistic sampling, which in turn leads to the notion of correct sampling systems. The nonrespect of this fundamental rule almost always results in the presence of sampling biases that cannot be accounted for and totally ruins the previous efforts made to optimize the sampling protocol. The correctness or noncorrectness of a sampling system is a structural property.


$$ S^2_{IME} = \sum_n S^2_{IDE_n} + \sum_n S^2_{IEE_n} + \sum_n S^2_{IWE_n} + \sum_n S^2_{IPE_n} $$

FIGURE 18.2 Expanding the Materialization Error to include all bias generators introduced by the practical implementation of the sampling protocol (IDE, IEE, and IWE: selective process during sampling stages; IPE: nonselective process during preparation stages).

If the sampling system is not correct by design, devastating effects will result, regardless of results from bias tests that would tend to prove otherwise. This is the subtle message the reader should understand when reviewing the mathematical developments Matheron provided to help Gy for the first edition of TOS back in 1967. This is the main reason why the rules of sampling correctness should not be negotiable, even though many experts on sampling wrongly think they should be. Variograms cannot see a sampling bias. However, since sampling biases are never constant, they are affected by a precision problem of their own, and the variogram can see this as an inflation of the variance V[0] that cannot be explained by the protocol, as illustrated in Figure 18.3. The negative economic impact of these biases is usually several orders of magnitude larger than the cost to ensure a sampling system is correct, optimized, and well maintained. To put it more clearly, a biased sampling system that may cost only half a million dollars to fix can cost several hundred million dollars over the lifetime of a project.

FIGURE 18.3 V[0] can be inflated by many factors; sampling noncorrectness is one of them. (Components shown: $S^2_{QFE_1}$, made of $\sum_n S^2_{FSE_n}$ and $\sum_n S^2_{GSE_n}$; $S^2_{INE}$; $S^2_{IME}$; and $S^2_{AE}$.)


Accepting such a scenario is irresponsible; there are plenty of stunning examples that will be reviewed in due time. You may notice the term INE in Figure 18.3; it stands for the In-Situ Nugget Effect that was discussed at length in Part VI. Now, before going any further, and for the sake of clarity, it is important to review probabilistic and nonprobabilistic selecting processes.

18.1. Probabilistic and Nonprobabilistic Selecting Processes

Before reviewing the Increment Materialization Error in depth, it is essential to recall that sampling is a selecting process. There are two kinds of selecting processes:

1. The probabilistic selecting process, according to which all elements of the lot are submitted to the selection with a given, constant probability P of being selected; this could be called Gy and Matheron’s principle.111 A probabilistic selecting process can be correct or incorrect. It is correct when it gives all the units Um making up the lot an equal probability Pm of being selected, and all units foreign to the lot a zero probability of being selected. It is incorrect when one of these two conditions is not fulfilled.
2. The nonprobabilistic selecting process, according to which the selection is not based on the notion of selecting probability. Because it is often convenient for the operator, and for his supervisor or manager trying to save time and money, a large fraction of the lot may systematically escape the selection (e.g., grab sampling of a large stockpile, the hammer and shovel method, etc.).

18.2. Critical Review of Nonprobabilistic Selecting Processes

Because TOS is not yet taught at universities in a systematic way, or in the industry in general, anyone can do whatever he or she wants to. The economically most dangerous approaches are those where nonprobabilistic selecting processes are implemented for practicality and in the name of the “real world”; therefore, it seems logical that we devote a few lines to these invalid processes to criticize them and emphasize their danger.

18.2.1. Purposive Sampling

In lots where parts are not easily accessible, it is not uncommon for an operator told to collect a “representative” sample to select units the best way he can among those that are easily accessible. This is not a sample and should instead be called a specimen; there is a difference, and we should say that sampling is not gambling. The accuracy of the sampling operation depends on the operator’s choice; therefore, it becomes a secondary, circumstantial property, the limitations of which we know too well. From a statistical point of view, it is difficult to admit that the operator’s choice is probabilistic, even if he or she is creditable and honest. Let’s suppose that it was a probabilistic choice, which is a very optimistic and wishful assumption; still, there is no chance that it will be correct, which means equi-probabilistic.


Furthermore, the operator may have a biased concept of probabilities depending on what he or she does not know or wants to prove. The conclusion is clear: purposive sampling cannot be accurate and is very likely to be inequitable when sampling valuable commodities. It is completely inappropriate for quantitative analysis, though it may be acceptable for qualitative analysis.

18.2.2. Grab Sampling

In this case, the operator collects increments from the part of the lot that is easily accessible. He or she may use scoops, shovels, or hands. The author even saw an operator using his hard hat across the conveyor belt transporting very fine raw material to a new multi-billion-dollar primary smelter. This practice is also common in water treatment plants and in the environment. Aware of segregation problems, the operator tries to collect as many increments as practically possible in the sample (specimen!); nevertheless, a large fraction of the lot is systematically kept away from the sampling tool (e.g., a grab sample collected at the surface of a tank or a thickener). These inaccessible parts of the lot can also be the bottom of a truck load, a railroad car, a conveyor belt, a waste pile, an ore stockpile, freshly blasted material at the mine, and so on. Such a practice is completely inappropriate for quantitative analysis, though it may be acceptable for qualitative analysis. The conclusion is clear: grab sampling cannot be accurate, because some units making up the lot have a zero probability of being selected, and because the selecting probability between units making up the lot cannot be kept constant. An interesting comment from a well-known statistician at one World Conference on Sampling and Blending was, “Sampling does not have to be very accurate, if it is accurate enough for a given purpose”; such a comment is wishful thinking and wrong, because the minute sampling is not correct, the bias cannot be quantified, since there is no such thing as a constant bias given the transient nature of segregation; that person does not understand heterogeneity and is obviously going nowhere.

18.2.3. Sampling with Thief Probes and Augers

Sampling with thief probes and augers is only an improved grab sampling technique. The idea is to extract a complete column representing the entire thickness of the lot at a point preselected, hopefully, at random. These techniques are probabilistic only when the place to perform the extraction is selected in a probabilistic (i.e., random) manner, which is rarely done. In most cases the place to perform the extraction is authoritatively preselected: for example, drilling at the center of a barrel, following a diagonal in a bag, drilling at pre-established points from the top of a truck, drilling on a waste pile much too thick for the tools, and so on. Even assuming these techniques could be probabilistic, which is a very optimistic assumption, experience proves beyond any doubt that they are never correct (i.e., IDE + IEE biases). The conclusion is clear: sampling with thief probes and augers cannot be accurate, since the bottom of the lot often has a zero probability of being selected, and because of serious delimitation and extraction problems encountered during penetration.

18.2.4. Authoritative Sampling

Probability sampling is contrasted with authoritative sampling, in which an individual who is well acquainted with the lot to be sampled selects a sample without regard to randomization.

342

Theory of Sampling and Sampling Practice

randomization. The validity of data generated in this manner is totally dependent on the knowledge of the operator. It is neither a recommended nor a valid practice. However, authoritative sampling is often necessary for environmental assessments, to separate hot sectors from others in a contaminated area showing obvious boundaries; each sector must then be submitted to probabilistic sampling afterward. It is the same problem with sampling for food contamination: for example, when sampling a truck full of tomatoes, the only tomatoes you are interested in are the ones that are obviously rotten.

18.2.5. Common Properties of Nonprobabilistic Selecting Processes

There are three big problems:
1. An important fraction of the lot is submitted to the sampling process with a zero probability of being selected. This is a critically important point for sampling equity, especially during sampling to determine a settlement price for a given commodity.
2. There is no possible theoretical approach. It is impossible to logically connect the various sampling errors to the mode of selection. Yet, very expensive bias tests and variance analyses are performed using such invalid specimens; regardless of the results, the message is not a valid one.
3. Sampling is always biased to an unknown extent. The American Atomic Energy Commission conducted a long and effective study of probe and auger sampling systems in 1965; the conclusion was that probes and augers always introduce unacceptable biases.192 Duncan193–196 reported the same conclusions. Gy also reached the same conclusions in a series of tests performed in 1967. The author, during his consulting experience, came to the same conclusions with many of his clients.
The conclusion is clear: no specimen generated by a nonprobabilistic selecting process should be used to reach important financial decisions. Yet such decisions are made many times, every day, to optimize a process. Today, presidents of large corporations still persist in risking considerable amounts of money by making decisions based on results obtained from nonrepresentative specimens. It is nothing else than a dangerous, irresponsible form of gambling. Maybe a famous, visionary quote from Albert Einstein would help: "I have little patience with scientists who take a board of wood, look for its thinnest part, and drill a great number of holes where drilling is easy."

18.3. Probabilistic Sampling of Movable Lots

A batch of particulate material is said to be "movable" when it is small or valuable enough to be economically handled in its totality for the sole purpose of its sampling. One of the two following probabilistic processes can be used to sample movable lots:
1. The increment process, where the batch of material is transformed into a one-dimensional stream, then a cross-stream sampler collects a certain number of increments to make up the sample. A good example is the use of rotary dividers.
2. The splitting process, where the batch of material is partitioned into several fractions, one of which is selected at random as a sample. A good example is the use of a riffle splitter.

18.4. Probabilistic Sampling of Unmovable Lots

The only probabilistic sampling process applicable to unmovable lots of particulate materials is the increment sampling process. Such a process is generally performed during a transfer of the entire lot for a purpose other than sampling. For now, we should remember that the probabilistic sampling of stationary, unmovable lots that cannot be mechanically transferred is impossible in most cases. A good example is the sampling of an old stockpile.

18.5. Analysis of the Increment Sampling Process

A good example of the increment sampling process is the sampling of a flowing stream:
• at the end of a conveyor belt
• across a conveyor belt
• at the discharge of a pipe or a hose
• across a river, and so on
by means of a correctly designed cross-stream sampler. On a large scale, the stream can also be made of large units such as truck loads, railroad car loads, sacks, barrels, or jars arranged in chronological order as a one-dimensional lot. Each unit becomes one potential increment.
The increment sampling process can be divided into a logical sequence of four elementary and independent steps:
1. The sampling point selection: all points along the one-dimensional lot are submitted to a selection scheme, further discussed in Chapter 19, that can be:
a. systematic with a random starting point,
b. stratified random, or
c. strictly random.
2. The increment delimitation: moving through the lot L, a point X is selected; then the sampling device delimits the geometrical boundaries of the domain Dv(X) occupied by the "extended increment," as illustrated in Figure 18.4 (v is a subscript characterizing a given extended volume). At this point we do not yet worry about what a correct delimitation is. The extended increment is a volume that does not take into account the particulate nature of the material.

FIGURE 18.4 Illustration of the extended increment (i.e., delimitation).

3. The increment extraction (i.e., increment recovery for geologists): now it is necessary to take the particulate nature of the material into account. The sampler extracts a certain number of fragments making up the "fragmental increment," as illustrated in Figure 18.5. The fragmental increment must coincide with the set of fragments whose center of gravity is located within the boundaries of the domain Dv(X) of the extended increment.
4. The increment reunion: the set of punctual increments is called the punctual sample; the set of extended increments is called the extended sample; the set of recovered fragmental increments is called the recovered fragmental sample.
This sequence is summarized in Figure 18.6.

FIGURE 18.5 Illustration of the fragmental increment (i.e., extraction or recovery).


FIGURE 18.6 Illustration of the increment materialization process: a comparison of the continuous model (point selection, increment delimitation, increment extraction, reunion of increments) with the discrete reality.

18.6. Analysis of the Splitting Process

A few typical examples of the splitting process are:
• coning and quartering
• alternate or fractional shoveling
• riffling, and so on.
The splitting process can be broken up into a sequence of four elementary and independent steps presenting a great similarity with the sequence observed with the increment sampling process:


1. The fraction delimitation: the sampling device delimits the geometrical boundaries of the domains occupied by the geometrical fractions of the lot. Three different cases can be encountered:
a. stationary lot, moving tool: coning and quartering, alternate shoveling
b. moving lot, stationary device: riffle divider, revolving feeder, sectorial splitter
c. moving lot, moving device: stationary feeder, sectorial splitter.
As for the increment sampling process, the geometrical delimitation does not take into account the particulate nature of the material making up the lot.
2. Separation of fractions: this operation takes the particulate nature of the material into consideration. The fragmental fraction must, more or less, coincide with the set of fragments whose center of gravity is located within the boundaries of the geometrical fractions.
3. Reunion of fractions: fractions are regrouped according to a systematic scheme to provide a given set of potential samples. Of course, the set of these potential samples is the entire lot L.
4. Sample selection: this selection step must be probabilistic; therefore, the real sample or samples shall be selected at random, a critically important step often forgotten by practitioners in the name of practicality. We cannot emphasize enough the importance of this step; when forgotten, it can lead to devastating biases. A good example is sawing a diamond core sample and always collecting the half making up the sample on the same side.

18.7. Comparison of the Increment Process with the Splitting Process

In Figure 18.7 the four principal steps of both the increment and splitting processes are illustrated. From a logical and practical standpoint there is a fundamental difference between the two processes:
1. For the increment process, the selection is made before the delimitation and extraction steps.
2. For the splitting process, the selection is made after the extraction step.
The important conclusion is that the splitting process can be equitable even when it is technically biased; however, the increment process is equitable if, and only if, it is technically unbiased. This is an important detail in the commercial sampling of valuable commodities. Even if the potential samples created by splitting are systematically different, the actual sample is chosen at random. This is not the case for the increment process, because the selection precedes the materialization of the sample.
Before proceeding with a thorough analysis of the Increment Materialization Error, IME, it is necessary to introduce two complementary models of the increment sampling process.


FIGURE 18.7 Comparison of the increment process (1. sampling point selection, 2. extended increment delimitation, 3. increment extraction, 4. increment reunion) with the splitting process (1. fraction delimitation, 2. fraction separation, 3. creation of the potential samples, 4. sample selection). It is important to notice the respective steps where the selection process is implemented: a major source of confusion for most sampling practitioners.

18.8. Introduction to a Group of Models of the Sampling Process

The development of the theory of the increment sampling process requires us to consider two models:
1. A continuous selection model that does not take small details into account but looks at the lot as a whole set of units.
2. A discrete selection model that, on the contrary, looks at all the details of the particulate material making up the lot.
In Figure 18.8 a lot L of particulate material is illustrated; it is made of NF fragments Fi, all included within the domain DL represented in a two-dimensional projection (e.g., a page of this book).
In Figure 18.9 we look at the lot using the continuous model. We forget about the boundaries of each individual fragment and consider the lot as a set of points X located inside the domain DL. We focus our attention on the properties of any point X, regardless of whether it is inside or outside a fragment.


FIGURE 18.8 Schematization of a lot L of particulate material.

FIGURE 18.9 The continuous model of the lot L. The dotted lines recall the boundaries of the fragments Fi.


FIGURE 18.10 The discrete model of the lot L.

In Figure 18.10 we look at the lot using the discrete model. Now we consider the lot L as a set of discrete fragments Fi. These fragments are the basic units of the lot L and their properties are assumed to be defined and characterized as explained in our study of heterogeneity. The dotted line recalls the boundaries of the domain DL.

18.9. The Continuous Model of the Lot L

First, we must proceed with the modeling of the lot L, then with the modeling of the selection process.

18.9.1. Continuous Model of the Material to Be Sampled

With the continuous model, the lot L is considered as the set of all points X, with coordinates x, y, and z, that are inside the entire domain DL. Let's call dX the elementary volume centered on X, with:

$$ dX = dx \cdot dy \cdot dz \qquad (18.2) $$

Let’s call dM the weight of active components (e.g., solid fragments) included in dX, and dA the weight of critical components (e.g., components of interest such as gold, sulfur, copper, ash, GMO, etc.) included in dX. Now, let’s define three punctual functions characterized by the subscript p:

• Mp(x,y,z), or simply Mp(X), which is the weight of active components per unit of volume at the point X (e.g., density of the solids at the point X),
• Ap(x,y,z), or simply Ap(X), which is the weight of the components of interest per unit of volume at the point X,
• ap(x,y,z), or simply ap(X), which is the critical content at the point X.
These three functions are defined by the following relations:

$$ M_p(X) = \lim_{dX \to 0} \frac{dM}{dX} \qquad (18.3) $$

$$ A_p(X) = \lim_{dX \to 0} \frac{dA}{dX} \qquad (18.4) $$

$$ a_p(X) = \lim_{dX \to 0} \frac{dA}{dM} = \frac{A_p(X)}{M_p(X)} \qquad (18.5) $$

Then, taking into account the continuous definition of the lot L, we can express the weight ML of the active components included in L, the weight AL of the critical components, and the critical content aL as follows:

$$ M_L = \iiint_{D_L} M_p(x,y,z)\,dx\,dy\,dz = \int_{D_L} M_p(X)\,dX \qquad (18.6) $$

$$ A_L = \iiint_{D_L} A_p(x,y,z)\,dx\,dy\,dz = \int_{D_L} A_p(X)\,dX \qquad (18.7) $$

$$ a_L = \frac{A_L}{M_L} = \frac{\int_{D_L} A_p(X)\,dX}{\int_{D_L} M_p(X)\,dX} = \frac{\int_{D_L} a_p(X)\,M_p(X)\,dX}{\int_{D_L} M_p(X)\,dX} \qquad (18.8) $$

Relation 18.8 clearly shows that the material included at the point X is completely defined by only two of the three functions Mp(X), Ap(X), and ap(X), together with the relation a = A/M defining the critical content.

18.9.2. Degenerated Models of the Lot L

All the lots submitted to the sampling process have three dimensions. However, as we already saw, it may happen that some of these dimensions are important only to the second order:
• Model with two dimensions, such as three-dimensional lots that have been flattened, whose thickness is small and relatively constant.
• Model with one dimension, such as elongated piles, flowing streams, material travelling on conveyor belts, etc.
In all cases, relations 18.6, 18.7, and 18.8 are still applicable. The domain DL is then considered as a projection onto a plane or an axis, and the triple integral becomes either a double or a simple integral.


18.9.3. The One-Dimensional Temporal Model of Flowing Streams

The problem analyzed by this model is very important in practice since it corresponds to the study of flowing streams in general:
• material on a conveyor belt
• water or other liquid streams
• pulps, slurries, and so on.
All raw materials in the process of being transformed, and all transformed products, are sooner or later transported as a continuous or semicontinuous stream. When such transportation takes place, it is always the best opportunity to implement a rational and reliable sampling process. It is not an overstatement to say that nearly all reliable sampling processes are performed when the material to be controlled is presented as a one-dimensional flowing stream.
Let's consider, for example, a material loaded on a conveyor belt and a certain point x along the flowing stream. We are going to study the properties of a slice dx across this stream. We may refer either to the abscissa x, or to the instant when we cross the stream to study an elementary slice of the material (e.g., an increment collected by a cross-stream sampler). We shall define the following notations:
t: the instant at which the stream is crossed at x, following a plane of the (Ʌ) family,
t0: the starting time of the one-dimensional lot,
tL: the end time of the one-dimensional lot,
TL: the total flowing time of the lot L,

$$ T_L = t_L - t_0 \qquad (18.9) $$

V: the speed, assumed constant, of the conveyor belt:

$$ x - x_0 = V(t - t_0) \qquad (18.10) $$

$$ dx = V\,dt \qquad (18.11) $$

$$ D_L = V\,T_L \qquad (18.12) $$

Now we can define:
Mp(t): the flow rate of the active components at the instant t:

$$ M_p(t) = V\,M_p(x) \qquad (18.13) $$

Ap(t): the flow rate of the critical component at the instant t:

$$ A_p(t) = V\,A_p(x) \qquad (18.14) $$


ap(t): the critical content of the slice of flowing material under consideration between the instants t and t + dt:

$$ a_p(t) = \frac{A_p(t)}{M_p(t)} = \frac{A_p(x)}{M_p(x)} = a_p(x) \qquad (18.15) $$

aL: the true unknown average critical content of the lot L that we intend to estimate:

$$ a_L = \frac{\int_{T_L} a_p(t)\,M_p(t)\,dt}{\int_{T_L} M_p(t)\,dt} \qquad (18.16) $$

This relation can easily be generalized to three dimensions:

$$ a_L = \frac{\int_{D_L} a_p(X)\,M_p(X)\,dX}{\int_{D_L} M_p(X)\,dX} \qquad (18.17) $$

Depending on the case, the integral may be triple, double, or simple. This general expression is valid irrespective of the number of dimensions of the lot under consideration.

18.9.4. Punctual, Extended, and Fragmental Functions

18.9.4.1. Punctual Functions

The mathematical model uses punctual functions characterized by their subscript p (e.g., Mp(X), Ap(X), and ap(X)). These functions describe the properties of the material included in the elementary fraction dX. Now we must fill the gap between the theory of the continuous model and the discrete reality. In doing so, we encounter two major difficulties:
• the extension from the fraction dX to a nonelementary volume representing the volume taken into consideration by a single operation of the sampling device, and
• the substitution of this volume by an actual group of fragments called the increment.

18.9.4.2. Extended Functions

These functions are characterized by the subscript v, standing for the extension volume (i.e., increment). Let's define:
Dv: a given domain with either three, two, or one dimensions meant to represent the lot L. Generally, Dv is small when compared to the total extent DL of the lot,
Dv(X): the domain Dv at the point X,
Mv(X): the average weight of active components per basic volume at the scale of Dv:

$$ M_v(X) = \frac{1}{D_v} \int_{D_v(X)} M_p(X')\,dX' \qquad (18.18) $$


Av(X): the average weight of critical components per basic volume at the scale of Dv:

$$ A_v(X) = \frac{1}{D_v} \int_{D_v(X)} A_p(X')\,dX' \qquad (18.19) $$

av(X): the average critical content around the point X at the scale of Dv:

$$ a_v(X) = \frac{A_v(X)}{M_v(X)} \qquad (18.20) $$

In Figure 18.4 we have an illustration of the extended function.

18.9.4.3. Fragmental Functions

The fragmental functions describe the properties of the groups Gv(X) of fragments whose center of gravity is located within the boundaries of the extended domain Dv(X). This definition is not arbitrary and will be explained in a later chapter. In Figure 18.5 we have an illustration of the fragmental function. If the fragmental functions are characterized by the subscript f, we can state that af(X) is a fragmental, or discrete, estimator of av(X), which is itself an extended estimator of the punctual function ap(X).
We can conclude that equation (18.17) defining aL is exact when expressed with punctual functions, but becomes only approximate when expressed with extended or fragmental functions. This approximation is very good under the following conditions:
1. The size of the fragments is small when compared to the extension domain Dv.
2. The size of the extension domain Dv is small when compared to the domain DL of the lot L.

18.9.5. Continuous Model of a Selection Process

According to the continuous model, all points X of the domain DL occupied by the lot L are submitted to a given selecting process with a given density π(X) of selection probability. The selection process is correct when the function π(X) is:
1. equal to a nonzero constant throughout the entire domain DL, and
2. equal to zero outside the boundaries of the domain DL.
When one or both of these conditions are not fulfilled, the selection process is necessarily incorrect, and therefore likely to be biased.

18.9.6. Continuous Model of the Increment Sampling Process

In Section 18.5 we saw that the increment sampling process can be divided into a sequence of four logical steps:
1. the selection of a certain number of punctual increments Ip


2. the delimitation of the same number of extended increments Iv
3. the extraction (i.e., recovery) of the same number of fragmental increments If
4. the reunion of the fragmental increments making up the discrete sample.
We are going to show that the continuous model of the increment sampling process covers:
1. the selection of the punctual increments, when we apply it to the punctual functions
2. the sequence "point selection + increment delimitation," when we apply it to the extended functions
3. the sequence "point selection + increment delimitation + increment extraction," when we apply it to the fragmental functions.
This property justifies the definition of the extended and fragmental functions, which provide the necessary bridge between the continuous model and the discrete model.

18.10. The Discrete Model of the Lot L

With the discrete model, the lot L is considered as the set of Nu nonchronological units Um with m = 1, 2, 3, ..., Nu. These units are made of:
NF: isolated fragments Fi with i = 1, 2, 3, ..., NF, and
NG: groups of fragments Gn with n = 1, 2, 3, ..., NG.
The material included in unit Um is defined by two of the three following parameters:
Mm: the weight of active components in the unit Um,
Am: the weight of critical components in the unit Um, and
am: the critical content of the unit Um,
between which we have the following relationships:
• for unspecified units:

$$ A_m = a_m M_m \qquad (18.21) $$

• for fragments:

$$ A_i = a_i M_i \qquad (18.22) $$

• for groups of fragments:

$$ A_n = a_n M_n \qquad (18.23) $$

Then, the true unknown critical content aL of the lot is:

$$ a_L = \frac{\sum_m A_m}{\sum_m M_m} = \frac{\sum_m a_m M_m}{\sum_m M_m} \qquad (18.24) $$


with m = 1, 2, 3, ..., Nu.
Each unit Um of the lot L is submitted to the selecting process with a given probability Pm of being selected. The selection is correct when:
1. The Nu units Um of the lot L have an equal and constant nonzero probability of being selected.
2. The units outside the lot L have a zero probability of being selected.
When one or both of these conditions are not fulfilled, the selection process is necessarily incorrect, and therefore likely to be biased.
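As a minimal numerical illustration of relation 18.24 (the figures are invented for this example and are not from the book), consider a lot made of three units:

$$ M_1 = 2\ \text{kg},\ a_1 = 1.0\%; \quad M_2 = 5\ \text{kg},\ a_2 = 0.2\%; \quad M_3 = 3\ \text{kg},\ a_3 = 0.6\% $$

$$ a_L = \frac{\sum_m a_m M_m}{\sum_m M_m} = \frac{(1.0)(2) + (0.2)(5) + (0.6)(3)}{2 + 5 + 3} = \frac{4.8}{10} = 0.48\% $$

The unweighted average of the three contents would be 0.60%, which shows why each critical content must be weighted by the weight of its unit.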

18.11. The Increment Sampling Process of Flowing Streams

The one-dimensional model is selected for representing a flowing stream made of particulate material. This model assumes that all the fragments are concentrated on the extension axis of the lot by projection in a direction parallel to the reference plane (Ʌ), as illustrated in Figure 18.11. In this figure (Ʌ) is perpendicular to the extension axis; however, the angle made by (Ʌ) and the axis is irrelevant as long as the plane is not parallel to the axis. The lot is made of the material included within the boundaries of the domain (DL), represented by a cylinder in Figure 18.11 (e.g., a flowing stream, but it could also be the core cut by a drilling machine), and delimited by two planes Ʌ0 and ɅL of the (Ʌ) family. By projection, the three-dimensional domain (DL) is reduced to a two-dimensional domain, then to a one-dimensional domain (TL) on the extension or time axis.

18.11.1. Definition of the Punctual Increment Ip

Figure 18.11 is divided into three parts showing the lot L in three different ways:
1. a three-dimensional perspective showing the real boundaries of the domain (DL)
2. a two-dimensional cross-section of the cylinder in which the fragments are illustrated
3. a one-dimensional model reduced to the segment (TL) occupied by the lot L projected on the time or distance axis.
A punctual increment Ip is selected at the instant tp somewhere within the domain (TL). This increment is in fact the material contained in the elementary slice (Ʌp) of material flowing between the instants tp and tp + dt, where dt is an infinitesimal constant increment added to tp. In Figure 18.11, the distance between tp and tp + dt is exaggerated.

18.11.2. Definition of the Model Extended Increment Iv

In practice a sampler is incapable of collecting an increment of infinitesimal thickness such as dt; therefore, we shall substitute for the elementary slice Ʌp an increment

FIGURE 18.11 Definition of the punctual increment Ip (shown in three-dimensional, two-dimensional, and one-dimensional cross-sections of the lot).

Iv, delimited by the points t1 and t2 on the time or distance axis, or by the planes Ʌ1 and Ʌ2 in the two-dimensional and three-dimensional representations of the lot L, as illustrated in Figure 18.12. In this figure, all the material in the transversal cross-section of the stream is cut during the same time TI. At this stage, as presented in Figures 18.12 and 18.13, you may notice that the planes Ʌ1 and Ʌ2 do not take fragment boundaries into consideration. If we proceed with the collection of an increment during a constant time TI at constant intervals Tsy, the sampling ratio τ remains constant:

$$ \tau = \frac{T_I}{T_{sy}} = \text{constant} \qquad (18.25) $$
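As an illustrative order of magnitude (invented figures, not from the book): a cutter taking cuts of TI = 5 s at constant intervals Tsy = 10 min gives

$$ \tau = \frac{T_I}{T_{sy}} = \frac{5\ \text{s}}{600\ \text{s}} \approx 0.83\% $$

so a 3,000 t lot flowing past the sampler would deliver a primary sample of roughly 25 t, which is why several successive sampling stages are usually required.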

In conclusion, the model extended increment achieves a correct increment delimitation. In practice, the extension is an exact and unbiased operation under the two following conditions:
1. The size of the fragments remains small when compared to the extended domain Dv.
2. The size of Dv remains small when compared to the domain DL occupied by the lot.

18.11.3. Definition of the Actual Extended Increment I′v: Increment Delimitation Error IDE

The model extended increment is correctly delimited when the two boundary planes Ʌ1 and Ʌ2 are strictly parallel. The two boundary planes do not have to be perpendicular to the stream; for example, in Figure 18.13 the two boundary planes Ʌ′1 and Ʌ′2 delimit a correct increment because they are parallel. If the sampling device does not respect this rule of correctness, which will be studied in later chapters, the actual extended increment I′v could be delimited by two nonparallel planes such as Ʌ′1 and Ʌ″2, as illustrated in Figure 18.13, thus resulting in a nonconstant, preferential sampling probability very likely to introduce a bias.

$$ T_I = t_2 - t_1 = T'_I = t'_2 - t'_1 \neq T''_I = t''_2 - t'_1 \qquad (18.26) $$

In conclusion, if the cut across the stream is not correct, and if there is some segregation across the stream, and experience shows that there is always some segregation, the Increment Delimitation Error necessarily introduces a sampling bias. The noncorrectness of the actual extended increment generates an Increment Delimitation Error that cannot be accounted for by the continuous selection model, and therefore no theoretical analysis is possible. Furthermore, the mean of the Increment Delimitation Error is nonzero, which is devastating in sampling practice. We cannot emphasize enough the fact that the Increment Delimitation Error is a very substantial error with grave consequences for the reliability of the sampling process. Unfortunately, this error is often unknown, ignored, or unappreciated by those in charge of fabricating, installing, and maintaining sampling systems.

FIGURE 18.12 Definition of the model extended increment Iv. The probability for a given part of the material to fall between the two planes Ʌ1 and Ʌ2 of the model extended increment is uniform, therefore correct.

FIGURE 18.13 Definition of the actual extended increment I′v. (Parallel boundary planes give a correct delimitation; nonparallel planes give an incorrect one.)

18.11.4. Definition of the Model Fragmental Increment If

The next step, the actual extraction, requires the transformation of a volume included between the planes Ʌ1 and Ʌ2, which does not respect the fragmental structure of the material, into a correctly selected group of fragments; experience shows again and again that this is easier said than done, and it leads to some of the greatest biases in sampling practice. If we observe the cutter of a cross-stream sampler as it travels across the material at the discharge of a conveyor belt, we can observe three groups of fragments:
1. those that bypass the cutter directly
2. those that directly enter the cutter
3. those that bounce on the cutter edges, as illustrated in Figure 18.14.
The bouncing fragments are those cut by the planes Ʌ1 and Ʌ2 generated by the edges of the cutter as it travels across the stream. Under ideal conditions that will be defined in later chapters, a fragment hitting a cutter edge bounces toward the side of the edge that contains its center of gravity. In conclusion, all fragments having their center of gravity inside the model extended increment Iv belong to the model fragmental increment If. This is a very important rule of sampling correctness; it is called the rebounding rule, or rule of the center of gravity.
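The rebounding rule is easy to express computationally. The following minimal sketch (hypothetical data and names, for illustration only) classifies fragments into the model fragmental increment according to the position of their centers of gravity relative to the cutter boundaries t1 and t2; note that the size of a fragment plays no role in the decision:

```python
import random

# Hypothetical fragments along the time/distance axis:
# (center_of_gravity, size). Only the center of gravity matters
# under the rebounding rule; size is carried along to stress that
# it must not influence the selection.
random.seed(7)
fragments = [(random.uniform(0.0, 100.0), random.uniform(1.0, 25.0))
             for _ in range(200)]

t1, t2 = 40.0, 45.0  # boundaries of the model extended increment

# Model fragmental increment: fragments whose center of gravity is
# inside [t1, t2], even if their edges overlap the cutter boundaries.
model_increment = [f for f in fragments if t1 <= f[0] <= t2]

print(f"{len(model_increment)} of {len(fragments)} fragments belong "
      f"to the model fragmental increment")
```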

18.11.5. Definition of the Actual Fragmental Increment I′f: Increment Extraction Error IEE

When the rebounding rule is not respected, the actual fragmental increment I′f is different from the model fragmental increment If, and an Increment Extraction Error IEE is necessarily introduced. Many factors, which will be studied in the following chapters, can lead to the respect or nonrespect of the rebounding rule; most of them are either selective or discriminative. Any deviation from extraction correctness generates a sampling bias. This is illustrated in Figure 18.15, where the fragments marked in black belong to the model increment If but, for some reason, have been rejected by the sampling device, resulting in an actual fragmental increment I′f different from the model fragmental increment If.

FIGURE 18.14 Illustration of the rebounding rule, or rule of the center of gravity.

In conclusion, any deviation from the rebounding rule generates an Increment Extraction Error IEE that cannot be accounted for by the continuous model, which is a huge issue in sampling practice, as it clearly means there is no possible theoretical analysis of the problem, and therefore no quantification. Furthermore, the mean of the extraction error is nonzero. As we did for the Increment Delimitation Error, we cannot emphasize enough the fact that the Increment Extraction Error is a very substantial error with devastating effects on the reliability of the sampling process. Unfortunately, this error is often unknown, ignored, or unappreciated by those in charge of designing, building, installing, and maintaining sampling systems.

18.11.6. Definition of the Prepared Fragmental Increment I″f: Increment Preparation Error IPE

All the extracted fragmental increments are recombined to make the primary unprepared sample. Between this stage and the final assaying process, the primary sample may be submitted to secondary and tertiary sampling, to which everything we have said so far is applicable. However, between each selecting process there are nonselective steps such as handling, comminution, screening, filtering, drying, packaging, and so on. These steps may generate contamination with foreign material, loss of some fragments, mainly fines, and alteration of the physical and chemical properties of the original fragmental primary sample, as shown in Figure 18.16. These preparation problems are responsible for the Increment Preparation Error (IPE); as for the name, sometimes preparation steps are performed on individual increments, and sometimes they are performed on the sample after the reunion of all increments.


FIGURE 18.15 Definition of the model fragmental increment If and illustration of a potential actual fragmental increment I′f. Top panel: ideal increment delimitation (model extended increment that does not take fragments into account). Middle panel: ideal increment extraction (model fragmental increment respecting the rule of the center of gravity). Bottom panel: actual, biased fragmental increment (the two black fragments have been wrongly rejected by the cutter).

This error IPE will be studied in detail in Chapter 26. The Increment Preparation Error is included in the Increment Materialization Error IME for practicality, although it is an error that does not belong to the selective process; it is more like a series of possible accidents. In conclusion, any deviation from correct preparation can ruin a correctly selected sample by generating several IPEs whose means are nonzero; therefore, they are dangerous sources of biases.

FIGURE 18.16 Definition of the prepared fragmental increment I″f. Fragments in black are from contamination; fragments in white within the ideal extracted increment are from losses.

18.11.7. Recapitulation of the Increment Materialization Error IME

If we summarize all the observations made so far in this chapter, between the mathematical model of the potential increments and the fragmental reality leading to an extended, extracted, and prepared sample, there are three, and only three, potential sources of error:
1. the Increment Delimitation Error IDE, part of a selective process
2. the Increment Extraction Error IEE, part of a selective process
3. the Increment Preparation Error IPE, part of a nonselective process.
The Increment Weighting Error IWE is a different issue that will be studied in Part VIII of this book.

19 Sampling Modes

19.1. Probabilistic Approach of the Delimitation and Extraction Processes

We shall assume, with respect to the conditions of correctness, that all points of the domain TL occupied by the lot on the time axis are selected with a uniform density π of selecting probability. We may imagine many selection modes; however, only the three following modes can be probabilistic:
1. a systematic selection with random positioning of the starting point
2. a stratified random selection
3. a strictly random selection.
Only the first two modes are recommended, because the third one has never been found to be superior, either by geostatisticians or by Dr. Pierre Gy. The notion of probability should not be mistaken for the notion of density of selection probability: π(t) is the density of selection probability at the instant t, while P(t) is the selection probability of an instant between t and t + dt. By definition we have:

$$ P(t) = \pi(t)\,dt \qquad (19.1) $$

19.1.1. Definition of a Random Selection

A selection is said to be random when it is carried out with a uniform probability (discrete perspective) or probability density (continuous perspective). For example, the selection of one and only one instant t0 within a time domain (T0) is said to be random if the density of the selection probability π(t) is uniform throughout (T0) and equal to a constant π0; it is assumed to be nil outside (T0). By hypothesis, the probability of selecting one and only one instant t0 is a certainty (i.e., a probability equal to one). Therefore:

$$ \int_{T_0} \pi(t_0)\,dt_0 = \pi_0 \int_{T_0} dt = \pi_0 T_0 = 1 \;\Rightarrow\; \pi_0 = \frac{1}{T_0} \qquad (19.2) $$


19.1.2. Selection Probability of a Point t on the Time Axis

Taking into account that dt is defined as an infinitesimal positive constant increment, or elementary time increment, we can express the elementary selection probability dP as follows:

$$ dP = \pi_0\,dt \qquad (19.3) $$

Depending on the selection mode, we can verify that (Gy73, 1982, Chapter 7):

$$ \int_0^{T_L} dP = \pi_0 T_L = \frac{T_L}{T_{sy}} \text{ or } \frac{T_L}{T_{st}} \text{ or } Q_{ra} \qquad (19.4) $$
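As an illustrative check of relation 19.4 (invented figures, not from the book): for a lot flowing during TL = 8 h and sampled systematically every Tsy = 10 min,

$$ \int_0^{T_L} dP = \frac{T_L}{T_{sy}} = \frac{480\ \text{min}}{10\ \text{min}} = 48 $$

so the uniform density π0 = 1/Tsy = 0.1 min⁻¹ generates Q = 48 increments over the lot.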

19.1.3. Probability for a Fragment Fi to Be Included in the Actual Extended Increment I′v and in the Model Fragmental Increment If

Let's define Pvi as the probability for the center of gravity of the fragment Fi to be within the actual extended increment I′v. By definition, Pvi is also the probability for the fragment Fi to be part of the model fragmental increment If. If we make the assumption that the delimitation is correct (i.e., I′v = Iv), this probability is uniform and equal to:

$$ P_{vi} = \pi_0 T_I = \text{constant} \qquad (19.5) $$

where TI is the domain of the model extended increment Iv.

19.1.4. Probability for a Fragment Fj Belonging to the Model Fragmental Increment If to Be Included in the Actual Fragmental Increment I′f

Let's define:
Fj: a selected fragment within the actual extended increment, and therefore within the model fragmental increment If,
PFj: the probability for the fragment Fj, already included in the model fragmental increment, to remain in the actual fragmental increment.
If we make the assumption that the extraction is correct (i.e., I′f = If), this probability is certain for all fragments Fj; thus:

$$ P_{Fj} = 1 = \text{constant, regardless of } j \qquad (19.6) $$

If the extraction is not correct, then this probability is not uniform. Experience shows that a nonconstant probability PFj is selectively and discriminatorily altered only for some specific fragment size fractions, an economically devastating problem in sampling practice:

$$ P_{Fj} < 1 \qquad (19.7) $$

19.1.5. Probability for a Fragment Fi of the Lot L to Be Included in the Actual Fragmental Increment I′f

Before being included in the actual fragmental increment I′f, the fragment Fi of the lot L must necessarily undergo two independent random steps:
1. the selection within the extended increment, and
2. the extraction from the fragmental increment.


Let's define PSi as the probability of a fragment Fi being included in the actual fragmental increment I′f, and therefore in the nonprepared sample S. The theorem about the properties of independent random variables allows us to write:

$$ P_{Si} = P_{vi} \cdot P_{Fj} \qquad (19.8) $$

If we assume that the delimitation and the extraction are correct, relations 19.5 and 19.6 allow us to write:

$$ P_{Si} = \pi_0 T_I = \text{constant} \qquad (19.9) $$

The conclusion is that when only one of the correctness conditions is not fulfilled, either delimitation or extraction, relation 19.9 becomes false, and PSi becomes a function of the size, density, or shape of the fragment. Such a function would require a very difficult, unrealistic, indeed impossible modeling.

19.1.6. Preparation of the Sample S

In the probabilistic approach, we did not include the nonselective preparation steps. Contamination by foreign fragments, loss of fines, and alteration of physical and chemical properties lead to a very ambiguous definition of the notion of probability. It is exceptional that the Increment Preparation Error IPE can be considered as a random variable or a sum of random variables. This is the reason we keep IPE somewhat separate from the Increment Materialization Error IME, even though it would be logical to include it in IME.

19.2. Random Systematic Selection Mode

The implementation of the following procedure is recommended:
1. Select a natural number K = {1, 2, 3, ...} equal to the inverse of the sampling ratio τ = 1/K that we wish to implement (τ being expressed as part of one). K is the constant interval between increments.
2. Select at random a number K1 within the domain defined as the set {1, K}:

$$ K_1 = \text{random}\{1, K\}_1 \qquad (19.10) $$

The reader must remember that a systematic selection may be regarded as probabilistic (i.e., valid) if, and only if, the starting point is positioned at random within the first stratum defined by K. Very few people pay attention to this important detail, with all the negative consequences that may follow.
3. Collect as increments Iq the units Uq having the following numbers Kq:

$$ K_q = K_1 + (q - 1)K \quad \text{with} \quad q = 1, 2, 3, \ldots, Q \qquad (19.11) $$


and satisfying the double inequality:

$$ N_u - K < K_q \leq N_u \qquad (19.12) $$

4. Composite the sample S by combining the Q increments Iq.
5. The obtained number Q of increments depends:
a. on the remainder of the division of Nu by K, and
b. on the value of K1.
If Nu = Q′K, with Q′ and K being natural numbers, the sample S will always be made of Q = Q′ increments regardless of the initial value of K1. However, if:

$$ N_u = Q'K + K' \quad \text{with} \quad 0 < K' < K \qquad (19.13) $$

two cases are possible:

$$ Q = Q' \text{ units if } K' < K_1 \qquad Q = Q' + 1 \text{ units if } K' \geq K_1 \qquad (19.14) $$

where Q′ is the quotient and K′ the remainder of the division of Nu by K.
In conclusion, when the number of units Nu in the lot is not a multiple of K, the number Q of increments is a random variable that can take the values Q′ or Q′ + 1, depending on the values of K1 and K′. This may generate a bias if Q′ is small. Therefore, the systematic selection of one unit out of K can only be considered if the quotient:

$$ Q' = \frac{N_u}{K} \qquad (19.15) $$

is either a natural number, or a positive real number at least equal to 30, or 50 if we want to be safe. The systematic selection mode is by far the most popular selection mode, but we must be careful that it is indeed a random systematic mode. It is also the most reproducible, except during the sampling of one-dimensional lots where we suspect the presence of periodic phenomena with a sampling interval that is nearly a multiple of the period; unfortunately, this is often the case.
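A minimal sketch of this procedure (hypothetical function and variable names; an illustration, not code from the book):

```python
import random

def random_systematic_indices(n_units: int, k: int) -> list[int]:
    """Random systematic selection: one unit out of k, with the
    starting point drawn at random within the first stratum."""
    k1 = random.randint(1, k)              # relation 19.10
    # Relations 19.11 and 19.12: K_q = K_1 + (q - 1)K with K_q <= N_u
    return list(range(k1, n_units + 1, k))

# Example: a lot of 1,000 units sampled at a ratio of 1/50.
increments = random_systematic_indices(1000, 50)
print(len(increments), increments[:5])
```

When n_units is not a multiple of k, the number of returned increments varies with the draw of k1, which is exactly the Q′ versus Q′ + 1 behavior described by relation 19.14.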

19.3. Stratified Random Selection Mode

In the stratified random selection mode, it is important to note that the stratification itself is not positioned at random; only the position of the increment within each stratum is selected at random.


The recommended procedure to implement is the following:
1. Select a natural number K = {1, 2, 3, ...} practically equal to the inverse of the sampling ratio τ = 1/K that we wish to implement (τ being expressed as part of one). K is the constant size of the strata, or the average interval between the increments.
2. Break down the series 1, 2, 3, ..., Nu into Q′ strata Sq whose size is equal to K. This can be done in such a way that the stratum Sq (with q = 1, 2, 3, ..., Q′) is formed of K units numbered Kqj (with j = 1, 2, 3, ..., K) and:

$$ (q - 1)K < K_{qj} \leq qK \qquad (19.16) $$

3. Select at random within each stratum Sq one unit, of rank (random{1, K}q) in the stratum and of rank Kq in the series of Nu units:

$$ K_q = (q - 1)K + \text{random}\{1, K\}_q \qquad (19.17) $$

We see that relation 19.11 is a special case of relation 19.17, where the systematic selection can be compared to a stratified random selection with the increments positioned at the same place within each stratum.
4. When Nu is not a multiple of K, it can be expressed as follows:

$$ N_u = Q'K + K' \qquad (19.18) $$

with Q′, K, and K′ being natural numbers; the last stratum is made of K′ < K units. Therefore, we can imagine two techniques:
○ One that is correct, which consists of selecting a number (random{1, K}Q′+1), the same as for the other strata. This number either is or is not smaller than or equal to K′. If it is, the (Q′+1)th stratum is represented in the sample, which is made of Q = Q′ + 1 increments. If it is not, the (Q′+1)th stratum is not represented in the sample, which is made of only Q = Q′ increments.
○ The other, which is incorrect, consists of selecting at random (random{1, K′}Q′+1), which is always smaller than or equal to K′. This is equivalent to giving the units of the (Q′+1)th and last stratum a probability of selection 1/K′ > 1/K, which is basically incorrect and would generate a bias.
5. Composite the sample S by combining the Q = Q′ or Q = Q′ + 1 increments Iq.
This selection mode is not well known throughout the industry; it is slightly less reproducible than the random systematic mode, except in cases where periodic phenomena are suspected. If the period of a cyclic phenomenon is nearly a multiple of the selection interval, the stratified random mode becomes by far the best mode available. As a result, the stratified random selection mode is by far the most reliable selection whenever reconciliation between mine and plant, material balance, and metallurgical accounting are the subject of concern.
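A minimal sketch of the stratified random procedure, including the correct treatment of a short last stratum (hypothetical names; an illustration, not code from the book):

```python
import random

def stratified_random_indices(n_units: int, k: int) -> list[int]:
    """Stratified random selection: strata of constant size k, one
    unit drawn at random within each stratum (relation 19.17). The
    short last stratum is handled with the correct technique: a rank
    is drawn in {1..k} and kept only if it falls inside the lot."""
    indices = []
    q = 0
    while q * k < n_units:
        kq = q * k + random.randint(1, k)  # K_q = (q - 1)K + random{1,K}
        if kq <= n_units:                  # short last stratum may be skipped
            indices.append(kq)
        q += 1
    return indices

# Example: a lot of 1,000 units with strata of 50 units.
print(stratified_random_indices(1000, 50)[:5])
```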


19.4. Random Selection Mode

The following is the recommended procedure to implement:
1. Select a number Q of units or increments to make up the sample S.
2. Proceed with the selection of Q random increments among the Nu units. Thus, we select the units of rank Kq:

$$ K_q = \text{random}\{1, N_u\}_q \qquad (19.19) $$

with q = 1, 2, 3, ..., Q.
3. Composite the sample S by combining the Q units or increments Kq.
We should very seldom use this selection mode, because it never offers any advantage over the two other modes:
• If there are periodic fluctuations, the stratified random selection mode is always more reproducible.
• If there are no periodic fluctuations, the random systematic selection mode is always more reproducible.
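For completeness, a strict random selection reduces to drawing Q ranks without replacement (illustrative sketch, invented figures):

```python
import random

# Strict random selection: Q = 20 increments among N_u = 1,000 units,
# drawn without replacement so that no unit is selected twice.
increments = sorted(random.sample(range(1, 1001), k=20))
print(increments)
```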

19.5. Examples of Selection Modes Used in Practice

This section is meant to make the reader more comfortable with selection modes by looking at a common procedure used to sample particulate solids in trucks, such as valuable concentrates, cereals, cement, alumina, and so on. There are four possible selection modes that are usually selected, or could be selected:
1. The Simple Systematic selection mode, as shown in Figure 19.1, where increments are always taken at the same locations. This selection mode is always wrong: if there is no randomness at all involved in a selection mode, it is not statistically defensible. This is very important to know in the case of a litigation dispute.
2. The Random Systematic selection mode, as shown in Figure 19.2, where the first increment is selected at random within the first stratum, which dictates where all other increments will be collected within all other strata. This selection mode is correct, and it is statistically defensible.
3. The Stratified Random selection mode, as shown in Figure 19.3, where a new random position is selected within each stratum. This is by far the best sampling mode and it is highly recommended.
4. The Strict Random selection mode, as shown in Figure 19.4, where the whole surface of the truck is divided, for example, into 560 possible locations. Then, 34 locations are selected at random between 1 and 560, without replacement (which


FIGURE 19.1 Simple Systematic sampling mode.

FIGURE 19.2 Random Systematic sampling mode.


FIGURE 19.3 Stratified Random sampling mode.

FIGURE 19.4 Strict Random sampling mode.

means that no location should be selected twice). This selection mode is correct, but it is never better than stratified random, because it is more likely that large areas will not be sampled, as clearly shown in the center-left part of Figure 19.4.

20 The Increment Delimitation Error during Exploration, Mining, and Sampling Food and the Environment

The selection of the place to collect one increment within the lot is a punctual decision based on the appropriate selection mode decided upon to optimize the sampling protocol. The actual volume to be collected around that point must have boundaries that are such that equi-probabilistic sampling can be achieved; there is no compromise possible with this stringent condition. The correctness or noncorrectness of the increment delimitation is a structural property, and it is entirely up to management, manufacturers, engineering firms, and regulators to have control over it to prevent negative effects (i.e., biases) that would take place for the lifetime of a selected sampling system.

20.1. Definition and Concept

20.1.1. A Critically Important Decision

The following decision is one of the most critical ones when selecting a sampling system. There are four options, and depending on local conditions, one of these options may be superior to the others:
1. The lot to be sampled is considered three-dimensional, which is by far the most difficult way to collect an increment, because the isotropic volume to be taken is the content of a sphere, the center of which is the punctual place selected by the protocol. This leads to a sampling problem that is unsolvable in practice; the device to perform such an operation does not exist. This is precisely why any form of grab sampling should never be an option. Figure 20.1 illustrates the appropriate delimitation if the increment is taken at random across a flowing stream.
2. The lot to be sampled is considered two-dimensional, which is the appropriate way in all cases where drilling is performed, either during exploration, during ore grade control using blast-holes, or when taking increments across the entire thickness of a copper cathode. In this case, the isotropic volume to be taken is the content of a cylinder representing the entire thickness of the lot. Figure 20.2 illustrates the appropriate delimitation for drilling.
3. The lot to be sampled is considered one-dimensional, which is by far the most appropriate way to sample the discharge point of a flowing stream. This is the most suitable way to


sample all streams at mills, flotation plants, smelters, loading and shipping facilities, and many chemical plants. In this case, the isotropic volume to be taken is the content of a consistent slice all the way across the flowing stream. Figure 20.3 illustrates the appropriate delimitation to sample flowing streams. There is another case where a circular pile is to be sampled, such as a pile of blast-hole cuttings. Regardless of the irregularities of the pile, the isotropic volume to be taken is the content of a sector, like when cutting a pie, radially positioned at random, as shown in Figure 20.4.
4. The lot to be sampled is considered zero-dimensional, which is a special case for very small lots that can be taken entirely for the sole purpose of sampling. This case is often encountered at the subsampling facilities of laboratories. When sending the entire contents of a sample bag, properly spread on a correct square shovel, through a riffle splitter, as illustrated in Figure 20.5, the lot is considered zero-dimensional, which means that, because of the design of the sampling system and how it is fed, the number of dimensions of the lot becomes irrelevant.

20.1.2. Beware of Paradigms

The reader may see at this stage that there is a subtlety in what a three-, two-, one-, or zero-dimensional lot is. It has nothing to do with the way the lot looks; it is based solely on how it is going to be sampled, and there is a huge difference. For example, in the minds of some people a flowing stream is necessarily a one-dimensional lot. It is indeed a one-dimensional lot if a consistent, full slice of material is collected all the way across the stream to make one increment. However, it is a three-dimensional lot if a grab increment is collected in the stream. The paradigm is the wrong belief that it is possible to take a grab sample in a three-dimensional lot; that belief is taken for granted because so many people do it, but it prevents us from doing a sufficiently precise and accurate job. Paradigms are the enemy of progress, and they plague the sampling world in many industries, and even worse in regulatory environmental sampling and standards committees.

FIGURE 20.1 Considering the lot three-dimensional: an unsolvable sampling problem. (The sketch shows a sphere positioned at random across a flowing stream.)

FIGURE 20.2 Considering the lot two-dimensional: drilling. The correct sample is the content of a cylinder representing the entire thickness of the mining bench.

FIGURE 20.3 Considering the lot one-dimensional: stream sampling. The width of the increment must remain constant all the way across the stream.

20.1.3. Definition of Delimitation Correctness

For the increment delimitation to be correct, the collected increment must coincide exactly with the selected isotropic volume:
• For a three-dimensional lot, the selected isotropic volume is a sphere; to some extent, it could also be a cube. This is not an option, because it is impossible to implement properly.
• For a two-dimensional lot, the selected isotropic volume is a cylinder crossing the entire lot along one dimension (e.g., drilling). The selection is made before the sampling operation is performed. Because the selection precedes the (IDE + IEE) part of the materialization, the increment process can be equitable only if the sampling system is correct.

FIGURE 20.4 Considering the lot one-dimensional: sampling of a blast-hole pile.

FIGURE 20.5 Considering the lot zero-dimensional: riffle splitting.

• For a one-dimensional lot, the selected isotropic volume is a consistent, full slice across the lot along two dimensions (e.g., cross-stream sampling). The selection is made before the sampling operation is performed.
• For a zero-dimensional lot, the selection is made after the sampling operation has been performed, and it is essential for that selection to be random. Therefore, a splitting process can be equitable even when it is technically biased.


Any deviation from these required isotropic volumes of observation will necessarily introduce a substantial Increment Delimitation Error IDE, the mean of which is not zero; as a result, a bias is likely to take place that may have grave economic consequences.
Important remark: when IDE takes place because the sampling system is not correct relative to the appropriate isotropic volume of observation, a sampling bias will take place one way or another, due to the transient segregation taking place in the lot, regardless of bias tests that seem to prove otherwise. The reason is very simple: a sampling bias is rarely constant.

20.2. Recommendations for Exploration Programs

During exploration, there are many areas where incorrect delimitation should be a concern. It affects drilling surveys, the splitting of core samples, the sampling of reverse circulation chips, and subsampling operations at the laboratory.

20.2.1. Drilling Drifts and Surveys

Delimitation problems can occur on a large scale when core samples come from an unrecognized drift, especially in deep drilling. This bias can compromise the accurate development of an underground operation and can have negative consequences for the development of a reliable geological model. Figure 20.6 illustrates such a bias. It can be considered a delimitation bias because the sample does not come from the place that was preselected.

20.2.2. The Splitting Technique of Diamond Core Samples

It has been a tradition in mineral deposit exploration to split a core sample into halves, submitting one half to a laboratory and keeping the other half for future reference. In Part VI, Chapter 17, it was shown that such an operation doubles the variance of the In-Situ Nugget Effect INE. Even more dangerous, if extraordinary precautions are not taken during the splitting, sawing, and selecting process, a delimitation bias is likely to

FIGURE 20.6 A large-scale drilling drift away from the mineralized zone is equivalent to a delimitation bias.


take place. Operators often fail to cut diamond core samples properly. The company involved may pay a large extra cost for diamond drilling for three reasons:
• to obtain a high-quality reference sample, sometimes to compare with quicker and cheaper drilling methods,
• to perform good geology,
• to provide a sample and a saved half core that are unbiased and representative.
If the core splitting technique used by operators does not follow the recommended method given below, the extra money spent to drill diamond core samples does not provide the necessary advantage to the company.

20.2.3. Selecting the Correct Diamond Core Length

The poor recovery of some core drilling intervals, a problem that will be reviewed in Chapter 23, can lead to a bias if a systematic core sample length is selected, as illustrated in Figure 20.7. Indeed, mixing different intercepts having different recoveries would lead to a composite 3-meter sample that is not a weighted sample as it should be, a problem that will be reviewed in Part VIII. Therefore, in such a case, it would be advisable to subsample and assay separately each intercept that seems to have a consistent recovery, which we may interpret as a correct delimitation of the boundaries of each category of recovery.

20.2.4. Sampling of Reverse Circulation Drilling Chips

There are three ways to perform a good delimitation for reverse circulation drilling chips; in order of preference they are:
1. The use of a continuous and proportional radial cutter under the discharge of a deflecting cone, as illustrated in Figures 20.8 and 20.9. This system can work with dry material and with pulps as well. The great advantage of this device is that it provides a sample weight exactly proportional to the weight of the lot; therefore, recovery can be calculated. Feature B illustrated in Figure 20.9 is essential to provide a correct increment delimitation and, later on, a correct increment extraction.

FIGURE 20.7 Do not mix different recoveries (e.g., 100%, 65%, and 97% intercepts) into the same 3-meter sample.


FIGURE 20.8 Illustration of a cone and rotating radial cutter sampler: a circular gap where the flowing stream is intercepted by the radial cutter, with an illustration of the equivalent cut taken in a linear stream.

2. The use of a continuous and proportional Vezin sampler, like the system illustrated in Figure 20.10; such a system works only with pulps. If sufficient cuts are taken (e.g., more than 36 cuts), this system is also proportional, which is a good quality.
3. The use of a riffle splitter, like the system illustrated in Figure 20.5. It is essential that the sample be selected at random after the splitting process is performed. Under no circumstances should a multistage riffle splitter be used, unless each stage is working at a different time from the others. The simultaneous use of all stages at once creates severe segregation problems within the flow of the superimposed chutes and, worst of all, leaves no possible random selection of the sample, which is an unacceptable flaw.
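As noted for option 1 above, a truly proportional cutter lets recovery be calculated directly from the sample mass. The following minimal sketch shows the arithmetic; the hole diameter, density, sampling ratio, and masses are hypothetical values chosen only for illustration.

```python
# Minimal sketch (assumed values): recovery check from a proportional cutter.
import math

def theoretical_interval_mass_kg(hole_diameter_m, interval_length_m, density_t_m3):
    """In-situ mass of a drilled interval, assuming a cylindrical hole."""
    volume_m3 = math.pi * (hole_diameter_m / 2.0) ** 2 * interval_length_m
    return volume_m3 * density_t_m3 * 1000.0  # tonnes to kg

def recovery(sample_mass_kg, sampling_ratio, theoretical_mass_kg):
    """With a truly proportional cutter, lot mass = sample mass / sampling ratio."""
    return (sample_mass_kg / sampling_ratio) / theoretical_mass_kg

# Hypothetical 140-mm hole, 1-m interval, 2.7 t/m3 rock, 5% radial cutter:
m_theory = theoretical_interval_mass_kg(0.140, 1.0, 2.7)  # about 41.6 kg
print(f"recovery: {recovery(1.87, 0.05, m_theory):.0%}")   # about 90%
```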

20.2.5. The Correctness of Drilling: Conclusion
The beauty of drilling is that it provides the ideal isotropic volume, as long as the sample is perfectly recovered, with nothing more and nothing less. But this recovery problem is part of the Increment Extraction Error IEE that will be reviewed in Chapter 23.

20.3. Recommendations for Ore Grade Control in Open Pit
Ore grade control in open pits has been plagued for many years with huge delimitation biases introduced when drilling and sampling blast-holes. The management of many mining companies has vastly underestimated this problem and its economic impact, mainly because it generates an invisible money loss, which is difficult to bring into evidence. This bias is one of the main components in conciliation problems between the mine, the geological model, and the plant. For more information on conciliation problems, refer to A. C. Chieregati.197–200


FIGURE 20.9 Side view of a cone and rotating cutter sampler: A, inspection door; B, cone ensuring a perfect cut by the cutters; C, rotating cutters; sample and rejects discharges.

FIGURE 20.10 Reverse circulation slurry Vezin sampler: cyclone; holding tank (necessary because of the pressure in the cyclone); rotating Vezin sampler; sample and rejects discharges.


FIGURE 20.11 Delimitation bias introduced by the drilling machine: the part going to the plant or the waste, the former subdrill, and the current subdrill.

20.3.1. Delimitation Bias Introduced When Drilling Blast-Holes
The necessary logistics of mining require that a smooth surface be prepared at the top of a mining bench; to perform this without creating unacceptable earth-moving equipment costs, it is routine practice to drill a mining bench slightly deeper than the forecasted bench: this extra depth is called the subdrill, as illustrated in Figure 20.11. The subdrill material must not be part of the sample. Unfortunately, it is deposited at the top of the drill-cuttings pile, the worst possible place for an operator in charge of taking samples. Furthermore, the drilling machine cannot collect anything from the former subdrill because material already broken up is very difficult to recover. It should be emphasized that the material in the former subdrill, regardless of its origin, belongs in the collected sample. As a result, the collected sample represents the wrong part of the bench, usually the lower part. This is a devastating bias, and there are no geostatistics capable of correcting it. Yet, this is only the tip of the iceberg.
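A minimal numerical sketch may help visualize the magnitude of this bias; all grades and geometries below are invented for illustration and do not come from the text.

```python
# Minimal sketch of the subdrill delimitation bias; all grades and
# geometries are hypothetical assumptions for illustration only.
bench_m, subdrill_m = 15.0, 1.5

# Hypothetical true grades (% Cu) of the three zones the drill sees:
g_top = 1.20       # top of bench (former subdrill zone) -- NOT recovered
g_rest = 0.80      # remainder of the bench -- recovered
g_subdrill = 0.40  # current subdrill (next bench) -- recovered but unwanted

true_bench_grade = (g_top * subdrill_m + g_rest * (bench_m - subdrill_m)) / bench_m

# The collected column misses the former subdrill and adds the current one:
sampled_grade = (g_rest * (bench_m - subdrill_m) + g_subdrill * subdrill_m) / bench_m

print(f"true bench grade: {true_bench_grade:.3f} % Cu")
print(f"sampled grade:    {sampled_grade:.3f} % Cu")
print(f"relative bias:    {(sampled_grade / true_bench_grade - 1):+.1%}")
```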

20.3.2. Drilling Several Benches at Once
In some cases, especially those where conventional vertical blast-holes would be parallel to geological structures, which is a cardinal sin, it is advisable to drill separate holes with reverse circulation drilling. Several problems can then be solved:
• Holes can be drilled at an appropriate angle.
• Several benches can be drilled at once, as illustrated in Figure 20.12.
• Problems with the subdrill are solved.
• There is less interference with mining production.
• More time is allowed for the ore grade control engineer and for the laboratory.
I cannot emphasize enough the economic advantages provided by the above list, and it is astounding that it does not get the attention it deserves.



FIGURE 20.12 Elimination of blast-hole sampling using reverse circulation drilling.

20.3.3. Delimitation Bias Introduced When Sampling the Blast-Hole Cuttings Pile Using a Tube
The use of a tube to sample blast-hole piles is very popular because it is practical. The reality is that it is probably the worst possible way to sample such a pile. If a tube is used to collect increments around the pile, as illustrated in Figure 20.13, it is impossible to collect a complete column representative of all levels. If the subdrill material is not removed, it becomes the part best represented in the sample,


FIGURE 20.13 Delimitation bias introduced by the use of a tube: Area 1 is not represented. Area 2 is poorly represented. Area 3 is wrongly part of the sample.


which is totally wrong. Then, the bottom of the pile, where a lot of coarse fragments have segregated, is not represented in the sample. Furthermore, the former subdrill cannot be represented either, a problem that will be addressed in Chapter 23. Another problem with the tube is the location where increments are collected. Figure 20.14A shows incorrect positioning of the tube: why should all the increments be taken at the same place around the pile? There is absolutely no good reason to do so. This kind of sampling is "anti-democratic," if an easily understood analogy may be made. A better way, which is still far from perfect, is to divide the pile into several sectors. Each sector should contain about the same amount of material, which is difficult for an operator to judge. Then, within each sector, one increment should be taken at a place selected at random, as shown in Figure 20.14B. It is clear that from case A to case B the improvement is only from very bad to bad: sampling a pile with a tube is not a good thing to do. The problem is the same if an auger is used.
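The stratified random layout of Figure 20.14B can be sketched in a few lines; the sector count and the random seed below are arbitrary assumptions.

```python
# Minimal sketch of the "one random increment per sector" layout of
# Figure 20.14B; sector count and seed are illustrative assumptions.
import random

def stratified_increment_angles(n_sectors=8, seed=None):
    """One sampling angle (degrees), chosen at random within each sector."""
    rng = random.Random(seed)
    width = 360.0 / n_sectors
    return [i * width + rng.uniform(0.0, width) for i in range(n_sectors)]

print(stratified_increment_angles(8, seed=1))
```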

20.3.4. Sampling the Blast-Hole Cuttings Pile Using a Radial Bucket
Blast-holes that are not too deep and not too large in diameter, generating a cuttings pile no larger than 700 kg, can be sampled effectively by placing one or two radial buckets at random around the drilling rod, as shown in Figure 20.15. The fact that the buckets must be radial, and that the minimum opening near the hole must respect the rules of extraction correctness discussed in Chapter 23, leads to unacceptably large samples for large mining operations where the thickness of the mining bench exceeds 10 meters. Under no circumstances should the buckets overflow. Buckets must be long enough to cover the entire radius of the pile. The advantage of the radial bucket is that it can be removed before the driller enters the subdrill.

FIGURE 20.14 A illustrates the wrong, nonprobabilistic way to sample with a tube, showing a top view of the pile. B shows a better, more probabilistic way, at stratified random, within the radial isotropic volume of observation.


FIGURE 20.15 Sampling blast-holes using radial buckets.

20.3.5. Sampling the Blast-Hole Cuttings Pile Using Radial Channels
Many mining operations are not willing to invest in automated sampling systems, as they are difficult to engineer, build, install, and maintain. Every attempt that the author is aware of ended in failure, not because the systems did not work, but because they invariably slow down production, which is unacceptable for any mining engineer. There is a possible compromise that may help these companies minimize the negative effects of poor blast-hole sampling. The following procedure has its limitations, but it is much better than sampling with a tube or an auger. Make a team of six people: four for digging channels manually, and two for taking samples. Proceed as follows:
• Dig two radial channels as shown in Figure 20.16. All the material in the channels must come out.
• On each side of the channels, scrape off about 5 cm representing the subdrill material.
• Place a flat, radial piece of metal at the bottom of each channel.
• Using a sharp, square shovel, cut a tiny radial channel from each side of the main channels, as shown by the hatched areas in Figure 20.16, all the way from the top to the bottom of the pile and for its entire diameter. All that material must be recovered as a composite sample: no subsampling of this material is allowed in the field.
• The mass of the composite sample should be compatible with the recommended protocol.


FIGURE 20.16 Suggested manual channels for blast-hole sampling.

• When taking a duplicate sample (at least three blast-holes every day), dig two new radial channels and repeat the entire procedure.
• Make sure the procedure is well supervised at all times.

20.3.6. Trenching Soil, Clay, Laterite, or Saprolite
Mineral deposits may show subvertical geological structures. It would be a great delimitation mistake to drill blast-holes parallel to such structures, which may later create ore-selection illusions, as shown in Figure 20.17. A pragmatic alternative, when the ore is not too hard (e.g., soil, clay, laterite, or saprolite), is to dig trenches across the mineralized areas, as illustrated in Figure 20.18. Of course, if the trench is 2 meters deep, samples from that trench will tell nothing about the ore composition at a 3-meter depth; such extrapolation would again generate a delimitation bias.

20.3.7. Monitoring Blast-Holes Using Neutron Activation: The Future
Neutron activation followed by the reading of the copper prompt gamma-ray emission has been considered by several companies. The common denominator of these companies is the attempt to perform the job in situ, sending a probe inside the drilled blast-hole: they are wasting time and money and will not succeed with the degree of precision and accuracy necessary for reliable ore grade control, because they assume the copper is homogeneously distributed. The in-situ calibration is an unsolvable problem as soon as the mineralization becomes spotty or shows isolated high-grade veins, which is very often the case. The main problem is the unpredictable angle under which the prompt gamma emission is read by the detector. Unfortunately, the amount of copper


FIGURE 20.17 The location and quantity of ore depend on the position of the drilling grid (positions A and B): both cases are highly misleading.

FIGURE 20.18 Parallel trenches crossing the mineralized areas have a better chance to provide a realistic and consistent estimate of the ore grade.

read by the detector is directly proportional to that angle. A simple sketch, like the one illustrated in Figure 20.19, may help in understanding the problem. Let's assume that two copper veins of exactly the same size are present in situ; one is very close to the detector,



FIGURE 20.19 In-situ copper measurement using neutron activation-prompt gamma emission.

but the other one is far away from it. When these similar copper clusters are hit by neutrons, they emit prompt gamma rays in all directions. The key words are "all directions." The angles under which the detector D sees the prompt gamma rays are α1 and α2, respectively. The angle α2 is necessarily larger than the angle α1 and will therefore show much more copper, and the measuring system is strictly incapable of correcting for the difference; this necessarily leads to a delimitation bias (a small inverse-square sketch follows the list below). Furthermore, the danger of losing a probe in the hole is real and should raise environmental issues. There is a much better way to do this. An automated system could collect a very large sample made of minus 1.5-cm chips and present the sample in a constant volume, always located at the same place between the neutron source and the detector, as illustrated in Figure 20.20. This approach offers several advantages:
• There is no danger of losing the neutron source.
• Large clusters have been broken up.
• It is easy to calibrate the system.
• The weight of the sample in a constant volume corrects for the size distribution of the material.
• All the new, smaller copper clusters are roughly at the same distance from the detector.
• The system could be mounted on a four-wheel-drive pickup, offering field flexibility and excellent safety.
• The system eliminates the cost of subsampling and laboratory assaying.
• The system can assay a 100-kg sample or more instead of 1 gram.


FIGURE 20.20 A more logical, portable blast-hole neutron/prompt gamma analyzer: a constant volume and geometry between the neutron source and the gamma-ray detector.

• Several cylinders and detectors can be mounted on the pickup around a single source, allowing the simultaneous assaying of several blast-holes.
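As announced above, a small sketch makes the angle argument concrete: under a small-angle approximation, the solid angle a detector subtends from a vein falls off as the inverse square of the distance, so two identical veins at different distances produce very different readings. The detector area and distances below are hypothetical assumptions.

```python
# Minimal sketch: the solid angle a detector of area A subtends from a
# source at distance d falls off roughly as A/d^2 (small-angle
# approximation). Area and distances are illustrative assumptions.
def relative_signal(detector_area_cm2, distance_cm):
    return detector_area_cm2 / distance_cm**2  # proportional to solid angle

near = relative_signal(20.0, 5.0)   # vein close to the probe
far = relative_signal(20.0, 25.0)   # identical vein farther away
print(f"signal ratio near/far: {near / far:.0f}x")  # 25x for 5x the distance
```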

20.4. Sampling for Underground Ore Grade Control
Sampling in underground mines has been plagued by nonprobabilistic sampling procedures for many years (e.g., grab sampling at draw-points, grab sampling in front loader buckets, grab sampling of blasted material, grab sampling of run-of-mine material, channel sampling performed by hand, etc.). Such practice invariably leads to unsolvable conciliation problems. Solutions are not easy, and often unpopular. But there is no doubt that the economic impact of all these bad procedures is enormous, and not well known.

20.4.1. Block Caving Problems
Block caving blasted material collected at draw-points is supposed to come from a well-delimited ore block, but reality can be different. Large boulders coming from older, upper levels can contaminate the material. This is equivalent to a large-scale delimitation bias: the ore does not come exactly from the expected area, which may result in a substantial conciliation problem with the ore grade revealed later at the plant. We do not have a solution for this problem, but it is worthwhile to mention it; awareness can go a long way.

20.4.2. Channel Sampling in General
A channel sample must always cross the mineralization, and the amount of material collected must be consistent all the way across the mineralization. The size of the channel may depend on the local in-situ nugget effect; however, a correct delimitation can be achieved with either a small or a large volume, as suggested by the two correct cases illustrated in Figure 20.21.


FIGURE 20.21 Geometric consistency all the way across the mineralization is essential: two correct cases and one incorrect case.

20.4.3. Channel Sampling of an Underground Face Development
Underground face sampling is not an easy task to perform, and it is dangerous because of omnipresent exposed rocks subjected to sudden decompression. Nevertheless, if such a sample is collected, it is best achieved using a diamond saw to cut a well-delimited channel, as shown in Figure 20.22. It is then recommended to take a picture of the face to calculate the appropriate ore surface. An alternative would consist of drilling small holes of consistent depth, either along the preselected channel or following a systematic pattern covering the entire surface of the exposed mineralized area.

20.4.4. Sampling at Draw-Points
The fragment size of blasted material at draw-points in underground mines ranges from fine particles all the way up to boulders 4 feet across, as illustrated in Figure 20.23. Under such conditions, it is almost impossible to apply the conditions of delimitation correctness. Furthermore, because all fragments and boulders must be represented in the sample, the optimum sample would require the collection of several tons of material,

FIGURE 20.22 The ore sample must represent the entire thickness of the mineralized area, between waste on both sides, in a consistent way.


which is not an option in many cases. It should be emphasized that the collection of a few kilograms of fist-sized fragments, as usually done, is a nonprobabilistic, operator-dependent procedure strictly incapable of providing reliable data. Such practice can be highly misleading, especially in gold mines where severe clustering effects can be expected. Any attempt at grade conciliation between the grade estimated at the plant and the grade estimated underground is then an exercise in futility resulting in very poor accounting practice. The other solution, consisting of grab sampling from the front loader bucket as illustrated in Figure 20.24, is not any better. In Chapter 23, a few solutions will be explored.

FIGURE 20.23 Sampling of draw-points in underground mines: a difficult problem.

FIGURE 20.24 Grab sampling at the mine: grab increments taken from a stockpile to form a composite sample.

21 The Increment Delimitation Error at a Processing Plant

The plant offers the unique opportunity to sample materials either traveling on conveyor belts after a crushing facility or traveling as a large stream with suspended solids after the milling facility. In both cases, we can always consider the stream to be one-dimensional for the purpose of performing reliable sampling. However, as we are going to see, reality is often different: disappointing, loaded with misconceptions and wishful thinking.

21.1. Options for Sampling a Flowing Stream
Sampling of flowing streams is usually performed using one of the following three methods:
1. Taking the whole stream part of the time, as illustrated in Figure 21.1A.
2. Taking part of the stream all the time, as illustrated in Figure 21.1B.
3. Taking part of the stream part of the time, as illustrated in Figure 21.1C.

21.1.1. Taking the Whole Stream Part of the Time
This type of sampling is performed by cross-stream samplers, usually installed at a transfer point such as the discharge of a conveyor belt, a hose, a duct, a large stream feeding a flotation plant or a tailings pond, and so on. The increment delimitation can either be correct (e.g., cross sections 1, 2, and 3 in Figure 21.1A) or incorrect (cross sections 4 and 5 in Figure 21.1A). In practice, all these devices can be designed in such a way that they generate a correct delimitation; therefore, they are capable of performing equiprobabilistic sampling with the sampling probability P maintained constant for all parts of the stream to be sampled. These samplers are studied in great detail in subsequent sections.

21.1.2. Taking Part of the Stream All the Time
This type of sampling is performed by in-stream point samplers or probes, or by in-stream stationary cutters, as illustrated in Figure 21.1B. These devices are often used to sample liquids with or without suspended solids. Two typical devices are illustrated in Figures 21.2 and 21.3.


These devices cannot be probabilistic because the fraction of the stream that is systematically collected is predetermined in advance, making the intolerable assumption that the stream has been perfectly homogenized. If there is the slightest correlation between the position of a critical constituent in the section of the stream and the physical and chemical nature of this constituent, which is the case most of the time, the composition of the sample is systematically different from the average composition of the lot. In some cases, these devices could be acceptable if there are no suspended solids in the stream and if the stream does not consist of the recent reunion of several streams of different composition; then it is advisable to introduce a turbulence in the process stream ahead of the sampling device, such as by inserting two or three small baffles as illustrated in Figure 21.4. We repeat: in the presence of suspended solids, with turbulence or not, such devices are unacceptable and should be vigorously rejected. Their use is a typical case of professional malpractice, with all the long-term economic consequences it carries. Under no circumstances should such devices be listed as acceptable in sampling standards. A standard mentioning these devices without condemning them is not a standard; it is only a compilation of existing practices, good or not, and does not help anyone.

21.1.3. Taking Part of the Stream Part of the Time
We can review three methods using this kind of sampling; none of them are probabilistic or correct. They should be rejected unconditionally and listed as intolerable professional malpractice, with the exception of one-phase liquids with no suspended solids.

FIGURE 21.1 Three methods to sample a flowing stream. [A] Cross sections 1, 2, and 3 are correct; only for these three cases is the stream considered one-dimensional. [B] If well done, the lot may be considered two-dimensional. [C] For all other incorrect cases, the stream becomes three-dimensional, generating an unsolvable sampling problem in daily practice.


FIGURE 21.2 The main stream is diverted after introducing a turbulence to homogenize the stream.


FIGURE 21.3 A portion of the stream is diverted after introducing a turbulence to homogenize the stream.


FIGURE 21.4 Intermittent sampling using an in-stream probe.

1. Direct grab sampling on a conveyor belt, as illustrated in Figure 21.1C. This kind of sampling has been responsible for devastating financial losses.
2. In-stream point sampling, as illustrated in Figure 21.4. With suspended solids, three things are wrong: (a) between each increment, large particles accumulate inside the sampling device, between the probe opening and the closed valve, and the unaware operator may not flush the line well enough before taking an increment; (b) the necessarily short time interval between increments requires the operator to remain near the sampling device full time, and nobody is willing to do such a thing; (c) a large portion of the stream is never part of the sample, which renders the method nonprobabilistic, a cardinal sin in sampling.
3. By-stream line sampling, as illustrated in Figure 21.5, is responsible for large, unacceptable increment delimitation biases. A large portion of the stream is never part of the sample, and the portion that is collected has an erratic geometry, opening the door to

FIGURE 21.5 Sampling method often used for fine, dry flowing solids: the fine material moving at the top of the stream is never sampled.


the possibility of confusing a process change with a change in segregation across the stream, with all the negative consequences that may result from such confusion in the control room. This case has often been observed in plants handling fine powders (e.g., cement plants, alumina plants, plants producing dry concentrates).

21.1.4. The Special Case of In-Stream Stationary Cutters
Devices illustrated in Figures 21.6 and 21.7 are common in the mining industry. The cutter must take the entire thickness of the stream, with a sufficient safety margin at the top to provide for flow-rate fluctuations, preventing a vertical delimitation error from taking place. Of course, the cutter opening must be constant from the bottom to the top of the cutter. Unfortunately, such systems are extremely vulnerable to lateral segregation, which led some manufacturers to install several stationary cutters across the stream, like the system illustrated in Figure 21.8. Let's make a list of the problems affecting multi-stationary-cutter sampling systems. The bottom views of Figures 21.7 and 21.9 show a deep box, called a weir, that leads to an unacceptable system.
• In other words, multi-stationary-cutter systems will never be reliable systems for accurate metallurgical accounting.
• However, it is possible to make a few improvements to move from a very bad system to one that may be acceptable for process control. Figure 21.9 lists the various weak points for which corrective action to improve the system is possible.
• Point #1: The entire box often behaves as a flotation cell. If there are no sprayers to suppress the flotation of critically important minerals (e.g., copper and molybdenum sulfides), this alone would completely ruin the validity of the entire system. The installation of two lines of strong water sprayers, as shown in Figure 21.10, is essential.
• Point #2: The unwelcome weir promotes the accumulation of heavy particles and heavy minerals, generating a possible bias. The installation of one or two slow-turning, large-diameter propellers at the bottom of the weir, as shown in Figure 21.10, is essential so that the accumulation of heavy particles is minimized. This adds one or two rods into the system, which is unfortunate, so the rods should be of as small a diameter as possible. As we see, no matter what, the system will never be a good one.
• Point #3: If there is no inspection door, easy to open and close once every working shift, for inspection of the cutters, for cleaning, and for mandatory cleaning of the

FIGURE 21.6 Stationary cutter taking the entire thickness of the stream.


cutter using an appropriate tool provided by the manufacturer, the system is useless. There is no sampling system in the entire world, regardless of how good it is, that can provide a good sample if there is no inspection door allowing these critically important tasks to be performed once every working shift.
• It is of the utmost importance to install appropriate inspection doors as illustrated in Figure 21.10. It is unthinkable that someone should have to ask the maintenance department to open them, as is often the case.
• A large screen (10-cm opening) should be installed below the inspection door to ensure the safety of the operator in charge of the recommended tasks.

FIGURE 21.7 Top view with no weir, and bottom view with a weir, which is a monumental source of trouble: the bottom area of the cutters plugs very fast, leading to a deficiency of coarse particles in the sample, while material constantly accumulates, sending the wrong message to the cutters.


FIGURE 21.8 Use of five stationary cutters across the stream.

FIGURE 21.9 Three major defects for the primary sampling stage.

• Point #4: The permanent presence of a large-diameter rod, used to drain the box once in a while and located ahead of the cutters, is an unacceptable flaw. That rod must be moved away, as suggested in Figure 21.10. See further comments in Chapter 24, as these systems are very vulnerable as far as the Increment Extraction Error is concerned.

21.1.5. Straight-Path Cross-Stream Cutters
The importance of this section is enormous because it affects the way many sampling manufacturers and engineering firms do business. It is also critically important for many existing international standards on sampling.

21.1.5.1. Stopped Belt Sampling
Stopped belt sampling is a method commonly used to test whether an automated sampling station introduces a bias. However, this reference method can itself be biased if stringent conditions are not respected. The collected increment must have a correct


FIGURE 21.10 Suggestions to minimize the negative effects of the three major problems.

delimitation, consisting of two parallel planes made of good-quality steel and shaped to fit the curvature of the belt exactly, as illustrated in Figure 21.11. The best way to implement such a reference sampling system is to install a permanent hydraulic system that helps place the frame correctly, and always in the same way, on the surface of the belt.
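A bias test against stopped-belt reference increments is, in essence, a paired comparison. The sketch below shows one common way such data might be evaluated, an ordinary paired t-test on invented assay pairs; it is an illustration, not the prescribed protocol of any standard.

```python
# Minimal sketch of a paired bias test against stopped-belt reference
# samples; the assay pairs are invented for illustration.
import math

sampler = [1.02, 0.97, 1.10, 0.97, 1.05, 1.01, 0.97, 1.08, 1.03, 0.99]
reference = [1.00, 0.98, 1.06, 0.95, 1.02, 1.00, 0.96, 1.04, 1.01, 0.97]

diffs = [s - r for s, r in zip(sampler, reference)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t_stat = mean_d / (sd_d / math.sqrt(n))
print(f"mean difference: {mean_d:+.4f}, t = {t_stat:.2f} ({n - 1} d.f.)")
# A large |t| relative to the t-table suggests the sampler is biased.
```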

21.1.5.2. Straight-Path Cross-Stream Sampling
The correct increment delimitation for a straight-path cross-stream sampler is illustrated in Figure 21.12. A complete interception, with a steady cutter traveling at constant speed, is essential.

FIGURE 21.11 Stopped belt sampling used to check the correctness of an automated sampler: a hydraulic system positions the reference frame on the stopped conveyor belt.


FIGURE 21.12 A correct straight-path cross-stream sampler intercepting a falling stream in a flotation plant: the width of cut is kept constant (top view, cutter direction shown).

21.1.5.3. Straight-Path Cross-Stream Sampling: Cutter Accelerating
Discontinuous samplers are often used when the flow rate of the stream is very large. Under these conditions, it is necessary to rely on driving systems that maintain a constant speed across the stream. It is our experience that only electric drives can fulfill this condition after the sampler has been in service for some time. Constant speed has been achieved by some hydraulic drives under special conditions. Nevertheless, it is our opinion that hydraulic, pneumatic, magnetic, and manual drives should be eliminated. Sometimes constant speed is not achieved with electric systems either, when the parking place of the cutter is too close to the stream. For the cutter to reach its cruising speed prior to reaching the stream, it is necessary to provide a minimum distance between the parking place and the falling stream. This distance may vary greatly depending on the size of the sampling system: from 30 cm for small sampling systems (e.g., sampling streams with flow rates anywhere between 50 and 500 tons/hour), to more than 1 m for large sampling systems (e.g., sampling streams in a flotation plant with flow rates anywhere between 5,000 and 12,000 m3/hour), or even 2 m for large sampling systems crossing coarse solid materials (e.g., streams of iron ore with a flow rate over 10,000 tons/hour). Figure 21.13 illustrates the delimitation problem.
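The required run-up distance follows from elementary kinematics: a cutter accelerating uniformly from rest needs d = v²/2a to reach its cruising speed v. The speeds and accelerations below are illustrative assumptions, chosen only to be consistent with the 30-cm to 2-m range mentioned above.

```python
# Minimal sketch: distance needed for a cutter to reach cruising speed
# before touching the stream, assuming constant acceleration from rest.
# All speeds and accelerations are illustrative assumptions.
def run_up_distance_m(cruise_speed_m_s, acceleration_m_s2):
    return cruise_speed_m_s ** 2 / (2.0 * acceleration_m_s2)

print(run_up_distance_m(0.45, 0.30))  # small system: about 0.34 m
print(run_up_distance_m(1.00, 0.25))  # large system: about 2.0 m
```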


FIGURE 21.13 Illustration of an incorrect increment delimitation generated by an accelerating cutter: the width of cut is not constant.

21.1.5.4. Straight-Path Cross-Stream Sampling: Damaged Cutter
The cutter may not be built with strong enough material. As a result, following intense impacting on the cutter edges, the cutter opening may become enlarged near its middle, as illustrated in Figure 21.14, introducing an unacceptable delimitation bias.

FIGURE 21.14 Straight-path cross-stream cutter built with weak material, becoming incorrect after some time in service: the width of cut is no longer constant.


21.1.5.5. Straight-Path Cross-Stream Sampling: Dirty Cutter
The routine daily cleaning of sampling systems, and especially of the cutters, is an essential task on any automated sampling system. This requires easy access, large inspection doors, and permanent on-site water hoses. Such water hoses can also be automated, flushing after each cut from the primary sampler in a multistage sampling station. The cutter becomes obstructed by sticky materials such as fines and wood fibers, as often observed in flotation plants. These materials accumulate around the cutter edges, partially closing the cutter opening, as illustrated in Figure 21.15, resulting in a massive delimitation bias.

21.1.5.6. Straight-Path Cross-Stream Sampling: Full-Time Contamination
Usually, contamination is listed as an Increment Preparation Error. However, a dusty stream may be constantly surrounded by a cloud of dust that is actually part of the stream. In that case, the cutter located at its parking place may constantly receive some fine material. The resulting effect is a cutter unintentionally collecting part of the stream all the time, combined with the same cutter intentionally taking all of the stream part of the time, resulting in a major delimitation bias, as illustrated in Figure 21.16. Another similar situation is a parking place located much too close to the flowing stream, resulting in the case illustrated in Figure 21.17; this case is commonly encountered in flotation plants.

FIGURE 21.15 Straight-path cross-stream flotation pulp cutter partially obstructed by sticky material: the width of cut is not constant.


FIGURE 21.16 The cutter at the parking place may receive part of the stream all the time in the area affected by a dust problem.

FIGURE 21.17 Part of the stream is collected all the time: the parking place is too close to the stream.

21.1.5.7. Straight-Path Cross-Stream Sampling: One Parking Place and Reversing Too Soon
Some cutters have only one parking place. An incorrect layout may be observed when the cutter reverses its motion too soon: there may be a small area where the cutter has nothing to take during its return, as illustrated in Figure 21.18, introducing a delimitation bias.


FIGURE 21.18 The cutter has only one parking place and reverses its motion too soon: during the return trajectory, there is an area where nothing can be taken by the cutter.

21.1.5.8. Straight-Path Cross-Stream Sampling: A Large Stream and a Small Cutter
Under-sizing a cutter is a frequent cost-cutting effort, but the price to pay for the consequences can be of catastrophic proportions. This situation also arises when the productivity of a plant is increased while the size of the sampling system remains the same. Invariably, a part of the stream may never have a chance to be part of the sample, as shown in Figure 21.19, resulting in a delimitation bias.

21.1.5.9. Straight-Path Cross-Stream Sampling: Secondary Stream
With wet or sticky materials, some fragments or fines stay in contact with the conveyor belt, and the installation of an appropriate scraper is necessary to recover this material and channel it back toward the main stream, as shown in Figure 21.20. It is an absolute must for the cutter to be designed and positioned in such a way that it can cut both streams with the same sampling probability.

21.1.6. Rotating-Path Cross-Stream Cutters
Samplers equipped with rotating cutters, like the one illustrated in Figure 21.21, have been popular for many years. They are commonly called Vezin samplers. They indeed have many qualities:
• They are compact, requiring very little space.
• They are sturdy and have a long life.
• They are easy to clean and maintain.
• If built correctly, they are very accurate.
• They are not very expensive.
• They are perfectly proportional when continuous: this alone is an enormous advantage we will further analyze in Chapter 27, as proportional sampling is the future of sampling systems.


FIGURE 21.19 A large stream sampled with a small cutter: part of the stream is missing from the sample increment.

There are a few disadvantages:
• They are appropriate only for relatively small streams. Therefore, they are commonly used as secondary or tertiary samplers in automated sampling stations.
• They take a lot of vertical space when the required slopes exceed 60° to sample sticky materials.

21.1.6.1. Rotary Samplers: Correct Geometry
Any rotating cutter must have radial cutter edges. Cutter edges must be exactly radial relative to the center of rotation: even a small deviation from this condition can introduce an unacceptable delimitation bias. When the edges are radial, as illustrated in Figure 21.22, it is easy to see that, for a given point along the radius, the width of the cutter opening is directly proportional to its speed, and also to the circumference:


FIGURE 21.20 Material recovered by the scraper (1. main stream; 2. secondary stream) must pass well inside the trajectory of the cutter, leading to a correct delimitation.


w/W = v/V = c/C = constant (21.1)
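A small sketch may make relation (21.1) concrete: with radial edges, the angular opening is constant, so every point along the radius is exposed to the cutter for the same time, whereas a parallel-edge cutter exposes inner radii longer. The rotation speed and openings below are arbitrary assumptions.

```python
# Minimal sketch contrasting the two rotary-cutter geometries; all
# values (rpm, openings) are illustrative assumptions.
import math

omega = 2.0 * math.pi * (5.0 / 60.0)  # 5 rpm expressed in rad/s

def cut_time_radial(angular_opening_rad, _radius_m):
    return angular_opening_rad / omega            # independent of radius

def cut_time_rectangular(opening_m, radius_m):
    return opening_m / (omega * radius_m)         # shrinks with radius

for r in (0.1, 0.2, 0.4):
    print(f"r={r} m  radial: {cut_time_radial(math.radians(10), r):.2f} s"
          f"  rectangular: {cut_time_rectangular(0.03, r):.2f} s")
```

The radial cutter gives every radius the same exposure time, hence a uniform selection probability; the rectangular cutter oversamples material near the center of rotation, which is exactly the defect described in the next subsection.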

The probability P of a fragment to be part of the increment must remain uniform all the way across the intercepted stream.

21.1.6.2. Rotary Samplers: Incorrect Geometry
There are still, on the world market, rotary samplers equipped with rectangular cutters, as illustrated in Figure 21.23. Such devices have a large sampling probability near the center of rotation and a small one far away from the center, introducing an unacceptable delimitation bias.

21.1.7. Shape of the Intercepted Stream
The shape of the intercepted flowing stream is irrelevant. As illustrated in Figure 21.24, if the delimitation correctness conditions are respected, all fragments in the stream, regardless of the shape of the stream, are intercepted with the same sampling probability. The assumption is that the velocity of the stream is uniform all the way across the stream; this is not always the case with liquids.

21.1.8. Cross-Stream Flap Sampler
Samplers equipped with a cross-stream flap cutter are always incorrect. Figure 21.25 shows a flap sampler where a deflector moves across the stream to collect an increment. The sampler always collects more material on the side of the closed idle position, giving that portion of the stream a larger sampling probability. There is no possible solution to render such a sampler correct. Therefore, such samplers should be systematically disqualified in any standard on sampling.


FIGURE 21.21 A popular rotating sampler: the Vezin sampler. The distance u between the feeder and the cutter edges should respect u ≥ 3d; the width of cut is kept constant. The inspection door must be very large, easy to open and close, and equipped with a 1-cm opening screen to protect the operator from the moving cutter.

FIGURE 21.22 The rotating cutter must have perfectly radial edges: cutter openings w and W, cutter speeds v and V, and circumferences c and C as in equation (21.1).


FIGURE 21.23 Circular-path cutter equipped with parallel edges: always incorrect.

FIGURE 21.24 If the cutter is correct, the shape of the stream is irrelevant.


In this category we may include several devices that are incorrect for the same reasons, such as flap diverters or plug diverters, as shown in Figure 21.26. The case shown in Figure 21.26B could be solved by allowing the valve to describe a full circle between each increment, so that the last side entering the stream is also the last one out.

21.1.9. Flexible Hose Samplers
For the same reasons mentioned for cross-stream flap samplers, most flexible hose samplers found on the market are incorrect; however, it is possible to make them correct. Figure 21.27 shows an incorrect design. Figure 21.28 shows a correct design, in which a stationary rectangular cutter has been installed; this case is equivalent to a straight-path cross-stream sampler, but instead of a cutter moving across the stream, it is the stream that moves back and forth across the cutter. The speed of the flexible hose must be kept constant.

21.1.10. Header Tanks
Header tanks are used to stabilize the flow rate of a stream prior to its presentation to a sampling system or an on-line analyzer, as illustrated in Figure 21.29. The slurry with suspended solids is kept relatively homogeneous by using a stirrer. As long as the level of slurry inside the header tank goes up and down without overflowing, the system works correctly. However, as soon as the header tank is allowed to overflow, the system becomes instantly incorrect and introduces a delimitation bias: usually the material overflowing is finer than the material going to the on-line analyzer.

FIGURE 21.25 Illustration of a flap sampler, always incorrect: the width of the increment across the stream is not kept constant between the opened and closed idle positions.


FIGURE 21.26 [A] is a flap diverter, always incorrect. [B] is a plug diverter, correct only if the valve completes a full circle for closing: always incorrect if the valve goes back and forth.

21.1.11. Rotating Cutter Located under a Cone
Deflecting a stream by using a cone, and positioning cutters under the edge of the cone, is an old idea already mentioned in Gy's literature in 1971.51 Such devices were called Ostrava-Radvanice samplers. Recently, GR Sprenger Engineering Inc., based in Colorado, strongly revived the idea with a unique configuration illustrated in Figure 21.30, using stationary cutters under the edge of the cones. The system is fully described in U.S. patent number 5,426,987 and has been used in the cement industry. Unfortunately, because the cutters were stationary, this system is not probabilistic and should be avoided. A greatly improved system using rotating cutters under the edge of the cones is now available on the market. Figure 21.30 shows a new system in a compact, circular configuration. Two sampling stages allow a 0.04% sampling ratio, which is very attractive for feeding an on-line analyzer. This sampling system is ideal for material balance and plant-feed mass determinations if the radial cutters, located under the inverted cones and continuously sampling the entire width of the stream, rotate. We do not recommend stationary cutters. If the cutters rotate, such a sampling system is perfectly proportional; it therefore provides a sample mass completely proportional to the total mass passing through the plant, and thus permits verification of tonnages that are determined independently at the mine and by mill weightometers.
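As a rough illustration of the two-stage ratio mentioned above, the sketch below multiplies per-stage ratios and converts the result to a sample mass; only the 0.04% overall figure comes from the text, while the per-stage 2% values and the 5,000 t/h feed are hypothetical assumptions.

```python
# Minimal sketch of a two-stage proportional split; per-stage ratios
# and feed rate are illustrative assumptions.
stage1, stage2 = 0.02, 0.02          # hypothetical per-stage ratios
overall = stage1 * stage2            # 0.0004, i.e., 0.04%
feed_t_per_h = 5000.0
sample_kg_per_h = feed_t_per_h * overall * 1000.0
print(f"overall ratio: {overall:.2%}, sample: {sample_kg_per_h:.0f} kg/h")
# Proportionality also works in reverse: lot mass = sample mass / overall,
# which is the tonnage check mentioned in the text.
```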


FIGURE 21.27 Flexible hose slurry sampler: the width of the increment across the stream is not kept constant. This model is always incorrect.

The conditions for delimitation correctness are:
1. The deflecting cone must spread the flowing stream evenly all around its edge.
2. A circular deflector must ensure that the entire falling stream actually enters the rotating cutter opening. The absence of such a deflector would necessarily introduce a delimitation bias.
3. The space between the cone and the outer chamber of the sampler should be calculated in such a way that it does not interfere at any time with the flow of the incoming stream.
4. The rotating cutter must be radial with respect to the center of rotation.
The advantages of such a device are enormous, and it is stunning that the industry has been so slow to endorse such an economical and far superior sampling system:
• If well designed, the system is perfectly proportional, leading to an Increment Weighting Error IWE that is totally negligible, and providing a remarkable check on the weightometers and flowmeters in a plant.
• There is no such thing as taking a cut from the stream every so often; therefore, the long-range heterogeneity fluctuation error HFE2 and the periodic heterogeneity fluctuation error HFE3 are totally negligible, which is an enormous advantage for material balance and process control as well.


FIGURE 21.28 Illustration of a correct flexible hose slurry sampler: the width of the increment across the stream is kept constant (side and top views of the stationary cutter shown).

• The sampling ratio is very small and decreases rapidly with the diameter of the system.
• There is only one machine and one drive for several superimposed samplers, which is an enormous advantage over the conventional primary sampler followed by a secondary sampler and then a tertiary sampler.
• The cone slows down the main stream and reduces it to a gentler, much smaller stream at the point of impact with the cutter, reducing the stress on the cutter and therefore increasing the chance for the cutter to perform well.
• The cutter, being constantly within the stream, can rotate very slowly (e.g., one rotation per minute), increasing the chances for the cutter to perform an excellent job.
• Because the cutter rotates slowly and intercepts a smaller stream, the power needed from the driving system is greatly reduced, making it possible to be very conservative and greatly increase the life of the driving system.
• The system, held by two flanges, is extremely easy to install and to remove for maintenance.


FIGURE 21.29 Illustration of a sampling system using a header tank: the tank should not overflow before feeding the on-line analyzer.

It is puzzling that such a system, unsurpassed in correctness and about half the cost to build and maintain, has not been an industry standard since the day it was proposed. The industry still prefers nonprobabilistic systems, which transgress the most elementary rules of sampling correctness, and conventional cross-stream samplers, which are expensive, more profitable for the manufacturer, and most of the time built, installed, and maintained incorrectly.

21.1.12. Rotating Sweepers
The sampling system shown in Figure 21.31 consists of a stationary circular table onto the center of which the stream to be sampled is continuously discharged. An arched sweeper slowly rotates on the surface of the table, pushing the material toward the edge of the table, under which radial cutters have been installed. The system is correct as long as the cutters are radial and all the material on the table finds its way to the edge.


FIGURE 21.30 Rotating cutters located under deflecting cones: the main stream passes through superimposed stages, and the sample feeds an on-line analyzer and chemical analysis.

FIGURE 21.31 Sampler consisting of a rotating sweeper and stationary radial cutters, with cutter width W > 3d + 1 cm.


22 The Increment Delimitation Error during Sampling at the Laboratory

The subsampling area at the laboratory is an important place where many sampling stages take place. The equipment used in this area must be correct. It is not good practice to call this area the sample preparation area, as that gives the impression that all sampling errors taking place there can be called preparation errors, which is a great mistake.

22.1. The Use of Spatulas, Scoops, and Shovels
22.1.1. The Incorrect Use of a Well-Designed Spatula or Scoop
It does not matter whether a spatula or a scoop is correctly designed: its improper use can ruin efforts to collect a representative sample. Figure 22.1 shows a procedure often called the Japanese slab cake, as often mentioned in JIS standards. The idea of collecting many increments to minimize the variance of the Grouping and Segregation Error is a good one; however, it assumes all the collected increments are correct as far as IDE and IEE are concerned; in that picture this is unfortunately not the case, a detail many operators and even chemists may miss or ignore. This is part of good training, and the correct use of spatulas or scoops must be enforced as suggested in Figure 22.2. An elongated pile greatly facilitates the collection of correct increments all the way across the pile. Such incorrect practice may even have been used on purpose to underestimate the gold content of some material; it may indeed take a long time for anyone to unmistakably detect this kind of fraud.

22.1.2. Illustration of Correct and Incorrect Delimitation and Extraction
The examples illustrated in Figure 22.3 may help the reader visualize the difference between IDE and IEE. In the top view, A shows the use of a round scoop. Assuming heavy minerals segregate preferentially at the bottom of the pile, it is impossible to collect an unbiased increment because of a major Increment Delimitation Error; the scoop should be square, as shown in B. In the bottom view, A shows the use of a flat spatula; on its way up to collect the increment, material slides down on the sides and is lost. This is a recovery error, called an Increment Extraction Error; the tool should be square, as shown in B, with walls on the sides to prevent losses during the upward trajectory.


FIGURE 22.1 The incorrect use of a good scoop: tipping the material from the top of the pile gives an incorrect sample increment.

FIGURE 22.2 The correct use of a good scoop: crossing the entire elongated pile gives a correct sample increment.

22.1.3. Incorrect and Correct Use of the JIS Spoon
Some sampling practitioners may think that because they use a properly designed JIS spoon they are immune from biases generated by IDE and IEE; this is a myth born of ignorance and malpractice. The JIS spoon can perform a correct job only if it


FIGURE 22.3 Illustration of IDE (top A view) and IEE (bottom A view) biases: incomplete, incorrect increments in A; complete, correct increments in B.

successfully crosses an elongated stream, with the increment well contained in the spoon, as illustrated in the top view of Figure 22.4. The use of the JIS spoon as illustrated at the bottom of the same figure opens the door to massive IDE and IEE, which is unfortunately the way it is used most of the time.

22.1.4. Correct and Incorrect Shovels, Spoons, and Spatulas
The left side of Figure 22.5 illustrates the wrong shovel, scoop, and spatula; the right side illustrates the correct tools. It is easy to find properly designed shovels on the market; it is more difficult to find correct spoons; it is almost impossible to find correct spatulas, and the best way to handle this is to build them yourself using very thin, good-quality stainless steel.

22.1.5. The Use of a Correct Spoon for the Wrong Application
Figure 22.6 shows a case where the operator owns a correct spoon designed to collect increments across an elongated pile but uses it to collect increments around a circular pile, which invariably and vastly overrepresents the center of the pile. There is every reason to believe that segregation took place when the material was released on a flat surface, and most certainly the material in the center is somewhat different from the material located at the edges. Such wrong practice has even been recommended by some sampling standards.


FIGURE 22.4 Illustration of the proper use of the JIS spoon (top view), and the incorrect use (bottom view).

FIGURE 22.5 Illustration of the incorrect and correct designs of spatula, scoop, and shovel.

22.2. The Use of Rotary Dividers
22.2.1. Rotary Dividers
Sectorial dividers are common in modern laboratories and very effective at minimizing the variance of the Grouping and Segregation Error. However, they must be well designed, cleaned, and maintained. There are several categories:
1. Sectorial dividers with revolving feeders
2. Sectorial dividers with stationary feeders


FIGURE 22.6 A spoon vastly overrepresenting the center of the lot.

3. Dividers equipped with one, two, or four buckets (reject type)
4. Dividers equipped with many adjacent buckets (no rejects)
In the first two categories, the most important condition of delimitation correctness is to ensure that the edges of the divider are radial with respect to the center of rotation, as illustrated in Figure 22.7 by the top view of the divider. With dusty materials, minimizing

FIGURE 22.7 Top view and side view of a correct sectorial splitter equipped with a revolving feeder: 1. hopper; 2. vibrating conveyor; 3. vibration generator; 4. revolving feeder; 5. radial, sectorial splitter (top view shows a correct, radial increment); 6. jars; 7. motor and gear.


the distance between the edges of the divider and the exit of the feeder prevents problems of delimitation associated with clouds of dust. The same remark applies to the distance between the exit of each sector and the top of the receiving containers. Some sectorial splitters are liable to introduce a delimitation bias when the sectors are not radial, as illustrated in Figure 22.8. This defect is often encountered on devices built in-house. The shaded area for each increment shows collected material that does not belong to the potential sample.

22.2.1.1. Sectorial Splitters with Revolving Feeder
These sampling systems are very useful for obtaining a certain number of identical samples that can be used for comparative tests. They can be used for slurries, pulps, and very fine dry material. With radial sectors, the only source of delimitation bias is the nonuniformity of the revolving speed or of the flow rate at the exit of the feeder. In any case, no material should escape the series of identical stationary sectors, and all potential samples must be of equal weight. Be careful about cyclic fluctuations that could be generated by parts touching each other always at the same place; this can be prevented by regular inspections and good maintenance.
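One simple routine check consistent with the equal-weight requirement is to weigh all the potential samples and flag unequal splits; the masses and the 1% tolerance in this sketch are invented for illustration, not a standard requirement.

```python
# Minimal sketch of a routine equal-split check for a sectorial
# divider; masses and tolerance are illustrative assumptions.
import statistics

jar_masses_g = [249.8, 250.6, 250.1, 249.5, 250.3, 249.7]
mean = statistics.mean(jar_masses_g)
rsd = statistics.stdev(jar_masses_g) / mean
print(f"mean: {mean:.1f} g, RSD: {rsd:.2%}")
if rsd > 0.01:
    print("splits unequal -- inspect sectors, feeder, and revolving speed")
```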

22.2.1.2. Sectorial Splitters with Stationary Feeder
The stationary feeder may consist of a vibro-conveyor or, better, a small conveyor belt; the conveyor belt type does not introduce as much segregation in the stream. The rotating table containing all the sectors must revolve at constant speed beneath the discharge of the feeder. The receiving jars or buckets are attached to the rotating table under the sectors and revolve with it. The conditions of delimitation correctness are:
• All sectors must be strictly equal and perfectly radial with respect to the center of rotation.
• The revolving speed must be maintained constant.

FIGURE 22.8 Illustration of an incorrect rotary splitter fed by a bin and vibro-conveyor: receiving sectors must be radial.


• No material should escape the sectors at any time, unless a reject-type machine, which consists of a few buckets only, is used.
• The distance u between the exit of the feeder and the edges of the sectors should be minimized to prevent the formation of dust. We recommend u = 3d + 1 cm.
• The flow rate at the exit of the feeder should be maintained as uniform as possible.

22.2.1.3. Rotary Dividers Equipped with One, Two, or Four Buckets (Reject Type)
Rotary dividers equipped with one or two radial buckets, as illustrated in Figure 22.9, are very common. For example, one bucket may have a 5% sampling ratio, and the other bucket, usually with the same sampling ratio, is used as a duplicate to assess the precision of the machine in a quality control program. It is a convenient rotary divider to install in a sample preparation robot, and the rejects can be fed directly to a conveyor belt that takes them to an appropriate location either for collection or for disposal.

22.2.1.4. Rotary Dividers Equipped with Many Adjacent Buckets (No Rejects)
The advantage of rotary dividers equipped with many adjacent buckets, as illustrated in Figure 22.10, is the capability to provide many replicate splits for the metallurgist to perform many tests using exactly the same material. They are also convenient for preparing reference materials. The disadvantage is that they are slow to set up, requiring bridges between buckets to prevent spillage, and slow to disassemble and clean. They are totally inappropriate for processing long series of samples on a routine day, or for use in a sample preparation robot.
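The duplicate bucket mentioned in Section 22.2.1.3 lends itself to a simple precision estimate; the sketch below uses a common duplicate-pair estimator of relative variance on invented assay pairs, as one possible way to monitor the divider.

```python
# Minimal sketch: estimating divider precision from paired duplicate
# splits; the assay pairs are invented for illustration.
import math

pairs = [(2.41, 2.38), (1.87, 1.95), (3.02, 2.96), (2.55, 2.60)]

# Relative variance from paired duplicates: mean of (a-b)^2 / (2*m^2),
# where m is the pair mean (a common duplicate-based estimator).
rel_var = sum((a - b) ** 2 / (2 * ((a + b) / 2) ** 2) for a, b in pairs) / len(pairs)
print(f"relative standard deviation: {math.sqrt(rel_var):.2%}")
```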

FIGURE 22.9 Illustration of a correct rotary divider, reject type, equipped with two radial buckets.


FIGURE 22.10 Rotary divider equipped with many adjacent radial buckets.

22.3. The Use of Riffle Splitters
22.3.1. True Splitting Devices: Riffle Splitters
Riffle splitters are probably the best-known sampling devices. They are usually designed and built so that delimitation errors are nonexistent; however, bad maintenance and poor designs can greatly alter their delimitation correctness. Riffle splitters, often called Jones splitters, should always have an even number of identical adjacent chutes. To preserve the advantage of equity given by the splitting operation, it is essential that an equal number of identical chutes lead the material toward the left bucket and the right bucket. Figure 22.11 shows a good design. An odd number of nonidentical chutes is sometimes encountered in systems equipped with adjustable chutes; these systems are not correct and should be rejected. Figure 22.12 illustrates the combination of an incorrect number of chutes with an incorrect width of the feeding scoop. An incorrect delimitation often takes place because of:
• misuse of the feeding scoop


FIGURE 22.11 Correct riffle splitter: Even number of chutes on both sides, and all having an equal width.


FIGURE 22.12 Incorrect use of a riffle splitter.



FIGURE 22.13 Superimposed riffle splitters: The misuse of a good sampling tool.

• unfortunate modification of the original scoop
• scoops that are not designed and built according to the exact combined length of all the chutes.

A correct riffle splitter delivers two potential samples that are absolutely identical, one on each side of the splitter. When the operator uses a third receiving bucket to feed the riffle splitter, the length of that bucket is often greater than the combined length of all the chutes, as illustrated in Figure 22.12. This occurs because the bucket is designed only to receive the potential sample without creating spillage; as a result, it is slightly oversized, which prevents loss of material during the sampling operation. Some operators do not hesitate to feed the riffle splitter with a receiving bucket held sideways, as illustrated at the bottom of Figure 22.12. Under these conditions the chutes do not all provide the same selection probability, and the potential samples become nonprobabilistic and useless. This is a very bad practice that should be prevented with adequate training. Riffle splitters have been used for the continuous sampling of one-dimensional flowing streams, or arranged in a cascade of two, three, or even four units to accelerate the sampling procedure when successive splitting steps are necessary, as suggested in Figure 22.13. Such practices have been the object of experiments demonstrating that they are likely to introduce a sampling bias. Again, we should emphasize that the rules of delimitation correctness should not be forgotten, and they are not negotiable; thus:
• The material coming to the riffle splitter should be equally distributed among all chutes.
• The material coming out of the chutes on one side should be spread evenly all the way across the feeding scoop before proceeding with the next splitting operation. This operation becomes impossible when riffle splitters are arranged in a cascade.
• The selected sample, among the two available, should always be chosen at random, such as by tossing a coin. Again, this operation becomes impossible when riffle splitters are arranged in a cascade.

23 The Increment Extraction Error during Exploration and Mining

Recovery problems when drilling during exploration programs are a well-known concern for geologists all around the world. Such problems are typical extraction problems, arising when the drilling bit is selective about what it takes. The Increment Extraction Error IEE could have been called the Increment Recovery Error.

23.1. Definition and Concept

In Chapter 20, we saw how to implement a correct sample delimitation when drilling the successive slices of an orebody (i.e., each sample is assumed to represent the entire thickness of a certain slice of the orebody). Now, we shall see how to implement a correct sample extraction, or at least find out what the likely problems are and see if we can minimize them. When drilling orebodies, IEE is probably responsible for one of the most important sources of bias during geochemical exploration, ore reserve estimation, feasibility studies, and grade control at mining operations. This extraction error takes place every time a fragment with its center of gravity within the boundaries of the correctly delimitated core is, for some reason, pushed outside these boundaries and is not part of the actual fragmental core. The problem may also be reversed for fragments having their center of gravity outside the ideal boundaries but which, for some reason, are pushed inside, thus contaminating the actual fragmental core. Figure 23.1 shows a perfect sample recovery.

23.2. Review of Some Drilling Techniques

23.2.1. Drilling Orebodies with Diamond Coring Bits

The long discussion in Chapter 18 about the model fragmental increment obtained by a correct cross-stream cutter may be extended to the model fragmental increment obtained by a drilling machine. As illustrated in Figure 23.2, when a fragment collides with the edge of the coring bit, two different scenarios may arise:
• The fragment, solidly cemented to the surrounding matrix, as in a breccia, is cut by the coring bit. If the extraction is correct, the fraction outside the core boundaries is rejected toward the cuttings, and the fraction inside the core boundaries is recovered as part of the actual extracted core.


FIGURE 23.1 Illustration of a good core sample recovery.

• The fragment, loose in a highly fractured matrix, or in finer material such as silt, clay, gravel, and so on, is not cut by the coring bit; it is simply pushed. For the extraction to be correct, two conditions must be fulfilled:
○ If the center of gravity of the fragment is inside the correctly delimitated core boundaries, the entire fragment belongs to the actual extracted core.
○ If the center of gravity is outside these boundaries, the entire fragment should be rejected with the cuttings generated by the progression of the tool.

Obviously, the drilling operator may have very limited control over these requirements. Excess air or water from the cooling device may also introduce an extraction error by forcing fragments, and especially small particles, from outside the core boundaries to become part of the actual extracted core, and vice versa. Again, when drilling through highly fractured materials or mixtures of blocks, clay, silt, sand, and so on, it is recommended to use diamond coring bits of type A as indicated in Figure 23.3. Type B should be rejected, as it would apply too much stress on the fractured material to be recovered, and its cooling water arrives at the worst possible place.


FIGURE 23.2 Illustration of a correct extraction of a core sample.


FIGURE 23.3 Drilling bits: A is appropriate for highly fractured material, while B is acceptable only for hard, competent material.

23.2.2. Drilling Orebodies with Tungsten Carbide Coring Bits

Tungsten carbide coring bits are often used to drill through soft or loose materials. As long as the coring bit does not encounter large fragments, it is unlikely that a significant extraction error will take place if the driller is well trained. Figure 23.4 shows a frequent problem encountered with coarse fragments; the driller is aware of the problem as the drilling bit and tubes start vibrating, and it is a difficult problem to resolve satisfactorily. Usually, the recovered material is too fine.


Of course, samples should not be too long, or a recovery problem may take place as it becomes harder and harder to push the core into the drilling tube. When drilling relatively soft material with a high moisture content, such as clays or some tropical soils or laterites, the recovered core corresponds to the material included between the dotted lines Y1 and Y2 as illustrated in Figure 23.5, which gives the impression of obtaining recoveries superior to 100%. This technique of drilling is characterized by a strong, tight contact between the wall and the tubing, which leaves no room for the material between dotted lines Y1 and X1, and Y2 and X2; this material has no other choice but to be extracted with the ideal core by an extrusion effect. If properly handled, this extraction is correct as long as the driller keeps a good account of how deep the sample really is, and as long as he does not try to collect too long a sample (i.e., a maximum of 2 or 3 meters at a time). By forcing too much material into the tubing, there is a risk of recovering about 120% at the beginning and no more than 80% at the end; still, the operator is satisfied at having a 100% recovered core, which is, in fact, highly biased because the sampling probability changes all the time. This common phenomenon may cause challenging difficulties for an impartial logger.

23.2.3. Drilling Orebodies with Destructive Bits

For drilling cost considerations, destructive drilling techniques are becoming more and more popular, often to the detriment of good and effective geology. There are several types, such as solid head or star head coupled with a hammering system, or rotating cone types as illustrated in Figure 23.6. These bits are generally used for reverse circulation drilling, providing a sample made of small cuttings. The main advantage is the high speed of

FIGURE 23.4 Recovery problems encountered when large fragments are present.


FIGURE 23.5 Extrusion effect creating an extraction error.

drilling, which considerably reduces the drilling cost. The delimitation error is usually very small; however, many problems may take place with the actual extraction of the material, especially in the presence of fractured material, gravels, sands, and so on. The high water or air pressure may aggravate the problem by creating caving beyond the ideal sidewall. Furthermore, when water is added or comes from the water table, fine cuttings may be lost by overflowing from the bucket collecting the sample, vastly overrepresenting the coarse material as illustrated in Figure 23.7. It is essential to recover all sediments and combine them back into the sample; otherwise a huge extraction bias may take place and totally ruin the validity of this drilling technique.

23.3. The Plucking Effect

As part of the Increment Extraction Error, there is a subtle phenomenon that takes place during exploration drilling. It can be called the “Plucking Effect,” and it has escaped the attention of many geologists around the world, as shown by Pitard.127 For example, in a copper sulfide deposit, as a diamond drilling machine cuts its way through the mineralized area, a slick core sample showing a full account of the chalcopyrite grains, veinlets, and larger veins is expected; reality is different.



FIGURE 23.6 Reverse circulation drilling: some of the recovered cuttings do not belong to the sample.


FIGURE 23.7 A large amount of fine material may be lost by overflowing.


Figure 23.8 shows the “Plucking Effect” generated by a diamond core drilling machine, while Figure 23.9 shows what happens with a reverse circulation drilling machine, and with a blasthole drilling machine as well. As a result of the inward effect illustrated in Figure 23.8, diamond core drilling shows slightly lower chalcopyrite contents than it should, and there is nothing anyone can do about it unless the diameter is increased. Sawing the core with a diamond saw or a core splitter aggravates the problem. As a result of the outward effect illustrated in Figure 23.9, reverse circulation drilling shows slightly higher chalcopyrite contents than it should.

23.3.1. Modeling the Plucking Effect

An attempt can be made to quantify the Plucking Effect and determine whether it is indeed a concern for the exploring geologist. Let us define a few terms, further illustrated in Figure 23.10:
a_L: the true chalcopyrite content of the rock, expressed in %
a_C: the chalcopyrite content estimated from diamond core drilling, expressed in %
a_R: the chalcopyrite content estimated from RC drilling, expressed in %
d_L: the average observed size (thickness) of the chalcopyrite grains or veinlets, expressed in centimeters
R_C: the radius of the diamond core, expressed in centimeters
R_{Ceff}: the effective radius of the diamond core, expressed in centimeters
R_R: the radius of the RC hole, expressed in centimeters
R_{Reff}: the effective radius of the RC hole, expressed in centimeters
H: the length of the drilling intercept, expressed in centimeters
ρ: the density of the rock, expressed in grams per cubic centimeter

Then, the following relationships can be written:

FIGURE 23.8 Plucking effect for diamond core drilling (panels: in situ, ideal recovery, plucking effect).

R_{Ceff} = R_C − 0.5 d_L    (23.1)

R_{Reff} = R_R + 0.5 d_L    (23.2)


FIGURE 23.9 Plucking effect for reverse circulation drilling (panels: in situ, plucking effect).


FIGURE 23.10 Illustration of the basic Plucking Effect terminology.

Then, the respective sample weights W_C and W_R can be calculated:

W_C = ρ H π R_C^2    (23.3)

W_R = ρ H π R_R^2    (23.4)

W_{Ceff} = ρ H π [R_C − 0.5 d_L]^2    (23.5)

W_{Reff} = ρ H π [R_R + 0.5 d_L]^2    (23.6)


Finally, the respective grade correcting factors C_C and C_R can be calculated:

C_C = W_C / W_{Ceff} = R_C^2 / [R_C − 0.5 d_L]^2    (23.7)

C_R = W_R / W_{Reff} = R_R^2 / [R_R + 0.5 d_L]^2    (23.8)

23.3.2. Practical Exercises

An NQ core has a diameter of 4.75 cm. The copper content from the core analysis is 1.00%. The average size of the chalcopyrite grains is about 0.20 cm. What should the true copper content be?

C_C = R_C^2 / [R_C − 0.5 d_L]^2 = [2.38]^2 / [2.38 − 0.5(0.20)]^2 ≈ 1.09

so the corrected copper content is about 1.09%.

An RC hole has a diameter of 12 cm. The copper content from the analysis is 1.00%. The average size of the chalcopyrite grains is about 0.20 cm. What should the true copper content be?

C_R = R_R^2 / [R_R + 0.5 d_L]^2 = [6]^2 / [6 + 0.5(0.20)]^2 ≈ 0.97

so the corrected copper content is about 0.97%.

It can be observed that the differences are of concern, especially if the drilling diameter is small.
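For readers who wish to reproduce these exercises, here is a minimal Python sketch of Equations 23.7 and 23.8 (the code is not part of the book; function and variable names are illustrative):

def core_correction_factor(radius_cm, grain_size_cm):
    # C_C = R_C^2 / (R_C - 0.5 d_L)^2, Equation 23.7 (diamond core drilling).
    return radius_cm**2 / (radius_cm - 0.5 * grain_size_cm)**2

def rc_correction_factor(radius_cm, grain_size_cm):
    # C_R = R_R^2 / (R_R + 0.5 d_L)^2, Equation 23.8 (reverse circulation drilling).
    return radius_cm**2 / (radius_cm + 0.5 * grain_size_cm)**2

# NQ core, 4.75-cm diameter, 0.20-cm chalcopyrite grains, 1.00% Cu assay:
print(round(1.00 * core_correction_factor(4.75 / 2, 0.20), 2))  # about 1.09
# RC hole, 12-cm diameter, same grain size and assay:
print(round(1.00 * rc_correction_factor(12 / 2, 0.20), 2))      # about 0.97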

23.4. Extraction Problems during Drilling and Sampling of Blastholes

23.4.1. Refluxing of Coarse Fragments during Blasthole Drilling

The deeper a blasthole at the mine, the more large fragments reflux, as illustrated in Figure 23.11. This phenomenon is easy to observe, as the lower layers of the blasthole pile are much coarser than the upper layers, which are usually made of very fine material. This is another form of segregation that can be devastating, as tubes or augers are incapable of collecting material from the lower levels.

23.4.2. Recovery of the Former Sub-Drill

The former subdrill area is already broken up when drilling a new bench. During that phase it is easy to observe that the recovery of chips into the blasthole pile is practically nil, as illustrated in Figure 23.12, area A. Experience proves that such unacceptable recovery may last for 3 or even 4 meters, and it is exacerbated by the use of a sampling tube or an auger incapable of collecting any material from the bottom of the pile accumulated on the ground. Of all the problems generated by blasthole sampling, this extraction bias is probably the worst, and it necessarily invalidates blasthole sampling for reliable grade control. If the cutoff ore grade boundary occurs as illustrated in Figure 23.13, the economic impact of the extraction bias can be huge. If repeated too often, severe reconciliation misfortunes will soon raise grave concerns.


FIGURE 23.11 Refluxing of coarse fragments during blasthole drilling: a severe segregation problem for representative sampling.


FIGURE 23.12 Illustration of the drilling recovery as a function of depth for blastholes in a typical open pit.



FIGURE 23.13 An ore/waste boundary combined with poor drilling recovery on the upper bench: a sure scenario for financial losses.

A recommended test: let us assume a 10-m bench is drilled, with an additional 2-m subdrill. Before drilling starts, place a correctly designed radial bucket near a new hole. Drill for 5 meters, retrieve the sample, and weigh it. Then drill for an additional 5 meters, retrieve the sample, and weigh it. It can be shown that the weight of the first sample is rarely more than 60 to 70% of the weight of the second sample. This is a huge, unsolvable extraction bias, demonstrating that even if someone could collect the entire pile, it would still be impossible to obtain a representative sample. It is fair to say that blasthole samples vastly overrepresent the second half of a bench. If subhorizontal geological structures exist, massive ore grade misclassification may take place, and there is nothing the grade control engineer can do about it.

Some people do not want to include any foreign material accumulated at the top of a new mining bench in the blasthole samples, because such material is not representative of what was predicted by the geological model. The argument and way of thinking are understandable; however, the only purpose of blasthole samples is to predict the average grade that goes to the mill, regardless of its real origin; therefore, the material in the former subdrill must be accounted for by the sample, and there is nothing to debate on this fact.


23.4.3. Upward Contamination during Drilling

During the drilling sequence of a blasthole, upcoming chips constantly scrub the walls of the hole, creating an upward contamination. The deeper the hole, the greater the contamination, and there is nothing anyone can do about this problem. It is an unsolvable sampling problem, because that drilling technique was never intended to collect a representative sample.

23.4.4. Downward Contamination

During the drilling sequence of a blasthole, the rotating rod constantly applies stress on the blasted material belonging to the former subdrill. Excessive contamination of the sample results, due to sloughing of blasted rocks. Such sloughing is not representative of the former, upper subdrill, as the contaminating material comes from a depth-related cone as illustrated in Figure 23.14. It would be very naïve to believe that such contamination compensates for the fact that almost zero recovery is achieved during the drilling phase through the former subdrill.

23.4.5. Moving from Very Bad to Something Acceptable

Due to many factors such as size, density, and shape of fragments, wind, rotation speed of the drilling rod, and compressed air, enormous lateral segregation can be observed in the blasthole pile.


FIGURE 23.14 Cross-sectional view of blasthole drilling illustrating sloughing and subdrill zones.


FIGURE 23.15 Digging two large radial channels and collecting four thin, radial increments to make a composite sample. For each side of a channel, a different bucket is needed.

Such segregation is an evil, transient phenomenon, changing all the time. As a result, it would be unwise to collect all increments at the same distance from the center of drilling, as many people do. A good way to account for such segregation is to use a correctly designed radial bucket positioned at random around the hole; its edges must be radial toward the center of drilling. The problem with the radial bucket arises when the bench is thick (e.g., 15 meters) and the hole is of large diameter. In such a case up to 2 or even 3 tons of material lies on the ground. Because the small opening of the radial bucket should be at the very least three times the size of the coarsest fragments, the radial sample may weigh several hundred kilograms, which is not practical on a routine basis. An option that gives reasonable results is the one illustrated in Figure 23.15. Overall, it is very difficult, if not impossible, to account for lateral segregation; a bias is likely to take place. Furthermore, current sampling practices observed in a large majority of open pits are a total disgrace as far as this issue is concerned.

23.5. Channel Sampling of an Underground Face Development

This is a follow-up of Section 20.4.3, which was illustrated by Figure 20.22. This was a case in a silver mine where the silver mineral was enriched near the mineralized boundaries. As a result, an unaware operator cutting a channel in the waste material on each side of the main mineralization would take chips that overlapped the enriched boundaries, introducing a positive bias and making the waste look attractive. To prevent this bias, it was suggested to cross the mineralization a little when collecting the mineralized sample, as illustrated in Figure 23.16, and to stay away from the boundaries when collecting the waste samples. The new delimitation is illustrated by the two new straight vertical lines indicated by the double arrow. The reconciliation problem with the plant was then substantially reduced.


FIGURE 23.16 Suggested sample delimitation to prevent a devastating extraction bias.

23.6. Drilling with Augers

23.6.1. Sampling a Leaching Pad Using an Auger

Augers are commonly used to drill leach pads to assess copper or gold recoveries. However, it is also well known that augers reject coarse fragments if the drilling diameter is too small, as illustrated in Figure 23.17.

FIGURE 23.17 Augers are well known to reject coarse fragments.


FIGURE 23.18 A recommended diameter for augers: diameter > 8d.

23.6.2. Sampling a Blasthole Pile Using an Auger

Augers are also commonly used to drill blasthole piles to obtain a representative sample, as illustrated in Figure 23.18. For such a task to succeed without introducing a significant extraction bias, the diameter of the auger and the space between flights should be at least eight times the size of the largest fragments.
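A minimal Python sketch of this 8d guideline follows (not from the book; names and values are illustrative):

def auger_is_correct(diameter_cm, flight_spacing_cm, d_cm):
    # Both the auger diameter and the space between flights should be
    # at least eight times the size of the largest fragments.
    return diameter_cm >= 8.0 * d_cm and flight_spacing_cm >= 8.0 * d_cm

# Example: a 40-cm auger with 35-cm flight spacing is too small for 5-cm fragments.
print(auger_is_correct(40.0, 35.0, 5.0))  # False: both dimensions must be >= 40 cm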


24 The Increment Extraction Error During Sampling in a Processing Plant

The importance of this chapter is paramount for anyone designing, building, installing, or maintaining sampling systems in a processing plant. Furthermore, most of the information presented in this chapter should appear in sampling standards.

24.1. Definition of the Extraction Correctness

We already know from Section 18.11.4 that all the fragments having their center of gravity inside the model extended increment belong to the model fragmental increment. Furthermore, from Section 18.11.5 we know that if the rebounding rule is respected, the actual fragmental increment is strictly equivalent to its correct fragmental model. If these conditions are fulfilled, the extraction of the actual fragmental increment does not introduce any new error, and it can be stated that the increment extraction error is nil. We may wonder why the center of gravity and the rebounding rule are so important to achieve a correct extraction. An in-depth analysis of the rebounding rule gives the answers to these questions. To render this analysis relatively simple, we are going to assume that the fragments are perfect spheres.

24.2. Analysis of the Rebound of a Fragment on a Cutter Edge

24.2.1. Definitions and Notations

Let us define the following notations:
F: a spherical fragment of diameter D_F, falling vertically without spinning, except where specified otherwise.
G: the center of gravity of F. It is assumed to coincide with the geometric center of the spherical fragment.
\vec{V}_G: the velocity vector of G, relative to earth.
V_G: the speed of G. It is the magnitude of \vec{V}_G and is a function of time.
C: a rectangular cutter moving on a straight horizontal path.
C_L: the leading edge of C, perpendicular to the path of C.
C_T: the trailing edge of C, parallel to C_L.
H: the horizontal plane generated by the movement of both edges C_L and C_T.
W: the cutter width with respect to the cutter edges.
\vec{V}_C: the velocity vector of C, relative to earth.


V_C: the speed of C. It is the magnitude of \vec{V}_C, which we shall assume constant.
\vec{V}_{GC}: the velocity vector of G relative to the cutter C. It is a function of time, both in direction and magnitude.
V_{GC}: the magnitude of \vec{V}_{GC}.

Figure 24.1 and equations (24.1) and (24.2) show how the two vectors \vec{V}_G and \vec{V}_C are combined, and how V_{GC} is calculated:

\vec{V}_{GC} = \vec{V}_G + \vec{V}_C    (24.1)

V_{GC}^2 = V_G^2 + V_C^2    (24.2)

Figure 24.1 illustrates the velocity vectors applied to the center of gravity G. The geometrical study of the collision of the fragment F (velocity vector \vec{V}_G) with the cutter C (velocity vector \vec{V}_C) is strictly equivalent to the study of the collision of the fragment F (velocity vector \vec{V}_{GC}) with the cutter C in a stationary position (\vec{V}_C = 0).
X: the intersection of the vector \vec{V}_C with the surface of the fragment F.
X': the intersection of the vector \vec{V}_C with the surface of the fragment F, opposite X.
Y: the intersection of the vector \vec{V}_G with the surface of the fragment F.
Z: the intersection of the vector \vec{V}_{GC} with the surface of the fragment F. The position of Z along the arc XY is a function of time.
t_F: the time variable characterizing the chronology of the fragment F.
t_L: the time variable characterizing the chronology of the leading edge C_L.
t_T: the time variable characterizing the chronology of the trailing edge C_T.
t_C: the collision time of the fragment F with the leading edge C_L.

24.2.2. Chronology of the Fragment F

As illustrated in Figure 24.2, we shall take the following instants into consideration:


FIGURE 24.1 Illustration of the velocity vectors applied to the center of gravity G of the fragment F.



FIGURE 24.2 Chronology of the fragment F (panels: at time t_F1, Y reaches Y_H; at time t_F2, Z reaches Z_H; at time t_F3, X reaches X_H).

t_F1: the instant when Y crosses the plane H at Y_H.
t_F2: the instant when Z crosses the plane H at Z_H. We shall call the point Z_H the “critical point” along the arc XY.
t_F3: the instant when X, G, and X' cross the plane H at X_H, G_H, and X'_H, respectively.

The chronology of the fragment F is characterized by the two following inequalities:

t_F3 > t_F2 > t_F1    (24.3)

t_F3 > t_C > t_F1    (24.4)

We should notice that at the instant t_F1, Y is identical to Y_H; at the instant t_F2, Z is identical to Z_H; and at the instant t_F3, X is identical to X_H.

24.2.3. Chronology of the Leading Edge C_L

As illustrated in Figure 24.3, we shall take the following instants into consideration:
t_L1: the leading edge C_L reaches point X_H.
t_L2: the leading edge C_L reaches point Z_H.

FIGURE 24.3 Chronology of the leading edge C_L (panels: at time t_L1, C_L reaches X_H; at time t_L2, C_L reaches Z_H; at time t_L3, C_L reaches X'_H).


t_L3: the leading edge C_L reaches point X'_H.

The chronology of the leading edge C_L is characterized by the two following inequalities:

t_L3 > t_L2 > t_L1    (24.5)

t_L3 > t_C > t_L1    (24.6)

We should notice that at the instant t_L1, C_L is identical to X_H; at the instant t_L2, C_L is identical to Z_H; and at the instant t_L3, C_L is identical to X'_H.

24.2.4. Chronology of the Trailing Edge C_T

In the same way:
t_T1: the trailing edge C_T reaches point X_H.
t_T2: the trailing edge C_T reaches point Z_H.
t_T3: the trailing edge C_T reaches point X'_H.

The chronology of the trailing edge C_T is characterized by the two following inequalities:

t_T3 > t_T2 > t_T1    (24.7)

t_T3 > t_C > t_T1    (24.8)

24.2.5. Collision between the Fragment F and the Cutter C

As illustrated in Figure 24.4, we should distinguish:
• the cutter edges C_L and C_T (we will assume that their thickness is negligible)
• the inner walls of C_L and C_T
• the outer walls of C_L and C_T.

FIGURE 24.4 Illustration of the edges and walls of the cutter C.


Then, the fragment F ends its course in one of the three following areas:
• the leading reject R_L, located beyond the leading edge C_L, on its outer side
• the increment I, located inside the cutter C
• the trailing reject R_T, located beyond the trailing edge C_T, on its outer side.

24.2.6. Respective Positions of the Fragment F and the Leading Edge C_L at the Instant t_F3

We assume that t_F3 < t_T3. Let us analyze what happens at the level of the plane H when the center of gravity G of the fragment F reaches the plane H at the instant t_F3. Figure 24.5 illustrates five possible scenarios:
1. t_L1 > t_F3: X reaches X_H before the leading edge C_L of the cutter. Then, at time t_L1 the fragment F collides with the outer wall of the leading edge C_L and falls toward the leading reject R_L.
2. t_L1 = t_F3: X and C_L reach X_H at the same time. Then the fragment F collides with the upper part of the outer wall of the leading edge and eventually falls toward the leading reject R_L.
3. t_L3 > t_F3 > t_L1: X and X' cannot reach X_H and X'_H because the leading edge C_L is in the way of the fragment F. Then, at some time t_C such that t_F3 > t_C > t_L1,

FIGURE 24.5 Respective positions of the fragment F and the leading edge C_L at the instant t_F3.


the fragment F collides with the leading edge C_L and rebounds toward a direction that will be specified in Sections 24.2.7 and 24.4.9.
4. t_L3 = t_F3: X' and C_L reach X'_H at the same time. The fragment F slides against the inner wall of the leading edge C_L and falls inside the cutter.
5. t_F3 > t_L3: X' reaches X'_H after C_L. The fragment F falls directly inside the cutter and becomes part of the increment I.

We may notice that the final destination of the fragment F is uncertain only in the third scenario. We should now analyze what happens in this case.

24.2.7. Collision between the Fragment F and the Leading Edge C_L

Figure 24.6 illustrates the three possibilities that may arise at the instant t_F2 when Z crosses the plane H at the point Z_H. Let us define Λ_L as the “critical plane,” which contains the leading edge C_L and which is parallel to the velocity vector \vec{V}_{GC}, as indicated in Figure 24.7. When the fragment F collides with the leading edge C_L, it is submitted to two forces parallel to the vector \vec{V}_{GC}, therefore parallel to the plane Λ_L. The first force is associated with the movement of the fragment F and applied to its center of gravity G. The second force is the result of the collision between F and C_L, applied at the collision point. We assume that the leading edge C_L is not affected by the collision. The fragment F should rebound toward a direction which depends on the respective positions of the center of gravity G and the plane Λ_L at the collision time t_C. The three following scenarios may arise:
1. t_L2 > t_C > t_F2: Z reaches Z_H before the leading edge C_L. Therefore, Z is below the plane H, and G and \vec{V}_{GC} are on the right side of the plane Λ_L. The fragment F rebounds toward the leading reject R_L and spins clockwise.
2. t_L2 = t_C = t_F2: Z reaches Z_H at the same time as the leading edge C_L. Therefore, Z is in the plane H, and G and \vec{V}_{GC} are in the plane Λ_L. This corresponds to the “first critical position” of the fragment F. It is the random collision of the fragment F with the other fragments that will determine the direction to be taken. For the time being this direction is unspecified.
3. t_L2 < t_C < t_F2: the leading edge C_L reaches Z_H before Z. Therefore, Z is above the plane H, and G and \vec{V}_{GC} are on the left side of the plane Λ_L. The fragment F rebounds toward the cutter C and belongs to the “model fragmental increment” if the increment extraction is correct. As the fragment F rebounds, it starts spinning counterclockwise.

24.2.8. Respective Positions of the Fragment F and the Trailing Edge C_T at the Instant t_F3

In this section we analyze what happens at the instant t_F3 when the center of gravity G of the fragment F crosses the plane H, or comes near it, in the vicinity of the trailing edge C_T. We are still assuming for the time being that the fragment falls vertically. What happens when this fragment is a rebounding fragment coming from the collision with the leading edge is another problem that will be studied in a later section of this chapter. This problem is similar to the one analyzed in Sections 24.2.6 and 24.2.7, except that the right side of the trailing edge C_T corresponds to the increment I, while the left side corresponds to the trailing reject R_T. Thus, we can retain the conclusions reached previously.


FIGURE 24.6 Respective positions of F and C_L at the instant t_F2 when t_L3 > t_F3 > t_F2 > t_L1.

24.2.9. The Rebounding Rule and Definition of the Model Fragmental Increment

We know that the speed V_G is a function of time and varies with the position of the center of gravity G about the plane H. Thus, the direction of the velocity vector \vec{V}_{GC} is also a function of time. Figure 24.8 illustrates a vertical section of a few typical fragments located inside a falling stream and of a cutter C at a given instant. The figure also shows two parabolic curves C_A and C_B that can be superposed by a translation parallel to the cutter direction. These curves are the envelopes of the relative velocity vectors \vec{V}_{GC} applied to the center of gravity G of a fragment F falling in the “first critical position” as defined in Section 24.2.7. This figure shows the identity of the “rebounding rule” and the “rule of the center of gravity.”


FIGURE 24.7 Collision between F and C_L. Respective positions of G and Λ_L at the collision time t_C. Respective positions of Z and H. The rebounding rule.

The rebounding rule is respected when a certain number of conditions relative to the fragments making up the flowing stream (Section 24.3) are fulfilled.

24.3. Conditions of Extraction Correctness Relative to the Material to Be Sampled

In Section 24.2 we made three assumptions:


FIGURE 24.8 Definition of the actual extended increment and of the discrete model increment. Since \vec{V}_G increases because of gravity, C_A and C_B follow a parabola.

• The fragments fall one by one and do not interfere with each other.
• They do not spin.
• They fall in the plane containing our figures.

Obviously, in a real situation, things are different:
• Fragments fall toward the cutter as a compact stream.
• They are likely to spin.
• The velocity vector \vec{V}_G does not belong to a vertical plane containing the velocity vector \vec{V}_C.


24.3.1. Fragments Do Not Fall One by One

Let us consider a fragment that belongs to a stream made of many fragments whose trajectories are more or less parallel. When this fragment rebounds on one edge of the cutter, its new trajectory necessarily crosses the trajectories of many other fragments, and a chain reaction of collisions takes place. This chain reaction is likely to modify the initial probability of the fragments respecting the rebounding rule. Inside the stream, at some distance away from the periphery, we may safely admit that a statistical balance exists between the fragments having their trajectory modified toward one direction (e.g., toward the rejects instead of the cutter) and the fragments having their trajectory modified toward the other direction (e.g., toward the cutter instead of the rejects). On the contrary, at the periphery of the stream, the resulting effect can only be centrifugal. Consequently, for a given edge there is a systematic alteration of the extraction probabilities of some fragments. Fortunately, the cutter is made of two perfectly symmetrical edges (hopefully); therefore, the perturbation observed at the level of the leading edge can, under specific conditions, be statistically balanced by the perturbation observed at the level of the trailing edge. The statistically balanced perturbations are taken into account by the term h1 of the heterogeneity carried by the increment I, and by the term V(j=0) of the variogram.
Conclusion: The fact that fragments do not fall one by one does not introduce any new error and does not alter the extraction probability.

24.3.2. Fragments Are Spinning

When fragments are spinning, they carry additional energy that has not been taken into account by the rebounding rule. Here also we find that the phenomenon is perfectly symmetrical at the level of both edges of the cutter, if the cutter is well designed, as we will see later in this chapter.
Conclusion: The fact that fragments are spinning does not introduce any new error and does not alter the extraction probability.

24.3.3. Fragments Do Not Fall in a Vertical Plane Containing the Velocity Vector \vec{V}_C

Figure 24.1 illustrates both velocity vectors \vec{V}_G and \vec{V}_C as if they were contained in the vertical plane of the figure, but in practice it often happens differently. When the fragment F collides with the leading edge C_L at a different angle, it still rebounds toward the side that contains its center of gravity.
Conclusion: The colliding angle made by the falling fragment does not affect the rebounding rule and does not introduce any new error, as long as the length of the cutter is sufficient to allow all fragments bouncing toward the inner sides of the cutter to be recovered in the increment.

24.4. Conditions of Extraction Correctness Relative to the Cutter Characteristics

The following characteristics are likely to play an important role in the respect of the rebounding rule:


• the straightness of the cutter edges
• the thickness of the cutter edges
• the shape of the cutter edges
• the length of the cutter opening
• the inclination of the cutter opening
• the cutter width
• the cutter speed
• the depth, capacity, and general design of the cutter
• the speed of the stream hitting the cutter edges.

24.4.1. Straightness of the Cutter Edges

So far, we have assumed that both cutter edges were straight and perpendicular to the cutter trajectory. This is what we observe most of the time with new samplers, since it is the easiest way to build them. The main condition to fulfill is to ensure that the trailing edge is perfectly superposable on the leading edge, by a translation for straight-path cutters or by a rotation for circular-path cutters, as illustrated in Figure 24.9. If this condition is fulfilled, the straightness of the cutter edges is irrelevant; it is often chosen because of its simplicity. There are cutters on the market with curvilinear edges that are perfectly correct because their edges are superposable by rotation. The possible superposition of the cutter edges was already a necessity to ensure a correct increment delimitation; it is still a necessity to ensure a statistical balance between some effects observed at the level of both edges. This statistical balance is essential to prevent new errors that could alter the extraction probability of some fragments (see Sections 24.3.1, 24.3.2, and 24.3.3).

24.4.2. Thickness of the Cutter Edges

So far, we have assumed that the cutter edges were immaterial lines without thickness. Either by construction, or after some deformation generated by the hammering effect of large fragments, cutter edges always have a certain thickness. As illustrated in Figure 24.10, the fragment F1 colliding with the trailing edge has its center of gravity inside the cutter; therefore it belongs to the increment. However, the cutter velocity toward the right makes this fragment roll toward the trailing reject. Now we can imagine an equivalent fragment F2 colliding with the leading edge with the same velocity vector (\vec{V}_{FC1} = \vec{V}_{FC2}). Its center of gravity is on the side of the leading reject; however, the cutter velocity toward the right makes this fragment roll toward the increment. If both edges are flat, as indicated in the figure, there is a statistical balance between the effect observed at the level of the trailing edge and the same effect observed at the level of the leading edge.
Conclusion: The thickness of the cutter edges does not introduce any new error and does not alter the extraction probability if the cutter edges are flat and perfectly symmetrical.

24.4.3. Shapes of the Cutter Edges

When the cutter edges become appreciably thick, as is often the case, we need to remain cautious about the shape of these edges. Several shapes are likely to alter the extraction


FIGURE 24.9 Cutter blades must perfectly mirror one another.

FIGURE 24.10 Influence of the thickness of cutter edges on the rebounding rule: correct design.


probability of some fragments and render the statistical balance of effects observed at the level of both edges impossible. Figure 24.11 illustrates a case where the fragment F1 colliding with the trailing edge is easily lost as it rolls down along the moving edge, while the fragment F2 colliding with the leading edge is recovered with difficulty as it rolls up along the moving edge. In this case there is no statistical balance of the effects observed at the level of both edges, and an extraction error is likely to be generated. Sometimes edges consist of large adjustable plates designed as illustrated in Figure 24.12. Not only does this type of cutter present the same disadvantages as the one illustrated in Figure 24.11, but it is also likely to accumulate material on top of the leading edge. This is due to the fact that the leading edge prevents the material from sliding down toward the leading reject, so this material may actually reach the increment, while the trailing edge encourages the material to slide down toward the trailing reject. This phenomenon may introduce both a delimitation and an extraction bias. It should be avoided at all costs.
Conclusion: If edges are thick for mechanical reasons, they should be perfectly symmetrical, as illustrated in Figures 24.13C and 24.13D. The design shown in Figure 24.13A can be a source of big trouble and deserves a special discussion in Section 24.4.4. The case illustrated in Figure 24.13B is likely to introduce an extraction error, as explained in Figure 24.11.

24.4.4. Problems with Inner Slopes at the Level of Cutter Edges

Inner slopes inside the cutter edges, as illustrated in Figures 24.13A and 24.14B, lead to a narrowing of the cutter opening right after the material enters the cutter. If fragments were falling one by one, or if the stream to sample were very small, it would not be a

FIGURE 24.11 Influence of the shape of cutter edges on the rebounding rule: incorrect design.


FIGURE 24.12 Influence of large adjustable plates on the rebounding rule: example of an incorrect rotating Vezin sampler installed under the discharge of a small conveyor belt.

FIGURE 24.13 Correct and incorrect designs for cutter edges (A, B: bad designs; C, D: good designs).

problem. However, if the stream is massive, fragments hitting the inner slope have nowhere to go; they accumulate at this position and finally find their way up, flushing out. The rate of flushing highly depends on the size of the fragments, as the fines may find their way down more easily.


Conclusion: Any inner slope inside the cutter edge leading to a narrowing of the cutter opening will introduce an extraction bias, as the cutter becomes selective about what it takes, favoring the fine particles over the large fragments.

24.4.5. Correct Outer Slopes of the Cutter Blades

Many cutter designs in the industry have outer slopes at shallow angles such as 45° or 50° at best, as shown in Figure 24.13B. These old guidelines happen to be wrong, especially for cutters moving fast. The recommended outer slope is at least 70°, as illustrated in Figure 24.15, so that no material can climb the leading blade and be collected in the increment.

24.4.6. Conditions for Designing a Correct Cutter

In summary, a good cutter design should comply with a few important rules, as illustrated in Figure 24.15:
• The cutter blades should be made of superior-quality stainless steel so they do not get damaged too quickly.
• The cutter blades should have only one possible position, so they always mirror one another perfectly. Someone may want adjustable blades; this is a mistake, as the user should do his homework first and decide what the cutter opening W should be before purchasing the unit.
• The outer slopes of the cutter blades should be at least 70°.
• The cutter edges can have a small flat top; we may call this a flat thickness. For very aggressive material, such as iron ore loading a ship, we may consider a 2-cm flat. For less aggressive material, such as coal, we may consider a 0.5-cm flat. For very large flowrate slurries, such as the final tail of a flotation plant, we may consider a 0.2-cm flat. For very small flowrates, such as secondary or tertiary Vezin samplers, we may consider a 0.05-cm flat. For sticky material there should be no flat at all.
• There should be absolutely no restriction of the space available inside the cutter, so that no material can flush out; thus Z > X > W.

24.4.7. Length of the Cutter Opening

In Chapter 21 we saw that the stream to be sampled must pass well inside the cutter trajectory, preventing the introduction of a delimitation error. This implied that the cutter was long enough to fulfill this condition. But the cutter length may also affect the correctness of the extraction if the fragments that rebound toward the increment have, in fact, trajectories leading outside the trajectory of the cutter, as indicated in the top view of Figure 24.16. In this figure, the straight-path cutter that is represented does not introduce a delimitation error; however, it most certainly introduces an extraction error, because it is too short to recover the rebounding fragments that belong to the increment. A good guideline to follow is that the length of the cutter, assuming the stream falls in the middle of the cutter, should be at least twice, or better, three times the thickness of the stream at the point of impact, as shown in the bottom view of Figure 24.16. Sometimes, with wet or sticky materials, some fragments or fines stay in contact with the conveyor belt, and the installation of an appropriate scraper is necessary to recover this material and channel it back toward the main stream. In this case it is important to


FIGURE 24.14 Inner slopes inside cutter edges: a source of big extraction problems (A: X < Y is correct, IEE = 0; B: X > Y is incorrect, IEE > 0).

FIGURE 24.15 Correct design of a cutter: flat thickness on the edges, outer slopes of at least 70°, and Z > X > W.

design the cutter in such a way that the secondary stream coming from the scraper passes well inside the cutter trajectory, as illustrated in Figure 24.17. A cutter installed in such a way that it would miss the secondary stream would be a disaster, but this is unfortunately a very common flaw.

24.4.8. Inclination of the Cutter Opening

Many manufacturers of sampling equipment recommend that the inclination of the cutter edges be such that the stream is perpendicular to the plane defined by the


FIGURE 24.16 The top view shows a correct delimitation but an incorrect extraction. The bottom view shows correct delimitation and extraction.

trajectory of the cutter edges. In Figure 24.18 a cutter with an inclined opening is represented. As long as the rebounding fragments that belong to the increment and the secondary stream from the scraper pass inside the cutter trajectory, this design remains correct. The cutter is perpendicular to the falling stream, and the maximum tolerance is ±15°, providing a narrow compromise that depends on flowrate fluctuations. Sometimes open-discharge straight-path pulp cutters have vertical edges, as shown in Figure 24.19. In this case two major sources of incorrectness take place:


FIGURE 24.17 Material recovered by the scraper must pass well inside the trajectory of the cutter: correct delimitation and extraction (1: main stream; 2: secondary stream).

FIGURE 24.18 Illustration of a correct sampler equipped with inclined cutter edges (maximum tolerance ±15°).


FIGURE 24.19 Illustration of a cross-stream sampler equipped with vertical cutter edges, which is always incorrect.

• All fragments rebounding on the cutter edges fall toward the leading or trailing rejects, regardless of whether they belong to the increment, which is not correct. For each rebounding fragment the resulting effect is equivalent to a narrowing of the cutter width by the diameter of the fragment. Furthermore, we know that the sampling ratio is proportional to the cutter width, and the reduced width is a decreasing function of the fragment size. The size analysis is therefore systematically distorted toward the fine side. An important extraction error, more specifically an extraction bias, takes place.
Conclusion: With vertical edges, the smaller the cutter width and the wider the particle size distribution of the material, the larger the increment extraction bias.
• The material recovered by the scraper cannot be channeled inside the trajectory of the cutter.
Conclusion: A cutter with vertical edges is likely to introduce an increment delimitation bias. The combination of IDE and IEE is devastating for the correctness and reliability of such a sampling system.

24.4.9. Cutter Problems Associated with Overflowing

Figure 24.20 illustrates a cutter with insufficient capacity that starts overflowing before its trajectory across the stream is completed. This problem is extremely


FIGURE 24.20 Illustration of an overflowing cutter.

FIGURE 24.21 Dynamics of the volume of slurry inside the cutter as it crosses the flowing stream: the acceptable plateau, reached during the stream cutting time and emptied during the cutter discharge time, must remain below the maximum acceptable volume in the cutter.

common and is responsible for massive increment extraction biases. It is essential to study the dynamics of how much volume goes inside the cutter as it crosses the stream, as shown in Figure 24.21. The objective is to reach a plateau, which is the point where as much material enters the cutter as leaves it at the same time. That plateau must remain below the maximum acceptable volume inside the cutter, beyond which it may start overflowing.


Sometimes the cutter was initially designed correctly, but an expansion took place and the flowrate of the main stream substantially increased, to the point where the cutter is no longer capable of performing a correct job. In such a case, the entire sampling system must be rebuilt.
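The volume balance of Figure 24.21 can be explored numerically. The Python sketch below is entirely illustrative (the inflow, the proportional-discharge assumption, and all values are assumptions, not the book's method); it checks whether the plateau, where inflow equals outflow, stays below the maximum acceptable volume:

def cutter_volume_profile(inflow_lps, k_per_s, crossing_s, v_max_l, dt=0.001):
    # Volume held in the cutter while it crosses the stream, with the
    # discharge rate modeled as proportional to the held volume.
    volume, peak = 0.0, 0.0
    for _ in range(int(crossing_s / dt)):
        volume += (inflow_lps - k_per_s * volume) * dt
        peak = max(peak, volume)
    plateau = inflow_lps / k_per_s  # level where inflow equals outflow
    return peak, plateau, peak > v_max_l

# 12 L/s entering, discharge coefficient 4 per second, 2-s crossing, 5-L limit:
peak, plateau, overflowed = cutter_volume_profile(12.0, 4.0, 2.0, 5.0)
print(round(peak, 2), round(plateau, 2), overflowed)  # about 3.0 3.0 False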

24.4.10. The Appropriate Depth and General Design of the Cutter

Figure 24.22 shows a cross-stream sampler that will never work. The reason this picture is displayed in this book is that it is representative of most cross-stream samplers found on the world market today, which is the result of not teaching TOS properly at universities. Its defects are:
• The cutter is vertical.
• The cutter cannot cross the secondary stream created by the scraper.
• The cutter bucket is not deep enough; therefore a lot of material bounces out at the top of the cutter as it crosses the stream, just like a geyser.
• The cutter chamber is too small and quickly overflows as the cutter crosses the stream.
• The cutter does not have a strong enough support to handle the energy carried by the stream, and so on.
Be aware that just one of these defects makes the sampling system useless and highly biased.

FIGURE 24.22 Illustration of a common, poorly designed cross-stream sampler.


24.4.11. Width and Velocity of the Cutter: A Logical Qualitative Approach

24.4.11.1. Introduction

The effects of the cutter width W and the cutter velocity V_C are ballistic in nature and cannot be dissociated from one another. These two parameters become critical for all fragments rebounding on the leading edge or trailing edge toward the inner part of the cutter; by definition, such fragments belong to the model increment, and we should determine under what conditions they can stay in the actual increment. With the exception of samplers handling exceptionally large tonnages, for which important safety factors must be taken, the cutter width W and the cutter velocity V_C are irrelevant for all fragments:
• falling directly toward the leading or trailing reject
• falling directly inside the cutter
• rebounding on one edge toward the leading or trailing reject.
In the following sections, we restrict our demonstration to fragments rebounding toward the inner part of the cutter, and we assume that air resistance and turbulence are negligible. This hypothesis is justified because W and V_C are especially critical for the largest fragments. Of course, our assumption would not be acceptable for fine particles, which should be the object of a separate study.

24.4.11.2. Large Fragments Rebounding on the Leading Edge

Let us consider a fragment F falling vertically on the leading edge C_L of the cutter C travelling with a velocity vector \vec{V}_C. Figure 24.23 shows the relative rebounding trajectory of the center of gravity G of the fragment (i.e., relative to the edge movement). This trajectory is made of two parabolic branches intersecting at G_C, which is the position of the center of gravity G at the instant t_C of the collision. This double curve is the envelope of the relative velocity vector \vec{V}_{GC}. Let us define:

FIGURE 24.23 Trajectory of the center of gravity G of a fragment in a plane H moving with the cutter. Graphical definition of the critical point Z'_H.


Z': the intersection of \vec{V}_{GC} with the surface of the fragment F immediately after the collision.
Z'_H: the intersection of the trajectory of Z' with the plane H, which shall be called “the second critical point.”
t_F4: in the chronology of the fragment F, the instant at which Z' reaches Z'_H.
t_T4: in the chronology of the trailing edge C_T, the instant at which C_T reaches Z'_H.

As illustrated in Figure 24.24, at the instant t_F4 three different scenarios are possible:
• t_F4 > t_T4: the trailing edge C_T reaches the critical point Z'_H before Z' (see CT1 and CT2). Then the fragment F either falls directly into the trailing reject or collides with C_T at some point of its surface between Z' and X'. In both cases, the fragment F is lost; therefore the actual fragmental increment is biased by the introduction of an extraction error.
• t_F4 = t_T4: the trailing edge C_T reaches the critical point Z'_H at the same time as Z' (see CT3). The fragment is at the second critical position. It is the random collisions of the fragment F with the other fragments that will determine the direction it will take. For symmetry reasons, in general, no new error is added.
• t_F4 < t_T4: the trailing edge C_T reaches the critical point Z'_H after Z' (see CT4). The fragment F falls into the inner part of the cutter and becomes part of the actual fragmental increment. There is no extraction error.

For a given fragment F of diameter d_F falling with a speed V_G, and for a given leading edge traveling with a speed V_C, there are as many possible trajectories for G and Z', and as many points Z'_H, as there are possible collision points between X and Z at the surface of the fragment F. Therefore, there is an infinite number of possibilities. Let us define:

FIGURE 24.24 Trajectory of Z'. Collision of the second type between a fragment F and the trailing edge C_T; possible positions of the trailing edge (CT1, CT2: wrong; CT3: critical; CT4: safe).


Let us define:
Z'MAX: the point Z'H corresponding to the maximum rebound of the fragment F. For a given lot L characterized by a certain fragment size distribution gd, the distance between Z'MAX and CL is an increasing function of the diameter of the fragment F.
Z'FAR: the farthermost point Z'H, corresponding to the maximum rebound of the largest fragment F to be sampled.
W0: the distance illustrated in Figure 24.24 between CL and the farthest possible position of Z'H, where Z'H = Z'FAR. We can then define W0 as the "critical width" of the cutter C. It is a function of the maximum fragment size d and of the cutter speed VC.
Our demonstration allows the following conclusions:
• If W ≥ W0: all the fragments that belong to the model increment are recovered in the actual increment and the extraction is correct.
• If W < W0: some of the fragments that belong to the model increment rebound beyond the trailing edge and are not recovered in the actual increment. In this case the extraction is not correct, and this error mainly affects the largest fragments. Their probability of being part of the increment diminishes, and the sampling operation is biased in favor of the finer material.
In terms of extraction probability, for a given cutter width W, we define d0, a critical fragment size such that:
• For all fragments of diameter dF ≤ d0, the extraction probability is uniform and equal to one.
• For all fragments of diameter dF > d0, the extraction probability is smaller than one and, in fact, tends toward zero as dF approaches W.
Now, if we consider the cutter speed VC, we find that the faster the cutter, the longer the trajectory of Z', and thus the farther Z'MAX and Z'FAR are from the leading edge CL. From this we deduce that W0 is an increasing function of the cutter speed VC.
Furthermore, the fragment F and the cutter C carry a certain amount of kinetic energy. During the collision of F with CL this energy is transformed into:
• heat
• possible comminution of the fragment
• wear of the edge
• rebounding motion of the fragment
• spinning motion of the fragment.
As already assumed in this section, if the center of gravity G is on the left side of the curve CA illustrated in Figure 24.8, the fragment F rebounds on the leading edge CL toward the left and spins counterclockwise. Let us then assume that after the rebound, the fragment F collides with the trailing edge CT at an instant t'C such that tT4 > t'C > tF4. If t'C is very near tF4, the spinning effect may, during the collision, send the fragment toward the trailing reject and therefore introduce an extraction error. This problem is amplified as VC and dF increase.


24.4.11.3. Large Fragments Rebounding on the Trailing Edge
We assume in this section that the fragment F collides directly with the trailing edge CT, without rebounding first on the leading edge CL, then falls toward the inner part of the cutter. We could proceed with the same demonstration as in Section 24.4.11.2 and define a new critical point Z"H and a new critical width W'0. However, since the leading edge travels in the same direction as the rebounding fragment, it is obvious that W'0 is always smaller than W0.
Conclusion: It is always the rebound from the leading edge CL that should determine the minimum width and the maximum speed of the cutter. In the following sections we study the relation between the size d of the largest fragments, the minimum cutter width W0, and the maximum cutter speed V0.

24.4.12. Cutter Width and Velocity – Rules of Extraction Correctness
This section summarizes the conclusions of many experiments performed between 1955 and 1978 by Dr. Pierre M. Gy. It is beyond the scope of this book to recall these experiments, which have been repeatedly described in former publications. We should only emphasize that the following rules are extremely important.

24.4.12.1. First Rule of Extraction Correctness
For d ≥ 0.3 cm: W ≥ W0 = 3d (24.9)
For d < 0.3 cm: W ≥ W0 = 1 cm (24.10)

24.4.12.2. Second Rule of Extraction Correctness
For all values of d, with W = nW0 and n ≥ 1, and with V0 defining the critical cutter speed beyond which the extraction becomes incorrect, we have:
VC ≤ V0 = (1 + n) · 30 cm/s (24.11)

24.4.12.3. Third Rule of Extraction Correctness
The optimum, and cheapest, correct solution is defined by:
W = W0 (24.12)
and
VC = V0 = 60 cm/s (24.13)

Important remark: When V0 is 60 cm/s, it does not mean it is the recommended cutter speed. Instead, it should be regarded as the very last chance to perform a reliable extraction; there is a difference that is often ignored by practitioners. Therefore, if 60 cm/s is selected as a cutter speed, it may lead to the absence of an extraction error, but it may also lead to the presence of one. The practitioner should not forget


that materials to be sampled can have very different characteristics; therefore, it would be a great mistake to take these guidelines for granted. These guidelines are not universal, as we are going to see in the following sections.

24.4.13. Critical Cutter Width W0
For coarse material we find general agreement that the critical width should be proportional to the size d of the largest fragments. For fine materials there is also general agreement that, irrespective of d, there must be an absolute minimum for the critical width. However, there is great difficulty in agreeing upon a given value for this absolute minimum, because local situations, material characteristics, flowrates, moisture contents, viscosities, and many other factors can be very different.
Important remarks: Experiments performed in the past reached conclusions based on the kind of material that was used at the time, and they are not necessarily applicable to other materials. Also, and this is a widespread disease, computer simulations are very useful of course, but there again they do not necessarily represent what happens in daily reality for a particular material and its surrounding conditions. Simulations become dangerous when the practitioner starts to believe they are the real thing; do not fall into that trap. These words of caution being said, let us consider several common cases.

24.4.13.1. Coarse Materials
From one author to another, and from one manufacturer to another, the ratio W0/d varies between 2 and 4, with a median value around 3. For ratios between 2 and 2.5 it has been demonstrated beyond any possible doubt that the large size fractions were misrepresented to the advantage of the finer fractions. These tests were mainly conducted in the coal industry. A very elaborate experiment performed by Dr. Pierre M. Gy in 1977⁷⁷ led to the results summarized in relations 24.9, 24.10, and 24.11.

24.4.13.2. Fine Materials
Obviously, for very fine materials the rule W0 = 3d is no longer valid. Many values of the absolute minimum have been suggested, ranging from 0.2 cm to 1 cm. For pulps with flowrates not exceeding a few hundred cubic meters per hour, and for values of d not exceeding 0.3 cm, we recommend an absolute minimum cutter width equal to 1 cm. This value is the result of extensive tests performed by Gy⁷⁷ and Armstrong-Smith.²⁰¹ Since the definitions of coarse materials and fine materials are vague in nature and usually left to the reader, it is recommended to combine relations 24.9 and 24.10, which leads to the following slightly conservative guideline, the one this author has been using for many years:
W ≥ W0 = 3d + 1 cm (24.14)
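A minimal Python sketch of relations 24.9, 24.10, and 24.14; the helper name and structure are this writer's assumptions, not part of the original text:

def minimum_cutter_width_cm(d_cm: float, conservative: bool = True) -> float:
    """Minimum correct cutter width W0 for a cross-stream cutter.

    d_cm: size d of the largest fragments, in centimeters.
    conservative=True applies the combined guideline W0 = 3d + 1 cm
    (relation 24.14); otherwise the original rules 24.9 and 24.10 apply.
    """
    if d_cm < 0:
        raise ValueError("fragment size must be non-negative")
    if conservative:
        return 3.0 * d_cm + 1.0  # relation 24.14
    if d_cm >= 0.3:
        return 3.0 * d_cm        # relation 24.9, coarse material
    return 1.0                   # relation 24.10, fine material

# Example: a stream with a 2.5 cm top size needs a cutter at least 8.5 cm wide.
print(minimum_cutter_width_cm(2.5))  # -> 8.5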


Unfortunately, a number of manufacturers still design, build, and advertise, on a large scale, samplers with too narrow a cutter width, such as W = 2.5d or even 2d for coarse materials, and W = 0.3 cm or even 0.2 cm for fine materials.

24.4.14. The Peculiar Case of Very Large Flowrates for the Critical Cutter Width W0
The following guidelines are based on discussions with Dr. Pierre M. Gy, on the experience of other sampling experts, on the experience of some of the author's customers, and on the author's long experience. There is no doubt that more research needs to be performed in this important domain. For very high flowrates (e.g., above 500 metric tons per hour, or 500 cubic meters per hour) the above guideline may not be conservative enough. With such flowrates, other factors rapidly come into play (e.g., the speed of the stream, which we will discuss in a later section). No realistic study has ever been conducted to optimize W0 for such streams. In any case, important safety factors should be applied by the engineer, who will rely heavily on practical experience. The following guidelines are strictly from the author's experience.

24.4.14.1. Guideline for Pulps in Flotation Plants
Figure 24.25 applies to main stream velocities smaller than 2 m/s. Be aware that if the stream velocity is larger than 2 m/s, larger safety factors must apply, and for a stream velocity above 3 m/s nothing is likely to work properly; it then becomes necessary to re-engineer the stream to slow it down, as illustrated in Figure 24.26, which can be expensive. As for the dotted uncertainty curves illustrated in Figure 24.25, they depend heavily on several factors such as percent solids, viscosity, temperature, and so on. There are no magic answers, so remaining conservative is a good preventive strategy. Figure 24.26 shows a case where two similar streams travelling anywhere between 3 and 4 m/s were combined into a Final Tail for a flotation plant. It was necessary to build a deep, vertical tank called a needle tank. The streams carry so much energy that nothing can settle at the V-shaped bottom of the tank. The discharge of the tank can then be designed in such a way that the velocity is slowed down below 2 m/s; this is the place where the primary sampler of the sampling station must be installed.

24.4.14.2. Guideline for Solids Travelling on a Conveyor Belt
Coarse materials travelling on a conveyor belt, assuming the speed of the conveyor belt is no more than 2 m/s, can be sampled correctly at high flowrates as follows (see the sketch after this list):
• for flowrates between 500 and 2500 tons/hour: W0 = 4d
• for flowrates between 2500 and 5000 tons/hour: W0 = 5d
• for flowrates between 5000 and 8000 tons/hour: W0 = 6d
• for flowrates above 8000 tons/hour: W0 = 6d is probably acceptable, but with reservations.
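These belt-flowrate guidelines reduce to a simple lookup; the following hedged sketch (invented function name) encodes only the thresholds just listed:

def belt_cutter_width_cm(d_cm: float, flowrate_tph: float) -> float:
    """Critical cutter width W0 for coarse solids on a conveyor belt,
    assuming a belt speed of no more than 2 m/s.

    flowrate_tph: flowrate in metric tons per hour.
    """
    if flowrate_tph < 500:
        return 3.0 * d_cm + 1.0  # ordinary guideline, relation 24.14
    if flowrate_tph <= 2500:
        return 4.0 * d_cm
    if flowrate_tph <= 5000:
        return 5.0 * d_cm
    # 5000 to 8000 t/h; above 8000 t/h acceptable only with reservations
    return 6.0 * d_cm

# Example: a 10 cm top size at 3000 t/h calls for a 50 cm cutter opening.
print(belt_cutter_width_cm(10.0, 3000.0))  # -> 50.0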


FIGURE 24.25 Necessary cutter opening as a function of flowrate in a flotation plant (slurry stream flowrate in m3/hour, from 0 to 10,000, versus recommended cutter opening W0 in centimeters, from 0 to 10, with dotted uncertainty curves).

If the stream carries too much energy by travelling at 3 or 4 m/s, it is unlikely the sampling system will work satisfactorily; there are indeed situations where good sampling is almost impossible, and that is the way it is.

24.4.15. Critical Cutter Speed V0
As early as 1964, Minemet-Industry, a well-known French sampling equipment manufacturer, conducted various elaborate tests whose conclusion was that, for coarse materials, no significant increment extraction bias is likely to appear if the critical cutter speed is


FIGURE 24.26 Necessary engineering to slow down the main stream to be sampled (top view of the needle tank receiving the streams, with the diverging chute, the cutter, and a working platform).

V0 = 40 cm/s for all their samplers. We can state that the value recommended by the American Society for Testing and Materials (ASTM) in its standard D2234, Section 6.5, V0 = 45.7 cm/s, is correct and safe. Following our demonstration started in Section 24.4.9, and our experience as sampling consultants, it was logical to conclude that the cutter speed could increase if the cutter width was increased at the same time. Thus, we suggested the following rule:
VC ≤ V'0 = (W / W0) · V0 with V0 = 40 cm/s (24.15)
A series of tests was conducted in 1977⁷⁷ to verify and support this rule. Now, in light of these experiments, we can suggest the following new rule:
VC ≤ (1 + W / W0) · (V0 / 2) with V0 = 60 cm/s (24.16)
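Relation 24.16 is straightforward to apply numerically; here is a minimal sketch with invented names, also enforcing the 120 cm/s absolute ceiling discussed just below:

def max_cutter_speed_cms(w_cm: float, w0_cm: float, v0_cms: float = 60.0) -> float:
    """Upper limit on the cutter speed VC from relation 24.16.

    Requires W >= W0; the result is capped at the experimental absolute
    maximum of 120 cm/s, beyond which the relation is no longer valid.
    """
    if w_cm < w0_cm:
        raise ValueError("cutter width W must be at least the critical width W0")
    vc_max = (1.0 + w_cm / w0_cm) * v0_cms / 2.0  # relation 24.16
    return min(vc_max, 120.0)

# Example: with W = 2 W0 the limit is (1 + 2) * 30 = 90 cm/s.
print(max_cutter_speed_cms(20.0, 10.0))  # -> 90.0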

At this stage it is prudent to recall that W ≥ W0, and not to confuse the absolute limit V0 with a velocity recommendation. The same experiments also clearly showed that the absolute maximum of VC was 120 cm/s; beyond this value relation 24.16 is no longer valid. It is the experience of this author that the value of VC becomes highly suspicious beyond 45 cm/s, and this is the reason this author highly recommends the ASTM standard value of 45 cm/s.

24.4.16. Optimum Width and Optimum Cutter Speed
If µ0 is the average flowrate of the active components and MI the average weight of the increments, we may write:
MI = µ0 · (W / VC) (24.17)

The average increment weight MI is proportional to the cutter width W and inversely proportional to the cutter speed VC. The preparation cost of these increments is proportional to the weight MI. An economical optimum is reached when W approaches W0 and when VC approaches V0. When this rule is observed, a correct extraction is achieved, and the actual increment is affected by a negligible extraction error at the lowest possible cost. It is important to notice that the economical optimum also corresponds to the minimization of the Grouping and Segregation Error GSE studied in Chapter 13.
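A short numerical sketch of relation 24.17 (the units and names are this writer's assumptions) makes the cost trade-off visible:

def increment_weight_g(mu0_g_per_s: float, w_cm: float, vc_cms: float) -> float:
    """Average increment weight MI = mu0 * W / VC (relation 24.17).

    mu0_g_per_s: average flowrate of the active components in g/s;
    W / VC has units of seconds, so the result is in grams.
    """
    return mu0_g_per_s * w_cm / vc_cms

# Halving the cutter speed doubles the increment weight, and with it the
# preparation cost; hence the optimum at W close to W0 and VC close to V0.
print(increment_weight_g(5000.0, 8.5, 60.0))  # ~708 g per increment
print(increment_weight_g(5000.0, 8.5, 30.0))  # ~1417 g per increment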

24.4.17. Guidelines for Rotating Cutters
Rotating samplers, such as the Vezin sampler, are often used in flotation plants as secondary or tertiary samplers. Figure 24.27 shows the locations where W0 and V0 should be measured relative to the cutter and the stream to be sampled. W0 should be measured at the point closest to the center of rotation where the cutter cuts the stream. V0 should be measured at the point farthest from the center of rotation where the cutter cuts the stream. It is the author's experience, following tests performed with some of his clients, that V0 is about 40 cm/s for large diameter Vezin samplers, and 30 cm/s for diameters smaller than 50 cm. If these rules are not respected, it is likely the particle size distribution of the sample will not exactly match the particle size distribution of the main stream. Figure 24.28 shows important details to look for, either when building a Vezin sampler or when buying one.
Point #1: The cutter needs to cut the entire stream, and on each side of the stream the cutter should have an excess length of about 5 cm to give enough space for bouncing fragments that belong to the increment to be recovered.
Point #2: Make sure the speed V0 of the radial cutter is measured at the right place.
Point #3: The number of radial cutters is dictated by the time it takes to feed a primary increment to the secondary or tertiary Vezin sampler. The rule of thumb is that you need to take at least seven cuts from a primary increment. The percent sampling ratio must not be too small, so that the rule for W0 is respected (W0 ≥ 3d + 1 cm).
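These measurement points and speed limits can be combined into a hedged design check; the sketch below uses this writer's invented names and encodes only the guidelines stated above:

import math

def check_vezin(d_cm: float, w_inner_cm: float, r_outer_cm: float,
                rpm: float, diameter_cm: float) -> list:
    """Check a Vezin sampler against the guidelines in the text.

    w_inner_cm: cutter width at the point of the stream closest to the
    center of rotation; r_outer_cm: radius of the farthest point where
    the cutter cuts the stream; rpm: rotation speed of the cutters.
    """
    problems = []
    if w_inner_cm < 3.0 * d_cm + 1.0:
        problems.append("cutter too narrow where it first meets the stream")
    v_outer = 2.0 * math.pi * r_outer_cm * rpm / 60.0  # cutter speed in cm/s
    v_limit = 40.0 if diameter_cm >= 50.0 else 30.0    # author's experience
    if v_outer > v_limit:
        problems.append("cutter speed %.0f cm/s exceeds %.0f cm/s" % (v_outer, v_limit))
    return problems

# Example: a 70 cm Vezin at 10 rpm, cutting the stream out to a 30 cm radius.
print(check_vezin(d_cm=0.1, w_inner_cm=2.0, r_outer_cm=30.0, rpm=10.0, diameter_cm=70.0))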


FIGURE 24.27 Top view of a Vezin sampler showing the locations to measure W0 (at the stream, nearest the center of rotation) and V0 (at the stream, farthest from the center of rotation).

FIGURE 24.28 Important details in a rotating Vezin sampler: (1) a minimum 5 cm cutter excess on each side of the stream; (2) the maximum cutter speed V0 of 40 cm/s measured at the stream; (3) four 2% radial cutters; (4) an automated water spray; (5) the inspection door.

Point #4: Automatic sprayers should be installed to prevent the accumulation of solids on the cutter edges.
Point #5: It is mandatory to have an easily accessible inspection door, easy to open and close. The inspection door should be large and cover almost half the surface of the Vezin's top. Under the door, a screen with about a 2 cm mesh should serve as a safety feature, so that operators inspecting or cleaning the cutters cannot put their hands in the wrong place. A switch should be installed to automatically turn off the Vezin sampler drive when the inspection door opens.

FIGURE 24.29 Overview of a well-designed Vezin cutter (recommended distance between the exit of the stream and the cutter edge: 3d + 1 cm).
FIGURE 24.30 A typical extraction error created by a fast-falling stream; a necessary inner insert prevents material from overflowing into the opposite cutter.

Figure 24.29 shows an overview of a well-designed rotating Vezin cutter; it consists of a single piece of steel, therefore there are no such things as questionable cutter blades. It is very easy to remove and replace for repair or maintenance. You may


notice the recommended distance between the exit of the stream and the cutter edges is 3d + 1 cm. A common problem that is easy to resolve with multi-cutter Vezin samplers is illustrated in Figure 24.30. A fast-falling stream may have enough energy for the collected increment to be partially lost by overflowing into the opposite cutter. This is common with Vezin samplers equipped with two or four cutters. The solution is to install an inner plate inside the vertical chamber of the Vezin sampler that diverts the stream to its proper destination. A problem like this can go undetected for a long time, which is one more reason to perform daily visual inspections.

24.4.18. The Special Case of In-Stream Stationary Cutters and Probes
All sampling systems consisting of in-stream probes, by-stream probes, and in-stream stationary cutters share a major flaw that is an unsolvable problem: they cannot collect a sample under isokinetic conditions, and therefore they introduce a major Increment Extraction Error IEE. Not only do they transgress the most elementary conditions of correct increment delimitation as explained in Chapter 21, but they are also unsuitable for minimizing extraction biases. Further discussion on this subject does not merit additional space in this textbook; it is simply fair to say that the use of such sampling systems is irresponsible, and there are plenty of them on the market. Any bias test that may show otherwise is probably a flawed and useless test performed by those who understand nothing about the transient nature of sampling biases due to the transient nature of segregation.


25 The Increment Extraction Error during Sampling at the Laboratory

Exactly as we did for the delimitation correctness of the splitting process, we can distinguish several cases:
• the true splitting devices
• the degenerated splitting devices
• the splitting methods
• the degenerated splitting methods.

25.1. True Splitting Devices – Riffle Splitters
When the chutes of a riffle splitter are correctly delimitated, the symmetry of the sampling system works in such a way that no extraction error is likely to take place. However, there are a few precautions to take to prevent increment extraction errors, and we shall review them.

25.1.1. Possibility of an Operating Bias
An extraction bias may easily be introduced by an incorrect use of a correct device. Figure 25.1 shows the correct use of the riffle splitter on the left side. After the material is evenly distributed on the surface of the scoop (very important), it is slowly fed exactly in the middle of the splitter, where all chutes, going left or right, have the same depth (very important). But if the operator is careless, or in a hurry, or not properly trained, he may discharge the scoop without spreading the material carefully, too close to the left or right side, and too fast, as illustrated on the right side of Figure 25.1. When the operator feeds the riffle splitter too fast, it is almost equivalent to feeding the riffle splitter preferentially on one side: the chutes that are not deep on that side immediately overflow into the chutes that are deep on that same side. Thus, one of the potential samples is systematically heavier than the other one. An experiment conducted in 1953⁵¹ showed that this overflowing problem is selective: the coarse particles overflow more readily than the fines. Such a bad practice is very common and can be disastrous during the sampling of a concentrate or other valuable product.


FIGURE 25.1 Correct and incorrect use of the riffle splitter.

25.1.2. Correct Riffle Chute Width
With respect to the largest fragments of the material to be split, an operator should make sure he is not using an instrument with riffle chutes that are too narrow. The prevention of any bridging inside a riffle by two large fragments is very important, which leads to the following guideline for the correct riffle width:
W ≥ W0 = 2d + 0.5 cm (25.1)

Some authors recommend W0 = 3d; because the riffle splitter is stationary, with no moving parts, it is possible to compromise slightly with the 3d rule without significantly altering the accuracy of the system (see the sketch after this list). But keep in mind that when riffles become too narrow, the following problems may develop:
• fine particles may easily clog the riffles
• the riffles should be easy to clean; therefore, they should be easily accessible for any cleaning equipment
• dry fine particles may stick to the walls of the chutes because of their electrostatic properties. The use of a small brush is required, which makes the miniaturization of the riffle splitter inadvisable.
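As a small companion to the earlier cutter-width sketch, relation 25.1 can be written the same way (the helper name is invented):

def minimum_riffle_chute_width_cm(d_cm: float) -> float:
    """Minimum riffle chute width from relation 25.1: W0 = 2d + 0.5 cm.

    Slightly more permissive than the 3d rule for moving cutters because
    the riffle splitter is stationary, with no moving parts.
    """
    return 2.0 * d_cm + 0.5

# Example: a 1 cm top size calls for chutes at least 2.5 cm wide.
print(minimum_riffle_chute_width_cm(1.0))  # -> 2.5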

25.1.3. Design of Riffle Splitters
Riffle splitters must be built with very smooth material such as good quality stainless steel. Sharp angles between parts must be avoided or carefully rounded by the manufacturer, especially at welded places. Slopes of chutes must be at least 45° for very dry materials. For materials still containing appreciable amounts of moisture, such as coal, the slope of the chutes should be at least 60°.


25.1.4. Equity of Riffle Splitters
The accuracy of the splitting process can be preserved in cases where several splitting stages are necessary to achieve the appropriate sampling ratio. For example, the sample of the first stage is selected at random, then the sample of the second stage is selected on the opposite side of the first one, the sample of the third stage is selected on the same side as the first one, and so on. Therefore, the equity of this technique is ensured by its accuracy, an accurate sampling process being always equitable.

25.1.5. Feeding the Riffle Splitter in the Wrong Direction
The material should never be fed to the riffle splitter sideways as shown in Figure 25.2. Usually, this is difficult to do with the original scoop, which is built much larger than the width of the splitter. However, it is not unusual to see an operator empty a bag directly into the riffle splitter. Being more or less aware that something is wrong, the operator tries to feed all riffles by traveling back and forth above the splitter. This practice completely kills the accuracy and equity of the splitting process and should be strictly prohibited.

25.1.6. The Seven Cardinal Rules to Follow with Riffle Splitters
The following rules are important and are not negotiable:
1. All the chutes have the same width.
2. There is the same number of chutes on both sides.
3. The feeding shovel has exactly the same width as the effective opening of all chutes combined.
4. Separations between chutes are very narrow blades, never square fingers.
5. The riffle splitter is fed slowly, in the middle.
6. The sample must be selected at random, or alternated from one side to the other.
7. Both potential samples must have equal weight and similar particle size distributions (rule 7 can be checked numerically, as sketched below).
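A hedged sketch of such a numerical check, with an illustrative 2% tolerance that is not from the text:

def check_split_balance(mass_a_g: float, mass_b_g: float, tol: float = 0.02) -> bool:
    """Rule 7: both potential samples should have equal weight.

    Returns True when the relative difference between the two sides is
    within tol (2% by default, an arbitrary illustrative tolerance).
    """
    total = mass_a_g + mass_b_g
    return abs(mass_a_g - mass_b_g) / total <= tol

print(check_split_balance(503.0, 497.0))  # -> True
print(check_split_balance(560.0, 440.0))  # -> False: suspect careless feeding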

FIGURE 25.2 Feeding the riffle splitter in the wrong direction: always incorrect.


25.2. True Splitting Devices – Sectorial Splitters
Sectorial splitters with either a revolving feeder or revolving radial buckets, as shown in Figure 25.3, are unlikely to introduce an Increment Extraction Error because of the perfect symmetry of the system. However, we should check that the edges of the adjacent sectors are not too thick, which would allow the falling material to accumulate on the edges and interfere with the proper functioning of the sampling device. Slopes on the sectors should be at least 60° with dry material and 70° with other materials containing small amounts of moisture. It is recommended to install caps with a proper edge and slope between each pair of adjacent buckets to prevent material losses between buckets; they should be easy to install and remove. When the machine stops, all sectors must be checked for material that did not find its way to the receiving containers. Finally, the rules concerning the cutter width and speed given in Chapter 24 must be respected.

25.3. Degenerated Splitting Devices – Reject Type Splitters
Degenerated sectorial splitters are often used, like the one illustrated in Figure 25.4, where only two radial buckets are used: one for the sample, and another one to create a duplicate sample for QA-QC purposes. If the conditions listed in Chapter 24 are carefully followed, such a reject-type rotary splitter should not introduce an increment extraction bias. However, these splitters are far more sensitive to rotation speed than the system shown in Figure 25.3. Accordingly, it is suggested that this type of rotary splitter rotate no faster than 25 cm/s at the chute where the material falls from the conveyor belt. It is easy to perform a validation test: let us assume the

FIGURE 25.3 Top view of a rotary splitter equipped with adjacent radial buckets and a conveyor belt feeder.


FIGURE 25.4 Top view of a rotary splitter equipped with only two non-adjacent radial buckets (Sample #1 and Sample #2; the rest goes to the rejects).

FIGURE 25.5 Recommended shapes of a bin exit, with walls diverging 4 to 6 degrees (A: feeding a conveyor belt; B: feeding a vibro-conveyor).


buckets are 5% sampling ratio buckets; if you feed 10,000 g to the system you should get 500 g samples. If they weigh less than 500 g, you must slow down the rotation speed. The reader may notice the shape of the feeding bin exit in Figures 25.3 and 25.4. Figure 25.5A illustrates the correct design of the exit of the bin when the feeder to the rotary divider is a small conveyor belt. Figure 25.5B illustrates the correct design when the feeder is a vibro-conveyor. Small details like these can be very important for the overall system to work satisfactorily.
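The validation test just described is a one-line proportionality check; a minimal sketch, again with an illustrative tolerance that is not from the text:

def validate_rotary_splitter(feed_g: float, sample_g: float,
                             ratio: float = 0.05, tol: float = 0.02) -> bool:
    """Check that a reject-type rotary splitter delivers its nominal ratio.

    Example from the text: 5% buckets fed with 10,000 g should yield
    500 g samples; a clearly lighter sample means the rotation is too fast.
    """
    expected = feed_g * ratio
    return abs(sample_g - expected) / expected <= tol

print(validate_rotary_splitter(10000.0, 500.0))  # -> True
print(validate_rotary_splitter(10000.0, 430.0))  # -> False: slow the rotation down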

25.4. True Splitting Methods
Examples of true splitting methods are coning and quartering, true fractional shoveling, alternate shoveling, and so on. These methods may generate delimitation and extraction errors because they involve shovels, scoops, or spatulas. However, they are unlikely to generate an important systematic difference between potential samples if the actual sample is selected at random, a condition that is not negotiable. Again, we should emphasize that to preserve the equity of a sample, the actual sample should always be selected at random following a true splitting process.

25.4.1. Coning and Quartering
Coning and quartering is a very old method that was used at mines early in the nineteenth century or even before. It is probably the oldest probabilistic sampling method. It can be applied to the splitting of small lots (e.g., a few grams) and of large lots (e.g., up to several tons). There are a number of versions of this method, but we will describe only the most typical one, which is illustrated in Figure 25.6. There are four steps:
1. The lot is spread on a smooth surface that can easily be cleaned. The material is mixed by making a conical heap, each shovelful being gently dropped onto the apex. This operation can be repeated several times, and its objective is to achieve a homogeneity of revolution around the vertical axis of the heap. This is the only kind of homogeneity this method may achieve. We should keep in mind that many homogenizing processes are often wishful thinking processes.
2. The top of the cone is flattened by dragging down the material evenly with a shovel all around the cone. This step is performed with respect to the symmetry achieved in the first step. The lot is thus transformed into a flat and circular cake.
3. The cake is divided into four identical quarters A, B, C, D. This can be achieved with the shovel or, better, with a clean and sharp wooden or metal cross. The conditions of delimitation and extraction correctness are:
• The cross must have sharp edges.
• The cross must be positioned at random, with its center right in the middle of the cake.
• The cross must cut the cake vertically, everywhere at the same time.
• All quarters must be equivalent; therefore, the branches of the cross must be at 90° angles. If this last condition is not fulfilled, the sampling operation loses the advantage of being equitable; for example, the two opposite small portions are known to be the sample in advance.


FIGURE 25.6 Coning and quartering (the cake is divided into quarters A, B, C, D; opposite quarters A and C form one potential sample).

4. The choice between the two potential samples A,C or B,D should be made at random by tossing a coin, especially during commercial sampling; this is the guarantee of sampling equity.
This method is not recommended, and its interest should be historical only. The alternate shoveling method uses the same tool, is quicker, requires less manpower, is cheaper, and is more reproducible since it contains a large number of random increments in each potential sample.

25.4.2. Alternate Shoveling
The alternate shoveling method is always more effective in reducing the Grouping and Segregation Error than coning and quartering because each potential sample is made of many more increments. The shovelfuls extracted from the lot are deposited on top of two distinct heaps as indicated in Figure 25.7. For example, shovelfuls with odd numbers go to heap A and those with even numbers go to heap B. The advantage of this method is that we are not limited in choosing an appropriate number of increments to minimize the variance of GSE.


The prehomogenization steps, often ineffective and even promoting segregation, are not necessary. The method relies on the reduction of the grouping factor. We know from Chapter 13 that the grouping factor is much easier to reduce than the segregation factor. The conditions of delimitation and extraction correctness are:
• All shovelfuls should be approximately the same size.
• Each heap or potential sample should be made of an equal number of shovelfuls.
• One heap should contain only the shovelfuls with odd numbers, and the other heap only the shovelfuls with even numbers.
• No material should remain in the original lot at all; the fact that the potential sample is selected at random after the job is completed will mitigate the risk of having introduced IDE, IEE, and even IWE during the procedure.
When the sample is selected at random by tossing a coin, the condition of sampling equity is preserved and the risk of systematic biases vanishes.

25.4.3. Fractional Shoveling
Fractional shoveling, together with the alternate shoveling method, is certainly the simplest and cheapest bulk sampling method. True fractional shoveling consists in extracting shovelfuls from the lot and depositing them on top of N distinct heaps. Again, each heap may contain 10, 30, or 50 increments depending on how effectively we want to reduce the variance of GSE. Figure 25.8 illustrates this method with N = 5. The conditions of delimitation and extraction correctness are:
• All shovelfuls should be approximately the same size.

FIGURE 25.7 True alternate shoveling method (odd increments to heap A, even increments to heap B; one heap is selected at random as the sample, the other becomes the reject).


FIGURE 25.8 True fractional shoveling method (heaps S1 to S5; one sample is selected at random).

• Each heap or potential sample should be made of an equal number of shovelfuls.
• All shovelfuls should be alternated from heap 1 to heap N, then again from heap 1 to heap N, and so on (e.g., shovelful 1 in heap 1, shovelful 2 in heap 2, ..., shovelful N in heap N, shovelful N+1 in heap 1, shovelful N+2 in heap 2, ..., shovelful 2N in heap N, and so on), as sketched after this list.
It is essential to select the sample at random to preserve the condition of sampling equity. This precaution not only discourages a clever operator from cheating, but also suppresses the risk of systematic biases from IDE, IEE, or even IWE.
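The cyclic heap assignment is simple modular arithmetic; a minimal sketch with an invented function name:

import random

def fractional_shoveling(n_shovelfuls, n_heaps):
    """Assign shovelfuls cyclically to N heaps, then pick one heap at random.

    Shovelful k (1-based) goes to heap (k - 1) % N, exactly the cycle
    described in the text; the random pick preserves sampling equity.
    """
    heaps = [[] for _ in range(n_heaps)]
    for k in range(1, n_shovelfuls + 1):
        heaps[(k - 1) % n_heaps].append(k)
    sample_index = random.randrange(n_heaps)
    return heaps[sample_index], sample_index

sample, i = fractional_shoveling(n_shovelfuls=50, n_heaps=5)
print("heap S%d selected, containing shovelfuls %s" % (i + 1, sample))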

25.4.4. Degenerated Splitting Methods
An example of degenerated splitting methods is the degenerated fractional shoveling method. This method allows the operator to voluntarily alter the composition of a sample whose selection is predetermined. This can be done by selecting shovelfuls richer in coarse fragments, or in fines, when they go into the sample. Such opportunities have been used in commercial sampling. Of course, the bias is to the advantage of the operator's party, and all forms of sampling equity are lost. Every Nth shovelful is deposited on the top of heap A, and the remaining shovelfuls of the cycle are deposited on heap B, known in advance to be the reject. Figure 25.9 illustrates this nonprobabilistic method. This method should be avoided at all costs.


FIGURE 25.9 Degenerated fractional shoveling method (heap A: authoritative sample; heap B: reject).

25.5. The Special Case of a Rotating Cutter Travelling under a Deflecting Cone
A rotating cutter travelling under the discharge of a deflecting cone was already discussed in Section 21.1.11 for delimitation correctness. This is a case where the entire stream is continually collected, as illustrated in Figure 25.10. The conditions for a correct extraction are the same as those described for rotating cross-stream samplers such as Vezin samplers. They have several advantages, and a few disadvantages:
• The concept of number of increments vanishes since the cutter is always in the falling stream. Therefore, fewer rotations are necessary to perform a good job.

FIGURE 25.10 Taking the entire stream all the time: a circular gap where the stream is intercepted by the radial cutter, with an illustration of the equivalent cut taken in a linear stream.


• If well designed with respect to IDE and IEE, they are perfectly proportional, which is a huge quality and would make IWE nonexistent.
• Materials must flow well; otherwise fines will accumulate on the cone, which would ruin everything.
• Because the cutter is always in the stream, abrasion can quickly become a problem; therefore, the cutter must be carefully inspected every day.


26 The Increment Preparation Errors and the Notion of Sample Integrity

All of the nonselective operations performed on a lot of material, or on the successive samples, are referred to as "preparation stages." These preparation stages are necessary to convey the increments and samples to a predetermined location and to modify them into an appropriate form for the ultimate analytical stage. A large number of articles and books written on sampling confuse selective operations (i.e., sampling in a proper sense) with nonselective operations (i.e., preparation stages). It should be emphasized that unless this differentiation is clearly made, and accepted, it is impossible to conduct a logical analysis of all the errors involved in the Overall Estimation Error OEE. At this stage the reader should again refer to Figures 4.1 and 4.2 in Chapter 4. These preparation stages can be classified into various categories such as:
• Transfer of the increment from the cutter to a conveying system, which may or may not be a continuous operation.
• Transfer from the conveying system to the next sampling or preparation stage.
• Comminution stages, whose function is to diminish the particle size d and to increase the number of particles (e.g., crushing, grinding, and pulverizing).
• Wet or dry screening, often in connection with a comminution.
• Drying of solids loaded with various amounts of moisture. This operation is often necessary to minimize clogging of sticky materials in the sampling circuit.
• Filtration of pulps to separate the solid phase from the liquid phase, and so on.
• Homogenizing processes.
We notice that none of these operations is selective in a proper sense; however, they can selectively alter the composition of the sample, and thus bias its critical content. Preparation errors are strictly inherent to those who design, build, operate, clean, and maintain sampling systems. They are the result of ignorance, carelessness, clumsiness, unintentional mistakes, and sometimes fraud and sabotage. Finally, preparation errors are accounted for as sampling errors for practical purposes, and their sum leads to the Increment Preparation Errors IPE. We are going to review the following categories:
• Contamination errors
• Loss errors
• Errors generated by a change in chemical composition
• Errors generated by a change in physical composition


• Errors generated by unintentional mistakes
• Errors resulting from fraud and sabotage.
Many of these errors are nonrandom in nature and cannot be easily quantified in probabilistic terms. Exactly the same as for IDE and IEE, the only wise strategy for IPE is not how to estimate them, but how to understand them and prevent them. In a preparation stage, the weight of the material does not change; if it does, such as after drying, it is not because of a selection process.

26.1. Errors Resulting from Contamination
It is almost impossible to extract, grind, filter, screen, or homogenize materials without introducing some contamination, either from the equipment that is used or from the surrounding environment. The best preventive action that can be taken is to make sure that crucial contaminants are excluded, or at least relevant only to the second order. Also, which contaminants can be tolerated, and which cannot, depends on the objectives assigned to the sample. For instance, if the sample is analyzed for trace amounts of chromium, nickel, or cobalt, any contact of the material with stainless steel may become an overwhelming issue. Therefore, it is not enough to simply list general conditions to minimize contamination; in addition, the operator responsible for preparing the sample must be constantly aware of the purpose of sampling to prevent the use of equipment incompatible with the sample. This, in turn, requires that the operator be a well-trained sampling specialist. Let us now analyze some of the main sources of contamination.

26.1.1. Contamination by Dust
Sampling of materials containing large amounts of fine and dry particles always generates difficult contamination problems, which are often impossible to solve satisfactorily. For example, the formation of a cloud of dust around a sampler may lead to the contamination of the sample by fines that belong to the rejects. The only solution is to minimize the problem to an acceptable level, and the following ideas are suggested:
• Reduce free falls of the material to be sampled to a minimum. For example, feed the sampler with a screw conveyor. Suppress long chutes from conveyor belts. For small lots vibratory feeders are preferred. Feed riffle splitters slowly.
• Enclose all sources of dust inside hoods with slight negative air pressure. Too much ventilation is always detrimental to the sampling operation.
• Protect cutters in idle positions with caps to prevent the collection of dust at all times.

26.1.2. Contamination by Material Present in the Sampling Circuit
In sampling systems used in routine practice, contamination is generated by material remaining in the circuit from a former operation performed on a different sample. Usually, there are two kinds of procedures:


26.1.2.1. The Same Material Is Submitted to the Sampling Circuit at Regular Intervals
In that case, between each operation the content of the constituent of interest does not change by several orders of magnitude. Solutions to minimize contamination problems are:
• Make sure that between each operation the content of the constituent of interest does not change too much.
• Quick cleaning between operations is necessary (2 or 3 minutes). A thorough cleaning that would take 25 minutes may not be a must, and is most certainly impractical when you have several hundred samples to process in one day.
• In geochemical exploration, or for blasthole subsampling in grade control at the mine, it is advisable to keep the exact collecting sequence of the samples through all the subsequent preparation and sampling stages. Cross contamination between neighboring samples may not be as bad as cross contamination between samples collected at very different places, as clearly illustrated in Figure 26.1. Accordingly, it is highly recommended for geologists to number the samples in a logical way and to be cautious about barcode practices, which are an invitation to randomize the samples since operators no longer have access to logical sequences.
• Keep in mind there is a close correlation between the increase of contamination and the decrease of the cost of sample preparation.
• Always selecting the cheapest laboratory is not wise and leads to unreliable industry standards.
• If using a commercial laboratory, make sure in the contract that you can show up unannounced at any time you wish to verify how the sample preparation performed on your samples is handled. This should be a nonnegotiable condition.

FIGURE 26.1 Diamond core drilling: cross contamination between samples close to one another may not be as bad as between samples far apart.


26.1.2.2. Different Materials Are Submitted to the Sampling Circuit
This case can lead to severe contamination problems; recommendations are:
• It is a must to process feed, concentrates, and tailings with different equipment installed in different rooms.
• If, for some reason, different equipment and rooms are not available, the circuit must be thoroughly cleaned between each operation, which is time consuming and costly.
• The circuit can be fed with material similar in composition to the next real material to be sampled and prepared.
• All materials collected in the course of preventive cleaning must be carefully rejected.
• There is less risk in pulverizing tailings, then feed, then concentrates, than the other way around.
• In all cases, wet cleaning and dry cleaning are necessary between each operation.
Besides these obvious possibilities of cross-contamination due to imperfect cleaning of the sampling equipment, there are other sources of cross-contamination that are difficult to detect. For example, plastic jars that have been washed with materials containing phosphate (e.g., some detergents) should not be used to collect samples whose critical constituents include low-level phosphate. There are many similar examples, as reviewed by Ingamells and Pitard in 1986.²²

26.1.3. Contamination by Abrasion
Crushing, grinding, pulverizing, and to a lesser extent all handling operations performed on abrasive materials (e.g., silica sand, material containing quartz, chromite, hematite, etc.) are likely to introduce into the sample small amounts of material abraded from the equipment. Sometimes this can become a critical problem, as with the quartz sand used to produce valuable white glass; such sand can be penalized for amounts of iron much smaller than those potentially introduced by contamination of the sample. The solution, often expensive, is to use nonferrous materials to build crushers, pulverizers, or mortars. Screening can be a potential source of abraded material contaminating the sample. Small amounts of copper, zinc, or iron can affect the determination of these elements when they are part of low-level critical constituents. The use of galvanized plates to dry zinc ore had catastrophic consequences on the outcome of a drilling campaign evaluating the feasibility of a zinc deposit. Again, sometimes the source of contamination by abrasion is not obvious. For example, in geochemistry the determination of chlorine in rocks is critical, but the measured chlorine is often derived from handling the material with bare hands. Large amounts of chlorine are present in perspiration and on the surface of the skin.²²

26.1.4. Contamination by Corrosion
Corrosion of the sampling and preparation equipment is likely to take place with aggressive materials such as:


• Wet materials developing acid reactions (e.g., ore containing sulfides).
• Pulps containing acid reagents in flotation plants, or slurries containing strong acids in hydrometallurgical plants.
• Processing plants involving seawater or other alkaline solutions (e.g., the magnesium and potash industries).
• Rust due to air loaded with high humidity.
• Sampling stations installed near the seashore.
• Sampling stations installed in plants using bacteria, and so on.
Each case is particular and requires preventive actions. But, in most cases, the solution is to build sampling equipment with stainless steel of excellent quality, especially those parts which are likely to be in contact with the material to be sampled.

26.2. Errors Resulting from Losses
Errors resulting from losses take place as soon as particles which belong to the correct actual increment are voluntarily or accidentally withdrawn after the selection process is completed.

26.2.1. Loss of Fines as Dust
Loss of fines occurs as soon as dry materials are transported, transferred, or sent into a free fall toward the cutter of the sampler. Another source of losses is the crushing, grinding, or pulverizing equipment. These losses are amplified when these pieces of equipment are enclosed inside dedusting systems working under strong vacuum. These systems are attractive from an environmental standpoint; however, they can be extremely detrimental to the integrity of the sample, or even of the lot before it is sampled. Dedusting systems should use minimum vacuum and small airflow, and be carefully controlled by the operator. Reverse circulation drilling during exploration or grade control usually sends the material to a cyclone before the material is sampled. It is extremely common to witness huge amounts of fines escaping the chimney located above such a cyclone, to the point where operators have to wear a mask to protect themselves. Under such conditions it is fair to say that the drilling campaign is highly misleading, useless, and most certainly invalid. There are on the market today very efficient cyclones that show insignificant loss of fines, and they are highly recommended. The use of a filter to collect fines is not recommended, as these filters are nearly impossible to clean properly between consecutive samples. Important air drafts around cross-stream cutters or splitters can lead to a selective removal of the finest fractions of the material to be sampled and result in a preparation bias. Hammer mills rotating at high speed and generating strong turbulence are not recommended; the high reduction ratio of these mills renders them attractive, however they are likely to generate significant losses of fines. To pulverize small samples, below 3000 g, completely enclosed systems are recommended, such as vibratory ball mills, planetary ball mills, ring and puck mills, and so on. To pulverize samples larger than 3000 g, continuous ring and puck mills are recommended.


26.2.2. Loss of Material Left in the Sampling and Preparation Circuit
After each sampling operation, the sampling and preparation equipment must be cleaned, and all the material left in the circuit must be recovered and sent back either to the rejects or to the sample, depending on which of the two it belongs to. This task is almost impossible to perform with sample preparation robots, as compressed air is used to eliminate all fine material regardless of where it belongs.

26.2.3. Loss of Specific Fractions of the Sample
Several examples will illustrate the critical importance of this section.

26.2.3.1. Loss Generated by Careless Screening
Analytical methods may call for the sample to be 100% minus 63, 106, or 150 µm depending on the case. The analyst then grinds the sample, either by hand or mechanically, and screens it several times until all the material passes through a given screen. But it is not unusual to see an impatient or careless analyst throw away the third or fourth oversize because it represents only a very small fraction of the sample. Unfortunately, it is also not unusual to find that small refractory oversized materials are made of a hard constituent of interest. This was observed with chromite, wolframite, magnetite, and so on. Another example is topaz in material intended to be analyzed for fluorine. The reverse is also true. This problem can be observed with soft precious metals. Small oversized fractions are likely to be made of very thin flakes that have formed upon grinding or pulverizing. Samples containing native precious metals should never be crushed by compression, which excludes disk pulverizers and mortars. The safest solution consists in concentrating these metals by means of shaking tables, superpanners, or small gravity spinning concentrators, then melting the whole of the concentrate to determine the gold content. Then the gold contained in the tailings can be determined by cyanide leach on a sample as large as practically possible to estimate the gold content that is easy to recover. Finally, the leach residues can be analyzed for gold by standard fire assay to estimate the finely disseminated background gold content, which may never be recovered. This procedure is long and tedious; however, any shortcut can lead to devastating economic conclusions.
Conclusion: Never forget that screening, especially fine screening, may achieve a perfect segregation of some critical constituents, such as precious metals, molybdenite, galena, chromite, and many other sulfides.

26.2.3.2. Loss Generated by Smearing of the Critical Component
Native precious metals and soft minerals such as molybdenite and galena easily smear on crushing, grinding, and pulverizing equipment. Actually, grinding of liberated gold is an excellent way to plate the equipment with 24-carat gold, generating an unfortunate preparation bias. In this respect, for gold, finer is not better for the sample; out of good will many people think that fine grinding, such as 95% minus 106 µm, gives better reproducibility. What is the point of having good reproducibility if in the process you lose a large portion of the gold by smearing? As such, grinding to 95% minus 150 µm may be a better option, even if the reproducibility is not as good.


26.2.3.3. Loss Generated by Electrostatic Properties of the Critical Component
Some minerals like scheelite (a tungsten mineral) and biotite occur as liberated tiny flakes after the material has been submitted to several comminution stages. These flakes easily become electrically charged, especially in plastic containers and when the sample is very dry. This often generates unsolvable segregation difficulties as soon as the analyst extracts the analytical subsample. The use of very clean glass containers and large analytical subsamples is the only way to minimize the problem.

26.3. Errors Resulting from the Alteration of the Chemical Composition
So far, we have reviewed the Increment Preparation Errors introduced either by the addition or by the subtraction of fragments. In this section we are interested in the addition or subtraction of atoms or molecules, whose result is equivalent to a modification of the chemical composition of some constituents making up the sample. Let us recall the definition of the critical content, whose estimation is the final objective of the sequence "sampling, preparation, subsampling, and assaying":
Critical content = weight of the critical component / weight of active components (26.1)
The estimation of the critical content is altered each time the numerator, or the denominator, or both are changed. In this section we do not explore the notion of sampling correctness in terms of fragments, but instead in terms of constituents such as atoms, parts of molecules, or molecules. We change only the scale of our observation. Exactly as with fragments, defects affecting the selection probability of some constituents directly alter the sampling accuracy in a broad sense.

26.3.1. Errors Resulting from an Addition or Fixation
A few examples can illustrate this type of error.

26.3.1.1. Oxidation of Sulfides
Raw materials used for the preparation of many metals often contain large amounts of sulfides (e.g., pyrite, chalcopyrite, molybdenite, sphalerite, galena, and many others). These sulfides can easily fix the oxygen of the air when wet, finely pulverized, and stored in large bulk. The sulfides are transformed into sulfates or oxides with a release of heat, which accelerates the process, and a loss of sulfur as sulfur dioxide. The denominator and the numerator of relation 26.1 increase by the same amount, which obviously changes the critical content.
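Relation 26.1 is easy to exercise numerically. In the hedged sketch below the masses are invented for illustration; it shows that fixing the same mass of oxygen onto both the numerator and the denominator still shifts the computed critical content:

def critical_content(critical_g, active_g):
    """Critical content = weight of the critical component / weight of the
    active components (relation 26.1)."""
    return critical_g / active_g

# Hypothetical sample: 20 g of a sulfide critical component within 100 g of
# active components. Oxidation fixes 3 g of atmospheric oxygen onto the
# critical component, adding the same mass to numerator and denominator.
before = critical_content(20.0, 100.0)              # 0.200
after = critical_content(20.0 + 3.0, 100.0 + 3.0)   # about 0.223
print("critical content drifts from %.3f to %.3f" % (before, after))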

26.3. Errors Resulting from the Alteration of the Chemical Composition So far, we have reviewed the Increment Preparation Errors introduced either by the addition or subtraction of fragments. In this section we are interested in the addition or subtraction of atoms or molecules whose result is equivalent to a modification of the chemical composition of some constituents making up the sample. Let us recall the definition of the critical content whose estimation is the final objective of the sequence “sampling, preparation, subsampling, and assaying”: Critical content ¼ weight of thecritical component=weight of active components ð26:1Þ The estimation of the critical content is altered each time the numerator, or the denominator, or both are changed. In this section we do not explore the notion of sampling correctness in terms of fragments, but instead in terms of constituents such as atoms, parts of molecules, or molecules. We change only the scale of our observation. Exactly the same as with fragments, defects affecting the selective probability of some constituents directly alter the sampling accuracy in a broad sense. 26.3.1. Errors Resulting from an Addition or Fixation A few examples can illustrate this type of error. 26.3.1.1. Oxidation of Sulfides Raw materials used for the preparation of many metals often contain large amounts of sulfides (e.g., pyrite, chalcopyrite, molybdenite, sphalerite, galena, and many others). These sulfides can easily fix the oxygen of the air when wet, finely pulverized, and stored in large bulk. The sulfides are transformed into sulfates or oxides with release of heat which accelerates the process, and loss of sulfur as sulfur dioxide. The denominator and the numerator of relation 26.1 increase by the same amount, which obviously changes the critical content. 26.3.1.2. Fixation of Water or Carbon Dioxide The surrounding atmosphere is always loaded with H2O and CO2 molecules in variable amounts. These molecules are easily fixed by many oxides which transform them into hydroxides and carbonates. Samples should be protected from the air as soon as their preparation is completed. Some tropical soils such as laterites, after drying, reabsorb water


from the air at very high speed. For example, a tiny subsample on the balance pan will show a constantly increasing weight, making the determination of the critical content difficult for the analyst. In many cases, the use of desiccators and desiccants is necessary. Some materials are very stable and not sensitive to excessive drying temperature. Nevertheless, we should be careful because many constituents may release moisture, then adsorbed water, then occluded water, then water of crystallization, then water of constitution. Therefore, the drying operation is always a critical operation that should be conducted with care and competence. In many cases a drying baseline can be fixed in the interval between 105 and 110°C, but there are many exceptions. Sand baths, hot plates, microwaves, gas ovens, and infrared lamps can be used only with extreme precautions. Well-regulated electric ovens should be used whenever possible. These problems have been extensively studied by Ingamells and Pitard.²² Sometimes the fixation of carbon is not immediately obvious. For example, the meaning of a total carbon determination on a rock sample can be useless if lichens have been growing on the original specimen.

26.3.2. Errors Resulting from Subtraction or Elimination
A few examples can illustrate this type of error.

26.3.2.1. Elimination of Crystallization and Constitution Waters
A large number of materials may contain water that is part of the molecule. This water is defined as water of constitution (e.g., Ca(OH)2, Mg(OH)2, NaOH, KOH, and so on). They may also contain water of crystallization, where water is attached to a molecule rather than part of it, such as in stable solid hydrates (e.g., CaC2O4·2H2O, BaCl2·2H2O, Fe2O3·3H2O, and so on). Gypsum, for example (i.e., CaSO4·2H2O), loses 75% of its water between 110 and 130°C, and all of it at 143°C. A bad definition of the weighing baseline of the sample would greatly affect the accuracy of the determination of a critical constituent. Especially beware of operators boosting the drying temperature to minimize the time necessary to dry samples, which can be a severe mistake leading to reconciliation problems between the geological model, the mine production, and later the plant production, with all their economic consequences.

26.3.2.2. Elimination of Carbon Dioxide
Carbonates are relatively stable minerals; however, high-temperature drying conditions may sometimes release some carbon dioxide. Again, a well-defined weighing baseline is necessary, either before the CO2 starts to be released or after it has been completely released. Drying temperatures reached by infrared lamps or sand baths are often high enough to start releasing some carbon dioxide.

26.4. Errors Resulting from the Alteration of the Physical Composition

In the mineral industry the physical composition of a material often refers to its moisture content and/or its particle size distribution. Therefore, this section is dedicated
to the errors affecting the measurement of these parameters. These errors directly affect the sampling accuracy in a broad sense.

26.4.1. Addition or Creation of a Critical Component

26.4.1.1. The Critical Component Is Moisture

The reader may not believe this but, more than once, we have seen samples awaiting analysis for their moisture content exposed to rain in open bags without any protection. In hot countries, especially near the sea where the humidity of the air is very high, it is essential to protect samples from any addition of moisture. This addition may occur by direct contact with rain or fog, or by slow absorption and adsorption, on the surface and inside porous fragments, from the circulation of humid air between the fragments. If the sample is not protected, the quantity of captured water becomes a function of time and of the moisture content of the environment. Some materials such as clay or laterite may capture this humidity very fast. An accurate moisture determination must be performed without any change in moisture content between the sampling point and the weighing step. This is a rather obvious statement, but it is often forgotten because of carelessness or convenience.

26.4.1.2. The Critical Component Is a Given Granulometric Class

Some contracts penalize various granulometric classes when they reach a certain limit beyond which they may affect the optimum functioning of a process. Sometimes the fine particles are penalized; it then becomes essential not to break larger fragments during the sampling operation, because doing so artificially produces fines, which positively biases the determination. These fines are often generated by excessive screening. Particle size distribution analyses of commodities such as coal, coke, iron ore, manganese ore, bauxite, alumina, and many others can be rendered inequitable if performed after too many operations at the buyer's facility. Such an analysis may be greatly affected by successive loading, unloading, transfer, reclaiming, and sampling operations. This problem is difficult to solve because what counts for the seller is what he is shipping, and for the buyer what goes into his process, which can be two very different things.

26.4.2. Subtraction or Destruction of a Critical Component

26.4.2.1. The Critical Component Is Moisture

The reverse of what we said in Section 26.4.1.1 is true. Samples collected for moisture determination should not be left open, with no protection, in sunny or very dry areas, or near heat sources such as furnaces, chimneys, ovens, and so on. Protect them with airtight containers.

26.4.2.2. The Critical Component Is a Given Granulometric Class

It is not unusual for the critical component to be the oversize of a given mesh. In this case, it is necessary to prevent the breakage of the oversize, which is equivalent to the destruction of the critical component.
26.4.2.3. The Particular Case of Native Sulfur

There are still some mines that extract native sulfur, and some natural gas treatment plants produce and sell large amounts of sulfur. Sulfur sublimates at 80°C; therefore, any sample containing native sulfur should be dried slowly at room temperature.

26.4.2.4. Biological Sampling

Environmental sampling is often performed to determine critical organic or biological compounds. If the equipment used to perform the sampling operation and preserve these critical compounds is not appropriate, losses may ruin the product of a long and expensive sampling campaign.

26.5. Old-Technology Equipment to Avoid

In summary, various pieces of laboratory equipment are notorious for altering the integrity of the lot and samples. Figure 26.2 illustrates some old-technology sampling and preparation equipment likely to introduce severe problems, especially when dedusting systems are installed to protect operators:
• Jaw crushers are very dusty.
• Roll crushers are also very dusty, difficult to clean between samples, and often used two or three times to achieve a certain grinding performance.
• Riffle splitters are often used two or three times to homogenize samples before the next splitting stage. The only thing achieved in that case is to lose fines two or three times.
• If the riffle splitter is used four times to achieve a 1/16 split, the operator loses fines four more times.
• Disc pulverizers are extremely dusty and notorious for smearing soft minerals such as gold, galena, molybdenite, native copper, and so on.
It is, perhaps, a small problem to lose fines once or twice; however, if you lose fines a dozen times during the sampling and preparation stages, the integrity of the sample is obviously gone long before the job is completed; the compounding effect is sketched below. So, it would be a good idea to avoid the use of some of this equipment, as much better options are available on the market today.
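To see how quickly repeated small losses compound, consider the following minimal sketch in Python; the 1% loss of fines per operation is a hypothetical figure chosen only for illustration:

# Sketch: compounding loss of fines over repeated preparation steps.
# The 1% loss per operation is a hypothetical figure for illustration only.

def remaining_fines(loss_per_pass: float, n_passes: int) -> float:
    """Fraction of the original fines still in the sample after
    n_passes operations, each losing loss_per_pass of the fines."""
    return (1.0 - loss_per_pass) ** n_passes

for n in (1, 4, 12):
    kept = remaining_fines(0.01, n)
    print(f"{n:2d} operations: {100 * (1 - kept):.1f}% of the fines lost")

Even with a modest per-operation loss, a dozen handling steps removes a noticeable share of the fines, which is exactly the point made above.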

26.6. New-Technology Equipment to Favor

Today, new-technology equipment can greatly reduce the loss of fines, as illustrated in Figure 26.3; the components are as follows:
• High-efficiency jaw crushers that can directly reduce 5-cm material to less than 0.3 cm.
• Continuous ring mills that are completely enclosed units. You may need one or two.
• Rotary dividers that are not as dusty as riffle splitters.

FIGURE 26.2 Losses generated by using old-technology equipment at the laboratory.

FIGURE 26.3 Losses generated by using new-technology equipment at the laboratory.

26.7. Errors Resulting from Unintentional Mistakes

For psychological, political, or economic reasons, sampling is the most neglected area within the estimation circuit. This is true in most countries, to the point that it is not even taught at universities and colleges. As a result, the sampling operator is unaware
of the most elementary notions of sampling correctness. As he blindly follows a vague recipe based on empirical observations, he necessarily introduces unintentional mistakes. These mistakes can also be the result of carelessness and awkwardness. The most common mistakes are:
• Dropping of samples followed by incomplete recovery
• Mixing of fractions belonging to different samples
• Mixing of labels
• Poor maintenance of sampling and preparation equipment
• Poor cleaning of sampling and preparation equipment
• Contamination
• Losses
• Alteration of chemical and physical properties of the material to be sampled, and so on.
At the risk of being accused of elitism, we state that sampling is a subtle technique that requires highly skilled experts. It should be taught at different levels, especially those of engineer and technician. Until these badly needed specialized qualifications exist, the quality control analyst is obviously the most suitable person to perform and supervise all sampling operations.

26.8. Errors Resulting from Fraud and Sabotage

The fact that so few people are familiar with the many subtle aspects of sampling and sample preparation perpetuates the survival of fraud in commercial sampling. For the past 30 years, agencies in several countries have been commissioned to establish whether industries comply with the law, which unequivocally establishes permissible concentration limits for a large number of toxic constituents in effluent wastes and waters. Incorrect sampling may then, voluntarily or involuntarily, become a way to assert authoritatively that an entire industry is polluting the environment; but this should never become an excuse to use incorrect sampling techniques as an ecological weapon against industries that are not polluting. Some money-making private environmental laboratories receive hundreds or even thousands of samples each day; these instantaneously become specimens at the balance room, generating a huge form of sabotage. Nobody has the time, or the desire, to look seriously at subsampling and preparation problems if the contract is to be awarded to the cheapest laboratory. Today, it is difficult to measure the consequences of these commercial practices. Those in charge of collecting samples in effluent waters or wastes should bear in mind that a solution to reduce the risk of sabotage exists and is not unreasonably expensive, certainly not as expensive as false results followed by inappropriate litigation. The solution consists of eliminating all human handling, exactly as in commercial sampling, when controlling effluent wastes and waters. We know perfectly well how to design, build, and manage entirely automated sampling stations that can be locked from an external control room where several parties may voice their opinion. Another advantage is a reduction of the manpower necessary for these sampling operations. Exactly as for commercial sampling, environmental sampling needs to remain equitable.

26.9. Conclusions

The prevention and minimization of sample preparation errors can be achieved only if sampling is the responsibility of engineers and technicians with analytical backgrounds. Ideally, these people should receive reliable training in sampling; for example, to properly digest the content of a textbook like this one, someone needs a full semester
of work in college or university. Good short courses, lasting four or five days, are better than nothing, but not many are available. In all cases, and this is a strong statement, sampling and sample preparation should never be the responsibility of production people. They should be the responsibility of quality control, whose function is to control the quality of how jobs are done. Production operators should not be allowed to control their own quality performance; unfortunately, it is done that way more often than not.

Part VIII

The Increment Weighting Error and the Weighing Error


27 The Increment Weighting Error

27.1. Introduction

The low-key position of the Increment Weighting Error IWE in this third edition is justified by the fact that, for the moments of this error to be accessible in a logical way, it is of the utmost importance to make sure the sample selection process is absolutely correct; in other words, the sum IDE + IEE + IPE must be negligible. With this important restriction in mind, the Increment Weighting Error is defined as the error whose moments are the difference between the moments of the Heterogeneity Fluctuation Error HFE and the moments of the Quality Fluctuation Error QFE. The reader may refer back to Sections 7.6 and 7.7 for a better perspective on the importance of this chapter. In those sections we compared the variograms of am, Mm, and hm. Our conclusion was that the variogram of am was very similar to the variogram of hm, clearly showing that the influence of Mm on hm was negligible. This is not necessarily always the case unless some precautions are taken; the objective of this chapter is to review these precautions.

27.2. The Moments of IWE

27.2.1. The Mean of IWE

By definition we can write:

m(IWE) = m(HFE) − m(QFE)   (27.1)

Gy demonstrated in his 1979 textbook64 that the weighting bias is always negligible when the sample selection is correct. Only under these stringent conditions we may write:

m(IWE) ≈ 0   (27.2)

27.2.2. The Variance of IWE

In the same way, by definition we can write:

s²(IWE) = s²(HFE) − s²(QFE)   (27.3)

This property is well illustrated in the figures of Sections 7.6 and 7.7.

27.3. Practical Experience on the Behavior of IWE

27.3.1. Sampling Systems Installed under the Discharge of a Flowing Stream

The variance of IWE depends on the fluctuations of the stream flowrate from which the increments making up the sample are collected. Some manufacturers of sampling equipment have implemented a systematic selection mode of the increments based on a constant tonnage interval rather than a constant time interval. It is an expensive and high-maintenance solution, the effectiveness of which is highly debatable. Let's analyze the problem in more depth. The mass M of an increment can be written as follows:

M = (W × R) / V   (27.4)

where the mass M is expressed in grams if the cutter opening W of the cross-stream sampler is expressed in centimeters, the stream flowrate R is expressed in grams per second, and the velocity V of the cross-stream sampler is expressed in centimeters per second. If a systematic selection mode of the increments based on a constant tonnage is selected, IWE cancels if, and only if, the increment mass M is also constant, which has huge implications for how such a system is designed, installed, monitored, and maintained. In practice, sampling systems based on a constant tonnage interval can be classified into two categories.

27.3.1.1. Sampling Systems Using a Constant Cross-Stream Cutter Speed

These sampling systems are naïve and wrong. They present exactly the same problem as incorrect sampling systems based on constant time intervals, except that the problem is transferred somewhere else. Be aware that these systems do not reduce the variance of IWE; they can even be suspected of increasing it and, even worse, they can increase the possibility of introducing a bias.

27.3.1.2. Sampling Systems Using a Proportional Cutter Speed

In theory these systems are correct, but in practice they present enormous difficulties that are unlikely to be resolved in a satisfactory way:
• Measuring the instantaneous flowrate at the sampling point where the cutter crosses the stream is nearly impossible, because it assumes the weightometer is installed near the sampling system and synchronized with it.
• Setting the cutter speed V proportional to the flowrate R is extremely difficult to achieve.
• Building such a reliable system at an acceptable initial cost is nearly impossible because of its complexity.
• The operating and maintenance cost of such a system is nearly unacceptable.
Conclusions: If a financial effort is to be made to reduce IWE, it is cheaper and far more effective to regulate the flowrate for the sampling duration of one entire lot represented by one sample. If the flowrate is going to change drastically, it is recommended to collect a new sample with settings appropriate to the new average flowrate. Therefore, it is cheaper and more effective to rely on a constant time interval between increments if the following guidelines are respected:
• The variance of IWE is acceptable when the fluctuations of the flowrate are maintained within ±10% relative at the 68% level of confidence.
• The variance of IWE is negligible when the fluctuations of the flowrate are maintained within ±5% relative at the 68% level of confidence.
Overall, it is important to know that no regulation of the flowrate at all is likely to result in an unacceptable variance of IWE, as the simulation sketched below suggests.
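The guideline above can be made concrete with a hedged Monte Carlo sketch in Python; the stream model, the partial correlation between grade and flowrate, and every numerical value are assumptions for illustration only, not measured data:

import random

# Hedged Monte Carlo sketch. The true lot grade is the mass-weighted mean
# over 288 increments; the unweighted mean of increment grades is what one
# implicitly uses when increment masses are assumed constant. The RMS of
# their relative difference is a rough stand-in for a weighting error,
# and it grows with the relative flowrate fluctuation.

def weighting_error_rms(rel_flow_sd, trials=2000):
    errors = []
    for _ in range(trials):
        flows = [max(0.05, random.gauss(1.0, rel_flow_sd)) for _ in range(288)]
        # grade partially correlated with flowrate (a modeling assumption)
        grades = [1.0 + 0.05 * (f - 1.0) + random.gauss(0.0, 0.02) for f in flows]
        true_grade = sum(f * a for f, a in zip(flows, grades)) / sum(flows)
        unweighted = sum(grades) / len(grades)
        errors.append((unweighted - true_grade) / true_grade)
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

for sd in (0.05, 0.10, 0.30):
    print(f"flowrate fluctuation ±{sd:.0%}: weighting-error RMS ≈ {weighting_error_rms(sd):.5f}")

Under these assumptions, the weighting error is small when the flowrate stays within ±5% or ±10% and grows markedly when the flowrate is left unregulated.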

27.3.2. Sampling a Mineral Deposit

The accurate determination of ore reserves in a mineral deposit requires that the length of each drilling intercept be proportional to the thickness of the ore zone at the point of drilling, so that a weighted average can be calculated, as illustrated in Figure 27.1. In the same way, if a composite sample were prepared to represent the ore zone, each increment, or number of increments, would have to be proportional to the thickness of the ore zone at the points of drilling.

27.3.3. Subsampling at the Laboratory

In a similar way, at the laboratory, if a composite sample must be collected from a slab of irregular thickness, as illustrated in Figure 27.2, each increment must

FIGURE 27.1 Illustration of proportional sampling during exploration.

FIGURE 27.2 Illustration of proportional sampling at the laboratory.

represent the entire thickness of the slab at the selected sampling point, so that proportional sampling can be achieved, which ultimately makes IWE negligible. Obviously, when performing such a sampling operation, it is advisable to make sure the slab is of regular thickness, which would greatly facilitate the operation. A good recommended reading on the Increment Weighting Error is G. J. Lyman's paper presented at the 2017 World Conference on Sampling and Blending.202
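As a minimal illustration of this thickness weighting (the intercept thicknesses and grades below are hypothetical values):

# Sketch of the thickness-weighted average described above; the intercept
# data are hypothetical. Each drill hole contributes its ore-zone grade
# weighted by the ore-zone thickness at that hole.

intercepts = [
    # (ore-zone thickness in meters, grade in % metal) -- assumed values
    (12.0, 1.10),
    (8.5, 0.95),
    (15.2, 1.30),
    (6.0, 0.80),
]

total_thickness = sum(t for t, _ in intercepts)
weighted_grade = sum(t * g for t, g in intercepts) / total_thickness
unweighted_grade = sum(g for _, g in intercepts) / len(intercepts)

print(f"thickness-weighted grade: {weighted_grade:.3f} %")
print(f"unweighted grade:         {unweighted_grade:.3f} %")  # ignores the weighting

The gap between the two averages is the kind of discrepancy a nonnegligible IWE produces.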

27.4. Introduction to Proportional Sampling

The idea of proportional sampling is not new: collect a sample under such conditions that its weight is proportional to the tonnage being sampled. This also provides a representative sample that can be used for quality control purposes. It is a natural extension of our quest to make the Increment Weighting Error IWE negligible. For proportional sampling to be effective, a sampling device, or a series of sampling and subsampling devices, whose sampling ratio can be determined with great accuracy, is required. Generally, weightometers used for solids travelling on a conveyor belt and flowmeters used for liquids with or without suspended solids are not as accurate as their manufacturers claim, as mentioned by Pierre Gy (1981).67 Experiments performed on two weightometers installed in series on the same stream, calibrated every morning, gave systematically different values for the tonnage at the end of the day. This remark alone shows the need for a technique such as proportional sampling.

27.4.1. Definition and Purpose

The sample S, representative of a lot L, must be accurate and precise. The sample weight MS must be proportional to the weight ML of the lot. With P the proportionality factor, the quantity:

M′L = MS / P   (27.5)

is an unbiased estimator of the weight ML. These are the conditions that should be complied with in practice, by unconditionally following the rules of sampling correctness listed in Part VII. If it is assumed that these conditions are met, then a single operation provides all the qualitative and quantitative
information required to calculate the data needed to establish a mass balance of each constituent of interest. Those familiar with the mineral processing industry know how imprecise and inaccurate a material balance in a flotation plant can be. Discrepancies often exist between several estimates derived from a given characteristic. For instance, it is common practice to assay the feed and tail, to assay the various products, to estimate the weight of each of these products, and to reconstruct the feed by calculation. More often than not, there is a systematic difference between the reconstructed feed and the analyzed feed, which is commonly called a reconciliation problem. To solve these problems of metallurgical accounting across the plant, proportional sampling is suggested as a tool to achieve better accountability.

27.4.2. Time- and Mass-Sampling Ratios

The time-sampling ratio PT is the proportion of the flowing time TL during which the increments forming the sample S are extracted from the stream. TS is the time during which the stream is diverted. Hence, the time-sampling ratio is expressed as:

PT = TS / TL   (27.6)

The mass-sampling ratio PM is the proportion of the mass ML which is recovered in the sample S. For a recovered sample of mass MS, the mass-sampling ratio is:

PM = MS / ML   (27.7)

Now, if it is assumed that the sampling system is correctly designed, operated, and maintained, and if the flow rate remains constant, then the time-sampling and mass-sampling ratios are equal, that is:

PT = PM   (27.8)

or

ML = MS / PM = MS / PT   (27.9)

The sample mass MS can be measured by weighing the dry sample S, and the time-sampling ratio PT can be calculated from the sampler characteristics, as shown later.

27.4.3. Estimation of the Lot Mass and Discussion of Assumptions

Two assumptions are made to calculate the lot mass:
1. Most samplers on the market are incorrect for use in proportional sampling. Therefore, samplers used to achieve proportional sampling must be designed, built, and operated with the purpose of realizing an accurately known and constant time-sampling ratio.
2. The rate of flow never remains exactly constant but fluctuates; for this reason, the estimate M′L, which is equal to MS/PT, is no longer a constant but a random variable with its own variance and its mean ML.
For a series of Q increments of mass Mq each, the mass of the sample is:

MS = Σq Mq,  q = 1, …, Q   (27.10)

Because these fluctuations are random, it is seen that, as the number of increments increases, the estimate approaches its true unknown average. When sampling is correct, M′L becomes an unbiased estimator of the true unknown value ML. This means that in the long run the random errors practically disappear and a fair estimate of the lot weight becomes possible. For example, an increment taken every 5 minutes makes 96 increments for an 8-hour shift, or 288 increments per day. This produces a very accurate and precise estimate of the lot weight, as the sketch below illustrates.
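A minimal sketch of equations (27.9) and (27.10), assuming a correct sampler; the lot mass, the time-sampling ratio, and the ±3% random noise on individual increment masses are illustrative assumptions:

import random

# Minimal sketch of equations (27.9) and (27.10), assuming a correct
# sampler and independent random noise on each increment mass.

P_T = 0.02                    # time-sampling ratio (assumed)
true_lot_mass = 1_000_000.0   # grams, assumed for the demo

for q in (12, 96, 288):
    ideal = true_lot_mass * P_T / q          # nominal increment mass
    increments = [ideal * random.gauss(1.0, 0.03) for _ in range(q)]
    m_s = sum(increments)                    # equation (27.10)
    m_l_estimate = m_s / P_T                 # equation (27.9)
    rel_err = (m_l_estimate - true_lot_mass) / true_lot_mass
    print(f"Q = {q:3d}: M'L = {m_l_estimate:11.0f} g, relative error {rel_err:+.4%}")

Under these assumptions the relative error of the lot-mass estimate shrinks roughly as the square root of the number of increments, which is why 288 increments per day give such a precise figure.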

27.5. Practical Implementation of Proportional Sampling

27.5.1. Important Factors Affecting the Choice of a Proportional Sampler

A proportional sampler should respect the rules of extraction and delimitation correctness according to the following criteria.

27.5.1.1. Cutter Geometry

For a straight trajectory, edges should be parallel; for a circular trajectory, edges should be radial. Cutter openings should be periodically checked for deformation and wear, and should be cleaned.

27.5.1.2. Cutter Velocity

Use only an electric drive. Motor torque should be generously specified; cutter speed should be less than 20 cm/s.

27.5.1.3. Sampler Layout

At the level of the cutter opening, the stream should be made to pass through a well-defined area; the stream should fall well inside the area generated by the cutter; dust should be minimized around the cutter. These are very restrictive conditions, and the only reliable way to fulfill them lies in the proper choice of the sampler and the design of the cutters. Old devices involving a timer are not precise enough to allow their use in proportional sampling. According to our experience, only cross-stream samplers, which include straight-path and circular-path samplers, provide delimitation and extraction correctness. The following two conditions are essential:

1. Samplers should run continuously. This offers the additional advantage of collecting a very large number of increments from the stream, thus reducing all random sampling errors.
2. Their speed should be either constant (which is seldom achieved) or, if it is not (for instance because of fluctuations in the electric current), the speed variations should be uncorrelated with the cutter position relative to the stream. The sampler should, therefore, be strong and well maintained.
A description of two recommended devices follows. The straight-path cross-stream sampler is shown in Figure 27.3, where:
1. is the driving chain of length C
2. is the driving finger fixed to the chain
3. is the electrically driven pinion
4. is the loose pinion
5. is the vertical frame driven by the finger and transmitting the movement to either the moving cutter or the moving feeder
6. is the loophole receiving the driving finger.
If C is the chain length and W the cutter width (assumed to be constant along the cutter opening), then the time-sampling ratio achieved by the proportional sampler is:

PT = 2W / C   (27.11)

The factor 2 accounts for the fact that the cutter crosses the stream twice during each chain cycle. The circular-path cross-stream sampler, typical of a Vezin sampler, is shown in Figure 27.4. It is the second option for a proportional sampler and is easier to set up, especially at the scale of a pilot plant.

FIGURE 27.3 Schematic representation of the driving system of a straight-path cross-stream proportional sampler.

FIGURE 27.4 Schematic representation of a circular-path cross-stream proportional sampler. (1) Part of the circumference C corresponding to the cutter width W. (2) Total circumference C.

At a pilot plant, it is often convenient to perform a second stage of sampling on the primary sample of weight MS1, using the same sampler, to obtain a secondary sample of weight MS2:

ML = MS1 / PT1  and  M′S1 = MS2 / PT2   (27.12)

Because MS1 and MS2 may be accurately estimated, it becomes possible to check whether the proportional sampler is accurate. In this case:

MS1 = M′S1   (27.13)
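A minimal numerical sketch of this self-check, combining equations (27.11) through (27.13); the cutter width, chain length, and sample masses are assumed values:

# Hedged sketch of the two-stage self-check: if the same sampler (with
# time-sampling ratio P_T = 2W/C for the straight-path device) takes a
# secondary sample from the primary one, the directly weighed primary
# mass M_S1 should match M'_S1 = M_S2 / P_T2. All values are assumed.

W = 2.0      # cutter width, cm (assumed)
C = 200.0    # chain length, cm (assumed)
P_T = 2 * W / C                 # equation (27.11): 0.02

M_S1 = 5000.0                   # weighed primary sample, g (assumed)
M_S2 = 98.5                     # weighed secondary sample, g (assumed)

M_S1_check = M_S2 / P_T         # second part of equation (27.12)
rel_diff = (M_S1_check - M_S1) / M_S1
print(f"P_T = {P_T:.3f}, M'S1 = {M_S1_check:.0f} g, relative difference {rel_diff:+.2%}")
# A persistent relative difference suggests the sampler is not proportional.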

27.5.2. Required Characteristics of a Proportional Sampler

27.5.2.1. The Cutter Velocity

As far as the sampling ratio is concerned, the cutter speed is irrelevant. However, to avoid any perturbation such as turbulence of the pulp, the cutter should move as slowly as possible. The prevention of any extraction error is a must. As a good safety margin, a speed above 20 cm/s is not recommended.

27.5.2.2. The Cutter Opening

Cutter openings for pulps or solids are normally chosen at three times the diameter of the largest particles plus 1 cm. For proportional sampling, twice this opening is recommended (i.e., six times the diameter of the largest particles plus 2 cm). This means a minimum of 2 cm in the case of pulps with very fine suspended solids, or of very fine dry material.

27.5.2.3. The Minimum Limit of the Sampling Ratio PT

A 2% time-sampling ratio should be regarded as the minimum that can be obtained by a single sampling stage. Smaller ratios would require either longer chains (impractical) or narrower cutters (dangerous, because material such as wood fibers may accumulate on the cutter edges). If a ratio smaller than 2% is to be obtained, it requires a two-stage sampling operation, with an intermediary surge tank acting as a flow-rate regulator.

27.5.2.4. Sample Processing

Any further division of the secondary samples would reduce the precision of the estimated mass, which is contrary to our objective. Suppose two sampling stages with a sampling ratio equal to 4% each are provided; the resulting sampling ratio is then 0.16%. The sample, if it is a pulp, is filtered, dried, and weighed. An estimate M′L of the weight ML of the lot L is made from:

ML ≈ M′L = MS / 0.0016   (27.14)

The sample so obtained is also used for the analysis of the content of the constituents of interest, offering unsurpassed accuracy and precision.
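As a short numerical illustration (the sample mass is an assumed figure): with PT1 = PT2 = 0.04, the overall sampling ratio is 0.04 × 0.04 = 0.0016, and a filtered and dried sample weighing MS = 3.2 kg gives ML ≈ M′L = 3.2/0.0016 = 2,000 kg.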

27.5.3. Flow Regulation of the One-Dimensional Lot

It is always advisable, when sampling a turbulent pulp flow, to allow the stream to stabilize and, if possible, to achieve a laminar regime in a wide, slightly inclined chute feeding the proportional sampler. It is advisable to check the variability of the mass of each increment by conducting a variographic experiment on the flowing stream.

27.5.4. Reliability of Proportional Sampling

In most plants today, the sampling system is treated like any other mechanical installation. The maintenance team generally does not know what conditions have to be fulfilled for the sampling devices to work correctly. In the case of proportional sampling, it is essential to maintain the sampling equipment correctly, in a preventive way. When dealing with a pulp, it is necessary to check periodically that the cutter opening is not obstructed. If material accumulates on the cutter edges, a special brush could be devised to clean the cutter without stopping it, because it is essential that the operation remain continuous. If the cutter moves at only 10 to 20 cm/s, it is easy to brush it even while it moves. The reliability of the system is definitely associated with the care with which it is designed, installed, and maintained. A qualified maintenance mechanic should check the sampling equipment; a general-purpose mechanic may not be aware of the many problems which can affect the sampling system. This qualified mechanic should depend on the head of quality control for his training and supervision; he should never depend on the head of production.

27.6. Conclusion

If proportional sampling were generally applied in pilot plants, it would provide the metallurgist with a powerful tool for accurately measuring plant performance. Only cross-stream samplers are adequate for the purpose of proportional sampling; either straight-path or circular-path samplers can be used, provided that they run continuously and their speed variations are uncorrelated with the cutter position relative to the stream. If such a sampler has been correctly designed, built, and operated, and if the flow rate remains uniform throughout the sampling period, then it can be shown that the time-sampling ratio PT and the mass-sampling ratio PM are equal. The sampling ratio, which is the ratio of two lengths that can be measured with great accuracy, can thus be calculated. However, bear in mind that most existing samplers are inadequate for use in proportional sampling and may lead to great disappointment. For more information on proportional sampling, and especially for the calibration of a proportional sampler, the reader should refer to the more complete text of Pierre Gy.78

28 The Weighing Error

The precise and accurate determination of the material balance at the plant depends highly on correct sampling and appropriate sampling protocols. Using this information, it becomes possible to reconcile the information given by the miners with the information provided by the metallurgists. To perform such a task it is essential to assess the tonnage that has been processed with great precision and accuracy. Therefore, the determination of the tonnage of material feeding the plant is essential and is an integral part of the overall estimation process. Unfortunately, weightometers are notorious sources of reconciliation problems; as a result, it is essential for the sampling expert to be aware that the accuracy of the tonnage fed to the plant depends on many factors. The accuracy of many weightometers is doubtful because of the location where they are installed and the way they are cleaned, maintained, and calibrated. Furthermore, field experience proves to be far less optimistic than manufacturers' guarantees. A successful program consists of three elements:
• The initial installation of the weightometer needs to be correct.
• A logical test program needs to be implemented.
• Deviations in the information given by two different weightometers installed in series, not too far from one another, need to be constantly monitored.

28.1. Comparing Two Weightometers with a Stockpile between Them

Figure 28.1 illustrates the official tonnage determined at the mine with a weightometer (W1) installed on a conveyor belt after the Run-of-Mine material has been processed by the primary crusher. Usually, the official tonnage for the plant is determined with a weightometer (W2) installed on a conveyor belt between the stockpile and the SAG mill. There is no doubt that weightometers W1 and W2 should register the same tonnage over a very long period, such as one year, when the contribution of the tonnage in the stockpile becomes second order. If the flowrate read at W1 is not too erratic, it is possible to monitor a relative difference plot between W1 and W2 by looking at a moving average equivalent to the average tonnage present in the stockpile; a hedged sketch of such a monitor follows Figure 28.1. Long-term deviations between W1 and W2, lasting for weeks or even months, clearly cannot be explained by the tonnage present in the stockpile.

FIGURE 28.1 Illustration of two tonnage determinations often in disagreement.
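The monitoring idea described above can be sketched as follows; the daily tonnages, the 14-day window, and the simulated 1.5% low reading of W2 are assumptions for illustration:

# Hedged sketch: track the relative difference between the mine
# weightometer (W1) and the plant weightometer (W2) as a moving average
# whose window roughly matches the tonnage sitting in the stockpile.
# The daily tonnages below are hypothetical.

def moving_relative_difference(w1, w2, window):
    """Relative difference between summed W1 and W2 tonnages over a
    sliding window of `window` days."""
    out = []
    for i in range(window, len(w1) + 1):
        t1 = sum(w1[i - window:i])
        t2 = sum(w2[i - window:i])
        out.append((t1 - t2) / t1)
    return out

w1 = [10_000 + 50 * (i % 7) for i in range(60)]   # assumed daily tonnes at W1
w2 = [t * 0.985 for t in w1]                      # W2 assumed to read 1.5% low
for day, d in enumerate(moving_relative_difference(w1, w2, window=14)):
    if day % 14 == 0:
        print(f"day {day + 14:2d}: 14-day relative difference {d:+.2%}")

A difference that persists across many windows, as in this simulated case, points to a calibration problem rather than to material held in the stockpile.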

28.2. Comparing Two Weightometers with No Stockpile between Them

It is highly recommended to compare two weightometers measuring the same tonnage, either installed in series on the same conveyor belt or on two different conveyor belts with no stockpile between them, as illustrated in Figure 28.2. This ideal setup may look like an unnecessary and costly duplication, but the extra cost is nothing compared with operating a system where you have no idea whether, or for how long, the weightometer has been out of calibration. The capability to constantly monitor the differences between W1 and W2 is a tremendous asset to anyone concerned about elusive and costly reconciliation problems.

28.3. Correct Installation of a Weightometer

The weightometer and the conveyor do not exist independently. The engineering firm must build them together as a well-thought-out, integrated system. Retrofitting a weightometer to a conveyor belt after it has been built is not likely to provide satisfactory results. The location selected to install the weightometer is critical, as it must be placed where the tension of the belt is least. The scale location should be more than 7 meters but less than 17 meters from the discharge point to the belt; a good distance is about 12 meters, as shown in Figure 28.2. Skirting and training idlers should not be located closer than 7 meters from the scale location. The weightometer must be installed where the conveyor is horizontal; it does not belong in an area where the conveyor is inclined, which is a very common mistake. A hedged sketch of a checklist encoding these rules follows.
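# Sketch of a checklist encoding the installation rules stated above and
# in the next paragraph; the function name and example values are
# assumptions for illustration, not an established tool.

def check_weightometer_installation(scale_to_discharge_m: float,
                                    skirting_to_scale_m: float,
                                    conveyor_inclined: bool,
                                    belt_length_m: float) -> list:
    issues = []
    if not 7.0 < scale_to_discharge_m < 17.0:
        issues.append("scale should be 7-17 m from the discharge point (~12 m is good)")
    if skirting_to_scale_m < 7.0:
        issues.append("skirting/training idlers closer than 7 m to the scale")
    if conveyor_inclined:
        issues.append("weightometer must sit on a horizontal conveyor section")
    if belt_length_m > 70.0:
        issues.append("belt longer than 70 m: empty vs. loaded tension will differ")
    return issues

for issue in check_weightometer_installation(4.0, 5.0, True, 120.0):
    print("PROBLEM:", issue)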

FIGURE 28.2 Comparing two weightometers, and illustration of a correct weightometer installation.

Horizontal conveyors are preferred over inclined conveyors because the belt tension is much lower, more predictable, and less susceptible to excessive vibrations, making calibration last much longer. The conveyor belt should be relatively short, no more than 70 meters long. Conveyor belts longer than 100 meters cannot be reliable, because the tension of the belt during calibration, which is done when the belt is empty, cannot match the tension of the belt when it is fully loaded with material. In such a case, the problem is not the weightometer, but rather the unpredictable tension of the belt. This clearly means the weightometer can indeed be biased regardless of calibration. The weightometer must be plumb and level, which is an unsolvable problem if it is installed on a slope in a suspended and inclined bridge. For weightometers to work well, runout of the idler rolls must be minimized. The angle of the troughing idlers must be very accurate. Deflection of the conveyor stringers must be minimized. The ambient temperature must be kept as constant as possible; therefore, weightometers do not belong outdoors, where temperature changes between night and day are likely to be substantial. Calibration chains should be selected to permit two calibrations, one at 50% of scale capacity and another at about 85%. The layout of the weighing system should be such that a predetermined quantity of the material travelling on the belt can be weighed separately, in a separate weigh hopper, a truck, or a railroad car if required. Hendrik Colijn,203 in his textbook Weighing and Proportioning of Bulk Solids, demonstrates that the force P exerted on the scale of a weightometer depends on many factors, such as:
• the number of conveyor-mounted idlers
• the normal conveyor belt loading
• the spacing between idlers
• the tension on the conveyor belt at the scale location
• the modulus of elasticity of the belt carcass material
• the moment of inertia of the carcass cross-section
• the misalignment between scale idlers and adjacent idlers
• the angle of the conveyor incline.
In other words, the measurement of the force P exerted on the scale of a weightometer is extremely vulnerable to many minor changes. In theory, on a normal day, only the belt loading should change, and it is the quantity being measured. But if the tension on the conveyor belt at the scale location is no longer predictable, the system becomes useless. It is important to keep in mind that a weightometer is subjected to tremendous vibration and to frequent temperature and wind changes. There are three methods of calibration:
• using a run of material of known weight
• using chain tests
• using static weight tests.
A step common to all three methods is the zero balancing. Electronic zero balancing may be useful to check the electrical circuits, but it is completely useless for checking the mechanical aspects of the weighing system, which are actually the most vulnerable parts. Basically, the electronic zero is not a valid zero.

FIGURE 28.3 Illustration of typical calibration systems: chain test and static weight test.


Calibration begins with a check of scale alignment, cleanliness, and general condition of the installation. The conveyor belt should be operated for about 20 minutes to ensure that the conveyor itself approximates operating conditions. The zero balancing should fulfill the following requirements:
• Include an integral number of belt revolutions, which ensures that zeroing takes the variations of the belt thickness into consideration.
• Include at least three integral revolutions or 10 minutes, whichever comes first, which reduces the problem of observation accuracy.
• If possible, begin and end the zero balancing with the conveyor running.
In general, the chain calibration is better than the static weight calibration. Refer to Hendrik Colijn203 for a logical test program to calibrate a weightometer using a reference calibrated chain. Figure 28.3 shows typical calibration systems.


Part IX

Review of Some Notorious Sampling Problems


29 Sampling for the Determination of the Moisture Content

29.1. Introduction

The logical analysis of the techniques and methods used for the determination of the average moisture content of large lots of particulate materials shows unambiguously that sampling and analysis are two inseparable aspects of the same estimation problem. Moreover, it is the failure to appreciate this evidence that is responsible for the many difficulties encountered during the determination of the moisture content and, to a large extent, for the bad reputation of this problem.

29.2. Definition of Moisture

Two very different cases should be distinguished: moisture in materials of vegetable origin and moisture in materials of mineral origin, the former being far more complex than the latter.

29.2.1. Moisture in Materials of Vegetable Origin

The materials of vegetable origin that are mostly concerned are all cereals and their products. It would be impossible to review this problem completely within this book, and we refer the reader to an excellent publication by Multon (1982).204 In this text we will mention only one of his important conclusions: The theoretical and practical distinction between the different forms of water contained in biological matter is extremely subtle. To define water, we shall refer to its many forms, and a clear distinction between each form is not always obvious. Under those conditions it is particularly difficult to speak of moisture and to define it.

29.2.2. Moisture in Materials of Mineral Origin

Without being simple, the problem raised by the moisture content of materials of mineral origin is far less complex. Unless stated otherwise, the content of this chapter is limited to the problem of moisture in materials of mineral origin. The moisture content of a material of mineral origin is made of three main forms of water:
• The absorbed water (e.g., as we often observe with sponges). It refers to a penetration of the water.
• The adsorbed water, which refers to the adhesion of a thin layer of water on exposed surfaces. This adhesion is promoted by powerful surface-tension forces. Capillary water, hygroscopic water, and water trapped in the crystal lattice of minerals are good examples.
• The occluded water trapped inside cavities. This water may diffuse gradually upon drying, or leave after decrepitation when the boiling temperature of water is reached.
Some materials such as clay or laterite may retain up to 40% moisture and still not appear wet. However, some of this water can be eliminated by simple extrusion. Moisture waters should not be confused with essential waters. There are two main kinds of essential water:
• The water of constitution, which refers to water that is an intrinsic part of a molecule. Here the water is not present as such, but appears only when the solid undergoes decomposition as a result of excessive heat; for example, Ca(OH)2 gives CaO and H2O.
• The water of crystallization, such as in stable solid hydrates like Ni(NO3)2·6H2O and many others. In some minerals, such as silicates, the crystallization water can be present in proportions that are not well defined and may vary with the surrounding physical conditions.
Experimentally, we can define the moisture of a material as the water that can be eliminated by drying under specific conditions of temperature, air pressure, air circulation, and time. Its determination consists of:
1. weighing before drying
2. drying under specific conditions
3. weighing after drying.

29.3. Possible Definitions of the Moisture Content

29.3.1. Moisture Content of a Material as Received

Let's define:
MLH: the weight of a lot of material as received
MH: the weight of water eliminated after drying under specific conditions
MLD: the weight of the lot after drying
H: the moisture content as received, defined as:

H = MH / MLH   (29.1)

29.3.2. Moisture Content of a Material after Drying

In the same way we can define H′, the moisture content after drying:

H′ = MH / MLD   (29.2)

Both definitions are equivalent; it is only necessary to specify which one is being used. However, for practical reasons, the most logical definition is the first one. Then we can write:

MLD = (1 − H) MLH   (29.3)

But we shall remain cautious, because some industries use the second definition, the moisture content after drying; the sketch below contrasts the two.
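A minimal sketch contrasting the two definitions, equations (29.1) and (29.2), with assumed weights:

# Sketch of the two moisture definitions (29.1)-(29.3); the weights are
# assumed example values.

M_LH = 1000.0          # weight as received, kg (assumed)
M_H = 80.0             # weight of water eliminated by drying, kg (assumed)
M_LD = M_LH - M_H      # weight after drying, kg

H = M_H / M_LH         # equation (29.1): moisture as received
H_prime = M_H / M_LD   # equation (29.2): moisture relative to dry weight

print(f"H  (as received):  {H:.4f}")          # 0.0800
print(f"H' (after drying): {H_prime:.4f}")    # 0.0870 -- not the same number
print(f"M_LD from (29.3):  {(1 - H) * M_LH:.1f} kg")

The two figures differ by almost a tenth in relative terms, which is exactly why the definition in use must always be specified.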

29.4. Determination of the Moisture Content for Itself

The determination of the moisture content for itself is necessary each time it represents a parameter of interest for the good functioning of a process. Often, in this case, we are more interested in the fluctuations of the moisture content than in its accurate absolute value. This is the best domain of application of indirect measurement methods of the moisture content. These methods do not require sampling and are based on the hypothesis that there is a reciprocal relationship between the moisture content and a certain physical property that is easy to measure. In short, the problem becomes only a matter of good calibration, accepting a few random fluctuations of no concern. At least three methods have been used:
1. The slowing-down effect of hydrogen atoms on fast neutrons
2. The decrease of dielectric properties
3. The downward movement of water in materials.
All these methods are characterized by a major pitfall: the reciprocal relationship between the moisture content and the measured physical property is uncertain. At best there is a poor correlation. When the material has a particle size distribution that varies within a large domain, this correlation is not reliable, and the determination becomes questionable. Methods using fast neutrons are particularly doubtful because they measure the hydrogen content, which may have nothing in common with the moisture content. Indeed, the hydrogen may be present in three different forms:
1. As moisture, which is the only form of interest
2. As essential water, which is irrelevant
3. As hydrogen contained in organic molecules, which eliminates this method for coal, cereals, food, chemicals, and so on.
Conclusion: The correlation between the moisture content and the hydrogen content of a material is by far too unreliable to be of any interest for the determination of the moisture content.

29.5. Determination of the Moisture Content for the Calculation of the Tonnage of a Critical Component

In a large majority of cases, and particularly during commercial transactions, so important on a worldwide scale, the final objective of sampling for the determination of a moisture content is to estimate the true tonnage of a given constituent of interest. Thus, the amount MLA of this critical constituent A is the object of a price settlement. If the moisture determination is not accurate, one of the two parties, vendor or buyer, may suffer a severe financial loss, sometimes measured in millions of dollars. Sometimes the moisture content is determined for safety reasons; for example, if the moisture content of a maritime shipment is excessive, it may not comply with maritime laws; the captain of that ship does not want to sink it on a rough sea.

29.5.1. The Chemical Analysis Cannot Be Performed on Wet Materials

A wet tonnage MLH was determined, and our objective is to ascertain the tonnage MLA of the constituent of interest A contained in the lot L. The simplest way of doing this would be to directly determine the content aLH of the component A in the wet material as received. Let's define aLH as the critical content of the constituent A in the material as weighed. Then it would be easy to calculate:

MLA = MLH × aLH   (29.4)

The problem to solve is the same as in Parts 1 through 4, but several difficulties are encountered, all connected to the moisture itself:
1. Wet materials are sticky, generating clogging problems with crushers, splitters, pulverizers, and so on.
2. For the analysis, materials must be pulverized to a fine particle size, reducing the Fundamental Sampling Error and facilitating the chemical decomposition.
3. The moisture content can vary considerably between the sample extraction and the time the sample is weighed in the balance room. For the accuracy of the estimation, it is essential that the material submitted for analysis have exactly the same moisture content as during the weighing step.
Conclusion: The direct analysis of wet materials is generally impossible. For mechanical reasons it is necessary:
1. To dry materials before crushing, splitting, and pulverizing
2. To thoroughly record the amount of eliminated water.

29.5.2. The Drying Step Cannot Be Performed on the Entire Lot

The entire lot cannot be dried because it is not cost effective. Therefore, it is necessary to dry only a representative part of the lot, following the procedure summarized in Figure 29.1. We can write the following equality:

MLA = MLD × aLD = MLH (1 − H) aLD   (29.5)

FIGURE 29.1 General flow sheet used for the estimation of the tonnage MLA of the constituent of interest A in the lot L.

where aLD is the critical content of A in the material after drying. For this equality to remain coherent, it is not realistic to dissociate any of the following steps of the procedure, or any of the three measurements leading to the estimation of MLA:
• Determination of the wet tonnage MLH
• Selection and extraction of the sample used for the determination of the moisture content
• Determination of the moisture content H
• Selection and extraction of the sample used for the determination of the critical content
• Determination of the critical content aLD.

29.5.3. Conditions to Ensure Accuracy

For equation (29.5) to be valid, and the estimation of MLA accurate and thus equitable, it is necessary and sufficient that the three parameters MLH, H, and aLD be compatible among themselves. This condition is subdivided into two parts:
1. H must be compatible with MLH. To fulfill this condition, the sampling operation and the determination of the moisture content must be performed on a material strictly identical to the material weighed for the determination of the wet tonnage. This condition ensures the accuracy of the estimation of MLD.
2. aLD must be compatible with MLD. The determination of the content of the critical constituent A, and especially the weighing of the analytical subsample, must be performed on a material strictly identical to the dry material characterized by the weight MLD. When using MLD, this condition ensures the accuracy of the estimation of MLA.

29.5.4. Accuracy of the Estimation of the Dry Tonnage MLD

Let's assume that the estimation of the wet tonnage MLH is accurate. For the estimation of the dry tonnage to be accurate, it is necessary and sufficient that the sample used to determine the moisture content H be in a state strictly identical to that of the weighed material. Therefore, we must satisfy the two following conditions:
1. The sample used for the determination of the moisture content must be selected and extracted immediately before or after weighing.
2. Neither the weighing nor the sampling is performed when the material is in a rapidly changing physical state.
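A minimal numerical sketch of equations (29.3) and (29.5); the wet tonnage, moisture content, and critical content are assumed example values:

# Hedged sketch of equation (29.5); tonnages and contents are assumed.
# The accuracy conditions above require that H be measured on material
# identical to the weighed lot and that a_LD refer to the same dry state
# as M_LD.

M_LH = 50_000.0        # wet tonnage as weighed, t (assumed)
H = 0.065              # moisture content as received (assumed)
a_LD = 0.012           # critical content on the dry material, 1.2% (assumed)

M_LD = (1 - H) * M_LH          # dry tonnage, equation (29.3)
M_LA = M_LD * a_LD             # tonnage of constituent A, equation (29.5)
print(f"dry tonnage M_LD = {M_LD:,.0f} t")
print(f"constituent tonnage M_LA = {M_LA:,.1f} t")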

29.5.5. Accuracy of the Estimation of the Tonnage of the Constituent of Interest A

Let's assume that the estimation of MLD is accurate and the analysis is exact. For the estimation of the tonnage of the constituent of interest A in the lot L to be accurate, it is necessary and sufficient that the dry state reached for the analytical determination of the moisture content be strictly identical to the dry state reached for the analytical determination of the critical content. In some cases, the dry state exists in an unambiguous way, especially when the material is very stable under heat. This is true when it does not contain any crystallization water and does not oxidize at temperatures between 100 and 110°C.

However, the dry state is far more difficult to define and to reach in many experiments. Many silicates, and other minerals, show curves representing the loss of water as a function of time and temperature with no transition zone of constant weight around 105°C. The elimination of the water of constitution overlaps with the elimination of the moisture. The dry state is then a fictitious state, impossible to reach in a reproducible way, especially if the weight of the sample, the temperature, and the ventilation conditions are slightly different. There are two solutions:
1. Refer to a given drying state common to both drying stages #1 and #2 in Figure 29.1.
2. Refer to a stable state that can actually be reached. This stability should not be a function of weight, location, or ventilation (e.g., ignition stage).

29.5.6. Carrying the Condition of Accuracy into Effect

In general, when the above conditions cannot be fulfilled, the accuracy of the estimation of MLA necessitates a good coordination between the five operations summarized in Figure 29.1. But let's see what we often observe in most industries:
1. Weighing of the wet material is often performed by the production people:
   a. either in a continuous way, with belt scales, flow density meters, or nuclear instrumentation,
   b. or in a discontinuous way.

For more information and a critique of these methods, refer to Chapter 27 and its discussion of proportional sampling.
2. The sampling operation for the determination of the moisture is often performed by the production people and only sometimes by the quality control people. It is an operation that presents difficulties that are often underestimated.
3. The determination of the moisture H of the sample is generally performed by laboratory personnel. Sometimes it is done under inappropriate conditions, with hot plates, sand baths, infrared lamps, and so on.
4. The sampling operation for the determination of the critical content aLD has been the object of a complete study in Parts 1 through 4. We know that it is not often performed under ideal conditions.
5. The determination of the critical content aLD is often performed with sophisticated laboratory instrumentation by highly qualified people. The only thing wrong at this stage is that the problems encountered at points 1, 2, 3, and 4 are frequently ignored. Because of this inconsistency, the sophisticated efforts provided at the laboratory become irrelevant and misleading.
Conclusion: Between these five indissociable operations there is often a lack of coordination. For each of these operations to be conducted correctly, accurately, and with equity, it is essential to integrate them under a common control and responsibility.

29.5.7. Classical Method: Two Primary Samples and Two Drying Stages

This method is the most popular and is often recommended by standards committees. It is the method summarized in Figure 29.1. There are several versions of this method but, in all cases, the drying stages #1 and #2 are performed under different conditions, and often at different places.

29.5.7.1. First Version

There are two primary samplers, either in series or in different circuits. Often, the sample selection is performed on materials with different particle size distributions, which can be considered a fatal flaw. One sample is used for the moisture determination, the other for the determination of the critical content. This version is summarized in Figure 29.2.

FIGURE 29.2 Classical method #1: two primary samplers and two separate drying stages. This method is common but not recommended.

29.5.7.2. Second Version

There is only one primary sampler. Only one sample is selected and extracted; then two samples are split from it, one for the determination of the moisture and one for the determination of the critical content. An example of this version is illustrated in Figure 29.3.

29.5.7.3. Third Version

There is only one primary sampler with alternate selection and extraction. When travelling in one direction the sampler collects a sample for the determination of the moisture, and when travelling in the other direction it collects a sample for the determination of the critical content.

FIGURE 29.3 Classical method #2: one primary sampler and two separate drying stages. This common method is not recommended.

29.5.7.4. Fourth Version

There is only one primary sampler, with two adjacent cutters of different widths if necessary. One cutter collects the sample for the moisture analysis; the other cutter collects the sample for the determination of the critical content.
Conclusion: None of these four versions is structurally correct, and all should be rejected. The double drying stage always introduces a risk of bias. The bias is an increasing function of:
• differences in particle size between the two samples
• differences in drying techniques
• differences in drying temperature
• differences in drying time.
This bias directly affects the accuracy of the determination of the tonnage MLA of the constituent of interest in the lot L.

29.5.8. The Simultaneous Drying Method

This method includes two versions: one for very fine material, the other for coarse material. It was developed in the United States by the uranium industry and can easily be applied to many other materials. This method is characterized by the following steps:
1. One single sample is selected and extracted.
2. The subsamples used for the determination of the moisture or the critical content have about the same weight and are dried under exactly the same conditions in the same oven. The sample for the moisture determination is spread into two trays A and C. The sample for the determination of the critical content is spread into two trays B and D. Then, the four trays are placed in alphabetic order above each other inside a regulated and ventilated oven for the same amount of time.
3. The loss of moisture is measured with samples A and C.
4. The sample B+D is immediately split into a large number of replicate subsamples that are sealed under vacuum.
5. The chemical analysis must be performed on samples as received, without further drying.

Important remark: Drying the sample again at the laboratory prior to analysis is a mistake. The objective of the simultaneous drying method is not to reach a dry state. Its only objective is to reach a given state that is strictly identical for both samples: the sample for the determination of the moisture and the sample for the determination of the critical content. This method very effectively solves the difficulty encountered in the methods described in Section 29.5.7. This advantage is destroyed when a complementary drying stage is performed upon receiving the sample. This is the reason why the procedure calls for the following recommendations:



1. The samples should be enclosed inside airtight containers.
2. The container should be opened only at the instant of weighing the analytical subsample.
3. The samples should never be reused at a later time.
Figures 29.4 and 29.5 illustrate the two versions of this method.

29.5.9. Method of the Single Sample

This method is the one we recommend. It is not well known but is the logical evolution of the methods described in Section 29.5.8.

[Flowchart not reproduced; a common primary/secondary sample is split, both subsamples are prepared under the same conditions and dried in the same oven: the moisture subsample is weighed wet and dry to give H, while the analytical subsample is packaged water-tight and analyzed as received for aLD, leading to the calculation of MLA.]

FIGURE 29.4 Recommended method for very fine material: simultaneous drying of both samples inside the same oven.



[Flowchart not reproduced; the coarse feed is weighed (MLH), a single primary sample is optionally crushed and subsampled, then split into a moisture subsample (weighed wet, dried in the same oven, weighed dry, giving H) and an analytical subsample (prepared and pulverized as received, packaged air- and water-tight, analyzed as received for aLD), leading to the calculation of MLA.]

FIGURE 29.5 Recommended method for coarse material: simultaneous drying of both samples inside the same oven.

It is characterized by the fact that both the determination of the moisture content and the determination of the critical content are performed on the same sample. The method is summarized in Figure 29.6. The implementation of this method requires a certain number of precautions:
1. The primary sample must satisfy its double objective, which implies:
○ The sample must be collected very near the weighing place of the wet tonnage MLH.
○ The sample must respect the particle size distribution of the material, as explained in Chapter 11. In other words, the sample weight must be sufficient, and the sampling operation must be correct, as explained in Part 7.



[Flowchart not reproduced; a single primary sample is crushed, optionally subsampled, weighed wet, dried, and weighed dry to give H; the same sample is then prepared, split, pulverized, packaged air- and water-tight, and analyzed as received for aLD, leading to the calculation of MLA.]

FIGURE 29.6 Recommended method of the single sample.

2. No moisture change is permissible between the estimation of MLH and the moisture determination of the sample.
3. No moisture change is permissible between the weighing of the dry material and the weighing for the analysis. This often necessitates performing all weighing operations in a balance room equipped with effective air conditioning, especially near the sea or in tropical countries.



Let's define:
MAD: the weight of the dry analytical subsample.
MAA: the weight of the constituent of interest in the analytical subsample.

Then, we can write:

$$M_{LA} = a_{LD}\,(1-H)\,M_{LH} = M_{LH}\cdot\frac{M_{SD}}{M_{SH}}\cdot\frac{M_{AA}}{M_{AD}} \qquad (29.6)$$

In this equation, MSD/MSH = 1 – H represents the content of dry material in the wet material, and MAA/MAD = aLD represents the average content of the constituent of interest on a dry basis.

Equation (29.6) can be rearranged in a strictly equivalent way:

$$M_{LA} = \frac{M_{LH}}{M_{SH}}\cdot\frac{M_{SD}}{M_{AD}}\cdot M_{AA} \qquad (29.7)$$

where MLH/MSH is the total sampling ratio on wet material and MSD/MAD is the total sampling ratio on dry material. From this equation it becomes easy to see that an accurate determination of the tonnage MLA requires:
1. That MLH and MSH be determined on wet materials having exactly the same moisture content.
2. That MSD and MAD be determined on materials dried under exactly the same conditions.
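Purely as an illustration, the bookkeeping implied by equations (29.6) and (29.7) can be sketched in a few lines of code. This is a minimal sketch, not part of the original procedure; all weights below are hypothetical round numbers and the function names are ours.

```python
# Minimal sketch of equations (29.6) and (29.7); all weights in grams and
# hypothetical. M_SH / M_SD: wet / dry weights of the same sample;
# M_AD: dry analytical subsample; M_AA: constituent of interest in it.

def moisture_content(M_SH, M_SD):
    """H = 1 - M_SD / M_SH, the moisture content of the wet sample."""
    return 1.0 - M_SD / M_SH

def tonnage_of_constituent(M_LH, M_SH, M_SD, M_AD, M_AA):
    """Equation (29.7): M_LA = (M_LH / M_SH) * (M_SD / M_AD) * M_AA."""
    return (M_LH / M_SH) * (M_SD / M_AD) * M_AA

M_LH = 10_000_000.0            # wet tonnage of the lot (hypothetical)
M_SH, M_SD = 5000.0, 4500.0    # wet and dry weights of the sample
M_AD, M_AA = 1.0, 0.02         # analytical subsample and constituent weight

H = moisture_content(M_SH, M_SD)                      # 0.10
a_LD = M_AA / M_AD                                    # content on a dry basis
M_LA = tonnage_of_constituent(M_LH, M_SH, M_SD, M_AD, M_AA)
assert abs(M_LA - a_LD * (1.0 - H) * M_LH) < 1e-6     # equation (29.6) agrees
print(f"H = {H:.3f}, M_LA = {M_LA:.0f} g")
```

The assertion simply verifies that the two strictly equivalent forms of the equation return the same tonnage.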

29.6. Typical Difficulties Encountered during the Selection of a Sampling Weight Base Line

A real example may better illustrate the various difficulties that can be encountered during the selection of a sampling weight base line. Many are tempted to say: “Why so much fuss about the moisture determination, since drying the material until it reaches a constant weight is a very simple thing to do?” A close look at the dehydration curves of many materials proves that the solution to the problem is not simple. The example of tropical laterites is enlightening.

Laterites, like many other soils rich in varieties of clay and iron oxides, present difficult sampling problems. Among these, the problem of the sampling weight base line is present at all sampling stages, from collection in the field through sample preparation, pilot metallurgy, and analytical work. The very high surface area of the fine particles of these soils results in a high absorbed water content. Furthermore, water of crystallization occurs in several minerals such as psilomelane, serpentine, montmorillonite, and so on. Water of constitution occurs in goethite, lepidocrocite, gibbsite, boehmite, and so on. Table 29.1 shows the temperatures at which water is lost from several minerals present in laterite ores.



TABLE 29.1
Examples of minerals found in lateritic ores: endothermic peaks (EN.) and exothermic peaks (EX.) observed.

Mineral                   Mineral formula                        Temperature in °C
Goethite                  α-FeO·OH                               (EN.) 200–400
Lepidocrocite             γ-FeO·OH                               (EN.) 300–400
Hematite                  α-Fe2O3                                Anhydrous
Magnetite                 Fe2O3·FeO                              (EX.) 275–375 and 590–650
Ferric oxide gels         Fe2O3·nH2O                             (EX.) 259–500; dehydration 100–200
Chromites                 R(RM)O4 (R = trivalent metallic        Anhydrous
                          element, M = bivalent metallic element)
Gibbsite                  γ-Al(OH)3                              (EN.) 300–330
Boehmite                  γ-AlO·OH                               (EN.) 500–600
Psilomelane               MnO2·nH2O                              (EN.) 900–1200
Asbolanes                 (nCoO, mMnO2, rNiO)·xH2O               Can widely vary
Kaolinite                                                        (EN.) 480–540; (EX.) 980–1000
Talc                      Si4O10Mg3(OH)2                         (EN.) 870–1050
Serpentine (antigorite)                                          (EN.) 600–660; (EX.) 800–830
Montmorillonite                                                  (EN.) 600–750

The net effect is a dehydration curve of the shape shown in Figure 29.7. Figure 29.8 illustrates the weight losses of the same material between 75 and 180°C. Figures 29.9 and 29.10 show the effect of drying time, temperature, and particle size on water loss.

As the water content of a laterite changes, so does the content of all other constituents of interest, such as nickel, cobalt, manganese, chromium, iron, aluminum, silica, magnesium, and so on. A statement concerning the content of any critical constituent is meaningless unless there is a corresponding statement concerning its water content at all relevant sampling stages. This can be defined as the sampling weight base line problem. If drastic precautions such as those indicated in Sections 29.5.8 and 29.5.9 are not taken, the problem is an intractable one. Figures 29.7 and 29.8 show how precarious the assumption is that a constant weight can be reached at any temperature. The only thing that can be said is that some zones of drying temperatures are better choices than others. Even if we accept the optimistic assumption that a constant weight can be reached, Figures 29.9 and 29.10 clearly show that a small difference in the drying temperature, or a small difference in the particle size distribution of the material, can drastically change the sampling weight base line.

Conclusion: The only safe solution is to rely on a simultaneous drying method or on the single sample method. Then, only as a complementary factor, we shall define a sampling weight base line by selecting a temperature located in a zone where a small change in temperature does not result in a great change in the weight of the material (e.g., for laterite ores this zone is obviously between 95°C and 200°C; usually a temperature of 105°C is chosen).
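One practical corollary of the sampling weight base line problem can be made concrete with a tiny sketch: converting assays to a common dry basis before comparing them. The relation a_dry = a_wet/(1 – H) follows directly from the definitions used earlier in this chapter; the numerical values below are invented for illustration.

```python
# Hypothetical illustration: the same laterite assayed at two moisture states.
# An assay is only comparable to another after both are brought to the same
# sampling weight base line (here, a dry basis): a_dry = a_wet / (1 - H).

def to_dry_basis(a_wet, H):
    """Convert a content measured on material of moisture H to a dry basis."""
    return a_wet / (1.0 - H)

# Same ore, two laboratories drying to different residual moistures:
a_lab1 = to_dry_basis(1.47, 0.02)   # % Ni as received, 2% residual moisture
a_lab2 = to_dry_basis(1.32, 0.12)   # % Ni as received, 12% residual moisture
print(f"lab 1: {a_lab1:.2f}% Ni dry; lab 2: {a_lab2:.2f}% Ni dry")  # both 1.50%
```

Two assays that look discrepant as received (1.47% versus 1.32% Ni) agree exactly once they are referred to the same base line.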



[Figure not reproduced: %H2O versus temperature (100–500°C); A: adsorbed water, B: crystallization water, C: absorbed water, A + B + C = D.]

FIGURE 29.7 Illustration of the dehydration curve of a lateritic ore.

29.7. Correctness of the Sampling Equipment Used for the Moisture Determination

The sampling equipment selected for the collection of samples used for the determination of the moisture content must, in all points, satisfy the rules of sampling correctness reviewed in Part 7 of this book. A sampler or a divider that is strictly accurate (i.e., correct and capable of collecting representative samples regardless of the nature of the material) for the estimation of a critical constituent will also be accurate for the estimation of the moisture content.



[Figure not reproduced: weight in grams (1.000 down to 0.94) versus temperature (80–160°C); 105°C is marked as the generally accepted drying temperature.]

FIGURE 29.8 Weight losses of a lateritic ore between 75 and 180°C.

A sampler or a divider that is not correct, and likely to introduce a significant bias for the estimation of a certain constituent of interest, will also be incorrect and likely to introduce a significant bias for the estimation of the moisture content. A grab sample used for the determination of the moisture content is invalid and highly misleading, constitutes a flagrant disregard of due diligence, and can in many cases be regarded as a fraudulent sample.

29.8. Complementary Remarks and Conclusions

By taking the necessary precautions to minimize errors of increment delimitation, extraction, and preparation, it is possible to conduct a variographic experiment to estimate, with sufficient precision, the total variance of the Heterogeneity Fluctuation



[Figure not reproduced: %H2O versus drying time in hours (2–10), with curves for particle size = d and particle size < d.]

FIGURE 29.9 Effect of the particle size on the dehydration curve of a lateritic ore. X1 is the residual absorbed and adsorbed water for the particle size = d. X2 is the residual absorbed and adsorbed water for the particle size < d. Q is the crystallization and combined water.

Error HFE and the variance of each of its components HFE1, HFE2, and HFE3. In such an experiment the moisture content is treated as a critical component, no different from other critical components such as minerals. As far as the Fundamental Sampling Error is concerned, experience proves that the requirements to minimize this error are much more stringent for components such as minerals than for a component such as the moisture. It can be safely assumed that if a sampling protocol is adequate to minimize the Fundamental Sampling Error of a critical component, it is also adequate to minimize the Fundamental Sampling Error concerning the moisture content.51

In the introduction of this chapter we said that sampling for the determination of the moisture content has a bad reputation. This bad reputation is the result of real difficulties encountered in practice that classical methods such as those described in Section 29.5.7 did not solve. Of course, the problem is not simple to solve. If we intend to perform a correct sampling operation and an accurate moisture determination on the sample, drastic



[Figure not reproduced: %H2O versus drying time in hours (2–10), with curves for drying temperatures of 40°C, 80°C, and 107°C.]

FIGURE 29.10 Effect of the drying temperature on the dehydration curve of a lateritic ore. Q1 is the sum of crystallization and combined waters. Q2 – Q1 is the residual adsorbed water at 107°C. Q3 – Q1 is the residual adsorbed water at 80°C. Q4 – Q1 is the residual adsorbed water at 40°C. X1 is the residual absorbed water at 107°C. X2 is the residual absorbed water at 80°C. X3 is the residual absorbed water at 40°C.

precautions like those described earlier should be taken, with a careful review of all possible sources of significant bias. Our conclusion is that the only guarantee of an accurate moisture determination is to follow the methods described in Sections 29.5.8 and 29.5.9 as closely as practically possible. Therefore, we can state that the bad reputation of the moisture determination is no longer justified; it is only the result of implementing old and obsolete classical methods.


30 Peculiarities about the Sampling of Precious Metals

30.1. Introduction

Theoretical and, especially, practical problems generated by the sampling of materials containing precious metals have been given enormous attention by many specialists for a long time. It is not by chance that D. W. Brunton,1–3 whose contribution constitutes the best precursor to Pierre Gy's theory of sampling, was working in a gold mine in Colorado at the end of the nineteenth century. Nevertheless, solutions to the sampling of precious metals in general are still unsatisfactory: sometimes propositions are too theoretical and useless for the engineer, and sometimes they are too simplistic and based on empirical observations without solid foundations. In the material presented in this chapter there are still a number of necessary approximations. We will keep these approximations in mind to make sure that they are always valid. A special effort is made in presenting an easy text illustrated by simple practical examples. This makes it possible to quickly calculate the variance of the Fundamental Sampling Error involved during the sampling of precious metals. The objective is to find out, with no ambiguity, if a sample should weigh 30 g, 3 kg, or 3 tons! This chapter is also an introduction to the intriguing subject of the sampling of trace constituents, which was already the very important objective of Part V of this book.

Peculiarities about the sampling of precious metals can be divided into three categories:
• Financial
• Theoretical
• Practical

30.1.1. Financial Peculiarities

Relatively small amounts of material can involve very large amounts of money; therefore, problems of precision and accuracy quickly become a primary concern. This is the case during early exploration, for mining evaluations and planning, for milling operations, for recycling smelters, and so on. There are probably no other materials than precious metals for which the achievement of sampling precision and accuracy is so critical; however, there are also no other materials whose sampling is as challenging.



30.1.2. Theoretical Peculiarities

One of the main distinctions between precious metals and other metals is the fact that they are economic at very low levels. Base metals, for example, are often estimated in percent, while gold, platinum, palladium, and so on are often estimated in parts per million. Tailings of precious metals can be as low as 10 ppb (aL = 10⁻⁸). This is the reason why the sampling of precious metals is an excellent introduction to the particular case of the sampling of trace constituents already reviewed in Part V. Under those conditions we can easily predict that the Fundamental Sampling Error, and therefore the Grouping and Segregation Error, can rapidly become overwhelming.

30.1.3. Practical Peculiarities

Because gold has an important place in the sampling of precious metals, and is known to generate numerous practical difficulties, our discussion will often concern only this metal. However, the extension of these discussions to other precious metals is straightforward. The gold content of a sample and the gold content of the surrounding ore can be very different. Furthermore, the gold content of a tiny analytical 30-g subsample and the gold content of the 10000-g sample from which it was selected can also be very different. The density of gold is enormous (ρAu = 19.3), promoting strong segregation phenomena as soon as some gold particles are liberated. Gold particles do not comminute very well; therefore, gold smears and easily coats sampling equipment, generating unacceptable losses and cross-contamination problems. As a result, a finely ground analytical 250-g subsample, believed to be 100% minus 100 microns, can still contain a few gold particles that may be 200 microns or even larger. This delayed comminution confuses many sampling experts in their calculations of the variance of the Fundamental Sampling Error, generating endless debates. All these problems are amplified as the gold grade becomes lower, as the economics of gold deposits become marginal, and as the distribution of gold in rocks becomes erratic, as studied in Part VI of this book.

Preliminary conclusion: If something can go wrong with the sampling of precious metals, usually it will, and in a devastating way.

30.2. A Logical Approach

Practically everything that has been established for the composition of sampling errors up to and including Part VIII is applicable when the constituent of interest is a precious metal. There are six conditions to fulfill to eliminate, or at least minimize, most of these errors:
1. All the sampling equipment is designed, built, maintained, and used in such a way that all Increment Preparation Errors IPEn are only relevant to the second order.
2. The increment delimitation must be correct: IDE = 0.
3. The increment extraction must be correct: IEE = 0.
4. The increments must be numerous enough to minimize the Grouping and Segregation Error GSE until it becomes relevant only to the second order. We must emphasize that the homogenization of liberated gold is impossible due to its density; therefore, the only hope to minimize GSE is by minimizing the



Fundamental Sampling Error first, and then increasing the number of increments per sample.
5. For the sampling of one-dimensional lots, the interval θ between each increment is small enough to render the long-range Heterogeneity Fluctuation Error HFE2 relevant only to the second order.
6. The sampling selection mode is chosen with respect to the periodic Heterogeneity Fluctuation Error HFE3. Usually, the selection mode must include some randomness.

When all of these conditions are fulfilled, the only error left is the Fundamental Sampling Error (FSE), which is the main subject of this chapter. What has been said in Chapters 10 and 11 applies to the sampling of precious metals; however, a certain number of remarks should be made, especially concerning some approximations that may no longer be robust enough in the case of liberated gold. Precious metals, and especially gold, whose modes of occurrence are many, present sampling difficulties that should be solved in different ways. The analysis of the Fundamental Sampling Error can be divided into three categories:
1. Alluvial orebodies, or other materials in which gold is liberated.
2. In-situ orebodies where gold occurs in small veins, clusters, or other materials in which gold is not liberated.
3. Associations of gold, or other precious metals, with other minerals.
Each of these categories should be reviewed with a completely different approach.

30.3. Gold Is Liberated

In this category we find alluvial and eluvial gold of placer deposits. In these deposits, gold is generally free or completely liberated within an alluvial or eluvial gangue. In this category we also find materials containing nonliberated gold, after they have been crushed or pulverized to the liberation size of gold. The problem presented by liberated gold can be solved using a binomial model, and that is what Prigogine205–208 and Ingamells and Pitard22 have been doing. In this particular case, the results obtained by Prigogine, Ingamells, and Gy are in good agreement. The following formula for the variance of the Fundamental Sampling Error FSE, used for liberated gold in this chapter, was carefully derived in Section 11.5 of this book:

$$s_{FSE}^2 = \left(\frac{1}{M_S} - \frac{1}{M_L}\right)\frac{0.8}{a_L}\,d_{Au}^3 \qquad (30.1)$$

30.3.1. Important Limitation

The following discussion can go a long way toward reconciling many sampling experts who do not agree with Gy's general formula. The author hopes it will give them food for thought.



We found that the top particle size d of the overall material (mostly gangue) was negligible; however, we may rightly wonder to what extent this is true. For example, we could imagine a small sample containing fine liberated gold particles mixed with large gangue fragments. In such a case we intuitively see that something must be wrong with relation 30.1. To prevent the misuse of this formula we can safely state that the sample must also be representative of all the particle size fractions of the material from which it has been collected, as clearly stated in the cardinal rules established in Chapter 12. We could write:

$$s_{FSE}^2 = \left(\frac{1}{M_S} - \frac{1}{M_L}\right)\frac{0.8}{a_L}\,d_{Au}^3 = \left(\frac{1}{M_S} - \frac{1}{M_L}\right) f\rho\,\frac{(1-a_{Lc})^2}{a_{Lc}}\,d^3 \qquad (30.2)$$

with f = 0.5, ρ = 2.8, and aLc = 0.05 by definition. We obtain:

$$d^3 = \frac{1}{25}\cdot\frac{0.8}{a_L}\,d_{Au}^3 = \frac{1}{25}\,IH_{L_{Au}} \qquad (30.3)$$

Then, in our practical application, we have to remember the following limitation:

$$d^3 \leq \frac{1}{25}\,IH_{L_{Au}} \qquad (30.4)$$

30.3.2. Practical Examples

30.3.2.1. Example #1

The gold content of a lot is about aL = 0.4 g/t. Locally, the largest gold particles are about 500 µm. What is the weight MS of the sample we should collect if we do not want to exceed a Fundamental Sampling Error FSE with a relative standard deviation 2sFSE = 0.32, which is ±32%? We assume that the weight ML of the lot L is larger than MS by several orders of magnitude.

1. Calculation of the allotted variance: 2sFSE expressed as part of one is 0.32, therefore

$$s_{FSE}^2 = (0.16)^2 = 0.0256$$

2. Calculation of the Intrinsic Heterogeneity IHL for gold:

$$IH_{L_{Au}} = \frac{0.8}{a_L}\,d_{Au}^3 = \frac{0.8}{4\times 10^{-7}}\,(0.05)^3 = 250\,\text{g}$$

Remark: aL is always expressed as part of one, and if MS is expressed in grams, dAu must be expressed in centimeters.

3. Calculation of the minimum sample weight MS:

$$M_S = \frac{IH_L}{s_{FSE}^2} = \frac{250}{0.0256} = 9766\,\text{g}$$

4. Condition of validity:

$$d^3 \leq \frac{1}{25}\,IH_L = \frac{250}{25} = 10 \quad\Rightarrow\quad d \leq 2.15\,\text{cm}$$

If we collect a 9766-g sample, the maximum size d of the gangue particles should not exceed 2.15 cm. If it does, an appropriate fraction of the lot (depending on the value of d) will have to be collected, then screened on a sieve with the opening closest to 2.15 cm. The oversize and undersize fractions are carefully weighed. The sample is collected in the undersize fraction. Then the oversize fraction can be sampled and analyzed for its gold content separately.
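For readers who prefer to automate this arithmetic, here is a minimal sketch of formulas 30.1 (with ML much larger than MS), 30.3, and 30.4; it reproduces the figures of Example #1 and is offered only as an illustration.

```python
# Sketch of formulas 30.1, 30.3, and 30.4 for liberated gold, assuming
# M_L >> M_S. Units: a_L as part of one, d_Au in centimeters, masses in grams.

def IH_L_gold(a_L, d_Au_cm):
    """Intrinsic Heterogeneity of liberated gold: IH_L = (0.8 / a_L) * d_Au^3."""
    return 0.8 / a_L * d_Au_cm ** 3

def min_sample_mass(a_L, d_Au_cm, two_s_FSE):
    """Minimum sample mass for an allotted relative standard deviation 2s_FSE."""
    return IH_L_gold(a_L, d_Au_cm) / (two_s_FSE / 2.0) ** 2

def max_gangue_size(a_L, d_Au_cm):
    """Validity condition (30.4): d^3 <= IH_L / 25, d in centimeters."""
    return (IH_L_gold(a_L, d_Au_cm) / 25.0) ** (1.0 / 3.0)

# Example #1: a_L = 0.4 g/t = 4e-7, d_Au = 500 um = 0.05 cm, 2s_FSE = 32%:
print(min_sample_mass(4e-7, 0.05, 0.32))   # ~9766 g
print(max_gangue_size(4e-7, 0.05))         # ~2.15 cm
```

The same functions reproduce Example #2 below with a_L = 2e-7 and d_Au = 0.01 cm.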

30.3.2.2. Example #2

A gold ore has a grade aL = 0.2 g/t. A microscopic examination of a gravity concentrate reveals that the gold is liberated and the largest gold particles are around 100 µm. What is the minimum weight MS of the sample we should collect if we do not want to exceed a Fundamental Sampling Error FSE with a standard deviation 2sFSE = 0.32, which is ±32%? We assume that the weight ML of the lot L is larger than MS by several orders of magnitude.

1. Calculation of the allotted variance: 2sFSE expressed as part of one is 0.32, therefore

$$s_{FSE}^2 = (0.16)^2 = 0.0256$$

2. Calculation of the Intrinsic Heterogeneity IHL for gold:

$$IH_L = \frac{0.8}{2\times 10^{-7}}\,(0.01)^3 = 4\,\text{g}$$

3. Calculation of the minimum sample weight MS:

$$M_S = \frac{4}{0.0256} = 156\,\text{g}$$

4. Condition of validity:

$$d^3 \leq \frac{4}{25} = 0.16 \quad\Rightarrow\quad d \leq 0.5\,\text{cm}$$



If we collect a 156-g sample, the maximum size d of the gangue particles should not exceed 0.5 cm prior to the sampling selection.

30.3.3. Study of an Alluvial Orebody Using a Classical Drilling Campaign

Let's assume that we are dealing with an orebody containing coarse free gold. The average gold content is believed to be around 0.4 g/t. Locally, the largest gold particles are around 0.24 cm. The necessary sample weight, so as not to exceed a Fundamental Sampling Error FSE with a standard deviation 2sFSE = 0.32, which is ±32%, is around 1 metric ton. But a drilling machine equipped with HQ wire-line drill rods extracts core samples of 6.35-cm diameter, which corresponds to a 25-kg sample for every 3-m intersection. Obviously, in such a case the estimate of the Fundamental Sampling Error FSE becomes completely meaningless and the distribution of the gold data is likely to obey an artificially introduced Poisson distribution.

Conclusion #1: Many gold results are much too low and a few are much too high. In such a case it is very dangerous to draw any conclusion from too few samples (see Section 30.6). However, if we can have access to 40 results from 40 samples of effective weight MS = 25 kg (i.e., the entire 25-kg sample must be analyzed, using gravity concentration if necessary), everything is as if we analyzed a single composite sample of effective weight MS = 1 metric ton.

Conclusion #2: In this example, the average gold grade would be known at ±32% relative, with a 95% chance of being right, after completely analyzing 40 25-kg core samples.

30.3.4. Relation between the Total Weight of Gold in a Sample and the Weight of Its Largest Gold Particle

Let's go back to formula 11.22, which can be written as follows:

$$s_{FSE}^2 = \left(\frac{1}{M_S} - \frac{1}{M_L}\right) f_{Au}\cdot g_{Au}\cdot \ell_{Au}\cdot \frac{\rho_{Au}}{a_L}\cdot d_{Au}^3 \qquad (30.5)$$

We see that the product

$$M_{Au} = f_{Au}\cdot \rho_{Au}\cdot d_{Au}^3 \qquad (30.6)$$

is the weight MAu of the largest gold particle. Also, the products

$$M_{L_{Au}} = a_L\cdot M_L \qquad (30.7)$$

and

$$M_{S_{Au}} = a_L\cdot M_S \qquad (30.8)$$

are, respectively, the weight of gold MLAu in the initial lot L and the weight of gold MSAu in the sample S. By definition ℓAu = 1, and gAu is around 0.25; thus we can rewrite relation 30.5 as follows:



$$s_{FSE}^2 = 0.25\left(\frac{M_{Au}}{M_{S_{Au}}} - \frac{M_{Au}}{M_{L_{Au}}}\right) \qquad (30.9)$$

Assuming that MLAu is larger than MSAu by several orders of magnitude:

$$s_{FSE}^2 = \frac{0.25\,M_{Au}}{M_{S_{Au}}} \qquad (30.10)$$

Then, with a tolerated standard deviation 2sFSE = ±32%, we can calculate MSAu:

$$M_{S_{Au}} = \frac{0.25\,M_{Au}}{s_{FSE}^2} = \frac{0.25\,M_{Au}}{0.0256} \approx 10\,M_{Au} \qquad (30.11)$$

Conclusion: A sample that contains ten times more gold (i.e., MSAu) than the largest gold particle of the lot (i.e., MAu) is representative of the lot with a Fundamental Sampling Error standard deviation 2sFSE = ±32%.
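This ten-times rule is easy to check numerically. In the sketch below, the weight of the largest gold particle is taken as MAu = fAu·ρAu·dAu³ (relation 30.6); the flake shape factor fAu = 0.2 is an assumed illustrative value, not a figure from the text, which is why the implied sample mass differs slightly from the 9766 g of Example #1.

```python
# Sketch of relations 30.6, 30.10, and 30.11. f_Au = 0.2 is an assumed shape
# factor for flaky gold (illustrative only); rho_Au = 19.3 g/cm3; d_Au in cm.

f_Au, rho_Au, d_Au = 0.2, 19.3, 0.05     # 500-um largest gold particle
M_Au = f_Au * rho_Au * d_Au**3           # weight of the largest gold particle (30.6)

s_FSE = 0.16                             # tolerated 2*s_FSE = +/-32%
M_SAu = 0.25 * M_Au / s_FSE**2           # gold required in the sample (30.11)
print(M_SAu / M_Au)                      # ~9.8: the "ten times" rule

a_L = 4e-7                               # lot grade, 0.4 g/t as part of one
print(M_SAu / a_L)                       # implied sample mass, ~11800 g
# (close to the 9766 g of Example #1; the difference comes from the assumed f_Au)
```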

30.3.5. Problems Attached to the Variability of Gold in Stream Sediments during Geochemical Reconnaissance

Gold deposits in general have three main characteristics:
1. They have very low gold content; therefore, it is necessary to collect large samples.
2. Gold deposits are small; therefore, many samples are necessary to define their contour.
3. The distribution of gold within the outcrop is often erratic; therefore, many samples are necessary to define economic blocks (see Section 30.6).
These requirements, especially the first two, when considered with respect to geochemical reconnaissance, suggest that a very useful role can be played by the sampling of stream sediments. The reason is that these sediments provide a gross geochemical overview of large drainage basins.

30.3.5.1. Gold Content of Rocks without Outcropping Gold Mineralization

Most rock types without outcropping gold mineralization have a very low background gold content averaging from 2 to 8 ppb, with the exception of some sandstone arkose, which may contain about 26 ppb of gold. Of course, these values are only averages. If we exclude sandstone arkose, the standard deviation observed in undisturbed rocks ranges from 2s = 19 to 85 ppb. Standard deviations observed in stream sediments should be much smaller because of the narrow range of average values. Erosion has a smoothing effect on the variability of stream sediments compared to the variability of in situ rocks. It is therefore expected that stream sediments should have a background gold content on the order of 3 to 4 ppb.



30.3.5.2. Gold Content of Stream Sediments When a Drainage Basin Contains Outcropping Gold Mineralization

When a drainage basin contains outcropping gold mineralization the stream sediments contain two types of gold:
1. Normal background gold (3 to 4 ppb)
2. Gold coming from cross-cutting mineralization
Of course, the gold-bearing fraction is proportional to the fraction of the total area of the upper drainage basin that is composed of the mineralized rocks. Dilution by the distance and volume of nonmineralized rocks rapidly brings the gold content from mineralization back near the normal background gold content.

30.3.5.3. Practical Example

During a geochemical reconnaissance, we want to determine gold at the 3-ppb level. Consider the following questions:
1. What should the size of the largest gold particles be to obtain estimates with a standard deviation of the Fundamental Sampling Error not exceeding 2sFSE = ±32%, if we use 30-g samples either for fire assay or for acid digestion?
2. If the answer to the first question is not satisfactory, what should the weight of the analyzed sample be if the expected largest gold particles are around 70 µm and if we want to maintain 2sFSE = ±32%?
From formula 30.1, with ML much larger than MS, we can write:

$$d_{Au} = \sqrt[3]{\frac{s_{FSE}^2\, M_S\, a_L}{0.8}} \qquad (30.12)$$

$$d_{Au} = 14\,\mu\text{m} \qquad (30.13)$$

This low value of dAu is unrealistic. If liberated gold is present in the sediments, it is likely to be much coarser in many cases. Therefore, the analysis of 30-g samples, either by fire assay or acid digestion, is totally meaningless. There are at least two more reasons why these approaches will not give good information to the exploration geologist:
1. The analysis of a silver or palladium bead obtained by fire assay will show gold contamination at this very low level from the silver or palladium inquart, the clay crucibles, and the flux.
2. Fire assay or acid digestion breaks down rock-forming minerals; thus the determination of gold includes both background gold and mineralization gold, which is confusing.



Under these conditions what should we do? First, let's answer the second question:

$$M_S = \frac{0.8\, d_{Au}^3}{s_{FSE}^2\, a_L} \qquad (30.14)$$

$$M_S = 3573\,\text{g} \qquad (30.15)$$
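A short sketch of formulas 30.12 and 30.14 follows, again assuming ML much larger than MS; it reproduces the 14-µm and 3573-g figures above and is given only as an illustration.

```python
# Sketch of formulas 30.12 and 30.14 (M_L >> M_S).
# Units: a_L as part of one, d_Au in centimeters, masses in grams.

def max_gold_size(M_S, a_L, s_FSE):
    """Formula 30.12: largest tolerable gold particle for a given sample mass."""
    return (s_FSE**2 * M_S * a_L / 0.8) ** (1.0 / 3.0)

def required_sample_mass(d_Au_cm, a_L, s_FSE):
    """Formula 30.14: sample mass required for a given largest gold particle."""
    return 0.8 * d_Au_cm**3 / (s_FSE**2 * a_L)

a_L, s_FSE = 3e-9, 0.16                        # 3 ppb gold, 2s_FSE = +/-32%
print(max_gold_size(30.0, a_L, s_FSE) * 1e4)   # ~14 micrometers for 30-g samples
print(required_sample_mass(0.007, a_L, s_FSE)) # ~3573 g for 70-um gold
```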

The only practical way to quickly and economically analyze such large stream sediment samples is by cyanide leaching, as described by Ingamells and Pitard.22 This method presents many advantages:
1. Samples are large enough.
2. Unlike acid digestion or fire assay methods, cyanide leaching extracts only free gold, and not the normal background gold contained inside silicate minerals.
3. There is very little chance of contamination.
4. Because of the size of the sample, an extremely low detection limit can be reached with instrumentation such as graphite furnace AA or ICP (i.e., 0.1 ppb).

30.3.5.4. Recommended Sampling Procedure

Collect a 10000-g bulk sample made of at least 25 400-g random increments located around the theoretical sampling point. This sample is entirely sieved through a screen with 710-µm openings. This can be conveniently performed in the field. Split out about 3500 g using an alternate shoveling method. If the sample is too small, screen more of the original material by collecting more increments. Send the entire sample for cyanide leaching just as it is. Any additional crushing may only contribute gold contamination.

30.3.5.5. Interpretation of Results

The geochemist interpreting the results from his reconnaissance is immediately confronted with the detection of trends. At this stage, all the necessary material is already at his disposal (see Chapter 7).

30.3.6. Problem Associated with the Liberation of Gold during Sample Preparation

Because liberated gold particles have a very high density (e.g., twice the density of galena, and six times the density of an ordinary gangue), the danger of segregation is extremely high. Any handling operation immediately generates strong segregation. Gold particles can be lost inside all the angles and parts of the sampling device that are difficult to access. We shall state that the homogenization of a lot or of a sample containing liberated gold is impossible. Increment Preparation Errors are likely to be severe.

30.3.6.1. Loss of Gold

Loss of gold may easily occur in three different ways:
1. Gold particles remaining in the sampling and preparation circuit are lost.



2. Loss of gold generated by pulverization in a closed circuit with a 100- or 150-µm screen can become disastrous as soon as gold is liberated. Gold particles tend to flatten upon pulverizing, and the impatient operator is often tempted to throw away the second or third oversize. We know of an example where a negative bias of about 50% was introduced that way.
3. Liberated gold smears easily on disk pulverizers, pestles, or mortars. This type of equipment should not be used if gold is liberated, unless the objective is to plate the equipment with 24-carat gold.
The only satisfactory solution is expensive: it consists in handling a large sample by cyanide leach, or concentrating the liberated or near-liberated gold with shaking tables, super-panners, or other methods. Weigh the concentrate and tailings separately. Melt the whole of the concentrate and as large a sample from the tailings as possible. Calculate the gold content of the processed sample.

30.3.6.2. Contamination of Gold

The reverse is also true: in a laboratory receiving samples of different grades, such as feed, concentrates, and tailings from a processing plant, distinct splitters, crushers, and pulverizers are a must, and whenever possible distinct rooms should be utilized for each product. If pulverizing a high-grade sample after pulverizing a low-grade sample does not result in an appreciable error, the opposite will nearly always result in a positive bias due to gold contamination.

30.3.7. Useful Sampling Nomographs

Useful sampling nomographs can be drawn in logarithmic coordinates; thus it is convenient to use the logarithm of formula 30.14:

$$\log M_S = \log 0.8 + 3\log d_{Au} - \log s_{FSE}^2 - \log a_L \qquad (30.16)$$

We found that the most useful nomograph gives the value of the sample weight MS as a function of the top gold particle size dAu for a given value of the standard deviation sFSE. Under those conditions log 0.8, log sFSE², and log aL are constants. It readily follows that the curve representing the function f(MS, dAu) in logarithmic coordinates is a straight line with a slope equal to 3. This slope is easily estimated by calculating the derivative of equation (30.16). Let's define log MS as Y, log 0.8 as constant C1, log dAu as X, log sFSE² as constant C2, and log aL as constant C3. We obtain:

$$Y = C_1 + 3X - C_2 - C_3 \qquad (30.17)$$

Then:

$$\frac{dY}{dX} = 3 \qquad (30.18)$$



Figures 30.1–30.3 illustrate nomographs whose Fundamental Sampling Error standard deviation sFSE has been fixed at sFSE = ±16%, ±5%, and ±1% respectively. For 2sFSE at ±32%, ±10%, and ±2% respectively, you would obviously use the same nomographs.

The practical examples of Sections 30.3.2.1, 30.3.2.2, and 30.3.5.3 can be solved by using the nomograph illustrated in Figure 30.1. Two examples are illustrated:
1. If the coarsest gold particle is 100 µm and the expected gold content is 0.2 g/t, then the necessary sample mass to assay is 156 g.
2. If the coarsest gold particle is 500 µm and the expected gold content is 0.2 g/t, then the necessary sample mass to assay is 19531 g. Gravity concentration and a large cyanide bottle roll must be used.

The nomograph illustrated in Figure 30.2 is set for a 5% standard deviation for FSE. Two examples are illustrated:
1. If the coarsest gold particle is 100 µm and the expected gold content is 10 g/t, then the necessary sample mass to assay is 32 g.
2. If the coarsest gold particle is 500 µm and the expected gold content is 10 g/t, then the necessary sample mass to assay is 4000 g. Gravity concentration and a large cyanide bottle roll must be used.

The nomograph illustrated in Figure 30.3 is set for a 1% standard deviation for FSE. Two examples are illustrated:

FIGURE 30.1 Liberated gold. Calculation of the sample weight with an allotted standard deviation of the Fundamental Sampling Error sFSE = ±16%.



FIGURE 30.2 Liberated gold. Calculation of the sample weight with an allotted standard deviation of the Fundamental Sampling Error sFSE = ±5%.

FIGURE 30.3 Liberated gold. Calculation of the sample weight with an allotted standard deviation of the Fundamental Sampling Error sFSE = ±1%.



1. If the coarsest gold particle is 100 µm and the expected gold content is 10 g/t, then the necessary sample mass to assay is 800 g.
2. If the coarsest gold particle is 500 µm and the expected gold content is 10 g/t, then the necessary sample mass to assay is 100000 g. Gravity concentration and a large cyanide bottle roll must be used.
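For readers who wish to redraw such nomographs, the following matplotlib sketch plots equation (30.16) in log-log coordinates, one straight line of slope 3 per gold grade. It is an illustration only, not a reproduction of Figures 30.1 through 30.3; the grades chosen are examples.

```python
# Sketch: nomograph lines of equation (30.16) in log-log coordinates, for a
# fixed allotted s_FSE. Each grade gives a straight line of slope 3.
# Requires numpy and matplotlib.
import numpy as np
import matplotlib.pyplot as plt

s_FSE = 0.16                                   # allotted standard deviation (16%)
d_Au = np.logspace(-3, -1, 50)                 # gold particle size in cm (10 um to 1 mm)

for grade_gpt in (0.2, 1.0, 10.0):             # expected grades in g/t
    a_L = grade_gpt * 1e-6                     # grade as part of one
    M_S = 0.8 * d_Au**3 / (s_FSE**2 * a_L)     # required sample mass in grams
    plt.loglog(d_Au * 1e4, M_S, label=f"{grade_gpt} g/t")

plt.xlabel("coarsest gold particle (micrometers)")
plt.ylabel("required sample mass (grams)")
plt.title("Liberated gold sampling nomograph (sketch), sFSE = ±16%")
plt.legend()
plt.show()
```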

30.4. Gold Is Not Liberated

In this category we find most other types of gold deposits, mainly in-situ gold deposits such as vein deposits spatially associated with igneous rocks, massive sulfide deposits, hot-spring associated deposits, caldera-associated deposits, and many others. These materials remain in this category as long as they are not crushed and pulverized to the liberation size of gold. In this section we assume that the gold is not associated with another mineral, a case that is studied separately in Section 30.5.

If we assume that the results of the general theory presented in Chapter 10 can be applied to extremely low grades, the difficulty is to estimate the liberation factor. In Chapter 11 we found ways to solve this difficulty. To prevent useless repetition, a quick test is described here to estimate the Intrinsic Heterogeneity IHL. This test consists only in finding the two or three fragments richest in gold among 200 fragments collected one by one at random from a representative large composite sample of the material to be investigated. These two or three rich fragments can be quickly identified either visually or by a short XRF scan. After identification they are carefully analyzed in totality by fire assay. We shall notice that there is no need to analyze all 200 fragments, which could be time-consuming and costly. When the highest gold grade for a given fragment size is known, the calculation of IHL becomes possible.

30.4.1. Three Important Assumptions

To solve this case, we make three assumptions:
1. We assume that the maximum gold grade amax of the largest fragments of the lot to be investigated is known.
2. We assume that all size fractions α roughly have the same average grade aα, or are at least of the same order of magnitude.
3. We assume that inside each size fraction α all the gold is contained in a subfraction of grade amax of relative weight M′ = MLβ/ML, and that the remainder of the fraction α, of relative weight 1 – M′, is sterile.

30.4.2. Calculation of the Intrinsic Heterogeneity IHL

We know from Section 10.5 that the term Y of IHL = XY can be calculated as follows:

$$Y = \sum_\beta \rho_\beta \left(\frac{a_\beta - a_L}{a_L}\right)^2 \frac{M_{L\beta}}{M_L} \qquad (30.19)$$

We also know that the ore is divided into two density fractions:
• ρβ = ρ′ corresponding to the fraction containing the gold: aβ = amax. This density fraction has a relative weight M′. From relation 10.34 we easily deduce that:

$$\rho' = \frac{\rho_{Au}\,\rho_g}{(1 - a_{max})\,\rho_{Au} + a_{max}\,\rho_g} \qquad (30.20)$$

• ρβ = ρg corresponding to the sterile gangue, where aβ = 0, or at least is second order of magnitude. This density fraction has a relative weight 1 – M′. Then:

$$Y = \rho' \left(\frac{a_{max} - a_L}{a_L}\right)^2 M' + \rho_g\,(1 - M') \qquad (30.21)$$

But M′ = aL/amax, then:

$$Y = \rho' \left(\frac{a_{max} - a_L}{a_L}\right)^2 \frac{a_L}{a_{max}} + \rho_g \left(1 - \frac{a_L}{a_{max}}\right) \qquad (30.22)$$

For convenience let's write amax/aL = r, with r > 1:

$$Y = \rho'\,(r-1)^2\,\frac{1}{r} + \rho_g \left(1 - \frac{1}{r}\right) \qquad (30.23)$$

After rearrangement of the terms we obtain:

$$Y = \rho'\,(r-1) - \frac{r-1}{r}\left(\rho' - \rho_g\right) \qquad (30.24)$$

We can see that the second term is always very small:
1. When r is near one, then ρ′ ≈ ρg and ρ′ – ρg ≈ 0.
2. When r is large, then (r – 1)/r becomes very small when compared to r – 1.
Generally, r becomes large before amax is large enough to make ρ′ very different from ρg; therefore:

$$Y \approx \rho'\,(r-1) \qquad (30.25)$$

Now, we can calculate the Intrinsic Heterogeneity IHL:

$$IH_L = f\cdot g\cdot \rho'\,(r-1)\cdot d^3 \qquad (30.26)$$

Finally:

$$IH_L = f\,g\,\frac{\rho_{Au}\,\rho_g}{a_{max}\,\rho_g + \rho_{Au}\,(1 - a_{max})}\left(\frac{a_{max}}{a_L} - 1\right) d^3 \qquad (30.27)$$

If f = 0.5 and g = 0.25, as is often the case, we obtain:

$$IH_L = \frac{d^3}{8}\left(\frac{a_{max}}{a_L} - 1\right)\rho' \qquad (30.28)$$

30.4.3. Useful Sampling Nomograph

We remember the following relation from Section 11.2:

$$IH_L = f\,g\,c\,\ell\,d^3 = C\,d^3 \qquad (30.29)$$

A useful nomograph can be drawn in logarithmic coordinates. For convenience, we divide the nomograph into two parts:
1. We have:

$$\log C = \log\left[\frac{\rho'}{8}\left(\frac{a_{max}}{a_L} - 1\right)\right] \qquad (30.30)$$

or, since amax/aL is much larger than one:

$$\log C = \log \rho' + \log a_{max} - \log 8 - \log a_L \qquad (30.31)$$

We have found that the most useful nomograph gives the value of C as a function of the average gold content aL for a given value of amax. Under these conditions log 8, log amax, and log ρ′ are constants. It readily follows that the curve representing the function f(C, aL) in logarithmic coordinates is a straight line with a slope equal to –1.
2. We have:

$$\log IH_L = \log\left(C\,d^3\right) \qquad (30.32)$$

or

$$\log IH_L = \log C + 3\log d \qquad (30.33)$$

Here log d is a constant. It readily follows that the curve representing the function f(C, IHL) in logarithmic coordinates is a straight line with a slope equal to 1.



30.4.4. Practical Examples

30.4.4.1. Example #1

We are studying a vein-controlled gold deposit where the expected average gold grade is around 10 g/t (aL = 1 × 10⁻⁵), and a representative bulk composite collected for various metallurgical tests is crushed to a fragment size of 95% minus 2.5 cm. A previous drilling campaign shows, locally, some 2.5-cm fragments obtained from the crushing of core samples containing up to 2500 g/t of gold (amax = 2.5 × 10⁻³). What should the sample weight MS be if we tolerate a Fundamental Sampling Error with a standard deviation not exceeding 2sFSE = ±8%?

Using the factor that corresponds to C in formula 30.27, we obtain:

$$C = \frac{1}{8}\cdot\frac{16\times 2.8}{2.8\times 2.5\times 10^{-3} + 16\,(1 - 2.5\times 10^{-3})}\cdot\left(\frac{2.5\times 10^{-3}}{1\times 10^{-5}} - 1\right) = 87$$

Then:

$$IH_L = C\,d^3 = 87\,(2.5)^3 = 1359\,\text{g}$$

Now, we can calculate the necessary sample weight MS:

$$M_S = \frac{1359}{(0.04)^2} = 849375\,\text{g}$$

30.4.4.2. Example #2

We have routine reverse circulation drilling samples that weigh about 50000 g. The material is about 95% minus 1.25 cm. The approximate average gold content is expected to be around 2 g/t. An XRF investigation followed by fire assay of the richest 1.25-cm fragments has determined that amax is 2500 g/t. On a routine basis each sample is split as extracted at the drilling site until a 5000-g sample is obtained. Then each 5000-g sample is sent to the laboratory for gold analysis. We want to know whether this procedure is safe or not. If not, what can we do about it? We assume that the density of gold (always associated with some other element) is about 16 and the density of the gangue is 2.8.

The first thing to do is to calculate IHL using formula 30.28, with ρ′ = ρg = 2.8:

$$IH_L = \frac{(1.25)^3}{8}\left(\frac{2.5\times 10^{-3}}{2\times 10^{-6}} - 1\right) 2.8 = 854\,\text{g}$$

Then, we can calculate the variance and standard deviation of the Fundamental Sampling Error FSE:

$$s_{FSE}^2 = \left(\frac{1}{5000} - \frac{1}{50000}\right) 854 = 0.1537$$

$$s_{FSE} = 39\%$$

This procedure is not safe; the material should be crushed before splitting. It is recommended to crush core samples or reverse circulation samples to minus 0.17 cm before proceeding with any splitting. Then the same XRF investigation can be performed on 0.17-cm fragments to find the new values of amax and IHL.
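These two examples can be automated with a small sketch of formulas 30.27 and 30.28, given here purely as an illustration; the densities (16 for the gold-bearing phase, 2.8 for the gangue) are those used in the examples above.

```python
# Sketch of formulas 30.27 and 30.28 for nonliberated gold, plus the FSE
# variance of a protocol. rho_metal is the density of the gold-bearing phase
# (16 in the examples); rho_g is the gangue density; d in centimeters.

def IH_L(a_L, a_max, d_cm, rho_metal=16.0, rho_g=2.8, f=0.5, g=0.25):
    """Formula 30.27: Intrinsic Heterogeneity in grams."""
    rho_prime = rho_metal * rho_g / (a_max * rho_g + rho_metal * (1.0 - a_max))
    return f * g * rho_prime * (a_max / a_L - 1.0) * d_cm**3

def s2_FSE(IH, M_S, M_L):
    """Variance of FSE for a sample of mass M_S taken from a lot of mass M_L."""
    return (1.0 / M_S - 1.0 / M_L) * IH

# Example #1: a_L = 10 g/t, a_max = 2500 g/t, d = 2.5 cm, 2s_FSE <= 8%:
IH1 = IH_L(1e-5, 2.5e-3, 2.5)
print(IH1, IH1 / 0.04**2)    # ~1364 g and ~853000 g (the text rounds C to 87)

# Example #2: a_L = 2 g/t, a_max = 2500 g/t, d = 1.25 cm, 5 kg split from 50 kg;
# the text takes rho' = rho_g = 2.8, so set rho_metal = rho_g:
IH2 = IH_L(2e-6, 2.5e-3, 1.25, rho_metal=2.8)
print(s2_FSE(IH2, 5000.0, 50000.0) ** 0.5)   # ~0.39, i.e., +/-39%
```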

30.5. Gold Is Associated with Another Major Mineral

Sulfides, selenides, arsenides, and tellurides of the usual metals, such as pyrite, sphalerite, chalcopyrite, iridosmine, jarosite, arsenopyrite, bismuthinite, allemontite, and many others, often contain some precious metals such as gold or silver. Some spinels such as chromite may contain gold, palladium, and platinum. For a given ore, we can often safely assume that the grade of the precious metal in the associated mineral does not vary nearly as much as the grade of the mineral in the ore, and as a first approximation we may consider the grade of the precious metal in the associated mineral as constant.

Let's consider the example of gold, palladium, and platinum associated with chromite. If we consider the chromite as a gold, palladium, and platinum mineral whose contents are αAu, αPd, and αPt respectively, then we can estimate these contents on the mineral itself and come back to the general case studied in Chapters 10 and 11. Let's define all our terms:
aL: the average mineral content (i.e., chromite) in the lot
αAu: the grade of gold in the chromite
αPd: the grade of palladium in the chromite
αPt: the grade of platinum in the chromite
αCr: the grade of chromium in the chromite
bLAu: the grade of gold in the lot
bLPd: the grade of palladium in the lot
bLPt: the grade of platinum in the lot
bLCr: the grade of chromium in the lot

When the lot under study is a chromite ore, the mineralogical factor c that we try to estimate is a function of the bLCr/αCr, bLAu/αAu, bLPd/αPd, or bLPt/αPt ratios. We find that these ratios stay the same whether the chromite is considered as a chromium ore or as a gold, palladium, or platinum ore. Then we have:

$$a_L = \frac{b_{L_{Cr}}}{\alpha_{Cr}} = \frac{b_{L_{Au}}}{\alpha_{Au}} = \frac{b_{L_{Pd}}}{\alpha_{Pd}} = \frac{b_{L_{Pt}}}{\alpha_{Pt}} \qquad (30.34)$$

Under these conditions the relative Fundamental Sampling Error FSE affecting the chromium is equal to that of the other precious metals. Of course, it is important to



verify that the grade of gold, palladium, or platinum does not vary very much in this particular pure chromite.

Conclusion: When a pure mineral contains several elements in reasonably constant proportions, and when we try to estimate the Fundamental Sampling Error FSE generated by a sampling protocol, FSE remains the same for all the elements of this mineral. Then, it is in our interest to calculate the variance of FSE for the element for which the estimations of bL and α are the easiest and the most precise. In the example of chromite containing chromium, gold, palladium, and platinum, this element is obviously chromium. In the case of galena containing lead, gold, and silver, this element is lead.

30.5.1. Practical Example

A noncalibrated chromite concentrate, of maximum fragment size d ≈ 1000 µm, contains 4.5% chromium, about 0.5 g/t gold, 2 g/t palladium, and 0.3 g/t platinum. A previous study of the heterogeneity of the precious metals within this particular pure chromite has revealed that they are finely disseminated everywhere within the chromite and their respective grades are reasonably constant. The density of the chromite is ρM = 4.5 and the density of the gangue is ρg = 2.9. This particular chromite contains 30% chromium. What sample weight MS is required if we accept a Fundamental Sampling Error FSE with a standard deviation 2sFSE not exceeding ±1% on the estimate of the content of each of these precious metals?

In this case, the grades of gold, palladium, and platinum are irrelevant. We shall focus our attention only on the chromium and chromite contents:
• The chromite fragment shape factor f is around 0.5.
• The fragment size distribution factor g of the noncalibrated chromite is around 0.25.
• The liberation factor of a concentrate such as chromite is very close to ℓ = 1.
Now, we have to calculate the mineralogical factor c using equation (10.23), with aL = bLCr/αCr = 0.045/0.3 = 0.15; then:

$$c = \frac{(1 - 0.15)^2 \times 4.5}{0.15} + (1 - 0.15)\times 2.9 = 24$$

$$IH_L = f\,g\,c\,\ell\,d^3 = 0.5\times 0.25\times 24\times 1\times (0.1)^3 = 3\times 10^{-3}\,\text{grams}$$

If we assume ML is very large when compared to MS, we obtain:

$$M_S = \frac{IH_L}{s_{FSE}^2} = \frac{3\times 10^{-3}}{2.5\times 10^{-5}} = 120\,\text{grams}$$

When such a small sample weight is obtained, do not forget that there are other kinds of sampling errors that may dictate what the sample weight should be, such as IDE, IEE, and so on.
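A minimal sketch of this example, using equation (10.23) for the mineralogical factor c and then IHL = f·g·c·ℓ·d³, is given below for illustration; the numbers are those of the text.

```python
# Sketch of the chromite example: equation (10.23) for the mineralogical
# factor c, then IH_L = f*g*c*l*d^3 and the minimum sample mass (d in cm).

def mineralogical_factor(a_L, rho_M, rho_g):
    """Equation (10.23): c = (1 - a_L)^2 * rho_M / a_L + (1 - a_L) * rho_g."""
    return (1.0 - a_L) ** 2 * rho_M / a_L + (1.0 - a_L) * rho_g

a_L = 0.045 / 0.30                    # chromite content: b_LCr / alpha_Cr = 0.15
c = mineralogical_factor(a_L, rho_M=4.5, rho_g=2.9)   # ~24
IH_L = 0.5 * 0.25 * c * 1.0 * 0.1**3                  # f=0.5, g=0.25, l=1, d=0.1 cm
M_S = IH_L / 0.005**2                 # s_FSE = 0.5% (i.e., 2s_FSE = 1%)
print(c, IH_L, M_S)                   # ~24, ~3e-3 g, ~120 g
```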



30.6. Notion of Maximum Acceptable Fundamental Sampling Error

Usually, the guidelines we recommend are the following:
• sFSE ≤ ±0.5% for commercial sampling.
• sFSE ≤ ±5% for technical sampling, material balance, and process control.
• sFSE ≤ ±16% for exploration, grade control, and environmental sampling.
Several practical examples presented in this chapter seem to indicate that a standard deviation sFSE = ±16% is an upper limit beyond which the Fundamental Sampling Error FSE becomes unacceptable. In fact, if the conditions of sampling correctness are strictly respected, nothing indicates that sFSE could not be larger than ±16%. The reader may wonder what is magical about this value. The logic of this guideline is carefully explained in Section 11.8 of this book. The Theory of Sampling being a preventive “tool,” it is our recommendation to avoid, as long as possible, the generation of samples whose estimates of the content of the constituent of interest are Poisson distributed. The reason is obvious: many estimates may become much too low and a few much too high, which renders a single estimate completely meaningless. In many instances, the nonrecognition of this fact can be devastating for the decisions that the engineer has to make.

30.6.1. Equation and Shape of a Poisson Distribution

The Poisson model was introduced in Section 3.8. A Poisson distribution is often encountered when an operator has ground a sample, and the gold particles, which cannot comminute, occur with a very small probability. To simplify our discussion, we assume that we are dealing with liberated gold. Let's define the following parameters:
P(x = r): the probability that r low-frequency gold particles appear in the sample. We refer only to the largest gold particles of size dAu, which have an overwhelming effect on FSE.
θ: the hypothetical average number of low-frequency gold particles of size dAu per sample.
Thus, we can write:

$$P(x = r) = \frac{\theta^r}{r!}\,e^{-\theta} \qquad (30.35)$$

A Poisson distribution has the remarkable property of having its mean equal to its variance, which is itself equal to the hypothetical average number θ of low-frequency gold particles per sample:

$$m(a_{S_i}) = s^2(a_{S_i}) = \theta \qquad (30.36)$$

where aSi is the estimate obtained for the replicate sample Si. The shapes for various values of θ are illustrated in Figure 30.4. It is important to note that the highest probability P(x = r) occurs when r is less than θ, except when θ is an integer. If θ is an integer we have:

[Figure not reproduced: six panels of P(x = r) versus r (0 to 8), for θ = 0.5, 1, 2, 3, 4, and 5.]

FIGURE 30.4 Different possible shapes for a Poisson distribution, θ being the hypothetical average number of gold particles or clusters per sample.

$$P(x = \theta) = P(x = \theta - 1) \qquad (30.37)$$

When θ > 4 or 5, the mode of the distribution is practically equal to the mean of the distribution and the distribution appears symmetrical. When θ ≤ 3, an estimate made with a single sample is likely to be biased. As soon as we consider several samples, their sum may contain an average number of low-frequency gold particles greater than four or five; then the average of the estimates given by these samples may be unbiased.

Conclusion: If θ becomes too small, the most probable result given by the estimate of a single sample rapidly drifts away from the true average. If no consideration is



given to a dangerously increasing Fundamental Sampling Error, very painful experiences are likely to develop.

30.6.2. The Most Probable Result

There are cases where several thousand grams of material are needed to include a single large gold particle of size dAu. Still, the gold analysis is commonly performed on 30-g or even 10-g subsamples; θ becomes so low that a triplicate analysis may provide exactly the same results around the mode, despite the fact that the mode is far away from the true average gold content. This can be particularly misleading during the planning of a mine, when very few analyses are at our disposal to determine whether an entire mining block is above or below the economic cutoff grade. How can a mine be well managed if the estimated gold content of samples bears very little resemblance to their true unknown gold content? Even a talented geostatistician is embarrassed to answer such a question. Figure 30.5 shows an example of the drift of the most probable result away from the true average when a single sample is too small and affected by a very large Fundamental Sampling Error.

This brings up an important point for sampling practitioners: it is not a good idea to calculate the variance of FSE for sample weights that are too small by one or two orders of magnitude. Instead, the practitioner should calculate the sample weight necessary to avoid a Poisson distribution affected by excessive skewness; this would eliminate many unnecessary debates where Gy's work is vastly misused, and then criticized.

Conclusion: The suggested upper limit of sFSE = ±16% is only arbitrary. We think it is a safe limit which ensures that a single sample will contain enough large gold particles of size dAu to keep the most probable result not too far away from the true average. The reader must remember that this limit is entirely unacceptable for process control and for commercial transactions.
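The drift of the most probable result is easy to demonstrate numerically. The sketch below tabulates the Poisson probabilities of equation (30.35) for a few values of θ; note how often a small sample (small θ) contains no coarse gold particle at all.

```python
# Sketch of equation (30.35): P(x = r) = theta^r * exp(-theta) / r!
# Tabulates the probability of r coarse gold particles per sample.
from math import exp, factorial

def poisson_pmf(r, theta):
    return theta ** r * exp(-theta) / factorial(r)

for theta in (0.5, 1.0, 3.0, 5.0):
    probs = [poisson_pmf(r, theta) for r in range(8)]
    mode = max(range(8), key=lambda r: probs[r])
    print(f"theta = {theta}: P(x=0) = {probs[0]:.2f}, mode = {mode}")
# For theta = 0.5 the most probable sample contains no coarse gold at all:
# most assays understate the grade, while a few grossly overstate it.
```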

[Figure not reproduced: range of results and most probable result in g/t gold versus sample weight (1 g to 10000 g), shown relative to the true, unknown gold content and the low background gold content.]

FIGURE 30.5 Illustration of the drift of the most probable result as samples become too small.


31 Sampling of Liquid and Solid Wastes and Sampling of the Environment

31.1. Introduction

Environmental protection agencies have been created in many countries around the world because, as an industrial society, we have failed to evolve and grow in harmony with the fragile heritage that was given to us: the environment in which we live and prosper. Cost-effective management of this extremely complex environment requires an integrated effort from many competent people such as geologists, metallurgists, hydrologists, biologists, chemists, statisticians, lawyers, and so on. All of these people have one thing in common: the decisions they make depend heavily on estimates made by proxy on samples supposedly representative of a selected fraction of the environment. Measuring the progression and the impacts of an environmental problem, and finding solutions, are goals that are successfully reached only if all sources of sampling errors are well understood. Therefore, in the following discussions we assume that the reader is already familiar with the content of the previous 30 chapters of this book. There are two equally important sources of sampling errors:
1. All materials studied in the environment are heterogeneous. The subject was studied in Parts I through VI.
2. All sampling processes used to implement a sampling plan are either correct or incorrect. The subject was studied in Part VII.
The study of the various forms of heterogeneity is essential to the environmentalist who wants to detect and understand random, nonrandom, and cyclic fluctuations affecting the behavior of a given pollutant. Yet, very little attention has been given to sampling errors such as:
• The Fundamental Sampling Error FSE
• The Grouping and Segregation Error GSE
• The long-range Heterogeneity Fluctuation Error HFE2
• The periodic Heterogeneity Fluctuation Error HFE3.


Only a good understanding of the heterogeneity carried by a potential pollutant in a given environment can lead to the development of an appropriate and effective sampling plan. To be cost effective, this sampling plan often consists of an iterative action. Now, the practical implementation of a sampling plan can be correct or incorrect. Sampling errors due to incorrectness are numerous and of great concern because they are difficult to detect and to quantify; these sampling errors are:

• The Increment Delimitation Error IDE
• The Increment Extraction Error IEE
• The Increment Preparation Errors IPE.

These errors always affect the accuracy of a sample, resulting in sampling biases. These errors are critical, yet they have not been given the attention they deserve, with the exception perhaps of IPE. The development of the liquid waste sampler called "Coliwasa" (i.e., composite liquid waste sampler) is a good example of a small step made recently in the right direction to minimize errors such as IDE and IEE. It should be clearly understood that it is entirely up to us to perform correct sampling. Correct sampling is often completely uncorrelated with sampling cost. Excuses to perform incorrect sampling cannot be justified by time and budget limitations. They are often justified by the desire to comply with standards implemented by committees who know very little about sampling. It is a fact that, for environmental sampling, the technology of correct samplers remains to be developed, or at least improved. We hope that the contents of this chapter will be helpful to those who are in charge of developing sampling systems to be used in the environment and will bring forth progressive new ideas.

31.2. Key Questions and Issues Specifically Related to Sampling in the Environment

Sampling of mineral ores, products, concentrates, coal, or chemicals presents many difficulties that were reviewed in the previous 30 chapters of this book. Sampling the environment presents the same difficulties; however, there are a few peculiarities that could lead to important issues. These peculiarities need to be addressed. When a given pollutant creates an environmental concern, the first questions the environmentalist asks himself are: "At what level does this pollutant cause a health risk? Consequently, what should its appropriate regulatory threshold be?" Obviously, answers to these two questions have a tremendous impact on the development of an appropriate sampling plan. From previous chapters, we know how to prepare such a plan. So far, there is no apparent difficulty but, as a direct consequence of the notion of heterogeneity, the environmentalist could ask himself another question: "Should the notion of threshold be relative to the sample size?" For example, let's assume that a pollutant is present at a level well below the regulatory threshold when large samples of weight MS2 are collected and analyzed. It could very well be that much smaller samples of weight MS1 (e.g., MS1 = MS2/100), collected at the same locations as the large samples, will most of the time show lower pollutant amounts and, sometimes, will show levels higher than the regulatory threshold. This is observed when samples are small enough to skew the distribution of assay values until the distribution obeys a Poisson law: many values are much too low, and a few are much too high.


Important recommendation: It is critically important for the environmentalist to become familiar with Part V of this book, as it has profound implications for sampling the environment. This immediately leads to more questions: If this pollutant behaves erratically because of its heterogeneity, is there a small chance that the same pollutant, at times, becomes very hazardous because of a small-scale clustering behavior, even though estimates from large samples are always below the threshold? Does a large sample cover the nature of the real problem? Is it necessary to completely redefine the notion of threshold or, at least, connect it somehow with the notion of heterogeneity?

Again, this leads to more questions: Is the strength of the bond between a pollutant and its support (e.g., sediment, mineral, vegetal, microorganism, colloid, and so on) an important criterion to define the value of a regulatory threshold? Is the heterogeneity carried by a pollutant a function of the strength of this bond? Is the amount of heterogeneity carried by a pollutant correlated to the amount of heterogeneity carried by its support?

In this chapter, we will not bring satisfactory answers to these delicate questions; however, the environmentalist may realize by now that sampling the environment is not a simple matter. It also appears that a strong standardization of sampling plans may not be a great idea. Many are those who encourage standardization to ensure possible comparisons from one environmental site to another. The idea is legitimate. Indeed, a sampling strategy can be and should be standardized. The implementation of sampling plans should use standardized sampling systems whose primary quality is correctness. The standardization of an incorrect sampler, as is often done, should be carefully prevented. However, the notion of heterogeneity being what it is, the standardization of sampling plans and sampling protocols is total nonsense. Each case is unique, and each pollutant involved carries a particular amount of heterogeneity. There is no possible way to guess what this amount of heterogeneity will be without performing preliminary iterative tests. Then, and only then, can we standardize a sampling protocol, for a local condition only. In other words, the standardization of a strategy and of correct sampling systems can be very general, while the standardization of a protocol can only be local. This distinction is of paramount importance to make. The standardization of a sampling strategy and of correct sampling systems can tremendously reduce lag time in updating or developing new methodologies as regulations change; but, and this is very important, enough flexibility should be allowed in those regulations to adjust sampling protocols to special local conditions characterized by the amount of heterogeneity carried by the pollutants of interest. This is the essence of a logical approach.

31.3. A Logical Approach


The most difficult part of an environmental sampling problem is its formalization, which requires solid knowledge in conventional and nonconventional statistics such as geostatistics, knowledge of the Theory of Sampling, but also knowledge of the domain in which the problem takes place, such as geochemistry, hydrology, biochemistry, sedimentology, and so on. Therefore, the statistician alone cannot decide upon an appropriate sampling plan. The environmentalist (e.g., geochemist, biologist, sedimentologist, chemist, metallurgist, and so on) should not only participate in the development of appropriate sampling plans, but should also initiate them in a logical way. Thus, the environmentalist should be familiar with the various parts of the Theory of Sampling. An effective coordination between the environmentalist and those in charge of the necessary logistic support for the implementation of a sampling plan is a must. Special local conditions such as topography, temperature, water flows, meteorology, tides, and so on, are likely to play an important role. Indeed, there are numerous factors that may directly or indirectly affect the validity of a sampling plan. The environmentalist should ascertain that the sample collector will proceed exactly as indicated in the sampling plan, and that all the necessary and pertinent information will be logged. All this is already the object of well-established standard procedures, which are valid only if the environmentalist can quickly provide an effective sampling plan. It is impossible to prepare a valid plan without first characterizing the various kinds of heterogeneity carried by a pollutant in a particular waste. This part is often treated so superficially that the conclusions, reached after a thorough analysis of the data, are nothing short of ambiguous. Furthermore, the implementation of the sampling plan can be successful only when the environmentalist can quickly make a judgment on the correctness of a sampling system. This critical part is not well known by the public and is rarely addressed. It is the largest source of sampling biases.

31.3.1. Structural Property of a Pollutant

The most relevant structural property of a pollutant is characterized by the amount of heterogeneity h it carries. We saw, in Part I of this book, that in a one-dimensional waste (or a two- or three-dimensional waste by following a given direction) carrying an amount of heterogeneity h, that heterogeneity can be decomposed into three parts:

h = h1 + h2 + h3 (31.1)

where:
h1 is the small-scale heterogeneity. It consists of the random fluctuations shown by the pollutant at the scale of the sampling tool.
h2 is the large-scale heterogeneity introduced by local trends. Those are nonrandom fluctuations; they have a cause.
h3 is the heterogeneity introduced by cyclic phenomena. They are numerous.

We also have, for a zero-dimensional waste:

h = h1 (31.2)

If we purposely consider a one-dimensional waste, or a multidimensional waste, as a zero-dimensional waste (in a way, the naïve view of a waste), then h3 cancels and h2 becomes part of h1 (a lot of valuable information is lost).


A one-dimensional waste, or a multidimensional waste, can become a zero-dimensional waste, by choice, if the chronological order of the samples collected to monitor a pollutant is voluntarily ignored. Conventional statistics often ignore the chronological occurrence of a pollutant. The trouble is that, most of the time, the occurrence of a pollutant is not a random phenomenon; only our model is.

Conclusion #1: For zero-dimensional wastes, we should characterize h1 by calculating the Intrinsic Heterogeneity IHL. When we have an idea about the value of IHL we can predict the variance of the Fundamental Sampling Error FSE, and the variance of the Grouping and Segregation Error GSE. Thus, we can select an appropriate sampling protocol with respect to the increment weight, sample weight, and subsample weight. We can also optimize the number of increments per sample collected over a small scale (i.e., the scale of the sampling device) that will somewhat minimize the random variance.

Conclusion #2: For one-dimensional wastes (or two- or three-dimensional wastes by following a given direction), we should characterize h1, h2, and h3 by conducting variographic experiments, or by calculating variograms with existing data. With the variogram we can identify all sources of the pollutant fluctuations; therefore, we can decide upon an appropriate sampling protocol with respect to the increment weight, sample weight, and subsample weight, the optimum number of increments per sample, the small-scale spacing between increments, the large-scale spacing between samples, and the most adequate sampling mode (i.e., random systematic or stratified random).

31.3.2. Structural Property of Sampling Correctness

Sampling must be correct. It is often stated that sampling accuracy is usually achieved by some form of random sampling. It is true, and it would indeed be very easy to achieve sampling accuracy if randomness were the only factor on which it depends. Unfortunately, in the "real world," the implementation of probabilistic sampling is much more complicated. Accuracy depends mainly on sampling correctness, which is a structural property of the sampling process.

Conclusion: If the sampling process is not correct, the sampling operation cannot be accurate, regardless of how good the sampling plan is. Furthermore, we cannot directly control accuracy; however, we can directly control correctness. We also know that a correct sampling operation is always accurate. Many are those who are tempted to test an incorrect sampling system for its accuracy. The trouble is that an incorrect sampling system can be circumstantially accurate today, biased in one direction tomorrow, and biased in another direction the day after tomorrow. In sampling, there is no such thing as a constant bias. The desire to control accuracy without controlling sampling correctness is certainly the worst judgment error that a person can make. It is a direct departure from basic logic.

31.3.3. Interaction between Sampling Cost, Accuracy, Precision, and the Regulatory Threshold

Intuitively, it is logical to think that the closer the expected level of a given pollutant comes to its regulatory threshold, the better the precision should be; therefore, the greater the sampling effort and sampling cost. Contrary to widespread opinion, the same statement does not apply to accuracy. Accurate sampling mainly depends on the correctness of the sampling system. In many cases, a correct sampling system is less expensive than an incorrect one.


In fact, there are many sophisticated online sampling systems and analyzers that are totally incorrect; on-stack emission monitoring for sulfur dioxide is an excellent example. The achievement of sampling accuracy is not a matter of cost, but rather a matter of awareness and a matter of choice. The trouble is that correct sampling systems for sampling wastes in the environment are almost nonexistent. These systems remain to be developed, tested, and implemented. This important aspect of the correctness of environmental sampling will be discussed later in this chapter.

31.3.4. Standardization of a Sampling Strategy

If we could standardize a sampling strategy for the sampling of liquid and solid wastes, the following four essential steps should be considered:

1. Modeling the waste:
• Zero-dimensional (easy to solve)
• One-dimensional (easy to solve)
• Two-dimensional (difficult to solve)
• Three-dimensional (often impossible to solve)

2. Characterizing the heterogeneity carried by the pollutant of interest in the waste:
• Determine IHL for zero-dimensional wastes.
• Determine HFE1, HFE2, and HFE3 for one-dimensional wastes.
• Two- and three-dimensional wastes can also be investigated as one-dimensional wastes, except that several directions or transects must be considered. This goes beyond the purpose of this book and enters the domain of geostatistics. We should also remember that sampling correctness is difficult to achieve with two-dimensional wastes, and often impossible to achieve with three-dimensional wastes.

3. Developing an appropriate sampling plan:
• Transform a three-dimensional waste into a sum of several superimposed two-dimensional wastes each time it is possible. This is what geologists do to study a three-dimensional mineral deposit.
• Determine appropriate sample and subsample weights with respect to IHL or HFE1, and the targeted level of precision (DQO).
• Determine the appropriate number of increments per sample and subsample.
• Select an appropriate selection mode with respect to IHL, HFE1, HFE2, and HFE3. Remember that a selection mode that does not include randomness is not a valid one.
• Optimize the laboratory subsampling plan.

4. Implementation of the sampling plan:
• Thorough verification of the correctness of the sampling systems, in the field and at the laboratory.

None of these four major steps can or should be overlooked in any sampling strategy concerning the sampling of liquid and solid wastes.


31.4. The Components of the Overall Sampling Error

This section is not a repetition of the contents of Chapter 4. Its objective is to make a clear correlation between our vocabulary and the usual vocabulary used by environmentalists. An opening remark is necessary: when the environmentalist states that a waste is heterogeneous in terms of its chemical properties, for example, it should be clearly understood that this statement is meaningless as long as he does not specify what kind of heterogeneity he is referring to. The reason is simple: causes, effects, and cures are different.

31.4.1. Errors Generated in the Sampling of Zero-Dimensional Wastes

In this category we include all small wastes that are considered essentially as statistical populations where there is no obvious order. This includes nearly all samples containing particulate solids that are submitted to a subsampling operation at the laboratory, and also in the field where subsampling may be performed for practical purposes.

FSE: The Fundamental Sampling Error is introduced by the variability of a pollutant from one fragment to another. It is the Constitution Heterogeneity. Obviously, this error is negligible for liquids and gases without suspended solids. It is small for liquids and gases with fine suspended solids. It often becomes overwhelming with particulate solids. Subsampling protocols of particulate wastes used by certified laboratories are often naïve and inappropriate. It should be mandatory to write on analytical logs both:

• The maximum particle size of a subsample
• The weight of a subsample

This should be done for all sampling and subsampling stages, including the final subsampling stage at the balance room where the final subsample is collected to perform chemical assays.

GSE: The Grouping and Segregation Error. This error is generated by the Distribution Heterogeneity. It is the variability of a pollutant from one collected increment to another. The most effective way to minimize GSE is to increase the number of increments. It should be clear that, for a given sample weight, the number of increments has no effect on FSE.

31.4.2. Errors Generated by the Sampling of One-Dimensional Wastes

In this category we include larger wastes or flowing streams that we intend to sample as a function of distance by following a given transect, or as a function of time at a given point in a river or emission stack.

HFE1: The short-range Heterogeneity Fluctuation Error introduced by h1. It is a small-scale random error and we may write:

HFE1 = FSE + GSE (31.3)


HFE2: The long-range Heterogeneity Fluctuation Error introduced by h2. It is a large-scale nonrandom error. It is generated by the pollutant trends, whose origin needs to be understood and often goes back to the waste producer.

HFE3: The periodic Heterogeneity Fluctuation Error introduced by h3. Cycles may be directly caused by the waste producer, or they may have natural origins (e.g., fluctuations of a pollutant regulated by intermittent discharges, or by the effect of tides, or by weekend activities, or by day and night, and so on). This sampling error may become important if sampling is implemented at regular time intervals in phase with the period of the cycle; in this case, selecting a stratified random sampling mode would be advised.

31.4.3. Errors Generated by Incorrect Sampling

FSE, GSE, HFE1, HFE2, and HFE3 are the types of sampling errors that can be minimized by the development of appropriate sampling plans. However, there are other kinds of sampling errors that can be introduced during the implementation of a sampling plan, and these errors are dangerous because they are notorious bias generators.

IDE: The Increment Delimitation Error. The sampling tool should be capable of delimiting the volume of an increment in such a way that all parts of the waste have exactly the same probability of being part of the model increment. Many people may think this is a trivial detail; actually, it is a very serious matter with stunning consequences if it is not performed the right way. This error is not well known by sampling practitioners.

IEE: The Increment Extraction Error. When the model increment is correctly delimited within the waste, its actual extraction (i.e., recovery) remains to be realized. This extraction is an error-generating process that can lead to huge biases. In a way, the sampling tool is designed or used in such a way that it becomes selective about what it takes, which is an intolerable mistake in probabilistic sampling. This error is also not well known and rarely addressed properly.

IWE: The Increment Weighting Error. Every time compositing is involved, either on a small scale to minimize GSE or on a large scale to minimize HFE2, it is advisable to collect increments weighted with respect to the amount of material present at the sampling point.

IPE: The Increment Preparation Errors. These errors include all nonselective sampling errors, which are introduced by all steps that are likely to alter the integrity of the actual extracted increments (e.g., contamination, losses, alteration of the physical or chemical composition, human errors, fraud, sabotage, and so on). These Increment Preparation Errors are not studied in this chapter because they have been the object of effective studies and guidelines produced by environmental agencies.

31.4.4. The Analytical Error

AE: The Analytical Error does not include the last subsampling error at the balance room, which is a selection process generating sampling errors. The Analytical Error only includes operating errors, weighing baseline errors due to rapidly evolving moisture contents, calibration errors when using Reference Materials, errors generated by interferences, and so on. See the Ingamells and Pitard textbook.22 All these errors are summarized in Figure 4.2.


31.5. Characterization of the Heterogeneity Carried by a Pollutant

For obvious economic reasons, environmental sampling must remain as inexpensive as possible. Therefore, it is out of the question to proceed with long and expensive tests like those described earlier in this book. However, simplified versions of the same tests may be used to obtain the basic information necessary for the optimization of a sampling plan. Emphasis should be placed on the fact that there is no empirical way to decide in advance on the minimum sampling effort that should be performed on a waste. As far as heterogeneity is concerned, experience from one waste transposed to another waste is always illegitimate. For a given pollutant, the amount of the various kinds of heterogeneity it carries in one waste can be completely different in another waste, even if the wastes appear similar. For the same reasons, generalization from one pollutant to another within the same waste is illegitimate. The environmentalist should not confuse the notion of a common sampling strategy, which is always legitimate, with the notion of a common sampling plan, which is often illegitimate. Thus, the effective monitoring of a given pollutant, within a given waste, implies that a minimum effort is made that allows a satisfactory characterization of h1, h2, and h3, and the quantitative prediction of the variance of the various sampling errors FSE, GSE, HFE2, and HFE3.

31.5.1. Heterogeneity of a Zero-Dimensional Waste

The test described in Section 14.4 is perfectly appropriate, with:

N1 = N2 ≥ 10 (31.4)

Because the regulatory threshold of many pollutants is at the trace level, it would be beneficial for the reader to become familiar with the contents of Part V. With the knowledge of IHL, or at least its order of magnitude, it becomes possible to predict what the appropriate sample weight should be and to decide upon a sampling protocol. The objective is to find out whether the sample should weigh 1, 10, 100, or 1000 g, . . . or 100 kg, or even 1 ton! If a 1-ton sample is needed, some people may become offended because it may not be economically feasible to collect it. In such a case the answer is straightforward: if a very large sample is necessary, it is better to be aware of it, and it should be mentioned in the sampling log as a critical issue.
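As a rough sketch of how this decision can be made, the following Python lines use the classical relation s²FSE = (1/MS − 1/ML)·IHL from earlier chapters; the IHL and lot weight below are hypothetical values chosen only for illustration.

# Sketch: choosing a sample weight once IHL is known, assuming the
# classical relation s2_FSE = (1/M_S - 1/M_L) * IHL (hypothetical values).
IHL = 50.0      # g, intrinsic heterogeneity of the waste
M_L = 1.0e6     # g, weight of the lot

def s_fse(m_s):
    """Relative standard deviation of FSE for a sample of weight m_s (g)."""
    return ((1.0 / m_s - 1.0 / M_L) * IHL) ** 0.5

for m_s in (1, 10, 100, 1000, 10_000, 100_000):
    print(f"M_S = {m_s:>7} g  ->  s_FSE = {100 * s_fse(m_s):7.1f} %")

# The smallest M_S whose s_FSE meets the allotted precision (e.g., 16%)
# follows from inverting the relation: M_S >= IHL / (0.16**2 + IHL / M_L).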

31.5.2. Heterogeneity of a One-Dimensional Waste

In Chapters 7 and 8, we explained that variographic experiments can be conducted to investigate the heterogeneity carried by the pollutants of a one-dimensional waste. Monitoring water streams, coal on conveyor belts, on-stack emissions, and so on, often necessitates the collection of many samples at regular intervals. Therefore, it is not unusual to discover that the variographic experiment can conveniently be performed on already existing data.


With these existing data we may calculate variograms and proceed with their point-by-point interpretation, as explained in Section 7.12. We emphasize that the point-by-point technique was developed to facilitate the use of small computers in the interpretation of variograms. This eliminates complicated and unnecessary mathematical modeling. Another point should be emphasized: the interpretation of the variogram goes far beyond the sole purpose of optimizing the sampling plan. It can also be used in the interpretation of routine data, allowing a characterization of all sources of the pollutant fluctuations. The variogram is a much more effective "tool" than the usual and simplistic control chart. In fact, the variogram is an attractive complement to the control chart.
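A minimal sketch of such a point-by-point calculation (Python; computed here directly on a hypothetical series of assays taken at a constant interval, not on real monitoring data) could look as follows:

import numpy as np

def variogram(a, max_lag):
    """Point-by-point experimental variogram of a series of assays a[0..N-1]
    taken at a constant interval: v(j) = sum_i (a[i+j] - a[i])^2 / (2 (N - j))."""
    a = np.asarray(a, dtype=float)
    n = len(a)
    return {j: float(np.sum((a[j:] - a[:-j]) ** 2) / (2.0 * (n - j)))
            for j in range(1, max_lag + 1)}

# Hypothetical monitoring data: trend (h2) + daily cycle (h3) + random term (h1)
rng = np.random.default_rng(1)
t = np.arange(240)
assays = 5.0 + 0.01 * t + 0.8 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.5, t.size)

v = variogram(assays, max_lag=72)
# The intercept near j = 0 estimates the random term h1; a steady rise with j
# reveals the trend h2; bumps at j = 24, 48, 72 reveal the cycle h3.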

31.5.3. Heterogeneity of a Two- or Three-Dimensional Waste

If we make the optimistic assumption that the correct implementation of a sampling plan for two- or three-dimensional wastes is possible, then the solution to the problem is to find out whether we are dealing with large-scale trends or cycles, or with small-scale composition and distribution heterogeneity. The first case is a generalization of the heterogeneities h2 and h3 studied in one-dimensional wastes. The second case is the type of heterogeneity h1 characterized with zero- and one-dimensional wastes. A quick preliminary test can be performed by the environmentalist to find out toward which type of heterogeneity the sampling effort should be directed.

First, it is recommended to stratify a three-dimensional waste into a sum of two-dimensional wastes and investigate one two-dimensional stratum at a time. The thickness of each stratum is dictated by practical considerations connected either with the nature of each stratum or with the sampling tool used to implement the sampling plan. We shall recall that the condition of sampling correctness for a two-dimensional waste is the delimitation and extraction of a cylinder representing the total thickness of the stratum. Now, within each stratum, perform the test described in Section 14.4 after selecting the appropriate sampling mode (i.e., random systematic, stratified random, or simple random). The comparison of the variance s²1 of the content of a pollutant of interest from small samples with the variance s²2 of the content of the same pollutant from large samples can provide the necessary information to find out how to direct the sampling effort.

31.5.3.1. Case #1: s²1 = s²2 = Large variance

If the variance of the estimates from the small samples is roughly equal to the variance of the estimates from the large samples, and if this variance is large, we have a strong indication that the sample weight is relevant only to the second order. We are dealing with large-scale heterogeneity (e.g., local trends or cycles). The sampling plan can be optimized by using geostatistical techniques that are very effective in locating and quantifying local trends and cycles. Of course, the implementation of the sampling plan must still be correct, which is an entirely different issue to be resolved separately.


31.5.3.2. Case #2: s²1 = s²2 = Small variance

If the variance of the estimates from the small samples is roughly equal to the variance of the estimates from the large samples, and if this variance is small, we have an indication that the material is homogeneous, both on a large and on a small scale. Therefore, the weight of the small samples is adequate. This would be good news; however, it is a very unusual case.

31.5.3.3. Case #3: s²1 > s²2

If the variance of the estimates from the small samples is significantly larger than the variance of the estimates from the large samples, we have a strong indication that the sample weight is playing an important role. It is recommended to estimate the Optimum Sample Weight MSopt as explained in Section 14.10.2, and to evaluate the waste only with data generated by samples of weight MS ≥ MSopt. Several cases may take place:

1. MSopt < MS1: both series of estimates, from small and large samples, are valid.
2. MS1 < MSopt < MS2: only the series of estimates from large samples is valid.
3. MSopt > MS2: neither series of estimates, from small or large samples, is valid. We should iterate the sampling effort and prepare and implement a sampling plan using samples of weight MS = MSopt. Unfortunately, this third case is the most common one, because regulatory thresholds are extremely low, and it naturally leads to Case #4.

31.5.3.4. Case #4: s²1 < s²2

One might think that such a case is not possible. We agree that it appears rather odd, but it does happen frequently with trace pollutants. As far as we know, it is not necessarily generated by analytical problems. The only logical explanation is the one described in Chapter 15 with Poisson processes. It could be an indication that the small samples only reflect the homogeneous low background content of the pollutant under study. This is a case of great concern and should raise the suspicion of the environmentalist. The pollutant may exist at high levels, but it is difficult to bring this fact into evidence because the samples are too small by several orders of magnitude. It is recommended to repeat the test with a series of larger samples of weight MS3 > 10MS2. If the expected content of the pollutant is near the regulatory threshold, this repetition may save time by immediately revealing the nature of the problem. In this case, the averages from the two series of small and large samples are likely to be substantially different. Usually, the average of the small samples is smaller than the average of the large samples. This would confirm the fact that small samples are not capable of representing the low-frequency pollutant clusters. This form of heterogeneity can be environmentally dangerous because the small samples would give a false sense of safety and compliance.
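The decision logic of these four cases can be summarized in a short sketch (Python; the tolerance used to call two variances "roughly equal," the coefficient-of-variation limit, and the assay values are all hypothetical choices, not prescriptions from this book):

import statistics

def classify(small, large, rel_tol=0.25, cv_limit=0.3):
    """Rough classification of the four cases of Section 31.5.3 from assays of
    small samples and of large samples (thresholds are hypothetical)."""
    v1 = statistics.variance(small)   # s2_1, small samples
    v2 = statistics.variance(large)   # s2_2, large samples
    mean = statistics.mean(list(small) + list(large))
    if abs(v1 - v2) <= rel_tol * max(v1, v2):          # s2_1 ~ s2_2
        if v1 ** 0.5 > cv_limit * mean:
            return "Case #1: large-scale trends or cycles dominate"
        return "Case #2: roughly homogeneous; small-sample weight adequate"
    if v1 > v2:
        return "Case #3: sample weight matters; estimate MSopt (Section 14.10.2)"
    return "Case #4: suspect Poisson behavior; retest with samples > 10 x MS2"

small = [0.1, 0.1, 0.2, 0.1, 2.5, 0.1, 0.1]    # hypothetical ppm assays
large = [0.6, 0.8, 0.7, 0.9, 0.7, 0.8, 0.6]
print(classify(small, large))    # -> Case #3 for these numbers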


31.6. Development of an Appropriate Sampling Plan

When we know the model that best represents a waste, and the amount of the various kinds of heterogeneity a given pollutant carries, then, and only then, may we proceed with the development of an appropriate sampling plan. An appropriate sampling plan for a solid or liquid waste must be responsive to regulatory and statistical objectives over which we have control, and must also be responsive to the various kinds of heterogeneity carried by the pollutant in the waste, because they are primary structural properties of the waste.

31.6.1. Regulatory and Statistical Objectives

Environmental agencies have determined regulatory thresholds for many pollutants that cannot be equaled or exceeded in a waste. The effective management of hazardous wastes requires that representative samples be selected, delimited, extracted, prepared, properly preserved, and analyzed. We know that all these stages are error-generating stages. By definition, a representative sample must exhibit the average properties of a waste within given standards of accuracy and reproducibility. In other words, sampling must at the same time be correct, to ensure accuracy, and optimized, to ensure a certain standard of reproducibility. Accuracy is difficult to quantify; however, we know what the criteria are that guarantee correct sampling. The sample is accurate when these criteria are met, as explained in Part VII of this book. Precision is easier to quantify after proceeding with the few tests suggested in Chapters 14 and 15, whose results are used in this section to prepare appropriate sampling plans.

31.6.2. Zero-Dimensional Wastes

In this category we find all wastes that are small enough to be submitted to a true or degenerated splitting process. By definition, most waste samples submitted to a laboratory for analysis are zero-dimensional wastes. Every subsampling stage, including the weighing of the final analytical subsample, generates a Fundamental Sampling Error (FSE) and a Grouping and Segregation Error (GSE). It is not unusual for certified laboratories to receive large numbers of samples, representative or not, some collected and preserved at great expense, and watch the analyst dig out a tiny analytical subsample:

• with no respect for the sampling constants C or IHL
• with no knowledge of what the variance of FSE could be
• with no attempt to reduce the variance of GSE
• with no attempt to find out whether the tiny subsample is representative of all particle size fractions
• and with no respect for the most basic rules of increment delimitation and extraction correctness.


This is how many laboratories make money. If the reader is not convinced of this fact, he should ask for the analytical logs and look at the subsampling protocols. This is also how so-called reliable data are provided to lawyers in courts, whose decisions are critical for many industries. Sometimes we should be cautious that our quest for competitiveness between laboratories be a blessing for everyone and not a curse. Let's close this discussion hoping that environmental agencies will take proper actions to minimize the negative effects of a problem that has been overlooked for a long time. It is for the benefit of all of us.

The knowledge of the sampling constants C or IHL allows the environmentalist to select an optimum sample weight MSopt. This selection is made with respect to sampling cost and the maximum allotted variance s²FSE MAX of the total Fundamental Sampling Error, which includes the Fundamental Sampling Error generated by each sampling and subsampling stage. When sampling plans are made that way, the fluctuations of the pollutant under investigation in replicate samples taken at the same place are likely to be within predictable limits. Having an idea about the value of the variance s²FSE of the Fundamental Sampling Error for each sampling or subsampling stage, it is easy to minimize the variance of the Grouping and Segregation Error introduced by each of these stages. This is done by ensuring that each sample or subsample is made of as many correctly delimited and extracted increments as practically possible. It is important to note that, for the primary sample collected in the field, we are not talking about compositing on a large scale. We recommend compositing only on a small scale (e.g., the immediate surroundings of the selected sampling point). If N is the number of these increments per sample or subsample, we can write for each sampling or subsampling stage:

s²Total = s²FSE + s²Segregation/N (31.5)

The reader should notice the similarity of this formula to the equations presented in Section 14.3. As a rough guideline, if N = 10, s²GSE = s²Segregation/10 should be well below s²FSE. And, if N = 25, s²GSE should be very small. It should be emphasized that if N = 1, as is often the case with operators who know very little about the Theory of Sampling, s²GSE may be inflated because it is unlikely that the selection point is equiprobabilistic; there is a subtlety difficult to grasp for those who may think that s²GSE cannot be superior to s²FSE: this assumes that the sampling plan is well designed. But what could we do if, for some reason, we cannot perform the necessary preliminary tests to estimate the sampling constants C or IHL? The environmentalist should at least make sure that all samples are representative of all the particle size fractions. At this stage, before proceeding any further, the reader should make sure he or she is comfortable with the contents of Section 11.7.

Practical Example: A toxicity characteristic leaching procedure (TCLP) uses 100-g analytical subsamples, and the particle top size of the tested waste is 9.5 mm. Are those 100-g subsamples appropriate? If not, what should their weight be? We can use formula 11.38, assuming that f = 0.5, ρ = 2.7, and aL = 0.05; thus:


s²FSE = (0.5 × 2.7 / 100) × (1/0.05 − 2) × (0.95)³ = 0.208 (31.6)

sFSE = 46%

This extremely large standard deviation is not acceptable, as it is capable of introducing a Poisson process into the generated database. If we accept a maximum standard deviation of the Fundamental Sampling Error sFSE equal to ±16%, as suggested in Section 11.8.2.1, then using equation 11.38 the weight MS of the sample should be:

MS ≥ (0.5 × 2.7 / (0.16)²) × (1/0.05 − 2) × (0.95)³ = 814 g (31.7)

With an allotted standard deviation sFSE = ±16% and a 100-g sample, what should the maximum particle size d be?

d³ = (100 × (0.16)²) / (0.5 × 2.7 × (1/0.05 − 2)) = 0.105 cm³

d = 0.47 cm

Conclusion: For many wastes, though not necessarily for all of them, the small 100-g samples that are used to perform the TCLP test may generate unacceptable pollutant fluctuations, because a 100-g sample cannot be representative of a top size class as large as 0.95 cm. If previous tests show that such a statement is not true, it is only because the tested pollutant was homogeneously distributed among all size fractions, which is a limit case that should not be generalized. This leads to a practical recommendation that could be used in a regulatory guideline: a sampling and subsampling protocol should be considered nondefensible if the samples and subsamples it generates are not representative of all the fragment size fractions present in the material to be assessed. In this respect, it is our recommendation that such a guideline require that the maximum size fraction, defined as no more than 5% by weight over the size d, be represented in all sampling and subsampling stages with a maximum standard deviation of the Fundamental Sampling Error sFSE = ±16%.

Important remark #1: This guideline asks for a very minimum sampling effort. If less effort is made, then the protocol is not defensible. However, it should be clearly understood that complying with the guideline does not necessarily make the sampling protocol good. The reason is simple: the advantage of this approach is its simplicity; however, let's have no illusion, this approach tells us nothing about the intrinsic heterogeneity of a given pollutant. This being clear, the reader must use equations (31.7) and (11.38) with caution.


Important remark #2: How is it possible to collect representative samples if the waste is made of large fragments up to several feet across? Indeed, this is a difficult problem to solve satisfactorily. The following approach is recommended: find a reasonable cutoff by fragment size above which you have no more than 20% of the waste by weight. Then collect a representative sample from the 80% of the waste below the cutoff. If the results from the analyses are way above the regulatory threshold, you do not have to analyze the 20% above the cutoff. If the results are below or near the regulatory threshold, then you must sample the 20% above the cutoff; chips collected from large fragments become your only alternative, which is not very good. Again, you must always address the weaknesses of your sampling protocols, as awareness can go a long way in a fair interpretation.

31.6.3. One-Dimensional Wastes

When the variance of the various kinds of heterogeneity carried by a one-dimensional waste is determined, the development of an appropriate sampling plan becomes much easier; however, it still depends on what objectives the environmentalist pursues. There are several possible scenarios.

31.6.3.1. Estimation of the Average Content of a Pollutant over a Given Interval of Time or Space

The following comments assume that an iterative approach is used as a logical strategy. The knowledge of the variance s²HFE1 of the short-range Heterogeneity Fluctuation Error renders possible the optimization of the necessary sample weight MS and the number of increments per sample to be collected immediately around the sampling point. The knowledge of the variance s²HFE2 of the long-range Heterogeneity Fluctuation Error renders possible the optimization of the appropriate time or space interval between each sample making up the composite, if the objective is the calculation of an average over time or space only. The knowledge of the maximum possible variance s²HFE3 of the periodic Heterogeneity Fluctuation Error should dictate whether the random systematic selection mode of the samples to make a composite is safe or not. If not, the environmentalist should implement a stratified random selection mode, which is much less sensitive to cycles. We know already from Chapter 19 that the simple random selection mode is never better than the stratified random selection mode. The random systematic selection mode applied to a waste carrying a large amount of periodic heterogeneity can lead to very inaccurate samples.

31.6.3.2. Estimation of the Pollutant Content at a Given Instant or at a Given Point

The knowledge of the variance s²HFE1 of the short-range Heterogeneity Fluctuation Error renders possible the optimization of the sample weight MS and of the number of increments per sample to be collected immediately around the sampling point.

Practical example: Let's suppose that, to conduct a TCLP test, the optimum sample weight to meet a given standard deviation sFSE = ±16% is 814 g. Calculate the maximum standard deviation of the total sampling error due to the small-scale random heterogeneity fluctuations. Given the fact that only one 814-g increment is collected, and assuming that sampling is correct:


s²Total Max = s²FSE + s²GSE (31.8)

If sampling is correct (i.e., randomness in the sampling mode and IDE + IEE + IPE = 0), it is not likely that s²GSE will be larger than s²FSE when using only one increment. Therefore, as a rough guideline, we suggest the collection of at least 10 random increments to make up any sample. Of course, the increments are collected in the immediate neighborhood of the sampling point, and this should not be confused with a composite sample that would be collected on a larger scale to smooth out s²HFE2. Experience proves that if the sample is made of 25 random increments, s²GSE usually becomes very small.
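A short sketch of equation 31.8 shows why the 10- and 25-increment guidelines work (Python; the segregation variance is a hypothetical value set equal to s²FSE, the unfavorable case discussed above):

# Sketch: total small-scale variance versus the number of increments N,
# s2_Total = s2_FSE + s2_Segregation / N (hypothetical variances).
s2_fse = 0.16 ** 2        # allotted FSE variance, (16%)^2
s2_seg = 0.16 ** 2        # assume segregation variance as large as FSE

for n in (1, 10, 25, 50):
    s_total = (s2_fse + s2_seg / n) ** 0.5
    print(f"N = {n:>2} increments: s_Total = {s_total:.1%}")
# N = 1 leaves GSE as large as FSE (about 22.6%); N = 10 brings the total
# close to 16.8%; with N = 25 the GSE contribution is practically negligible.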

31.6.4. Two-Dimensional Wastes

These types of wastes are much more complicated to investigate. Unfortunately, they are very common cases. We already know from Part VII that the isotropic module of observation for such a waste is a cylinder representing the entire thickness of the waste. In what follows, we assume this condition is fulfilled and concentrate our attention on the sampling modes. There are five major sampling selection modes at our disposal:

1. strict random sampling
2. stratified random sampling
3. authoritative stratified random sampling
4. random systematic sampling
5. simple systematic sampling.

So far, we have never addressed authoritative sampling in this book because we do not recommend it. However, the authoritative approach is often necessary for environmental assessments, to separate hot sectors from others in a contaminated area. Then, each sector must be submitted to probabilistic sampling, of course. From a practical standpoint, some of these sampling selection modes should be rejected. A simple systematic sampling mode is not an option because there is no randomness involved, which is statistically wrong. The strict random sampling mode is never better than the stratified random sampling mode; therefore, it is not recommended. We shall therefore concentrate our attention only on the three following sampling selection modes:

1. random systematic sampling
2. stratified random sampling
3. authoritative stratified random sampling.

We shall review each of these modes in detail after we define a few terms necessary for a good understanding of this section.


31.6.4.1. Definition of Area, Sector, Stratum, Substratum, and Unit

Area: It is the entire starting lot for which a defensible assessment of the average level and variance of a given pollutant must be performed. Its surface and contour are dictated by local conditions: each case is a special case justifying an authoritative approach.

Sector: It is a fraction of the area. Its size is the object of rules that should have been established by regulatory agencies. If such rules do not exist, its size must be the object of recommendations from the statisticians; usually such a recommendation is impossible to make if an iterative sampling protocol is not used.28,31,173,177 If such an iterative protocol is not possible, we strongly recommend the approach described in Section 31.5.3. The contour of each sector can be either part of a systematic grid, as shown in Figure 31.1, or authoritatively selected because of obvious boundaries, as shown in Figure 31.2. The number of sectors shown in Figure 31.1 is limited to nine for the clarity of the picture, and can be a much larger number. Each sector must be the object of a complete, independent assessment of its average and variance of a given pollutant of concern.

Stratum: Each sector is submitted to the preselected sampling mode, either random systematic, authoritative stratified random, or stratified random. This is performed by dividing each sector into either a systematic grid, as shown in Figure 31.3, or an authoritative grid because of obvious boundaries, as shown in Figure 31.4. Within each stratum, one sample must be collected and analyzed for the pollutant of concern.

FIGURE 31.1 Sectors set as a systematic grid.


FIGURE 31.2 Sectors set authoritatively because of obvious boundaries; two of the sectors are identified as hot sectors.

The correct number of strata to be assessed depends highly on the health hazard at stake. Nevertheless, under no circumstances should fewer than four strata be assessed. If in doubt, it is suggested that at least nine strata be assessed, as described in Section 31.5.3.

Substratum: At this stage, only two sampling modes remain at our disposal: either random systematic or stratified random. In both cases, each stratum must be divided into a certain number of basic units called substrata. In the case of random systematic sampling, one substratum is selected at random in one of the strata. Sampling is performed at the center of, or at random within, this selected substratum. Then, the location of the selected substratum dictates the location of sampling in all the other strata. In the case of stratified random sampling, one substratum is selected at random in each stratum. Sampling can be performed at the center of the selected substratum. But the location of the selected substratum in one stratum does not dictate the location where another substratum will be selected in another stratum. The correct number of substrata into which each stratum should be divided can be controversial. It is the author's opinion that there is no need to make this number larger than the number of samples to be analyzed per sector. Yet, someone may argue that equiprobability should be unaltered all the way to the end of the sampling process; therefore, to satisfy everybody, it is suggested that each stratum within one sector be divided into at least 5 × 5 = 25 substrata, as suggested in Figure 31.6. It is not difficult to do, anyway.


FIGURE 31.3 Strata set within a sector as a systematic grid.

FIGURE 31.4 Strata set within a sector as an authoritative grid due to the presence of obvious boundaries; one stratum is identified as a hot stratum.

31.6.4.2. Random Systematic Sampling of an Area


There is no advantage in selecting a random systematic sampling mode over a stratified random one unless a very large number of samples is collected, which allows the use of sophisticated geostatistical techniques. In such a case, random systematic sampling may considerably simplify the calculations involved. But, most of the time, the number of samples that are collected is relatively small. Therefore, the use of geostatistical techniques becomes less effective. Under such conditions, it is better to keep the selection equiprobable for each stratum separately and select a stratified random sampling mode. Furthermore, anywhere authoritative contouring of some sectors and/or strata must take place, the random systematic sampling mode of selection is not recommended.

Summary of procedures: Systematic sampling involves a regular, preestablished pattern for selecting sampling locations. The location of the initial sampling point is randomly selected in the preselected stratum, so the procedure retains a random element. Using a map of the area under investigation, the area is divided into smaller, roughly equal-size sectors. The data acquisition and data analysis for each sector are managed independently. A systematic grid is laid out within each sector, dividing the sector into a certain number of strata, as shown in Figure 31.6. The number of strata corresponds to the predetermined number of sampling points. The initial stratum is divided into at least 25 substrata. A random number is generated between 1 and 25 to select one substratum where sampling must be performed. Taking all the increments making up the sample near the center of this substratum is acceptable. The location of this selected substratum dictates where the sampling points will be in all subsequent strata. Then, all the other sectors are sampled by following the same method (a sketch of both probabilistic selection modes is given at the end of the next subsection).

31.6.4.3. Stratified Random Sampling of an Area

Stratified random sampling is applicable to the monitoring of all waste piles or contaminated areas. It is particularly recommended when only a few samples are collected. It is by far the safest mode of sampling in the environment. It is always equiprobable and applies to the selection of a substratum in each stratum of a sector. If guidelines were to be simplified to the extreme, stratified random sampling should become mandatory. There are two cases:

Case #1: Sectors and/or strata are set as a systematic grid. In this case, there are no obvious boundaries between the various sectors and/or the various strata. The entire area is considered with an equal level of suspicion for pollution, and we know nothing about the distribution of the pollutant over the area. So, the unique starting criterion must satisfy the following guideline:

1. Each sector must have a surface smaller than the local regulatory limit.
2. The number of strata per sector must be at least four.
3. The monitoring data and data analysis for each sector must be managed independently.
4. Each sector must be assessed for the average content and variance of the pollutant of concern.

Figure 31.5 illustrates this kind of sampling mode.

Case #2: Sectors and/or strata are set as an authoritative grid. In this case, there are obvious boundaries between what could be slightly polluted and what could be highly polluted. Or, it may concern an area in which different wastes have been accumulated at different periods of time, by different processes, and by different people.


FIGURE 31.5 Diagram showing strata and substrata in one sector (nested subdivision: area → sector → stratum → substratum).

In all cases, it is suspected that there is nothing in common between some sectors and/or strata. Figures 31.2 and 31.4 illustrate such a case. Again, the monitoring data and data analysis for each sector are managed independently. However, in this case, it is possible and acceptable that the hot sector may be very different in surface and volume from the other sectors.

Summary of procedures: Each sector is first divided into smaller strata, either using a systematic grid or using an authoritative grid. The number of strata depends on the sampling effort that must be performed. But, under no circumstances should fewer than four strata be selected. The necessary number of strata can be calculated if an iterative sampling protocol is used. Again, some of these strata can be defined authoritatively because of obvious boundaries. Each stratum is then partitioned into smaller substrata according to a hypothetical and systematic grid. The locations of the sampling points are selected by choosing one substratum at random within each stratum. This sampling mode is completely illustrated in Figure 31.7.
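The two probabilistic selection modes just described can be sketched in a few lines (Python; the grid sizes are hypothetical, and coordinates are expressed as fractions of the sector's sides):

import random

def select_points(n_strata=(2, 2), n_sub=5, stratified=True, seed=None):
    """Sketch of substratum selection in one sector. stratified=True draws a
    fresh random substratum in each stratum (stratified random sampling);
    stratified=False reuses the first draw in every stratum (random systematic)."""
    rng = random.Random(seed)
    ix, iy = rng.randrange(n_sub), rng.randrange(n_sub)   # initial draw
    points = []
    for sx in range(n_strata[0]):
        for sy in range(n_strata[1]):
            if stratified:                                # new draw per stratum
                ix, iy = rng.randrange(n_sub), rng.randrange(n_sub)
            # sample near the center of the selected substratum
            x = (sx + (ix + 0.5) / n_sub) / n_strata[0]
            y = (sy + (iy + 0.5) / n_sub) / n_strata[1]
            points.append((round(x, 3), round(y, 3)))
    return points

print(select_points(stratified=False, seed=3))   # random systematic
print(select_points(stratified=True, seed=3))    # stratified random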

31.6.5. Three-Dimensional Wastes

These types of wastes present unsolvable sampling problems if they are not approached properly. Three-dimensional wastes are impossible to sample in an equiprobabilistic way. It should be clearly understood that when we sample any waste with incorrect sampling tools that do not respect the conditions of increment delimitation and extraction addressed in Part VII, it is impossible to collect samples that are accurate and precise enough to meet data quality objectives.


FIGURE 31.6 Diagram showing the selection at random of one substratum in the initial stratum of one sector where random systematic sampling is to be performed; the initial sampling point in the selected substratum dictates the subsequent sampling points.

Fortunately, it is entirely up to us to consider a waste as a one- or two-dimensional waste. If we consider it as a one-dimensional waste, we know that each increment must represent a full cross section of the waste. If we consider it as a two-dimensional waste, or a sum of several superimposed two-dimensional wastes, we know that each increment must represent the entire thickness of each two-dimensional slice. These rules may be greatly altered for the sake of practicality; in such a case it is the duty of the environmentalist to address the limitations on the representativeness of his samples in an unambiguous way. Practicality is often a poor excuse to perform mediocre sampling and to endorse the status quo. Yet, it is astounding to see that results generated by such specimens, mislabeled as samples, are used every day in court to reach extremely important decisions.

31.6.6. Areas That Would Benefit from the Implementation of a Variographic Experiment

A variographic experiment is the key for the environmentalist to characterize the various kinds of heterogeneity a given pollutant carries. It allows the optimization of a sampling plan that would best determine whether the regulatory threshold is exceeded for short or long periods of time, or at regular intervals.


FIGURE 31.7 Diagram showing the selection at random of one substratum in each stratum of one sector, where stratified random sampling is to be performed; after the initial sampling point, another random number is generated to select a substratum in each other stratum.

The knowledge of these kinds of heterogeneity can also be valuable for the waste producer in solving his problem by identifying the various sources of the pollutant fluctuations. It should be emphasized that variograms can be calculated with existing data in many instances. The objective of this section is to list a few areas that would benefit from variography, but there are many others.

31.6.6.1. Monitoring Sulfur in Coal

The monitoring of sulfur dioxide at fossil-fuel-fired steam generating plants is a difficult problem. Often, it is performed by monitoring on-stack emissions. Unfortunately, on-stack monitoring is nearly impossible to implement correctly. Sampling of gases presents many problems that have not been, to this day, satisfactorily solved. Sampling of suspended solids in gases prior to the boiler using online probes generates unsolvable sampling problems because the segregation of sulfur minerals in coal streams, between storage bins and pulverizers, and between pulverizers and the boiler, is likely to be high and nonreproducible. Therefore, sampling with online probes is inaccurate. A much better alternative is to accurately estimate the sulfur content of the coal using a regular automated sampling station. Such a station can conveniently be installed just before the entry of the storage bins. We know that it is always possible to correctly sample a one-dimensional coal stream.


We know that it is always possible to correctly sample a one-dimensional coal stream. We know how to solve sampling problems generated by the heterogeneity of sulfur minerals in one-dimensional coal streams, and we also know that the worst possible approach is to rely on online probes taking part of the stream part of the time. There are three conditions to fulfill if the environmentalist wants to accurately assess sulfur dioxide emissions:
1. The automated sampling station must be correct (i.e., minimization of IDE, IEE, IPE, and IWE). Most sampling stations on the market are incorrect, mainly because of the presence of manufacturers with conflicts of interest on standards committees.
2. The automated sampling station must be optimized by means of a variographic experiment (i.e., minimization of HFE1, HFE2, and HFE3).
3. The subsampling methodology must be correct and optimized (i.e., minimization of IDE, IEE, IPE, IWE, FSE, and GSE for all subsampling stages).
Some of the problems associated with the monitoring of sulfur emissions have been addressed by Wayne.209

31.6.6.2. Monitoring of In-Stream Suspended Sediments

Assuming that in-stream composite sampling can be performed correctly (see Part VII), which is a very optimistic assumption, the difficulty the environmentalist faces is to effectively quantify temporal or spatial pollutant fluctuations. Suspended sediments are so finely disseminated that the long-term heterogeneity fluctuations HFE2, and the periodic heterogeneity fluctuations HFE3, are likely to be much greater than the short-term heterogeneity fluctuations HFE1. However, this needs to be checked in every case. An in-stream sampling station located at inlets, at draw-off points, at reservoir outlets, at water plant intakes, and so on, can greatly benefit from a variographic evaluation of the data generated over a long time. Variograms can be used to optimize the composite sampling operation, and especially to select the optimum sampling intervals. Some of the problems associated with sampling in-stream suspended sediments have been addressed by Suffet et al.210

31.6.6.3. Efficiency of a Virus Detection Methodology

Rare and isolated dangerous viruses can present a serious threat to public health. If these low-frequency viruses are also clustering on the surface of some particulate materials whose nature promotes their development and survival, conventional virus detection methodology may fail to detect them. This particularly concerns the sampling aspect of the methodology. The implementation of the test suggested in Section 31.5.3, combined with variographic techniques, may render virus detection methodologies far more effective. It may also bring valuable information on the behavior of some viruses.

31.6.6.4. Monitoring Toxic Chemicals in the Marine and Estuarine Environments

The correct sampling of marine and estuarine wastes is very complex and extremely difficult to implement, as described in Section 31.7.


Nevertheless, the development of an appropriate sampling plan can be facilitated by conducting variographic experiments along several transects of interest. It is recommended to study one transect at a time. For example, we may study a transect parallel to the coast, then a transect perpendicular to the coast, then a transect from the surface of the water to the bottom. Many other considerations must also be taken into account by the environmentalist, such as the effects of tides, undercurrents, temperature changes, seasons, and so on. All these effects can be better understood by using variography. Whatever the variable of interest, the basic principle of the variographic interpretation of data remains the same.

31.6.6.5. Sampling of Potable Water Supplies

Potable water supplies are of critical importance for our society; therefore, certified authorities decide upon the frequency of sampling at fixed locations. For example, a water supply reservoir may regularly be monitored at inlets, at other possible sources of pollution such as agricultural or industrial contamination points, at draw-off points or water plant intakes, and finally at the reservoir outlet. Generally, all these locations remain the same for long periods of time. They are selected to obtain a representative set of data on a regular basis that allows a complete coverage of a given water supply. Often, the minimum number of samples that must be collected and examined each month is based on the density of the population served by the water plant. In fact, there is a direct correlation between the size of the served population and the magnitude of the sampling effort. We may wonder about the validity of such an approach, which appears rather empirical. Variographic experiments performed on a regular basis, or variograms calculated with existing data, could provide authorities with information that cannot be obtained as effectively otherwise:
• Expected water quality fluctuations over very short periods of time
• Expected water quality trends over long periods of time
• Existence and origin of periodic water quality fluctuations.
Variograms are particularly effective at detecting periodic phenomena that often escape attention on conventional control charts.

31.6.6.6. Sampling Shellfish to Monitor Contaminant Exposure

Sampling of shellfish in harvesting areas can be critical in determining whether their consumption is safe. Usually, such sampling is performed when hydrographic conditions are unfavorable. When a contaminant is detected we may immediately wonder if the observed fluctuations of this contaminant are random or not. We recommend the following approach if distance is the factor of interest:
• Starting from a given point along a preselected transect, collect at least 30 shellfish at very short intervals from one to another. Usually, when the intervals are very short, the direction of the transect is irrelevant. These intervals, called θmin, can be as short as 25 cm, but they largely depend on local conditions.


The variogram obtained with the analyses of the pollutant content of these 30 increments should mainly show random fluctuations.
• Then, 40 to 60 shellfish can be collected at regular intervals (e.g., θmin = 10 meters or more, depending on local conditions) by following the preselected transect. This time the direction of the transect should be highly relevant. Variograms calculated for several transects can give information on the direction followed by the pollutants, or on a different shellfish response to a given pollutant as a function of changing hydrographic conditions with distance. These are trends. A variogram showing a small slope may indicate that the selected transect is parallel to the direction followed by the pollutant, while a variogram showing a sharp slope may indicate that the selected transect is perpendicular or at an angle to the direction followed by the pollutant.
There is another approach if time is the factor of interest:
• Large numbers of shellfish, always collected at the same point, at regular time intervals preselected by shellfish sanitation committees, could provide the necessary data to detect trends and periodic phenomena, and to quantify the random component of these fluctuations. Again, the variogram is the best “tool” to detect and quantify these different variability components. With variograms it is much easier to estimate the period and the amplitude of a cyclic phenomenon. Many periodic phenomena can escape attention when data are plotted on conventional control charts. It is recommended to use the variogram as a complement to these control charts, not as an alternative.
Marine bivalves can sometimes be used to measure their ability to bioaccumulate contaminants such as PCBs, PAHs, chlorinated pesticides, and so on. This could be the perfect example where effects resulting from different causes add up to an overall effect with apparently erratic behavior. Variography could help the biologist understand and quantify the various effects he may be confronted with. Over a period of time, three kinds of effects may take place and overlap each other:
• h1: random fluctuations of the pollutant from one bivalve to the next one nearby.
• h2: long-range trends of the pollutant in the bivalve.
• h3: cyclic phenomena affecting the pollutant in the bivalve.
These various forms of heterogeneity can be induced either by fluctuations of the pollutant itself or by the ability of the bivalve to process it through its system. Failure to identify and quantify all these basic components individually may give the biologist the false impression that results are very erratic while, in fact, they are not. Only variography allows the detection of several effects generated by different causes.

31.6.6.7. Sampling Mushrooms in Their Natural Habitats

According to Stamets211 “mycelium is the neurological network of nature. Mycelium is the web of life,” and we still vastly underestimate the possibilities of this domain of biology.


The experiments described in the previous section could greatly help make profound advances in the many applications of mycelium and mushrooms in domains of remediation such as:
• mushrooms and targeted therapeutic effects
• antimicrobial properties of mushrooms
• viral disease prevention using mushrooms
• fighting fungal blights using mushrooms
• mycofiltration of microbial pathogens
• soil preservation using mushroom mycelium, and many more.
We cannot emphasize enough the importance of Stamets’ work over the last 30 years, and it is our strong belief that well-planned variographic experiments could make a breakthrough in this new, fascinating science.

31.7. Implementation of Sampling Plans

It would be naïve to believe that sampling accuracy is solely achieved by incorporating some form of randomness into the selection process. It is a necessary condition, but not a sufficient one. Sampling must also be correct, and the correct implementation of a sampling plan is probably the most challenging part of environmental sampling. In many cases this problem is not well known and not even addressed. The fact that standards committees approve sampling systems such as triers, thiefs, augers, or dippers does not alter the primary structural property of these pieces of equipment, which is to be basically incorrect, as explained at length in Part VII. They cannot perform a correct delimitation, nor a correct extraction, and as a result they cannot be proportional when performing composite sampling, opening the door to a minimum of three sampling biases. One might say “environmental agencies are in the business of solving problems; therefore, they should remain practical and do the job with whatever they have.” There is indeed nothing wrong with such a statement. Yet, it is necessary to recognize that sampling performed by hand, with bottles, triers, thiefs, augers, dippers, and so on, cannot generate correct samples. All these techniques are nonprobabilistic in nature, mainly because of the erratic geometry of the increments they delimit and the selective recoveries they provide. Because we know now that incorrect sampling tools cannot ensure accurate sampling, we also know that these incorrect sampling tools generate samples biased in one direction one day, accurate the next day, and biased in another direction the day after.
Conclusion #1: Sampling biases are never constant, as they depend on numerous forms of segregation, which are essentially transient phenomena. We cannot quantify a sampling bias; we can only make sure sampling tools are correct by design and in the way they are used.
Conclusion #2: Most of the sampling tools presently used to sample the environment are incorrect. It is a fact we have to live with, and it should be addressed and recognized. With time, correct sampling tools will be developed. The development of the “Coliwasa sampler” is a good step in the right direction and proves beyond any doubt that cost is not a barrier to correct sampling; awareness is.212


Conclusion #3: It is unthinkable to proceed with the standardization of sampling techniques as long as fundamental problems, such as those generated by sampling incorrectness, are not solved first. Many are tempted to implement standards without even solving the complex problems generated by the capital notion of sampling correctness.

31.7.1. Notion of Sampling Correctness in the Environment

The key would be to develop sampling systems that delimit a correct extended increment and extract a correct actual increment. This can be achieved only if the extended increment can be considered as an isotropic module within the investigated waste, and if the actual increment is representative of the undisturbed theoretical content of the extended increment. In summary:
• For a three-dimensional waste, the actual increment should be an undisturbed volume delimited by a sphere, which is an unsolvable problem. As a result, we should divide the three-dimensional waste into a sum of superimposed two-dimensional wastes.
• For a two-dimensional waste, the actual increment should be an undisturbed volume delimited by a cylinder with constant cross section. This cylinder must cross the entire thickness of the two-dimensional waste.
• For a one-dimensional waste, the actual increment should be an undisturbed cross section delimited by two parallel planes.

31.7.1.1. Depth Sampling of Water

All current depth samplers are, more or less, based on the same device: a bottle or container sent to the desired depth, then opened, filled, closed, and returned to the surface. They all have the same defects:
• The delimitation is incorrect, more or less, because the container is filled through a small opening, making the shape of the volume of the container irrelevant.
• The extraction is incorrect: the way the material enters the bottle or container highly disturbs the natural distribution of all the potential constituents, promoting high segregation of suspended solids; coarser suspended solids find their way in and stay in, while liquid and very fine suspended solids find their way in but also find their way out before the container is closed. In such a case, it is hard to believe that an increment or sample representative of all the particle size classes can be extracted.
We should emphasize again that the sampling of three-dimensional domains is an unsolvable problem. The solution is to subdivide a three-dimensional domain into a series of imaginary superimposed two-dimensional domains that will be sampled independently using a Coliwasa sampler or a Kemmerer depth sampler.

31.7.1.2. Sampling of Two-Dimensional Wastes

As explained in Part VII, the correct delimitation of the ideal cylinder with constant cross section can be achieved by conventional drilling.


Extraction of the undisturbed theoretical cylinder can present difficult problems, but they can be resolved. The increment will be representative of the entire thickness of the two-dimensional waste. The qualities and defects of various sampling devices currently used to sample two-dimensional wastes are summarized in Table 31.1. Sampling of two-dimensional wastes is difficult, but not as difficult as sampling three-dimensional wastes. Sampling of one-dimensional or zero-dimensional wastes is always solvable.

31.7.1.3. Coliwasa and Kemmerer Depth Samplers

Sampling devices such as the Coliwasa and the Kemmerer depth samplers are perfect examples demonstrating that the achievement of sampling correctness does not necessarily involve complex and expensive solutions. These two devices consist of a glass, plastic, or metal tube equipped with an end closure which can be opened or closed at a preselected depth under the surface of the water. One of the main advantages of such a device is that the length of the tube body can be adjusted. The connecting rod could be made of two parts screwed to one another to facilitate cleaning or changing of the tube. When these devices collect increments starting from the surface, a perfect column of undisturbed liquid enters the tube body. Provided the preselected level is not deeper than the total length of the tube, a perfect cylinder representing the total thickness of the two-dimensional liquid waste, or any water body, can be obtained. As long as these samplers are used starting from the surface and pushed down slowly as indicated by their manufacturer, they can be perfectly correct.
Conclusion: With simple modifications, this kind of sampler should be generalized to the sampling of liquid wastes and waters. They are superior to the usual surface sampling by hand, by weighted bottle frame, by dipper, and so on. But, as soon as these devices are used as depth samplers they become incorrect, at least at their present stage of development.

TABLE 31.1 Qualities and defects of various sampling devices currently used to sample two-dimensional wastes

Sampling device        | Waste type                                             | Qualities                                 | Defects
Coliwasa               | Free flowing liquids with or without suspended solids | Correct delimitation; correct extraction  | —
Kemmerer depth sampler | Free flowing liquids with or without suspended solids | Correct delimitation; correct extraction  | —
Trier                  | Sludges, moist powders, granules                       | —                                         | Bad delimitation; poor extraction
Thief                  | Dry powders, granules                                  | —                                         | Bad delimitation; bad extraction
Auger                  | Sand, packed powders, soil                             | Correct delimitation                      | Bad extraction
Dipper                 | Most materials                                         | —                                         | Bad delimitation; bad extraction
Shovel, scoops         | Most materials                                         | —                                         | Bad delimitation; bad extraction


Opening the tube body below the surface necessarily introduces great disturbances and segregation as the material enters the tube. The solution could consist of installing a flexible plastic tube above the primary tube body so that no disturbance can be introduced at any time. When the preselected depth is reached, the lower end of the tube body is closed and the plastic tube above the upper end is squeezed so that upper levels of liquid cannot contaminate lower levels. The idea is to prevent any disturbance during the sample extraction.

31.7.1.4. Sediment Sampling

Existing sampling devices greatly disturb sediments before or during the actual increment extraction. Therefore, it is hard to believe that they are capable of collecting a correct proportion of all the particle size classes making up the sediments. For the same reasons, all devices consisting of bag inflation to collect sediments by suction are incorrect: the sampling probability favors the coarse material, which is wrong.

31.7.1.5. Sampling of One-Dimensional Wastes

The correct sampling of one-dimensional wastes can be achieved only with cross-stream sampling systems. Manual sampling, grab sampling, probe sampling, in-stream stationary cutters, and so on, generate unsolvable difficulties because they are nonprobabilistic techniques; they should be rejected by all means if the generation of misleading and non-defensible data is to be prevented.

31.7.1.6. Sampling of Zero-Dimensional Wastes

This kind of sampling mainly concerns subsampling at the laboratory, where the entire field primary sample is used for the sole purpose of secondary or tertiary subsampling. Only true splitting devices and methods ensure sampling equity; they should be the only devices and methods used at the laboratory for subsampling purposes. Nonprobabilistic methods such as dipping and probing should be condemned with no possible compromise.

31.8. Development and Use of Standard Reference Materials

Instrumental methods of analysis used for the determination of trace elements, and inorganic or organic constituents, depend heavily on accurately analyzed standards. Preparation of a useful standard reference material requires the following:
1. a complete primary analysis of all the major constituents
2. reliable uncertainty statements and sampling constants, or other measures of the amount of heterogeneity carried by the constituents of interest
3. an established sampling weight baseline
4. known mineralogy or chemical structure
5. a sufficient supply.


In many cases, commercially available standard reference materials are not as reliable as they are advertised:
1. Often, they are much more finely ground than the samples to be analyzed. This affects rapid methods of analysis that examine samples without iso-formation.
2. The mineralogy and chemistry involved in these standards can be very different from those involved in the samples to be analyzed.
3. Most standard reference materials are not supplied with a complete analysis. Even if the certificate value for some of the constituents of interest can be confidently accepted, lack of information on interfering constituents severely limits the usefulness of standards.
4. Seldom does a standard reference material certificate carry a convincing and useful statement concerning the amount of heterogeneity carried by the constituents of interest. In some cases, the misapplication of statistics gives some evidence that each bottle contains the same amounts of certified constituents, or within acceptable limits. However, no measure of the heterogeneity at the subsample weight demanded by a given instrumental method to be calibrated is ever performed, either by the vendor or the user. For example, a standard reference material from which reproducible results are obtained for a given constituent when 1-g subsamples are analyzed may become a totally unreliable standard when 0.1- or 0.01-g subsamples are analyzed.
5. Most often, the certificate value is an adjusted average of numerous determinations using different methods and different sample weights. Little consideration is given to subsampling variance. The consequence is a wide spread of values, whose causes are often obscure. For these reasons the manipulated average provided by round-robin exercises inspires little confidence.
Recommendations:
1. No material should be circulated in a round-robin exercise unless its sampling characteristics for the constituents of interest have been estimated.
2. In reporting preferred or certified values for a standard reference material, a sampling constant for at least the most important constituents should be given. It is the only way for the user to know how large a sample must be taken to achieve a desired reproducibility.
3. Statistical exercises in which values derived from samples of different weights are manipulated are invalid unless these weights are taken into account. If no information is available, the potential standard reference material should not be certified.
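To make the point of item 4 and recommendation 2 tangible, the short Python sketch below uses one common formulation of a sampling constant — Ingamells' Ks, an assumption here rather than the book's own notation — under which the relative standard deviation R (in percent) obtained from a subsample of mass m (in grams) satisfies R²·m = Ks:

# Hedged sketch: Ingamells' sampling-constant relation R^2 * m = Ks.
# Ks (in grams) is an illustrative certificate value, not real data.

def rsd_at_mass(k_s: float, mass_g: float) -> float:
    """Relative standard deviation (%) expected at a given subsample mass."""
    return (k_s / mass_g) ** 0.5

def required_mass(k_s: float, target_rsd: float) -> float:
    """Subsample mass (g) needed to reach a target RSD (%)."""
    return k_s / target_rsd ** 2

k_s = 50.0                         # illustrative sampling constant, in grams
print(rsd_at_mass(k_s, 1.0))       # 1-g subsample: about 7.1% RSD
print(rsd_at_mass(k_s, 0.01))      # 0.01-g subsample: about 70.7% RSD
print(required_mass(k_s, 1.0))     # about 50 g needed for a 1% RSD

Such a calculation shows why a standard that behaves well at 1 g can be useless at 0.01 g, and why a certificate without a sampling constant leaves the user blind.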

31.9. Conclusions and Recommendations

The environmentalist preparing a sampling campaign should pay attention to the following three critical steps:
1. Characterization of the various kinds of heterogeneity carried by the pollutants in a waste.


2. Development of a sampling plan.
3. Correct implementation of the sampling plan.
The indicated chronology of these three steps is critically important. It should be the backbone of a standardized sampling strategy.

31.9.1. Characterization of the Various Kinds of Heterogeneity

1. For zero-dimensional wastes:
• Determine the intrinsic heterogeneity IHL.
• Perform a test comparing the pollutant content of a series of small samples with a series of large samples collected at the same location.
• Determine the particle size distribution of the waste.
2. For one-dimensional wastes:
• Perform a variographic experiment that would give an idea of short-range, long-range, and periodic heterogeneity fluctuations.
3. For two-dimensional wastes:
• All of the earlier tests can be performed following several preselected transects.
4. For three-dimensional wastes:
• Slice the investigated waste into an imaginary series of superimposed two-dimensional wastes. Then, each two-dimensional slice is investigated separately as suggested earlier.

31.9.2. Development of a Sampling Plan

1. For zero-dimensional wastes:
• Determine the optimum sample weights for all sampling and subsampling stages.
• If for some reason the characterization of the amount of constitution heterogeneity carried by the pollutant of interest is not possible, make sure that all samples and subsamples are representative of all particle-size classes, including the largest fragments.
• Collect as many correct increments as practically possible per sample.
2. For one-dimensional wastes:
• Optimize the minimum sampling interval θmin.
• Optimize the number of increments necessary to minimize the short-term random fluctuations.
• Choose the most appropriate sampling selection mode.
3. For two-dimensional wastes:
• Decide upon a sampling pattern or grid.
• Choose the most appropriate sampling selection mode within the grid.
• Optimize the sample weight and the number of increments per sample.


31.9.3. Correct Implementation of the Sampling Plan

1. For zero-dimensional wastes:
• Use only true splitting methods and devices.
• Reject degenerated splitting methods and devices.
2. For one-dimensional wastes:
• Use correctly designed sampling devices.
• Reject incorrectly designed sampling devices such as thiefs, triers, dippers, augers, in-stream or by-stream point samplers, scoops, shovels, and so on.
3. For two-dimensional wastes:
• For liquids, water, or water with suspended sediments, use the Coliwasa or Kemmerer depth sampler whenever possible.
• Reject bottle sampling and dippers.
• New samplers need to be developed for solids.
4. For three-dimensional wastes:
• For liquids, water, or water with suspended sediments, new samplers need to be developed. The pragmatic solution is to divide the three-dimensional waste into a sum of superimposed two-dimensional wastes.
• For solids, sampling problems are unsolvable unless we slice the waste into a series of superimposed two-dimensional wastes.
Overall:
• The standardization of a sampling strategy is a must.
• The standardization of sampling devices is possible only if they are correct with respect to the rules established in Part VII.
• Implementation of a sampling plan with incorrect sampling devices cannot ensure accurate sampling.
• The standardization of a sampling plan is possible only for local conditions, with respect to the amount of heterogeneity carried by a critical pollutant. The pollutant carrying the largest amount of heterogeneity will dictate the appropriate sampling plan.


32 Solvable and Unsolvable Sampling Problems

32.1. Definitions

A sampling problem is considered solvable only if the total sampling error TSE can satisfy a certain degree of representativeness r²(TSE), that is:

r²(TSE) = m²(TSE) + s²(TSE) ≤ r²(TSE)max    (32.1)

where m²(TSE), the square of the sampling bias, is equal to zero if, and only if, the sample selection is correct (i.e., IDE + IEE + IPE = 0), and the sampling variance s²(TSE) is smaller than or equal to a certain standard of reproducibility s²(TSE)max. This condition can be met only if the various amounts of heterogeneity carried by the constituent of interest have been characterized. Only under these conditions can equation (32.1) be written as follows:

r²(TSE) = s²(TSE) ≤ s²(TSE)max    (32.2)
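As a minimal illustration (not from the book), the following Python sketch classifies a sampling problem according to equations (32.1) and (32.2); all variable names and numerical values are assumptions for the example:

def is_solvable(m2_tse: float, s2_tse: float, s2_tse_max: float) -> bool:
    """A sampling problem is solvable only if the selection is correct
    (m2(TSE) = 0, i.e., IDE + IEE + IPE = 0) and the sampling variance
    meets the standard of reproducibility (equation 32.2)."""
    selection_is_correct = (m2_tse == 0.0)
    return selection_is_correct and s2_tse <= s2_tse_max

print(is_solvable(0.0, 0.0004, 0.0005))     # True: correct and reproducible
print(is_solvable(0.0001, 0.0004, 0.0005))  # False: biased selection, a specimen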

Conversely, a sampling problem is unsolvable whenever the degree of representativeness r²(TSE) of the total sampling error TSE cannot be estimated. There can be two reasons for that:
1. An unpredictable bias is likely to take place if the sample selection is incorrect (i.e., IDE + IEE + IPE ≠ 0). The word specimen should be used instead of the word sample to qualify a fraction of the lot obtained under such conditions. This case perfectly applies to those who believe that the sample does not have to be perfectly accurate, as long as it is accurate enough; they make the wrong assumption that a sampling bias can be measured, which is never the case.
2. The various amounts of heterogeneity carried by the constituent of interest have not been characterized.

32.2. Cost of Representativeness

Theoretically all sampling problems are solvable; however, from a practical standpoint many sampling problems are not solvable at a reasonable cost.


In a great number of cases, the notion of solvability is closely related to the notion of cost effectiveness. This leads to the following definitions:
• A sampling problem is solvable when we are capable of developing and implementing a sampling plan characterized by an acceptable degree of representativeness that can be achieved at an acceptable cost.
• A sampling problem is unsolvable when the above condition cannot be met, or when nonprobabilistic specimens are collected instead of probabilistic samples.

32.3. Notion of Acceptable Representativeness

It would be meaningless to choose an allotted variance s²(TSE)max significantly smaller than the allotted s²(AE)max, because the relative reduction of the variance of the Overall Estimation Error s²(OEE) would be very small and relevant only to the second order. However, too large a variance s²(TSE)max would ruin the advantages of a precise analysis. Therefore, it would seem logical to allow for a sampling variance s²(TSE)max equal to the allotted analytical variance s²(AE)max. However, because of cost considerations, we found that it is usually cheaper to reduce the allotted s²(AE)max than to reduce the sampling variance s²(TSE)max.
Conclusion: For a given allotted variance of the Overall Estimation Error s²(OEE), it is cheaper to allow for a Total Sampling Error variance s²(TSE)max slightly larger than the allotted analytical variance s²(AE)max. Thus, we acknowledge that the variance of the total allotted sampling error is:

s²(TSE)max = K·s²(AE)max    (32.3)

with 1 ≤ K ≤ 5. The value of K depends on the cost of assays, which varies widely according to the constituent of interest, and on the allotted cost for sampling. In our discussions we assumed that m²(TSE) = 0; therefore, we assumed that the sampling operation is correct. Experience shows that sampling correctness does not necessarily affect sampling cost. Sampling incorrectness is rather a matter of choice, whose origin is a total disregard for bias-generating sampling errors such as IDE, IEE, and IPE.
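A hedged sketch of this allotment rule, assuming (as the discussion above implies) that the variances of TSE and AE add up to the allotted variance of OEE:

def allot_variances(s2_oee_max: float, k: float = 2.0):
    """Split an allotted Overall Estimation Error variance between sampling
    and analysis so that s2(TSE)max = K * s2(AE)max (equation 32.3)."""
    assert 1.0 <= k <= 5.0, "the text recommends 1 <= K <= 5"
    s2_ae_max = s2_oee_max / (1.0 + k)   # since s2(OEE) = s2(TSE) + s2(AE)
    return k * s2_ae_max, s2_ae_max      # (s2(TSE)max, s2(AE)max)

s2_tse_max, s2_ae_max = allot_variances(s2_oee_max=0.0009, k=2.0)
print(s2_tse_max, s2_ae_max)             # 0.0006 and 0.0003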

32.4. Notion of Acceptable Cost

An acceptable cost depends on the use to be made of the estimates obtained by the analyst. If the sample must provide estimates for the settlement price of a multi-million dollar shipment of a given raw material, it may be justified to spend more money on it than on a sample routinely assayed for quality control purposes. If the sample is assayed to estimate the selenium content of a sludge from a conversion plant that is approaching the regulatory threshold, it may justify a larger expense than a sample routinely assayed for pollutants known to be far below the regulatory threshold.


It all depends on the associated economic risk we are willing to take.

32.4.1. Commercial Sampling

According to our experience, for most mining or metallurgical facilities, we shall consider as acceptable a sampling cost representing no more than 1% of the market value of the commodity to be sampled. This may seem a high cost to many people, but it is not if we consider this cost as an insurance policy against much larger losses.

32.4.2. Technical Sampling

Technical sampling provides information necessary to monitor or improve a transformation process. Its main objective is to perform an effective statistical process control that allows an economic equilibrium corresponding to a maximum profit for the company. Usually, technical sampling does not require the high degree of accuracy required by commercial sampling.

32.4.3. Environmental Sampling

This type of sampling also falls in the area of technical sampling; however, the money spent on sampling may vary considerably depending on many factors:
• If the pollutant content approaches the regulatory threshold, a higher expense must be allocated to sampling.
• If the pollutant content is far below or far above the regulatory threshold, a smaller expense may be allocated to sampling, and so on.

32.4.4. Administrative or Internal Accounting Sampling

This is the area where money can be saved. We know many examples where the figures resulting from a costly sampling and assaying procedure are simply fed to daily, weekly, or monthly production sheets, or to their up-to-date computer equivalent, with no apparent benefit to anyone. The only justification of such controls is that they have always been carried out that way, and no one has ever wondered to what extent they are useful or made the decision to suppress them. Some companies are spending more money on administrative sampling than on commercial, technical, or environmental sampling of their production units.

32.5. Sampling of Three-Dimensional Lots

There are many three-dimensional lots, such as:
• Piles too heavy to be transported or flattened for the sole purpose of their sampling.


• Tailings that increases in metal prices tend to transform into low-grade ores.
• Loads on ships, railroad cars, trucks.
• Certain mineral deposits or sites made of unconsolidated materials, and so on.
From a theoretical standpoint, the problem set by the sampling of these lots is not different from the problem set by compact lots, to which the geostatistical theory is applicable. However, from a practical standpoint, the problem is different. The sampling of consolidated orebodies is usually carried out by core drilling or rotary drilling. When cores or drill cuttings are quantitatively recovered, which is a very optimistic assumption, such sampling can be regarded as achieving an unbiased increment delimitation and extraction. If this is true, a three-dimensional lot is automatically transformed into a sum of superimposed two-dimensional lots, and samples are made of the ideal cylinder taken into consideration by the geostatistical model. But, we should remember that as soon as we are talking about splitting a core recovered as a series of several fragments, we are talking about sampling selection; therefore, all conditions of sampling correctness should be respected. When drilling through unconsolidated materials, delimitation and extraction biases are likely to take place. Vibrations and percussions are likely to alter the natural particle size distribution, and vertical segregation increases. Pressure transmitted by the drilling machine is likely to make the coarsest fragments escape laterally unless the hole is of a large diameter. Practically, we must conclude that the sampling of three-dimensional unconsolidated lots is generally an unsolvable sampling problem.

32.6. Sampling of Two-Dimensional Lots

Usually, the thickness of a two-dimensional lot is relatively small compared with the other two dimensions. The thickness must also be relatively uniform. These lots are:
• Flat mineral deposits made of unconsolidated materials
• Successive slices of three-dimensional mineral deposits made of unconsolidated materials
• Large ponds
• Flat piles of materials, and so on.
We essentially meet the same difficulties as with three-dimensional lots, but the fact that the thickness is relatively small makes it possible to collect a uniform section. However, any drilling method is likely to generate the errors described for three-dimensional lots, unless we drill large-diameter holes. The ideal solution would be to transform the two-dimensional lot into a one-dimensional or zero-dimensional lot. Most of the time this is too expensive to be considered. Practically, we must conclude that the sampling of two-dimensional lots made of particulate material is an unsolvable problem.


Of course, such lots are sampled every day, especially in environmental sampling; thus the accuracy of such sampling must be addressed and looked at with circumspection.

32.7. Sampling of One-Dimensional Lots

32.7.1. Stationary Lots

We know how to solve this problem. Each slice from the lot must correspond to an increment correctly delimited and extracted. The maximum particle size must be small enough to force the parallel edges of the extraction tool to extract constant-thickness slices. One-dimensional stationary piles are very often due to be reclaimed sooner or later; they are then transformed into a one-dimensional flowing stream whose sampling is much easier.

32.7.2. Flowing Streams

This problem, as we saw throughout this book, is the easiest to solve. We know it is always solvable and we know how to solve it. All modern sampling facilities rely on this type of sampling, which tends to be universally recognized as the only reliable one.

32.8. Sampling of Zero-Dimensional Lots

A lot is said to be zero-dimensional when it is naturally divided into a large number of non-chronological units of practically uniform weight, and when its primary sampling consists of a selection, at random, of a certain number of these units. These lots are usually handling or transportation units, such as series of railroad cars, truckloads, shovelfuls, drums, sacks, bags, and so on. Splitting methods transform three-dimensional lots into zero-dimensional lots prior to sampling, which consists of selecting a certain number of units at random to form a sample. If the units are disposed in a natural order reflecting more or less the chronology of their production, the theoretical solution is the one used for one-dimensional lots and involves the variogram of the critical content. With units in true random order, the selection scheme is irrelevant. However, there is always a small part of the original chronological order that remains; therefore, a random systematic scheme is recommended. A stratified random scheme would be even better. The difficulties usually arise with the secondary sampling stage. It is one thing to say that the primary sample is made of thirty 250-kg barrels; it is another thing to process this primary sample.


When handling large tonnages in the form of zero-dimensional lots in a routine way, the most accurate and the cheapest of all solutions consists of selecting, for instance, one unit out of 10 or 20 (primary sample) according to a random systematic or stratified random scheme, discharging the increment units into a surge bin, feeding the material to a cross-stream sampler (secondary sample), and feeding the sampling rejects back to the empty barrels kept on standby. This solution requires capital expenditures, but it leads to very small operating costs and very high reliability, which is very important in commercial sampling.
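The two selection schemes mentioned above can be sketched in a few lines of Python (an illustration with assumed helper names, not a prescription from the book):

import random

def random_systematic(n_units: int, stride: int) -> list:
    """One random start within the first stratum, then a constant stride."""
    start = random.randrange(stride)
    return list(range(start, n_units, stride))

def stratified_random(n_units: int, stride: int) -> list:
    """One unit selected at random within each stratum of `stride` units."""
    return [random.randrange(lo, min(lo + stride, n_units))
            for lo in range(0, n_units, stride)]

# Example: 300 barrels, selecting roughly one unit out of 20
print(random_systematic(300, 20))
print(stratified_random(300, 20))

The stratified random scheme guards against any residual chronological order in the units, which is why it is preferred whenever some of the production chronology may survive in the lot.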

Part X

Chronostatistics


33 A Strategy to Take Better Advantage of Existing Chronological Data

33.1. Introduction

Every day, managers look at process production sheets, but an in-depth process variability analysis is rarely done. Every working shift, supervisors look at the behavior of process parameters, but the in-depth process variability analysis is too often kept silent. Every minute, online analyzers and measurement systems send information about key process parameters to the control room, where reactive action is taken when necessary; but there also, in-depth process variability analysis is too often beyond the knowledge of the monitoring operator. Several factors play a role in these observations:
• High-level productivity does not allow much time for managers, supervisors, and operators to investigate process variability in depth.
• Easy access to existing data is often a limiting factor.
• The will to take time, reflect on existing data, and determine a creative course of action is weak at best.
• Quality thinking and quality action-taking are often considered the enemies of economic productivity (Carrasco81).
• But, most often, it is the statistical methodology that is inappropriate, falling short of looking at process variability in terms of heterogeneity: this is exactly where the Theory of Sampling (TOS) makes a huge difference.

33.2. Micromanagement of Process Variability

Statistical Process Control (SPC) has become an important part of the decision-making process in many industries. But nothing is more difficult than the control of heterogeneous and dynamic material systems.78 Management still believes that if everyone did as he or she was told, everything would be alright. Unfortunately, the reality is not that simple, and it is an absolute must to stop perpetuating the myth of simplicity. Our processes are complex, and they generate complex problems. Solutions to these problems are also complex. Many managers hope for instant improvement solutions: this is micromanagement.


At this point in the book, we know that any part of a process contains many sources of variability. There is no such thing as a constant or perfectly stable process. Results obtained from samples are affected by visible and invisible sources of variability. In other words, there are obvious sources of data variability, but there are some that are far subtler to detect and to understand. These invisible sources of variability are often the key to effective SPC. Dr. W. Edwards Deming said: “He who runs his company on visible figures alone will soon have neither company nor visible figures to work with.”213 Process variability can have many origins (e.g., variability from raw materials, chemical reagents, the processing equipment, equipment wear, sampling, subsampling, analytical measurement, etc.). These sources of variability generate quality fluctuations that can be classified into three categories:
• Short-range, random quality fluctuations
• Long-range, nonrandom quality fluctuations
• Periodic or cyclic quality fluctuations.
Therefore, to prevent overcontrol and overcorrection, it is a must to understand the nature of the variability of any part of a process. To effectively achieve process control, we must use control charts on which we calculate control limits and compare them with the targeted average, upper specification, and lower specification.
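As a simple illustration of this comparison, the Python sketch below computes conventional ±3s control limits (the voice of the process) and checks them against specification limits (the voice of the contract). The data are the first ten values of the +63µm series discussed later in this chapter, and the ±3s convention is the usual one, assumed for the example:

import statistics

data = [5.61, 5.69, 5.66, 5.57, 5.56, 5.60, 5.53, 5.58, 5.53, 5.49]
mean = statistics.mean(data)
s = statistics.stdev(data)

ucl, lcl = mean + 3 * s, mean - 3 * s   # control limits, from the process itself
us, ls = 5.80, 5.20                     # specification limits, from the contract

print(f"average = {mean:.3f}, LCL = {lcl:.3f}, UCL = {ucl:.3f}")
print("capable" if ls <= lcl and ucl <= us else "not capable")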

33.3. The Significance of Control Limits

The understanding of control limits is not a simple matter; it necessitates a deep and complete comprehension of the process variability. This is the main shortcoming of conventional SPC for heterogeneous and dynamic systems. There is nothing wrong with conventional SPC, except that it is mainly adapted to situations where a process should be stable. With heterogeneous dynamic processes, it is rare to have a completely stable process. At the very least, we must take a few precautions. In this chapter and the next two chapters, we hope our contribution to effective SPC will be recognized.

33.3.1. Differences between Control Limits and Specification Limits

Do not confuse control limits with specification limits. Specification limits are guidelines to satisfy a contract with a client; they have nothing to do with the variability of the process. However, it is our duty to understand our process variability with respect to specification limits to find out if our process is capable of performing as expected; therefore, it is our duty to quantify control limits to find out if our process is in compliance with contractual specifications.


Furthermore, even if we do not have any contractual specifications to meet at a given point of the process, it is still our duty to have enough vision to identify the key parameters that we should keep well under control to meet self-imposed specifications. Unless we succeed in doing so, our process is not likely to reach its optimum performance in quality, productivity, and cost-effectiveness.

33.3.2. Definition of the Control Chart

A control chart is a tool that should allow the process to talk to us. It should put the various sources of variability into an easy-to-understand time context. All control charts have the same two basic functions:
1. Signal the presence of special causes of variability.
2. Maintain the process in a state of statistical control.
Figure 33.1 illustrates an example of a control chart used in a cement plant to monitor the percent lime in a kiln feed.

33.3.3. The Old Ways of Doing Process Control

Many people in industry do not even bother with control charts. It is not rare to see an operator making important decisions as soon as new data show up on his sophisticated computer monitor. Raw data do not talk. With such a practice, it is nearly impossible to retrieve all the valuable information carried by a chronological series of assay results. So, as an example, make an attempt to study the raw data in Table 33.1 and find out what the data are trying to tell us. The data are from a cement plant making a special and very expensive cement. The process must be controlled in such a way that the particle size distribution of the final product keeps the +63µm fraction between 5.20% and 5.80%, which are contractual specifications.

FIGURE 33.1 Control chart in a cement plant to monitor the lime content (% CaO) of the kiln feed, showing the targeted average, the upper and lower control limits (±3s of random variability), and the upper and lower specifications (conceptual only).


TABLE 33.1 Percent +63µm in a special cement (basic sampling interval j = 1 corresponds to 2 hours)

Sample # | % +63µm | Sample # | % +63µm | Sample # | % +63µm
1        | 5.61    | 21       | 5.52    | 41       | 5.41
2        | 5.69    | 22       | 5.55    | 42       | 5.44
3        | 5.66    | 23       | 5.52    | 43       | 5.47
4        | 5.57    | 24       | 5.59    | 44       | 5.55
5        | 5.56    | 25       | 5.61    | 45       | 5.57
6        | 5.60    | 26       | 5.61    | 46       | 5.58
7        | 5.53    | 27       | 5.55    | 47       | 5.53
8        | 5.58    | 28       | 5.52    | 48       | 5.52
9        | 5.53    | 29       | 5.57    | 49       | 5.48
10       | 5.49    | 30       | 5.57    | 50       | 5.44
11       | 5.61    | 31       | 5.55    | 51       | 5.48
12       | 5.60    | 32       | 5.52    | 52       | 5.48
13       | 5.55    | 33       | 5.49    | 53       | 5.49
14       | 5.50    | 34       | 5.49    | 54       | 5.42
15       | 5.49    | 35       | 5.43    | 55       | 5.41
16       | 5.51    | 36       | 5.48    | 56       | 5.37
17       | 5.57    | 37       | 5.40    | 57       | 5.36
18       | 5.52    | 38       | 5.41    | 58       | 5.40
19       | 5.51    | 39       | 5.45    | 59       | 5.36
20       | 5.51    | 40       | 5.43    | 60       | 5.33

FIGURE 33.2 Percent +63µm in a special cement as a function of time.


It is easy to see that the process is within control; however, it is impossible to assess the heterogeneity of the material and to figure out what the variability components are. This is a major flaw in the event that something goes wrong in the process, which is only a matter of time.

33.3.4. The Superiority of Graphics

The very least we should do is to look at existing data graphically, as illustrated in Figure 33.2. It is much easier to see the evolution of the process around the arithmetic average. A 5-point moving average is displayed to partially eliminate the random noise and emphasize the process trend and cycles, if any. But if we ask someone, “Please quantify the random variability, the nonrandom variability, and the periodic variability,” obviously nobody can do that by simply looking at Figure 33.2.
Conclusion: There must be a much better way to perform such a variability study and put it in the context of contractual specifications. We completely discard the Six Sigma approach, based on a theory inappropriate for heterogeneous dynamic processes, and suggest a far more effective variographic approach, which is the object of the following chapter.
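Before moving on, note that the 5-point centered moving average of Figure 33.2 is straightforward to reproduce; a minimal Python sketch (the centered window is assumed to match the figure):

def moving_average(values, window=5):
    """Centered moving average; drops (window - 1) / 2 points at each end."""
    half = window // 2
    return [sum(values[i - half:i + half + 1]) / window
            for i in range(half, len(values) - half)]

# First ten values of Table 33.1
first_ten = [5.61, 5.69, 5.66, 5.57, 5.56, 5.60, 5.53, 5.58, 5.53, 5.49]
print([round(v, 3) for v in moving_average(first_ten)])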


34 The Use of the Variogram to Elaborate Meaningful Process Control Charts

34.1. Abstract

Forty years ago, Dr. Pierre M. Gy, a metallurgist, introduced variography as applied to monitoring a process. Consequently, his followers used the variogram to elaborate meaningful process control charts, especially since 1993. The name “Chronostatistics” was suggested several times by Gy as a logical name for a new way to interpret any data collected in a chronological way: the intent was to draw a parallel between the variogram identifying the variability components affecting a process parameter as a function of time, and the variogram used in Geostatistics identifying the variability components of a given mineral content as a function of space. The heterogeneity problems encountered indeed have strong similarities. However, there is a fundamental difference: the variogram used in Geostatistics is more likely to be reasonably stationary within a given type of mineralization, while in the investigation of a dynamic process through Chronostatistics there is no such thing as a stationary variogram. Still, the evolving variogram provides essential information about subtle, often overlooked process difficulties. Chronostatistics is especially useful for management to take advantage of existing data, which are often a forgotten source of valuable information to pinpoint recurring structural problems. The superiority of Chronostatistics over conventional statistical process control approaches is that it links observations to the inherent heterogeneity of the materials under investigation, and therefore to the Theory of Sampling. Chronostatistics is like opening a new world to the metallurgist, making the process variability speak in a more comprehensive and in-depth way. Several case studies are presented to demonstrate the potency of this new science: for each of them, observations lead to solutions for continuous process improvement, and therefore attractive economic opportunities. The philosophy is this: if a control chart shows a process is out of control, it must also suggest logical and pragmatic directions for solutions. This chapter intends to vastly expand on these valuable, almost forgotten ideas.

34.2. Scope

The interpretation of existing chronological data gathered with any computerized process information system, either from weightometers, sampling stations, online process control analyzers, or process control samples collected at regular intervals,


is a critically important task for management. It is a powerful way for management to have access to the causes of problems in their processes. Yet, many years of consulting experience show that such interpretation is not done on a routine basis, at least not in enough depth. The well-known Six Sigma philosophy can certainly help in this endeavor; however, Six Sigma does not go far enough to find the causes of problems. The total estimation error σ²E for a heterogeneous material stream from which n measurements were collected can be summarized in the following formula, well known in Geostatistics:

σ²E = σ²/n + (1/n²) Σi Σj cov(xi, xj)    (34.1)

where the double sum runs over i = 1, …, n and j = 1, …, n.

In this formula it is easy to see the correlation between the way of thinking of geostatisticians and what has been presented all along in this book:
• The first term of the formula refers to the Central Limit Theorem, applying only to the random variability. It is similar to the short-range Heterogeneity Fluctuation Error HFE1.
• The second term is the nonrandom variability. It is similar to the sum of the long-range Heterogeneity Fluctuation Error HFE2 and the periodic Heterogeneity Fluctuation Error HFE3.
It is interesting to notice that Six Sigma assumes the covariance between measurements xi and xj is zero, which, as far as process control is concerned, is a very naïve and incorrect view. Chronostatistics looks at the entire domain of variability and does not assume the covariance between measurements is zero, as that is rarely the case in any dynamic process. Therefore, the statistical tool of choice selected for Chronostatistics is the variogram, either relative or absolute. The variogram is not a panacea; however, because of its simplicity, it provides a pragmatic way of looking at variability components unmatched by many other far more complex statistical tools. For example, the correlogram could be used as well; however, the variogram involves simpler and quicker calculations. The following quote says it all:
Since we expanded the use of the variogram to study process variability, using Chronostatistics, we made significant progress understanding different sources of variability in our copper operations. Now we are in a much better position to effectively control our processes from rock to cathodes.

Dr. Pedro Carrasco, President of Codelco Corporate Quality Control Committee (open and verbal statement at the 2005 World Conference on Sampling and Blending).
In this chapter, discussions are limited to the variability of one-dimensional streams as a function of space, time, or tons. The reader should be familiar with Chapters 7 and 8 for a better understanding of the following material.
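The practical weight of the covariance term in equation (34.1) is easy to demonstrate numerically. The Python sketch below (an illustration; the AR(1) model and its parameters are assumptions, not data from the book) compares the naive σ²/n with the observed variance of the mean of autocorrelated measurements:

import numpy as np

rng = np.random.default_rng(7)
n, phi, trials = 50, 0.8, 5000          # AR(1): cov(x_t, x_{t+j}) ~ phi^j

means = []
for _ in range(trials):
    x = np.empty(n)
    x[0] = rng.normal()                  # start-up transient ignored (approximate)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    means.append(x.mean())

sigma2 = 1.0 / (1.0 - phi ** 2)          # stationary variance of the AR(1) process
print(f"naive sigma^2/n      : {sigma2 / n:.4f}")    # about 0.056
print(f"observed var of mean : {np.var(means):.4f}")  # several times larger

The zero-covariance assumption underestimates the uncertainty of the mean by roughly a factor of (1 + φ)/(1 − φ) here: this is exactly the blind spot of Six Sigma that Chronostatistics avoids.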


34.3. Search for a Variographic Approach

We shall proceed with a pragmatic definition of the variogram. In what follows, the variability of differences between samples with a small lag must remain reasonably constant through time. This limitation suggests taking precautions when selecting a series of chronological data, such as making reasonably sure that, along the selected chronology, no important changes have been made to the way the process works. As such, the suggested applications are likely to lose power when the basic sampling interval in the selected chronology is too long (e.g., days, weeks, or months). Aware of this limitation, the practitioner can use common sense not to mix data that do not belong together. Let’s reiterate a few principles, because they are important.

34.3.1. Selection of a Given Process Parameter of Interest

The variability of parameters controlling or making up a process stream is, most of the time, a stochastic phenomenon. Such a phenomenon is a hybrid in which we can find random and chaotic elements taking place at very short time intervals, but in which we can also find functional and continuous elements. Therefore, a certain parameter of interest (e.g., the proportion of a given size fraction in a stream feeding a SAG mill) characterizing a process stream can be described as follows:

f[t] = fL + fA[t] + fB[t] + fC[t]    (34.2)

where:
fL: a constant term, such as the average proportion of a given size fraction of the stream feeding a SAG mill over the duration of f[t];
fA[t]: a random component, characterized by the fact that neighboring fragments are different, the resulting proportion of a given size fraction changing in a chaotic and unpredictable way at very short time intervals, regardless of the fact that these fragments may have originated at the same place earlier in the process;
fB[t]: a nonrandom, nonperiodic component, essentially continuous, that is an inherent property of the feed heading to the plant, for example the slowly changing proportion of a given size fraction of the crushed ore;
fC[t]: a periodic component, characterized by the fact that people and machinery often work in a cyclic way, for example a segregating stockpile introducing a cycle in the proportion of coarse fragments and fines feeding a SAG mill.

34.3.2. Heterogeneity Affecting the Given Process Parameter of Interest

The function f may refer to a content or proportion aS, the mass AS of any constituent of interest in a sample, the mass MS of a sample, or the resulting total heterogeneity of these three factors considered together. Taking into account the fact that the heterogeneity of a constant, such as fL, is obviously zero, the total heterogeneity affecting the parameter of interest in the stream is fully characterized by fA[t], fB[t], and fC[t].

34.3.3. Measuring Heterogeneity Variability with the Variogram

The most natural way to compare two values such as f1 (e.g., the proportion of a given size fraction in the stream at time t1) and f2 (e.g., the proportion of a given size fraction in the stream at time t2), t1 and t2 being separated by a time lag called j, is to measure their difference d:


in the stream at time t2 ), t1 and t2 being separated by a time lag called j, is to measure their difference d: d ¼ f1  f 2

ð34:3Þ

As explained in detail in Section 7.3, the absolute semi-variogram is calculated as follows:

V[j] = [1 / (2(N − j))] Σm (fm+j − fm)²    (34.4)

where the sum runs over m = 1, …, N − j.

A relative, dimensionless variogram could be calculated as well, making it easier to compare variograms from different experiments:

$$V[j] = \frac{1}{2[N-j]\,f_L^2} \sum_m \left(f_{m+j} - f_m\right)^2 \tag{34.5}$$
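For readers who want to reproduce these calculations, here is a minimal Python sketch of equations (34.4) and (34.5). It is not part of the Theory of Sampling itself: the function names are illustrative, the data are assumed to sit in a one-dimensional array, and the pair-count safeguards anticipate the limitations discussed in Section 34.3.5.

import numpy as np

def absolute_variogram(f, max_lag=None, min_pairs=20):
    # Absolute semi-variogram V[j] of equation (34.4).
    f = np.asarray(f, dtype=float)
    N = len(f)
    if max_lag is None:
        max_lag = N // 2              # limitation #2: keep N - j >= N/2
    V = {}
    for j in range(1, max_lag + 1):
        d = f[j:] - f[:-j]            # the N - j pairs separated by lag j
        if len(d) < min_pairs:        # limitation #1: at least 20 pairs
            break
        V[j] = float(np.sum(d**2)) / (2.0 * (N - j))
    return V

def relative_variogram(f, **kwargs):
    # Dimensionless variogram of equation (34.5): V[j] / f_L^2.
    fL = float(np.mean(f))            # f_L estimated by the series average
    return {j: v / fL**2 for j, v in absolute_variogram(f, **kwargs).items()}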

The study of many variograms shows that they are usually made of four major components:

$$V[j] = V_A[j] + V_B[j] + V_C[j] + V_D[j] \tag{34.6}$$

where:
V_A[j] is a very short-range term, random and discontinuous. At the limit, when j = 0, this leads to a very important term in Chronostatistics called V[j=0], simplified in further discussions to V[0], which is the variability from sampling, subsampling, and measurement that does not exist in the process.
V_B[j] is a long-range term, usually nonrandom and continuous. This is the variability that needs to be controlled or tamed in a process.
V_C[j] is a periodic term, continuous, tied to the ways people work or to the ways some process machinery works. This source of variability is usually poorly understood, leading to process overcorrections and therefore to losses of process efficiency. The economic impact of this source of variability, if misunderstood, is enormous.
V_D[j] is a random residual variability tied to the precision of the variogram when the variance V[j] is calculated with too few pairs N - j. V_D[j] tends toward zero when the number of pairs increases. It is not good practice to calculate any point on a variogram with fewer than 20 pairs; usually, 30 pairs or more are recommended.

34.3.4. Extrapolation of the Variogram to Time or Distance Zero

An accurate extrapolation of the variogram to time or distance zero is required for many of its applications. The most effective solution is to extrapolate either the first order integral W[j] or the second order integral W'[j] of the variogram. Indeed, W[j] and W'[j] smooth out the residual noise due to V_D[j], cycles, and other features interfering with a good extrapolation. As demonstrated in the Theory of Sampling, the variogram V[j] and its integrals W[j] and W'[j] all have the same intercept V[0] = W[0] = W'[0] at time or distance zero. In Section 7.12 we obtained the following formulas:


$$W[j] = \mathrm{EST}\!\left[\frac{1}{j}\int_0^j V(j')\,dj'\right] = \frac{S[j]}{j}, \qquad S[j] = S[j-1] + \frac{V(j-1) + V(j)}{2} \tag{34.7}$$

$$W'[j] = \frac{2}{j^2}\left[S'(j-1) + \frac{S(j-1) + S(j)}{2}\right] \tag{34.8}$$
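The recursions (34.7) and (34.8) translate directly into code. A hedged sketch follows, assuming consecutive integer lags and taking V(0) ≈ V(1) for the first trapezoid, which is a starting convention of this sketch rather than a prescription of the text; the intercept is then recovered by a straight-line fit through the first few points of W.

import numpy as np

def auxiliary_functions(V):
    # First and second order integrals W[j] and W'[j] of a variogram
    # {lag: value} with consecutive integer lags 1, 2, 3, ...
    S = {0: 0.0}
    Sp = {0: 0.0}
    W, Wp = {}, {}
    prev_v = V[1]                     # stand-in for the unknown V(0)
    for j in sorted(V):
        S[j] = S[j - 1] + (prev_v + V[j]) / 2.0       # equation (34.7)
        Sp[j] = Sp[j - 1] + (S[j - 1] + S[j]) / 2.0
        W[j] = S[j] / j
        Wp[j] = 2.0 * Sp[j] / j**2                    # equation (34.8)
        prev_v = V[j]
    return W, Wp

def extrapolate_v0(W, n_points=3):
    # V[0] estimated as the intercept of a line through the first few
    # points of W[j]: V, W, and W' share the same intercept at lag zero.
    js = sorted(W)[:n_points]
    slope, intercept = np.polyfit(js, [W[j] for j in js], 1)
    return intercept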

34.3.5. Important Limitations for the Variogram

Limitation #1: The term V_D[j] of the variogram (i.e., the precision of the variogram) must remain as small as possible. Under no circumstance should any point on the variogram be calculated with fewer than 20 pairs.

Limitation #2: In a chronological series of data, there is a problem associated with the central values of that series. If the selected lag j of the variogram goes beyond half of the data available, the calculation of the variance for that lag can no longer use data located at the center of the series. Therefore, the minimum number of pairs N - j should remain larger than or equal to N/2.

Limitation #3: A process is not stationary; it is in a permanent state of change. Most processes in the mining and chemical industries are dynamic processes. The general trend observed today, or this week, may be different from the one observed tomorrow or next week. When looked at on a large scale of time, these trends may actually carry a random component. Therefore, before calculating a variogram, it is good practice to take a good look at the chronological data and select a window within which the general trend is reasonably consistent: in a variographic analysis, do not mix data that do not belong to each other. A similar stationarity assumption is often made in Geostatistics, where it is called the weak-stationarity assumption, often referred to as second-order stationarity; it seldom holds in earth science applications. Geostatistics and Chronostatistics remain effective tools despite common violations of the assumption: awareness is everything.

These are all the basic tools needed to build an effective, pragmatic strategy for managers and supervisors to take better advantage of all their existing chronological data, collected at great expense during production. Such existing data, which nobody looks at, are a gold mine for discovering the origin of some annoying problems plaguing the daily optimization of a process. Now, let's enter the heart of Chronostatistics. The variogram can separate the variability components involved in a process from one another, like a skilled detective would do. A complex problem is thereby divided into its basic components, so it becomes much easier for a manager to set priorities for continuous improvement. Now, we may continue to analyze in depth the data from a cement plant listed in Table 33.1 and illustrated in Figure 33.2.

34.3.6. Understanding Clearly the Client's Needs

Establish a clear list of contractual specifications. If a contract is not involved, it can simply be called a "wish list" from the manager who knows very well within what range of variability the process performs well. It must consist of a Targeted Average TA, an Upper Specification US, and a Lower Specification LS, as shown in Figure 34.1. Without these three values well established, it becomes impossible to assess whether the variability components quantified with the variogram are acceptable or not. Therefore, this step is


FIGURE 34.1 Definition of Target TA and Specifications US and LS.

critically important, as the following analysis cannot proceed without it. These values constitute the appropriate Data Quality Objectives (DQO); no logical statistical analysis is possible without them. Often, processes evolve, and TA, US, and LS must be adjusted to new goals: there is nothing wrong with this if everybody is perfectly aware of the new rules.

34.3.7. Calculation of the Short-Range Absolute Variogram

Calculate the short-range absolute variogram, emphasizing the terms V[0], V[j=1], W[j] (simplified to W), and W'[j] (simplified to W'), as illustrated in Figure 34.2. The following features are illustrated:

1. The variogram is shown as a solid, black curve.
2. The first order integral W is shown as a dashed, gray curve. It is used to extrapolate the variogram to V[0]. The first three points were extrapolated.
3. The junction of the solid, vertical black bar with the horizontal black line shows the location and value of V[j=1], corresponding to the total sampling and measurement variability plus the two-hour sampling interval variability. V[j=1] = 0.000959 (% +63µm)².
4. V[0] is shown by the lowest horizontal, dotted line, and corresponds to the total sampling and measurement variability. V[0] = 0.000333 (% +63µm)². This is the variability that does not exist in the process; it is artificially introduced by the sampling protocol and the assaying laboratory: it is a vital piece of information for QA-QC.


FIGURE 34.2 Illustration of the 5-point short-range absolute variogram for the % +63-micron of a special cement.

5. V[process j=1] = V[j=1] - V[0] = 0.000626 (% +63µm)² is the typical process drift within the basic sampling interval, which is two hours. This variability is the result of the basic sampling interval; if it is small, the sampling interval is acceptable or could even be increased. If it is too large, then we have proof that the sampling interval should be reduced to better control the process.

We see that we already have very important pieces of information that we did not have without the variogram.

34.3.8. Calculation of the Long-Range Absolute Variogram

Calculate the long-range absolute variogram. Following the requirements of limitation #2, a 30-point variogram is calculated and shown in Figure 34.3. The following features are illustrated:

1. The amplitude of the cycle is half the distance, expressed as a variance, between the two gray lines tangent to the maximum and the minimum. The reader may notice that the tangent to the minimum starts at V[0]. This is logical, because when the time interval is zero, there is nothing we can see from the cycle. V[cyclic j=19 = 38 hours] = 0.00185.
2. The dashed, vertical black bar shows the period of the cycle. It is important to know the period of the cycle in order to investigate what in the process may have created it. It is a 38-hour cycle.
3. The fine horizontal dotted-and-dashed line shows the sill of the variogram. Obviously, in this case, if we had more points on the variogram we might obtain a greater value for the sill.


FIGURE 34.3 Illustration of the 30-point long-range absolute variogram of the % +63-µm material in a special cement.

All of this information is stored by the computer and then used as needed in the following steps.

34.4. From Variography to Control Chart

34.4.1. Testing the Capability of the Total Sampling, Measurement, and Sampling Interval

In Figure 34.4 the following features are illustrated:

1. The two horizontal, gray lines show the contribution of the V[0] variability using three standard deviations for this random source of variability (i.e., the UCL and LCL control limits). It is suggested that these lines should not take more than one third of the variability allotted by the DQO (i.e., US, TA, and LS, illustrated by the three horizontal thick black lines). It is easy to see that the sampling and measurement protocol is well under control.
2. The Targeted Average TA is 5.5%. The Upper Specification US is 5.8%. The Lower Specification LS is 5.2%.

In Figure 34.5 the contribution of the basic two-hour sampling interval is added to the chart:

3. The two horizontal, dashed gray lines show the contribution of the two-hour sampling interval (i.e., nonrandom variability) when added to the random variability. It is easy to see that the sampling, measurement, and sampling interval protocols, all combined


FIGURE 34.4 Testing the sampling and measurement capability relative to DQO.

(i.e., the UCL' and LCL' control limits), provide a reasonable amount of variability over which the process engineer has no control. For this case study, this was good news.

Important remark: The random variability is calculated using three standard deviations (i.e., a 99.7% level of confidence). Such a practice should not be used on nonrandom or periodic components, for which only one standard deviation should be used. This could be controversial to some statisticians: the argument is that trends and cycles may indeed carry a random component of their own; however, when considering a relatively short time window of observation while trying to control a process, such a random component is irrelevant. The random component affecting trends or cycles, if any, can be seen only when looking at very large time windows. Therefore, the use of only one standard deviation for nonrandom and periodic variability components is a more pragmatic practice to achieve effective process variability analyses: the operator in charge of process control is interested in what is happening at a given instant, regardless of the long-term process behavior. So far, none of the variability components are in competition with the specifications; therefore, the database used to control the quality of this special cement is quite robust:

$$UCL' = \mathrm{Mean} + 3\sqrt{V[0]} + \sqrt{V[j=1]} \tag{34.9}$$

$$LCL' = \mathrm{Mean} - 3\sqrt{V[0]} - \sqrt{V[j=1]} \tag{34.10}$$


FIGURE 34.5 Testing the sampling, measurement, and 2-hour sampling interval capability: Testing the validity of the entire protocol.

34.4.2. Testing the Additional Contribution of a Cycle

The variogram in Figure 34.3 detected a regular 38-hour cycle of amplitude V[cyclic j=19] = 0.00185. Following exactly the same argument as developed in Section 34.4.1, we can calculate new control limits UCL'' and LCL'' including sampling, measurement, the two-hour sampling interval, and the amplitude of the cycle, as illustrated in Figure 34.6 by the two horizontal dotted black lines.

$$UCL'' = \mathrm{Mean} + 3\sqrt{V[0]} + \sqrt{V[j=1]} + \sqrt{V[\mathrm{cyclic}\ j=19]} \tag{34.11}$$

$$LCL'' = \mathrm{Mean} - 3\sqrt{V[0]} - \sqrt{V[j=1]} - \sqrt{V[\mathrm{cyclic}\ j=19]} \tag{34.12}$$
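To make the bookkeeping concrete, here is a minimal Python sketch of the control limits of equations (34.9) to (34.12), under the one-sigma convention for nonrandom terms argued above. The function name is illustrative, and the example values are simply the cement-plant figures quoted earlier.

import numpy as np

def control_limits(mean, V0, V1, Vcyclic=0.0):
    # Three standard deviations for the random component V[0], one for
    # the nonrandom lag-one term V[j=1], one for the cycle amplitude.
    random_part = 3.0 * np.sqrt(V0)
    drift_part = np.sqrt(V1)
    cycle_part = np.sqrt(Vcyclic)
    UCL,  LCL  = mean + random_part, mean - random_part
    UCL1, LCL1 = UCL + drift_part, LCL - drift_part      # UCL', LCL'
    UCL2, LCL2 = UCL1 + cycle_part, LCL1 - cycle_part    # UCL'', LCL''
    return (UCL, LCL), (UCL1, LCL1), (UCL2, LCL2)

# Illustrative use with the values quoted above for this case study:
limits = control_limits(5.5, V0=0.000333, V1=0.000959, Vcyclic=0.00185)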

It is interesting to make the following observations:

1. The sum of the variability contributions of the sampling and measurement protocol, the two-hour sampling interval, and the cycle takes about half of the total variability allowance given by the contractual specifications: the team at that cement plant is doing a very good job at controlling the process. There is still plenty of room for the process to drift and for such a drift to be brought under control.
2. The validity of equations (34.11) and (34.12) is fully supported by the fact that, out of 60 points, only two points fall outside the UCL'' and LCL'' limits.


FIGURE 34.6 Testing the sampling and measurement, the two-hour sampling interval capability, and the cycle variability contribution separately.

Conclusion: The use of variography to extract information from existing chronological data is very powerful. Not only does it give information for QA-QC, but it also quantifies various sources of variability in the process, especially cycles, which are very expensive if undetected. Furthermore, the information is free; therefore, it should be a very important tool for top management. Because of the importance of Chronostatistics using the variogram, the next chapter is devoted to a few case studies that may serve as models for the reader to become more comfortable with the subject.


35 Case Studies where Variography is an Effective Tool to Discover and Quantify Structural Problems

35.1. Case Study #1: Controlling the Copper Content of Slag in a Copper Smelter

A copper smelter in Chile monitors the copper content of slag every three hours. To prepare such a sample, several increments of the molten material are collected within three hours to make a three-hour composite. The plant must comply with the following specifications, believed to work best for economic optimization: the Targeted Average TA is set at 10%; the Upper Specification US, above which too much money is lost, is 14%; and the Lower Specification LS, below which too much money is spent for the process to achieve high copper recovery, is 7%. At times, the plant encounters difficulties taming the process variability within the specifications. Explanations for these difficulties have eluded management for some time. It was believed that a variability investigation of existing data was a good start: 217 consecutive three-hour copper assays, shown in Figure 35.1, were collected to perform the chronostatistical study. The collected data are listed in Table 35.1. The following sequence is a step-by-step strategy to approach such difficulties.

35.1.1. Understanding Clearly the Client's Needs

Establish a clear list of contractual specifications. If a contract is not involved, it can simply be called a "wish list" from the manager who knows very well within what range of variability the process performs well. It must consist of a Targeted Average TA, an Upper Specification US, and a Lower Specification LS, as shown in Figure 35.2. Without these three values well established, it becomes impossible to assess whether the variability components quantified with the variogram are acceptable or not. Therefore, this step is critically important, as the following analysis cannot proceed without it. These values constitute the appropriate Data Quality Objectives (DQO); no logical statistical analysis is possible without them. Often, processes evolve, and TA, US, and LS must be adjusted to new goals: there is nothing wrong with this if everybody is perfectly aware of the new rules.

35.1.2. Calculation of the Short-Range Absolute Variogram

Calculate the short-range absolute variogram, emphasizing the terms V[0], V[j=1], and W, as illustrated in Figure 35.3. The following features are illustrated:


FIGURE 35.1 Percent copper in a smelter slag, sampled and estimated every three hours.

1. The variogram is shown as a thick, solid black curve.
2. The first order integral W is shown as a thick, dashed gray curve. It is used to extrapolate the variogram to V[0]. The first three points are extrapolated.
3. The junction of the solid, vertical black bar with the horizontal, solid black line shows the location and value of V[j=1], corresponding to the total sampling and measurement variability plus the three-hour sampling interval variability. V[j=1] = 1.96 (% Cu)².
4. V[0] is shown by the lowest horizontal, dashed light gray line, and corresponds to the total sampling and measurement variability. V[0] = 1.38 (% Cu)². This is the variability that does not exist in the process; it is artificially introduced by the sampling protocol and the assaying laboratory: it is a vital piece of information for QA-QC.
5. V[process j=1] = V[j=1] - V[0] = 0.579 (% Cu)² is the typical process drift within the basic sampling interval, which is three hours. This variability is the result of the basic sampling interval; if it is small, the sampling interval is acceptable or could even be increased. If it is too large, then we have proof that the sampling interval should be reduced to better control the process.

35.1.3. Calculation of the Long-Range Absolute Variogram

Calculate the long-range absolute variogram. Following the requirements of limitation #2, a 100-point variogram could be calculated; however, because the general trend is almost nonexistent, the variogram reaches its maximum very quickly. Therefore, for clarity, only the first 60 points of the variogram are displayed in Figure 35.4. The overall information is as follows:


TABLE 35.1
Percent copper in slag at a copper smelter. Composite samples are collected every three hours.

#   % Cu   #   % Cu   #    % Cu   #    % Cu   #    % Cu   #    % Cu   #    % Cu
1   9.9    32  7.3    63   10.9   94   9.2    125  8      156  10.8   187  11.1
2   8.3    33  7.8    64   12     95   12.5   126  7.4    157  13     188  8.2
3   7.8    34  7.2    65   14.9   96   12.1   127  7.1    158  14.6   189  7.7
4   7.9    35  7.2    66   11     97   13     128  10.1   159  11.7   190  8.9
5   7.3    36  10     67   10     98   13.2   129  8.8    160  10.6   191  8.8
6   8.5    37  11.6   68   9      99   14.2   130  8      161  9.5    192  11
7   9.9    38  12.5   69   8      100  11.8   131  7.2    162  9      193  11.8
8   11.8   39  12     70   11     101  10     132  10.2   163  8.6    194  13.2
9   9.4    40  10.2   71   8.8    102  8.4    133  9      164  9.8    195  11.5
10  8.2    41  9.4    72   11.2   103  11.1   134  12     165  10.3   196  11
11  10     42  8.5    73   7.8    104  10     135  11     166  7.6    197  12.8
12  8.5    43  9.8    74   9      105  9.6    136  11.8   167  6      198  10
13  5.6    44  12.6   75   11.8   106  7.5    137  11     168  5.2    199  11.6
14  10.1   45  11     76   10.6   107  11.2   138  10.8   169  6.1    200  11
15  8      46  11.8   77   11.7   108  8.2    139  10.5   170  7.2    201  12.8
16  7.5    47  10.9   78   12     109  11.5   140  11.8   171  7      202  11
17  8.1    48  12.8   79   9.2    110  14     141  9.6    172  9.8    203  10
18  7      49  9.5    80   12.2   111  11.4   142  12     173  10.2   204  11.8
19  10.3   50  7.2    81   9.8    112  10.1   143  10     174  11     205  10
20  11.5   51  7.9    82   8      113  14     144  9.4    175  11.5   206  10.5
21  12.5   52  11     83   14.2   114  10.5   145  8.9    176  14.3   207  11.5
22  11.4   53  10     84   10     115  12.2   146  9      177  11     208  10
23  10.4   54  7.1    85   9      116  13.8   147  12     178  10.3   209  9.7
24  9.8    55  7.6    86   10.9   117  13.6   148  9      179  9.8    210  10
25  14     56  11     87   7.5    118  12.6   149  8.5    180  8      211  7.7
26  11     57  8      88   11.8   119  13.7   150  8      181  6.5    212  8.6
27  13.3   58  11     89   13.7   120  11.5   151  7      182  8.1    213  8
28  7.5    59  10     90   9.5    121  9.8    152  11.2   183  7      214  7.4
29  6.9    60  9.8    91   10.5   122  7.2    153  12.2   184  8.8    215  7
30  7      61  10.4   92   10.1   123  9.7    154  12     185  7.4    216  7.2
31  6.4    62  9.3    93   11.8   124  10.6   155  10.2   186  9      217  9.4

• V[0] = 1.38 (% Cu)², or ±1.17% Cu.
• V[1] = 1.96 (% Cu)², or ±1.40% Cu.
• V[process j=1] = 0.579 (% Cu)², or ±0.76% Cu.
• V[cyclic j=19 = 57 hours] = 1.54 (% Cu)², or ±1.24% Cu. This is a little more than a two-day cycle of quite large amplitude that came as a surprise to management.
• The sill of the variogram is reached at V[sill] = 4.0 (% Cu)².

All this information is stored by the computer and then used as needed in the following steps.


FIGURE 35.2 Definition of Target TA and Specifications US and LS.

FIGURE 35.3 Short-term absolute variogram of the copper content of slag in a copper smelter.

35.1.4. Testing the Capability of the Total Sampling, Measurement, and Sampling Interval

In Figure 35.5 the following features are illustrated:

1. The two horizontal, black lines show the contribution of the V[0] variability using three standard deviations for this random source of variability (i.e., the UCL and LCL


FIGURE 35.4 Long-term absolute variogram of the copper content of slag in a copper smelter.

control limits). It is suggested that these lines should not take more than one third of the variability allotted by the DQO (i.e., US, TA, and LS, illustrated by the three horizontal gray lines). It is easy to see that the sampling and measurement protocol is a big problem. The capability of the sampling protocol to minimize the random variability that is not likely to exist in the process is not good enough, and many things can hide in this variance, such as problems with:

• the subsampling protocol
• the number of collected increments
• a cycle in phase with the sampling interval
• a sampling tool that is not appropriate
• an excessive laboratory variability
• a little of everything listed earlier, which is likely the case.

2. Test the capability of the three-hour sampling interval, calculated using ±√(V[j=1]) and represented in Figure 35.5 by the horizontal dashed black lines UCL' and LCL', corresponding to a substantial variability addition located above UCL and below LCL. The typical process variability between two consecutive three-hour samples seems to be a significant problem.

Remark: The extrapolation of the variogram from a basic three-hour interval to time zero is a risky endeavor. Let's keep in mind that the value V[0] obtained in that case may very well contain nonrandom components taking place along such a three-hour period. It would be advisable to proceed with a variographic experiment, at least once, collecting 30 consecutive samples at very short intervals (e.g., 30-second intervals), and another 60 samples at longer intervals (e.g., five-minute intervals). Such an experiment should not be performed blindly, but only when the analysis of existing data suggests


FIGURE 35.5 Testing the sampling, measurement, and three-hour sampling interval capability: testing the validity of the entire protocol.

such an expense is necessary to analyze mysterious hidden variability components that are in competition with well-defined DQO.

35.1.5. Testing the Additional Contribution of a Cycle

Test the process variability contribution of the cycle. The variogram in Figure 35.4 detected a regular 57-hour cycle of large amplitude, V[cyclic j=19 = 57 hours] = 1.54. The resulting total variability leads to the control limits UCL'' and LCL'', including sampling, measurement, the three-hour sampling interval, and the amplitude of the cycle, as illustrated by the dotted black lines in Figure 35.6:

$$UCL'' = \mathrm{Mean} + 3\sqrt{V[0]} + \sqrt{V[j=1]} + \sqrt{V[\mathrm{cyclic}\ j=19]} \tag{35.1}$$

$$LCL'' = \mathrm{Mean} - 3\sqrt{V[0]} - \sqrt{V[j=1]} - \sqrt{V[\mathrm{cyclic}\ j=19]} \tag{35.2}$$

It is interesting to make the following observations:

1. The sum of the variability contributions of the sampling and measurement protocol, the three-hour sampling interval, and the cycle takes much more than the total variability allowance given by the specifications: the operators of the smelter are not doing a good job at controlling the process.
2. Too many potential problems that are not well accounted for are taking place.
3. The 57-hour cycle is probably poorly understood, if it is even known in the first place. If well understood, such a cycle is always an opportunity to increase the copper recovery, which can be worth an enormous amount of money.


FIGURE 35.6 Testing the sampling and measurement, the 3-hour sampling interval capability and the cycle variability contribution separately.

The maximum of the variogram illustrated in Figure 35.4 is reached for j = 15 = 45 hours and is mostly ruled by the evolution of the cycle. V[trend j=15] is the difference between V[sill] and V[0], which is about 2.57 (% Cu)². At this stage we have accumulated most of the information we can obtain from such existing data. This should not be a one-time exercise, but a continuous practice, looking for example at a two- or three-week moving window to find how such information evolves over time; in other words, it is a powerful QA-QC program.
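The moving-window idea can be sketched in a few lines of Python, reusing the hypothetical absolute_variogram helper sketched in Chapter 34; the window size, step, and the crude sill estimate below are illustrative choices, not prescriptions of the theory.

def rolling_chronostatistics(f, window=112, step=8):
    # Recompute short-range and sill-like summaries over a moving window
    # as a rough QA-QC monitor. Units are numbers of samples: for
    # three-hour composites, a two-week window is 14 * 8 = 112 points.
    out = []
    for start in range(0, len(f) - window + 1, step):
        V = absolute_variogram(f[start:start + window])
        lags = sorted(V)
        v1 = V[lags[0]]                 # short-range level V[j=1]
        sill = max(V.values())          # crude sill estimate
        out.append((start, v1, sill))
    return out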

35.2. Case Study #2: Controlling G/L Solids of a Precipitated Chemical at the Underflow Exit of a Thickener

An unspecified chemical plant is monitored for the g/l solids of a precipitated valuable product at the underflow exit of a thickener. On a routine basis, a 1-liter, 1-hour sample is collected to measure the g/l solids. The plant must comply with the following specifications, believed to work best for economic optimization: a Targeted Average TA of 440 g/l is set; the Upper Specification US, above which too much money is lost, is 490 g/l; and the Lower Specification LS, below which too much money is lost as well, is 390 g/l. On a regular basis, the plant encounters difficulties taming the process variability within the specifications. Explanations for these difficulties have eluded management for some time. Following an audit meeting, 35 consecutive 1-hour assays were collected to make a chronostatistical study, as shown in Figure 35.7 and Table 35.2.


TABLE 35.2
Grams per liter solids in a thickener underflow. Composite samples are collected every hour.

Sample #   g/l solids   Sample #   g/l solids   Sample #   g/l solids
1          430          13         454          25         422
2          396          14         392          26         394
3          480          15         418          27         414
4          348          16         408          28         423
5          470          17         424          29         430
6          330          18         362          30         458
7          488          19         448          31         380
8          386          20         374          32         448
9          448          21         414          33         372
10         388          22         412          34         410
11         412          23         384          35         406
12         362          24         410

35.2.1. Understanding Clearly the Client's Needs

Establish a clear list of realistic process specifications. It can be defined as a "wish list" from the manager who knows within what range of variability the process performs well. It must consist of a Targeted Average TA, an Upper Specification US, and a Lower Specification LS, as shown in Figure 35.8.

35.2.2. Calculation of the Absolute Variogram

Calculate the absolute variogram, emphasizing the terms V[0], V[j=1], W, and W', as illustrated in Figure 35.9. The following features are illustrated:

1. The variogram is shown as a solid black line.
2. V[0] is very large.
3. Overall, the variogram shows a bizarre, systematic pattern, suggesting a cycle of some kind.

The variogram seems to hide something important, which triggered a short-range experiment to help understand the process variability at shorter time intervals. Sixty samples collected every two minutes, under the same conditions, were used to make a thorough study of the short-term process variability. The experimental data are shown in Figure 35.10 and Table 35.3. Immediately, a near-perfect cyclic behavior can be observed.

35.2.3. Interpretation of the Experimental Absolute Variogram

Calculate the long-range absolute variogram, illustrated in Figure 35.11. Following the requirements of limitation #2, a 30-point variogram should be calculated; however, because


FIGURE 35.7 Monitoring g/l solids at the thickener underflow exit.

FIGURE 35.8 Definition of Target TA and Specifications US and LS.

the cycle is so well defined, it was decided to push the variogram to 40 points so the full cyclic curve could be better observed.

• V[0] = 93.2 (g/l solids)², or ±9.65 g/l solids around a given estimate. The extrapolation to V[0] was performed by using the first two points of the first order integral of the variogram.
• V[1] = 386 (g/l solids)², or ±19.6 g/l solids.
• V[process j=1] = 293 (g/l solids)², or ±17.1 g/l solids.


FIGURE 35.9 Illustration of the absolute variogram using 1-hour existing data.

TABLE 35.3
Grams per liter solids in a thickener underflow. Composite samples are collected every 2 minutes.

Sample #   g/l solids   Sample #   g/l solids   Sample #   g/l solids
1          414          21         388          41         398
2          418          22         417          42         411
3          402          23         410          43         407
4          427          24         431          44         447
5          417          25         454          45         461
6          430          26         438          46         455
7          438          27         467          47         470
8          447          28         450          48         441
9          434          29         492          49         490
10         470          30         526          50         512
11         490          31         502          51         500
12         522          32         530          52         560
13         491          33         490          53         538
14         470          34         458          54         482
15         458          35         472          55         493
16         454          36         450          56         452
17         431          37         440          57         426
18         390          38         412          58         432
19         412          39         376          59         390
20         369          40         420          60         347

• V[cyclic j=20 = 40 minutes] = 1470 (g/l solids)², or ±38.3 g/l solids. This is a cycle of huge amplitude that came as a surprise to management, and it is of critical importance. It was introduced by a rake going around the thickener in 40 minutes; a simple way to pick such a period out of the variogram is sketched below.
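As a rough illustration of how such a period can be read off automatically, the following hedged Python sketch scans a variogram dictionary for the first trough after the first bump. For a periodic component, variogram maxima sit near half a period and minima near one full period, so on the two-minute series above a trough near j = 20 points to the 40-minute rake cycle; real variograms are noisy and may need smoothing first.

def cycle_period_lag(V):
    # Lag of the first local minimum that follows the first local
    # maximum of a variogram {lag: value}; the period in time units is
    # this lag times the sampling interval.
    lags = sorted(V)
    seen_max = False
    for prev, cur, nxt in zip(lags, lags[1:], lags[2:]):
        if V[cur] > V[prev] and V[cur] >= V[nxt]:
            seen_max = True
        elif seen_max and V[cur] < V[prev] and V[cur] <= V[nxt]:
            return cur
    return None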


FIGURE 35.10 Monitoring g/l solids at the thickener underflow exit every two minutes.

FIGURE 35.11 Illustration of the absolute variogram using two-minute sampling intervals.


35.2.4. Use of the Variogram Information in a Control Chart

In Figure 35.12 the following features are illustrated:

1. The two horizontal, gray lines show the contribution of the V[0] variability using three standard deviations for this random source of variability (i.e., the UCL and LCL control limits). It is suggested that these lines should not take more than one third of the variability allotted by the DQO (i.e., US, TA, and LS, illustrated by the three horizontal gray lines). It is easy to see that the sampling and measurement protocol is not the biggest problem.
2. Test the capability of the two-minute sampling interval, calculated using ±√(V[j=1]) and represented by the horizontal dashed gray lines UCL' and LCL', corresponding to a substantial variability addition located above UCL and below LCL. The typical process variability between two consecutive two-minute samples seems to be significant, but again, it is not the biggest problem.
3. Test the process variability contribution of the cycle. The variogram detected a regular 40-minute cycle of large amplitude, V[cyclic j=20] = 1470 (g/l solids)². The resulting total variability leads to the control limits UCL'' and LCL'', including sampling, measurement, the two-minute sampling interval, and the amplitude of the cycle, as illustrated by the two dotted gray lines.

It is interesting to make the following observations:

• The sum of the variability contributions of the sampling and measurement protocol, the two-minute sampling interval, and the cycle takes much more than the total variability allowance given by the specifications: there is no way the process can be optimized under the present conditions, even if a sample were taken every two minutes.
• The 1-liter sample leading to V[0] is itself a problem, as clearly shown by the UCL and LCL control limits, shown as gray lines. Perhaps the use of an on-line analyzer would improve the situation.
• The 40-minute cycle is by far the largest concern, and it should indeed lead to serious engineering modifications.

35.3. Conclusions

The two case studies, the one studied in Chapter 34, and many others investigated by the author show that valuable information can always be learned from a careful variographic investigation of existing chronological data selected within a range characterized by reasonable stationarity. If such stationarity does not hold, the chronological series of data can always be broken up into different periods, and the variographic analysis can then be performed for each period separately. This information can be used for continuous process improvement guidance, such as:

• Find out if the sampling protocols and sampling equipment perform reasonably well.


FIGURE 35.12 Testing the sampling and measurement, and two-minute sampling interval capability separately.

• Find out if process control sampling is performed frequently enough, so process drifts are tamed in due time. • Find out if costly process cycles are present, offering opportunities for better metal recoveries, better optimization of mills, minimum additions of special processing reagents, rarer process overcorrections, more efficient training of people working on different shifts, and so on. The list could continue and give food for a book. • Perform more effective cost and benefit analyses for new and existing projects by pointing clearly at each source of data variability. • Create more effective process control strategies by minimizing the irrelevant variability that does not exist in the process and better focusing on the relevant variability that does indeed exist in the process. • Provide valuable guidance for experimental design decisions.


Part XI

Homogenization


36 An Introduction to Homogenizing Processes

36.1. Introduction

The purpose of a homogenization process is to diminish the amount of distribution heterogeneity carried by the constituents of interest in a given lot. This lot may be made of particulate solids, of a pulp with suspended solids, or of a liquid made of one or several phases. If we remember the contents of Chapter 6, it is clear that homogenization processes concern only the distribution heterogeneity. They have no effect on the constitution heterogeneity, which is an intrinsic property of the material making up the lot. At this stage, we can already make an important observation: homogenization has absolutely no effect on the value of the variance of the Fundamental Sampling Error FSE affecting a given sampling protocol. It has an effect only on the variance of the Grouping and Segregation Error GSE. From equation (6.52), we saw that the minimum reached by the distribution heterogeneity is never zero, while the maximum distribution heterogeneity is equal to the constitution heterogeneity itself. Therefore, the natural distribution heterogeneity (DH_L)_NAT varies between a maximum (DH_L)_MAX = CH_L and a minimum (DH_L)_MIN. The purpose of a homogenization process is to make (DH_L)_NAT approach (DH_L)_MIN.

The kind of homogenization we were talking about in Chapter 6 is specific to discontinuous processes; in other words, it was defined for a given finite set of fragments. Therefore, homogenization by mixing is only a discontinuous process, such as:

• Mixing a closed vial
• Mixing the content of a beaker with a stirrer
• Pumping the content of a tank in a closed circuit
• Pneumatic homogenization of the content of a silo
• Mixing a sample inside a vee-blender, and so on.

These discontinuous processes require enormous amounts of energy; therefore, they are economical only for small lots. This is the reason why we mainly encounter these processes in laboratories or small pilot plants. To solve these difficulties for large amounts of material, the technique of bed-blending has been developed. It is a one-dimensional homogenization process, which is a degeneration of the three-dimensional homogenization process. Bed-blending can be performed discontinuously or continuously. Because bed-blending is a very important technique of homogenization for industry, Chapter 37 is entirely devoted to it.



36.2. Classification of Homogenization Processes

There are two kinds of homogenizing processes:

1. Stationary processes, in which the material is not mixed but, instead, cleverly redistributed in such a way that any correlation between the personality of a fragment and its position has been completely, or at least partially, eliminated. Bed-blending is the best example of this category.
2. Dynamic processes, all based on mechanical mixing, whose principle is to eliminate by force the correlation, introduced by gravity, between the personality of a fragment and its position.

In both categories, we may distinguish continuous and discontinuous processes.

36.2.1. Stationary and Discontinuous Processes

These processes are based on the principle of the moving average. One of the best examples is Kelly's Process,214 which has been described by Gy in one of his books.78

36.2.2. Dynamic and Discontinuous Processes

These processes work in a closed or open container on a fixed amount of material; this is often called a batch homogenization process. For a given period of time the entire mass of the lot is mixed (i.e., a dynamic process). During the same period no exchange of any kind with the exterior of the container is permitted (i.e., a discontinuous process). The mixing technique varies widely from one process to another; however, the objective remains strictly the same: a probabilistic objective consisting of destroying any correlation between the personality of a fragment and its position within the lot. This objective is fulfilled when the distribution heterogeneity DH_L approaches its minimum value (DH_L)_MIN. At this stage, any increment I selected from the lot is a correct increment representative of the entire lot, if its recovery is correct (IEE), of course. Therefore, the population of all potential increments of the same weight that could be selected from the lot has an average equal to the average of the lot L and a variance equal to the variance of the Fundamental Sampling Error characterizing the lot and the selected increment mass. Now, we may describe a few of these processes.

36.2.2.1. Mechanical Mixing of Test Tube Samples

Mixing of materials in laboratory test tubes is extremely important in chemistry and biology. The quantities on which homogenization is performed can be very small, such as a few milliliters. Also, mechanical mixing is often performed on many test tubes at the same time. There are several systems, such as:

• Blood pipette shaker: motor-activated rocking controlled motion. At a rate of several hundred oscillations per minute, the minimum distribution heterogeneity can be reached in less than one minute.


• Nutator mixer: vials or test tubes are fixed on an oscillating plate which provides continuous and gentle mixing.
• Tube rotator: tubes are attached to a rotating plate mounted at an angle. This allows mixing of the material in the tubes in all possible planes.
• Vortex mixers, and so on.

36.2.2.2. Mechanical Mixing of Closed Containers

Closed containers such as bottles can be effectively mixed by rotating them about a tumbling axis. The Turbula mechanical mixer is a good example of this type of system.

36.2.2.3. Magnetic Stirrers

Magnetic stirrers are commonly used to homogenize the content of an open beaker, Erlenmeyer flask, or other flask. They are very effective for homogenizing liquids, and liquids with small amounts of suspended solids, as long as the solids do not have magnetic properties.

36.2.2.4. V-Blenders

In Chapter 13, Figure 13.2 illustrates a V-blender. To obtain an effective mixing action, V-blenders should not be filled too much. A rule of thumb is not to fill the blender above the level of the rotating axle. When filled above the axle of rotation, the time required to reach the minimum distribution heterogeneity increases exponentially as the blender becomes fuller. These systems are used to homogenize lots weighing from a few hundred grams at the laboratory, to a few hundred kilograms at pilot plants, or a few tons in some industries such as the recycling of precious metals.

36.2.2.5. Stirrers Coupled with Propellers or Paddles

These systems operate in large containers in the chemical and food industries. They are used to mix large volumes of slurries and pulps, sometimes up to several hundred tons. They are also used in laboratories on a much smaller scale.

36.2.2.6. Sheet Mixing or Rolling Technique

Some chemists, technicians, or operators discharge a sample onto a sheet of paper, cloth, or other material and try to homogenize the sample by taking opposite corners and rolling the sample. The same operation is done several times for each diagonal. The only advantage of this procedure is that no special piece of equipment is required. Time is not an advantage, because the procedure of rolling the sample is not, by any means, shorter than using a riffle splitter several times to achieve the same task, which is far more effective. The rolling technique is time-consuming and its effectiveness is highly questionable. In fact, homogenizing using a rolling technique is often only an illusion and may actually induce strong segregation. If the reader is not convinced of this, it is suggested that he mix 10 grams of black wolframite or 10 grams of black biotite with 100 grams of white quartz and try to homogenize the mixture using the


rolling technique. From this experiment, the reader can draw his own conclusions. We do not recommend this technique.

36.2.3. Dynamic and Discontinuous Processes with Circulation in a Closed Circuit

In this case only a fraction of the lot is homogenized during a given period of time. This system may be used to homogenize the content of a silo, but it is effective only with fine powders, liquids with suspended solids, and slurries. The effectiveness of the homogenization can be checked by comparing the composition of a few random increments selected at the top of the lot with the composition of a few random increments selected at the bottom of the lot. When these compositions are the same, we know that the natural distribution homogeneity, or minimum distribution heterogeneity, has been reached.

36.2.4. Dynamic and Continuous Processes with Circulation in a Closed Circuit

These processes are different from those mentioned in Section 36.2.2 because the introduction and extraction of material is continuous. In Chapter 21, Figure 21.29 illustrates a header tank, which is a typical example of such a homogenizing process. In this holding tank there is a three-dimensional homogenization of the pulp. The larger the holding tank, the larger the buffering effect of the homogenization. Let's define:

M_C: the weight of material in the holding tank
M_0(t): the average rate of the incoming material
T_C: the average retention time of the material in the holding tank

Then we can calculate:

$$T_C = \frac{M_C}{M_0(t)} \tag{36.1}$$
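For instance, with purely hypothetical figures, a header tank holding M_C = 120 tons of pulp fed at an average rate M_0(t) = 60 tons per hour gives T_C = 120/60 = 2 hours; quality fluctuations of the incoming pulp much shorter than this retention time are largely buffered, while fluctuations much slower than the retention time pass through the tank almost unchanged.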

36.2.5. The Particular Case of Ball and Rod Mills

These mills are mainly used in the mineral processing industry. In addition to their grinding capabilities, they also act as dynamic and continuous homogenizing processes. They have an important role in the cement industry, where they are installed just before the pre-homogenization process. With a retention time of several minutes, they produce an appreciable homogenization of the material; however, we should remain cautious of the fact that, when used in closed circuit, they often introduce periodic fluctuations with a period between 10 and 20 minutes. Such a periodic phenomenon can be extremely detrimental to a process.

37 Bed-Blending Techniques

37.1. Introduction

Bed-blending is a compositing and homogenizing process that allows the preparation of a process feed material with an average composition within preestablished specifications. As quality standards become more severe, process control is also becoming more important in finding ways to improve both quality and productivity. These objectives become much easier to fulfill if a homogeneous raw material input can be prepared. Bed-blending is intended to do that; with it, the engineer can mix, in a very effective way, rich and poor raw materials, or different materials, into a constant feed with given characteristics. Bed-blending simplifies an entire process and serves as a buffer between the raw materials stock and the processing plant. Many operations could use bed-blending systems, which would considerably improve the quality of their output products (e.g., blast furnaces, sinter plants, cement plants, coal cleaning plants, fossil-fuel-fired steam generating stations, primary aluminum smelters, chemical plants, ceramic and glass plants, and so on).

37.2. Continuous and Discontinuous Fluctuations

Pierre Gy's bed-blending theory is directly derived from his sampling theory and clearly differentiates continuous fluctuations, which can be corrected by bed-blending, from discontinuous fluctuations, which cannot.66,77,78 It is essential for the user of a bed-blending technique to identify and quantify the types of heterogeneity the input materials are carrying, because if discontinuous fluctuations are overwhelming, homogenization may become a loss of time and money. We saw in Chapters 5 and 6 that the homogenization of a material carrying a large constitution heterogeneity is always wishful thinking, because constitution heterogeneity is a form of heterogeneity we can do nothing about. The following summary of the theory of bed-blending emphasizes the use of the variogram to optimize bed-blending. Because bed-blending is expensive, it is important to determine the optimum number of layers beyond which bed-blending systems rapidly become ineffective. The way variograms are used to measure the efficiency of bed-blending can be extended to the study of the efficiency of other mixing units such as grinding mills, homogenization in storage bins, homogenization in rotary blenders, and so on.


37.3. Development of a Suitable Model

We shall try to identify and separate the aspects of the overall problem that are relevant only to the second order from those that are relevant to the first order. Our model will be effective only if we succeed in making this separation clearly, which implies that we accept a few simplifying assumptions.

37.3.1. Assumption #1: Constancy of the Input Stream Flow Rate

For the bed-blending process to be effective, it is important that all layers have the same thickness, or at least be affected by fluctuations that are relevant only to the second order. A flow rate affected by one standard deviation equal to ±20% can be regarded as acceptable, as long as the fluctuations are random and not related to the stacker position along the pile.

37.3.2. Assumption #2: Constancy of the Stacker Speed

The constancy of the stacker speed is a critical factor. It ensures that all layers have the same thickness all along the pile, including the extremities. Random fluctuations are acceptable as long as they are relevant only to the second order.

37.3.3. Assumption #3: Preparation of the Pile Following a Preestablished Mode

There are only two ways to prepare the pile:

• The layers are superposed. This is called the chevron method, which is illustrated in Figure 37.1. This method is the easiest, simplest, and cheapest. However, it can be affected by segregation and crumbling effects that may introduce some short-term

FIGURE 37.1 Bed-blending performed by using the chevron method: combination of segregation and crumbling effects on the sides of the pile.


FIGURE 37.2 Bed-blending performed by using the windrow method.

distribution heterogeneity. In many cases, these effects may only be relevant to the second order.
• The layers are juxtaposed. This is called the windrow method, which is illustrated in Figure 37.2. This method is slightly more complicated to implement; however, it presents the advantage of minimizing small-scale segregation and crumbling effects. It may, to a small extent, minimize the short-term distribution heterogeneity.

Usually the total number of layers 2Z (i.e., Z layers in one direction and Z layers in the other direction) is larger than in the figures, where it has been minimized for clarity.

37.3.4. Assumption #4: Constancy of the Length of the Layers

The pile should be a prism with a uniform triangular cross-section lying on its side, as indicated in Figure 37.3. A concrete structure can be placed at one end of the pile to ensure both the constancy of the layers' length and the regularity of the reclaiming operation. Without this concrete structure, the reclaimer may start to send extremely irregular output material to the process, as the number of layers would rapidly diminish in each reclaimed slice. This is unacceptable and should be prevented.

37.3.5. Assumption #5: Constancy of the Speed of the Reclaiming System

The material is recovered by successively removing thin slices across the whole cross-section of the pile, as indicated in Figure 37.4. It is important that all layers be reclaimed in an equal amount each time a slice is sent to the output stream (constant weighted amounts if different materials are mixed together).


FIGURE 37.3 Illustration of the necessary concrete structure placed at one end of the pile to ensure the regularity of the reclaiming operation.

FIGURE 37.4 Illustration of the reclaiming process of the pile: N slices made of 2Z increments each.

37.4. Parallel between the Reclaiming Phase and Systematic Sampling

A parallel can be made between the reclaiming phase of the bed-blending process and the systematic sampling of the input material. For practical purposes, we may assume that the time necessary for the stacker to deposit one layer is equal to T_C, and the number of layers equal to 2Z. Under these conditions, the time T_L necessary to prepare the pile is:

$$T_L = 2Z\,T_C \tag{37.1}$$

Now, if we reclaim the pile vertically as illustrated in Figure 37.4, the composition of each of the N reclaimed slices can be conveniently schematized by reconstructing the input stream of the material, as mentioned later in this chapter.


It can be observed that the 2Z increments can be grouped in such a way that two systematic samples are obtained:

1. One sample made of the odd increments (i.e., the increments corresponding to the Z layers deposited in the starting direction). It is the interleaved sample S_ODD.
2. One sample made of the even increments (i.e., the increments corresponding to the Z layers deposited in the returning direction). It is the interleaved sample S_EVEN.

Therefore, these two samples are respectively made of Z increments extracted from the input stream at a uniform interval θ = 2T_C, as indicated in Figure 37.5. Any slice S is equal to the reunion of the two systematic interleaved samples S_ODD and S_EVEN:

$$S = S_{ODD} + S_{EVEN} \tag{37.2}$$

$$S_{ODD} = I_1 + I_3 + I_5 + \ldots + I_{2Z-1} \tag{37.3}$$

$$S_{EVEN} = I_2 + I_4 + I_6 + \ldots + I_{2Z} \tag{37.4}$$

The theory of sampling shows how the moments of the sampling error associated with the two samples S_ODD and S_EVEN representing the entire pile L can be expressed in terms of the variographic parameters of the input material. If a_L is the critical content of the entire pile L and a_S is the critical content of the slice S, then we may express the mean and the variance of the Heterogeneity Fluctuation Error HFE exactly in the same way as in the theory of sampling:

$$HFE = \frac{a_S - a_L}{a_L} \tag{37.5}$$
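To make the construction concrete, here is a minimal Python sketch of equations (37.2) to (37.5), assuming equal increment masses so that contents can simply be averaged; the function name and arguments are illustrative, not part of the theory.

import numpy as np

def slice_hfe(increment_contents, a_L):
    # HFE of one reclaimed slice, built as the union of the odd and
    # even interleaved samples; increment_contents lists the 2Z
    # critical contents of the layers cut by that slice, in stacking
    # order, and equal increment masses are assumed.
    inc = np.asarray(increment_contents, dtype=float)
    s_odd = inc[0::2].mean()    # layers laid in the starting direction
    s_even = inc[1::2].mean()   # layers laid in the returning direction
    a_S = (s_odd + s_even) / 2.0
    return (a_S - a_L) / a_L    # equation (37.5)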

FIGURE 37.5 Illustration of the increments making up any given slice.


37.4.1. Conditions for the Nullity of the Mean m(HFE) of the Heterogeneity Fluctuation Error HFE

The total number of layers 2Z being usually very large, whether the ratio 2Z = T_L/T_C is a whole number or not, the mean m(HFE) can be very close to zero:

$$m(HFE) \approx 0 \tag{37.6}$$

Nevertheless, there are some precautions to take:

• If a correcting material is added to the pile, it must be added in a constant amount all along each layer, from time 0 until time T_L. This is particularly true if our intention is to mix several materials together.
• If a periodic component is present with a period t_P equal to a submultiple of the time T_C necessary to generate one layer, then the content a_S of a slice S reclaimed at a time T_S is not a strictly random variable. A corrective action becomes necessary:
  ○ Change the time T_C.
  ○ Find and suppress the cause of the periodicity.

If we satisfy these conditions, each slice S can be considered a random and representative (i.e., accurate and precise) sample of the pile L, and its content a_S is an unbiased estimator of the content a_L of the pile.

37.4.2. Consequences of the Nullity of the Mean m(HFE)

The time series represented by the various increments that could be collected from the output stream at a certain constant interval T_S is comparable to a random series, which would be characterized by a flat variogram. Therefore, a bed-blending system transforms any input variogram into a flat output variogram by minimizing:

• The long-range term HFE_2
• The periodic term HFE_3

When the output variogram is flat, it can be characterized by a single parameter: its sill. The sill is equal to the variance s²(HFE) of the critical content of the increments involved.

37.4.3. Properties of the Variance s²(HFE)

Let us call:

j: the basic interval of the variogram
V(j): the variogram obtained on the input material. Let's assume that it is a parabolic variogram whose equation is:

$$V(j) = V_0 + V_1 j + V_2 j^2 \tag{37.7}$$

V_0: the value taken by the variogram when j = 0, assumed to be expressed for increments of weight M. The sampling theory shows that for increments of weight M' essentially different from M, the value of the variogram when j = 0 is:


$$V_0' = V_0 \frac{M}{M'} \tag{37.8}$$

W_SY(j): the continuous selection error of the systematic sample when the sample is made of a single increment. According to Gy,77,78 for a parabolic variogram, W_SY(j) can be expressed as follows:

$$W_{SY}(j) = V_0 + \frac{V_1 j}{6} \tag{37.9}$$

s²(HFE)_ODD: the variance associated with the sample S_ODD, whose weight is ZM
s²(HFE)_EVEN: the variance associated with the sample S_EVEN, whose weight is ZM.

If Z is the number of increments per sample, the theory of sampling, as described in Chapter 7, allows us to write:

$$s^2(HFE)_{ODD} = s^2(HFE)_{EVEN} = \frac{W_{SY}(j)}{Z} \tag{37.10}$$

$$j = 2T_C \tag{37.11}$$

$$Z = \frac{T_L}{2T_C} \tag{37.12}$$

$$s^2(HFE)_{ODD} = s^2(HFE)_{EVEN} = \frac{2T_C\,W_{SY}(2T_C)}{T_L} \tag{37.13}$$

The variance s²(HFE) associated with the sample S, made of the combination of both samples S_ODD and S_EVEN, is therefore half of the variances s²(HFE)_ODD and s²(HFE)_EVEN:

$$s^2(HFE) = \frac{T_C\,W_{SY}(2T_C)}{T_L} \tag{37.14}$$

This can be written as a function of the total number of layers 2Z making up the pile L:

$$s^2(HFE) = \frac{V_0}{2Z} + \frac{V_1 T_L}{12Z^2} \tag{37.15}$$

Now, instead of considering the variance s²(HFE) associated with increments of weight M, let us consider the variance (s')²(HFE) that would be associated with a sample whose increments have a weight M' = M/2Z. Our objective is to compare the input variogram with the output variogram, and these must have been computed from slices having the same average weight. However, the variance s²(HFE) was associated with a sample of weight 2ZM, while the variance (s')²(HFE) is associated with a sample of weight M. From equation (37.8) we can write:

$$(s')^2(HFE) = \frac{V_0'}{2Z} + \frac{V_1' T_L}{12Z^2} = V_0 + \frac{V_1' T_L}{12Z^2} \tag{37.16}$$


But we know that, for a given lot, the parameters V_1' and V_1 are independent of the increment weight; therefore:

$$V_1 = V_1' \tag{37.17}$$

Then:

$$(s')^2(HFE) = V_0 + \frac{V_1 T_L}{12Z^2} \tag{37.18}$$
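Equation (37.18) is easy to explore numerically. A hedged Python sketch follows, with purely hypothetical variogram parameters, showing how quickly the correcting term dies out as the number of layers grows:

def output_sill(V0, V1, TL, layers):
    # Variance of the reclaimed slices per equation (37.18):
    # (s')^2(HFE) = V0 + V1*TL / (12 Z^2), with 2Z layers in the pile.
    Z = layers / 2.0
    return V0 + V1 * TL / (12.0 * Z**2)

# Hypothetical input variogram: V0 = 0.5, slope V1 = 2.0 per hour,
# and TL = 48 hours to build the pile.
for layers in (10, 20, 50, 100, 400):
    print(layers, output_sill(0.5, 2.0, 48.0, layers))

With these illustrative numbers, the correcting term falls from 0.32 at 10 layers to about 0.0002 at 400 layers, while V_0 stays untouched: beyond some number of layers, adding more buys essentially nothing, which is the diminishing-returns argument of Section 37.2.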

37.4.4. Sill of the Output Variogram

By definition, the sill of the flat variogram is the variance of the samples of weight M which could be extracted after the reclaiming of the pile. This variance is expressed by equation (37.18). Thus, it is composed of:

• V_0, the principal term, which is the value of the variogram when j = 0.
• A correcting term whose denominator is proportional to Z², therefore very large, which suggests that it will very often be negligible.

Conclusion: If the total number of layers 2Z is relatively large, the sill of the output variogram is practically equal to the short-range term V(0) of the input variogram.

37.5. Testing the Effectiveness of a Homogenization Process
We can now see that the effectiveness of a homogenization process can be conveniently tested by conducting one variographic experiment before the homogenization and another after it. For the sake of convenience, we will take the example of bed-blending, conducting a variographic experiment with two sampling stations A and B as illustrated in Figure 37.6. The same type of experiment can be performed to test the effectiveness of any homogenization process, such as those taking place inside bins or silos, rotary blenders, grinding mills, and so on.

37.5.1. Necessary Precautions during the Experiment
Let us assume that the weights of the increments collected at sampling stations A and B are the same, and that the sampling and subsampling procedures performed on these increments are strictly the same. Each increment should be submitted to a series of comminution and splitting operations respecting the rules of sampling correctness. Any additional Fundamental Sampling Error introduced in the sampling protocol should be relevant only to the second order (e.g., sFSE = ±2 or 3% can be regarded as acceptable).


[FIGURE 37.6 Schematization of sampling points A and B where the variographic experiments conducted on input and output materials should be performed. During the stacking phase of the pile L, the stacker builds forward and reverse layers from the input stream, which is sampled at station A; during the reclaiming phase, the reclaimer takes slices S between the concrete walls and feeds the output stream, which is sampled at station B.]

37.5.2. Determination of the Punctual Variance of the Input Material
At least 35 increments are collected at station A at very short intervals hP (e.g., θMIN = hP = 10 seconds). The value of this interval may vary depending on local conditions and the sampling equipment at our disposal. If we respect the conditions given in Sections 7.5.1 and 7.5.2, we should obtain a variogram with 10 points like the one illustrated in Figure 37.7. The interpretation of this short-range variogram can be done by following the recommendations given in Section 7.12.3. The value of V(0) is s²P.

[FIGURE 37.7 Typical variogram generated by the short-range quality fluctuation of the input material. Vertical axis: V(h), with the sill and the extrapolated intercept V(0) indicated; horizontal axis: nhP in seconds.]

37.5.3. Determination of the Parameter V1 of the Input Variogram
The determination of the slope V1 of the input variogram in the useful domain of the time interval TC can be achieved by collecting at least 60 increments at intervals θMIN = hS = TC/2 at station A. This series gives an indication of the rate of variability of the input material for the critical constituents. We shall notice that we have to optimize TC, then proceed by iteration if necessary. We should obtain a long-range variogram like the one illustrated in Figure 37.8. The first thing to do is to check for periodicity and, if there is any, to make sure that the period tP is not a submultiple of the time TC necessary to generate one layer.

[FIGURE 37.8 Typical variogram obtained in the useful domain of the time interval TC. Vertical axis: V(h), with the period tP and the amplitude of the cycle indicated; horizontal axis: nhP in minutes.]
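For readers who wish to compute these variograms, the sketch below (not from the book) implements the classical experimental variogram used in Sections 37.5.2 and 37.5.3, assuming a list of increment assays collected at a constant interval. V(0), i.e., s²P, is then estimated by extrapolating the first points back to j = 0, and the slope V1 by fitting a straight line through the points of the long-range variogram.

    def variogram(a, max_lag):
        # Experimental variogram of a series a of N increment assays:
        # V(j) = sum over m of (a[m+j] - a[m])^2 / (2 * (N - j))
        N = len(a)
        return {j: sum((a[m + j] - a[m]) ** 2 for m in range(N - j)) / (2.0 * (N - j))
                for j in range(1, max_lag + 1)}

    # Example of use: V = variogram(assays, 10); V[1], V[2], ... are the
    # variogram points plotted against the lag j.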


37.5.4. Calculation of the Variogram of the Very Long Term
The calculation of the variogram of the very long term requires at least 40 increments collected at very long intervals hC at station A (e.g., θMIN = hC = 90 minutes). Again, the value of this interval may vary widely according to special local conditions. With this variogram we have two main objectives:
• Characterize the long-range fluctuations of the input material and compare them with the fluctuations of the output material. This gives an idea of the effectiveness of the bed-blending system for a given value of the total number of layers 2Z.
• Estimate the variance s²C of a population of increments collected at intervals long enough to ensure they are uncorrelated.
We should obtain a variogram with about 15 points, like the example illustrated in Figure 37.9.

37.5.5. Calculation of the Variogram of the Output Material
At least 40 increments at intervals hE consistent with the intervals hC are collected at station B. For consistency, make:

    hE = hC    (37.19)

[FIGURE 37.9 Variogram of the very long term for the input material. Vertical axis: V(h), with the sill and V(0) indicated; horizontal axis: nhP in hours.]


[FIGURE 37.10 Comparison of the very long-range input variogram with the output variogram. Vertical axis: V(h), showing the higher sill of the input variogram for station A and the lower, flat sill of the output variogram for station B, both above V(0); horizontal axis: nhP in hours.]

We should obtain a variogram with about 15 points, like the example illustrated in the lower part of Figure 37.10. From the study of this variogram, if our estimation of the optimum number of layers 2Z is correct, we shall notice that all the variances introduced by earlier trends and cyclic phenomena have practically disappeared. However, the sill of the output variogram is still at least equal to s²P, the sill of the short-range variogram obtained for the input material at station A, which is V(0). No blending system is capable of homogenizing beyond this point.
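A small simulation (not from the book; the two series and their parameters are purely hypothetical) illustrates the behavior shown in Figure 37.10: an input series carrying a trend produces a climbing variogram, while an output series reduced to random fluctuations produces a flat variogram whose sill cannot fall below V(0).

    import random

    def variogram(a, max_lag):
        # Same experimental variogram as sketched in Section 37.5.3.
        N = len(a)
        return {j: sum((a[m + j] - a[m]) ** 2 for m in range(N - j)) / (2.0 * (N - j))
                for j in range(1, max_lag + 1)}

    random.seed(1)
    a_in = [10.0 + 0.05 * m + random.gauss(0.0, 0.5) for m in range(40)]   # trend present
    a_out = [12.0 + random.gauss(0.0, 0.5) for m in range(40)]             # after blending
    V_in = variogram(a_in, 15)
    V_out = variogram(a_out, 15)
    # V_in climbs steadily with the lag, while V_out stays flat near its sill
    # of about 0.25, the variance of the residual random fluctuations that no
    # blending system can remove.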

37.6. Conclusions
This theory of bed-blending clearly shows that, for a given material, there is a state of minimum heterogeneity beyond which the homogenizing system becomes inefficient. In fact, no homogenizing system can go beyond this limit, because this minimum heterogeneity is fixed by the intrinsic constitution heterogeneity carried by the constituent of interest. There is nothing we can do about it. This conclusion is in disagreement with classical statistics, which has stated that several stages of homogenization could practically reduce the heterogeneity to zero. Such a statement is essentially wrong.
Bed-blending is the perfect means for engineers to perform an effective and reproducible homogenization prior to a process. Money spent on bed-blending can probably be recovered by a simplification of the process, whose feed becomes very consistent and predictable. Bed-blending also offers the advantage of minimizing the effects of unusual small tonnages of out-of-specification input material, because of the tremendous buffer provided by the large number of layers. When bed-blending is well conceived, it offers a constant feed characterized by a state of homogeneity that does not need any extra mixing prior to the process.


Part XII

Recommendations to Manufacturers of Sampling Equipment and to Engineering Firms


38 Recommendations for the Design, Installation, and Maintenance of Sampling Systems

A large majority of the sampling systems found on the market, and selected, installed, and maintained by engineering firms and their clients today, transgress the most elementary rules of sampling correctness, or sampling equiprobability. Over 30 years, the author visited and inspected sampling systems in more than 300 industrial facilities around the world (e.g., milling facilities, flotation plants, aluminum and copper smelters, cement plants, steel plants, coal cleaning facilities, coal-burning power plants, coal mines, gold and copper mines, chemical plants, refineries, commercial laboratories, and many others), and found only a few sampling stations that were well designed and well maintained. This is a very sad conclusion, especially at a time when most industries are starting to apply principles of Total Quality Management (TQM) and of Statistical Process Control (SPC and Six Sigma programs). It should be clearly understood by anyone concerned with the quality of a product that SPC or Six Sigma without precise and accurate sampling is an exercise in futility, just as TQM without SPC is an exercise that will never bear fruit for a company.

38.1. Attempt to Define the Nature of the Problem
During these inspection visits, the same questions came to mind many times over. Let's try to answer some of these questions.

38.1.1. Why Is It That Sampling Systems Don't Work?
• Sampling systems don't work because they are built without regard to increment delimitation correctness, increment extraction correctness, increment preparation correctness, or the intrinsic physical properties of the material to be sampled.
• Sampling systems don't work because they are installed after the plant has been built, as an afterthought.
• Sampling systems don't work because they are located in inaccessible areas, making regular and preventive maintenance often impossible to perform.
• Sampling systems don't work because no one maintains them properly, nor understands the most elementary functions assigned to these sampling systems.
• Sampling systems don't work because they are not profit centers, at least in the eyes of those who are convinced that the future is no more than 90 days ahead of us.


• Sampling systems don't work because they require the attention of specialists who understand the complexity of a process and the limitations of human nature. More often than not, sampling systems are under the responsibility of people who still believe that if everybody was doing as told, everything would be fine.
• Sampling systems don't work because they are a clear reflection of a short-sighted management style that has plagued our industries for the last 70 years in the name of quick profit, without respect for quality, without vision for the future.
• Basically, because of the ways they are built, installed, maintained, and cleaned, sampling systems cannot be equiprobabilistic.

38.1.2. Why Is It That Incorrect Sampling Systems Are Built?
Many brochures advertising sampling equipment show how little manufacturers know about the statistical correctness of their devices. Indeed, some of the statements made in such brochures show beyond any possible doubt that the engineers making most sampling systems available on the market follow a strategy that can lead to sampling correctness only by accident, which is very rare. To be more explicit, let's take a few examples.
• A cross-belt swinging hammer sampler is advertised as taking samples representative of the entire stream, when the cutter actually gets full by the time it is halfway across the stream.
• A tubular line probe sampler is advertised to take reproducible and unbiased samples, when it takes only a tiny portion of a stream in a state of segregation, which is changing all the time.
• An online analyzer with a stationary gamma source above a large coal stream is advertised to be unaffected by across-the-stream segregation, when the beam reaching the detector goes only through a small part of the stream, and always looks at the same part of the stream.
• A slurry sampling system in a flotation plant consists of a primary sampler made of a submerged pump in a turbulent stream assumed to be homogeneous. There is no such thing as a homogeneous liquid stream with suspended solids.
The list can go on and on. In other words, manufacturers build sampling systems with the following list of priorities in mind:
• The sampling system must be practical: the smaller the sample, the better.
• The sampling system is primarily a mechanical engineer's toy.
• The sampling system must be a good-looking and expensive device, to impress the customer and make a lot of money.
• The sampling system must be standardized, regardless of specific situations.
• The sampling system must not need to be maintained and cleaned too often.
• The sampling system must be probabilistic: "Well, all right, why not!" This is the engineer's response.
• The sampling system must be equiprobabilistic: "What the heck is that?" Euphemisms often picture the mediocrity of our world.


This scenario is the main cause of the problem, which is a combination of the practicality syndrome, an excuse for mediocrity through the down-to-earth, so-called real-world channel, and a direct departure from W. Edwards Deming's principles of building quality products through a good and necessary statistical understanding. In other words, manufacturers must learn to build sampling systems by following a new list of priorities, and a few are actually starting to do that:
• The designer and the mechanical engineer must understand the basic concepts of probability and statistics.
• The designer and the mechanical engineer must understand the critically important notion of stream heterogeneity: there are three kinds of stream heterogeneities.
• The designer and the mechanical engineer must understand the Theory of Sampling thoroughly, and all the various types of sampling errors that are likely to affect the sampling system: there are eight different types of sampling errors to be understood in an unambiguous way; many manufacturers can barely name three of them, and when they do, they confuse them with one another.
• The sampling system must be equiprobabilistic, with no compromise possible on this condition. It is a very stringent condition requiring the fulfillment of many criteria.
• The sampling system must be accessible, with large inspection windows positioned in the right places, and equipped with safety screens, to quickly inspect and clean cross-stream cutters in a safe way.
• All cutter driving systems must be electric.
• All free falls must be kept to a strict minimum.
• All sampling systems must be installed during the construction phase of a plant: retrofitting always presents unwelcome problems and unnecessary extra capital expenditures. In this respect, engineering firms have the responsibility of being realistic in their feasibility studies (e.g., is it conceivable that a multibillion-dollar aluminum smelter can perform at its best without a single correct sampling station? Most of the time, operators take samples with a recycled coffee can!).
• All sampling systems must be easy to maintain and clean.
• All sampling systems must be built with good-quality stainless steel where applicable.
In short, incorrect sampling systems are built because of ignorance, negligence, carelessness, and the search for quick and easy profit, and because good mechanical engineers are not necessarily aware of the most basic principles of sampling statistics. Many are those who believe sampling is just a ritual ceremony, doing little good for the company.

38.1.3. List of Typical Incorrect Sampling Systems
The following list concerns samplers with one or several major shortcomings regarding increment delimitation and increment extraction correctness; they just cannot be accurate.
• Thief probes
• Augers


• Sampling bombs for volatile liquids with suspended solids
• In-stream point sampling probes
• By-stream point sampling probes
• In-stream line samplers, by-stream line samplers, by-stream slide gates
• By-stream slide valves
• Tubular line probes using rotating screw extraction
• Point samplers at the bend of a pump discharge line
• Point samplers with by-stream popper valves
• Flap samplers
• Single or multiple stationary cutters located in the stream
• Pressure pipe samplers, and so on.
All these systems should be slowly phased out of the market after the necessary research has been conducted to find much better substitutes. Thanks to the fast progress generated by the many World Conferences on Sampling and Blending (WCSB), better substitutes exist today on the world market.

38.1.4. Why Do Customers Still Support the Use of Incorrect Samplers?
The answer to that question is actually very straightforward: to this day, there is no accountability, at a very high level of management, for transgressing due diligence requirements on sampling. Moreover, the people who buy sampling systems are not necessarily aware of the most basic principles of sampling correctness, nor are the salesmen advertising them. Once, I was sitting in a conference room of one of my clients, listening to the speech of a salesman trying to promote an automated tubular probe sampler for the sampling of clinker in a cement plant; within five minutes, I listed four false slogans that could lead to major sampling biases.
• "The major advantage of the sampler is to deliver small samples." Indeed, the samples were small enough to be affected by a huge Fundamental Sampling Error depriving them of any practical value.
• "The sampler collects both fine and large fragments." The problem is that the sampler is strictly incapable of collecting the right amounts of fine and large fragments, because of fundamental flaws in delimitation and extraction correctness.
• "The sampler takes a representative part of the stream." How could it, when the slot gets full instantaneously and overflows abundantly, with excessive bouncing out of many fragments?
• "The sampler is reliable and reproducible." How could it be, when the across-the-stream fragment size segregation is a well-known transient phenomenon changing all the time?
Yes, people buy such samplers because they believe slogans, and also because these samplers are sometimes indeed well built by very competent mechanical engineers.


38.1.5. How Can Incorrect Samplers Be Improved?
Manufacturers building some of these incorrect sampling devices could greatly improve them, sometimes at the cost of minor modifications, if attention were given to the notion of sampling equiprobability. This can only be achieved by building sampling devices that respect the rules of increment delimitation and increment extraction correctness. To achieve this, a stringent program of education is necessary to understand the subtleties of stream heterogeneity, the rules of sampling equiprobability, and the sources of all eight kinds of sampling errors. All manufacturers should have such a program, which would be an asset for them in producing the next generation of samplers; the WCSB and other sampling conferences were created to help manufacturers in such an important endeavor.
Cross-stream samplers are not always practical to solve sampling problems for some materials. This does not mean we should necessarily select the worst possible scenario with in-stream probes, tubular probes, by-stream probes, pressure pipe samplers, or in-stream stationary cutters. Sometimes the correct solution is relatively simple: the only ingredients you need are awareness of what can go wrong in a statistical way, and a little creativity. Indeed, for stream samplers, I believe there are many opportunities for new patents that would create outstanding sampling systems for an industry gearing up for a high-quality world. The ones that have enough vision to do that will be the survivors of tomorrow.

38.1.6. Can a Bias Test Credit an Incorrect Sampling System?
Bias tests are very popular for finding out whether a sampling system is accurate or not. The problem with this has been discussed in detail in many presentations at the WCSB. Bias tests have strong limitations that are often not very well addressed. Let's recall a few:
• A bias test must be statistically complete, which can be a long process, as explained in Gy's textbook.78
• A bias test can be very sensitive to the presence of a process cycle with a period more or less in phase with the sampling interval.
• A bias test can find a positive bias today, a negative bias tomorrow, and no bias the day after, as there is no such thing as a constant bias in sampling. As a consequence, a bias test can only prove there is a bias; it cannot prove there is no bias.
• The outcome of a bias test can be affected by an operator who did not clean a sampling system when he was supposed to. So, a bias may have nothing to do with the correctness of the system.
• A well-done bias test can be more expensive than buying a correct sampling system, and it does not fix the problem.
• A bias test must have perfectly correct reference samples, combined with perfectly correct subsampling and analytical procedures, which is rarely the case.
• Therefore, yes, a bias test can credit an incorrect sampler and discredit a correct sampler: it all depends on circumstances. It is a ceremonial form of gambling to entertain statisticians.


38.1.7. Are Incorrect Sampling Systems Good Enough for Process Control?
Judging by the number of people who are satisfied with grab sampling, using a cup to collect specimens with no respect for interpolation problems, the answer should be a definite yes. In this respect, some incorrect sampling systems may indeed be superior to these techniques from another age, feeding data to a production sheet with no apparent benefit to anyone (i.e., administrative sampling). But, if the reader carefully went through Part X of this book, he or she already knows the answer. The answer is an absolute, unconditional no. An incorrect sampling system creates variability that does not exist in the process. Correcting the process for something that does not exist has devastating consequences in invisible costs added to process performance: it is irresponsible.

38.1.8. Is It Possible to Achieve Material Balance with Incorrect Sampling Systems?
Any attempt to perform material balance using incorrect sampling systems is an exercise in futility, costly, and misleading. Nevertheless, it is done all the time for accounting purposes, serving no cause and giving no benefit to anyone, other than a false sense of security to directors and shareholders. Usually, material balance with such systems is achieved only after unacceptable data manipulations and the application of correcting factors that can become embarrassments to their authors. Under no circumstances should material balance be attempted using incorrect sampling systems: their results are not defensible.

38.1.9. The Special Case of Environmental Sampling
The manufacturing of sampling systems to be used in the environment is an area where research and progress are necessary. Firms have been pouring many millions of dollars into building sampling systems without ever addressing the notion of heterogeneity or the notion of equiprobability. People are using sampling tools with electronic systems that belong to the twenty-first century, with well-engineered mechanical parts, and in-stream or by-stream probes transgressing the most elementary rules of sampling correctness. Enormous research is mandatory to develop correct sampling systems to monitor the environment. The same thing can be said of the pharmaceutical industry, which is a bit of a worry.

38.1.10. Responsibility of Standards Committees on Sampling
Many Standards Committees around the world do not help to promote correct sampling. As a matter of fact, very few of their members are familiar with the Theory of Sampling. There, more than anywhere else, a stringent program of education is needed to prevent the standardization of principles or devices that should never even be proposed in the first place. Standards Committees on sampling have the imperative duty to promote correct and equiprobabilistic sampling systems only, and to discard any sampling systems that obviously transgress the most basic rules of sampling correctness. Also, Standards Committees on sampling should be free of any bias toward a given manufacturer, which raises questions about obvious conflicts of interest that we will not discuss in this book.


38.1.11. What Can a Variographic Experiment Tell about Incorrect Sampling Systems?
A variographic experiment cannot directly detect a sampling bias introduced by an incorrect sampling system. However, it may detect it indirectly. Indeed, we know that there is no such thing as a constant bias in sampling: a problem of accuracy in sampling is always associated with a problem of reproducibility. Therefore, when a variographic experiment is conducted to quantify the short-range, random variability, it may show a certain amount of variance that cannot be accounted for by the Fundamental Sampling Error, the Grouping and Segregation Error, and the Analytical Error. This would be an indication that the sampling system is biased somewhere, and a warning to check for possible Increment Delimitation and Extraction Errors, and also Increment Preparation Errors.
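As a simple illustration of this bookkeeping (not from the book; all figures are hypothetical relative variances), the short-range sill V(0) can be compared with the sum of the variances that a correct protocol accounts for; a large residue points toward delimitation, extraction, or preparation problems.

    V0 = 0.0150        # short-range sill from the variographic experiment
    s2_FSE = 0.0040    # Fundamental Sampling Error variance
    s2_GSE = 0.0030    # Grouping and Segregation Error variance
    s2_AE = 0.0010     # Analytical Error variance

    unexplained = V0 - (s2_FSE + s2_GSE + s2_AE)
    print(unexplained)   # 0.0070: a warning to audit increment delimitation,
                         # extraction, and preparation correctness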

38.2. Correct Sampling Systems Can Go Wrong
38.2.1. Can We Find Correct Sampling Systems on the Market?
There are correct sampling systems on the market; however, they are still a small minority and sometimes difficult to locate. Many manufacturers of sampling equipment may indeed have a few attractive devices. The majority of correct sampling systems consist of cross-stream samplers, but you can also find correct shovels, spoons, and spatulas. Some online analyzers, consisting of a primary cross-stream sampler creating a tiny secondary stream capable of being analyzed in its entirety by a gamma source/detector system, may provide effective process control capabilities; with more research, it is a promising approach.
But correct sampling systems often go wrong, for several reasons related either to poor engineering or poor maintenance, or a combination of both. Let's make a list of small details that would most certainly ruin the correctness of a sampling system:
• The sampling system is installed in an inaccessible area, preventing the operator from making the necessary inspections, maintenance, and cleaning.
• The sampling system is enclosed in a box with no inspection door, or with an inspection door in the wrong place for inspecting and cleaning the cross-stream cutter. Or, to open the inspection door, the operator must struggle with a dozen bolts and nuts.
• The sampling system consists of long, unnecessary free falls between the primary, secondary, and tertiary samplers, and the final receiving sample bucket.
• The sampling system is not airtight when the stream to be sampled is either under positive or negative pressure.
• The sampling system is built with poor-quality stainless steel.
• The sampling system is not cleaned or maintained as it should be, on every working shift.
• The sampling system is installed in the wrong place, where the stream to be sampled is in a very turbulent state.
• The sampling system is built in such a way that many spillages are likely to take place.


• The sampling system is not properly protected from an aggressive environment.
• The sampling system is unnecessarily complicated, with too many transfers, slope changes, conveying systems, free falls, and dust generators.

38.2.2. How Can a Correct Sampling System Be Improved?
A little discipline about the details listed previously can go a long way toward improving existing correct sampling systems. It can be done at almost no cost, being mainly a problem of awareness rather than of real practical difficulties: it is what good engineering is all about. On this subject, I strongly recommend that all manufacturers and engineering firms get acquainted with W. Edwards Deming's philosophy, and apply it: it is a game at which they have nothing to lose and everything to gain. The first things that W. Edwards Deming might propose to manufacturers of sampling equipment and engineering firms are:
• Design sampling systems with appropriate exploration and research on sampling correctness.
• Build sampling systems with quality materials, test them, and constantly improve them.
• Put sampling systems on the market when, and only when, they satisfy the stringent conditions of sampling correctness.
• Find out what the users think of their new sampling systems.
• Finally, find out why the nonusers have not bought them yet.
To successfully implement these five points, manufacturers and engineering firms need a road map consisting of four major commitments:
1. A commitment to innovation.
2. A commitment to research and education on the principles of sampling correctness.
3. A commitment to the continuous improvement of their existing sampling systems.
4. A commitment to the promotion of easy, preventive maintenance and cleaning of these sampling systems.
All of this is only point #1 of W. Edwards Deming's 14 key points, and I most certainly do not intend to rewrite his wonderful work.213

38.3. Integration of Correct Sampling Systems in All New Plant Feasibility Studies
Many modern plants excluded adequate automated sampling systems during feasibility studies, often as a means of reducing capital investment. Such a practice can be economically devastating over the lifetime of an industrial facility, yet it is done all the time. Furthermore, when automated sampling systems are included in feasibility studies, either they are incorrect in concept or they are extravagantly expensive. Either way, it is of the utmost importance to stress that an industrial facility cannot perform well in the modern world without a stringent commitment to Total Quality Management (TQM). TQM itself cannot be achieved without a harmonized Statistical Process Control (SPC) program. SPC itself cannot be beneficial to a company without the capabilities offered by well-designed, well-maintained, correct sampling systems: this is a very strong statement, summarized in Figure 38.1, and the economic consequences of noncompliance have been quantified in Carrasco's work.81

[FIGURE 38.1 Illustration of the relationship between TQM, SPC, and correct sampling systems: people, material, equipment, method, and environment feed the parts of the processing system; correct sampling systems generate data through analyses, which feed statistical control charts and SPC, providing process feedback and customer feedback.]


References

1. Brunton, D. W., A new system of ore sampling, Trans. AIME, 13, 639, 1884–1885.
2. Brunton, D. W., The theory and practice of ore sampling, Trans. AIME, 25, 826, 1895.
3. Brunton, D. W., Modern practice of ore sampling, Trans. AIME, 40, 567 and 675, 1909.
4. Bailey, E. G., Accuracy in sampling coal, J. Ind. Eng. Chem., 1, 161, 1909.
5. Baule, B., Benedetti-Pitchler, A., Zur Probenahme aus körnigen Materialien (Sampling of particulate materials), Z. Anal. Chem., 74, 442, 1928.
6. Hassialis, M. D., et al., Handbook of Mineral Dressing, Section 19, Sampling, John Wiley & Sons, New York, 71, 1945.
7. Becker, R. M., Some Generalized Probability Distributions with Special Reference to the Mineral Industries. 1: Sampling to N Items per Sample, USBMRI #6329, 1964. 2: Sampling to "Lambda" Amount of Items per Sample, USBMRI #6552, 1964. 3: Computer Program of Distribution Moments, USBMRI #6598, 1965. 4: Experimental Confirmation, USBMRI #6627, 1965. 5: Theoretical Confirmation and Applications, USBMRI #6768, U.S. Bureau of Mines Research Institute, Golden, CO, 1966.
8. Wilson, A. D., The sampling of rock powders for chemical analysis, Analyst, 89, 18–30, 1964.
9. Kleeman, A. W., Sampling errors in the chemical analysis of rocks, J. Geol. Soc. Australia, 14, 43–48, 1967.
10. Visman, J., De Monsterneming Van Heterogene Binomial Kurrenmengstels in Het Bijzondere Steenkool (Sampling of heterogeneous binomial populations with application to coal), Thesis, Hogeschool Delft, FRG, 1947.
11. Visman, J., Towards a Common Basis for the Sampling of Materials, Tech. Surv. Res. Rep. 98, Canadian Department of Mineralogy, 1962.
12. Visman, J., Sampling of coal and washing products, Transactions of the World Power Conference, The Hague, Section A2, p. 430, 1947.
13. Visman, J., A general sampling theory, Mater. Res. Stand., 9(11), 8, 1969.
14. Engels, J. C., Ingamells, C. O., Effect of sample inhomogeneity in K-Ar dating, Geochim. Cosmochim. Acta, 34, 1007, 1970.
15. Ingamells, C. O., Engels, J. C., Switzer, P., Effect of laboratory sampling error in geochemistry and geochronology, Proc. 24th IGC, Section 10, p. 405, 1972.
16. Ingamells, C. O., Switzer, P., A proposed sampling constant for use in geochemical analysis, Talanta, 20, 547, 1973.
17. Samland, P. K., Ingamells, C. O., New approaches to geostatistical evaluations, Proc. 12th Comp. Appl. Conf., vol. 2, Colorado School of Mines, Denver, F 108, 1973.
18. Ingamells, C. O., New approaches to geochemical analysis and sampling, Talanta, 21, 141, 1974.
19. Ingamells, C. O., Control of geochemical error through sampling and subsampling diagrams, Geochim. Cosmochim. Acta, 38, 1225, 1974.
20. Ingamells, C. O., Derivation of the sampling constant equation, Talanta, 21, 263, 1976.
21. Ingamells, C. O., Evaluation of skewed exploration data – the nugget effect, Geochim. Cosmochim. Acta, 45, 1209, 1981.
22. Ingamells, C. O., Pitard, F. F., Applied Geochemical Analysis, vol. 88, Wiley-Interscience, New York, 1986.
23. François-Bongarçon, D., Geostatistical determination of sample variances in the sampling of broken gold ores, CIM Bul., 84(950), 46–57, 1991.
24. François-Bongarçon, D., Un nouveau regard sur la theorie de l'echantillonnage des minerais morceles: une approche geostatistique effective de la determination des variances et masses minimales d'echantillon, Proceedings of the 15th World Mining Congress, Madrid, May 1992.
25. François-Bongarçon, D., Geostatistical tools for the determination of fundamental sampling variances and minimum sample masses, in Geostatistics Troia '92, Kluwer Academic Publications, Dordrecht, vol. 2, pp. 989–1000, 1992.


26. François-Bongarçon, D., Comments on F. Pitard's Exploration of the nugget effect, Proceedings of the Forum Geostatistics for the next century, Kluwer Academic Publications, Montreal, QC, Canada, Dordrecht, 1993.
27. François-Bongarçon, D., The practice of the sampling theory of broken ores, CIM Bul., 86(970), 75–81, May 1993.
28. Matheron, G., Les Variables Regionalisees et leur Estimation (Regionalized variables and their estimation), Editions Masson, Paris, 1965.
29. Matheron, G., Le Krigeage Universel (Universal kriging), Fasc. 1, Cahiers du Centre de Morphologie Mathematique, Paris, 1969.
30. Matheron, G., La Theorie des Variables Regionalisees et ses Applications (The theory of regionalized variables and its applications), Fasc. 5, Cahiers du Centre de Morphologie Mathematique, Paris, 1970.
31. Matheron, G., The intrinsic random functions and their applications, Adv. Appl. Prob., 5, 439, 1973.
32. Gy, P. M., Erreur commise dans le prelevement d'un echantillon sur un lot de minerai (Error committed when taking a sample from a batch of ore), Congres des laveries des mines metalliques françaises, Ecole des Mines de Paris (1953), Revue de l'Industrie Minerale, France, vol. 36, pp. 311–345, 1954.
33. Gy, P. M., Erforderliche Probemenge – Kurventafeln, Int. Kongr. Erzaufbereitung, 8, 155, B199/220, Goslar, FRG, May 8, 1955.
34. Gy, P. M., Poids a donner a un echantillon – abaques, Rev. Ind. Minerale, 38, 1–30, 1956.
35. Gy, P. M., Nomogramme d'echantillonnage – Probenahme Nomogramm, Minerais et Metaux, German edition published by Gesellschaft Metallhutten und Bergleute, Paris, 1956.
36. Gy, P. M., Echantillonnage pour analyse granulometrique, Ann. Mines., April 1956.
37. Gy, P. M., Errors committed in size distribution, Congress of Jamshedpur, India, Indian Mining J., February 1957.
38. Gy, P. M., A new theory of ore sampling, paper presented at Ann. Meet. Am. Inst. of Mining Engineers, New Orleans, February 1957.
39. Gy, P. M., L'echantillonnage, in section of the encyclopedia, volume on mesures et analyses, Techniques de l'Ingenieur, Paris, 220/1, 1958.
40. Gy, P. M., Facteurs ayant une influence dans l'echantillonnage des minerais, Ann. Mines., 207–217, April 1961.
41. Gy, P. M., Echantillonnage et analyse, J. Anal. Org., Cetama, May 8, 1963.
42. Gy, P. M., L'Echantillonnage des Minerais en Vrac, lecture to the Association Française de Controle Industriel de Qualite, Bull. AFCIQ, vol. 45, Paris, March 1964.
43. Gy, P. M., L'echantillonnage des minerais en cours de transport continu, Rev. Ind. Minerale, 47, 51 and 82, 1965.
44. Gy, P. M., Sampling of ores and metallurgical products during continuous transport, Trans. Inst. Min. Metal., 165, 165–199, 1965.
45. Gy, P. M., Variography, International Symposium of Applied Statistics, vol. 59, Colorado School of Mines, Golden, CO, 693, 1964.
46. Gy, P. M., Calculateur d'Echantillonnage, Pierre Gy's Sampling Slide Rule (both in French and English), Societe de l'Industrie Minerale, Saint-Etienne, France, 1965.
47. Gy, P. M., Le principe d'equiprobabilite, Ann. Mines., 779–794, December 1964.
48. Gy, P. M., La notion d'heterogeneite, Bull. Bur. Rech. Geol. Minieres, 1, 91–99, 1965.
49. Gy, P. M., L'Echantillonnage des Minerais en Vrac, vol. 1, Societe de l'Industrie Minerale, Saint-Etienne, France, 1967. This document is considered a classic.
50. Gy, P. M., Ore Reserve Estimation and Grade School, Canadian Institute of Mining and Metallurgy, l'Esterel, Quebec, 1968.
51. Gy, P. M., L'Echantillonnage des Minerais en Vrac, vol. 2, Societe de l'Industrie Minerale, Saint-Etienne, France, 1971. This document is considered a classic.
52. Gy, P. M., Contribution a l'Etude de l'Heterogeneite d'un Lot de Matiere Morcelee, Dr.-Ing. Thesis, University of Nancy, Nancy, France, 1972.
53. Gy, P. M., Die Probenahme bei stuckigen Erzen, Aufbereitungstechnik, 13(11), 687, 1972.
54. Gy, P. M., L'echantillonnage des matieres morcelees, presentation d'une theorie generale, Ann. Mines., 51, November 1972.


55. Gy, P. M., The sampling of broken ores – a review of principles and practice, opening lecture of a meeting organized by the Institution of Mining and Metallurgy, London, July 2, 1973.
56. Gy, P. M., El control minero y metalurgico, Rev. Minerales Chile, 24(127), 7, 1974.
57. Gy, P. M., Theorie et Pratique de l'Echantillonnage des Matieres Morcelees, L'Echantillonnage, Cannes, 597, 1975.
58. Gy, P. M., The sampling of particulate materials – a general theory, in Sampling Practices in the Mineral Industries Symp., The Australasian Institute of Mining and Metallurgy, Melbourne, 17, September 16, 1976.
59. Gy, P. M., Teoria matematica generale della campionatura dei granulati in trasporto continuo, in Semin. Utilizzazione di Modelli Matematici e Probabilistici in Mineralurgia, University of Trieste, Trieste, Italy, January 27, 1979.
60. Gy, P. M., Sampling particulate materials, in Semin. Sampling Systems for On-Line Analyzers, SIRA Institute and Warren Spring Laboratories, London, September 1978.
61. Gy, P. M., Unbiased sampling from a falling stream of particulate material, Int. J. Mineral Process., 5(31), 297–315, 1978.
62. Gy, P. M., L'Echantillonnage des Matieres Morcelees – Theorie et Pratique, in Semin. on Fragmentation, College International des Sciences de la Construction, Saint-Remy-les-Chevreuse, France, December 12, 1979.
63. Gy, P. M., Sample preparation, in Computer Methods for the 80's, Society of Mining Engineers of the American Institute of Mining, Metallurgical and Petroleum Engineers, Littleton, CO, 1979.
64. Gy, P. M., Sampling of Particulate Materials, Elsevier, Amsterdam, 1979.
65. Gy, P. M., Theorie de la prehomogeneisation derivee de la theorie de l'echantillonnage – presentation et verification experimentale dans une cimenterie, Ind. Minerale. Tech., 107, 19, 1981.
66. Gy, P. M., A new theory of bed blending derived from the theory of sampling – development and full-scale experimental check, Int. J. Mineral Process., 8, 201–238, 1981.
67. Gy, P. M., Proportional sampling – a new philosophy of metallurgical accounting, Int. J. Mineral Process., 8, 279, 1981.
68. Gy, P. M., Sampling or gambling? Coal Mining Process., 18(9), 62, 1981.
69. Gy, P. M., Does your mechanical sampler do what it is supposed to? Coal Mining Process., 18(12), 71, 1981.
70. Gy, P. M., Sampling from high-capacity streams, in Proceedings of the First Australian International Bulk Materials Conference, IES Publications, Sydney, p. 407, 1982.
71. Gy, P. M., Probenahme oder Gluckspiel? Aufbereitungstechnik, 22(12), 655, 1981.
72. Gy, P. M., Echantillonnage des matieres morcelees en vue d'une analyse granulometrique, Ind. Minerale. Tech., 2, 62, 1982.
73. Gy, P. M., Sampling of Particulate Materials – Theory and Practice, 2nd ed., Elsevier, Amsterdam, 1982.
74. Gy, P. M., Analyse commerciale. Reflexions sur la procedure de l'ecart partageable, Analusis, 11(6), 257, 1983.
75. Gy, P. M., Les erreurs d'echantillonnage – elles peuvent oter toute signification aux resultats d'analyse, Analusis, 11(9), 413–439, November 1983.
76. Gy, P. M., Principes et bases scientifiques de l'echantillonnage des matieres morcelees, Rev. Ind. Aliment. Agric., I, October 1983; II, December 1983.
77. Gy, P. M., Heterogeneite, Echantillonnage, Homogeneisation, Editions Masson, Paris, 1988.
78. Gy, P. M., Sampling of Heterogeneous and Dynamic Material Systems, Theories of Heterogeneity, Sampling and Homogenizing, Elsevier, Amsterdam, xxx+653, 1992.
79. Gy, P. M., L'echantillonnage des lots de matiere en vue de leur analyse, Masson, Paris, xvi+148, 1996.
80. Gy, P. M., Sampling for Analytical Purposes, translation by A. Royle, John Wiley & Sons, Chichester, April 1998; 2nd ed., June 1999.
81. Carrasco, P., Carrasco, P., Jara, E., The economic impact of correct sampling and analysis practices in the copper mining industry, WCSB1, Chemometrics and Intelligent Laboratory Systems, 74(1), 209–214, November 28, 2004.


82. Carrasco, P., Carrasco, P., Campos, M., et al., Heterogeneity and Ingamells's tests of some Chilean porphyry ores, WCSB2, AusIMM, May 10–12, 2005.
83. Carrasco, P., Wilke, A., Jara, E., et al., Some applications of statistics and geostatistics to the sampling of a big copper tail stream, WCSB3, published by Joao Felipe and Jair Koppe, Porto Alegre, Brazil, October 23–25, 2007.
84. Carrasco, P., Nugget effect, artificial or natural? WCSB4, SAIMM, Symposium series S59, Cape Town, South Africa, October 21–23, 2009.
85. Carrasco, P., Some key facts and concepts in the evolution of sampling and assaying practices at Codelco, Sampling 2008, AusIMM, Perth, Australia, May 27–29, 2008.
86. Carrasco, P., Sampling challenges for the copper industry, Sampling 2010, AusIMM, Perth, Australia, May 11–12, 2010.
87. Minkkinen, P., Practical applications of sampling theory, WCSB1, Chemometrics and Intelligent Laboratory Systems, 74(1), 85–94, November 28, 2004.
88. Minkkinen, P., Paakkunainen, M., Direct estimation of sampling variance from time series measurements – comparison to variographic analysis, WCSB2, AusIMM, May 10–12, 2005.
89. Minkkinen, P., Weighting error – is it significant in process analysis? WCSB3, published by Joao Felipe and Jair Koppe, Porto Alegre, Brazil, October 23–25, 2007.
90. Minkkinen, P., Esbensen, K. H., Simulating sampling of aggregate materials with significant spatial heterogeneity, WCSB4, SAIMM, Symposium series S59, Cape Town, South Africa, October 21–23, 2009.
91. Minkkinen, P., Comparison of independent process analytical measurements – a variographic study, WCSB5 SAMPLING2011, Gecamin, Conferences for Mining, Santiago, Chile, October 25–28, 2011.
92. Minkkinen, P., Sampling for feasibility studies and process control – properties of process variograms, WCSB6, Gecamin, Conferences for Mining, Lima, Peru, November 19–22, 2013.
93. Minkkinen, P., Auranen, I., Ruotsalainen, L., et al., Comparison of sampling methods by using size distribution analysis, WCSB7, TOS Forum, IMP Publications, Bordeaux, France, June 10–12, 2015.
94. Minkkinen, P., Dependence of the variance of lot average on the sampling mode and heterogeneity type of the lot, WCSB8, The Australasian Institute of Mining and Metallurgy, Perth, Australia, May 9–11, 2017.
95. François-Bongarçon, D., Theory of sampling and geostatistics: an intimate link, WCSB1, Chemometrics and Intelligent Laboratory Systems, 74(1), 143–148, November 28, 2004.
96. François-Bongarçon, D., The modelling of the liberation factor and its calibration, WCSB2, AusIMM, May 10–12, 2005.
97. François-Bongarçon, D., The philosophy of statistical bias testing revisited and applied to automatic samplers, WCSB2, AusIMM, May 10–12, 2005.
98. François-Bongarçon, D., A fundamental principle for automatic sampler correctness, WCSB2, AusIMM, May 10–12, 2005.
99. François-Bongarçon, D., Freitas, J. C. F., Homogenizing using true division, WCSB3, published by Joao Felipe and Jair Koppe, Porto Alegre, Brazil, October 23–25, 2007.
100. François-Bongarçon, D., Fishy samples: how big a sample to avoid the infamous Poisson effect, WCSB4, SAIMM, Symposium series S59, Cape Town, South Africa, October 21–23, 2009.
101. François-Bongarçon, D., Some theoretical and practical aspects of 1-D stream sampling, WCSB4, SAIMM, Symposium series S59, Cape Town, South Africa, October 21–23, 2009.
102. François-Bongarçon, D., A proposal for standard reference material recertification, Sampling 2008, AusIMM, Perth, Australia, May 27–29, 2008.
103. François-Bongarçon, D., On the use of dry pulp duplicates in Quality Assurance/Quality Control – a new tool of the trade, Sampling 2008, AusIMM, Perth, Australia, May 27–29, 2008.
104. François-Bongarçon, D., Sampling and heterogeneity studies – creative methods, Sampling 2008, AusIMM, Perth, Australia, May 27–29, 2008.
105. François-Bongarçon, D., Aspects of incremental sampling and samplers, Sampling 2010, AusIMM, Perth, Australia, May 11–12, 2010.
106. François-Bongarçon, D., Why the overall exponent in Gy's formula almost never gets to 2.5, Sampling 2010, AusIMM, Perth, Australia, May 11–12, 2010.


107. François-Bongarçon, D., Mathematical modelling and TOS, WCSB5 SAMPLING2011, Gecamin, Conferences for Mining, Santiago, Chile, October 25–28, 2011.
108. François-Bongarçon, D., A new philosophy of application of QA-QC data processing, WCSB6, Gecamin, Conferences for Mining, Lima, Peru, November 19–22, 2013.
109. François-Bongarçon, D., Building confidence intervals around the obtained value of a sample, WCSB7, TOS Forum, IMP Publications, Bordeaux, France, June 10–12, 2015.
110. François-Bongarçon, D., Introduction of first ever rigorous derivation of the liberation factor, WCSB7, TOS Forum, IMP Publications, Bordeaux, France, June 10–12, 2015.
111. François-Bongarçon, D., Pitard, F. F., Comparison between samples with constant mass and samples with constant population size, a translation of a G. Matheron publication in French, WCSB7, TOS Forum, IMP Publications, Bordeaux, France, June 10–12, 2015.
112. François-Bongarçon, D., State of affairs and current research trends in Theory of Sampling fundamentals, including segregation and correctness, Sampling 2014, AusIMM, Perth, Australia, July 29–30, 2014.
113. François-Bongarçon, D., Evolution of the concept of the liberation factor and a surprising new result, WCSB8, The Australasian Institute of Mining and Metallurgy, Perth, Australia, May 9–11, 2017.
114. François-Bongarçon, D., Theory of sampling and geostatistics – the ultimate link, WCSB8, AusIMM, Perth, Australia, May 9–11, 2017.
115. Pitard, F. F., Practical statistical tools for managers, Metallurgical Plant Design and Operating Strategies Conference, The Australasian Institute of Mining and Metallurgy, Sydney, April 15–16, 2002.
116. Pitard, F. F., Practical and theoretical difficulties when sampling gold, Mineral Processing Plant Design, Control and Practice Conference, Society for Mining, Metallurgy and Exploration, Inc. and Canadian Mineral Processors of the Canadian Institute of Metallurgy, Vancouver, BC, Canada, October 20–24, 2002.
117. Pitard, F. F., The auditing process in sampling for ore reserve estimations, prepared for the Committee of Ore Reserves at the Institute of Mining of Chile, 2003.
118. Pitard, F. F., Effects of residual variances on the estimation of the variance of the fundamental error, World Conference on Sampling and Blending: WCSB1, Esbjerg, Denmark, August 19–20, 2003.
119. Pitard, F. F., Automated sampling systems to minimize mine-mill conciliation problems, International Conference in Mining Innovation, MININ, Santiago, Chile, April 18–21, 2004.
120. Pitard, F. F., El Proceso de la Auditoría del Muestreo en la Estimación del Recurso Minero, in Recursos, Reservas y Bancabilidad Minera, Instituto de Ingenieros de Minas de Chile, Santiago, Chile, 2004.
121. Pitard, F. F., New sampling technologies for ore grade control, metallurgical accounting and laboratory preparation, Metallurgical Plant Design and Operating Strategies Conference, The Australasian Institute of Mining and Metallurgy, Perth, Australia, September 6–7, 2004.
122. Pitard, F. F., Sampling correctness – a comprehensive guideline, World Conference on Sampling and Blending: WCSB2, The Australasian Institute of Mining and Metallurgy, Sunshine Coast, Queensland, Australia, May 10–12, 2005.
123. Pitard, F. F., Chronostatistics: a powerful, pragmatic, new science for metallurgists, Metallurgical Plant Design and Operating Strategies Conference, The Australasian Institute of Mining and Metallurgy, Perth, Australia, September 18–19, 2006.
124. Pitard, F. F., The Sampling of Mineral Deposits – A Primer in Sampling Theory and Sampling Practice, by Richard Minnitt and Francis F. Pitard, published by the University of Witwatersrand, Johannesburg, South Africa.
125. Pitard, F. F., The in situ nugget effect: a major component of the random term of a variogram, World Conference on Sampling and Blending: WCSB3, Porto Alegre, Brazil, 2007.
126. Pitard, F. F., Blasthole sampling for grade control: the many problems and solutions, Sampling 2008 Conference, The Australasian Institute of Mining and Metallurgy, Perth, Australia.
127. Pitard, F. F., Pierre Gy's Theory of Sampling and C.O. Ingamells' Poisson Process Approach, Pathways to Representative Sampling and Appropriate Industrial Standards, Doctoral Thesis, Aalborg University, Campus Esbjerg, Niels Bohrs Vej 8, DK-67 Esbjerg, Denmark, 2009.


128. Pitard, F. F., Sampling quality assurance and quality control requirements for mining projects, Sampling 2010 Conference, The Australasian Institute of Mining and Metallurgy, Perth, Australia, May 11–12, 2010.
129. Pitard, F. F., Cutting diamond core samples in halves: the necessity, the wisdom and the problems, Geomin 2011 Conference, Second International Seminar on Geology for the Mining Industry, Antofagasta, Chile, June 8–10, 2011.
130. Pitard, F. F., François-Bongarçon, D., Demystifying the fundamental sampling error and the grouping and segregation error for practitioners, World Conference on Sampling and Blending: WCSB5, proceedings published by Gecamin, Santiago, Chile, October 25–28, 2011.
131. Pitard, F. F., Stevens, D., The development of a solution to the sample preparation of coarse gold samples, World Conference on Sampling and Blending: WCSB5, proceedings published by Gecamin, Santiago, Chile, October 25–28, 2011.
132. Pitard, F. F., The advantages and pitfalls of a circular bed-blending operation, World Conference on Sampling and Blending: WCSB6, proceedings published by Gecamin, Lima, Peru, 2013.
133. Pitard, F. F., Guidelines for acceptable allotted sampling uncertainty, World Conference on Sampling and Blending: WCSB6, proceedings published by Gecamin, Lima, Peru, 2013.
134. Pitard, F. F., Lyman, G. J., Single and multi-stage Poisson processes: a case study for gold exploration, World Conference on Sampling and Blending: WCSB6, proceedings published by Gecamin, Lima, Peru, 2013.
135. Pitard, F. F., Sampling systems for process control and metallurgical accounting, Sampling 2014 Conference, published by the AusIMM, the Minerals Institute, Perth, Australia, July 29–30, 2014.
136. Pitard, F. F., The advantages and pitfalls of conventional heterogeneity tests and a suggested alternative, World Conference on Sampling and Blending: WCSB7, proceedings published by IMP, Bordeaux, France, June 10–12, 2015.
137. Kelly, S. E., Pitard, F. F., Review of a non-probabilistic sampler versus a Vezin sampler on low weight percent solids slurries, World Conference on Sampling and Blending: WCSB7, proceedings published by IMP, Bordeaux, France, June 10–12, 2015.
138. François-Bongarçon, D., Pitard, F. F., Comparison between samples with constant mass and samples with constant fragment population size (and calculations of their sampling variances), by G. Matheron, translated to English by D. François-Bongarçon and F. F. Pitard, World Conference on Sampling and Blending: WCSB7, proceedings published by IMP, Bordeaux, France, June 10–12, 2015.
139. Ehrig, K. J., Pitard, F. F., Sampling the supergiant Olympic Dam iron oxide Cu-U-Au-Ag deposit, South Australia, World Conference on Sampling and Blending: WCSB8, proceedings published by the AusIMM, the Minerals Institute, Perth, Australia, May 9–11, 2017.
140. Pitard, F. F., Integration of Visman and Ingamells' works into the Theory of Sampling – a necessity to expand possibilities in sampling practices, World Conference on Sampling and Blending: WCSB8, proceedings published by the AusIMM, the Minerals Institute, Perth, Australia, May 9–11, 2017.
141. Haughy, C., Pitard, F. F., Use of variography to measure improvement of sampling practices for the quality control of graphite consistency in marble used for carbon-sensitive applications, World Conference on Sampling and Blending: WCSB8, proceedings published by the AusIMM, the Minerals Institute, Perth, Australia, May 9–11, 2017.
142. Pitard, F. F., From errors to uncertainty – a clarification for proper use in the Theory of Sampling, World Conference on Sampling and Blending: WCSB8, proceedings published by the AusIMM, the Minerals Institute, Perth, Australia, May 9–11, 2017.
143. Esbensen, K. H., Heydorn, K., Metrology in sampling – a first foray (biological materials), WCSB1, vol. 74, Elsevier, p. 1, November 28, 2004.
144. Esbensen, K. H., A multivariate perspective on Gy's Theory of Sampling, WCSB2, The Australasian Institute of Mining and Metallurgy, Sunshine Coast, Queensland, Australia, May 10–12, 2005.
145. Esbensen, K. H., Minkkinen, P., Paoletti, C., Reliable assessment of total errors in sampling large kernel lots: a new variographic approach, WCSB4, SAIMM, Symposium series S59, Cape Town, South Africa, October 21–23, 2009.


146. Esbensen, K. H., Minkkinen, P., Paoletti, C., et al., Horizontal – a matrix-independent generic sampling standard, Sampling 2010, AusIMM, Perth, Australia, May 11–12, 2010.
147. Esbensen, K. H., Minkkinen, P., Illustrating sampling standards – how to generate complete understanding and TOS-compliance, WCSB5, proceedings published by Gecamin, Santiago, Chile, October 25–28, 2011.
148. Esbensen, K. H., History and achievements of the World Conference on Sampling and Blending in the decade 2003–2013, World Conference on Sampling and Blending: WCSB6, proceedings published by Gecamin, Lima, Peru, 2013.
149. Esbensen, K. H., Petersen, L., DS 3077 Horizontal – new standard for representative sampling: design, history and acknowledgement, WCSB6, proceedings published by Gecamin, Lima, Peru, 2013.
150. Esbensen, K. H., Theory of Sampling in a multivariate perspective – what can Chemometrics contribute, Sampling 2014, AusIMM, Perth, Australia, July 29–30, 2014.
151. Esbensen, K. H., Romanach, R. J., Proper sampling, total measurement uncertainty, variographic analysis & fit-for-purpose acceptance levels for pharmaceutical mixing monitoring, WCSB7, proceedings published by IMP, Bordeaux, France, June 10–12, 2015.
152. Esbensen, K. H., Theory of Sampling – what's next? WCSB8, proceedings published by the AusIMM, the Minerals Institute, Perth, Australia, May 9–11, 2017.
153. Holmes, R. J., Correct sampling and measurement – the foundation of accurate metallurgical accounting, WCSB1, vol. 74, p. 1, Elsevier, November 28, 2004.
154. Holmes, R. J., Robinson, G. K., Codifying the principles of sampling into mineral standards, WCSB1, vol. 74, p. 1, Elsevier, November 28, 2004.
155. Holmes, R. J., Design of sample plants – getting it right first time, WCSB2, The Australasian Institute of Mining and Metallurgy, Sunshine Coast, Queensland, Australia, May 10–12, 2005.
156. Holmes, R. J., Best practice in sampling iron ore, WCSB3, published by Joao Felipe and Jair Koppe, Porto Alegre, Brazil, October 23–25, 2007.
157. Holmes, R. J., The importance of sampling in grade control, Sampling 2008 Conference, The Australasian Institute of Mining and Metallurgy, Perth, Australia.
158. Holmes, R. J., Sampling mineral commodities – the good, the bad and the ugly, WCSB4, SAIMM, Symposium series S59, Cape Town, South Africa, October 21–23, 2009.
159. Holmes, R. J., Challenge of developing ISO sampling standards, World Conference on Sampling and Blending: WCSB5, proceedings published by Gecamin, Santiago, Chile, October 25–28, 2011.
160. Holmes, R. J., Development and revision of ISO sampling standards, WCSB6, proceedings published by Gecamin, Lima, Peru, 2013.
161. Holmes, R. J., Sample station design and operation, WCSB7, proceedings published by IMP, Bordeaux, France, June 10–12, 2015.
162. Holmes, R. J., Common pitfalls in sampling iron ore, WCSB8, proceedings published by the AusIMM, the Minerals Institute, Perth, Australia, May 9–11, 2017.
163. Minnitt, R. C. A., A simulated broken ore model for sampling parameters K and alpha, WCSB3, published by Joao Felipe and Jair Koppe, Porto Alegre, Brazil, October 23–25, 2007.
164. Minnitt, R. C. A., Gluck, T., A chronostatistical approach to hydrometallurgical plant process control, Sampling 2008 Conference, The Australasian Institute of Mining and Metallurgy, Perth, Australia.
165. Minnitt, R. C. A., Assibey-Bonsu, W., A comparison between the duplicate series method and the heterogeneity test as methods for calculating Gy's sampling constants K and alpha, WCSB4, SAIMM, Symposium series S59, Cape Town, South Africa, October 21–23, 2009.
166. Minnitt, R. C. A., The state of sampling practice in the South African minerals industry, Sampling 2010, AusIMM, Perth, Australia, May 11–12, 2010.
167. Minnitt, R. C. A., François-Bongarçon, D., Pitard, F. F., Segregation free analysis for calibrating the constants K and alpha for use in Gy's formula, WCSB5, proceedings published by Gecamin, Santiago, Chile, October 25–28, 2011.
168. Minnitt, R. C. A., Changes in deposit value associated with sampling error and sampling bias, WCSB6, proceedings published by Gecamin, Lima, Peru, 2013.

674

References

169. Minnitt, R. C. A., A comparison of two methods for calculating the constants K and alpha in a broken ore, Sampling 2014, AusIMM, Perth, Australia, July 29–30, 2014. 170. Minnitt, R. C. A., Jashashvili, T., Gilchrist, G., et al., Quantifying segregation of minerals and metals in particulate materials using computed X-ray tomography and variography. WCSB8, Proceedings published by the AusIMM, the Minerals Institute, Perth, Australia, May 9–11, 2017. 171. Minnitt, R. C. A., A generalized form of Gy’s equation for gold ores – empirical evidence, WCSB8, Proceedings published by the AusIMM, the Minerals Institute, Perth, Australia, May 9–11, 2017. 172. Whittle, P., On stationary processes in the plane, Biometrika, 41, 431–449, 1954. 173. David, M., Geostatistical ore reserve estimation, in Developments in Geomathematics 2, Elsevier Scientific Publishing Company, Amsterdam; Oxford; New York, 1977. 174. Cochran, W. G., Sampling Techniques, 3rd ed., John Wiley & Sons, New York, 1977. 175. Cochran, W. G., Cox, G. M., Experimental Designs, 2nd ed., John Wiley & Sons, New York, 1957. 176. Cochran, W. G., Planning and Analysis of Observational Studies, John Wiley & Sons, New York, 1983. 177. Journel, A. G., Huijbregts, C. J., Mining Geostatistics, Academic Press, London, 1978. 178. Myers, J. C., Geostatistical Error Management, Van Nostrand Reinhold, a Division of International Thomson Publishing Inc., 1997. 179. Bastien, M., Loi du rapport de deux variables normales, Rev. Stat. Appl., 8, 45, 1960. 180. Geary, R. C., The frequency distribution of the quotient of two normal variables, J.R. Stat. Soc., 93, 442, 1930. 181. Kingman, J. F. C., Poisson Processes, Oxford Studies in Probability. 3. University of Bristol, Clarenton Press, Oxford, 1993. 182. Smee, B. W., Stanley, C. R., Reply to Dr. Dominique François-Bongarçon, Publication Type in a Journal Article: Explore - Association of Exploration Geochemists Newsletter, 127, 19–23, 2005. 183. François-Bongarçon, D., (2000). The most common error in applying Gy’s formula in the Theory of Mineral Sampling, and the history of the liberation factor. monograph “Toward 2000”, AusIMM. 184. François-Bongarçon, D., Gy, P., The most common error in applying ‘Gy’s Formula’ in the theory of mineral sampling and the history of the Liberation factor, in Mineral Resource and Ore Reserve Estimation – The AusIMM Guide to Good Practice, The Australasian Institute of Mining and Metallurgy, Melbourne, 67–72, 2001. 185. Richards, R. H., Ore Dressing, Sampling: Vol. 2: 843–852. Vol. 3: 1571–1578, Vol. $: 2031–2033, McGraw-Hill, New York, 1908. 186. Suarez, E. V., Carrasco, P., Numerical studies of texture and liberation in microscopic images, Fifth World Conference on Sampling and Blending”, October 25–28, 2011, Proceedings published by Gecamin, Santiago, Chile. 187. Mitchell, D. R., Segregation in the handling of coal, Trans. AIME, 130, 107, 1938. 188. Johanson, J. R., Particle segregation and what to do about it, Chem. Eng., 183, May 8 1978. 189. Williams, J. C., The segregation of particulate materials, a review, Powder Tech., 15(2), 245, 1976. 190. Krige, D. G., Statistical applications in mine valuation, J. Inst. Mine Surv. S.A., 12(2–3), 1962. 191. Sichel, H. S., The estimation of means and associated confidence limits for small samples from lognormal populations. Proc. 1966 Symp, South African Institute of Mining and Metallurgy. 192. Shelly, W. J., Ziegler, W. 
A., Generation of quantitative data while sampling ores and concentrates, Symposium on Nuclear Materials Maagement, Vienna, August 30, Communication SM-67/39. 193. Duncan, A. J., Some measurement error considerations in bulk sampling with special reference to the sampling of fertilizers, Symp. on Bulk Sampling, Boston, MA, ASTM. STP 242, Am. Soc. Testing and Materials, Philadelphia, PA, p. 3, 1958. 194. Duncan, A. J., An experiment in the sampling and analysis of bagged fertilizer, J. Assoc. Offic. Agric. Chem., 43, 831, 1960. 195. Duncan, A. J., A case study of the sampling and analysis of bulk material, Bull. Int. Stat., 38, 357, 1961. 196. Duncan, A. J., Bulk sampling – problems and lines of attack, Technometrics, 5, 319, 1962.

References

675

197. Chieregati, A. C., Pignatari, L. E. C., New reconciliation model for gold industry, World Conference on Sampling and Blending: WCSB5, Proceedings published by Gecamin, Santiago, Chile, October 25–28, 2011. 198. Chieregati, A. C., The many facets of mine reconciliation. World Conference on Sampling and Blending: WCSB6, Proceedings published by Gecamin, Lima, Peru, 2013. 199. Chieregati, A. C., El Hajj, T. M., Imoto, C. F., et al., Validation of reverse circulation drilling rig for reconciliation purpose, World Conference on Sampling and Blending: WCSB7, Proceedings published by IMP, Bordeaux, France, June 10–12, 2015. 200. Chieregati, A. C., Amaral, E. A., Souza, J. C. O., Validation of a modified cross-belt sampler for reconciliation purposes, WCSB8, Proceedings published by the AusIMM, the Minerals Institute, Perth, Australia, May 9–11, 2017. 201. Armstrong-Smith, G., Sampling and sample preparation of copper concentrator products, Trans. Inst., Mining Metallurgy, 232, 259, 1974. 202. Lyman, G. J., The impact of the weighting error. WCSB8, Proceedings published by the AusIMM, the Minerals Institute, Perth, Australia, May 9–11 2017. 203. Colijn, H., Weighing and Proportioning of Bulk Solids, 1975, Series on Bulk Materials Handling, vol. 1 (1975/77), Trans Tech Publications. 204. Multon, J. L., Conservation et stockage des grains et graines et produits derives, Technique et Documentation Lavoisier, Paris, 1982. 205. Prigogine, A., Etude pratique de l’echantillonnage des minerais d’or au Congo Belge, Institut Royal Colonial Belge, 1954. 206. Prigogine, A., Analyse des minearis heterogenes a faible teneur, These aggregation Fac, Sci. University libre Bruxelles, 1950. 207. Prigogine, A., Echantillonnage et analyse des minerais homologues a faible teneur, Acad. R. Sc. Outremer., 15, 1961. 208. Prigogine, A., Echantilonnage des minerais auriferes, R. I. M., 44, 557, 1962. 209. Wayne, A. P., Fuel sampling and analysis: an alternative to on-stack emission monitoring for sulfur dioxide, presented at the 1987 Joint Power Generation Conference, Miami, FL, American Society of Mechanical Engineers, New York, October 4, 1987. 210. Suffet, I. H., Hunchak, K., Wicklund, A., et al., Trace organic analysis of suspended sediments collected with an in-stream composite sampler: the used for a standard method, presented at the 1986 Symposium on Chemical and Biological Characterization of Municipal Sludges, Sediments, Dredge and Drilling Muds, U.S. Environmental Proptection Agency, Office of Research and Development, May 20, 1986. Cincinnati, OH. 211. Stamets, P., Mycelium Running, How Mushrooms Can Help Save the World, Ten Speed Press, Trademarks of Random House, Inc., Berkeley, CA, 2005. 212. United States Environmental Protection Agency. Test Methods for Evaluating Solid Wastes, 2nd ed., SW-846, Office of Solid Waste and Emergency Response, Washington, DC, 1982. 213. Scherkenbach, W. W., The Deming Route to Quality and Productivity, CEEP Press Book, ASQC Quality Press, Washington, DC; Milwaukee, WI, 1986. 214. Kelly, J., Homogenisation des matieres premieres, Rim, 44(5), 1962.

This page intentionally left blank

Index

A Abrasion, 488 Absolute variogram, 614–616, 621–624, 628–631 Absorbed waters, 520, 532–534, 537 Acceptable cost, 596–597 Acceptable representativeness, 596 Accidental errors, 42, 59, 72; see also specific types Accuracy, 60–61 and correctness, 35–36 definition, 11 graphic representation, 61–64 and moisture content, 524–525 risks of controlling, 36–37 Acid digestion, 240, 546, 547 Active components, 14, 25, 88, 95, 112, 349–351 Active physical components, 13 Actual extended increment, 357–359, 364 Actual fragmental increments, 359, 361, 364–365, 439, 461 Addition, 491–492 Additivity of heterogeneity, 91 Adjustable plates, 451, 452 Administrative sampling, 597 Adsorbed waters, 492, 520, 536, 537 Air turbulence, 255–256 Alluvial orebodies, 541, 544 Alternate shoveling, 479–480 Ammonium paratungstate, 290–297 Analysis, definition, 12 Analytical errors, 12, 73, 156, 568 Analytical logs, 567, 573 Analytical subsamples, 12, 47, 73, 224, 240, 257, 491 Anisotropy, 66, 82 Area, 577–579 Arithmetic average, 12, 47–48 of heterogeneities, 91 Arithmetic mean, 21, 646 Augers, 341, 436–437 Authoritative protocol, 31, 341, 576, 577 Authoritative sampling, 341–342 Auxiliary functions, 132–136, 151; see also specific types Auxiliary parameters, 228 Average constitution heterogeneity, 97 Average estimates, 150–151

Averages, 12, 47–48 of heterogeneities, 91, 97 B Background content, 275, 280, 296, 323–325 Bags, 67, 71, 85, 493 Ball mills, 640 Barrels, 83, 140, 341 Basic module of observation, 82 Bed-blending techniques, 641–653 conclusions, 652–653 development of a suitable model, 642–644 fluctuations, 641 homogenization process effectiveness, 648–652 overview, 641 reclaiming phase and systematic sampling, 644–648 Belt sampling, 395–396 Belt scales, 525 Bias, 11, 12–13, 379, 380–381; see also specific types local, 322 operational, riffle splitters, 473–474 systematic, 43, 480, 481 test of incorrect system, 661 weighting error, 501 Bias test, 661 Binomial distribution, 53, 180 Binomial model, 53 Bins, 259, 583, 641, 648 Biological sampling, 494 Bits (drilling), 423, 424, 425–427 Blast-holes drilling, 379–386, 431–435 sampling blast-hole pile, 437 Blenders, 251–252 Block caving, 386 Blood pipette shaker, 638 Buckets radial, 381–382 rotary dividers, 419–420 C Calculated distribution, 294 Capacity of cutter, 449, 457 Carbon dioxide


elimination, 492 fixation, 491–492 Carbonates, 491, 492 Cement plants, 172, 393, 605, 613, 618 Center of gravity, 19, 423, 444, 448, 449 Central limit theorem, 52, 184, 610 Cereals, 179, 202, 334, 519 CH see Constitution heterogeneity (CH) Channel sampling, 386–387, 435–436 Chemical analysis, 522–523 Chemical composition, 491–492 Chevron method, 642 Chromite, 239, 251, 253, 257, 555–556 Chronological data, 603–607 control limits, 604–607 process variability, 603–604 Chronological order of units, 124–127 Chronological series, 117, 646 variograms of, 120–121, 651 Chutes, 377, 420–421, 422, 473–474 Circular path cutters/samplers, 405, 449, 506, 507, 510 Circumstantial properties, 13, 32; see also specific types Classical method, moisture content, 526, 536 Classification of Lots, 67–68, 82–84 Clays, 383, 424, 426, 520 Cleaning, 393–394, 399, 474, 487–488 Client’s needs, variograms, 613–614, 621, 628 Closed circuit, 640 Clustering, 105, 303, 320–321, 328–329 Clusters, 8, 22, 56, 109, 264, 304 Coal, 205, 211, 583–584 Coarse materials, 164, 464, 465 Cobalt, low background content, 280 Coliwasa, 562, 587, 589–590 Combinations, 52 Commercial sampling, 346, 479, 481, 497, 597 Commercial transactions, 522, 559 Comminution, 13, 106, 209–210 Compact solids, 81, 230, 231, 252 Competitiveness, 573 Components, 2, 13, 18, 43, 65, 162–163; see also specific types Composite sample, definition, 13 Composited lot, 92 Compositing vertically, 283–284 Coning, 478–479 Consolidated orebodies, 252, 598 Constant factor, 20, 158, 287 Constant feed, 641, 653 Constituents of lot, order of, 68


Constitution heterogeneity (CH), 13–14, 66, 80–82 definition, 92 and distribution heterogeneity, 96–97 fragments, 92 fragments size distribution and value of, 105–106 liberation factor calculation, 206–210, 216–217 in-situ nugget effect, 312–316 of zero-dimensional lots, 68–69, 87–113; basic relationships, 88–89; and fragment groups, 94–109; and fragments, 89–94 Constitution homogeneity definition, 97 of lot, 94 Constitution waters, 492, 520, 525, 532 Containers, 639 Contamination abrasion, 488 blast-hole drilling, 434 corrosion, 488–489 dust, 486 flowing streams, 399–400 of gold, 548 sampling circuit, 486–488 Content, 14; see also specific types Continuity index of, 119 of lot models, 14, 67–68, 83–84 Continuous fluctuations, 641 Continuous lots, 85–86 Continuous models, 14, 191, 349–354; see also specific types one-dimensional lots, 85–86, 155–177; analytical error, 156; characterization of sample, 159–160; continuous nonrandom term, 167–172; definitions in, 156–158; discontinuous random term, 163–166; flowing streams, 351–352, 355–362; heterogeneity fluctuation error, 160–163; heterogeneity of, 158; periodic term, 173–177; theoretical use of, 155–156; variance, 161–162 variance in, 191–194 Continuous nonrandom term, 167–172 Continuous selection error see Heterogeneity fluctuation error (HFE) Continuous selection model, 14 Continuous stream, 85, 116, 351 Continuous terms, 18, 135, 144–148 Continuous trends, 128 Control chart, 605, 616–619, 632


Control limits, 604–607 Control room, 174 Conventional statistics, 52, 84, 565 Conveyor belts, 251, 465–466 Copper, 234 arsenic impurity, 325–327 molybdenum in deposit, 327–328 plucking effect, 427–429 slag in copper smelter, 621–627 Core drilling, 303, 321, 375–376, 544 Core halving, 208, 209, 217, 375 Core recovery, 598 Core samples, 22, 115, 211, 238, 297–298 Coring bits, 423–427, 544 Correct delimitation, 166, 343, 376, 386, 413–414 Correct extraction, 425, 439, 455, 468, 482 Correct probabilistic model, 189–190 Correct sampling systems availability, 663–664 improving, 664 new plant feasibility studies, 664–665 Correct selection, 14–15 Correcting material, 646 Correctness and accuracy, 35–36 definition, 12 practical advantages of control of, 36 primary property, 31–32 primary structural property of, 35 of strategy, 31–37 Correlated populations, 68 Correlation and order, 118 Corrosion, 488–489 Cost of representativeness, 595–596 Critical components, 15 addition or creation, 493 losses, 490–491 moisture content, determination, 522–532 subtraction or destruction, 493–494 Critical content, 15, 186–188 Critical cutter speed, 463, 466–468 Critical cutter width, 464, 465 Critical fragment size, 462 ‘Critical plane’, 444 ‘Critical point’, 441, 460, 461, 463 Cross-contamination, 488, 540 Cross-stream cutter speed, 502 Cross-stream flap cutter, 403 Cross-stream flap sampler, 403–406 Cross-stream samplers, 140, 342–343, 359, 396–406, 661, 663 Crushing, 225–227 Crystallization water, 492, 520, 524, 534

Cube mixing, 100 Cutoff grade, 31, 32, 227, 266, 330 Cutter(s): see also Delimitation characteristics of, 397–401, 448–471; capacity, 449, 457; circular path, 405, 449, 506, 507, 510; depth of, 459; design for different materials, 464–465; edges of, 449–453; extraction correctness, 463–464; flowrates, 465; good design, 453, 459; inclination of, 454–457; length of, 453–454; opening of, 453–457, 508; overflowing problems, 457–459; rotating cutter, 468–471; slopes, 451–453; speed of, 466–468; stationary cutters, 471; straightness, 449; thickness, 449–451; velocity, 460–464, 506, 508; width of, 460–465, 468 fragment collision, 442–443, 444 geometry, 506 idle position, 403, 486 rebounding fragments, 439–446, 448 rotating sweepers, 410–411 rotating-path cutters, 401–403, 407–410 sampler layout, 506–508 speed of, 502–503 stationary cutters, 393–395 straight-path cutters, 395–401 Cuttings pile, 380–383 Cyanide leaching, 490, 547, 548 Cycles, 618–619 Cyclic fluctuations, 418, 561 Cyclic phenomena, 70, 129–130, 175, 367, 564, 586 Cyclic term, 117 D Data quality objectives (DQO), 27 Degenerated fractional shoveling, 481–482 Degenerated models of lot, 350 Degenerated splitting devices, 476–478 Degenerated splitting methods, 481–482 Dehydration, 532–533 Delimitation cross-stream samplers, 389–411; cross-stream flap sampler, 403–406; flexible hose samplers, 406; header tanks, 406; rotating cutter, 407–410; rotating sweepers, 410–411; rotating-path cutters, 401–403; stationary cutters, 393–395; straight-path cutters, 395–401; stream shape, 403 reject type splitters, 476–478 riffle splitters, 420–422, 473–475


rotary dividers, 416–420 sectorial splitters, 418–419, 476 spatulas, scoops, and shovels, 413–416; correct and incorrect, 415–416; extraction correctness, 413–416 spoons, 414–416 true splitting methods, 478–482 Delimitation bias, 379, 380–381 Delimitation correctness, 373–375 Density fractions, 174, 197, 199–200 Depth sampling, 588, 589–590 Descriptive statistics, 41 Descriptors and fragments, 90–91 and units, 109–110 and variograms, 127–128 Desiccators, 492 Destructive drilling techniques, 426–427 Deterministic sample, 15 DH see Distribution heterogeneity (DH) Diamond core drilling, 304, 375–376, 423–425 Diamond core length, 376 Dippers, 587 Discontinuity of lot models, 67–68, 83–84 Discontinuous errors, 193–195 Discontinuous fluctuations, 641 Discontinuous lots, 85, 86 Discontinuous processes, 637, 638, 640 Discontinuous term, 163–166 Discrete estimator, 353 Discrete models, 15, 354–355; see also specific types variance in, 191–194 zero-dimensional lots, 179–195; correct sampling, 189; notions for, 179–180; random variable distribution, 180–182; sampling errors, 183–188 Discrete set, 15 Discrete units, 83, 88, 140 Discrete variable, 45–46 Disk pulverizers, 490, 548 Disorder, 68, 84 Dispersion parameters, 48–51 Distribution: see also specific types binomial, 53, 180 normal, 59, 160, 184, 212, 278 Poisson, 54, 56–59, 265, 266, 289–290, 300, 557–559 probability, 47–51, 52–53 random variables, 180–182 skewness of, 184, 186 spatial, 81–82 weight, 182 Distribution heterogeneity (DH), 66


and constitution heterogeneity, 81, 96–97 definition, 15 and fragments, 94 general expression of, 106 of lots, 95–96 maximum, 102–103 observation scale and value of, 105 of zero-dimensional lots, 68–69, 87–113; basic relationships, 88–89; and fragment groups, 94–109; and fragments, 89–94 Distribution homogeneity definition, 15, 97–98 within lots, 98–100 natural, 98–102 one-dimensional, 101 revolution, 82, 101 three-dimensional, 100 two-dimensional, 101 Distribution law, 183–186 Diverters, 406 Domain, 15 DQO (Data quality objectives), 27 Draw-off points, 584, 585 Draw-points, 387–388 Drill core samples, 303, 321, 375–376, 544 Drilling, 304 blast-holes, 379–386, 431–435; contamination, 434; lateral segregation, 434–435; sampling blast-hole pile, 437 correctness of, 377 drifts and surveys, 375 leach pads, 436–437 of orebodies, 423–427, 544 subdrill area, 431–433 of three-dimensional lots, 598 of two-dimensional lots, 598 Drilling techniques, 423–425 Driving systems, 397, 409, 507 Dry materials, 256, 376, 418, 474, 489, 524 Drying, 523–524 baseline for, 532–534 classical method of, 526–528 recommended method, 529–532 simultaneous method, 528–529 single sample method, 529–532 Due diligence, 306–307 Dust, 486, 489 Dynamic phenomenon, 15 Dynamic processes, 638 E Education, 2, 287, 661, 662


Electric drive, 397, 506 Electrostatic properties, 491 Elementary slice, 351, 355 Elimination, 492 Eluvial gold, 541 Empirical approaches, 1, 17, 35, 42 Environmental protection agencies, 561 Environmental sampling, 561–593 acceptable cost, 597 conclusions and recommendations, 591–593 heterogeneity carried by pollutant, 569–571, 592 key questions and issues, 562–563 logical approach, 563–566 overall sampling error, 567–568 overview, 561–562 sampling plan development, 572–587, 592 sampling plan implementation, 587–590, 593 standard reference materials, 590–591 Equitable, definition, 16 Equity, 342, 420, 475, 478–480, 590 Errors: see also specific types analytical, 12, 73, 156, 568 definition, 16 generators, 136–137 heterogeneity fluctuation, 175–176 word usage controversy, 26–29 Essential waters, 520 Estimates, 16 Estimation and moisture content, 524–525 pollutant content, 575–576 Estimation errors see Overall estimation error Estimator, definition, 16 Exact selection, 16 Experimental errors, 42 Experimental results, interpolation, 43 Experimental variances, 270–272 Experimental variograms, 628–631 Exploration programs, 375–377, 423–437 Extended functions, 352–353 Extended increment, 343–344, 355–359, 364, 588 Extended sample, 344 Extraction correctness, 413–416, 446–471 cutter characteristics, 448–471 definition, 439 header tanks, 406 material to be sampled, 446–448 rebound analysis, 439–446 of splitting process, 420–422 Extraction errors see Increment extraction error Extrapolation, 16, 42, 612–613

Extrusion effect, 426–427 F Feasibility studies, 664–665 Feed, 35, 112, 122, 172 Filtration of pulps, 485 Financial losses, 59, 194, 334, 392 Fine materials, 239, 256, 399, 431, 464–465, 528 Fines, 489 Fire assay, 156, 216, 220, 224, 227 ‘First critical position’, 444, 445 First order average integral of variogram, 133–134, 142 Fixation, 491–492 Flap samplers, 403–406 Flexible hose samplers, 406 Flotation plants, 465, 468 Flow rates, 20, 71, 397, 465 Flow regulation, 509 Flowing streams, 351–352, 355–362 increment delimitation error, 389–411; stationary cutters, 393–395; straight-path cutters, 395–401; stream shape, 403 increment weighting error, 502–503 one-dimensional lots, 599 temporal model, 351–352 Food, 5, 13, 31, 45 Fossil-fuel-fired steam generating plants, 583, 641 Fractional shoveling, 480–481 Fragment shape factor, 200–202 Fragment size distribution factor, 202–203 Fragmental functions, 353 Fragmental increments, 445–446 Fragments: see also specific types definition, 16 density heterogeneity, 253 falling, 448 within group of fragments, 94–95 heterogeneity of, 90–91 within lots, 95 with one descriptor, 90 rebounding, 439–446, 448, 460–463 refluxing, blast-hole drilling, 431 shape factor, 200–202 shape heterogeneity, 255 size heterogeneity, 253–254 size of, 202–203 spinning, 448 with two descriptors, 90–91 Fraud, 497


Free falls, 486, 659, 663 Functional relations, 84; see also specific types Fundamental sampling error, 197–212, 235–243 cardinal rules, 237–243, 287 definition, 17, 194, 211–212 fragment shape factor, 200–202 gold, 557–559 liberation factor, 206–210, 216–217 logical strategies, 240–243 maximum, 213–214 and mineralogical factor, 203–205 minimization, 213–234; case study, 217–225; and nomographs, 214–216; recommended method, 217–225; size distribution analysis, 228–230 minimum, 213–214 quantifying, 287 standard deviation calculation, 281–282 standard deviation maximum, 294–295 variance of, 231–233, 243, 247


G Gangue concentration, 274–275 Gaussian model, 52–53, 54 Generalization, 17, 42 Geochemical exploration, 423, 487 Geological units, 241 Geometric average, 48 Geostatistics, 21, 71, 81, 132 G/L solids, controlling, 627–632 Glass containers, 491 Gold, 263–264 associated with another mineral, 227–228, 555–556 coarse and fine clustering, 328–329 contamination of, 548 fundamental sampling error, 557–559 intrinsic heterogeneity, 551–553 liberated, 225–227, 541–551 losses of, 547–548 most probable result, 559 nomographs, 548–551, 553–555 not liberated, 551–555 particle size, 304 Poisson distribution, 557–559 Poisson processes case study, 297–301 properties of, 241 samples, 265–266 sampling preparation problems, 547–548 size distribution, 218–225 in stream sediments, 545, 546 total allotted variance, 233–234

Grab sampling, 22, 107, 233, 341, 371, 386 Granulometric classes, 493 Granulometric factor, 17 Graphic representation of accuracy, 61–64 of precision, 61–64 of probability law, 45–46 Graphical integration of a variogram, 137–139 Gravels, 427 Gravity, 19, 21, 24, 80 Grinding, 13, 227, 485, 489 Group of particles, 17 Grouping and segregation error, 245–259 definition, 17, 195 minimization, 246–249, 288, 413, 416 variance of, 249–250 Grouping factor, 246–247 definition, 17, 103–104 minimization, 247–248 H

Hammer mills, 489 Handling, 18, 84, 115, 248 Harmonic average, 48 Header tanks, 406 Health risk, 562 Heterogeneity additivity of, 91 average of, 91 constitution, see Constitution heterogeneity (CH) definition, 18, 79–80, 116–117; illustration of, 107–109; practical use of, 110–113 distribution, see Distribution heterogeneity (DH) fragment density, 253 fragment shape, 255 fragment size, 253–254 intrinsic, 20 large-scale, 564, 570 logical flow sheet, 242–243 long-range fluctuation in, 128–129 one-dimensional lots, 69–70, 115–151; characterization of, 118, 158; chronological order of units, 124–127; chronological series, 118; relative variogram, 122–124; semi-variogram, 118–120; sequential characterization of, 118–120; variogram functions, 132–136; variogram interpretation, 137–148; variogram modeling, 127–132; variogram presentation, 120–121


one-dimensional wastes, 569–570 Periodic fluctuation in, 129–130 pollutants, 569–571, 592 short-range fluctuation in, 128 small scale, 190–191 three-dimensional, 102 three-dimensional lots, 70–71 three-dimensional wastes, 570–571 total, 116 two-dimensional lots, 70 two-dimensional wastes, 570–571 zero-dimensional lots, 68–69, 87–113; basic relationships, 88–89; and fragment groups, 94–109; and fragments, 89–94 zero-dimensional wastes, 569 Heterogeneity fluctuation error (HFE), 71, 136–137, 143–144, 160–163, 333 components of, 162–163 definition, 14, 18–19 logical analysis, 193–195 and moisture content, 535–537 nullity of the mean, 646 variograms, 646–648 Heterogeneity tests, 286–287 Heterogeneity variability, variograms, 611–612 Heterogeneous distribution see Distribution heterogeneity (DH) HFE see Heterogeneity fluctuation error (HFE) High-purity materials, 290–297 histograms, 292–294 sampling errors, 294–295 Visman’s sampling equation, 296–297 Histograms, 292–294 Homogeneity constitution, 94, 97 distribution, see Distribution homogeneity experimental verification, 102 one-dimensional, 101 three-dimensional, 100 two-dimensional, 101 Homogenization, 637–640 ball and rod mills, 640 in closed circuits, 640 continuous processes of, 640 discontinuous processes of, 638–640 dynamic processes of, 638–640 process effectiveness, 648–652 processes of, 637–640 stationary processes of, 638 Hoods, 486 Hoppers, 513

Hose samplers, 406 Hoses, 399 Hot plates, 492, 525 Hydraulic drives, 397 Hydrogen content, 521, 522 Hydroxides, 491 I Idle position of cutter, 403, 486 Impurities, 59, 205, 231, 311, 312 Inclination of cutter, 454–457 Incorrect extraction, 19, 455 Incorrect sampling, 568 Incorrect sampling systems, 658–663 improving, 661 list of, 659–660 and material balance, 662 and process control, 662 standards committees' responsibilities, 662 variographic experiment, 663 why build, 658–659 why customers still use, 660 Incorrect selection, 19, 185 Incorrect systems, bias test, 661 Increment, 19 Increment delimitation error, 19, 337–338, 357–359, 371–388 definition and concept, 371–375 flowing streams, 389–411; cross-stream flap sampler, 403–406; flexible hose samplers, 406; header tanks, 406; rotating cutter, 407–410; rotating sweepers, 410–411; stationary cutters, 393–395; straight-path cutters, 395–401; stream shape, 403 ore grade control, 377–388; open pit, 377–386; underground, 386–388 processing plant, 389–411 recommendations for exploration programs, 375–377 sampling at the laboratory, 413–422; riffle splitters, 420–422; rotary dividers, 416–420; spatulas, scoops, and shovels, 413–416 Increment extraction error, 19, 337–338, 359–360, 423–437 definition and concept, 423 exploration and mining, 423–437; channel sampling, 435–436; drilling and sampling blast-holes, 431–435; drilling techniques, 423–427; plucking effect, 427–431


sampling at the laboratory, 473–483; reject type splitters, 476–478; riffle splitters, 473–475; rotating cutters, 482–483; sectorial splitters, 476; true splitting methods, 478–482 sampling in a processing plant, 439–471; extraction correctness, 439, 446–471; rebound of a fragment, 439–446 Increment materialization error, 19, 71–72, 337–362 increment sampling process, 343–345 increment sampling process models, 347–362 nonprobabilistic selecting processes, 340–342 overview, 337–340 probabilistic selecting processes, 340, 342–343 recapitulation, 362 and splitting process, 345–347 Increment preparation error, 19–20, 338, 360–362, 365 sample integrity, 485–498; chemical composition, 491–492; conclusions, 497–498; contamination, 486–489; fraud and sabotage, 497; losses, 489–491; physical composition, 492–494; technology old and new, 494; unintentional mistakes, 495–497 Increment sampling process, 159 analysis, 343–345 models of, 347–362; flowing streams, 351–352, 355–362 and splitting process, 346–347 Increment weighting error (IWE), 14, 20, 338, 501–510 moments of, 501–502 practical experience, 502–504; flowing streams, 502–503; laboratory subsampling, 503–504; mineral deposits, 503 proportional sampling; overview, 504–506; practical implementation, 506–509; reliability/benefits of, 509–510; required characteristics, 508–509 Independent random variables, 180 Infrared lamps, 492, 525 Ingamells’ gangue concentration, 274–275 Ingamells’ minimum sample mass, 277 Ingamells’ most probable result, 272–274 Ingots, 70 Inlets, 584, 585 Input variograms, 649–650 In-situ nugget effect, 21–22, 311–331 case studies, 325–329


consequences of, 329–330 constitution heterogeneity, 312–316 definition, 312 local biases, 322 low background content, 323–325 necessary sample mass, 325–329 recommendations, 330–331 theoretical approach, 317–322; mineral of interest particle, 319–322 In-situ orebodies, 541 In-situ structural case, 312–314 Instantaneous estimates, 150–151 In-stream composite sampling, 584 In-stream stationary cutters, 393–395 In-stream suspended sediments, 584 Instrumental methods, 590, 591 Integration error see Heterogeneity fluctuation error (HFE) Integrity of sample increment preparation error, 485–498; chemical composition, 491–492; conclusions, 497–498; contamination, 486–489; fraud and sabotage, 497; losses, 489–491; physical composition, 492–494; technology old and new, 494; unintentional mistakes, 495–497 Internal accounting sampling, 597 Interpolation, 20, 41–42 experimental results, 43 Interpretation experimental results, 287 routine data, 570 Intrinsic heterogeneity, 20, 92–93, 211 Gold, 551–553 Intrinsic properties, 13, 35, 66, 80, 250 Iron content, ammonium paratungstate, 290–297 Isotropic module of observation, 19, 94, 375, 576 IWE see Increment weighting error (IWE) J Japanese slab cake, 413 JIS spoon, 414–415 Jones splitters (riffle splitters), 256, 420–422, 473–475 cardinal rules, 475 chute width, 474 design, 474 equity, 475 operational bias, 473–474 wrong direction feeding, 475 Judgment errors, 565


K Kemmerer depth samplers, 588, 589–590, 593 L Laboratory blenders, 251–252 Laboratory sampling Increment delimitation error, 413–422; riffle splitters, 420–422; rotary dividers, 416–420; spatulas, scoops, and shovels, 413–416 Increment extraction error, 473–483; reject type splitters, 476–478; riffle splitters, 473–475; rotating cutters, 482–483; sectorial splitters, 476; true splitting methods, 478–482 Laboratory subsampling, 503–504 Laboratory vibro-spatula, 257 Large-scale heterogeneity, 564, 570 Large-scale segregation, 70, 117, 252, 283 Lateral segregation, 434–435 Laterite, 383 Laterites, 282, 426, 491, 532 Law of small probabilities, 264–265 Layers, 643 Layout of sampler, 506–508 Leach pads, 436–437 Leaching, Cyanide, 490, 547, 548 Leading edge, 441–444, 460–462 Length of cutter opening, 453–454 Liberated gold, 225–227, 541–551 Liberation factor, 20, 206–210, 216–217 Liberation size, 20–21, 210 Line sampling, 392, 660 Liquid stream, 116, 351 Liquid wastes, 589 Liquids, 65, 117, 128, 165, 166, 567 Logarithmic model, 129, 214, 548, 553 Logging of drill core samples, 22, 297, 303 Logical flow sheet, 242–243 Lognormal statistical model, 289 Long-range absolute variogram, 615–616, 622–624 Long-range fluctuation in heterogeneity, 128–129 Long-range trend, 70, 586 Losses, 489–491 fines as dust, 489 fractions of the sample, 490–491 sampling and preparation circuit, 490 Lots classification of, 67–68, 82–84

composited, 92, 93 constitution homogeneity of, 94 continuity of model of, 67–68, 83–84 continuous model of, 349–354 definition, 21 degenerated model of, 350 dimensions, number of, 67 discontinuity of model of, 67–68, 83–84, 354–355 discontinuous, 85 distribution heterogeneity, 95–96 distribution homogeneity, 98–100 fragments within, 95 heterogeneity within, 68–71 intrinsic heterogeneity, 92–93 mass calculation, 505–506 models of, 82–83 one-dimensional lots, see One-dimensional lots three-dimensional lots, see Three-dimensional lots two-dimensional lots, see Two-dimensional lots zero-dimensional lots, see Zero-dimensional lots Low background content, 275, 280, 296, 323–325 Low-frequency grains, 265, 273, 293 M Magnetic stirrers, 639 Maintenance, 334, 418, 420, 502, 509, 657, 663 Manual drive, 397 Manufacturers of sampling equipment, 227, 334, 338, 466 recommendations to, 657–665 Marine bivalves, 586 Mass, 21 Mass-sampling ratios, 505 Material balances, 662 Materialization error see Increment materialization error; Selection modes Maximum cutter speed, 463 Maximum distribution heterogeneity, 102–103 Maximum particle size, 17, 291, 567, 574, 599 Maximum rebound, 462 Mean, 21, 646 Mean square, 21 Mechanical mixing, 638–639 Median, 48 Metallurgical accounting, 232 Microscopic observations, 286–287 Milling operations, 539


Mineral deposits, 503, 519–520 Mineral of interest particle, 319–322 Mineralogical factor, 21, 203–205 Mineralogical observations, 286–287 Mineralogical studies, 241–242 Minimum distribution heterogeneity, 24, 98, 100, 248, 638, 639 Minimum heterogeneity, 100, 652 Minimum weight (sample), 214, 543 Mining, increment extraction error, 423–437 Mining evaluations, 539 Mode (The), 48 Model extended increment, 355–357 Model fragmental increment, 359, 364 Models: see also specific types definition, 21 probabilistic, 189–190 Modes of selection see Selection modes Moisture content, 493, 519–537 and accuracy, 524–525 classical method, 526, 536 conclusions, 535–537 definition, 519–521 determination, 521–532, 534–535 difficulties encountered, 532–534 drying, 523–524, 526–529 sampling equipment, 534–535 single sample method, 529–532 Molybdenum trioxide concentrate, 205 Monitoring, neutron activation, 383–386 Mortars, 488, 490, 548 Most probable result, 272–274, 280, 296–297, 325, 559 Moveable lots, 342–343 Moving average, 282, 328, 329, 511, 607, 638 Moving average of variogram, 135–136 Mushrooms, 586–587 N Natural distribution homogeneity, 100–102 Natural gas treatment plants, 494 Necessary sample mass, 325–329 Neutron activation, 383–386 Nomographs, 214–216 liberated gold, 548–551 not liberated gold, 553–555 Nonchronological units, 179 Noncorrectness, 339, 357, 371 Nonprobabilistic selection processes, 340–342 Nonselective operations, 22, 485 Nonselective preparation steps, 365 Nonselective process, 338


Nonselective steps, 360, 365 Normal distribution, 59, 160, 184, 212, 278 Normal probability distribution, 52–53 Normal statistical model, 289 Notations, 25–26 Nugget effect, 21–22, 314; see also In-situ nugget effect Nullity of the mean, 646 Nutator mixer, 639 O Observed distribution, 294 Occluded water, 492, 520 One-dimensional homogeneity, 101 One-dimensional lots continuous models, 85–86, 155–177; analytical error, 156; characterization of sample, 159–160; continuous nonrandom term, 167–172; definitions in, 156–158; discontinuous random term, 163–166; flowing streams, 351–352, 355–362; heterogeneity fluctuation error, 160–163; heterogeneity of, 158; periodic term, 173–177; theoretical use of, 155–156; variance, 161–162 flow regulation, 509 flowing streams, 599 heterogeneity of, 69–70, 115–151; characterization of, 118; chronological order of units, 124–127; chronological series, 118; relative variogram, 122–124; semi-variogram, 118–120; sequential characterization of, 118–120; variogram functions, 132–136; variogram interpretation, 137–148; variogram modeling, 127–132; variogram presentation, 120–121 stationary lots, 599 One-dimensional models, 155–156 One-dimensional wastes, 567–568, 569–570, 575–576, 590 On-line analyzers, 29, 406, 632 On-line probes, 583 Optimized selection error, 246 Optimum sample mass, 276–277, 282 Ore deposit, 173 Ore grade control, 377–386 Orebodies, 541, 544 Outliers, 54, 222, 264, 288, 289 Output material, 643, 651–652 Output variogram, 648


Ovens, 492, 528 Overall estimation error, 65–76; see also specific components and constitution heterogeneity, 66 definition, 22 and distribution heterogeneity, 66 lots classification, 67–68 Overall sampling error, environmental sampling, 567–568 Overcorrections, 174, 604, 612, 633 Oxidation of sulfides, 491 P Packaging, 72, 74, 360 Paddles, 639 Palladium, 227, 540, 546, 555 Parabolic model, 129, 133 Particle size analysis, 304 Particle size distribution, Granulometric factor, 17 Particle size fractions, 13, 542, 572, 573 Passive physical components, 13 Pearson variation factor, 51 Pearson’s coefficient, 22 Periodic fluctuations, 173–174 Periodic heterogeneity contribution, 18 Periodic heterogeneity fluctuation, 129–130 Periodic phenomena, 130, 131, 132, 173, 366, 585 Periodic term, 173–177 Permutations, 51–52 Pestles, 548 Physical components, 13, 15, 42, 204 Piles, 642–643 Pilot plants, 42, 280, 507, 508 Pipe samplers, 660, 661 Placer deposits, 541 Planetary ball mills, 489 Plant feasibility studies, 664–665 Plant performance, 510 Plastic containers, 491 Plates, 451, 488 Platinum, 227, 540, 555, 556 Plucking effect, 427–431 Pneumatic drive, 397 Point sampling, 389, 392, 660 Poisson distribution, 54, 56–59, 265, 266, 289–290, 300, 557–559 Poisson model, 54–58, 264–265 double process, 56–58 single process, 54–56 Poisson processes, 263–267 additivity of, 266–267 double process, 290, 303


gold deposit case study, 297–301 programming factorials, 267 single process, 289–290 Pollutants content estimation, 575–576 heterogeneity carried by, 569–571, 592 regulatory threshold, 565–566, 572 structural property, 564–565 variographic experiment, 582–587 Population correlated, 68 of fragment groups, 94–109 of fragments, 89–94 heterogeneity characteristics, 111–113 random, 50, 68, 70, 84 sequential, 18 Potable water supplies, 585 Precious metals, 232, 539–559; see also specific types logical approach, 540–541 overview, 539–540 Precision, 60 definition, 22 graphic representation, 61–64 Preparation, 22 Preparation circuit, 490 Preparation errors see Increment preparation error Preparation stages, 24, 72, 74, 231, 365, 485 Prepared fragmental increment, 360–362 Preventive choice, 28 Preventive recommendations, 59, 278, 306–307 Primary properties, 33–35 Probabilistic model, 189–190 Probabilistic sampling, 342–343 Probabilistic selection, 22, 59, 340, 363–365 Probabilities, 43–44 definition, 23 Poisson model, 54–58, 264–265 selection, 184–185, 364 Probability distribution characterization of, 47–51 normal, 52–53 position parameters, 47–51 Probability law definition, 45 graphic representation, 45–46 Poisson model, 54–58, 264–265 of random variable, 44–45 Process control, 605–607 and incorrect sampling systems, 662 process parameter of interest, 611 search for variographic approach, 611–616


variograms, 609–619 Process feed material, 641 Process parameter of interest, 611 Process trends, 115, 337, 607 Process variability, 603–604 Processing plant, flowing streams, 389–411 Processing stream, 121 Productivity, 79, 401, 603, 641 Propellers, 639 Proportional cutter speed, 502–503 Proportional sampling overview, 504–506 practical implementation, 506–509 reliability/benefits of, 509–510 required characteristics, 508–509 Protocol, 24, 315–316 Pulps, 220, 376, 377, 418, 464, 465, 485 Pulverizing, 13, 22, 82, 216, 220, 227 Punctual functions, 352 Punctual increment, 355 Purity, 66, 110, 184, 186, 263 Purposive sampling, 340–341 Purposive selection, 23 Q Quadratic average, 48 Qualitative analysis, 80–82 Quality control Cartesian approach, 7 sampling correctness, 37, 79, 166, 257, 648 and variographic experiment, 137, 148–151 Quality control analysts, 497 Quality control engineers, 115 Quality control programs, 4, 41, 79, 143, 419 Quality control regulators, 231 Quality fluctuation error, 14, 18, 71, 235, 501 Quantifying structural problems variograms, 621–633; controlling g/l solids, 627–632; copper content of slag, 621–627 Quartering, 478–479 R Radial buckets, 381–382 Radial channels, 382–383 Railroad cars, 67, 83, 86, 115, 343 Random errors, 59–64 Random fluctuations, 68, 128, 130, 139, 521, 564 Random population, 50, 68, 70, 84 Random relations, 85; see also specific types Random sampling, 161–162


Random selection, 363 stratified mode, 366–367 systematic mode, 365–366 Random stratified sampling, 161, 580–581 Random stratified selection, 160, 173, 365, 366–367, 368, 575 Random systematic sampling, 579–580 Random systematic selection, 136, 160, 170, 363, 365–366 Random variability, 23 Random variables dependence between, 47 distribution of, 180–182 probability law of, 44–45 Raw materials, 79, 110, 150, 351, 491 Reality, challenges of, 288 Rebounding fragment, 439–446, 448, 460–463 Rebounding rule, 445–446, 448 Reclaiming phase, 644–648 Reclaiming system, 643–644 Rectilinear model, 129, 167, 168, 170 Refluxing fragments, blast-hole drilling, 431 Regionalization, 117, 128, 235 Regulatory threshold, pollutants, 565–566, 572 Relative standard deviation, 184–185 Relative variance, 164, 184 Relative variogram, 122–124 Representativeness, 23, 61, 595–596 Reproducibility see Precision Reservoir outlets, 584, 585 Residual component, 128, 130, 131, 132 Residual variances, 28, 100, 231, 233, 234, 612 Reverse circulation drilling, 304, 376–377 Revolution distribution homogeneity, 101 Revolution homogeneity, 80, 82, 101, 478 Revolving feeders, 418 Riffle splitters, 256, 420–422, 473–475 cardinal rules, 475 chute width, 474 design, 474 equity, 475 operational bias, 473–474 wrong direction feeding, 475 Ring mills, 494 Rod mills, 640 Rolling technique, 639–640 Rotary blenders, 641, 648 Rotary drilling, 598 Rotary sample dividers, 257–259, 416–420 sectorial splitters, 418–419; buckets (type), 419–420 Rotating-path cutters, 401–403 under a cone, 407–410, 482–483


correct geometry, 402–403 guidelines, 468–471 incorrect geometry, 403 Rotating-path sweepers, 410–411 Round-robin exercises, 591 Rules of center of gravity, 19, 360, 361, 445 of extraction correctness, 463–464 of sampling correctness, 166, 648 S Sabotage, 497 Sacks, 599 Safety, 5, 31, 215, 231, 305, 393, 394 Sales people, 232–233 Sample(s) definition, 23 physical composition, 492–494 processing, 509 Sample integrity increment preparation error, 485–498; chemical composition, 491–492; conclusions, 497–498; contamination, 486–489; fraud and sabotage, 497; losses, 489–491; physical composition, 492–494; technology old and new, 494; unintentional mistakes, 495–497 Sampler layout, 506–508 Sampling and accuracy, 35–36 concerned industries, 286 correctness of, 29, 35–36, 287, 333–335, 565, 588 cost of, 565–566 diagram of, 278–284, 301–303; nickel-cobalt deposit, 279–284 mode of, see Selection modes ratio for, 509 strategy for, 305–307 Sampling circuit, 486–488, 490 Sampling diagram nickel-cobalt deposit, 279–284 trace constituents, 301–303 variances, 278–279 Sampling equation (Visman), 267–268, 295–296 Sampling equipment, moisture determination, 534–535 Sampling equipment manufacturers, 227, 334, 338, 466 recommendations to, 657–665 Sampling error see Selection error Sampling intervals, 616–619


Sampling point, 343, 372, 392, 502, 568 Sampling problems, 595–600 Sampling protocol, 24 selection, 315–316 Sampling ratio, 24 Sampling scheme, 24 Sampling stage, 24; see also specific types Sampling strategy, standardization, 566 Sampling systems correct systems; availability, 663–664; improving, 664; new plant feasibility studies, 664–665 incorrect systems; bias test, 661; improving, 661; list of, 659–660; and material balance, 662; and process control, 662; standards committees' responsibilities, 662; variographic experiment, 663; why build, 658–659; why customers still use, 660 recommendations, 657–665 systems that don’t work, 657–658 Sands, 427 Saprolite, 383 Scoops, 341, 413–416, 420, 422, 473 Screen analysis, 227 Screening, 490 Screw conveyor, 486 Screw feeder, 256 ‘Second critical point’, 461 Second order average integral of variogram, 134, 142–143 Secondary properties, 33–35 Sector, 577–579 Sectorial splitters, 418–419, 476 Sediment sampling, 584, 590 Sediments, suspended, 584, 593 Segregation air turbulence, 255–256 definition, 24 fragment density heterogeneity, 253 fragment shape heterogeneity, 255 fragment size heterogeneity, 253–254 lateral, blast-holes, 434–435 other causes, 259 relativity of, 252–253 transient nature of, 250–252 vibrations, 256–259 Segregation error see Grouping and segregation error Segregation factor, 24, 247 minimization, 248–249 Selecting conditions, 24 Selecting results, 24


Selection error components of, 190–195 definition, 24 properties of, 183–188 Selection modes, 363–370 continuous model, 349–354 increment and splitting processes, 346–347, 353–354 modes used in practice, examples, 368–370 nonprobabilistic, 340–342 probabilistic, 22, 59, 189–190, 340, 342–343, 363–365 random, 365–368; stratified mode, 366–367; systematic, 365–366 Selection probabilities, 364 Semi-variogram, 118–120 Sequential population, 18 Set, 47–48 Shaker mixing, 98, 100 Shaker tables, 256, 490, 548 Shape factor, 23, 198, 200–202, 224, 287 Shape of cutter edges, 449–451 Sheet mixing, 639–640 Shellfish, 585–586 Short-range absolute variogram, 614–615, 621–622 Short-range heterogeneity contribution, 18 Short-range heterogeneity fluctuations, 6, 70, 128, 575 Shovel mixing, 98 Shovels, 415–416 alternate shoveling, 479–480 degenerated fractional shoveling, 481–482 Silicates, 520, 525, 547 Sill of output variogram, 131, 648 Silos, 100, 648 Silt, 424 Single assay standard deviation calculation, 283 variance, 279 Single descriptor, 90, 109, 110 Sinter plants, 641 Size analysis, 12, 203, 457 Size distribution, 208–209 gold, 218–225 Size distribution analysis, 228–230 Skewed distribution, 184, 186 Sliding gates, 660 Slurries, 115, 139, 179, 351, 418, 453 Small probabilities law, 264–265 Small scale heterogeneity, 190–191 Soft materials, 426 Soils, 383, 426, 491, 532


Solid wastes see Environmental sampling Solids, 465–466 compact, 81, 230, 231, 252 G/L solids, controlling, 627–632 suspended, 69, 389, 390, 392, 406 Solvable sampling problems, 595–600 Spatial distribution, 81–82 Spatulas, 413, 415–416 Specification limits, 604–605 Speed of cutter, 466–468 Splitting devices/methods see Delimitation Splitting process, 345–347, 375–376 Spoons, 415–416 Stacker speed, 642 Standard deviation, 281–283 fundamental sampling error, 294–295 Standard reference materials, 257, 590–591 Standards committees, 16, 31, 36, 287, 587, 662 Static phenomenon, 15 Stationary cutters, 393–395, 471 Stationary feeders, 418–419 Stationary lots, 346, 599 Stationary processes, 638 Statistical models, 289 Statistical process control, 603–604 Stirling approximation, 267 Stirrers, 639 Stochastic relations, 85; see also specific types Stopped belt sampling, 395–396 Storage bins, 101, 259, 583, 641 Straightness of cutter edges, 449 Straight-path cross-stream cutters, 395–401 contamination, 399–400 cutter accelerating, 397–398 damaged cutter, 398 dirty cutter, 399 parking place, 400–401 sampling, 396–397 secondary stream, 401 size cutter to stream, 401 Stratified random sampling, 161, 580–581 Stratified random selection, 160, 173, 365, 366–367, 368, 575 Stratum, 577–579 Stream flow rate, 642 Stream shape, 403 Streams all the time, 389–390 part of the time, 389, 390–393 stream shape, 403 temporal model, 351–352 thickness, 453, 570


Strict random sampling, 161–162 Structural properties, 31–32; see also specific types Subdrill area, 431–433 Subsampling, laboratory subsampling, 503–504 Substratum, 577–579 Subtraction, 492 Sulfides, oxidation, 491 Sulfur, 494 in coal, 583–584 Super-panners, 548 Surface sampling, 589 Suspended sediments, 584, 593 Suspended solids, 69, 389, 390, 392, 406 Systematic biases, 43, 480, 481 Systematic errors, 59–64 Systematic grid, 577–579 Systematic random selection, 136, 160, 170, 363, 365–366 Systematic sampling, 579–580, 644–648 Systematic scheme, 161, 346, 599 T Tanks, 406 Technical sampling, 597 Temporal model of flowing streams, 351–352 Terminology, error and uncertainty usage controversy, 26–29 Test tube samples, 638–639 Theory of Sampling (TOS) links between Gy and Ingamells, 305–307 overview, 1–7 works of Visman and Ingamells, 263–284, 289–290, 307 Thickness of cutter edges, 449–451 Thief probes, 341 Three-dimensional heterogeneity, 80 Three-dimensional homogeneity, 100 Three-dimensional lots, 83, 86, 597–598 heterogeneity of, 70–71 Three-dimensional wastes, 570–571, 581–582 Time series, 117, 646 variograms of, 120–121, 651 Time-sampling ratio, 505, 507, 509 Total allotted variance, 231–234 Total heterogeneity, 116–117 Total quality management, 657, 665 Total sampling error, 72–73 Toxic chemicals, monitoring, 584–585 Toxic constituents, 497 Trace constituents, 285–304 case studies, 290–301 Ingamells’ work, 289–290

logical approach, 285–304 recommendation, 304 sampling diagram, 301–303 Trace elements, 322, 590 Trailing edge, 442, 444–445, 463 Trenching soil, 383 Trends continuous, 128 long-range, 70, 586 process, 115, 337, 607 Triers, 587 Trucks, 67, 71, 83 True splitting devices riffle splitters, 473–475 sectorial splitters, 476 True splitting methods alternate shoveling, 479–480 coning and quartering, 478–479 degenerated splitting methods, 481–482 fractional shoveling, 480–481 Tube, 380–381 Tube rotator, 639 Tungsten carbide coring bits, 425–426 Turbula mechanical mixer, 639 Turbulences, 255–256, 390–391, 460, 489, 508 Two-dimensional distribution homogeneity, 82, 101 Two-dimensional homogeneity, 101 Two-dimensional Japanese slab-cake, 413 Two-dimensional liquid wastes, 589 Two-dimensional lots, 83, 86, 598–599 heterogeneity of, 70 Two-dimensional wastes, 570–571, 576–581, 588–589 U Uncertainty, word usage controversy, 26–29 Underground face development, 387, 435–436 Unintentional mistakes, 495–497 Units chronological order, 84, 124–127 number of, 181 with one descriptor, 109–110 spatial distribution, 81–82 transformation of a set of, 109–110 with two descriptors, 109–110 Unmovable lots, 343 Unsolvable sampling problems, 595–600 V Variance: see also specific types


cancellation of, 165, 171–172, 246–247 in continuous models, 161–162, 191–194 in discrete models, 191–194 experimental, 270–272 fundamental sampling error, 231–233, 243, 247 grouping and segregation error, 249–250 of heterogeneities, 91, 646–648 mathematical cancellation, 165, 171–172, 246–247 minimization, 165–166, 172, 247–249 nugget effect, 315 properties of, 164–165, 170–171 relative, 164, 184 residual, 28, 100, 231, 233, 234, 612 sampling diagram of, 278–279 single assay, 279 total allotted, 231–234 variographic, 315 Variograms absolute, 614–616, 621–624, 628–631 actual presentation of, 120–121 auxiliary functions of, 132–136, 151 of chronological series, 120–121, 651 client’s needs, 613–614, 621, 628 control chart, 616–619, 632 and descriptors, 127–128 experimental absolute, 628–631 extrapolation, 612–613 first order average integral, 133–134, 142 graphical integration, 137–139 heterogeneity variability, 611–612, 646–648 input, 649–650 limitations, 613 long-range absolute, 614–616, 622–624 measurement, 616–618, 624–627 modeling of, 127–132 moving average of, 135–136 output material, 651–652 point-by-point interpretation, 137–144 precision of variance, 121 process control, 609–619 processing stream, 121 properties of, 130–131 in quality control, 137, 148–151 quantifying structural problems, 621–633; controlling g/l solids, 627–632; copper content of slag, 621–627 random variance components, 315 relative, 122–124 residual component, 130 sampling interval, 616–618, 624–627


second order average integral, 134, 142–143 short-range absolute, 614–615, 621–622 sill of, 648 stability, 150 step-by-step interpretation, 144–148 total sampling, 616–618, 624–627 Variographic approach, 611–616 Variographic experiment, 148–151, 582–587 Variographic random variance, 315 V-blenders, 639 Velocity of cutter, 460–464, 506, 508 Vezin samplers, 377, 378, 401, 404, 452, 468–469 Vibrations, and segregation, 256–259 Vibratory ball mills, 489 Vibratory feeders, 486 Virus detection methodology, 584 Visman’s experiment, 268–272 Visman’s sampling equation, 267–268, 295–296 Vortex mixers, 639 W Waste piles, 334, 341, 580 Waste producers, 568, 583 Wastes: see also Environmental sampling heterogeneity, 569–571 liquid, 589 one-dimensional, 567–568, 569–570, 575–576, 590 three-dimensional, 570–571, 581–582 two-dimensional, 570–571, 576–581, 588–589 zero-dimensional, 567, 569, 572–575, 590 Water absorbed, 520, 532–534, 537 adsorbed, 492, 520, 536, 537 of constitution, 492, 520, 525, 532 of crystallization, 492, 520, 524, 532 depth sampling, 588 fixation of, 491 monitoring, 585 quality trends, 585 streams, 115, 569 supplies, 585 Water plant intakes, 584, 585 Weighing error weightometers, 511–515; accuracy, 511–512; correct installation, 512–515 Weight base line, 532–534 Weighted average, 48 Weighting bias, 501 Weighting error see Increment weighting error (IWE)


Weightometers, 511–515 accuracy, 511–512 correct installation, 512–515 Wet materials, 522–523 Width of cutter, 460–465, 468 ‘Windrow method’, 643 Z Zero-dimensional lots, 83, 85, 599–600


discrete models, 179–195; correct sampling, 189; notions for, 179–180; random variable distribution, 180–182; sampling errors, 183–188 heterogeneity of, 68–69, 87–113; basic relationships, 88–89; and fragment groups, 94–109; and fragments, 89–94 Zero-dimensional wastes, 567, 569, 572–575, 590 Zone of influence, 161