Subsurface Environmental Modelling Between Science and Policy [1st ed.] 9783030511777, 9783030511784

This book provides a broad overview of essential features of subsurface environmental modelling at the science-policy interface.


English Pages XVI, 219 [232] Year 2021


Table of contents:
Front Matter ....Pages i-xvi
Introduction (Dirk Scheer, Holger Class, Bernd Flemisch)....Pages 1-12
Front Matter ....Pages 13-13
Conceptual Models for Environmental Engineering Related to Subsurface Flow and Transport (Dirk Scheer, Holger Class, Bernd Flemisch)....Pages 15-33
Overview of Mathematical and Numerical Solution Methods (Dirk Scheer, Holger Class, Bernd Flemisch)....Pages 35-56
Software Concepts and Implementation (Dirk Scheer, Holger Class, Bernd Flemisch)....Pages 57-81
The Science-Policy Interface of Subsurface Environmental Modelling (Dirk Scheer, Holger Class, Bernd Flemisch)....Pages 83-106
Front Matter ....Pages 107-107
Geologic Carbon Sequestration (Dirk Scheer, Holger Class, Bernd Flemisch)....Pages 109-152
Hydraulic Fracturing (Dirk Scheer, Holger Class, Bernd Flemisch)....Pages 153-178
Nuclear Energy and Waste Disposal (Dirk Scheer, Holger Class, Bernd Flemisch)....Pages 179-192
Further Subsurface Environmental Modelling Cases (Dirk Scheer, Holger Class, Bernd Flemisch)....Pages 193-210
Conclusions and Outlook (Dirk Scheer, Holger Class, Bernd Flemisch)....Pages 211-219


Advances in Geophysical and Environmental Mechanics and Mathematics

Dirk Scheer · Holger Class · Bernd Flemisch

Subsurface Environmental Modelling Between Science and Policy

Advances in Geophysical and Environmental Mechanics and Mathematics

Series Editor
Holger Steeb, Institute of Applied Mechanics (CE), University of Stuttgart, Stuttgart, Germany

More information about this series at http://www.springer.com/series/7540

Dirk Scheer · Holger Class · Bernd Flemisch



Subsurface Environmental Modelling Between Science and Policy


Dirk Scheer
Institute for Technology Assessment and Systems Analysis (ITAS)
Karlsruhe Institute of Technology (KIT)
Karlsruhe, Germany

Holger Class
Institute for Modelling Hydraulic and Environmental Systems (IWS)
University of Stuttgart
Stuttgart, Germany

Bernd Flemisch
Institute for Modelling Hydraulic and Environmental Systems (IWS)
University of Stuttgart
Stuttgart, Germany

ISSN 1866-8348    ISSN 1866-8356 (electronic)
Advances in Geophysical and Environmental Mechanics and Mathematics
ISBN 978-3-030-51177-7    ISBN 978-3-030-51178-4 (eBook)
https://doi.org/10.1007/978-3-030-51178-4

© Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made.

This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

The beginnings of this book can be traced back to about a decade ago when we first met within the Cluster of Excellence in Simulation Technology (SimTech) at the University of Stuttgart in late 2008. The SimTech Cluster, funded by the German Research Foundation (DFG), provided an opportunity for scientists from different faculties to carry out interdisciplinary research in the field of computer simulations from their different perspectives of engineering, natural sciences and social science. We all shared the same vision and firm conviction: that is, computer simulations will continue to gain importance within both the scientific community itself and society as a whole. From that perspective, computer simulations at the science–policy interface deserve our full attention. Focusing on subsurface environmental issues, their modelling and their science-policy impact, we carried out several research projects together with a particular focus on the exploration of potential and challenges for subsurface environmental modelling at the science–policy interface. Our collaborative research initiated intensive and still ongoing discussions and feedback loops between us to better understand the “other” discipline, our “own” discipline and ways forward to gain better insights into the research item under focus.

Our joint research in the past decade was strongly aligned with Carbon Capture and Geological Storage (CCS)—a technique fiercely discussed as an option for mitigating CO2 emissions and extensively investigated in Germany around 2010 with the government providing significant amounts of funding. However, in the meantime, the CCS technique has been abandoned for the time being and the topic has disappeared from the scene. Since the climate still hasn’t been saved, one might see discussions around CCS resurfacing at some point in the future. Beyond CCS, there is a lot more to subsurface environmental modelling at the science–policy interface. The transformation of the energy supply system—in Germany and worldwide—turns its head more and more towards the subsurface: geothermal energy, nuclear waste disposal, hydraulic fracturing and energy storage are just a few examples of why the subsurface plays a fundamental role in both science and society. We should not forget that the subsurface contains the most basic resource for mankind: enormous amounts of drinking water which need to be protected with decisions that are also more and more based on the results of computer simulations.


It is therefore essential, in our view, to better understand subsurface environmental modelling at the science–policy interface in order to achieve a sustainable and reconciled subsurface utilisation amongst competing interests, to which this book intends to contribute. Throughout the past years, we enjoyed collaborating with many national and international researchers. We are grateful for the findings resulting from these collaborations which helped to shape the contents of this book. We would also like to acknowledge the professional editing by Michael Errington and the reviews from the members of our Stuttgart working group.

Heidelberg, Stuttgart
May 2020

Dirk Scheer · Holger Class · Bernd Flemisch

Contents

1 Introduction ... 1
  1.1 The Energy Transition (German: Energiewende) and Its Turn Towards the Subsurface ... 2
  1.2 Subsurface Environmental Modelling Has a Role to Play in Science ... 5
  1.3 ... and at the Science-Policy Interface ... 7
  1.4 This Book ... 9
  References ... 11

Part I Models, Methods, Software – and the Science-Policy Interface

2 Conceptual Models for Environmental Engineering Related to Subsurface Flow and Transport ... 15
  2.1 Overview ... 15
    2.1.1 A Timeline of Literature and Research on Subsurface Flow and Transport ... 15
    2.1.2 The Range of Topics and Scales ... 17
    2.1.3 Uncertainties and Risk Assessment ... 21
  2.2 Governing Equations ... 22
    2.2.1 Single- and Multiphase Flow Through Porous Media ... 23
    2.2.2 Flow-Induced Geomechanics ... 26
    2.2.3 Subsurface Flow and Reactive Transport ... 28
  References ... 32

3 Overview of Mathematical and Numerical Solution Methods ... 35
  3.1 Approaches for Solving Multiphase Flow Equations ... 35
    3.1.1 Formulations ... 36
    3.1.2 Pressure-Pressure Formulation ... 36
    3.1.3 Pressure-Saturation Formulation ... 37
    3.1.4 Global Pressure-Saturation Formulation ... 37
    3.1.5 Assigning Boundary Conditions ... 39
  3.2 Discretization of the Equations ... 40
    3.2.1 Time Discretization ... 41
    3.2.2 Space Discretization ... 42
    3.2.3 Box Method ... 42
    3.2.4 Cell-Centred Finite-Volume Method ... 43
  3.3 Linearization and Newton's Method ... 44
    3.3.1 Fully-Coupled Solution ... 44
    3.3.2 Example of a Sequential Iterative Scheme: Flow and Geomechanics ... 45
    3.3.3 Fully-Coupled Scheme ... 47
    3.3.4 Sequentially Iterative Fixed-Stress Scheme ... 47
    3.3.5 Exemplary Scenario Evaluation ... 50
  3.4 Primary Variables for Compositional Models ... 51
    3.4.1 Degrees of Freedom According to Gibbs' Phase Rule ... 51
    3.4.2 An Algorithm to Substitute Primary Variables ... 52
    3.4.3 Primary Variables for Non-isothermal Water-Gas-NAPL Systems ... 53
    3.4.4 Primary Variables for Modelling Steam Injection in the Unsaturated and Saturated Zones ... 54
  References ... 55

4 Software Concepts and Implementation ... 57
  4.1 Open Science: Principles of Open-Source Code and Data ... 57
    4.1.1 Why Develop Open-Source Research Software? ... 58
    4.1.2 The Definitions of Free and Open-Source Software ... 60
  4.2 Infrastructure for Open-Source Projects ... 62
    4.2.1 Version Control ... 62
    4.2.2 Code Hosting and Version Control Management ... 62
    4.2.3 Code Publication ... 63
    4.2.4 Automated Testing and Dashboards ... 63
    4.2.5 Issue Tracking ... 64
    4.2.6 Website ... 64
    4.2.7 Mailing List and Other Communication Tools ... 64
    4.2.8 Project Analysis ... 65
  4.3 The Porous-Media Simulator DuMux ... 65
    4.3.1 Historical Development ... 66
    4.3.2 Design Principles ... 71
    4.3.3 Modules and Structure ... 72
    4.3.4 DuMux ... 73
    4.3.5 Derived Modules ... 75
    4.3.6 Relevant Models ... 77
    4.3.7 1pnc ... 77
    4.3.8 2p2cni ... 77
  References ... 79

5 The Science-Policy Interface of Subsurface Environmental Modelling ... 83
  5.1 Knowledge Transfer Between Science and Policy ... 83
    5.1.1 The Science-Policy Interface ... 83
    5.1.2 Knowledge Transfer Patterns at the Science-Policy Interface ... 87
  5.2 Challenging Simulations Across Borders ... 91
    5.2.1 The Black-Box Character of Simulations ... 92
    5.2.2 The (Un-)certainty Dilemma of Simulations ... 94
    5.2.3 Perception of Simulations Regarding Hazard and Risk Assessment ... 96
  5.3 Producing Knowledge—Communicating Knowledge: Modes of Modelling Across Borders ... 97
    5.3.1 The Knowledge Mode of Simulations ... 98
    5.3.2 The Communication Mode of Simulations ... 100
    5.3.3 Computerised Policies - Politised Computers: Types of Simulation Usage ... 101
  References ... 104

Part II Case Studies of Subsurface Environmental Modelling

6 Geologic Carbon Sequestration ... 109
  6.1 Background ... 109
    6.1.1 Overview of Selected Commercial/Research CCS Projects ... 112
  6.2 Modelling Issues ... 116
    6.2.1 The Scope of Modelling in CCS ... 116
    6.2.2 Focussing on Subsurface CCS Modelling ... 119
  6.3 Science-Policy Issues ... 122
    6.3.1 Processing of Modelling Results by Policy Makers and Stakeholders ... 122
    6.3.2 Participatory Modelling Approaches of Brine Migration ... 133
  6.4 Selected Model-Based Illustrations: CO2 Plume Shape Development and Convective Mixing ... 141
    6.4.1 CO2 Plume Shape Development: Injection in a Saline Formation ... 142
    6.4.2 Convective Mixing ... 144
  References ... 147

7 Hydraulic Fracturing ... 153
  7.1 Background ... 153
    7.1.1 Overview of Selected Commercial/Research Fracking Projects ... 155
  7.2 Modelling Issues ... 158
    7.2.1 The Importance of Risk Assessment ... 158
    7.2.2 Example: Scenario-Related Risk Assessment in the EU H2020 FracRisk Project ... 159
    7.2.3 Challenges for Fracking-Related Predictive Modelling ... 164
  7.3 Science-Policy Issues ... 165
    7.3.1 Fracking as a Contested Technology ... 165
  7.4 Selected Model-Based Illustration: Methane Migration Induced by Hydraulic Fracturing ... 172
  References ... 176

8 Nuclear Energy and Waste Disposal ... 179
  8.1 Background ... 179
  8.2 Modelling Issues ... 182
  8.3 Science-Policy Issues ... 184
    8.3.1 Subsurface Modelling as a Key Factor for Site Selection Processes ... 184
  8.4 Selected Model-Based Illustration: Heat-Pipe Effect ... 188
  References ... 191

9 Further Subsurface Environmental Modelling Cases ... 193
  9.1 Energy-Related Subsurface Engineering Applications ... 193
    9.1.1 Geothermal Energy Exploitation ... 193
    9.1.2 Aquifer Thermal Energy Storage (ATES) ... 195
    9.1.3 Fluid Injection and Induced Seismicity ... 195
  9.2 Contamination in the Subsurface: Spreading and Remediation ... 197
    9.2.1 Contaminant Spreading in the Unsaturated Zone ... 197
    9.2.2 Remediation of NAPL-Contaminated Sites ... 199
    9.2.3 Contaminated Lands Between Stakeholders, Policy and Science ... 203
    9.2.4 Selected Model-Based Illustration: Remediation of Contaminated Soils ... 204
  References ... 208

10 Conclusions and Outlook ... 211
  References ... 218

List of Figures

Fig. 2.1 Schematic view of selected gas-liquid flow problems in the subsurface (Class 2008) ... 18
Fig. 2.2 Soil structures from different perspectives. Photographs taken in a quarry (permission from A. Färber, personal communication) ... 19
Fig. 2.3 Soil structures from different perspectives. Photographs taken in a quarry (permission from A. Färber, personal communication) ... 20
Fig. 2.4 Averaging: transition between micro scale (or: pore scale) and REV scale ... 23
Fig. 3.1 Discretization of the box method ... 43
Fig. 3.2 Discretization of the cell-centred finite-volume method ... 44
Fig. 3.3 Evolution of the calculated pressure during sequential fixed-stress iterations compared with the value achieved with the fully-coupled scheme as the reference. It can be clearly seen that the fixed-stress sequential scheme converges to the fully-coupled solution ... 50
Fig. 3.4 Comparison of pressure evolutions for a two-phase flow injection scenario (CO2 in brine). The solution of the fixed-stress scheme without iteration deviates from the reference solution of the fully-coupled scheme. An iterated fixed-stress scheme (not shown) provides the same curve as the fully-coupled scheme ... 51
Fig. 3.5 Process-adaptive substitution of primary variables after a local change of the phase state. Here, the NAPL phase disappears at one node during the time step of size Δt (Class 2008) ... 54
Fig. 4.1 Dependencies of DUNE and DuMux modules and groups ... 72
Fig. 6.1 Options for the geological storage of CO2 according to IPCC (2005) ... 111
Fig. 6.2 Variation of CO2 trapping mechanisms in the subsurface and the corresponding dominating processes on different time-scales (modified after IPCC (2005)) ... 111
Fig. 6.3 Types of CCS modelling, adapted from Scheer (2013) ... 116
Fig. 6.4 Conceptual framework for analysing simulations at the science-policy interface, adapted from Scheer (2015) ... 124
Fig. 6.5 Sketch of a geological model used for interviews with experts ... 136
Fig. 6.6 Gas saturation (left) and CO2 mass fraction in the liquid water phase (right) after 1.55 × 10^8 s simulated time. Total CO2 injection rate was 0.001 kg/s ... 143
Fig. 6.7 Gas saturation (left) and CO2 mass fraction in the liquid water phase (right) after 1.55 × 10^7 s simulated time. Total CO2 injection rate was 0.01 kg/s, thus ten times higher than the value given in Table 6.8 ... 143
Fig. 6.8 The mole fraction of CO2 in brine after a time of 1.7 × 10^8 s (left) and after 2.8 × 10^8 s (right). A mesh with 30 × 30 cells was used ... 146
Fig. 6.9 The mole fraction of CO2 in brine after a time of 1.7 × 10^8 s (left) and after 2.8 × 10^8 s (right). A mesh with 100 × 100 cells was used ... 146
Fig. 7.1 Exemplary illustration: hydraulic fracturing. The fracked shale layer is strongly exaggerated in the vertical in order to show the typical features of fracked shale: fracks perpendicular to the horizontal well, a network of small fractures, methane released from the shale and potentially escaping, fracking fluid pumped into the shale ... 154
Fig. 7.2 Schematic of the source scenario, including the shale layer and overburden. Boundary conditions and dimensions are shown in the figure. The fracking region is displayed zoomed. The scenario employs a variable number of fractures in a worst-case assumption, all connecting the injection well (bottom boundary) to the overburden ... 160
Fig. 7.3 Schematic of the pathway (left) and target (right) scenarios. The pathway features a fault zone at a variable distance from the fracking region, which receives data from the results of the source scenario as in Fig. 7.2. Likewise, the target aquifer receives an inflow of methane as calculated from the pathway scenario ... 161
Fig. 7.4 Source scenario: histograms of the methane mass flux (left) and the maximum pressure at the shale/overburden interface (right) as obtained from a Monte-Carlo analysis of the surrogate model using PCE with 4th-order polynomials as well as 50 runs of the full-complexity model for comparison ... 162
Fig. 7.5 Results of the AMAE index evaluation for the source scenario (see Dell'Oca et al. (2017)). The highest sensitivity is seen for the overburden permeability anisotropy in this setup ... 163
Fig. 7.6 Variation of the expected value of methane flux into the overburden from the source scenario dependent on fixed input parameter values ... 164
Fig. 7.7 Pressure distribution after 14400 s (4 h, left) and 10^5 s (28 h, right). After 4 h, the injection is stopped. Note that the maximum pressure in the matrix after 4 h reaches a value of 1.1e7 Pa ... 174
Fig. 7.8 Gas (methane) saturation after 4 h (left) and after 28 h (right) ... 175
Fig. 7.9 Methane saturation after 30 days (2.6e6 s) (left). The plot on the right shows the amount of gas that escaped into the overburden over time. Crosses correspond to the times of the example graphs in Fig. 7.8 ... 175
Fig. 8.1 Conceptual illustration of the waste classification scheme according to the IAEA in 2009 (IAEA Safety Standards Series No. GSG-1 2009) ... 180
Fig. 8.2 Generic model for a deep geological repository, modified after Stahlmann et al. (2016) ... 181
Fig. 8.3 Basic setup and processes involved in a heat-pipe effect ... 188
Fig. 8.4 Spatial distribution of the molar fraction of water in the gaseous phase (left) and the liquid saturation (right) for a 1D heat-pipe problem. The heat source is placed at the right end of the model area. Continuous curves represent a simulation with a capillary pressure factor of 1, while the dashed curves represent simulations where it is 0.5. In the left diagram, the two curves cannot be distinguished ... 190
Fig. 8.5 Spatial distribution of the gaseous pressure and liquid pressure (left) and temperature (right) for a 1D heat-pipe problem. The heat source is placed at the right end of the model area. Continuous curves represent a simulation with a capillary pressure factor of 1, while the dashed curves represent simulations where it is 0.5 ... 190
Fig. 9.1 The dominating processes of contaminant spreading after a NAPL spill in the subsurface can vary with the time-scale of interest; eventually, the same holds true for a subsequent steam injection ... 198
Fig. 9.2 Schematic diagram: comparison between steam injection and steam/air injection, combined with soil-air extraction for contaminant recovery in the unsaturated zone ... 200
Fig. 9.3 This diagram illustrates the significantly different processes when injecting steam into the unsaturated zone (left half) and below the groundwater table (right half). In this figure, the thermal radius of influence would be too small to reach the NAPL spill ... 201
Fig. 9.4 Setup of the 2D VEGAS experiment ... 205
Fig. 9.5 Spatial distribution of NAPL saturation at time t = 0 s (left) and after t = 4320 s (right). The area marked with the blue square-shaped dots is the lens of lower permeability ... 207
Fig. 9.6 Spatial distribution of temperature (left) and contaminant mole fraction in the gas phase (right) after t = 4320 s ... 207

List of Tables

Table 3.1 Non-isothermal three-phase three-component model: phase states, corresponding primary variables and criteria for substitution in the case of phase appearance ... 53
Table 3.2 Non-isothermal two-phase single-component model for steam-injection in the saturated zone: phase states and corresponding primary variables ... 55
Table 5.1 Types of research use by and service provision for policy ... 89
Table 5.2 Types of knowledge and communication modes ... 98
Table 6.1 Types of modelling in the area of carbon capture and storage ... 117
Table 6.2 Categories and specification of processing simulations among stakeholders ... 126
Table 6.3 Assessment of the simulation methods and tools among stakeholders ... 128
Table 6.4 Quality assessment of the Regional Pressure Study ... 130
Table 6.5 Assessment of usage types among stakeholders ... 131
Table 6.6 Overview of recommendations from stakeholders and category of revision complemented by brief comments ... 139
Table 6.7 Continued: overview of implementation and revision of stakeholder input ... 140
Table 6.8 Boundary conditions ... 142
Table 6.9 Initial conditions (t = 0) ... 142
Table 6.10 Model input parameters ... 142
Table 6.11 Boundary conditions ... 145
Table 6.12 Initial conditions (t = 0) ... 145
Table 6.13 Parameters ... 146
Table 7.1 Model input parameters considered as variable in the source scenario. Values of their ranges are given exemplarily ... 160
Table 7.2 Model input parameters considered as variable in the pathway scenario. Values of their ranges are given exemplarily ... 161
Table 7.3 Model input parameters considered as variable in the target scenario. Values of their ranges are given exemplarily ... 161
Table 7.4 Need for action and research on environmental impacts due to fracking according to the SRU report ... 170
Table 7.5 Boundary conditions in the matrix domain ... 173
Table 7.6 Boundary conditions in the fracture domain, existing only in the lower part of the total model domain and 'overlapping' with the matrix domain ... 173
Table 7.7 Initial conditions in the matrix and the fracture domain ... 173
Table 7.8 Model parameters in the fracture-matrix setup, all residual saturations are zero everywhere ... 174
Table 8.1 Consideration criteria for the German site selection process ... 186
Table 8.2 Boundary conditions ... 189
Table 9.1 Boundary conditions ... 205
Table 9.2 Initial conditions ... 206
Table 9.3 Model parameters related to the hydraulic properties of the fine and coarse sand in the flume filling ... 206

Chapter 1

Introduction

Groundwater, resources, hidden secrets from the past... The subsurface contains all this and has consequently attracted the attention of engineers, geologists and many others. Groundwater has always served as a major source of drinking water and its protection is and will remain a top-priority task, not only for arid regions, but for all countries worldwide. Recently, however, the subsurface has received increasing attention from environmental engineering. There are several reasons for this, one of the most prominent being the transition in energy-supply technologies from fossil-fuel-dominated sources to those of sustainable energy (German: Energiewende).

Historically, the subsurface has always been a treasure chest for mankind to recover resources. Prehistoric ages, such as the Bronze Age or the Iron Age, have been named after the dominant resources that influenced the development and civilisation of mankind. Mining has been an engineering task through all ages and in all civilisations. In particular, the use of chemicals has led to mining having an impact on the environment, primarily on groundwater. After 1800, the increasing availability of fossil energy resources from the subsurface, such as coal and oil, boosted the industrial revolution; this has continued ever since and affected the environment, the climate and, generally speaking, living conditions all over the planet.

Early interest in and usage of the subsurface by humankind goes along with early subsurface policy regulations and impact assessments. Political decision makers have always put subsurface regulations into force. In the Middle Ages, selected towns were assigned the status of mining town (German: Bergstadt), equipped with specific rights, privileges and restrictions with the main objective of attracting labour forces to settle down nearby. During industrialisation, the feudal mining regulations transformed into a rather functional mining administration which set general legal standards, also for the emerging coal mining areas. In 1865, for instance, parliamentary decision-making in Germany approved the General Mining Act for the Prussian States, which laid the basis of modern German subsurface regulation.


However, only since then has the general public developed an awareness and understanding of environmental problems. Specific study programmes on environmental engineering were only established at universities in the early 1990s (e.g. in Germany): today, subsurface flow and transport is one of the major fields of research and application for environmental engineers. It is also a major field of software development and computer simulations and is gaining more and more importance for policy-making and public debate. Why?

1.1 The Energy Transition (German: Energiewende) and Its Turn Towards the Subsurface

At least since the Fukushima Daiichi nuclear disaster in March 2011, the transition from a nuclear and fossil-fuelled power supply to renewable energies has gained enormous momentum. Greenhouse gas emissions from fossil-fuel power stations and the high operational risks of nuclear power stations, together with an unresolved waste-disposal problem, have made this task a high priority, which is denoted in German as the Energiewende. The Fukushima accident happened only two weeks before the 2011 Baden-Württemberg state election, and the parties campaigned intensively on the nuclear issue. In 2000, the German federal government, led by the Social Democrats and the Green Party, had announced its intention to phase out the use of nuclear power. The resulting phase-out law was modified by the new Christian Democrat and Free Democrat government in 2010 to extend the operating time of German nuclear power stations. Leading politicians and managers and, in particular, the four major energy suppliers E.ON, RWE, Vattenfall and EnBW had appealed to the public that the transition to renewable energies would take time and that state-of-the-art coal-fired power stations as well as the existing nuclear power stations could secure the transition without wasting a lot of money on a premature shutdown. Parts of the public had interpreted and criticised this as corporate lobbying. The news from Fukushima was clearly a turning point in the debate and most likely the main reason for an overturn of the Christian Democratic government in the state of Baden-Württemberg in favour of the Green Party. After March 2011, the extensions of the operating times for nuclear power stations were cancelled and a definitive phase-out by 2022 was decided.

Since then, the focus has increasingly been placed on how to manage renewable energies regarding the security of supply in the future. The potential of continuously and reliably available renewable sources like biomass and water power is limited for natural reasons. The fluctuating availability of renewable energy from solar panels or wind turbines due to weather and seasonal conditions requires sufficient storage capacities to meet the demands of industry, private sectors and transportation as well as to stabilise the grids. This is the point where the subsurface can contribute a significant share to energy-storage capacities, for example, through the storage of electrical energy converted to gases (power-to-gas) like hydrogen (H2) or methane (CH4), which have a high energy density.


Thus, geosciences, i.e. disciplines like hydrogeology, reservoir engineering or environmental engineering, are also an important research field for the energy-supply sector of the future; and the numerical modelling of the subsurface storage of fluids is a major part of it.

The energy concept of the German federal government as published in 2010 (Bundesregierung 2010) aims at changing the energy supply towards using technologies which are environmentally friendly, reliable and affordable. The German government acknowledges therein that increasing the use of renewable energy sources and increasing energy efficiency is associated with high costs, which need to be counteracted by an appropriate regulatory framework. The new technologies need to be competitive and affordable in order not to endanger the prosperity of society and the economy. The energy concept formulates aims like an 80% or more reduction of greenhouse gas emissions by 2050 compared to 1990, while at the same time increasing the share of renewable energies up to at least 60% in terms of total energy consumption and up to 80% in terms of electrical energy. Almost ten years later, controversial discussions regarding a phase-out of coal and lignite in Germany are still ongoing and there are fears of a national failure to reach the self-defined goals.

We should note that there are still huge reserves of fossil energy resources available worldwide. But for an industrialised country with high standards of living like Germany, there is also a geo-strategic interest in avoiding a strong dependence on energy imports from countries which often have unstable political conditions or questionable policies regarding international law. Thus, there are strong arguments, even besides global warming or technology-related risks, that should lead us to a sustainable energy supply based on local renewable energy sources. On the other hand, there is a clear call for action from the majority of climate researchers to limit greenhouse gas emissions in the very near future. A widely recognised paper in Science by Pacala and Socolow (2004) has brought forth a discussion about stabilisation wedges, an approach developed at Princeton University to describe climate-change mitigation scenarios. They argue that, based on the historical development of greenhouse gas emissions since the industrial revolution, several mitigation measures (simplified as “wedges”) could already be applied today to freeze the emissions which are otherwise expected to double along an extrapolated historic straight line until the year 2055. The Intergovernmental Panel on Climate Change (IPCC) has published several summary reports on the scientific basis of climate change and potential mitigation measures, e.g. Metz et al. (2005), Solomon et al. (2007), Stocker et al. (2013). They state, for example, “that it is extremely likely (95% confidence) that more than half of the observed increase in the global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together.”

In conclusion of the above discussion, the direction seems clear: a further increasing contribution of energy from renewable sources needs to be achieved. In 2012, a study by Fraunhofer ISE (Henning and Palzer 2012) assumed a scenario with a 100% renewable supply of electricity and heat and considered storage in pumped storage, batteries, different kinds of heat storage, but also power-to-gas. They assumed a fixed amount of 60 GWh storage capacity in pumped-storage hydroelectricity; in 2019, we have ≈40 GWh with limited potential to increase. Little more than 50 GWh would be stored in batteries in this scenario, while methane storage from power-to-gas would cover around 70–80 TWh, with the remaining excess energy of up to 200 TWh being stored as heat.

The German Federal Ministry for Economic Affairs and Energy regularly monitors the progress of the German energy transition. For example, the most recent report (BMWi 2016) also stresses the importance of the subsurface for the storage of energy, explicitly naming power-to-gas or compressed air energy storage as important pillars to provide sufficient storage capacity. Bauer et al. (2017) address the importance and implications of subsurface energy storage in their editorial for a topical collection. The underground storage of methane has been well established since the 1960s, with roughly half of it stored in salt caverns and the other half in pore storage, with a current storage capacity of more than 2 · 10^10 standard cubic metres (Landesamt für Bergbau 2015), corresponding to 200 TWh. Pore-storage sites in Germany are mainly in sandstone formations of depleted oil or gas fields or in aquifers. They are found in the sedimentary basins of Northern, Eastern and Southern Germany; for example, the Berlin natural gas storage facility at a depth of 760–860 m below ground. Storage in salt caverns is naturally restricted to regions with thick salt formations. Comparing the demand and the given capacity, we can see that the required storage capacity already exists today, with huge additional storage potential available, although not yet identified in detail. However, estimates could be derived from a previous survey of the potential storage capacity for carbon dioxide (CO2) storage, elaborated by the Bundesanstalt für Geowissenschaften und Rohstoffe (BGR) in Hannover/Germany (Reinhold and Müller 2011), and a further restriction to technically feasible and accessible formations.

Thus, it is clear that there are technologies and geological conditions available which would allow the underground storage of excess energy from renewable sources; for example, via an electrolysis of water to produce hydrogen (power-to-gas). Studies by researchers from the German Research Centre for Geosciences in Potsdam (Kühn et al. 2014, 2013) have suggested an energy-storage system that can be understood as an underground battery. Hydrogen and CO2 could react to produce methane, which could be stored and used as fuel on demand. In turn, the CO2 from the combustion could be captured and stored underground again. The estimated overall efficiency of this system would be around 30%. Although such an efficiency would be less than that of pumped storage or compressed-air storage, it could still be economically competitive. Furthermore, all components of this technology are ready to be deployed.

Besides a major focus on the subsurface storage of energy, when speaking of subsurface-related energy-supply technologies, we should also mention geothermal energy production. Geothermal energy is abundantly available; however, its use is related to high investment costs and some geological risks like induced seismicity (e.g. in the Deep Heat Mining Project in Basel/Switzerland) or drilling-related water flux into swelling formations like the Gipskeuper layer (e.g. in Staufen im Breisgau/Germany). The increasing number of reports about smaller or larger problems in relation to geothermal drillings has recently impaired the development of this technology.
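To make the storage figures quoted above tangible, the following short calculation reproduces their order of magnitude. It is our own illustrative sketch, not material from the book's later chapters: the calorific value assumed for methane and the step efficiencies of the power-to-gas chain are approximate assumptions, stated explicitly in the comments.

```python
# Rough plausibility check of the figures quoted above (illustrative sketch).
# Assumptions: a heating value of roughly 10 kWh per standard cubic metre of
# methane, and rounded step efficiencies for the power-to-gas-to-power chain.

STANDARD_CUBIC_METRES = 2e10   # reported German underground methane storage capacity
KWH_PER_SCM_METHANE = 10.0     # assumed heating value (~36 MJ per standard cubic metre)

stored_energy_twh = STANDARD_CUBIC_METRES * KWH_PER_SCM_METHANE / 1e9
print(f"energy content of stored methane: about {stored_energy_twh:.0f} TWh")  # ~200 TWh

# Chained conversion losses of an 'underground battery'
# (electricity -> hydrogen -> methane -> electricity); step values are assumptions.
eta_electrolysis = 0.70
eta_methanation = 0.80
eta_reconversion = 0.55
round_trip_efficiency = eta_electrolysis * eta_methanation * eta_reconversion
print(f"round-trip efficiency: about {round_trip_efficiency:.0%}")  # roughly 30%
```

With these assumed values, the capacity and the round-trip efficiency land in the same range as the numbers cited in this section, which is all such a back-of-the-envelope check can show.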

1.2 Subsurface Environmental Modelling Has a Role to Play in Science

For all these subsurface-related aspects of future energy supply, hydrogeology and reservoir engineering are key disciplines. This includes subsurface flow and transport modelling in particular. Important tasks include the characterisation of groundwater flow in and between deep and shallow aquifers, in many cases induced by fluids other than water, and, furthermore, an integrated consideration of the different interacting processes of flow, heat and mass transport, (bio-)geochemistry or geomechanics. One of the major interests of policy makers in modelling and numerical simulation is concerned with risk quantification, which usually requires the consideration of uncertainty at various levels, e.g. Walter et al. (2012). Predicting the likelihoods of hazardous events and the amount of damage based on an objective evaluation of available data is a task which can be solved only in co-operation between modellers and policy makers.

The development of models for subsurface flow and transport processes is intrinsically tied to the implementation of such models in terms of software. The ever-increasing complexity of the processes to be modelled is related to an increase in the complexity of the corresponding software, which is further amplified by the growing diversity of the targeted hardware architectures. In the following, we provide a brief account of the history of software development and simulators in the context of subsurface environmental modelling, describe a credibility crisis that the whole branch of computational science and engineering is currently facing, and present a possible way out of this crisis based on reproducible research and open-source development patterns.

Analytical solutions for problems which are posed in terms of models for subsurface flow and transport processes are only available if strong simplifications are accepted regarding the effective domain dimension, process complexity, parameter heterogeneity and initial/boundary conditions. For tackling at least somewhat realistic scenarios, a discretization of the continuous problem as well as the implementation of the resulting discrete problem and its solution by means of a computer programme are unavoidable. The development of such numerical simulators started with the evolving discretization technologies of finite-difference, finite-volume and finite-element methods in the late 1950s, accompanied by the emerging computer architectures providing the corresponding implementations. Apart from academia, research was mainly driven by U.S. institutions like the National Laboratories and the Geological Survey. Based on those early efforts, the first publicly distributed simulators for subsurface flow and transport processes emerged in the 1980s. Most prominently, the groundwater flow simulator MODFLOW was released by the U.S. Geological Survey in 1984.


TOUGH from Lawrence Berkeley National Laboratory, first released in 1987, is perhaps the most well-known code from a National Laboratory in this context and already offered discrete models for multiphase processes. Another main driver for developing software is the U.S. oil and gas industry. During the 1980s, the first releases of the commercial black-oil simulators ECLIPSE from Schlumberger Ltd. and IMEX from Computer Modelling Group Ltd. appeared. Apart from offering very different modelling capabilities, the above-mentioned software packages follow very different business and distribution models. ECLIPSE and IMEX are purely commercial products: buyers receive executables without access to the underlying source code. TOUGH is also charged for, but the source code is part of the product; customers are able to extend it for their needs, although they are not allowed to redistribute it. MODFLOW has been in the public domain from the first release onwards; everybody may download, use, modify and redistribute it. In recent years, simulator development at research institutions has been greatly facilitated by using increasingly available and mature open-source infrastructure and by following the corresponding development principles. Exemplary open-source projects include MRST, OpenGeoSys, OPM, ParFlow and PFLOTRAN as well as DuMux, the simulator which is described and used in this book.

The current sales pitch for the importance of computational science is that it “has become the third pillar of the scientific enterprise, a peer alongside theory and physical experiment” (Reed et al. 2005). Although it has become almost a common belief, this statement should be viewed with suspicion. Very often, computational science doesn’t follow the scientific method, which should be the common foundation for the three pillars of empirical science, deductive science and computational science. In particular, it fails at the most important ingredient of the scientific method: testing a “hypothesis by performing an experiment and collecting data in a reproducible manner” (Wikipedia Contributors 2019). While deductive science offers the formal notion of mathematical proof, and empirical science demands the standard format of empirical research papers (data, materials, methods), no comparable formalism is required to ensure the reproducibility of computational experiments. This leads to computational science facing a credibility crisis (Donoho et al. 2008): “It’s impossible to verify most of the results that computational scientists present at conferences and in papers.”

In order to overcome the fundamental drawback of lacking reproducibility of computational experiments, the scientific field of reproducible research has evolved in the last few decades. The main principle of reproducible research is that (LeVeque et al. 2012) “an article about computational science in a scientific publication is not the scholarship itself, it is merely advertising of the scholarship. The actual scholarship is the complete software development environment and the complete set of instructions which generate the figures.” A particular prerequisite for enabling reproducibility is to publish the respective software under an open-source license. For academic researchers, this promises further advantages: code quality and applicability are expected to increase and collaboration should be facilitated. We would like to promote an open-source development process for research software.
This goes far beyond the mere publication of the code under an appropriate license. It involves the full software development cycle of coding, building, testing and releasing.


In recent years, a sound infrastructure for assisting this development has evolved and continues to be improved, consisting of tools which are open-source themselves. We will describe our open-source development strategy by means of the simulator DuMux.
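As a deliberately minimal illustration of the testing step in this development cycle, the sketch below shows what an automated regression test for research software can look like: a small model, here a hand-written finite-difference solution of steady one-dimensional single-phase groundwater flow standing in for the far more complex simulators discussed in this book, is run and compared against a known reference. This is our own Python sketch; it is not code from DuMux or from any of the packages named above, and the function names are hypothetical.

```python
# Minimal sketch of an automated regression/verification test (assumed example,
# not DuMux code). The "model" is a three-point finite-difference solve of
# steady 1D single-phase flow, d/dx(K dh/dx) = 0, with fixed heads at both ends.
import numpy as np


def solve_steady_head_1d(n_nodes: int, conductivity: float,
                         h_left: float, h_right: float) -> np.ndarray:
    """Return hydraulic heads on a uniform 1D grid with Dirichlet boundaries."""
    a = np.zeros((n_nodes, n_nodes))
    b = np.zeros(n_nodes)
    a[0, 0] = a[-1, -1] = 1.0           # Dirichlet conditions at both ends
    b[0], b[-1] = h_left, h_right
    for i in range(1, n_nodes - 1):     # interior three-point stencil
        a[i, i - 1] = a[i, i + 1] = conductivity
        a[i, i] = -2.0 * conductivity
    return np.linalg.solve(a, b)


def test_linear_head_profile():
    """Verification test: for homogeneous K the head profile must be linear."""
    computed = solve_steady_head_1d(n_nodes=11, conductivity=1e-5,
                                    h_left=10.0, h_right=0.0)
    reference = np.linspace(10.0, 0.0, 11)   # analytical solution as reference
    assert np.allclose(computed, reference)


if __name__ == "__main__":
    test_linear_head_profile()
    print("regression test passed")
```

In an open-source project, tests of this kind are run automatically for every proposed change by the continuous-integration infrastructure discussed in Chap. 4, which is what turns the mere publication of code into a reproducible development process.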

1.3 ... and at the Science-Policy Interface

Digitisation has become a mega trend in science and society and will remain one. As a specific application, computer simulations represent a fundamental innovation in the field of information and communication technologies. They have established themselves as an important tool and discipline with a wide variety of applications in science, business and industry. In research, it is primarily in basic and applied science that simulations play an important role as an additional epistemic methodological approach alongside theory and experimentation. One can list almost every discipline in science. To give a short overview: simulations are used in genetics, gravitational physics, molecular modelling, energy system modelling, various fields of the engineering sciences and, last but not least, geoscience. As such, computer simulations are well established across the whole range of scientific disciplines.

However, a fundamental conviction in this book is that computer simulations and the knowledge gained from them are not limited to the scientific community itself. Other domains, be it politics, business and industry, or civil society, are fundamentally and increasingly influenced and affected by simulation-based scientific knowledge. The production of simulation-based knowledge and its communication to political decision makers have become crucial factors within policy-making and decision-making. From this, we conclude that computer simulations fulfil two principal functions: they serve for both knowledge production and knowledge communication at the science-policy interface.

Policy-making in pluralistic societies is bound to principles of forward-thinking, decision-orientation and evidence-based rationales. Policies result from a process in which problems to be solved are identified and articulated, and policy objectives and solutions are formulated and finally decided by the legislator. Policy interventions are thus key aspects of a decision-based understanding of policy-making. Firstly, the forward-thinking aspect is an inherent feature of policy-making. By deciding on specific policies, policy makers intend to solve an identified problem in the (near) future. While doing so, policy makers meet the challenge of developing, evaluating and deciding on adequate policy options which will become effective in the future to solve the identified problem. Secondly, political decision-making in modern democracies requires policy makers to provide objective reasoning and justification. This is intended to help secure popular legitimacy, acceptance and accountability. Legitimacy can be awarded by the use of, among other things, institutionalized decision-making procedures, sufficient consent and acceptance from the public and stakeholders, and by scientific knowledge.


Taking these three functionalities of modern policy-making, one may ask what science can contribute and feed in. Science contributes input to enhance the knowledge base for a better understanding of the problem, to support the political decision-making process, and to provide legitimacy for the evidence-based decisions made. Scientific policy advice may deliver state-of-the-art knowledge to policy makers in order to guarantee robust, problem-solving policies. Moreover, science should contribute by delivering reflexive knowledge (i.e. meta-knowledge), which allows an assessment of the existing basis of knowledge and the corresponding risks, uncertainties and ambiguities (Grunwald 2000).

Decisions are at the core of modern policy-making, and the legislator has committed to consider and use scientific knowledge as a basis for decision-making. The German Supreme Court, for instance, urged the legislator to carry out impact assessments in order to pre-evaluate the effects and effectiveness of intended policy laws. This is done by obligatory Regulatory Impact Assessment (RIA), which analyses the impacts before a new government regulation is introduced. Regulatory Impact Assessment documents need to consider the scientific state of knowledge and technology. Against that background, the vague legal concepts of “Stand der Wissenschaft” and “Stand der Technik” (“the current state of science and technology”) were introduced as legal terms (Beyme 1997). Political decisions relate not only to the phase of decision-making, but also include a pre-phase in which decisions are prepared in stages of opinion-building and ongoing political debate. In opinion-building processes, scientific expertise has a share in the clarification of public debates, can contribute to “socially robust” decision-making and provides an evidence-based background for decisions. Last but not least, legitimacy is a key feature and major resource within democratic political systems. Political action requires a minimum of legitimacy in order to implement generally binding decisions. Research input supports policy-making by contributing objective and evidence-based knowledge. Scientific expertise and findings are a legitimate resource for policy-making due to their assigned features of objectivity, evidence and independence. Research input, therefore, may justify decisions and supports the acceptance of decisions taken.

Scientific computer simulations are largely compatible with these three policy needs. The key characteristics of computer simulations compatible with policy-making have been summarized by Scheer (2017: 105–106):

• Reduction of complexity: simulations have the capability to reduce, represent, and visualize real-world system complexities and statuses.
• Comparison of options: simulations have the capability to represent and hence communicate various problem dimensions and courses of action.
• Intervention effects: simulations have the capability to represent and hence communicate the impact and effect of different political steering interventions.
• Formats of results: simulations have the capability to aggregate and transform time-dependent system states into easily accessible formats of pictures, diagrams, and numbers.
• Trial without error: simulations have the capability to use trial and error to find optimal solutions without serious real-world consequences.


While there is good reason that simulations may serve policy-making, there is also a high level of complexity and uncertainty in running simulations and in their results. Simulation-based policy decisions are vulnerable and might come under attack by political opponents or competing experts. What can be seen is that complexity reduction, option comparison and intervention effects are frequently based on oversimplified system functions, starting-point assumptions and cause-impact relationships, which insufficiently reflect real-world phenomena. On the other hand, computed quantitative results in the form of pictures and numbers tend to obscure underlying uncertainties and suggest a level of accuracy which does not adequately match model reality. Having said this, a key question for simulations is their level of validity and reliability. The production of simulation knowledge is sometimes an opaque endeavour, accessible only to modelling experts—provided that the computer code is open-access. It is not surprising to see computer simulation as a basis of, or information resource for, political decision-making being heavily criticized: a lack of trust in models and modellers, the questionable accuracy of simulation results and the inadequacy of the computing process itself are only some of the points of criticism that have been raised (e.g. Hellström 1996; Petersen 2006; Brugnach et al. 2007; Ivanovic and Freer 2009; Fisher et al. 2010; Wagner et al. 2010).

Modern policy tasks and requirements in the field of subsurface regulation contrast considerably with those of the past. A multitude of subsurface activities has emerged alongside the transition of the energy supply system. Besides traditional activities such as oil and gas exploitation, coal and mineral mining, groundwater and drinking-water supply, gas storage, waste and nuclear disposal, several new subsurface technologies are gaining importance: for instance, carbon capture and storage, geothermal energy, and several storage activities such as subsurface pumped hydroelectric storage, gas storage in aquifers and caverns, seasonal heat storage and compressed air energy storage. The “new” interest in the subsurface challenges geoscientists, modellers, policy makers, stakeholders from business and industry and the public at large alike. The subsurface, including its modelling exercises, has entered the policy arena. Interests, normative positions, expert dilemmas, and tactical and strategic tasks have become common features in subsurface regulation policies. In this book, we intend to shed interdisciplinary light on the exciting topics of environmental issues of subsurface flow and transport, their adaptation and implementation into software environments, and their reference and impact on the science-policy interface.

1.4 This Book

With this book we intend to deliver a broad overview of (in our view) essential features of subsurface environmental modelling at the science-policy interface. This shall provide insights into the potential and challenges in the field of subsurface flow and transport, the corresponding computational modelling and its impact on the area of policy- and decision-making.


The book is divided into two parts. Part one presents models, methods, software— and the science-policy interface. Part two builds on these efforts and further illustrates specifications on detailed case-studies of subsurface environmental modelling. In total, the book encompasses ten chapters. Opening part one, Chap. 2 delivers an overview of the timeline of literature and research on subsurface flow and transport, discusses the range of topics and scales and deals with the matter of uncertainties and risk assessment. What follows is a brief overview of governing equations as they arise from basic conceptual models for environmental engineering to subsurface flow and transport. Chapter 3 provides an overview of mathematical and numerical solution methods for the systems of partial differential equations as they typically originate from the considered flow and transport systems. The chapter introduces approaches to solve the multiphase flow equations, discretization of the equation, linearization and the Newton’s Method and discusses primary variables for compositional models. In Chap. 4, a broad range of software concepts and implementations is provided with an emphasis on the particular relevance of Open Science and Open Source. On the one hand, the chapter elaborates on principles of open-source code and data, and the infrastructure for opensource projects. On the other hand, the porous-media simulator DuMux is presented. Finalizing part one of the book, Chap. 5 deals with the science-policy interface of subsurface environmental modelling. The chapter discusses the principles of knowledge transfer between science and policy, specifies the challenges of simulations across the border of science and policy and discusses modes of knowledge production and communication. Part two of the book presents specific case-studies on several subsurface applications. This part tightens modelling and science-policy issues in the light of illustrative case-study analyses. The subsurface engineering activities and technologies considered include geological carbon sequestration (Chap. 6), hydraulic fracturing (Chap. 7) and nuclear energy and waste disposal (Chap. 8). These three cases will be handled consistently by introducing first the background and then presenting differing facets of modelling and science-policy issues. The list of subsurface-related engineering topics relevant for this book is, of course, not complete with these three case studies. Therefore, although treated in less detail, this is followed by Chap. 9 with further subsurface environmental modelling cases. On the one hand, energy-related subsurface engineering applications are considered, such as geothermal energy exploitation, aquifer thermal energy storage and fluid injection and induced seismicity. On the other hand, the case of contamination in the subsurface is dealt with from several perspectives. As an overall comment, the huge field of groundwater is only touched upon in some issues related to contamination in the unsaturated and saturated soil zone, while many other issues like water scarcity, salinization, groundwater management and others remain beyond the scope of this book. Each case-study chapter is completed with selected numerical simulation examples related to topics of the case study. We also use these examples for teaching and here they serve to illustrate some of the characteristic processes and parameters that dominate these applications on the basis of model results. Finally, Chap. 
10 provides general conclusions and lessons learnt and gives an outlook on the way forward, which is based on our key findings and formulated as a number of theses.

For all above-mentioned numerical examples, the 3.2 release of DuMux is used, in particular several of the test cases of the module dumux-lecture, git.iws.uni-stuttgart.de/dumux-repositories/dumux-lecture. While the problem settings in general are supposed to be maintained over future versions of the software, the details might be subject to change. Any reader interested in reproducing the exact same results that have been employed for the figures in this book can download the corresponding DuMux-Pub module at git.iws.uni-stuttgart.de/dumux-pub/scheer2020. Apart from the code, this module also contains explicit installation instructions and scripts for replicating the figures.

References

Bauer S, Dahmke A, Kolditz O (2017) Subsurface energy storage: geological storage of renewable energy - capacities, induced effects and implications. Environ Earth Sci 76:695
Beyme K (1997) Der Gesetzgeber: Der Bundestag als Entscheidungszentrum. VS Verlag für Sozialwissenschaften, Wiesbaden
BMWi (2016) The energy of the future - sixth “energy transition” monitoring report. Technical report, German Federal Ministry for Economic Affairs and Energy. https://www.bmwi.de/Redaktion/EN/Publikationen/Energie/sechster-monitoring-bericht-zur-energiewendelangfassung.pdf?__blob=publicationFile&v=6. Accessed 21 Nov 2019
Brugnach M, Tagg A, Keil F, de Lange WJ (2007) Uncertainty matters: computer models at the science-policy interface. Water Resour Manag 21:1075–1090
Bundesregierung (2010) Energiekonzept für eine umweltschonende, zuverlässige und bezahlbare Energieversorgung. Technical report, Bundesregierung der Bundesrepublik Deutschland. https://www.bmwi.de/BMWi/Redaktion/PDF/E/energiekonzept-2010,property=pdf,bereich=bmwi2012,sprache=de,rwb=true.pdf. Accessed 23 Apr 2019
Donoho DL, Maleki A, Rahman IU, Shahram M, Stodden V (2008) Reproducible research in computational harmonic analysis. Comput Sci Eng 11:8–18
Fisher E, Pascual P, Wagner W (2010) Understanding environmental models in their legal and regulatory context. J Environ Law 22:251–283
Grunwald A (2000) Technikfolgenabschätzung als wissenschaftliche Politikberatung am Deutschen Bundestag. Denkströme J Sachs Akad Wiss 64–82
Hellström T (1996) The science-policy dialogue in transformation: model-uncertainty and environmental policy. Sci Public Policy 23:91–97
Henning H-M, Palzer A (2012) 100% Erneuerbare Energien für Strom und Wärme in Deutschland. Technical report, Fraunhofer ISE. https://www.ise.fraunhofer.de/de/veroeffentlichungen/veroeffentlichungen-pdf-dateien/studien-und-konzeptpapiere/studie-100-erneuerbareenergien-in-deutschland.pdf. Accessed 23 Apr 2019
Ivanovic RF, Freer JE (2009) Science versus politics: truth and uncertainty in predictive modelling. Hydrol Process 23:2549–2554
Kühn M, Nakaten N, Streibel M, Kempka T (2013) Klimaneutrale Flexibilisierung regenerativer Überschussenergie mit Untergrundspeichern. Erdöl Erdgas Kohle 129:348–352
Kühn M, Streibel M, Nakaten N, Kempka T (2014) Integrated underground gas storage of CO2 and CH4 to decarbonise the “power-to-gas-to-gas-to-power” technology. Energy Procedia 59:9–15
Landesamt für Bergbau, Energie und Geologie, Niedersachsen (2015) Untertage-Gasspeicherung in Deutschland. Erdöl Erdgas Kohle (Urban-Verlag) 131:398–406


LeVeque RJ, Mitchell IM, Stodden V (2012) Reproducible research for scientific computing: tools and strategies for changing the culture. Comput Sci Eng 14:13
Metz B, Davidson O, de Coninck HC, Loos M, Meyer LA (eds) (2005b) IPCC. Special report on carbon dioxide capture and storage. Prepared by working group III of the intergovernmental panel on climate change. Cambridge University Press, Cambridge
Pacala S, Socolow R (2004) Stabilization wedges: solving the climate problem for the next 50 years with current technologies. Science 305:968–972
Petersen A (2006) Simulating nature: a philosophical study of computer-simulation uncertainties and their role in climate science and policy advice. Het Spinhuis
Reed DA, Bajcsy R, Fernandez MA, Griffiths J-M, Mott RD, Dongarra J, Johnson CR, Inouye AS, Miner W, Matzke MK, Ponick TL (2005) Computational science: ensuring America’s competitiveness. https://apps.dtic.mil/dtic/tr/fulltext/u2/a462840.pdf. Accessed 10 Jun 2019
Reinhold K, Müller CM (2011) Storage potential in the deeper subsurface - overview and results from the project storage catalogue of Germany. Technical report 74, Schriftenreihe der Deutschen Gesellschaft für Geowissenschaften
Scheer D (2017) Between knowledge and action: Conceptualizing scientific simulation and policymaking. In: Resch M, Kaminski A, Gehring P (eds) The science and art of simulation I. Springer, Berlin, pp 103–118
Solomon S, Qin D, Manning M, Chen Z, Marquis M, Averyt KB, Tignor M, Miller HL (eds) (2007) IPCC. Climate change 2007: the physical science basis. Contribution of working group I to the 4th assessment report of the intergovernmental panel on climate change
Stocker TF, Qin D, Plattner G-K, Tignor M, Allen SK, Boschung J, Nauels A, Xia Y, Bex V, Midgley PM (eds) (2013d) IPCC. Climate change 2013: the physical science basis. Contribution of working group I to the 5th assessment report of the intergovernmental panel on climate change. Cambridge University Press, Cambridge
Wagner W, Fisher E, Pascual P (2010) Misunderstanding models in environmental and public health regulation. NYU Environ Law J 18:293–356
Walter L, Binning P, Oladyshkin S, Flemisch B, Class H (2012) Brine migration resulting from CO2 injection into saline aquifers - an approach to risk estimation including various levels of uncertainty. Int J Greenh Gas Control 9:495–506
Wikipedia Contributors (2019) Scientific method — Wikipedia, the free encyclopedia. https://en.wikipedia.org/w/index.php?title=Scientific_method&oldid=923957098. Accessed 21 Nov 2019

Part I

Models, Methods, Software–and the Science-Policy Interface

Chapter 2

Conceptual Models for Environmental Engineering Related to Subsurface Flow and Transport

This chapter aims at providing an overview of research related to subsurface flow and transport modelling. We address the existing literature and discuss the role of modelling for engineering activities in the subsurface. For environmental applications, modelling is typically employed to quantify risk scenarios and corresponding uncertainties which arise from different origins. One source of uncertainty is, of course, the degree of abstraction that is applied to describe the relevant physics (or chemistry, if necessary) in a problem of interest. Therefore, this chapter provides an introduction to conceptual models dependent on the complexity of the physics. We explain the governing equations based on the assumptions in the simplification of the physics and corresponding mathematics, and also the strong interaction of different processes and various non-linearities arising in the mathematical description of the processes.

2.1 Overview

2.1.1 A Timeline of Literature and Research on Subsurface Flow and Transport

Research on environmental issues related to subsurface flow and transport goes back to the roots of the description of flow processes through porous media. Undoubtedly, the most fundamental contribution to research on porous-media flow so far was made by Henry Darcy as early as 1856 (Darcy 1856). Darcy stated that the advective flow rate through a porous material can be, in good approximation, considered linearly dependent on the pressure gradient (or the piezometric head) and the hydraulic conductivity.


A first concept that extended Darcy’s law to multiphase flow was then proposed by Buckingham (1907). He postulated that the hydraulic conductivity of an unsaturated soil depends on the water content. Richards (1931) advanced this and formulated a partial differential equation for water flow in the unsaturated zone which is commonly known as the Richards equation. Another basic early work is documented by Leverett (1941) who explains the fundamentals of capillarity in porous solids. One of the most-cited textbooks on fluid dynamics in porous media was written by Bear (1972) and provides a comprehensive introduction to the fundamental physics and their mathematical description. Also, the book by Scheidegger (1974) covers the physics of flow and transport through porous media. A major stimulus to increasing interest in flow through porous media was given by the petroleum industry which aimed at an efficient exploitation of oil and gas reservoirs. Aziz and Settari (1979) and Chavent and Jaffre (1978) both focus on methods for modelling the flow processes in petroleum reservoirs with numerical simulators, while Lake (1989) discusses stateof-the-art techniques (for that time) for Enhanced Oil Recovery (EOR) in great detail. Another more general textbook on the modelling of transport phenomena in porous media is the book by Bear and Bachmat (1990). The book of Looney and Falta (2000) contains a collection of very detailed contributions to flow and transport processes in the vadose zone, thereby also having a strong focus on both forward and inverse-modelling techniques and, in this context, parameter estimation, which is an important topic for subsurface processes that are usually characterised by great parameter uncertainty. The capabilities for modelling subsurface flow and transport processes have improved significantly over the last few decades, and many models nowadays can consider complex coupled and non-linear multiphase processes including mass transfer between the fluid phases. On the one hand, this puts high demands on accurate quantitative approaches for fluid properties, hydraulic properties and mass transfer processes. On the other hand, there is a need for sophisticated algorithms and discretization methods in order to solve the arising systems of non-linear partial differential equations fast and efficiently. Comprehensive tables and constitutive equations for gas and liquid fluid properties are provided by, for example, Hirschfelder et al. (1954), Vargaftik (1975), and Poling et al. (2001). Issues of computational methods in subsurface flow were addressed by Huyakorn and Pinder (1983). An excellent overview of numerical methods and discretization schemes for multiphase subsurface-flow models is given by Helmig (1997). The classical fields of application for multiphase flow and transport models in the subsurface are still (i) in reservoir engineering for improving and optimising the production of oil and gas and (ii) in environmental engineering for addressing issues like groundwater management, protection and remediation. These topics are covered by the textbooks mentioned above and an additional huge number of other books and more specific publications in scientific journals. Many of them will be cited later on in this book in their specific context. Reservoir engineers and environmental engineers might not always have the same motivation to model flow and transport in the subsurface. A reservoir engineer is


primarily interested in optimising the increase in the oil produced, while not being greatly concerned about the amount left in the reservoir. The exact opposite is true for the environmental engineer: he wants to remove the last drop of oil contaminating the soil, while the total amount of oil is of secondary importance. This different perspective on more or less the same physical problem might lead to different priorities for the development of technologies and simulation methods. For example, a model for the remediation of contaminated soil needs to be mass-conservative, lest the soil be “cleaned” accidentally by numerically disappearing contaminants. More recently, since the late 1990s, petroleum engineers and environmental engineers have begun joint efforts to develop models for simulating CO2 injection into deep geological formations. A detailed overview of the general issues of Carbon Dioxide Capture and Storage (CCS) is given by the Special Report of the Intergovernmental Panel on Climate Change (IPCC) (IPCC 2005). Of course, this brief survey of the literature can only provide a rather fragmented view of the vast amount of good books and articles related to the modelling of subsurface environmental processes. Accordingly, a range of commercial and non-commercial software packages and research codes is available for the simulation of multiphase flow in porous media, which are not all mentioned in this book. For all simulations and simulation-related discussions which are used here for illustrative purposes, we will refer to the Open Source porous-media simulator DuMux (see Sect. 4.3 or www.dumux.org). As mentioned earlier, references to further and more recent publications and advances in the various fields are given later in their particular context.

2.1.2 The Range of Topics and Scales

Figure 2.1 is an illustration that attempts to put some selected engineering problems in their geological context. The depth at which the various single- and multiphase flow and transport problems are located in the subsurface can extend from the shallow soil down to more than a kilometre. The examples shown in this sketch start in the vadose zone, which might become spoiled by leachates from landfills or from NAPL spills (non-aqueous phase liquids, see Chap. 9). Such contamination can result in long-term threats to the groundwater and thus to drinking-water resources. Further problems related to subsurface flow are illustrated in this figure and are located further down. One example is coal mines, where methane may be released from coal seams that remain unmined; the depth depends on the local depth of the coal-bearing layers and can be several hundred or even a few thousand metres. Also, the depth below the ground surface of reservoirs for CO2 injection or of formations in which to construct nuclear-waste disposal sites can vary according to the location of appropriate target formations. Yet, for geologic CO2 storage, engineers aim to achieve supercritical conditions for the CO2, which requires a minimum depth of about 700 m in order to reach sufficiently high pressures.


Fig. 2.1 Schematic view of selected gas-liquid flow problems in the subsurface (Class 2008)

With increasing depth, there is typically a growing number of geological layers involved, which makes a thorough characterisation of subsurface properties practically unattainable. However, what most geological layers have in common is that the description of their properties depends on the spatial scale on which they are viewed. For a description of the properties in flow models it is, in general, necessary to average over some volume of the respective porous medium. The issue of scales might be intuitively understood by imagining that we look at a porous medium from different distances. Figures 2.2 and 2.3 illustrate this impressively. Figure 2.2 shows a photograph of the soil structure in a quarry from a rather distant perspective. The scale of consideration here is in the range of several metres


Fig. 2.2 Soil structures from different perspectives. Photographs taken in a quarry (permission from A. Färber, personal communication)


Fig. 2.3 Soil structures from different perspectives. Photographs taken in a quarry (permission from A. Färber, personal communication)


to tens of metres. From this viewpoint, one can identify a more or less horizontal layering, while the layers themselves seem homogeneous. This layered structure results from the sedimentation processes taking place over geologic ages, and this characteristic layering is responsible for the anisotropy of the permeability in sedimentary rocks. Typically, vertical permeability is much lower than the horizontal. Coming closer or, in other words, zooming in to a scale of a few centimetres, as is shown in Fig. 2.3, shows that the very same structure as in Fig. 2.2 reveals some small-scale heterogeneity. Here, the grain-size distribution appears to be locally distinctive. Gravel inclusions in sandy or silty layers can be observed. We can conclude from this, without exception of any of the aforementioned topics, that knowledge about the structure of the subsurface and its heterogeneity is key to any modelling endeavour for flow and transport in the subsurface. The geological layers, faults, fractures or other features determine the resistance of the subsurface to fluid flow. This is expressed by properties like permeability and porosity, which are both dependent on the size of the so-called representative elementary volume (REV) that needs to be chosen for defining these and other properties. In contrast to technical porous media like papers, filters or porous diffusion layers in fuel cells, the subsurface is a natural porous medium and, thus, subject to a great and typically unpredictable variability of properties. This makes the parametrization of the subsurface a source of huge uncertainties.
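The effect of such layering on the REV-scale permeability can be made tangible with a small calculation: for flow parallel to the layers, the effective permeability is the thickness-weighted arithmetic mean of the layer permeabilities, whereas for flow perpendicular to the layers it is the harmonic mean. The following sketch (plain Python, independent of the DuMux examples used in this book; all layer thicknesses and permeabilities are invented illustration values) shows how strongly the two averages differ for a sand-silt sequence.

```python
# Effective permeability of a horizontally layered medium (illustrative values only).
# Flow parallel to the layers  -> thickness-weighted arithmetic mean
# Flow perpendicular to layers -> thickness-weighted harmonic mean

thickness = [2.0, 0.5, 3.0, 0.2]          # layer thicknesses in m (hypothetical)
perm      = [1e-12, 1e-15, 5e-13, 1e-16]  # intrinsic permeabilities in m^2 (hypothetical)

total = sum(thickness)
k_horizontal = sum(d * k for d, k in zip(thickness, perm)) / total
k_vertical   = total / sum(d / k for d, k in zip(thickness, perm))

print(f"k_h = {k_horizontal:.2e} m^2")
print(f"k_v = {k_vertical:.2e} m^2")
print(f"anisotropy ratio k_h/k_v = {k_horizontal / k_vertical:.0f}")
```

Even though each individual layer is isotropic, the thin low-permeability layers dominate the harmonic mean, which is why the vertical permeability on the REV scale is typically orders of magnitude smaller than the horizontal one.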

2.1.3 Uncertainties and Risk Assessment

The environmental sciences have become a subject of their own because risks to the environment and, ultimately, to human health are perceived with increasing awareness. It is, however, undoubtedly a huge challenge to identify and quantify risks. In this book, we will discuss, among other things, the topic of Carbon Capture and Storage (CCS); it is an excellent example of the complexity of problems which embrace several scientific disciplines and struggle with a clear identification and quantification of risks. It is not even undisputed whether carbon dioxide (CO2) is responsible for the trend in global warming that can be observed world-wide. Furthermore, it is subject to discussion what the appropriate countermeasures are. And, eventually, what are their associated risks? In all this, people try to get reliable predictions on what would happen if ..., and numerical simulation is the most obvious predictive method for the kinds of environmental subsurface-flow problems we are discussing in this book. Numerical simulation provides us with numbers for everything we want, at any time and at any place, given that it is parametrized in our model equations. This illusion of accuracy might be tempting, but we must never forget to consider the uncertainties associated with model predictions. Both uncertainty quantification and risk assessment are large fields of research for subsurface-related problems, and many textbooks and papers have been published on the subject. A very basic common issue is the definition of risk, i.e. what does risk mean; and here we would like to single out an approach given by Kaplan (1997)


and Kaplan and Garrick (1981) that could be summarised as: risk is a scenario-dependent function of likelihood and damage. It is therefore important to distinguish risk from likelihood (or: probability) and from hazard. In turn, the likelihood (or: probability) of a hazard to cause damage with a certain amount of consequences can only be quantified if the uncertainties associated with this event are known. Uncertainties are always the result of a lack of information, and they can have very different origins. It is quite common to distinguish uncertainties and to classify them accordingly as either epistemic or aleatory. Epistemic uncertainty is due to a lack of information about the system of interest, and this type of uncertainty could be reduced by more exploration, more studies, more measurements, etc. On the other hand, aleatory uncertainty accounts for the unpredictability of a system's reaction or behaviour and should rather be addressed by stochastic methods. In addition to these two different types (epistemic or aleatory), some authors have introduced additional categorisations; again, we would like to pick out one of those and refer to Walker et al. (2003), who have sub-divided this into different levels of uncertainty. These levels are (i) determinism, which means that we know everything; (ii) statistical uncertainty, which can be described in statistical terms (probability density functions, histograms, etc.); (iii) scenario uncertainty, which refers, for example, to a lack of information regarding dominant geologic features like faults, aquifers, aquitards, open or closed boundaries, etc.; (iv) recognised ignorance, which might stand for things we intentionally or, at least, knowingly neglect, for example, for the sake of simplicity; finally, there is (v) total ignorance, i.e. when we are unaware that we don't know something. Thus, in fact, no matter which model we apply, it is always only a more or less defective image of the “real world”, and every party involved in the modelling process needs to understand a model's prospects and limits. “Essentially, all models are wrong, but some are useful.” (George Box, 1919–2013). The aforementioned concepts and methods have already been applied in some previous work of the authors on environmental subsurface-flow problems, e.g. in Walter et al. (2012) and Kissinger et al. (2017) on brine migration into fresh-water aquifers or Kopp et al. (2010) on the leakage of CO2 through abandoned wells.
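Kaplan's notion of risk as a scenario-dependent function of likelihood and damage can be written down almost literally in a few lines. The sketch below is illustrative only: the scenario set, probabilities and damage values are invented and not taken from any of the cited studies. It combines a small set of leakage scenarios into an expected-damage estimate and shows how epistemic uncertainty about one probability can be propagated by simple Monte Carlo sampling.

```python
import random

# Scenario set in the spirit of Kaplan (1997): triplets of (scenario, likelihood, damage).
# All numbers are hypothetical illustration values.
scenarios = [
    {"name": "no leakage",         "p": 0.94, "damage": 0.0},    # damage in arbitrary cost units
    {"name": "well leakage",       "p": 0.05, "damage": 20.0},
    {"name": "fault reactivation", "p": 0.01, "damage": 150.0},
]

expected_risk = sum(s["p"] * s["damage"] for s in scenarios)
print(f"expected damage (point estimate): {expected_risk:.2f}")

# Epistemic uncertainty: the leakage probability itself is only known within a range;
# sample it and look at the spread of the resulting risk estimate.
random.seed(1)
samples = []
for _ in range(10_000):
    p_leak = random.uniform(0.01, 0.10)   # uncertain probability of well leakage
    samples.append(p_leak * 20.0 + 0.01 * 150.0)

samples.sort()
print(f"risk, 5th/95th percentile: {samples[500]:.2f} / {samples[9500]:.2f}")
```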

2.2 Governing Equations

Subsurface flow and transport can be modelled on different spatial scales. Flow occurs in pores which are abundant and, due to the variability of their individual morphology, usually too complex to describe except for academic problems. Therefore, an averaging over an appropriate number of pores is often applied, which leads to the concept of the REV (representative elementary volume) (Helmig 1997), see also Fig. 2.4. The governing equations for a given flow problem vary from one scale to another since upscaling typically leads to introducing effective processes and parameters to account for effects which occur below the scale of consideration and need to be taken into account effectively. Such an example is permeability (Hommel



Fig. 2.4 Averaging: transition between micro scale (or: pore scale) and REV scale

et al. 2018). The effective parameter permeability is used by modellers to describe resistance to fluid flow in the porous medium on the REV scale. On the scale below, the micro scale or pore scale, resistance arises more naturally from shear stresses resulting from the no-slip condition that every real viscous fluid experiences at the solid walls of the pore skeleton. In the following, we give a brief overview of model concepts for fluid flow and transport relevant to subsurface environmental problems, derived and applied on the REV scale. For more detailed aspects of compositional single- and multiphase flow in porous media, we refer, e.g., to the textbook of Helmig (1997).

2.2.1 Single- and Multiphase Flow Through Porous Media

2.2.1.1 Single Phase Flow

The continuity equation for a fluid flowing with velocity v can be written as

$$\frac{\partial \varrho}{\partial t} + \nabla \cdot (\varrho\,\mathbf{v}) - q = 0 \qquad (2.1)$$

where ϱ is the density of the fluid and q is a source/sink term. For a flow through porous media considered on the REV scale, we use an empirical relationship, known as Darcy's Law, to relate the velocity to the gradient of the piezometric head h:

$$\mathbf{v} = -\mathbf{k}_f\,\nabla h \qquad (2.2)$$

kf (in m/s) is here the hydraulic conductivity, describing the resistance of the porous medium to the flow of a given fluid. This means that kf depends both on the fluid and on the porous medium. kf is related to the parameter permeability by


$$\mathbf{k}_f = \mathbf{K}\,\frac{\varrho\,g}{\mu}\,, \qquad (2.3)$$

where the intrinsic permeability K (in m²) is now only a property of the porous medium, while the influence of the fluid is considered by the fluid's dynamic viscosity μ. Note that we use here in this equation the more general form of kf and K as tensorial quantities. The piezometric head is defined as

$$h = \frac{p}{\varrho\,g} + z \qquad (2.4)$$

with p for pressure, g the gravitational constant and z the vertical coordinate.
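Equations (2.2)-(2.4) can be combined directly. The following sketch (plain Python with invented parameter values, independent of DuMux) converts an intrinsic permeability into a hydraulic conductivity for water and evaluates the Darcy velocity for a given difference in piezometric head; it is meant only to make the units and typical orders of magnitude tangible.

```python
# Darcy's law on the REV scale (1D, scalar K), cf. Eqs. (2.2)-(2.4).
rho = 1000.0      # water density in kg/m^3
mu  = 1.0e-3      # dynamic viscosity in Pa s
g   = 9.81        # gravitational acceleration in m/s^2
K   = 1.0e-12     # intrinsic permeability in m^2 (a typical sandstone-like value)

k_f = K * rho * g / mu                 # hydraulic conductivity, Eq. (2.3), in m/s
print(f"k_f = {k_f:.2e} m/s")

# Piezometric head h = p/(rho*g) + z at two observation points (hypothetical values).
p1, z1 = 2.0e5, -10.0                  # pressure in Pa, elevation in m
p2, z2 = 1.5e5, -10.0
h1 = p1 / (rho * g) + z1
h2 = p2 / (rho * g) + z2

distance = 100.0                       # distance between the two points in m
v = -k_f * (h2 - h1) / distance        # Darcy velocity, Eq. (2.2)
print(f"Darcy velocity = {v:.2e} m/s")
```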

2.2.1.2 Transport Problems, Compositional Flow

Using the equations above with appropriate initial and boundary conditions allows computation of a flow field to obtain Darcy velocities and pressures (or piezometric heads). In order to model transport problems, it is further necessary to consider the porosity of the porous medium. The concept of the representative elementary volume (REV) introduces the effective parameter porosity φ as:

$$\phi = \frac{\text{volume of pore space in REV}}{\text{total volume of REV}}\,. \qquad (2.5)$$

The porosity allows it to be taken into account that fluid flow only occurs in the void space and is, thus, typically faster than described by the Darcy velocity. Using the mass fraction X to describe the amount of a component κ within fluid phases, the mass balance equation of that component being transported with and within a fluid phase can then be written, for example, as:

$$\frac{\partial(\phi\,\varrho\,X^{\kappa})}{\partial t} - \nabla\cdot\left(\varrho\,X^{\kappa}\,\frac{\mathbf{K}}{\mu}\,(\nabla p - \varrho\,\mathbf{g})\right) - \nabla\cdot\left(\varrho\,D^{\kappa}\,\nabla X^{\kappa}\right) - q^{\kappa} = 0\,. \qquad (2.6)$$

In addition to a storage term, an advective flux term employing Darcy’s law and a source/sink term, this equation also now contains a diffusion term with D κ representing an effective diffusion/dispersion coefficient to describe the diffusive flux of component κ.
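For a single component in a single phase with constant density and porosity and a prescribed Darcy velocity, Eq. (2.6) reduces to a one-dimensional advection-diffusion equation. The explicit finite-volume sketch below (plain Python, not DuMux; all parameters are arbitrary illustration values) shows how such a simplified transport problem can be advanced in time; the explicit scheme is only stable if the CFL and diffusion time-step limits are respected.

```python
# 1D advection-diffusion transport of a mass fraction X, a simplified form of Eq. (2.6):
#   phi * dX/dt + v * dX/dx - D * d2X/dx2 = 0   (constant density, prescribed Darcy flux v)
phi, v, D = 0.3, 1.0e-5, 1.0e-8    # porosity, Darcy velocity (m/s), eff. diffusion (m^2/s)
L, n = 10.0, 200                   # domain length (m), number of cells
dx = L / n
u = v / phi                        # seepage (pore) velocity

dt = 0.4 * min(dx / u, dx * dx / (2.0 * D / phi))   # explicit stability limit
steps = 200

X = [0.0] * n
X[0] = 1.0                         # inflow boundary: constant mass fraction

for _ in range(steps):
    Xold = X[:]
    for i in range(1, n - 1):
        adv = -u * (Xold[i] - Xold[i - 1]) / dx                           # first-order upwind
        dif = (D / phi) * (Xold[i + 1] - 2.0 * Xold[i] + Xold[i - 1]) / dx ** 2
        X[i] = Xold[i] + dt * (adv + dif)
    X[-1] = X[-2]                  # simple outflow boundary

front = next(i for i, x in enumerate(X) if x < 0.5) * dx
print(f"after {steps * dt:.0f} s the X = 0.5 front sits at about {front:.2f} m")
```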

2.2.1.3 Multiphase Flow

For multiple fluid phases α ∈ (water, gas or other fluids) present inside an REV, another effective parameter is the saturation Sα :


$$S_\alpha = \frac{\text{volume of phase }\alpha\text{ in REV}}{\text{volume of pore space in REV}} \qquad (2.7)$$

For the saturations within an REV, there is a very simple closure relation which states that

$$\sum_\alpha S_\alpha = 1\,. \qquad (2.8)$$

Darcy's Law as in Eq. (2.2) can be extended to multiple phases and is then written as:

$$\mathbf{v}_\alpha = -\frac{k_{r\alpha}}{\mu_\alpha}\,\mathbf{K}\,(\nabla p_\alpha - \varrho_\alpha\,\mathbf{g})\,, \qquad (2.9)$$

where krα /μα represents the mobility of the phase, often denoted as λα . The relative permeability krα takes values between zero and one and is typically in non-linear relation to the phase saturations. For systems of two fluid phases or also for three fluid phases, there are a number of approaches in the literature that quantify the relative permeability–saturation relationship, e.g. according to Van Genuchten (1980) and Brooks and Corey (1964) for two phases or e.g. Parker et al. (1987) also for three phases. Similarly, non-linear relations usually describe the capillary pressure pc as a function of the fluid saturations. pc (S) = pnon-wetting − pwetting

(2.10)

is accordingly another closure relation for the system of partial differential equations that can be formulated for multiphase multi-component flow:

$$\frac{\partial\left(\phi\sum_\alpha \varrho_\alpha X_\alpha^\kappa S_\alpha\right)}{\partial t} - \nabla\cdot\left(\sum_\alpha \varrho_\alpha X_\alpha^\kappa \frac{k_{r\alpha}}{\mu_\alpha}\,\mathbf{K}\,(\nabla p_\alpha - \varrho_\alpha\,\mathbf{g})\right) - \sum_\alpha \nabla\cdot\left(D_{\mathrm{pm}}^{\kappa}\,\varrho_\alpha\,\nabla X_\alpha^{\kappa}\right) - q^\kappa = 0 \quad \forall\,\kappa\,. \qquad (2.11)$$

Further closure relations for this system of equations are found in thermodynamic relationships to calculate phase composition. An alternative formulation of these mass balance equations would use mole fractions instead of mass fractions to account for phase composition.
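The closure relations krα(S) and pc(S) are simple algebraic functions. As an illustration, the sketch below implements the two-phase parametrization of Brooks and Corey (1964); the entry pressure, pore-size distribution index and residual saturations are invented example values, and the Van Genuchten (1980) model could be coded analogously.

```python
# Brooks-Corey capillary pressure and relative permeabilities for a two-phase system.
# pd: entry pressure [Pa], lam: pore-size distribution index, Swr/Snr: residual saturations
pd, lam, Swr, Snr = 2000.0, 2.0, 0.1, 0.05   # hypothetical example values

def effective_saturation(Sw):
    return (Sw - Swr) / (1.0 - Swr - Snr)

def pc(Sw):
    return pd * effective_saturation(Sw) ** (-1.0 / lam)

def kr_wetting(Sw):
    Se = effective_saturation(Sw)
    return Se ** ((2.0 + 3.0 * lam) / lam)

def kr_nonwetting(Sw):
    Se = effective_saturation(Sw)
    return (1.0 - Se) ** 2 * (1.0 - Se ** ((2.0 + lam) / lam))

for Sw in (0.2, 0.4, 0.6, 0.8):
    print(f"Sw={Sw:.1f}: pc={pc(Sw):8.1f} Pa, "
          f"krw={kr_wetting(Sw):.3f}, krn={kr_nonwetting(Sw):.3f}")
```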

2.2.1.4 Non-isothermal Conditions

Thermal heat balance equations for the different fluids and the solid matrix are often simplified under the assumption of local thermal equilibrium; accordingly, a single balance equation for thermal energy in the fluid-filled porous medium is formulated:


$$\frac{\partial\left(\phi\sum_\alpha \varrho_\alpha u_\alpha S_\alpha\right)}{\partial t} + \frac{\partial\left((1-\phi)\,\varrho_s\,c_s\,T\right)}{\partial t} - \nabla\cdot\left(\lambda_{\mathrm{pm}}\,\nabla T\right) - \nabla\cdot\left(\sum_\alpha \varrho_\alpha\,h_\alpha\,\frac{k_{r\alpha}}{\mu_\alpha}\,\mathbf{K}\,(\nabla p_\alpha - \varrho_\alpha\,\mathbf{g})\right) = 0\,. \qquad (2.12)$$

Here, cs represents the specific heat capacity of the soil grains. u α is the specific internal energy of fluid phase α and h α the specific enthalpy respectively. λpm is an averaged heat conductivity for the fluid-filled porous medium. The assumption of local thermal equilibrium holds true for small grains and slow fluid velocities where equilibration is fast compared to the propagation of heat fronts. If this is not given, separate energy balance equations for the solid and the fluids need to be formulated, where heat transfer terms between the phases need to be introduced.
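Under local thermal equilibrium, a useful back-of-the-envelope quantity is the retardation of a heat front relative to the pore-water front: the heat is advected with the water but must also warm the solid grains. The sketch below estimates this retardation from volumetric heat capacities (the numbers are typical but invented values); it is only an order-of-magnitude illustration motivated by Eq. (2.12), not a DuMux calculation.

```python
# Thermal retardation of a heat front relative to the pore-water front
# for single-phase water flow under local thermal equilibrium.
phi = 0.25                      # porosity
rho_w, c_w = 1000.0, 4180.0     # water density (kg/m^3) and specific heat (J/(kg K))
rho_s, c_s = 2650.0, 800.0      # grain density and specific heat (hypothetical quartz-like values)

heat_capacity_fluid = phi * rho_w * c_w
heat_capacity_bulk = heat_capacity_fluid + (1.0 - phi) * rho_s * c_s

retardation = heat_capacity_bulk / heat_capacity_fluid
pore_velocity = 1.0e-5 / phi    # Darcy flux of 1e-5 m/s divided by porosity
thermal_velocity = pore_velocity / retardation

print(f"retardation factor R   = {retardation:.2f}")
print(f"pore-water velocity    = {pore_velocity:.2e} m/s")
print(f"thermal front velocity = {thermal_velocity:.2e} m/s")
```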

2.2.2 Flow-Induced Geomechanics

Flow through porous media is often modelled with the porous medium acting as a rigid body that interacts with the flow only unilaterally by representing a no-flow and no-slip boundary condition of the flow field. However, this is just one of many simplifying assumptions ultimately taken to come up with a practicable conceptual model, which contributes to any model's more or less defective image of the real world; keep in mind the words of George Box, see Sect. 2.1.3. Subsurface engineering problems often violate the assumption of a non-deforming porous medium. There are two perspectives which raise the demand for considering a two-way interaction between flow and matrix deformation. On the one hand, deformations can lead to changes in pore volumes, which strongly affects fluid pressures and thus the flow field. Deformations might also lead to permeability changes. On the other hand, it might be, for example, the integrity of a hydraulic barrier during a fluid injection that needs to be predicted by a model. In such a case, it is clear that metrics are required that account for flow-induced geomechanical effects and allow the evaluation of case-specific failure criteria, e.g. shear failure, reactivation of pre-existing faults, or tensile failure. Linking geomechanical models with subsurface flow models is currently a very active field of research with much less mature and established approaches than we have for 'simple' porous-media flow. For many decades, the latter has been described by Darcy's law as a commonly accepted and well-established model, while it was clear that (strong) simplifications are involved in neglecting interaction with other processes like geomechanical ones. The equations we present below represent current attempts at considering rock-mechanical effects under the simplifying assumption of linear poroelasticity. We further assume that geomechanical effects occur very fast, so that we can use a quasi-steady approach with all the time derivatives vanishing for this part of the overall system of equations. In other words, we have transient


fluid flow coupled to a quasi-steady geomechanical system. For more details than provided below, we refer e.g. to the following literature (Darcis 2013; Beck 2018; Beck et al. 2016, 2020).

2.2.2.1 Additional Parameters and Closure Relations

For describing geomechanics, we need to introduce a number of new parameters. The total stress tensor σ is a function of the fluid pressure and the effective stress of the porous matrix. It is related to the strain tensor via the elastic moduli of the porous medium; in turn, the strain tensor depends on the deformation vector u. Poroelasticity has to consider that the fluid contributes to the total stress. Following the concept of effective stresses σ′, formulated by Terzaghi (1923) and honoured by (among others) de Boer and Ehlers (1990), the effective stress equals the total stress σ reduced by the effective pore pressure:

$$\boldsymbol{\sigma}' = \boldsymbol{\sigma} - \alpha\,p_{\text{eff}}\,\mathbf{I}\,, \qquad (2.13)$$

α is here the Biot parameter (Biot 1955) which has a value between 0 and 1 depending on the ratio between the elasticity of the grains and the elasticity of the porous medium, thus it corresponds to a deformation of the solid skeleton. Generally, the higher the porosity, the better the assumption of α = 1 holds true, which is very often taken for simplicity. The effective pore pressure is often approximated by a simple weighting of fluid pressures with their corresponding saturations, e.g. in a two-phase system with fluid phases w (wetting) and n (non-wetting): peff = Sw pw + Sn pn .

(2.14)

The deformation vector u is often used as a primary variable for which the arising system of equations is solved, i.e. the overall system consisting of mass balance equations for the fluids or components, a balance equation for thermal energy (if required) and the momentum balance equations as introduced further below. Deformations lead to a local change in porosity, which is commonly expressed in terms of a reference porosity φ0 and the deformation vector:

$$\phi_{\text{eff}} = \frac{\phi_0 - \nabla\cdot\mathbf{u}}{1 - \nabla\cdot\mathbf{u}}\,. \qquad (2.15)$$

A change in porosity is typically linked to a change in permeability and a huge number of approaches is found in the literature. Many of them are presented and discussed in a review paper by Hommel et al. (2018).
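As an illustration of such porosity-permeability relations, the sketch below evaluates two forms of the kind discussed in that review, a simple power law and a Kozeny-Carman-type relation; the reference values and the exponent are arbitrary illustration choices, not recommendations for any particular application.

```python
# Porosity-permeability updates relative to a reference state (phi0, K0).
phi0, K0 = 0.2, 1.0e-13          # reference porosity and permeability (hypothetical)

def power_law(phi, exponent=3.0):
    return K0 * (phi / phi0) ** exponent

def kozeny_carman(phi):
    return K0 * (phi / phi0) ** 3 * ((1.0 - phi0) / (1.0 - phi)) ** 2

for phi in (0.20, 0.15, 0.10, 0.05):
    print(f"phi={phi:.2f}: K_power={power_law(phi):.2e} m^2, "
          f"K_KC={kozeny_carman(phi):.2e} m^2")
```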

2.2.2.2 Momentum Balance Equation for the Porous Matrix

The quasi-steady momentum balance equation contains a stress term and a gravity term:

$$\nabla\cdot\boldsymbol{\sigma} + \varrho_b\,\mathbf{g} = 0 \qquad (2.16)$$

ϱb is the bulk density and depends both on the fluid density and on the rock density ϱs. In a multiphase system, we approximate it with

$$\varrho_b = \phi\,(S_n\,\varrho_n + S_w\,\varrho_w) + (1-\phi)\,\varrho_s\,. \qquad (2.17)$$

Using Terzaghi's effective stress concept as in Eq. (2.13), the momentum balance for the solid can be reformulated:

$$\nabla\cdot\left(\boldsymbol{\sigma}' + p_{\text{eff}}\,\mathbf{I}\right) + \varrho_b\,\mathbf{g} = 0\,. \qquad (2.18)$$

Equation (2.18) requires the assignment of appropriate initial stress states which, according to Hooke's law, should correspond to initial strain and thus to initial displacements. This is, in general, quite difficult to achieve. Therefore, one might prefer to consider an initial stress state as a state where initial deformations are zero. Such a concept subtracts an initial stress state σ0 from the total stresses and then solves the equations for Δσ. Applying this to Eq. (2.18) then yields

$$\nabla\cdot\left(\Delta\boldsymbol{\sigma}' + \Delta p_{\text{eff}}\,\mathbf{I}\right) + \varrho_b\,\mathbf{g} = 0 \qquad (2.19)$$

Provided that Δφ, (1 − φ)Δϱs and Δϱw are small enough to be neglected, while e.g. Δϱn of a possibly compressible non-wetting phase is not, Δϱb can be expressed by

$$\Delta\varrho_b \approx \phi\left[\Delta S_n\,(\varrho_n - \varrho_w) + S_n\,\Delta\varrho_n\right], \qquad (2.20)$$

which would eventually lead to a momentum balance formulated for the changes relative to the initial state as here:

$$\nabla\cdot\left(\Delta\boldsymbol{\sigma}' + \Delta p_{\text{eff}}\,\mathbf{I}\right) + \phi\left[\Delta S_n\,(\varrho_n - \varrho_w) + S_n\,\Delta\varrho_n\right]\mathbf{g} = 0 \qquad (2.21)$$
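To make the effective-stress concept concrete, the following sketch computes effective principal stresses for a rising fluid pressure, cf. Eqs. (2.13) and (2.14), and evaluates a simple Mohr-Coulomb criterion as one possible metric for shear failure; all stress values, the cohesion and the friction angle are invented, and Mohr-Coulomb is only one of several failure criteria mentioned above.

```python
import math

# Total principal stresses (compression positive, in MPa) and Biot coefficient.
sigma1, sigma3 = 30.0, 18.0         # hypothetical in-situ stresses
alpha = 1.0                         # Biot parameter, cf. Eq. (2.13)
cohesion, friction_deg = 2.0, 30.0  # hypothetical Mohr-Coulomb parameters
phi_f = math.radians(friction_deg)

def mohr_coulomb_margin(p_fluid):
    """Return > 0 if the stress state is admissible, <= 0 at/after shear failure."""
    s1 = sigma1 - alpha * p_fluid   # effective stresses, Terzaghi/Biot
    s3 = sigma3 - alpha * p_fluid
    tau_max = 0.5 * (s1 - s3)                                   # radius of the Mohr circle
    strength = cohesion * math.cos(phi_f) + 0.5 * (s1 + s3) * math.sin(phi_f)
    return strength - tau_max

for p in (5.0, 10.0, 15.0, 20.0):   # increasing fluid pressure, e.g. during injection
    print(f"fluid pressure = {p:4.1f} MPa -> failure margin = {mohr_coulomb_margin(p):5.2f} MPa")
```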

2.2.3 Subsurface Flow and Reactive Transport

Subsurface flow often interacts with physical-chemical processes. Physical processes like dissolution, degassing, evaporation or condensation can affect the distribution of components within different fluid phases, usually described as dependent on pressure and temperature by given thermodynamic relations. The overall mass conservation of the components is not affected in the sense that components would appear or disappear from the system except via the system's boundaries. On the other hand,


chemical processes consume or produce the components which we are balancing. Thus, we need to introduce local source and sink terms, which depend on the reaction rates or, in other words, the reaction kinetics.

2.2.3.1 Example

Here we try to illustrate this by using an example from the work of Johannes Hommel, e.g. Hommel et al. (2015), Hommel et al. (2016), where induced calcite precipitation results in a very strong interaction between flow and reactive processes. This refers to an engineering technology that allows a targeted sealing of flow paths in the subsurface. The main reaction is catalysed by the enzyme urease, which is expressed, for example, by bacteria or added from jack bean extracts. Accordingly, one can refer to this technology as MICP (microbially induced calcite precipitation) or EICP (enzyme induced calcite precipitation). The overall reaction is given by

$$\mathrm{CO(NH_2)_2 + 2\,H_2O + Ca^{2+} \;\xrightarrow{\text{urease}}\; 2\,NH_4^+ + CaCO_3\!\downarrow}\,, \qquad (2.22)$$

where urea is hydrolyzed in water and the produced carbonates react with calcium ions, which can then precipitate, while ammonium is released at the same time. The precipitation then reduces pore space and permeability accordingly. Equation (2.22) is actually the sum of the following reactive sub-systems:

$$\mathrm{CO(NH_2)_2 + 2\,H_2O \;\xrightarrow{\text{urease}}\; 2\,NH_3 + H_2CO_3} \qquad (2.23)$$
$$\mathrm{NH_3 + H^+ \;\longleftrightarrow\; NH_4^+} \qquad (2.24)$$
$$\mathrm{H_2CO_3 \;\longleftrightarrow\; HCO_3^- + H^+} \qquad (2.25)$$
$$\mathrm{CO_3^{2-} + Ca^{2+} \;\longrightarrow\; CaCO_3\!\downarrow} \qquad (2.26)$$

A challenge in systems with many such reactive sub-systems is to decide whether all these sub-reactions need to be calculated or whether lumped parameters and reactions can be introduced to reduce the number of equations and also the number of individual components that need to be balanced. Another challenge is to decide whether these chemical reactions are considered as equilibrium reactions or as reactions which are controlled by kinetics, where reaction rates determine the transient behaviour. In particular for kinetics, there is a further complexity in the overall system of equations, i.e. flow, transport and chemistry, since another time derivative needs to be considered, and the time scale on which the reactions occur may differ from that of the flow and transport processes.

2.2.3.2 Equilibrium Reactions

Considering equilibrium reactions in a model that is coupled to flow and transport processes requires a decision regarding which chemical species and reactions should be considered, i.e. whether all reactive sub-systems are taken into account or whether, instead, reactive processes are described by lumped parameters and reactions. Such a decision is important for the number of balance equations which is formulated and, accordingly, for the number of unknowns in the arising set of equations that needs to be solved. Equilibrium reactions are usually described by the law of mass action, where an equilibrium constant, which depends on thermodynamic quantities like temperature, determines the equilibrium concentrations of the species involved in a reaction. Taking, for example, Eq. (2.25), which describes the first dissociation step of carbonic acid, we have an equilibrium dissociation constant of

$$K_a = \frac{[\mathrm{HCO_3^-}][\mathrm{H^+}]}{[\mathrm{H_2CO_3}]} = 2.5\cdot 10^{-4} \text{ at } 25\,^\circ\mathrm{C}. \qquad (2.27)$$

The values in the square brackets are the concentrations of the species. The dissociation constant can also be expressed by its negative log value, which is, in this case, pKa = 3.6. Note that for this particular dissociation of carbonic acid, it is often found that H2CO3 already includes the dissolved amount of CO2, referred to as CO2(aq). In fact, the concentration of H2CO3 is much lower than the concentration of CO2(aq). The equilibrium dissociation constant of that apparent H2CO3* dissociation is then Ka(app) = 4.47 · 10−7 or pKa(app) = 6.35 at 25 °C. More generally, in textbooks we can find formulations of the law of mass action also considering the stoichiometric coefficients (a, b, c, d) of the reactant concentrations (A, B, C, D):

$$a\,\mathrm{A} + b\,\mathrm{B} \;\leftrightarrow\; c\,\mathrm{C} + d\,\mathrm{D} \qquad (2.28)$$
$$K_{\mathrm{eq}} = \frac{[\mathrm{C}]^c\,[\mathrm{D}]^d}{[\mathrm{A}]^a\,[\mathrm{B}]^b} \qquad (2.29)$$

Using equilibrium reactions in this form in flow and transport models requires reactions to be very fast compared to the change in concentrations caused by flow and transport, so the time for reaching this equilibrium can be neglected.
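The law of mass action in Eq. (2.27) can be evaluated directly once the pH is known. The short sketch below uses the apparent dissociation constant pKa(app) = 6.35 quoted above to compute the HCO3−/H2CO3* ratio at a few pH values; activity corrections and the second dissociation step to carbonate are neglected for simplicity.

```python
# Speciation of the apparent carbonic acid dissociation H2CO3* <-> HCO3- + H+
# using the law of mass action, cf. Eq. (2.27), with pKa(app) = 6.35 at 25 degC.
pKa_app = 6.35
Ka_app = 10.0 ** (-pKa_app)

for pH in (4.0, 6.35, 8.0):
    h = 10.0 ** (-pH)                  # [H+] in mol/l, activities neglected
    ratio = Ka_app / h                 # [HCO3-]/[H2CO3*]
    frac_hco3 = ratio / (1.0 + ratio)  # fraction of (H2CO3* + HCO3-) present as HCO3-
    print(f"pH={pH:4.2f}: [HCO3-]/[H2CO3*] = {ratio:8.3f}, HCO3- fraction = {frac_hco3:.2f}")
```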

2.2.3.3 Non-equilibrium Reactions/Reaction Kinetics

Expressing Eq. (2.28) as an equilibrium actually means that forward and backward reactions proceed at the same rate. Before the equilibrium concentrations are reached, either forward or backward reactions occur at a higher rate so that a net rate in one of both directions occurs. One could express a reaction rate with units of kg/(m³ s) or mol/(m³ s) in general as

$$r = k\,[\mathrm{A}]^x\,[\mathrm{B}]^y\,. \qquad (2.30)$$


k is a reaction-rate constant and the exponents x and y determine the order of the reaction. In the simplest case, a constant reaction rate does not depend on the concentrations of the reactants. This can be the case, for example, for reactions catalysed by enzymes. Such a reaction is then of zeroth order and we have

$$r = k\,. \qquad (2.31)$$

Reactions where the rate depends linearly on the concentration of one (or the only) reactant, as, for example, in radioactive decay processes, are denoted as first order. This then yields, e.g.,

$$r = k\,[\mathrm{A}]\,. \qquad (2.32)$$

Similarly, this can be extended to second-order or nth-order reactions. In second-order reactions, the concentrations of two reactants affect the reaction rate, or the concentration of one reactant enters in a non-linear manner, and so on.
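The difference between zeroth- and first-order kinetics becomes obvious when the rate laws (2.31) and (2.32) are integrated over time. The sketch below does this with simple explicit Euler steps (the rate constants are arbitrary illustration values); for first-order decay the analytical solution exp(−kt) is available for comparison.

```python
import math

k0 = 5.0e-4      # zeroth-order rate constant in mol/(l s)   (hypothetical)
k1 = 1.0e-3      # first-order rate constant in 1/s          (hypothetical)
dt, steps = 10.0, 100

A_zero, A_first = 1.0, 1.0                  # initial concentrations in mol/l
for _ in range(steps):
    A_zero = max(A_zero - dt * k0, 0.0)     # r = k      (zeroth order, Eq. (2.31))
    A_first = A_first - dt * k1 * A_first   # r = k [A]  (first order,  Eq. (2.32))

t = dt * steps
print(f"after {t:.0f} s: zeroth order A = {A_zero:.3f} mol/l")
print(f"             first order  A = {A_first:.3f} mol/l "
      f"(analytical: {math.exp(-k1 * t):.3f})")
```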

2.2.3.4 Biochemical Rate Equations

In an example above, we mentioned the MICP technology. An essential part of modelling MICP is properly describing the growth of microorganisms. The Monod equation is a commonly applied model for such growth processes. It has the following form:

$$\mu = \mu_{\max}\,\frac{[\mathrm{A}]}{K_a + [\mathrm{A}]} \qquad (2.33)$$

μ is the specific growth rate of the microbes, μmax the corresponding maximum growth rate when the substrate A, which is limiting for growth, is in abundance. Ka is the concentration of substrate A at which μ/μmax = 1/2. Both μmax and Ka are characteristic parameters for the particular type of microorganism and the given substrate. This results in a growth rate which asymptotically approaches the maximum rate with increasing substrate concentration. Mathematically, the Monod equation has the same form as the Michaelis-Menten equation, which is one of the most established models for enzyme kinetics in biochemistry. Reaction rates following Michaelis-Menten could be written as:

$$r = k\,\frac{[\mathrm{A}]}{K_a + [\mathrm{A}]}\,, \qquad (2.34)$$

where k is again a rate constant; the maximum rate, approached for large [A], is equal to k.


References

Aziz K, Settari A (1979) Petroleum reservoir simulation. Applied Science Publishers
Bear J (1972) Dynamics of fluids in porous media. Elsevier, Amsterdam
Bear J, Bachmat Y (1990) Introduction to modeling of transport phenomena in porous media. Kluwer Academic Publishers, Berlin
Beck M (2018) Conceptual approaches for the analysis of coupled hydraulic and geomechanical processes. PhD thesis, University of Stuttgart
Beck M, Seitz G, Class H (2016) Volume-based modelling of fault reactivation in porous media using a visco-elastic proxy model. Transp Porous Media 114:505–524
Beck M, Rinaldi AP, Flemisch B, Class H (2020) Accuracy of fully coupled and sequential approaches for modeling hydro- and geomechanical processes. Comput Geosci. https://doi.org/10.1007/s10596-020-09987-w
Biot MA (1955) Theory of elasticity and consolidation for a porous anisotropic solid. J Appl Phys 25:182–185
Brooks A, Corey A (1964) Hydraulic properties of porous media. Hydrol Pap, Colorado State University, Fort Collins
Buckingham E (1907) Studies on the movement of soil moisture. Bulletin 38. USDA Bureau of Soils, Washington, DC
Chavent G, Jaffre J (1978) Mathematical models and finite elements for reservoir simulation. North-Holland
Class H (2008) Models for non-isothermal compositional gas-liquid flow and transport in porous media. Habilitation thesis, University of Stuttgart
Darcis M (2013) Coupling models of different complexity for the simulation of CO2 storage in deep saline aquifers. PhD thesis, University of Stuttgart
Darcy H (1856) Les fontaines de la ville de Dijon. Dalmont, Paris
de Boer R, Ehlers W (1990) The development of the concept of effective stresses. Acta Mechanica 83:77–92
Helmig R (1997) Multiphase flow and transport processes in the subsurface. Springer, Berlin
Hirschfelder J, Curtiss C, Bird R (1954) Molecular theory of gases and liquids. Wiley, Hoboken
Hommel J, Lauchnor E, Phillips A, Gerlach R, Cunningham A, Helmig R, Ebigbo A, Class H (2015) A revised model for microbially induced calcite precipitation: improvements and new insights based on recent experiments. Water Resour Res 51:3695–3715
Hommel J, Lauchnor E, Gerlach R, Cunningham A, Ebigbo A, Helmig R, Class H (2016) Investigating the influence of the initial biomass distribution and injection strategies on biofilm-mediated calcite precipitation in porous media. Transp Porous Media 114:557–579
Hommel J, Coltman E, Class H (2018) Porosity-permeability relations for evolving pore space: a review with a focus on (bio-)geochemically altered porous media. Transp Porous Media 124:589–629
Huyakorn P, Pinder G (1983) Computational methods in subsurface flow. Academic, Cambridge
IPCC (2005) Special report on carbon dioxide capture and storage. In: Metz B, Davidson O, de Coninck HC, Loos M, Meyer LA (eds) Prepared by working group III of the intergovernmental panel on climate change. Cambridge University Press, Cambridge
Kaplan S (1997) The words of risk analysis. Risk Anal 17:407–417
Kaplan S, Garrick B (1981) On the quantitative definition of risk. Risk Anal 1:11–27
Kissinger A, Noack V, Knopf S, Konrad W, Scheer D, Class H (2017) Regional-scale brine migration along vertical pathways due to CO2 injection - part 2: a simulated case study in the North German Basin. Hydrol Earth Syst Sci 21:2751–2775
Kopp A, Binning P, Johannsen K, Helmig R, Class H (2010) A contribution to risk analysis for leakage through abandoned wells in geological CO2 storage. Adv Water Resour 33:867–879
Looney B, Falta R (2000) Vadose zone. Battelle Press
Parker J, Lenhard R, Kuppusami T (1987) A parametric model for constitutive properties governing multiphase flow in porous media. Water Resour Res 23(4):618–624


Poling B, Prausnitz J, O'Connel J (2001) The properties of gases and liquids. McGraw-Hill, Inc., New York
Richards L (1931) Capillary conduction of liquids through porous mediums. PhD thesis, Cornell University
Scheidegger A (1974) The physics of flow through porous media, 3rd edn. University of Toronto Press, Toronto
Terzaghi K (1923) Die Berechnung der Durchlässigkeitsziffer des Tones aus dem Verlauf der hydrodynamischen Spannungserscheinungen. Sitzungsberichte Akademie der Wissenschaften (Wien), Math.-naturwiss. Klasse, Abt. IIa 132:25–138
Van Genuchten R (1980) A closed-form equation for predicting the hydraulic conductivity of unsaturated soils. Soil Sci Soc Am J 44:892–898
Vargaftik N (1975) Tables on the thermophysical properties of liquids and gases, 2nd edn. Wiley, Hoboken
Walker WE, Harremoës P, Rotmans J, van der Sluijs JP, van Asselt MB, Janssen P, Krayer von Krauss MP (2003) Defining uncertainty: a conceptual basis for uncertainty management in model-based decision support. Integr Assess 4:5–17
Walter L, Binning P, Oladyshkin S, Flemisch B, Class H (2012) Brine migration resulting from CO2 injection into saline aquifers - an approach to risk estimation including various levels of uncertainty. Int J Greenh Gas Control 9:495–506

Chapter 3

Overview of Mathematical and Numerical Solution Methods

This chapter provides a brief overview of the mathematical and numerical methods which are employed for solving the systems of equations that arise from the kind of subsurface environmental problems discussed in this book. It serves the interested reader as a reference while, for details, we refer to textbooks focusing on these subjects, such as Chen et al. (2006), Helmig (1997). In Sect. 3.1, general approaches for solving multiphase flow equations in terms of choosing primary variables and boundary conditions are examined. Methods for the temporal and spatial discretization of the continuous mathematical problem formulations are described in Sect. 3.2. Solution strategies for the resulting large nonlinear systems of equations are discussed in Sect. 3.3. Finally, Sect. 3.4 investigates the choice of primary variables for multi-component systems with miscible fluid phases.

3.1 Approaches for Solving Multiphase Flow Equations

In Chap. 2, we introduced the governing balance equations for multiphase flow (2.6), compositional multiphase multi-component flow and transport (2.11) and an additional thermal energy balance equation (2.12). Depending on the number of phases, additional supplementary constraints, constitutive relationships and closing relations, these equations compose a system of partial differential equations which are typically strongly coupled to each other. They are of mixed hyperbolic/parabolic character according to the influence of the capillary pressure relative to the advective flow of the phases. Diffusive effects, as we have in some compositional systems, further shift the character of the equations towards parabolic. Such effects are caused by molecular diffusion, dispersion and heat conduction. The mathematical behaviour of the general multiphase flow equations is discussed in more detail in Helmig (1997). For the case of an isothermal two-phase system, (2.6) stands for a set of two coupled partial differential equations, with one equation for the wetting phase w (typically: water) and the other one for the non-wetting phase n (e.g.: NAPL or


gas). The system of equations is closed by algebraic relations: the capillary pressure constraint (2.10), the saturation constraint (2.8) and constitutive relationships for krα (S), pc (S), ρα ( p, T ), μα ( p, T ), etc. According to Gibbs’ phase rule and with restriction to an isothermal system, two independent primary variables are sufficient for describing the system uniquely. The choice of the primary variables allows some alternative formulations of the two-phase flow equations as explained in the next subsection.

3.1.1 Formulations

We focus on the three commonly encountered formulations: pressure-pressure, pressure-saturation and global pressure-saturation.

3.1.2 Pressure-Pressure Formulation

The pressures of the two fluid phases pw and pn are both chosen as primary variables in the solution vector. Based on the pressure difference, the saturations of the fluid phases are then calculated via an inverse capillary pressure function

$$S_\alpha = g_\alpha(p_n - p_w) \quad \text{for } \alpha = n, w \qquad (3.1)$$

with the preconditions that pc as a function of Sw (resp. Sn ) behaves strictly monotonically. This allows the formulation of the partial differential equations as follows. For the wetting phase (index w), one obtains   ∂(φgw ρw ) − ∇ · λw ρw K(∇ pw − ρw g) − ρw qw = 0, ∂t

(3.2)

while the equation for the non-wetting phase (index n) can be written as   ∂(φgn ρn ) − ∇ · λn ρn K(∇ pn − ρn g) − ρn qn = 0. ∂t

(3.3)

The primary variables are underlined herein. The numerical behaviour of the pressure-pressure formulation depends strongly on the shape, and in particular on the slope of the capillary pressure function. In regions (of saturation values) where dpc is too small, the calculation of the saturation from the inverted pc (S)-function d Sw becomes unstable since already small variations in pc produce strong variations in Sw . This is probably the main reason why the pressure-saturation formulation is often preferred over the pressure-pressure formulation.


3.1.3 Pressure-Saturation Formulation

Primary variables in this case are one phase pressure and one saturation of a fluid phase. The appropriate choice of which pressure and which saturation to use may depend on a number of factors such as, e.g., the boundary conditions. Below, we take the $p_w$-$S_n$ formulation, i.e. the pressure of the wetting phase and the saturation of the non-wetting phase. The modifications of the general two-phase equations are summarized by the algebraic relations

$$\nabla p_n = \nabla(p_w + p_c), \qquad \frac{\partial S_w}{\partial t} = \frac{\partial (1 - S_n)}{\partial t} = -\frac{\partial S_n}{\partial t}, \qquad (3.4)$$

which yields, after insertion into the system of partial differential equations, for the wetting phase

$$-\frac{\partial(\phi S_n \rho_w)}{\partial t} - \nabla\cdot\left[\lambda_w \rho_w \mathbf{K}\,(\nabla p_w - \rho_w \mathbf{g})\right] - \rho_w q_w = 0 \qquad (3.5)$$

and for the non-wetting phase

$$\frac{\partial(\phi S_n \rho_n)}{\partial t} - \nabla\cdot\left[\lambda_n \rho_n \mathbf{K}\,(\nabla p_w + \nabla p_c - \rho_n \mathbf{g})\right] - \rho_n q_n = 0. \qquad (3.6)$$

Since the pressure-saturation formulation includes one of the saturations in the vector of primary variables, its numerical behaviour is much less dependent on the slope or steepness of the $p_c(S)$-curve.

3.1.4 Global Pressure-Saturation Formulation

The global pressure formulation is often known as the fractional-flow formulation. For a detailed derivation we refer to e.g. Chavent and Jaffre (1978). The global pressure formulation is essentially based on mathematically motivated constructs with only limited physical meaning. One such construct is the total velocity

$$\mathbf{v}_t = \mathbf{v}_w + \mathbf{v}_n, \qquad (3.7)$$

representing the sum of the velocities of the two fluid phases, although it cannot be observed or measured in reality. The advantage of this definition is that it allows one to transform the two-phase flow equations into

$$\frac{\partial\phi}{\partial t} + \frac{1}{\rho_w}\left(\phi S_w \frac{\partial\rho_w}{\partial t} + \nabla\rho_w\cdot\mathbf{v}_w\right) + \frac{1}{\rho_n}\left(\phi S_n \frac{\partial\rho_n}{\partial t} + \nabla\rho_n\cdot\mathbf{v}_n\right) + \nabla\cdot\mathbf{v}_t = q_w + q_n. \qquad (3.8)$$

The total velocity can be expressed dependent on the non-wetting phase pressure $p_n$ by applying Darcy's Law and the capillary-pressure relation $p_n - p_w = p_c$:

$$\mathbf{v}_t = -\lambda\mathbf{K}\,(\nabla p_n - f_w \nabla p_c - \mathbf{G}). \qquad (3.9)$$

In (3.9) several abbreviations are used: $\lambda = \lambda_w + \lambda_n$ is the total mobility, $f_w = \lambda_w/\lambda$ is the fractional flow of the water phase and $\mathbf{G} = \mathbf{g}\,(\lambda_w\rho_w + \lambda_n\rho_n)/\lambda$ is a modified gravity vector. The idea is to find a scalar function, the global pressure $p$, so that Eq. (3.9) looks similar to Darcy's Law. Thus, $p$ must be chosen so that

$$\nabla p = \nabla p_n - f_w \nabla p_c. \qquad (3.10)$$

With this, it can be shown that for any $S_w$ it holds: $p_w \le p \le p_n$. Combining (3.9) and (3.10) and inserting them into (3.8) yields an equation to calculate the global pressure $p$ that, for low-compressible fluid phases, is coupled to the saturation only via $\lambda$ and $\mathbf{G}$ and not via the storage term as, e.g., in (3.5). Thus, the set of equations for the global pressure-saturation formulation with the unknowns $p$ and $S_w$ is given by the pressure equation

$$\nabla\cdot\left[-\lambda\mathbf{K}\,(\nabla p - \mathbf{G})\right] = q_w + q_n - \frac{\partial\phi}{\partial t} - \frac{1}{\rho_w}\left(\phi S_w \frac{\partial\rho_w}{\partial t} + \nabla\rho_w\cdot\mathbf{v}_w\right) - \frac{1}{\rho_n}\left(\phi S_n \frac{\partial\rho_n}{\partial t} + \nabla\rho_n\cdot\mathbf{v}_n\right), \qquad (3.11)$$

the saturation equation

$$\frac{\partial(\phi\rho_w S_w)}{\partial t} = \rho_w q_w - \nabla\cdot(\rho_w\mathbf{v}_w), \qquad (3.12)$$

and the equations for the phase velocities

$$\mathbf{v}_w = f_w\mathbf{v}_t + \lambda_n f_w\mathbf{K}\,(\nabla p_c + (\rho_w - \rho_n)\mathbf{g}), \qquad (3.13)$$

$$\mathbf{v}_n = f_n\mathbf{v}_t - \lambda_n f_w\mathbf{K}\,(\nabla p_c + (\rho_w - \rho_n)\mathbf{g}). \qquad (3.14)$$

The weakly coupled equations (3.11) and (3.12) may be solved sequentially. Within a sequential scheme, the saturation equation (3.12) is typically dominated by advection and thus often solved explicitly, while the pressure equation (3.11) is solved implicitly (IMPES method: Implicit Pressure Explicit Saturation). However, since, as mentioned before, the global pressure $p$ and the total velocity $\mathbf{v}_t$ are theoretical mathematical constructs with limited physically measurable meaning, it may be a problem to determine the boundary conditions for a global pressure-saturation formulation from given experiments (Binning and Celia 1999).

Using some simplifying assumptions, this formulation of two-phase flow systems is helpful, e.g., to analytically investigate 1D model problems. The complexity of the equations can be significantly reduced by assuming incompressibility of the fluids and the pore space, no sources/sinks and no gravity. Then the pressure equation (3.11) reduces to

$$\frac{\partial}{\partial x}\left(-\lambda K\,\frac{\partial p}{\partial x}\right) = 0. \qquad (3.15)$$

If capillary pressure is zero, the saturation equation (3.12) is purely hyperbolic and can be written as

$$\phi\,\frac{\partial S_w}{\partial t} + v_t\,\frac{\partial f_w(S_w)}{\partial x} = 0. \qquad (3.16)$$

Equation (3.16) is known as the Buckley-Leverett equation. The solution of this equation is extensively discussed in the literature, e.g. by LeVeque (1992). If capillary pressure is not neglected as in (3.16), the saturation equation reveals parabolic behaviour. Here, we again refer to the literature, e.g. the McWhorter problem described by a quasi-analytical solution presented by McWhorter and Sunada (1990).
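To illustrate how the reduced saturation equation can be treated numerically within an IMPES-type approach, the following Python sketch (not taken from the book) solves the Buckley-Leverett equation (3.16) with an explicit, first-order upwind finite-volume scheme; the quadratic relative permeabilities, the constant total velocity and all other parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not from the book): Buckley-Leverett problem, Eq. (3.16),
#   phi * dSw/dt + vt * d(fw(Sw))/dx = 0,
# solved with an explicit first-order upwind finite-volume scheme.
# Quadratic relative permeabilities and all parameter values are illustrative.

phi, vt = 0.2, 1e-5            # porosity [-], total velocity [m/s]
mu_w, mu_n = 1e-3, 5e-3        # viscosities [Pa s]
L, nx = 100.0, 200             # domain length [m], number of cells
dx = L / nx

def fw(sw):
    """Fractional flow function for quadratic relative permeabilities."""
    lam_w = sw ** 2 / mu_w
    lam_n = (1.0 - sw) ** 2 / mu_n
    return lam_w / (lam_w + lam_n)

sw = np.zeros(nx)              # initially fully drained
sw_inj = 1.0                   # injected saturation at the left boundary
dt = 0.2 * phi * dx / vt       # conservative CFL-type restriction
t, t_end = 0.0, 10 * 24 * 3600.0

while t < t_end:
    # upwinding: the flux over each face is evaluated with the cell to its left
    face_flux = vt * fw(np.concatenate(([sw_inj], sw)))
    sw -= dt / (phi * dx) * (face_flux[1:] - face_flux[:-1])
    t += dt

print("saturation profile (every 20th cell):", np.round(sw[::20], 3))
```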

3.1.5 Assigning Boundary Conditions

Solving initial and boundary-value problems such as those explained above requires the assignment of initial conditions to the whole domain and boundary conditions to the complete boundary $\Gamma$ of the solution domain. Different types of boundary conditions can be distinguished and used in combination with segments $\Gamma_i$ of a subdivided boundary.

$$u = u_D(\mathbf{x}, t) \quad \text{on } \Gamma_D \qquad \text{(Dirichlet boundary condition)}$$

A Dirichlet boundary condition fixes the value of a primary variable $u$ at a given time $t$ and location $\mathbf{x}$. In the engineering literature, this is often also referred to as the essential boundary condition.

$$C(\mathbf{x}, t)\,\frac{\partial u}{\partial n} = u_N(\mathbf{x}, t) \quad \text{on } \Gamma_N \qquad \text{(Neumann boundary condition)}$$

A Neumann boundary condition provides information about the derivative of the primary variable at the boundary. $\mathbf{n}$ is the normal vector at the boundary segment $\Gamma_N$. $C$ can be a function depending on time and location. Depending on $C$, $u_N$ may express a flux across the boundary into or out of the domain. Thus, boundary conditions of the Neumann type are also called flux boundary conditions in this context.

$$C_1(\mathbf{x}, t)\,\frac{\partial u}{\partial n} + C_2\,u = u_R(\mathbf{x}, t) \quad \text{on } \Gamma_R \qquad \text{(Robin boundary condition)}$$

A Robin boundary condition represents a more complex type that allows the specification of information on both the values of the primary variables (Dirichlet type) and their gradients (Neumann type). Robin boundary conditions may be, for example, useful for modelling the interaction between a groundwater system and a surface-water system. In such a case, the water flux across the system interface often depends on the water levels of both systems as well as on the properties of the interface, such as the permeability of the sediment in a river bed.

For many multiphase flow problems, the assignment of the boundary conditions can be a rather non-trivial problem. Often, it is not practical to assign a Dirichlet condition or a Neumann condition to every segment of the model domain boundary. This holds true particularly for boundaries that represent an interface to another physical system or compartment which cannot be modelled with the same or similar partial differential equations. An example would be the interface between a porous medium and a free-flow region, which separates two different compartments: the porous medium, mostly modelled with a Darcy-type approach, on the one side, and the free-flow domain, which requires a (Navier-)Stokes model, on the other (Fetzer et al. 2017). Such a type of problem occurs in manifold variations. Another example is the injection of a fluid into a laboratory sand-box with a free outflow to the atmosphere after the breakthrough (Class and Helmig 2002). In this case, the atmospheric pressure is the only 'known' quantity which can be assigned as a Dirichlet value to the interface between the porous medium and the environment, which is the boundary of the solution domain, while saturations or concentrations of components in the fluid phases develop over time on the outflow boundary. They are not known a priori; thus, neither can their values be fixed as Dirichlet values nor are the phase/component fluxes known.
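As a small illustration of how such boundary conditions enter a discrete system, the following Python sketch (not taken from the book) solves a steady 1D groundwater flow problem with a Dirichlet head on one side and a Robin-type leakage condition towards a river on the other; the transmissivity, the leakage coefficient and the water levels are illustrative values.

```python
import numpy as np

# Minimal sketch (not from the book): steady 1D groundwater flow,
#   -d/dx (T dh/dx) = 0,
# discretized with a cell-centred finite-volume scheme. Left boundary:
# Dirichlet head. Right boundary: Robin (leakage) condition
#   -T dh/dx = C_leak * (h - h_river),
# mimicking the groundwater/surface-water interaction mentioned above.
# All parameter values are illustrative.

n, L = 50, 1000.0               # number of cells, domain length [m]
dx = L / n
T = 1e-3                        # transmissivity [m^2/s]
h_left, h_river = 12.0, 10.0    # Dirichlet head and river stage [m]
C_leak = 1e-6                   # river-bed leakage coefficient [m/s]

A = np.zeros((n, n))
b = np.zeros(n)

t_face = T / dx                 # transmissibility of an interior face
for i in range(n - 1):
    A[i, i] += t_face
    A[i, i + 1] -= t_face
    A[i + 1, i + 1] += t_face
    A[i + 1, i] -= t_face

A[0, 0] += 2 * T / dx           # Dirichlet: half-cell transmissibility to the boundary value
b[0] += 2 * T / dx * h_left

g_robin = 1.0 / (dx / (2 * T) + 1.0 / C_leak)   # half-cell and river bed act in series
A[-1, -1] += g_robin
b[-1] += g_robin * h_river

h = np.linalg.solve(A, b)
print("head at the first and last cell centre [m]:", h[0], h[-1])
```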

3.2 Discretization of the Equations

The systems of partial differential equations are typically solved by numerical methods since analytical solutions are not feasible. The equations then need discretization in space and in time. There is already a large number of discretization methods available, which were individually designed for dealing with distinct numerical difficulties that may occur during the simulations. In general, the equations for subsurface porous media flow and transport describe an advection-diffusion problem, which can be coupled to different kinds of other processes like reactions, geomechanics or others. Accordingly, the characteristics of the equations can change and further treatment is required. Diffusive effects are enhanced, for example, by capillary pressure, concentration gradients, temperature gradients, etc.; advective behaviour typically dominates during infiltration, displacement and non-diffusive transport processes.

For such kinds of processes, finite-volume or finite-element methods and a multitude of variants thereof are a common choice. The methods need to be robust, multi-dimensional and capable of covering a wide range of varying characteristic properties of the equations. Fully-coupled (all equations solved simultaneously) and fully-implicit (in time) methods using mass-conservative schemes are often preferred. This might become computationally expensive if the system of equations is too big and too complex; for such cases, specialized alternatives with new advantages and disadvantages are available and in development.

In the following, we focus on the general case of balance equations of the form

$$\frac{\partial m(u)}{\partial t} + \nabla\cdot\mathbf{f}(u, \nabla u) + q(u) = 0, \qquad (3.17)$$

seeking an unknown quantity $u$ in terms of storage $m$, flux $\mathbf{f}$ and source $q$. All mathematical models presented in Chap. 2 and further refined in Sect. 3.1.1 can be written in the form of (3.17) with possibly vector-valued quantities $u$, $m$, $q$ and a tensor-valued flux $\mathbf{f}$. For the sake of simplicity, we assume scalar quantities $u$, $m$, $q$ and a vector-valued flux $\mathbf{f}$ in the rest of this section.

3.2.1 Time Discretization

Usually, the first step for discretizing (3.17) is to choose an approximation for the temporal derivative $\partial m(u)/\partial t$. While many elaborate methods for this approximation exist (Deuflhard and Bornemann 2012), we focus on the simplest one, a first-order difference quotient

$$\frac{\partial m(u^{k/k+1})}{\partial t} \approx \frac{m(u^{k+1}) - m(u^{k})}{\Delta t_{k+1}} \qquad (3.18)$$

for approximating the solution $u$ at time $t_k$ (forward) or $t_{k+1}$ (backward). The question of whether to choose the forward or the backward quotient leads to the explicit and implicit Euler method, respectively. In case of the former, inserting (3.18) into (3.17) at time $t_k$ leads to

$$\frac{m(u^{k+1}) - m(u^{k})}{\Delta t_{k+1}} + \nabla\cdot\mathbf{f}(u^{k}, \nabla u^{k}) + q(u^{k}) = 0, \qquad (3.19)$$

whereas the implicit Euler method is described as

$$\frac{m(u^{k+1}) - m(u^{k})}{\Delta t_{k+1}} + \nabla\cdot\mathbf{f}(u^{k+1}, \nabla u^{k+1}) + q(u^{k+1}) = 0. \qquad (3.20)$$

Once the solution $u^k$ at time $t_k$ is known, it is straightforward to determine $m(u^{k+1})$ from (3.19), while attempting to do the same based on (3.20) involves the solution of a nonlinear system. On the other hand, the explicit method (3.19) is stable only if the time-step size $\Delta t_{k+1}$ is below a certain limit that depends on the specific balance equation, whereas the implicit method (3.20) is unconditionally stable.
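The stability statement can be illustrated with a few lines of Python (not taken from the book), applying both Euler variants to the linear decay equation dm/dt = -a m; the rate constant and the two time-step sizes, one below and one above the explicit stability limit 2/a, are illustrative choices.

```python
# Minimal sketch (not from the book): explicit vs. implicit Euler for the
# linear decay equation dm/dt = -a*m. The explicit method is stable only
# for dt < 2/a, the implicit method for any dt. Parameter values are
# illustrative.

a, m0, t_end = 1.0, 1.0, 10.0

def euler(dt, implicit):
    m, t = m0, 0.0
    while t < t_end:
        if implicit:
            m = m / (1.0 + a * dt)    # backward Euler: m_new = m_old - a*dt*m_new
        else:
            m = m - a * dt * m        # forward Euler:  m_new = m_old - a*dt*m_old
        t += dt
    return m

for dt in (0.5, 2.5):                 # dt = 2.5 violates the explicit stability limit
    print(f"dt={dt}: explicit={euler(dt, False):+.3e}, implicit={euler(dt, True):+.3e}")
```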


3.2.2 Space Discretization

For spatial discretization, we focus on finite-volume methods. They amount to dividing the spatial domain into a set of control volumes $V_i$, $i = 1, \ldots, n_V$, that share intersections $e_{ij} = V_i \cap V_j$ of codimension one, $j \in N_i$, the set of neighbours of $V_i$. The balance equation (3.17) is integrated over each control volume $V_i$, yielding

$$\int_{V_i}\left(\frac{\partial m(u)}{\partial t} + \nabla\cdot\mathbf{f}(u, \nabla u) + q(u)\right)\mathrm{d}V = 0. \qquad (3.21)$$

Applying Gauß' theorem to the flux term and splitting the surface integral into the contributions of the individual intersections $e_{ij}$ yields

$$\int_{V_i}\left(\frac{\partial m(u)}{\partial t} + q(u)\right)\mathrm{d}V + \sum_{j\in N_i}\int_{e_{ij}}\mathbf{f}(u, \nabla u)\cdot\mathbf{n}_{ij}\,\mathrm{d}e = 0. \qquad (3.22)$$

The defining features of a specific finite-volume method are now the choices for approximating the unknown $u$ and the fluxes $\mathbf{f}(u, \nabla u)\cdot\mathbf{n}_{ij}$. In the following, we provide a brief overview of two standard methods, namely, the box method and the cell-centred finite-volume method.

3.2.3 Box Method

The so-called box method unites the advantages of finite-volume (FV) and finite-element (FE) methods. First, the model domain is discretized with a FE mesh consisting of vertices $i$ and corresponding elements $E_k$. Then, a secondary FV mesh is constructed by connecting the midpoints and barycentres of the elements surrounding vertex $i$, creating a box $B_i$ around vertex $i$ (see Fig. 3.1a). The FE mesh divides the box $B_i$ into subcontrol volumes (scv's) $b_i^k$ (see Fig. 3.1b). Figure 3.1c shows the finite element $E_k$ and the scv's $b_i^k$ inside $E_k$, which belong to four different boxes $B_i$. Also necessary for the discretization are the faces of the subcontrol volumes (scvf's) $e_{ij}^k$ between the scv's $b_i^k$ and $b_j^k$, where $|e_{ij}^k|$ is the length of the scvf. The integration points $x_{ij}^k$ on $e_{ij}^k$ and the outer normal vector $\mathbf{n}_{ij}^k$ are also to be defined (see Fig. 3.1c).

The idea is to apply the finite-volume method (3.22) to each box $V_i = B_i$ and to split the integrals $\int_{V_i}$ and $\int_{e_{ij}}$ into $\sum_k \int_{b_i^k}$ and $\sum_k \int_{e_{ij}^k}$. To calculate the fluxes across the interfaces $e_{ij}^k$, finite-element interpolation at the integration points $x_{ij}^k$ is employed. Consequently, at each scvf the following expression results:

$$\int_{e_{ij}^k}\mathbf{f}(u, \nabla u)\cdot\mathbf{n}_{ij}^k\,\mathrm{d}e \approx |e_{ij}^k|\;\tilde{\mathbf{f}}\big(\tilde{u}(x_{ij}^k), \nabla\tilde{u}(x_{ij}^k)\big)\cdot\mathbf{n}_{ij}^k \quad\text{with}\quad \tilde{u}(x_{ij}^k) = \sum_i \hat{u}_i\,N_i(x_{ij}^k). \qquad (3.23)$$

Above, $\hat{u}_i$ indicates the unknown solution value at vertex $i$ and $N_i$ the finite-element basis function associated with $i$. The approximation of the flux function $\mathbf{f}$ by $\tilde{\mathbf{f}}$ usually involves the averaging of coefficients. For example, if $\mathbf{f}(u, \nabla u) = \lambda(u)\mathbf{K}\nabla u$ for the mobility $\lambda$ and the permeability $\mathbf{K}$, then $\tilde{\mathbf{f}}\big(\tilde{u}(x_{ij}^k), \nabla\tilde{u}(x_{ij}^k)\big) = \lambda(u_{ij}^{\mathrm{up}})\,\mathbf{K}_k\,\nabla\tilde{u}(x_{ij}^k)$, where $\lambda(u_{ij}^{\mathrm{up}})$ denotes the mobility evaluated upstream and $\mathbf{K}_k$ the permeability of the element $E_k$.

Fig. 3.1 Discretization of the box method

3.2.4 Cell-Centred Finite-Volume Method

The cell-centred finite-volume method uses the elements of the grid as control volumes. For each control volume, all discrete values are determined at the element/control-volume centre (see Fig. 3.2). The mass or energy fluxes are evaluated at the integration points $x_{ij}$, which are located at the midpoints of the control-volume faces. Both Neumann and Dirichlet boundary conditions are formulated by prescribing fluxes at the boundary control-volume faces. Different variants of cell-centred schemes result mainly from different calculations of the fluxes between two neighbouring control volumes. In the simplest and oldest variant, the flux is based only on the two potentials in the neighbouring control volumes. This so-called two-point flux approximation is monotone and robust, but only consistent on K-orthogonal grids, namely, if the permeability tensor multiplied with the normals of the control-volume faces yields vectors that are aligned with the distance vectors between cell and face centres. Developing methods that are consistent for more general grids is an active field of research (Aavatsmark et al. 2008; Edwards and Rogers 1998; Schneider et al. 2017), usually leading to so-called multi-point flux approximations, namely, the dependency of a face flux on more than only the two neighbouring cell potentials.


Fig. 3.2 Discretization of the cell-centred finite volume method
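The following Python sketch (not taken from the book) shows how two-point flux transmissibilities are typically computed and assembled for a 1D cell-centred grid with a heterogeneous permeability field; the harmonic combination of the two half-cell contributions and all parameter values are illustrative choices, and on such a 1D grid the scheme is consistent.

```python
import numpy as np

# Minimal sketch (not from the book): two-point flux approximation (TPFA)
# on a 1D cell-centred grid for -d/dx(K/mu dp/dx) = 0 with Dirichlet
# pressures on both boundaries. The harmonic combination of the two
# half-cell transmissibilities and all parameter values are illustrative.

n, L, mu = 100, 10.0, 1e-3                          # cells, length [m], viscosity [Pa s]
dx = L / n
K = np.where(np.arange(n) < n // 2, 1e-12, 1e-13)   # piecewise constant permeability [m^2]
p_left, p_right = 2e5, 1e5                          # boundary pressures [Pa]

def face_transmissibility(k1, k2):
    """Two half-cell transmissibilities combined in series (harmonic average)."""
    t1 = k1 / (mu * dx / 2)
    t2 = k2 / (mu * dx / 2)
    return t1 * t2 / (t1 + t2)

A = np.zeros((n, n))
b = np.zeros(n)
for i in range(n - 1):                  # interior faces
    t = face_transmissibility(K[i], K[i + 1])
    A[i, i] += t
    A[i, i + 1] -= t
    A[i + 1, i + 1] += t
    A[i + 1, i] -= t

t_l = K[0] / (mu * dx / 2)              # Dirichlet boundaries via half-cell transmissibilities
t_r = K[-1] / (mu * dx / 2)
A[0, 0] += t_l
b[0] += t_l * p_left
A[-1, -1] += t_r
b[-1] += t_r * p_right

p = np.linalg.solve(A, b)
print("cell-centre pressures at the permeability jump:", p[n // 2 - 1], p[n // 2])
```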

3.3 Linearization and Newton’s Method The partial differential equations for multiphase flow are typically characterized by a high degree of nonlinearity, predominantly caused by the relationships between the capillary pressure and the saturation, as well as between the relative permeabilities and the saturation. Also, constitutive relationships for fluid properties like density, viscosity or enthalpy contribute to the non-linearity. An iterative numerical non-linear solution of such a system of equations requires an appropriate linearization scheme. A commonly applied method in this context is Newton’s method (or: Newton-Raphson method). After temporal and spatial discretization, as described in the preceding section, a typically large system of nonlinear equations has to be solved at each time step. This system can be written as F(x) = 0 , (3.24) where x holds the primary variables pw and Sn in case of the ( pw , Sn ) formulation at the geometrical degrees of freedom, for example, the vertices in case of the box method or the cell centres in case of a cell-centred method.

3.3.1 Fully-Coupled Solution

Equation (3.24) has to be solved for $\mathbf{x}$. For the non-linear iteration step $m+1$ at time level $k+1$, a Taylor-series expansion neglecting all terms higher than first order yields

$$\mathbf{F}(\mathbf{x}^{k+1,m+1}) \approx \mathbf{F}(\mathbf{x}^{k+1,m}) + \left(\frac{\partial \mathbf{F}}{\partial \mathbf{x}}\right)_{k+1,m} \cdot \left(\mathbf{x}^{k+1,m+1} - \mathbf{x}^{k+1,m}\right). \qquad (3.25)$$

As $\mathbf{F}(\mathbf{x}^{k+1,m+1})$ must become zero, we can transform Eq. (3.25) into

$$\mathbf{K}(\mathbf{x}^{k+1,m})\,\mathbf{u} = -\mathbf{F}(\mathbf{x}^{k+1,m}). \qquad (3.26)$$

Here, $\mathbf{K} = \partial\mathbf{F}/\partial\mathbf{x}$ represents the Jacobian matrix and $\mathbf{u} = \mathbf{x}^{k+1,m+1} - \mathbf{x}^{k+1,m}$ is the vector that holds the corrections of the primary variables. $\mathbf{F}(\mathbf{x}^{k+1,m})$ stands for the defect at time level $k+1$ and non-linear iteration step $m$. The Jacobian matrix can be computed exactly if the derivatives with respect to the primary variables can be found analytically. With increasing complexity of the system equations, this is no longer practicable; the Jacobian is then computed by numerical differentiation.

A non-linear solution procedure with Newton's method could look like this:

    Choose x^{k+1,0}; set m = 0;
    while ( ||F(x^{k+1,m})||_2 / ||F(x^{k+1,0})||_2 > eps_nl  and  ||F(x^{k+1,m})||_2 > abs_nl ) {
        Solve K(x^{k+1,m}) u = -F(x^{k+1,m}) with accuracy eps_lin resp. abs_lin;
        x^{k+1,m+1} = x^{k+1,m} + eta * u;
        m = m + 1;
    }

$\varepsilon_{nl}$ and $\varepsilon_{lin}$ are relative accuracy criteria for the nonlinear and the linear solution; $\mathrm{abs}_{nl}$ and $\mathrm{abs}_{lin}$ are absolute stopping criteria for the nonlinear and the linear solver. $\|\cdot\|_2$ is the Euclidean vector norm and $\eta$ is a damping factor for the update of the primary variables. The system

$$\mathbf{K}\,\mathbf{u} = \mathbf{f} \qquad (3.27)$$

is the Jacobian system to be solved by a linear solver. The numerical simulator DuMux, which is the basis of all our development work on multiphase flow models, includes a number of different direct and iterative linear solvers that are based on the DUNE module dune-istl.
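The following Python sketch (not taken from the book) mirrors the procedure above for a small algebraic test system: the Jacobian is obtained by numerical differentiation and a direct solver takes the place of the iterative linear solvers mentioned for DuMux; the test residual, the damping factor and all tolerances are illustrative.

```python
import numpy as np

# Minimal sketch (not from the book) of the damped Newton procedure above.
# The Jacobian is computed by numerical differentiation; the test residual F,
# the damping factor eta and all tolerances are illustrative.

def residual(x):
    """A small nonlinear test system F(x) = 0 with the solution (1, 1)."""
    return np.array([x[0] ** 2 + x[1] ** 2 - 2.0,
                     np.exp(x[0] - 1.0) + x[1] ** 3 - 2.0])

def numerical_jacobian(f, x, eps=1e-8):
    """Forward-difference approximation of dF/dx, one primary variable at a time."""
    jac = np.zeros((len(x), len(x)))
    f0 = f(x)
    for j in range(len(x)):
        xp = x.copy()
        xp[j] += eps
        jac[:, j] = (f(xp) - f0) / eps
    return jac

def newton(f, x0, eps_nl=1e-8, abs_nl=1e-12, eta=1.0, max_iter=25):
    x = x0.copy()
    norm0 = np.linalg.norm(f(x))
    for m in range(max_iter):
        r = f(x)
        if np.linalg.norm(r) / norm0 <= eps_nl or np.linalg.norm(r) <= abs_nl:
            return x, m
        u = np.linalg.solve(numerical_jacobian(f, x), -r)   # solve K u = -F
        x = x + eta * u                                     # damped update
    return x, max_iter

x, iterations = newton(residual, np.array([1.5, 0.8]))
print("solution:", x, "found after", iterations, "Newton steps")
```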

3.3.2 Example of a Sequential Iterative Scheme: Flow and Geomechanics

Here, we present an exemplary discussion of solution strategies, in this particular case related to solving flow equations (see Sect. 2.2.1) on the one hand and equations describing geomechanics (see Sect. 2.2.2) on the other hand. The respective discretized equations can be solved fully implicitly or sequentially, with a variety of strategies for solving them sequentially with or without iterations. What we present here is essentially based on the work by Beck (2018), Beck et al. (2020). Regarding the particular topic of coupling the mathematical/numerical description of flow to geomechanics, there is a vast amount of literature, and a substantial discussion of it is not our intention here. Later in this book, by means of specific case studies on nuclear waste storage (Chap. 8), hydraulic fracturing (Chap. 7) and waste-fluid injection (Sect. 9.1.3), this topic and its importance will be further elaborated. Here, we refer only to one of the most well-known approaches in the scientific community, which is, from our point of view, the TOUGH-FLAC simulator, see e.g. Rutqvist et al. (2002).

Let us assume that we need to solve a system of equations consisting of mass balance equations for a fluid phase or multiple fluid phases, see e.g. Eq. (2.11), as well as a momentum balance equation for the solid as in Eq. (2.21). In their discretized versions, the flow equations (Eq. 2.11) contain, for example, the vector of pressure values of the wetting phase and the saturation values of the non-wetting phase as primary unknowns. In the following, we will use "2p" when we refer to the two-phase model or the two-phase equations. The vector of displacements $\mathbf{u}$ is the primary variable in the balance equation for linear momentum of the solid phase (Eq. 2.21), and we refer to this system below as "el". We now use a notation where both are combined. The left-hand side of the balance equations is referred to by the residual vector $\mathbf{r}(\hat{\mathbf{w}}^{n-1}, \hat{\mathbf{w}}^{n})$ with the components $\mathbf{r}_{2p}$ and $\mathbf{r}_{el}$ for the two-phase system and for the linear-elastic momentum balance:

$$\mathbf{r}(\hat{\mathbf{w}}^{n-1}, \hat{\mathbf{w}}^{n}) = \begin{pmatrix}\mathbf{r}_{2p}\\ \mathbf{r}_{el}\end{pmatrix} = \mathbf{0}. \qquad (3.28)$$

The residual vector $\mathbf{r}(\hat{\mathbf{w}}^{n-1}, \hat{\mathbf{w}}^{n})$ is calculated from the solution vectors $\hat{\mathbf{w}}^{n-1}$ and $\hat{\mathbf{w}}^{n}$. These solution vectors contain the discrete values of two subsequent time steps $t^{n-1}$ and $t^{n}$ for all nodes. Both solution vectors (here elaborated for $\hat{\mathbf{w}}^{n}$ exemplarily) include a 2p-part $\hat{\mathbf{w}}_{2p}^{n}$ with $\hat{\mathbf{p}}_w^{n}$ and $\hat{\mathbf{S}}_n^{n}$ as well as an elastic part $\hat{\mathbf{w}}_{el}^{n}$ with $\hat{\mathbf{u}}^{n}$:

$$\hat{\mathbf{w}}^{n} = \begin{pmatrix}\hat{\mathbf{w}}_{2p}^{n}\\ \hat{\mathbf{w}}_{el}^{n}\end{pmatrix} \qquad (3.29)$$

with

$$\hat{\mathbf{w}}_{2p}^{n} = \begin{pmatrix}\hat{\mathbf{p}}_w^{n}\\ \hat{\mathbf{S}}_n^{n}\end{pmatrix} \quad\text{and}\quad \hat{\mathbf{w}}_{el}^{n} = \hat{\mathbf{u}}^{n}. \qquad (3.30)$$

Since the system arising from these residual equations is strongly non-linear, we employ a Newton scheme for linearization, see the previous Sect. 3.3.1. In the k-th iteration at time $t^{n}$, $\hat{\mathbf{w}}^{n,k}$ denotes the k-th estimate of the solution. The Jacobian matrix $\mathbf{J}^{k}$ is used to calculate the new solution for $k+1$, where $\mathbf{J}^{k}$ is the matrix that contains all first-order derivatives of the residual vector in the k-th iteration:

$$\mathbf{J}^{k} = \left(\frac{\partial \mathbf{r}}{\partial \hat{\mathbf{w}}}\right)^{k}. \qquad (3.31)$$


The vector holding the new solution $\hat{\mathbf{w}}^{n,k+1}$ can then be obtained from solving

$$\mathbf{J}^{k}\,\Delta\hat{\mathbf{w}} = -\mathbf{r}^{k}, \qquad (3.32)$$

where the update of the solution $\Delta\hat{\mathbf{w}}$ needs to be added to the previous solution vector $\hat{\mathbf{w}}^{n,k}$:

$$\hat{\mathbf{w}}^{n,k+1} = \hat{\mathbf{w}}^{n,k} + \Delta\hat{\mathbf{w}}. \qquad (3.33)$$

Equation (3.28) represents the block-wise structure of the residual vector $\mathbf{r}$. As a consequence, the Jacobian system solved within Newton's method for each of those updates can be written as

$$\begin{pmatrix}\mathbf{J}_{2p,2p} & \mathbf{J}_{2p,el}\\ \mathbf{J}_{el,2p} & \mathbf{J}_{el,el}\end{pmatrix}^{k} \begin{pmatrix}\Delta\hat{\mathbf{w}}_{2p}\\ \Delta\hat{\mathbf{w}}_{el}\end{pmatrix} = -\begin{pmatrix}\mathbf{r}_{2p}\\ \mathbf{r}_{el}\end{pmatrix}^{k}, \qquad (3.34)$$

where $\mathbf{J}_{a,b}$ stands for a balance equation $a$ for which derivatives are computed with respect to each entry in the solution vector $b$. For example, $\mathbf{J}_{2p,2p}$ is the derivative of the fluid's mass balance equation with respect to $\hat{\mathbf{p}}_w$ and $\hat{\mathbf{S}}_n$. The components $\mathbf{J}_{2p,el}$ and $\mathbf{J}_{el,2p}$ represent the coupling between flow and geomechanics.

3.3.3 Fully-Coupled Scheme

A fully-coupled solution procedure requires in each time step a simultaneous solution for the primary unknowns $p_w$ and $S_n$ from the two-phase flow system, as well as $\hat{\mathbf{u}}$ for the geomechanics. Then, each Newton update can be calculated from the linearized system given in Eq. (3.34). This approach is also denoted as monolithic when it employs the full matrix $\mathbf{J}$.

3.3.4 Sequentially Iterative Fixed-Stress Scheme

A scheme which solves sequentially for the unknowns in the systems of flow and of geomechanics requires the Jacobian $\mathbf{J}$ to be split up. This in turn necessitates an alteration of the coupling blocks $\mathbf{J}_{2p,el}$ and $\mathbf{J}_{el,2p}$. A sequential solution requires the decoupling of an actually coupled physics, which requires assumptions on which aspect of the physics is compromised. The fixed-stress split is a scheme (Kim et al. 2011; Mikelić and Wheeler 2013) which assumes that the stress state is fixed in both parts of the overall problem of flow and geomechanics. The solution procedure first computes the flow problem and then proceeds with the solution of the geomechanical problem. We refer to this as a coupling step, while several of these coupling steps then yield the iterative scheme.


We have already said that the stress is assumed to be constant at the transfer from one part of the model to the next. In other words and more precisely: the difference between the volumetric stress $\sigma_{v,2p}^{n,i}$ of the solution of the flow problem and the volumetric stress $\sigma_{v,el}^{n,i-1}$ of the previous solution of the geomechanical problem is zero:

$$\sigma_{v,2p}^{n,i} - \sigma_{v,el}^{n,i-1} = 0. \qquad (3.35)$$

Here, $i$ denotes the index of a coupling step. Before, we stated that a coupling step begins with the solution of the flow problem. This means that the last geomechanical solution has already been calculated in the previous coupling step and, consequently, is denoted with the index $i-1$. For the geomechanical part of the problem, the pressure from the previous flow problem is prescribed, so $p_{el}^{n,i}$ and $p_{2p}^{n,i}$ are equal within a coupling step $i$:

$$p_{el}^{n,i} = p_{2p}^{n,i}. \qquad (3.36)$$

Thus, one can express Eq. (3.35) by the pressures $p_{2p}^{n,i}$ and $p_{2p}^{n,i-1}$ and the volumetric strains $\varepsilon_{v,2p}^{n,i}$ and $\varepsilon_{v,el}^{n,i-1}$:

$$\left(K_{dr}\,\varepsilon_{v,2p}^{n,i} - \alpha\, p_{2p}^{n,i}\right) - \left(K_{dr}\,\varepsilon_{v,el}^{n,i-1} - \alpha\, p_{2p}^{n,i-1}\right) = 0. \qquad (3.37)$$

Here, $\alpha$ is the Biot coefficient and $K_{dr}$ the drained bulk modulus. The coupling to the flow part is due to the dependence of the porosity on the volumetric strain (Eq. 2.15). Using Eq. (3.37) allows the formulation of the flow problem independently of the current displacement vector $\mathbf{u}$ by calculating the volumetric strain $\varepsilon_{v,2p}^{n,i}$ of the flow problem in the current coupling step $i$ from the pressure difference $p_{2p}^{n,i} - p_{2p}^{n,i-1}$ of the current and the previous coupling step of the 2p-problem and from the volumetric strain $\varepsilon_{v,el}^{n,i-1}$ obtained in the previous coupling step for the geomechanics part of the problem:

$$\varepsilon_{v,2p}^{n,i} = \frac{1}{K_{dr}}\left(p_{2p}^{n,i} - p_{2p}^{n,i-1}\right) + \varepsilon_{v,el}^{n,i-1}. \qquad (3.38)$$

Here, Biot's coefficient $\alpha$ is assumed to be 1. The two-phase system now depends on the primary variables $p_w^{n,i}$ and $S_n^{n,i}$, since the pressure $p$ is the effective pressure $p_{\mathrm{eff}}$, which is a function of saturation:

$$p_{\mathrm{eff}} = S_w p_w + S_n p_n. \qquad (3.39)$$

Once the flow problem is solved with the fixed-stress condition included, $p_w^{n,i}$ and $S_n^{n,i}$ are transferred to the balance equation for linear momentum in the geomechanics problem. There, the vector of displacement $\mathbf{u}^{n,i}$ is the only primary unknown. Thus, we have two separate and sequentially solved Jacobian systems:

$$\left[\tilde{\mathbf{J}}_{2p,2p}\right]^{k} \begin{pmatrix}\Delta\hat{\mathbf{p}}_w\\ \Delta\hat{\mathbf{S}}_n\end{pmatrix} = -\left[\mathbf{r}_{2p}\right]^{k}, \qquad (3.40)$$

$$\left[\tilde{\mathbf{J}}_{el,el}\right]^{k} \Delta\hat{\mathbf{u}} = -\left[\mathbf{r}_{el}\right]^{k}. \qquad (3.41)$$

$\tilde{\mathbf{J}}_{2p,2p}$ and $\tilde{\mathbf{J}}_{el,el}$ are the derivatives of the modified balance equations: $\tilde{\mathbf{J}}_{2p,2p}$ includes the fixed-stress condition. Thus, the volumetric strain of the current Newton iteration step $k$ and coupling step $i$ within one time step $n$ can be determined from

$$\varepsilon_{v,2p}^{n,i,k} = \frac{1}{K_{dr}}\left(p_{\mathrm{eff},2p}^{n,i,k} - p_{\mathrm{eff},2p}^{n,i-1}\right) + \varepsilon_{v,el}^{n,i-1}, \qquad (3.42)$$

while the effective pressures from the 2p-problem are prescribed within $\tilde{\mathbf{J}}_{el,el}$. The update of the porosity is obtained from $\phi_{\mathrm{eff}} = \phi_0 + \varepsilon_v$. It should be mentioned that a fixed-stress split which employs only one coupling step yields effectively exactly the same result as a computation of the porosity change from a given pore compressibility by $\phi_{\mathrm{eff}} = \phi_0 + \frac{1}{K_{dr}}(p - p_0)$. In other words, the previous coupling step is essentially the previous time step when no iterations are carried out. $\varepsilon_v^{i-1}$ equals zero for the first time step; for all following time steps, it is only a function of the pressure difference between the previous time step and the current one. Since these differences eventually collapse into the difference between the initial pressure and its value of the current solution, such a simplified approach is equivalent to employing the pore compressibility or the drained bulk modulus, respectively:

$$\begin{aligned}
\phi_{\mathrm{eff}}^{n} &= \phi_0 + \varepsilon_v\\
&= \phi_0 + \frac{1}{K_{dr}}\left(p_{\mathrm{eff},2p}^{n} - p_{\mathrm{eff},2p}^{n-1}\right) + \varepsilon_{v,el}^{n-1}\\
&= \phi_0 + \frac{1}{K_{dr}}\left(p_{\mathrm{eff},2p}^{n} - p_{\mathrm{eff},2p}^{n-1}\right) + \frac{1}{K_{dr}}\left(p_{\mathrm{eff},2p}^{n-1} - p_{\mathrm{eff},2p}^{n-2}\right) + \varepsilon_{v,el}^{n-2}\\
&= \phi_0 + \frac{1}{K_{dr}}\left(p_{\mathrm{eff},2p}^{n} - p_{\mathrm{eff},2p}^{n-2}\right) + \varepsilon_{v,el}^{n-2}\\
&\;\;\vdots\\
&= \phi_0 + \frac{1}{K_{dr}}\left(p_{\mathrm{eff},2p}^{n} - p_{\mathrm{eff},2p}^{0}\right). \qquad (3.43)
\end{aligned}$$

In such a case, the geomechanics are not fed back into the flow problem. It can thus be denoted as a zero-iteration case.
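To make the interplay of the flow step, the mechanics step and the frozen stress tangible, the following Python sketch (not taken from the book) applies the fixed-stress idea to a deliberately small, linear two-cell poroelastic toy problem over one implicit time step and compares the iterates with the monolithic solution of the same linear system; the storage coefficient, Biot coefficient, drained modulus, inter-cell flow coefficient and source term are illustrative values, and the toy mechanics (confined column, equal total stress in both cells) is an assumption made only for this sketch.

```python
import numpy as np

# Minimal sketch (not from the book): fixed-stress split for a linear,
# two-cell poroelastic toy problem over one implicit time step, compared
# with the monolithic (fully-coupled) solution. Toy mechanics: confined
# column, i.e. eps2 = -eps1 and equal total stress in both cells. Fluid is
# injected into cell 1. All parameter values are illustrative.

S0, alpha, K_dr = 1e-9, 1.0, 5e9   # storage [1/Pa], Biot coefficient [-], drained modulus [Pa]
a = 2e-10                          # inter-cell flow coefficient dt*T/V [1/Pa]
p0, Q = 1e5, 1e-4                  # initial pressure [Pa], injected volume fraction [-]

# Monolithic reference, unknowns (p1, p2, eps1):
#   flow 1:     S0*(p1 - p0) + alpha*eps1 + a*(p1 - p2) = Q
#   flow 2:     S0*(p2 - p0) - alpha*eps1 + a*(p2 - p1) = 0
#   mechanics:  2*K_dr*eps1 - alpha*(p1 - p2)           = 0
A = np.array([[S0 + a, -a, alpha],
              [-a, S0 + a, -alpha],
              [-alpha, alpha, 2 * K_dr]])
p1_ref, p2_ref, eps1_ref = np.linalg.solve(A, [Q + S0 * p0, S0 * p0, 0.0])

# Fixed-stress iterations: in the flow step the total stress is frozen at its
# previous value, so eps_i is estimated as (sigma_old + alpha*p_i)/K_dr,
# cf. Eq. (3.38); the mechanics step then updates strain and stress.
p1, p2, eps1 = p0, p0, 0.0
sigma_old = K_dr * eps1 - alpha * p1          # consistent initial total stress
for i in range(1, 30):
    c = S0 + alpha ** 2 / K_dr + a
    Af = np.array([[c, -a], [-a, c]])
    bf = np.array([Q + S0 * p0 - alpha * sigma_old / K_dr,
                   S0 * p0 - alpha * sigma_old / K_dr])
    p1_new, p2_new = np.linalg.solve(Af, bf)               # flow step
    eps1 = alpha * (p1_new - p2_new) / (2 * K_dr)          # mechanics step
    sigma_old = K_dr * eps1 - alpha * p1_new
    print(f"coupling step {i}: p1 = {p1_new:.4e} Pa, "
          f"deviation from monolithic = {abs(p1_new - p1_ref):.2e} Pa")
    if abs(p1_new - p1) < 1.0:                             # pressure change below 1 Pa
        break
    p1, p2 = p1_new, p2_new
```

In this toy setting, the deviation from the monolithic result decreases monotonically with each coupling step, while stopping after the first step corresponds to the zero-iteration (pore-compressibility) case described above.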


Fig. 3.3 Evolution of the calculated pressure during sequential fixed-stress iterations compared with the value achieved with the fully-coupled scheme as the reference. It can be clearly seen that the fixed-stress sequential scheme converges to the fully-coupled solution

3.3.5 Exemplary Scenario Evaluation

Beck (2018) provides a comparison between the above-mentioned sequentially iterative fixed-stress approach and a fully-coupled reference solution of a fluid-injection scenario, where the pressure evolution at the injection is used as an indicator of accuracy. Figure 3.3 shows the evolution of the pressure as calculated during a series of fixed-stress iterations during one time step of the injection scenario. It can be seen how the pressure approaches the reference solution which was obtained during the same time step calculated by the fully-coupled scheme. As the plot indicates, the iterative scheme requires 4 to 5 iterations to produce a pressure value that is very close to the reference. One can also see that the iteration converges monotonically to the reference solution.

The general question is how many iteration steps should be used for such a sequentially iterative procedure. While Fig. 3.3 suggests 4 to 5, many model couplings use sequential schemes without iterations. This would correspond to the leftmost pressure point in Fig. 3.3. It should be noted that a difference in the pressures predicted by the two models (fully coupled and sequential) only occurs during transient conditions, when the effects of compressibility play a role. Figure 3.4 shows exemplarily the evolution of the injection pressure during a CO2 injection into a brine aquifer, thus a strongly transient process with a pressure peak followed by tailing due to the effects of relative permeability. In this case, the fully-coupled solution is compared to the sequential fixed-stress scheme without iteration, which can be considered a scheme that calculates flow by simply including an approach for pore compressibility, as we explained above. The difference between both curves is clearly visible, although the non-iterated sequential scheme still provides reasonable results.

The two important aspects of a sequential versus fully-coupled solution procedure are robustness and efficiency. While the fixed-stress scheme as introduced above has allowed a robust solution converging to the fully-coupled reference (Fig. 3.3), the question is whether this can also be achieved with a gain in efficiency. The above-mentioned study by Beck has shown that, theoretically, two fixed-stress coupling steps could be carried out in order to benefit from the reduced effort in the solution of the linear systems compared with the fully-coupled scheme. However, this is usually not as accurate as the fully-coupled procedure. Thus, it depends on the degree of requested accuracy whether an iterative scheme can offer computational benefits. We have seen exemplarily that the zero-iteration case (Fig. 3.4) gives reasonable, while not exact, results. The accuracy of coupling methods in problems as complex as this coupling of flow with geomechanics is often not the subject of investigation, since fully-coupled reference solutions are not available.

Fig. 3.4 Comparison of pressure evolutions for a two-phase flow injection scenario (CO2 in brine). The solution of the fixed-stress scheme without iteration deviates from the reference solution of the fully-coupled scheme; an iterated fixed-stress scheme would provide the same curve as the fully-coupled scheme (not shown)

3.4 Primary Variables for Compositional Models

The choice of appropriate primary variables for which the systems of partial differential equations are solved is a key issue for the solution process. This holds true, in particular, for compositional systems where fluid phases appear or disappear due to processes like evaporation, condensation, dissolution, etc. We define a phase state as a distinct set of separate fluid phases within an elementary volume.

3.4.1 Degrees of Freedom According to Gibbs' Phase Rule

The number of state variables required to describe a multiphase multi-component system uniquely is determined by the number of degrees of freedom, which is given by Gibbs' phase rule. The number of independent state variables $F$ (degrees of freedom) depends on the number of components $C$ and the number of fluid phases $P$. In general, the phase rule states for a system in thermodynamic (thermal, chemical and mechanical) equilibrium

$$F = C - P + 2. \qquad (3.44)$$

For the description of multiphase multi-component systems in porous media, this approach requires modifications. For porous-media systems at the scale of a representative elementary volume (REV), the phase saturations increase the number of degrees of freedom by $P - 1$. Thus, we end up with

$$F = C - P + 2 + (P - 1) = C + 1. \qquad (3.45)$$

For a thorough thermodynamic derivation of Eq. (3.44) and the equations used below to introduce the caloric state variables, we refer to the literature, e.g. Atkins (1996) or other textbooks of thermodynamics. Many multiphase models for flow in porous media assume isothermal conditions. This reduces the degrees of freedom by 1 and leaves $F = C$. After determining the number of degrees of freedom $F$ via the model assumptions, it is necessary to choose a set of $F$ primary variables. The choice of the primary variables affects the numerical solution procedure and is predominantly motivated by the local phase state and the processes leading to the appearance or disappearance of phases.

3.4.2 An Algorithm to Substitute Primary Variables

Models for simulating non-isothermal processes, for example, in water-gas-NAPL systems (as in Chap. 9) should typically be able to consider the disappearance and appearance of phases. In NAPL recovery or thermally enhanced soil remediation problems, the disappearance of the NAPL phase after the cleanup and the re-condensation/appearance of NAPL in cooler regions are important to address. The disappearance of liquid water is relevant in regions where superheated steam is injected or where thermal wells with very high temperatures lead to a complete evaporation of the liquids. A check on the phase states and corresponding primary variables after each Newton step ensures that the model does not converge with a wrong phase state after a non-linear solution process.

A crucial issue concerning the robustness of a primary variable substitution algorithm is the definition of appropriate substitution criteria. For disappearing phases, this can be easily indicated by negative values of the phase saturations, whereas the appearance of phases requires a closer inspection of the physical processes behind it. For example, the appearance of a liquid phase resulting from the condensation of the corresponding component can be indicated by comparing the partial pressure of the component in the gas phase with the saturation vapour pressure. If the partial pressure exceeds the saturation vapour pressure, then condensation occurs and a liquid phase is formed. Another typical case is the degassing of dissolved gas components, for example, due to pressure lowering. To indicate this in the algorithm, one must compare the dissolved amount of a component with the maximum solubility.
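A minimal Python sketch (not taken from the book) of such a check is given below for a simplified air-water system; the phase-state names, the Antoine-type vapour-pressure fit and the Henry coefficient are illustrative assumptions, whereas a simulator like the one described in this book would evaluate its own fluid-property functions.

```python
# Minimal sketch (not from the book): primary-variable substitution check
# after a Newton step for a simplified air-water system. Phase-state names,
# the vapour-pressure fit and the Henry coefficient are illustrative.

def psat_water(T):
    """Approximate saturation vapour pressure of water [Pa], T in K (Antoine-type fit)."""
    return 10 ** (10.196 - 1730.63 / (T - 39.72))

def check_phase_state(state, v, T, henry_air=1e10):
    """Return (new_state, reason) based on simple appearance/disappearance criteria."""
    if state == "liquid+gas":
        if v["Sw"] < 0.0:                         # disappearance: negative saturation
            return "gas only", "liquid saturation became negative"
        if v["Sw"] > 1.0:
            return "liquid only", "gas saturation became negative"
    elif state == "gas only":
        if v["x_g_w"] * v["pg"] > psat_water(T):  # appearance of liquid: condensation
            return "liquid+gas", "partial pressure of water exceeds vapour pressure"
    elif state == "liquid only":
        if v["x_l_air"] * henry_air > v["pl"]:    # appearance of gas: degassing (Henry's law)
            return "liquid+gas", "dissolved air exceeds its solubility"
    return state, "no substitution required"

print(check_phase_state("gas only", {"x_g_w": 0.10, "pg": 1.0e5}, T=310.0))
print(check_phase_state("liquid only", {"x_l_air": 2e-5, "pl": 1.0e5}, T=310.0))
```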

3.4.3 Primary Variables for Non-isothermal Water-Gas-NAPL Systems

Table 3.1 lists the sets of primary variables that can be used for the non-isothermal three-phase three-component concept as in Class et al. (2002), Class and Helmig (2002), Class (2008). As already mentioned above, for typical scenarios in the context of the thermally enhanced remediation of NAPL-contaminated soils, it is important to model the disappearance and (re-)appearance of the NAPL phase. Thus, the phase states NWG and WG (see Table 3.1) occur most frequently. For high-temperature techniques where liquid water can also fully evaporate, it is further necessary to include state G with only the gas phase remaining. Temperatures beyond the boiling temperatures of the liquids can only occur in phase state G (gas only), where, for example, water would only exist as superheated steam.

One example of a local change of the phase state is illustrated in Fig. 3.5. The situation depicted there could occur, for example, due to a steam injection from the left-hand side. Then, a steam front would propagate through the system in the x-direction. When the front reaches the element, the temperature increases. Since there is liquid NAPL present, the temperature can only rise up to the boiling temperature of the water-NAPL mixture. Subsequently, both liquids evaporate and, after some time ($t_0 + \Delta t$), the NAPL phase disappears. The temperature can now increase further to the boiling temperature of pure water. Applying the phase states and primary variables listed in Table 3.1, one can see that a change from NWG to WG occurs at the outer left corner of the element in Fig. 3.5.

Table 3.1 Non-isothermal three-phase three-component model: phase states, corresponding primary variables and criteria for substitution in the case of phase appearance

Phase state | Present phases | Primary variables    | Appearance of water  | Appearance of NAPL   | Appearance of gas
NWG         | w, n, g        | Sw, Sn, pg, T        | –                    | –                    | –
NG          | n, g           | Sn, x_g^w, pg, T     | x_g^w pg > p_sat^w   | –                    | –
G           | g              | x_g^c, x_g^w, pg, T  | x_g^w pg > p_sat^w   | x_g^c pg > p_sat^c   | –
WG          | w, g           | x_g^c, Sw, pg, T     | –                    | x_g^c pg > p_sat^c   | –


Fig. 3.5 Process-adaptive substitution of primary variables after a local change of the phase state. Here, the NAPL phase disappears at one node during the time step of size $\Delta t$ (Class 2008)

The NAPL saturation (now $S_n = 0$) no longer represents an independent primary variable and must be replaced, for example, by the mole fraction of the NAPL component in one of the other phases (here: in the gas phase). This concept works well and is stable for processes in the unsaturated zone. If the saturated zone is to be considered, for example, in the case of a steam injection below the water table, some numerical difficulties appear, which we will briefly address below.

3.4.4 Primary Variables for Modelling Steam Injection in the Unsaturated and Saturated Zones

The injection of steam into the saturated zone may lead to a complete disappearance of the air component, unless air is part of the injected fluid. This requires particular attention. If steam is injected or liquid water is boiling, the constitution of the gas phase approaches fully-steam conditions with the fraction of the air component going towards a value of zero. If there is still a two-phase state with liquid water and gas (steam), the primary variables according to Table 3.1 are $S_w$, $p_g$, and $T$. Thus, the composition of the gas phase is calculated according to the saturation vapour pressure as a function of the temperature, which leaves only very small values for the mole fraction of air in the gas phase. Consequently, according to Henry's Law, the mole fraction of air in the water phase is very small as well.

From the literature, it is known that such very small values of the air content in the phases can affect the numerical robustness through oscillations of the solution. This phenomenon is described and explained by Ochs (2007). Another aspect in the same context is that the primary variables $p_g$ and $T$ are no longer independent of each other when the air component has disappeared. When the system contains only liquid water and water vapour, $p_g$ is equal to the saturation vapour pressure and thus a unique function of the temperature. For problems that only consider steam injection in the saturated zone, it is practical to neglect the air component and instead use a two-phase single-component model, cf. the work of Ochs (2007), Ochs et al. (2010). Then the following set of primary variables for the different phase states can be used (Table 3.2).

Table 3.2 Non-isothermal two-phase single-component model for steam injection in the saturated zone: phase states and corresponding primary variables

Phase state | Present phases | Primary variables
W           | w              | pw, T
G           | g              | pg, T
WG          | w, g           | pg(T), Sw

References

Aavatsmark I, Eigestad G, Mallison B, Nordbotten J (2008) A compact multipoint flux approximation method with improved robustness. Numer Methods Partial Differ Equ 24:1329–1360
Atkins P (1996) Physikalische Chemie, 2nd edn. Wiley-VCH
Beck M (2018) Conceptual approaches for the analysis of coupled hydraulic and geomechanical processes. PhD thesis, University of Stuttgart
Beck M, Rinaldi AP, Flemisch B, Class H (2020) Accuracy of fully coupled and sequential approaches for modeling hydro- and geomechanical processes. Comput Geosci. https://doi.org/10.1007/s10596-020-09987
Binning P, Celia MA (1999) Practical implementation of the fractional flow approach to multi-phase flow simulation. Adv Water Resour 22:461–478
Chavent G, Jaffre J (1978) Mathematical models and finite elements for reservoir simulation. North-Holland
Chen Z, Huan G, Ma Y (2006) Computational methods for multiphase flows in porous media. SIAM, Computational Science & Engineering
Class H (2008) Models for non-isothermal compositional gas-liquid flow and transport in porous media. Habilitation thesis, University of Stuttgart
Class H, Helmig R (2002) Numerical simulation of nonisothermal multiphase multicomponent processes in porous media - 2. Applications for the injection of steam and air. Adv Water Resour 25:551–564
Class H, Helmig R, Bastian P (2002) Numerical simulation of nonisothermal multiphase multicomponent processes in porous media - 1. An efficient solution technique. Adv Water Resour 25:533–550


Deuflhard P, Bornemann F (2012) Scientific computing with ordinary differential equations. Springer Science & Business Media
Edwards MG, Rogers CF (1998) Finite volume discretization with imposed flux continuity for the general tensor pressure equation. Comput Geosci 2:259–290
Fetzer T, Grüninger C, Flemisch B, Helmig R (2017) On the conditions for coupling free flow and porous-medium flow in a finite volume framework. In: Cances C, Omnes P (eds) Finite volumes for complex applications VIII - hyperbolic, elliptic and parabolic problems. FVCA 2017, vol 200. Springer Proceedings in Mathematics & Statistics
Helmig R (1997) Multiphase flow and transport processes in the subsurface. Springer
Kim J, Tchelepi H, Juanes R (2011) Stability and convergence of sequential methods for coupled flow and geomechanics: fixed-stress and fixed-strain splits. Comput Methods Appl Mech Eng 200:1591–1606
LeVeque R (1992) Numerical methods for conservation laws. Birkhäuser
McWhorter D, Sunada D (1990) Exact integral solutions for two-phase flow. Water Resour Res 26:399–413
Mikelić A, Wheeler MF (2013) Convergence of iterative coupling for coupled flow and geomechanics. Comput Geosci 17:455–461
Ochs S (2007) Steam injection into saturated porous media: process analysis including experimental and numerical investigations. PhD thesis, University of Stuttgart
Ochs S, Class H, Färber A, Helmig R (2010) Methods for predicting the spreading of steam below the water table during subsurface remediation. Water Resour Res 46:W05520
Rutqvist J, Wu Y-S, Tsang C-F, Bodvarsson G (2002) A modeling approach for analysis of coupled multiphase flow, heat transfer, and deformation in fractured porous rock. Int J Rock Mech Min Sci 39:429–442
Schneider M, Agélas L, Enchéry G, Flemisch B (2017) Convergence of nonlinear finite volume schemes for heterogeneous anisotropic diffusion on general meshes. J Comput Phys 351:80–107

Chapter 4

Software Concepts and Implementation

Coding a discretised mathematical model like the ones presented in Chaps. 2 and 3 yields a computational model in the form of software. Performing a numerical simulation corresponds to running a computational model for a specific scenario. In this chapter, we will illustrate the process of implementing and investigating computational models for the previously mentioned subsurface applications. In particular, we will focus on performing this process in a transparent and reproducible manner through open-science principles. Section 4.1 introduces the principles behind open-source code and data. Going into more detail, Sect. 4.2 introduces necessary and useful infrastructure components for open-source projects. The open-source porous-media simulator DuMux is presented in Sect. 4.3.

4.1 Open Science: Principles of Open-Source Code and Data

Free and Open-Source Software (FOSS) is omnipresent in many people's professional and private lives. FOSS components may form the basis for operating systems and other essential programs in stationary and mobile devices such as desktop computers and clusters, smartphones, satellites, or wireless receivers. The use of FOSS has become common, particularly in computational science and engineering. However, for many researchers and working groups in academia, it is still rare to develop and publish their self-written code according to open-source principles. Here, we first motivate the development of open-source research software in Sect. 4.1.1. Then, we discuss the background and principles of open-source software development in Sect. 4.1.2. The content of the section is largely based on Bilke et al. (2019), Flemisch (2013).


4.1.1 Why Develop Open-Source Research Software?

Several of the most popular open-source projects in the world have their origins in academia (e.g., Linux), covering a wide variety of academic programs and fields of study. From the point of view of scientists, whether as individual researchers, working groups, or whole communities, the following three reasons appear to be the most common for developing open-source research code. The first is that access to source code is compulsory because reproducibility is central to the scientific method. Secondly, the use of FOSS principles is supposed to improve the quality of the code and its applicability. Finally, collaboration with academic or industrial partners may be significantly aided and simplified.

The German Research Foundation DFG rephrased one of the critical components of the scientific method in its "Proposals for safeguarding good scientific practice" (Deutsche Forschungsgemeinschaft 1998): "The primary test of a scientific discovery is its reproducibility." The European Commission recommends that members adopt a wide range of open access policies to "…enable the use and reuse of scientific research results" (EU Recommendation 2018). The growing sophistication of scientific findings makes their reproducibility a highly challenging task, particularly if a result has been obtained via a computational simulation. A few decades ago, it was still feasible to describe a numerical algorithm in a journal article in such detail that interested researchers could reprogram it on their own and replicate the proposed results. Today, this is typically impossible because of the highly advanced computational models. The only feasible approach for the scientific community to replicate the findings produced by computer code is to give access to this code together with the associated input data and parameters. Ince (2010) put it into more provoking terms: "…, if you are publishing research articles that use computer programs, if you want to claim that you are engaging in science, the programs are in your possession and you will not release them then I would not regard you as a scientist; I would also regard any papers based on the software as null and void." Access to code is the first fundamental concept of free and open-source software. Clearly, having the source code alone might not be enough to ensure the research findings are reproducible. All related issues concern the scientific area of reproducible research (Fomel and Claerbout 2009; Kitzes et al. 2017). Although we are not going to explore this further here, we recognize the fact that source code is an integral component of any reproducible computational result, or of the evaluation/description of such a result.

Another reason for developing research software through open-source concepts is the hope of an improvement in code quality and a decrease in the time of technical transition for new users and developers. For many academic groups, typical software development reflects individual, fragmented work. While the generated software might demonstrate the superiority of a particular method or model for specific scenarios, it is rarely applicable to modified problem settings. The programs are, in fact, generally inaccessible to others and unable to manage actual application data such as industry-standard geological models. The code's average lifetime is essentially the length of employment of the doctoral student or postdoc that programmed it, and its reusability is severely limited.

The community's ability to look at and use the code for any reason is expected to increase the number of possibilities for finding programming mistakes and limitations of a computational model or algorithm. It requires the community, of course, to provide the code developers with feedback on their code usage experiences. This feedback would also enhance code quality, including coding style and the amount and understandability of code comments. When an active user community develops, programmers reporting issues with an open-source project may also be encouraged to not only include a problem description but also to contribute a software patch that fixes the problem. In total, the open-source development model can boost the quality, maintainability, usefulness, and robustness of code. This boost, in effect, is beneficial for the researchers who write the software and will enable the development of viable and durable programs.

Developing open-source research software can also attract industry partners, as well as promote cooperation between researchers at different institutions in joint projects. The intention is that associates and colleagues will benefit from an accelerated exchange of knowledge, as new scientific findings will be available as free software immediately. This will enhance and simplify the creation, sharing, and testing of new computational models and methods. While proprietary products may be poorly suited to manage descriptions of non-standard models or parameters, specialized FOSS solutions may be able to cope with such descriptions or can be adapted to do so.

While all claims have so far been rather generic, we focus more on porous-media research in the following. Particularly in safety-critical environments such as subsurface protection and usage, the answers of a single simulator alone are insufficient to create trust in the computational results. Joint benchmarking such as defined, advocated, and conducted in, for example, Segol (1994), Islam and Sepehrnoori (2013), Class et al. (2009), Flemisch et al. (2018), Kolditz et al. (2018) is a vital measure to cope with this deficiency. Open-source code provides the ability not only to quantify and analyse the outcomes from various simulators, but also to verify and validate their computational models, helping to improve modellers' and decision-makers' trust. A FOSS implementation needs even more characteristics and capabilities to be incorporated into everyday commercial workflows by replacing a proprietary simulator: ease of use, adoption of industry requirements for input data, and a wide variety of possibilities for running existing scenarios. MRST (Lie et al. 2012; Lie 2019) and OPM (Baxendale et al. 2018; Rasmussen et al. 2019) are examples of projects which are primarily devoted to dealing with requirements posed by the reservoir engineering industry.

It is debatable whether one can accomplish the above-mentioned positive effects only by pursuing the open-source idea, and whether such effects are more than wishful thinking and can genuinely be obtained. Fuggetta (2003) distances himself from the opinion that the desired benefits can only be achieved by following open-source principles and that open source is the solution to all these issues. He does, however, make an important exception for research software: "Nevertheless, there are particular market situations where open source is probably the only viable solution to support successful and effective software development.
Typical examples are research communities that need specific software products to support their research work.”


4.1.2 The Definitions of Free and Open-Source Software

Free and open-source software (FOSS) has a history that is as long as the history of software itself, see González-Barahona et al. (2013) and its sources. There are two major institutions behind the concept of FOSS: the Free Software Foundation (FSF), fsf.org, and the Open Source Initiative (OSI), opensource.org. Both describe FOSS similarly, but, according to the FSF, have radically different motivational histories: "Open source is a development methodology; free software is a social movement …[and] an ethical imperative" (Stallman 2016). The two organizations are briefly presented and their FOSS definitions are given and discussed in the following.

4.1.2.1 Free Software Foundation

In October 1985, the FSF was founded by Richard Stallman who has since served as the president of the foundation. The FSF describes itself as a non-profit organization "with a worldwide mission to promote computer user freedom and to defend the rights of all free software users", fsf.org/about/. The core efforts include promoting the GNU initiative, gnu.org, and lobbying to uphold the free software principle "against threats to computer user freedom." The FSF retains the Free Software Definition, gnu.org/philosophy/free-sw.html:

A program is free software if the program's users have the four essential freedoms:
• The freedom to run the program, for any purpose (freedom 0).
• The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
• The freedom to redistribute copies so you can help your neighbour (freedom 2).
• The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

It is clear from the definition that the word "free" in "free software" is not meant to imply "free of charge" but instead seeks to emphasize the freedom of the user. Nonetheless, one of the key reasons for the tremendous popularity of free and open-source software is undoubtedly the fact that, in many instances, the software is genuinely free of charge.

4.1.2.2 Open Source Initiative

The Open Source Initiative (OSI) was founded in 1998, inspired by the release of the source code for the Netscape browser and the growing adoption of Linux operating systems. In response to the FSF's missionary mentality, the term "open source" was favoured over the term "free", both to eliminate its ambiguity and to create a more pragmatic viewpoint. The OSI identifies itself as "a non-profit corporation with global scope formed to educate about and advocate for the benefits of open source and to build bridges among different constituencies in the open-source community", opensource.org/about/. Its primary task is to retain the Open Source Definition, opensource.org/docs/osd/: Open source doesn't just mean access to the source code. The distribution terms of open-source software must comply with the following criteria:

1. The license shall not restrict any party from selling or giving away the software …
2. The program must include source code, and must allow distribution in source code as well as compiled form. …
3. The license must allow modifications and derived works, and must allow them to be distributed under the same terms as the license of the original software. …
6. The license must not restrict anyone from making use of the program in a specific field of endeavour. …
…

Already the first statement of the definition demonstrates that the label "open source", just like "free", does not by itself convey the intended meaning. Even though the definition given by the OSI is more comprehensive than that of the FSF, both "lead to the same result in practice" and the terms free software and open source software "are essentially interchangeable", as described in Open Source Initiative (2018). We will not differentiate the two terms in the following and mostly employ "open source".

Selecting the correct license may constitute a decisive move for research software developers. The choice is particularly difficult because of the current availability of over 70 FSF- and OSI-conforming licenses, many of which vary only in subtle details, with possibly significant legal consequences. A clear explanation of the major alternatives can be found in Morin et al. (2012). One should normally adopt one of the licenses approved by both the OSI and the FSF. It is also advisable to pick a license from the most commonly employed ones. If the code depends on a specific software framework, typically the best option is to stick with the license of that particular framework. In practice, the two most important characteristics an open-source license can have are:

• Allowing the possibly compiled code to be linked to another code/programme that is published under a different licence. For further use of the code as part of a proprietary product, this may be essential.
• Requiring that code modifications must be published under the same licence. Licences of this type are called "copyleft" licences (Stallman 2018).

Recommending one specific licence proves to be rather difficult. Stodden (2009) argues that computational research generates an integrated research compendium consisting of the typical publication in the form of a journal article, but also research data together with the numerical simulation including the corresponding source code, plus any supplementary material. She advocates releasing the full compendium based on popular licences for its components, in compliance with a "Reproducible Research Standard" (RRS) that is yet to be established. While the RRS does not seem to have been adopted by the community yet, she makes a strong case that research software should be published under the permissive 3-clause BSD licence. A significant difficulty with selecting a licence is that a subsequent adjustment can be a rather complicated task. Anyone owning copyright on a part of the code in question would have to legally consent to a change of licence. This can be difficult if those people have already left the development team, or if anyone disagrees with the suggested change.

4.2 Infrastructure for Open-Source Projects

In the following, we give an overview of some essential infrastructural elements of an open-source project, again following the exposition in Bilke et al. (2019) and Flemisch (2013). All these elements can again be selected to be open-source products. We will look at version control and its management including code hosting and publication, automated testing, issue tracking, web presence, mailing lists and other communication utilities as well as analysis tools.

4.2.1 Version Control

Version (or revision) control is the management of changes to the code and associated files such as build system, test setups, or documentation. For the joint development of software, corresponding services and products are invaluable, and they are at least very useful for individual coding. Version control systems can be classified as either centralised or distributed. The traditional approach of a centralised version control system based on a client-server model has largely lost its relevance. In distributed version control, there is, theoretically, no single authoritative repository: all clones are repositories themselves, and data can be accessed or changed in each one individually. Even so, an authoritative repository must be agreed upon by a community. Git, git-scm.com, has evolved into the most widely employed distributed version control system over the last decade.

4.2.2 Code Hosting and Version Control Management

For code that is being developed under version control, a place is required to store the aforementioned authoritative repository. Developers and users employ such a place as a point of entry and as a reference location for updates and releases. While an academic group or institution may provide and administrate a repository on its own servers, it may be helpful to employ a popular hosting service such as GitHub, github.com, GitLab, gitlab.com, or BitBucket, bitbucket.org. These services store the repository and provide some of the necessary associated infrastructure modules, such as websites, issue trackers and discussion forums. The software that underlies GitLab is open-source, and a corresponding server can be set up and maintained by the group itself to reduce reliance on third-party providers or to ensure data protection.

4.2.3 Code Publication

It is often beneficial and sometimes mandatory, particularly in terms of reproducibility, to be able to refer to a particular instance of a code base. For example, if the findings mentioned in a scientific paper are based on a computational result, the corresponding code version should be cited adequately. In principle, this is easy to achieve with Git by referring to a commit hash or by generating/employing a tag. However, these steps do not yield a recognised persistent identifier. To resolve this, platforms such as Zenodo, zenodo.org, issue DOIs for submitted data sets. Recently, institutional data repositories have been installed at several universities and research facilities which can also provide DOIs, see, for example, darus.uni-stuttgart.de. In such a case, the version control system can quickly produce a compressed archive file of the code repository at the desired instance, which may be employed as the data set. Another advantage of this process is that an author list has to be linked to a particular code version. The author list of a scientific article introducing a piece of research software becomes outdated quickly, in the sense that citing only this single article ignores recent contributors and changes. By citing a specific code version, it is possible to acknowledge the efforts of the individuals who are actually responsible.

4.2.4 Automated Testing and Dashboards

One important element for the sustainable development of software is the writing and execution of tests. Because manually conducting these tests can be a tiresome, inefficient, and error-prone activity, the employment of test automation resources is strongly advisable. A wide variety of corresponding utilities support the creation, compilation and execution of a test suite. Widespread automated-testing tools include BuildBot, buildbot.net, Travis, travis-ci.com, and Jenkins, jenkins.io. They allow the tests to be executed automatically, e.g. at set time intervals or once a code change is committed/proposed. Each test is then classified as failed or passed, based on the particular result. As the volume of data generated by the compilation and execution of the test suite can become very large, it is highly beneficial if the automation tool offers an appropriate graphical analysis or a textual summary of the test outcomes and possibly observed problems. This material should be posted on a publicly available dashboard to assist developers and users in handling the issues.
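As a generic illustration of what such an automated test looks like at the lowest level (this example is not taken from DuMux; the integrand, reference value and tolerance are chosen only for demonstration), the following self-contained C++ program checks a numerical result against a known analytical reference and reports the outcome through its exit code, which is the signal that tools like BuildBot or CTest evaluate:

    // test_trapezoid.cpp -- minimal sketch of an automated system test:
    // compare a computed quantity with an analytical reference and signal
    // success or failure through the program's exit code.
    #include <cmath>
    #include <cstdlib>
    #include <iostream>

    // simple trapezoidal rule, standing in for "the code under test"
    double integrate(double a, double b, int n)
    {
        const double h = (b - a) / n;
        double sum = 0.5 * (std::sin(a) + std::sin(b));
        for (int i = 1; i < n; ++i)
            sum += std::sin(a + i * h);
        return h * sum;
    }

    int main()
    {
        const double pi = std::acos(-1.0);
        const double reference = 2.0;                    // analytical value of the integral of sin(x) over [0, pi]
        const double result = integrate(0.0, pi, 1000);
        const double tolerance = 1e-5;

        if (std::abs(result - reference) > tolerance) {
            std::cerr << "FAIL: deviation " << std::abs(result - reference) << "\n";
            return EXIT_FAILURE;                         // non-zero exit code marks the test as failed
        }
        std::cout << "PASS\n";
        return EXIT_SUCCESS;
    }

A test-automation tool simply runs such executables on a schedule or per commit and records their exit codes, which is what the pass/fail classification mentioned above amounts to.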


4.2.5 Issue Tracking

In software development, it is crucial to keep track of bugs in the code and of other topics, such as feature requests, in a structured way. This bookkeeping can be realised in the form of an issue-tracking utility. There is a considerable number of these tools, many free of cost and with a web application which is simple to use and can be tailored to the individual project. Code-hosting sites such as GitHub, GitLab, and BitBucket integrate issue trackers for their managed code repositories. While providing and employing such a utility is beneficial to every type of software and development team, the open-source methodology additionally allows users to submit bug reports of high quality and thus help developers improve the quality of the code. A user may figure out precisely where the bug is located and thereby take the first significant step towards fixing it. They can also solve the problem themselves and make a patch available to the developers in the form of a merge request, which can be applied directly to the code mainline. Other users then benefit immediately from the revised code.

4.2.6 Website

A designated website presents the simulator to the scientific community and the public. The site has to be easy to find and should encourage prospective users to install and evaluate the software. It should offer access to the code repository and, if appropriate, to compiled executables or packages for various platforms, as well as exemplary parameter sets and data such as benchmark case descriptions. It should also provide simple installation and usage guidelines and references to comprehensive documentation of, for example, the code, its licensing policy, programming standards, and how to contribute, as well as additional documents like tutorials or user manuals. A gallery of result pictures and other illustrations from chosen example applications, a listing of published articles and funded projects, together with a comprehensive feature list are useful for demonstrating the simulator's capabilities.

4.2.7 Mailing List and Other Communication Tools

A mailing list is still the most commonly employed tool to foster the interaction between developers and users of research software projects. Anyone interested in the project may sign up to send and receive project-specific emails. The developers can use the list to distribute important announcements, such as the publication of a new release or the discovery of a severe issue. Users are expected to ask code-related questions which the developers, or other users, can answer. Storing the mails in a searchable archive makes it easier to keep track of the topics that have been addressed. More modern solutions such as Discourse, discourse.org, combine mailing lists with elements of discussion forums. In addition, chatroom-like approaches such as Gitter, gitter.im, are becoming increasingly popular.

4.2.8 Project Analysis

Beyond individual user experiences, assessing the quality of an open-source project is a difficult challenge. Dedicated websites offer such assessments, for example Open Hub, openhub.net, "a free, public directory of free and open-source software". Once a project has a public repository available, it may be registered, and the site analyses the development of that repository. This analysis includes metrics such as the number of project commits and committers, which programming languages are employed, and how the lines of code are spread across actual code, comments, and blanks. Based on the current numbers and their progression, labels are issued which describe, for example, the development team's size and activity, along with the continuity of the development process. The site also offers an estimation of the project cost in terms of the Constructive Cost Model (Boehm 1981). The given figures may be of interest to the project developers and also to other interested parties. For example, the developers can detect a declining development activity, check whether the decline meets their expectations and decide about possible remedial measures. In addition, the labels and numbers can assist other researchers in determining whether to employ the program or perhaps to participate in the project. Lastly, prospective donors may utilise them to evaluate whether project financing is worthwhile.
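For orientation, the basic "organic-mode" form of the Constructive Cost Model estimates the development effort from the code size alone (the coefficients below are Boehm's textbook values; Open Hub may apply a different calibration):

$$E \approx 2.4\,(\mathrm{KLOC})^{1.05}\ \text{person-months},$$

so a code base of roughly 100 thousand lines of code would be rated at about $2.4 \cdot 100^{1.05} \approx 300$ person-months.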

4.3 The Porous-Media Simulator DuMux

DuMux, "DUNE for multi-{phase, component, scale, physics, domain …} flow and transport in porous media," is an open-source simulator which is mainly developed at the Department of Hydromechanics and Modelling of Hydrosystems (LH2) at the University of Stuttgart (Flemisch et al. 2011; Koch et al. 2019) (dumux.org). Its development is driven mainly by the research objectives of the LH2 department. In terms of the subsurface applications considered in this book, DuMux has been successfully applied to greenhouse gas and CO2 storage (Nordbotten et al. 2012; Ahusborde et al. 2015b; Hagemann et al. 2016; Walter et al. 2012), radioactive waste disposal (Ahusborde et al. 2015a), environmental remediation problems (Weishaupt et al. 2016), and fractured porous media (Schwenck et al. 2015; Stadler et al. 2012; Tecklenburg et al. 2016; Gläser et al. 2017). The website puma.ub.uni-stuttgart.de/group/dumux contains a continuously updated list of publications that also includes biological and technical applications.

DuMux is based on DUNE, the "Distributed and Unified Numerics Environment" (Bastian et al. 2008a, b, 2019), a "modular toolbox for solving partial differential equations (PDEs) with grid-based methods," dune-project.org. Both codes are written in C++ and make heavy use of the available generic programming techniques to achieve an optimal trade-off between generality and efficiency. The DUNE framework consists of several core and extension modules which provide, most notably, a generic interface for parallel adaptive finite element grids and corresponding implementations, as well as state-of-the-art parallel solvers for systems of linear equations. Technically, DuMux is also a DUNE module that depends on all DUNE core modules and may optionally inherit functionality from several DUNE extension modules, see Sect. 4.3.3 for more details. The outstanding capability of DuMux is the simulation of possibly complex non-isothermal multiphase compositional flow, transport and reaction processes in porous media on the Darcy scale. The associated key features are a wide range of computational models of varying physical complexity, a versatile material framework for the description of fluid-fluid and fluid-matrix constitutive relationships, a number of finite-volume discretizations and a framework for model coupling. Many other open-source codes exist which target Darcy-scale flow and transport processes, for example, MODFLOW, water.usgs.gov/ogw/modflow/ (McDonald and Harbaugh 2003), MRST, sintef.no/projectweb/mrst/ (Lie 2019), OpenGeoSys, opengeosys.org (Kolditz et al. 2012), OPM, opm-project.org (Rasmussen et al. 2019), ParFlow, parflow.org (Maxwell et al. 2015), PFloTran, pflotran.org (Lichtner et al. 2015), or PorePy, github.com/pmgbergen/porepy (Keilegavlen et al. 2017). Moreover, several PDE software frameworks in the spirit of DUNE are available which could serve as a basis for the computational description of physical processes, such as deal.II, dealii.org (Bangerth et al. 2007), Feel++, feelpp.org (Prud'Homme et al. 2012), FEniCS, fenicsproject.org (Logg et al. 2012), MOOSE, mooseframework.org (Gaston et al. 2009), or OpenCMISS, opencmiss.org (Bradley et al. 2011).

The rest of this section is structured as follows. Section 4.3.1 provides an account of the historical development of DuMux. Its underlying basic design principles are explained briefly in Sect. 4.3.2. In Sect. 4.3.3, the relevant software modules are described and the structure of the DuMux code base is outlined. Based on the applications considered in Chap. 2, Sect. 4.3.6 lists the DuMux models that are to be employed in Part B of this book.

4.3.1 Historical Development In the following, we present the history of DuMux . After a chronological account of the development of the DuMux code base from the very beginning to the 3.X release series, we focus on the more comprehensive aspects of quality assurance and reproducibilty, the Open Porous Media initiative as well as community building. The exposition closely follows the one in Bilke et al. (2019).

4.3.1.1 Initial Development and First Release

The development of DuMux started in January 2007 at the LH2 department in Stuttgart. Before this, the LH2 group developed the simulator MUFTE-UG (Helmig et al. 1998b) which was based on the PDE software framework UG (Bastian et al. 1997). Since the public development of and support for UG stopped for the time being in the early 2000s, the need to build on a different framework arose. The decision was made in favour of the C++ toolbox DUNE, the "Distributed and Unified Numerics Environment" (Bastian et al. 2008a), motivated by the offered philosophy and functionality as well as by the fact that Peter Bastian, a main developer of UG and MUFTE-UG, belonged to the core developer team of DUNE and also stayed with his working group in close proximity to the University of Stuttgart. In March 2007, a Subversion repository was set up for DuMux to host and control the code development. From that point on, every new doctoral student and postdoc at the LH2 performed his/her modelling tasks by using and enhancing the programme code in that repository. To date, all the developers of DuMux have belonged to the LH2 working group. This has naturally resulted in a continuing major goal of the project: to provide a tool that enables every developer to perform his or her research and possibly also teaching tasks. Motivated by the fact that the DUNE framework was open source and the wish to return something to the scientific community, the decision was made to publish DuMux under an open-source license. In July 2009, DuMux 1.0 was released under the GNU-GPL 2.0, see Sect. 4.1.2. While the DUNE framework is also issued under the GPL, it contains a so-called "runtime exception" which allows DUNE source files to be used as part of another software library or application without restriction. For DuMux, the standard version without the exception was chosen deliberately, motivated by the requirement that all code using DuMux also has to be released open-source.

4.3.1.2 The 2.X Release Series

DuMux 1.0 consisted of a subset of the code stored in the Subversion repository because not everything in there was adequate for public release. Since selecting a subset of a private code base for public release proved to be rather impractical, the repository was split into a stable part dumux and a development part dumux-devel that was dependent on the stable part. Public read access to the repository of the stable part was granted. In dumux-devel, new capabilities of the software such as enhanced model concepts or new constitutive relations were and are to be added and used for the research work of the LH2. Once a new capability is considered to be stable and interesting enough for the scientific community, it can be moved to the stable part. In order to ensure the quality of added code and also all other changes to the stable part, each corresponding commit to the Subversion repository had to be reviewed internally by another DuMux developer.

Since the code design still exhibited several shortcomings concerning dependencies, generality and modularity, a major refactoring of the code base was performed in the following 1.5 years. As a result, DuMux 2.0 was released in February 2011. Since this release, interfaces should be kept stable for at least one release cycle in order to achieve more sustainability and security for the growing number of users and developers. Interface changes had to be discussed beforehand and, in case of an actual change, the old interface had to be kept for the next release and its use should emit a compiler deprecation warning. Through this policy, users of the software have been able to gradually adapt their own code after switching to a new release. The same guideline is followed in the DUNE framework. However, not all developers supported this policy change, arguing that it would slow down the development process. In March 2012, this led to a fork of DuMux 2.2 in the form of the module eWoms (Lauser 2014). After a phase of DuMux changes being partially integrated into eWoms, the developments of the two modules have been independent of each other for several years now, see also the paragraph on OPM below.

Following the tendency from centralised to distributed version control and particularly the DUNE framework, the DuMux Subversion repository was converted into Git repositories in September 2015. GitLab was employed as a version control management system, the Git repositories being publicly accessible at git.iws.uni-stuttgart.de. As outlined in Sect. 4.2, GitLab was chosen as it is an open-source alternative and offers the possibility to run self-hosted on the working group's server infrastructure. Git and, in particular, GitLab, greatly facilitated the joint development process. Changes to the code base are developed in branches and may be integrated into the mainline by an approved merge request. The approval has to be granted by somebody different from the author of the change, which formalises and simplifies the aforementioned review process. The release process has been streamlined and, since release 2.4 in October 2013, DuMux has been released semi-annually in spring and autumn every year, the last release of the 2.X series being 2.12 in late 2017. Since 2.7, every release tarball is uploaded to Zenodo, thereby receiving a DOI, as for example Fetzer et al. (2017).

4.3.1.3 Transition to DuMux 3.0

During the 2.X release series, several new features were added to the code base. Many of these additions fell in line with the main intention of DuMux to be a framework for the implementation of porous-media model concepts and constitutive relations by actually providing such implementations. However, some more central additions had to be rather forced into the code base and proved to be inefficient, inconsistent with the original design ideas and/or increasingly difficult to maintain. For example, the spatial discretization of the fully-implicit models was hard-wired to the box scheme (Huber and Helmig 2000). While it was possible to add a cell-centred scheme with two-point flux approximation, the existing programme flow, data structures and interfaces didn’t fit well. Moreover, implementing a standard multi-point flux-approximation scheme (Aavatsmark 2002) proved to be extremely tedious. The rather hard dependency on the box scheme also resulted in a suboptimal discretization for the free-flow models.


While it was possible to add corresponding stabilization, the implemented method wasn't robust enough for the envisioned range of applications. Obstacles of a different nature have also been encountered: For the coupling of porous-medium and free flow, DuMux depended on the DUNE module dune-multidomain (Müthing 2015). Unfortunately, the development of this module was discontinued in 2014. Since dune-multidomain, in turn, depended on the modules dune-pdelab and dune-multidomaingrid and, therefore, on the DUNE core modules, it would have been necessary to adapt it to changes in these modules along with continuing the development and updating the dependencies of DuMux itself. The maintenance burden for this task was considered too high and a decision was made to implement a model-coupling framework based only on the DUNE core modules, offering the additional advantage of being able to design that framework in line with the requirements posed by the targeted applications. Altogether, this led to a new major release cycle that was initiated by branching off the main development line in November 2016. Due to the large number of changes and their extent across all parts of the code base, the requirement of backward compatibility was dropped. Right after the release of DuMux 2.12 in December 2017, the development branch was integrated back into the main line and an alpha release was published. It took another year before DuMux 3.0 was finally released in December 2018. Including remedies for the shortcomings mentioned above, the list of improvements is rather long, dumux/CHANGELOG.md. With the releases of DuMux 3.1 in October 2019 and 3.2 in May 2020, the semi-annual release cycle has been resumed.

4.3.1.4 Quality Assurance and Reproducibility

To assure the quality of the developed software, DuMux is accompanied by currently around 400 unit and system tests which are carried out by a BuildBot CI server at git.iws.uni-stuttgart.de/buildbot after each commit to the master branch. One guideline for developers is that every newly added feature has to be accompanied by a corresponding test. While adding tests should be a task that every researcher can perform as part of his or her daily routine, it is hard to develop a corresponding automated testing infrastructure as a mere by-product of scientific research. In the case of DuMux, this development was greatly facilitated by the "Quality assurance in software frameworks on the example of DUNE/PDELab/DuMux" project, funded by the Baden-Württemberg Ministry for Science, Research and the Arts, the Cluster of Excellence SimTech as well as the Interdisciplinary Center for Scientific Computing at the University of Heidelberg.

In order to achieve reproducibility of the computational results, the DuMux-Pub project was initiated in 2015. It provides a set of tools for the researcher to outsource the code that has been employed for a publication into a separate DUNE module, together with an install script to download and compile this code including all the necessary dependencies. Since 2015, every journal publication at the LH2 as well as every bachelor, master and doctoral thesis has to be accompanied by such a module. All resulting modules are published at git.iws.uni-stuttgart.de/dumux-pub. First efforts have been undertaken to also provide a complete runtime environment in the form of a Docker container, git.iws.uni-stuttgart.de/dumux-pub/Koch2017a. These efforts will be streamlined in the future as part of the DFG-funded project "Sustainable infrastructure for the improved usability and archivability of research software on the example of the porous-media-simulator DuMux (SusI)."

4.3.1.5 The Open Porous Media (OPM) Initiative

In 2009, the OPM initiative was born with the principal objective "to develop a simulation suite that is capable of modelling industrially and scientifically relevant flow and transport processes in porous media and bridge the gap between the different application areas of porous media modelling" (Lie et al. 2009). With the SINTEF Simulation Group (Oslo), the IRIS Reservoir Group (Bergen), the IWR (Heidelberg), the LH2 (Stuttgart) and the Centre of Integrated Petroleum Research (Bergen), five research groups initially joined OPM. Actual work on OPM started by means of two projects between 2010 and 2013: "Simulation of EOR in Clastic Reservoirs" funded by Statoil and Total as well as "A numerical CO2 laboratory" funded by CLIMIT, www.climit.no, and Statoil. During these years, SINTEF and IRIS constituted themselves as the main contributing groups to the simulator part of the emerging OPM code base, a role they still hold today. Currently, the main focus of OPM is on reservoir engineering with a particular emphasis on being able to compete with proprietary industry-standard tools. As such, the black-oil simulator Flow is a main product of OPM (Rasmussen et al. 2019). Meanwhile, several additional funding and contributing partners have joined the initiative and its spectrum has been enhanced by upscaling and visualization modules. DuMux initially contributed to OPM in the form of the eWoms module mentioned above, as the developer of eWoms joined the OPM development team. In particular, the module opm-material originated from the DuMux material framework and eWoms meanwhile constitutes the basis of the module opm-simulators. Parallel to the development of DuMux, the OPM modules have undergone significant changes since then. Up to now, the DuMux code base has not profited directly from the associated improvements by integrating (some of) these changes. Nevertheless, DuMux benefits from OPM in several respects. Directly, by the fact that DuMux has a suggested dependency on the OPM module opm-grid, which enables every DuMux user to easily employ corner-point grids that are standard in the petroleum industry. Indirectly, DuMux and OPM profit from each other in the form of joint activities and a lively exchange of ideas and details, driven by the fact that both development teams use DUNE components and applications with similar targets. This is expected to be intensified by means of the "InSPiRE" project, iris.no. As of today, DuMux is considered to be "related" to the OPM initiative, opm-project.org.

4.3.1.6 Community Building

Since going open-source with the initial release in 2009, external users of DuMux have been welcome. Between 2009 and 2018, tarballs of DuMux releases could be obtained by submitting name, institution and email address via the website. Over the years, around 1200 unique submissions from apparently "serious" email addresses could be recognised. The number of Subversion checkouts and Git clones has not been recorded so far. While these are only measures of interest rather than usage, we frequently scan scientific publications that mention DuMux. So far, about 25 articles and 6 doctoral theses from around 10 research groups have been identified as having cited and actually used DuMux for obtaining computational results without being co-authored by a current or former LH2 member.

In order to get in contact with the users, a first DuMux user meeting was held in Stuttgart in June 2015. It attracted 26 participants, 10 of whom were external users. To attract new users, a first DuMux course was given in October 2017, followed by a second one in July 2018. This proved to be an effective measure, as indicated by increased traffic on the DuMux mailing list, mainly due to former course participants. While there had been around 5 posts per month prior to the first course (apart from automated notifications by the former issue tracker), this has increased to more than 35 per month during the second half of 2018. In May 2019, the kick-off workshop for the aforementioned project SusI took place in Stuttgart, where the project investigators and 12 external users focused on identifying current shortcomings and possible remedies associated with the usability of DuMux.

With GitLab, all technical possibilities are available for users to upload contributions to the code base in the form of merge requests as well as for developers to review, discuss, improve and possibly integrate them. However, contributions from outside the LH2 have been scarce until now. Current notable exceptions are members from the neighbouring Institute of Applied Analysis and Numerical Simulation (IANS) at the University of Stuttgart as well as researchers from the Federal Waterways Engineering and Research Institute (BAW) in Karlsruhe. With the apparent increase in interest and actual usage over the last few years, we hope to increase the number of external contributors and, ultimately, developers in the future.

4.3.2 Design Principles

In the following, we briefly describe some of the design principles of the DuMux code base. More details can be found in Koch et al. (2019).

DuMux is a C++ header-only library aiming to be modular in the sense that a user can pick and combine exactly the required components to solve the problem at hand, as well as extensible by allowing an existing component to be substituted by a tailor-made replacement. Code using DuMux is supposed to be implemented in a separate DUNE module that depends on the DuMux code base. The modularity is reflected by the fact that most of the ingredients of the computational model can be exchanged easily, ranging from the participating fluids over the constitutive relations to the spatial discretization method. If none of the present implementations of a particular ingredient is able to accommodate the user's needs, it can be easily replaced. For example, a custom capillary-pressure-saturation relationship can be provided by a class in a corresponding source file that is selected on the top level of the problem definition; no modification of the original source code is necessary. Flexibility is also inherited from the DUNE basis by allowing, for example, the change of the used grid implementation within a single code line, preserving the full optimization potential of the compiler.

[Fig. 4.1 Dependencies of DUNE and DuMux modules and groups. Modules and groups shown: DUNE core, grid managers, DuMux, Course, Lecture, Pub, Appl.]

To guarantee an optimal trade-off between flexibility and efficiency, state-of-the-art C++ generic programming techniques are used both in DUNE and in DuMux. The requirements on the standard conformance of the C++ compiler are continuously adapted to benefit from the ongoing improvements in the C++ language and standard library capabilities. For example, the current versions DUNE 2.7 and DuMux 3.1 require a C++14-compliant compiler, while for subsequent releases, both projects ask for compliance with the C++17 standard.

In DuMux, the term model is employed for the description of a system of coupled PDEs, including the constitutive relations needed for closure. Many models are already implemented in DuMux, particularly PDEs describing non-isothermal multiphase compositional flow, transport and reaction processes in porous media on the Darcy scale and associated constitutive relations; see the following two subsections.
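As a minimal sketch of how such a selection "on the top level of the problem definition" can look, the following header fragment follows the type-tag/property pattern of DuMux 3 (Koch et al. 2019). The type tag name MyProblem is hypothetical, a complete setup would have to specialize further properties (problem class, spatial parameters, fluid system), and header paths or property names may differ between releases:

    #include <tuple>
    #include <dune/grid/yaspgrid.hh>
    #include <dumux/common/properties.hh>
    #include <dumux/discretization/cctpfa.hh>
    #include <dumux/porousmediumflow/2p/model.hh>

    namespace Dumux::Properties {

    // A new type tag for our application, inheriting the two-phase model
    // and a cell-centred TPFA finite-volume discretization.
    namespace TTag {
    struct MyProblem { using InheritsFrom = std::tuple<TwoP, CCTpfaModel>; };
    } // end namespace TTag

    // Exchanging the grid implementation amounts to specializing one property;
    // replacing YaspGrid by another DUNE grid manager is a one-line change.
    template<class TypeTag>
    struct Grid<TypeTag, TTag::MyProblem> { using type = Dune::YaspGrid<2>; };

    } // end namespace Dumux::Properties

The point of the sketch is not the particular choices but that such top-level specializations are the only place where the user intervenes; the original library sources remain untouched.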

4.3.3 Modules and Structure

DUNE and DuMux are organised in a modular manner. In the following, we describe the corresponding modules and their dependencies, as depicted in Fig. 4.1. Most prominently, these are the DUNE core modules which provide (interfaces for) computational grids, linear solvers and important discretization ingredients. We give a brief overview of the central DuMux module, followed by descriptions of derived (groups of) modules that are useful for learning DuMux as well as for implementation in teaching and research.

4.3.3.1 DUNE

DUNE, the Distributed and Unified Numerics Environment, is a modular toolbox for solving partial differential equations with grid-based methods (Bastian et al. 2008a, b, 2019). The current release 2.7 includes the core modules dune-common (basic classes), dune-geometry (reference elements and quadrature rules), dune-grid (grid interface and implementations), dune-istl (iterative solver template library) (Blatt and Bastian 2007), and dune-localfunctions (interface for finite element shape functions). In addition to these, DuMux can make use of several grid managers that implement the DUNE grid interface: dune-alugrid (Alkämper et al. 2016), dune-foamgrid (Sander et al. 2017), dune-spgrid (Klöfkorn and Nolte 2012), dune-subgrid (Gräser and Sander 2009), dune-uggrid (Bastian et al. 1997), all available at dune-project.org/groups/grid/, as well as opm-grid, opm-project.org. The use of DUNE as a basis on which DuMux is built (Fig. 4.1) offers several advantages. The most important is the ability to use a wide range of different grid implementations and several linear solvers without having to consider the underlying data structures of the individual implementations. This particularly includes capabilities like parallelism and adaptivity, which come at minimal additional programming cost for the user. Thus, the main part of the development of DuMux can concentrate on the implementation of physical and mathematical models.
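To give a flavour of this genericity, the following self-contained sketch (assuming a DUNE installation; the chosen grid type and resolution are arbitrary) constructs a structured grid and sums the element volumes through the generic grid-view interface. Exchanging Dune::YaspGrid for another grid manager only affects the single type alias, while the loop below stays unchanged:

    #include <array>
    #include <iostream>
    #include <dune/common/parallel/mpihelper.hh>
    #include <dune/common/fvector.hh>
    #include <dune/grid/yaspgrid.hh>
    #include <dune/grid/common/rangegenerators.hh>

    int main(int argc, char** argv)
    {
        Dune::MPIHelper::instance(argc, argv);      // required initialisation, also for sequential runs

        // the grid implementation is fixed in this single line
        using Grid = Dune::YaspGrid<2>;

        Dune::FieldVector<double, 2> upperRight(1.0);   // unit square [0,1]^2
        std::array<int, 2> cells{{10, 10}};             // 10 x 10 structured cells
        Grid grid(upperRight, cells);

        // iterate over the leaf elements via the generic grid-view interface
        double volume = 0.0;
        for (const auto& element : elements(grid.leafGridView()))
            volume += element.geometry().volume();

        std::cout << "total volume: " << volume << std::endl;   // expected: 1.0
        return 0;
    }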

4.3.4 DuMux The core functionalities of DuMux are implemented and provided in a DUNE module named “dumux,” which has required dependencies on all DUNE core modules and suggested dependencies on the above-mentioned grid implementations. The module provides all models, constitutive relations and methods that we consider to be both interesting and stable enough for wider use. The available models are grouped into four categories: • Porous-medium flow: Comprises all models for Darcy-scale flow and transport phenomena in porous media, as described in Sect. 2.2.1. In terms of physical complexity, the range is from stationary, linear, isothermal one-phase one-component fluid flow up to instationary, nonlinear, nonisothermal three-phase multi-component reactive fluid flow and transport, possibly without local chemical or thermal equilibrium. • Free flow: Several free-flow models are also available in DuMux , mainly for the purpose of being coupled to porous-medium flow models. As base models, the Navier-Stokes equations for one compressible or incompressible fluid phase are implemented. For the modelling of turbulence, a number of standard zero- and two-equation approaches is provided. All the models can be enhanced by taking component or energy transport into account.


• Geomechanics: Models in this category form the basis for the processes described in Sect. 2.2.2 by implementing the quasi-static momentum balance (2.16), allowing it to be coupled to a porous-medium flow model.
• Multidomain: A lot of research and development work has been undertaken to couple different models from one or more of the three categories listed above. This includes hybrid-dimensional coupling for, e.g., the description of fractured porous media by means of discrete fracture-matrix models or modelling the interaction of roots with the surrounding soil. It also encompasses multi-compartment processes such as the coupling of free flow with porous-medium flow. Additionally, it covers multi-process models such as the flow-induced geomechanics described in Sect. 2.2.2.

In Sect. 4.3.6, we describe, in more detail, the models that are employed for the illustrative examples in Part B of this book. All the models can or have to make use of several available central DuMux components. We describe some of them in the following, indicated by the name of the corresponding subfolder in the DuMux directory tree.

• adaptive: Infrastructure for grid adaptability, providing wrappers around the corresponding DUNE functionalities, as well as interfaces and default implementations for indicators and data transfer. An example of a model-specific specialization can be found in porousmediumflow/2p.
• assembly: A DuMux assembler consists of a global part, currently implemented in terms of a universal FVAssembler, and a local part that is specialised according to the spatial discretization. The local part, in turn, employs a local residual that is responsible for calculating volume and flux terms with respect to a sub-control volume (face). Implementations specific to the spatial discretizations are again provided in terms of a given generic interface and forward the corresponding calls to the actually employed model local residual.
• common: The components found in this subfolder implement various helpful concepts and tools to be employed throughout DuMux, ranging from physical constants over spline curves to Valgrind wrappers. Most notable here are the DuMux property system, a framework for traits classes with improved inheritance behaviour and usability, as well as the TimeLoop which realises the required top-level functionalities for running transient simulations.
• discretization: A collection of classes that are responsible for the spatial discretization. Interfaces and default implementations are provided for, e.g., global grid geometries and variables as well as local sub-control volumes/faces or the evaluation of element solutions and gradients. These are specialised for vertex-centred (Box), cell-centred (TPFA and MPFA) and staggered finite-volume approaches.
• flux: Holds constitutive relations that employ spatial gradients and yield inter-control-volume fluxes such as the laws of Darcy, Fick, Fourier and Hooke. Based on discretization-agnostic interfaces, specializations in terms of the available spatial discretizations are provided.
• io: The DuMux input/output functionalities are collected here. For reading grids and grid-based parameters, so-called GridManagers provide the means to read from various file formats (DUNE DGF, Gmsh, Eclipse, ...) into a DUNE grid. The managers or parts of a manager are specialised to accommodate the requirements of specific DUNE grid implementations. It is also possible to read solution values from VTK files, a feature that can be used for restarting simulations. Concerning output, we provide tools based on the DUNE VTK writers, as well as interfaces to Gnuplot that are mainly employed for the visualization of constitutive relations.
• linear: Provides linear solver backends. These are mostly thin wrappers around available solvers and preconditioners from dune-istl. A more sophisticated wrapper is provided by means of the parallel AMGBackend.
• material: Everything related to constitutive relations that do not directly involve spatial gradients. This ranges from properties of individual chemical species over binary diffusion coefficients to properties of fluid mixtures, as well as spatial parameters such as porosity and fluid-matrix interactions such as capillary pressure-saturation relationships. A basic design concept encountered here is the separation of the relation itself, i.e. the mathematical formula, from the state or parameters that the relation should be applied to. For example, the van Genuchten capillary pressure and relative permeabilities come in the form of the corresponding curves provided by static methods of the class VanGenuchten, while the parameters of the curves are realised by possibly spatially- or even solution-dependent objects of a class VanGenuchtenParams; a generic sketch of this separation is given after this list.
• nonlinear: Contains an implementation of Newton's method that can be employed by any DuMux model. The method is highly configurable in terms of the convergence criteria to be applied and the corresponding thresholds. It interacts with the time loop by proposing an adaptation of the time-step size depending on the convergence history.

As already outlined in Sect. 4.3.1, all DuMux components are tested continuously by means of more than 400 unit and system tests. These tests are found in the subfolder test on the top level, which itself is structured analogously to the source code folder dumux. Equally important is the documentation, which is provided in the subfolder doc in two ways: a user handbook as well as a Doxygen-based technical documentation assembled from comments in the source code.
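The following self-contained sketch illustrates the separation of relation and parameters described for the material framework above. It uses the standard van Genuchten capillary pressure formula, but the class layout and signatures are purely illustrative and do not reproduce the actual DuMux interface; the parameter values are hypothetical:

    #include <cmath>
    #include <iostream>

    // Parameters of the van Genuchten curves; in a simulator such an object could be
    // spatially varying or even solution-dependent.
    struct VanGenuchtenParams
    {
        double alpha;   // [1/Pa]
        double n;       // [-]
        double m() const { return 1.0 - 1.0/n; }
    };

    // The relation itself: stateless, provided through static methods only.
    struct VanGenuchten
    {
        // capillary pressure pc(swe) for an effective wetting saturation swe in (0, 1]
        static double pc(const VanGenuchtenParams& p, double swe)
        {
            return std::pow(std::pow(swe, -1.0/p.m()) - 1.0, 1.0/p.n) / p.alpha;
        }
    };

    int main()
    {
        VanGenuchtenParams params{1e-4, 2.0};   // illustrative sand-like values
        for (double swe : {0.2, 0.5, 0.8})
            std::cout << "pc(" << swe << ") = " << VanGenuchten::pc(params, swe) << " Pa\n";
    }

Keeping the formula in a stateless class and the coefficients in a separate parameter object is what allows the same curve implementation to be reused with spatially- or solution-dependent parameters.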

4.3.5 Derived Modules

As with any other DUNE module, it is possible to derive new modules from the DuMux core module outlined above. In the following, we describe several such modules. The first, dumux-course, constitutes the essential entry point to learn how to use and extend DuMux. Dumux-lecture contains exercises accompanying some lectures given at the LH2 department, some of which are used as illustrative examples for the case studies in Part B of this book. The group of modules dumux-pub provides the source code used in publications of the LH2 group. Finally, dumux-devel is a module and dumux-appl a group of modules utilised for research work at the LH2 and beyond.


Course

Dumux-course is a module created for the first DuMux course offered in Stuttgart in 2018. It is supposed to be used without further instructions as a tutorial for beginners. All course examples are documented and contain task descriptions and solutions. The module also contains the slides from the course. The exercises range from very basic first experiences in the form of building and running a DuMux application and explaining the structure and content of problem, parameter and main files, over the customization and extension of existing models and fluid systems, up to complex coupled state-of-the-art models exhibiting the real strengths of DuMux.

Lecture

The LH2 department offers lectures in several master study programmes, namely, Environmental and Civil Engineering, Water Resources Engineering and Management, Simulation Technology, and Computational Mechanics of Materials and Structures. In most of these lectures, computer exercises using DuMux constitute an essential part. The module dumux-lecture contains these exercises, most of them belonging to the lecture "Multiphase Modelling." Each exercise has its own folder containing the problem setup and the spatial parameter specification, as well as an input file where typically those parameters can be specified that are of educational value for the considered application. Explanations for each application are given in separate subfolders which contain LaTeX source files that can be used for generating a PDF document.

Pub

Publications using constantly evolving software packages such as DuMux raise the question of how the code can be archived in a way that allows the replication of the results at all times for all interested parties. Ideally, this accessibility should be achieved with as little effort as possible. As already described in Sect. 4.3.1, the dumux-pub group is designed to reach these objectives by a structure of subfolders for each publication, i.e. journal papers as well as doctoral, master and bachelor theses. These folders are fully-working DUNE modules containing all the relevant files to run the respective applications. In the near future, a Docker container will be provided for each publication that contains, in addition to the previously mentioned source code, the corresponding compiled executables, a standard Linux operating system and all the required dependencies to form a stand-alone integrated environment.

Devel and Appl

The actual research work based on DuMux is supposed to be performed in corresponding modules. As explained in Sect. 4.3.1, the single module dumux-devel has been used in the past and is still used today for selected efforts. In addition, several new modules have been initiated in a GitLab group dumux-appl. It is also in these modules that collaborative research work with participation from outside the LH2 group is undertaken. Many of the modules in dumux-appl are private and not visible to the public. Once a publication based on such a module is submitted or has appeared, the contents that are relevant for reproducing the results in that publication are assembled in a corresponding dumux-pub module. Once a code component of a dumux-appl module is considered to be interesting and mature enough to be included in the DuMux core, it is transferred to a corresponding branch there in order to be reviewed and potentially merged.

4.3.6 Relevant Models

In the following, we provide a few details on the DuMux models that are employed for the illustrative examples in Part B of this book. Described in terms of phase/component numbers and differentiating between isothermal and nonisothermal, these are: one-phase n-component (1pnc), two-phase two-component nonisothermal (2p2cni), three-phase three-component nonisothermal (3p3cni) and two-phase discrete-fracture-matrix (2pdfm).

4.3.7 1pnc

The one-phase n-component model is a specialization of the general multi-phase compositional model (2.11) in the case of a single fluid phase. It can be described by the mass balances

$$\phi\,\frac{\partial (\rho X^\kappa)}{\partial t} - \nabla\cdot\left(\frac{\rho X^\kappa}{\mu}\,\mathbf{K}\,(\nabla p - \rho \mathbf{g})\right) - \nabla\cdot\left(D^\kappa_{\mathrm{pm}}\,\rho\,\nabla X^\kappa\right) - q^\kappa = 0 \qquad (4.1)$$

for each component κ. One of the component mass balances is replaced by the total mass balance that results from summing (4.1) over all components κ. A particularity of the model is that it enables fluid systems that have been designed for multiple fluid phases to be reused. This is achieved by plugging such a fluid system into a so-called OnePAdapter. The 1pnc model is used for the illustrative example of the convective mixing of CO2 and brine in Sect. 6.4.2.
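To make the summation step explicit: using $\sum_\kappa X^\kappa = 1$ and $\sum_\kappa q^\kappa = q$, and assuming for simplicity that the diffusive fluxes cancel upon summation (e.g. for equal diffusion coefficients $D^\kappa_{\mathrm{pm}}$, since $\sum_\kappa \nabla X^\kappa = 0$), summing (4.1) over all components yields the total mass balance

$$\phi\,\frac{\partial \rho}{\partial t} - \nabla\cdot\left(\frac{\rho}{\mu}\,\mathbf{K}\,(\nabla p - \rho \mathbf{g})\right) - q = 0 .$$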

4.3.8 2p2cni

The two-phase two-component nonisothermal model comprises two balance equations (2.11) for two components κ in two phases α and one energy balance (2.12). Technically, the DuMux TwoPTwoC model is derived from the more general TwoPNC model, which admits an arbitrary number of components, by specializing and simplifying the calculation of secondary variables from the primary ones. It is combined with the NonIsothermal model to derive the TwoPTwoCNI model. This model is employed for the illustrative example of the heat-pipe effect in Sect. 8.4. The TwoPTwoCNI model is specialised further for problems involving CO2 and brine as components by again adapting the calculation of secondary variables as well as the primary variable switch. This specialised model is used in Sect. 6.4.1 for the example of CO2 plume shape development.

4.3.8.1 3p3cni

Analogous to the 2p2cni model described above, the model for three-phase three-component nonisothermal fluid flow is given in terms of three balance equations (2.11) for three components κ in three phases α and one energy balance (2.12). Essentially the same ingredients are used for the calculation of secondary variables from primary ones. Larger differences come in the form of dedicated capillary pressure-saturation relationships and primary variable switching mechanisms. The 3p3cni model is used for the illustrative example of remediation of contaminated soils in Sect. 9.2.4.

4.3.8.2 2pdfm

The 2pdfm model implements a discrete fracture-matrix (DFM) approach in the sense that the flow in the matrix is formulated with respect to a full-dimensional domain and grid, while the fracture flow is described by means of a lower-dimensional domain and grid. Here, two immiscible fluid phases are assumed and their flow in the matrix as well as the fracture domain is described by a phase mass balance

$$\phi\,\frac{\partial (\rho_\alpha S_\alpha)}{\partial t} - \nabla\cdot\left(\rho_\alpha\,\frac{k_{r\alpha}}{\mu_\alpha}\,\mathbf{K}\,(\nabla p_\alpha - \rho_\alpha \mathbf{g})\right) - q_\alpha = 0 \qquad (4.2)$$

for each phase α, in contrast to the component mass balance (2.11). The balance equations (4.2) are implemented in terms of the DuMux TwoP model. Matrix and fracture are coupled with the help of the multidomain framework by means of a FacetCouplingManager that is specialised for such a hybrid-dimensional coupling. While a standard grid manager can be used for the full-dimensional matrix grid, the module dune-foamgrid is employed for the lower-dimensional fracture grid. The 2pdfm model is used for the illustrative example of methane migration induced by hydraulic fracturing in Sect. 7.4.


References

Aavatsmark I (2002) An introduction to multipoint flux approximations for quadrilateral grids. Comput Geosci 6:405–432
Ahusborde E, Amaziane B, Jurak M (2015a) Three-dimensional numerical simulation by upscaling of gas migration through engineered and geological barriers for a deep repository for radioactive waste. Geol Soc Lond Spec Publ 415:123–141
Ahusborde E, Kern M, Vostrikov V (2015b) Numerical simulation of two-phase multicomponent flow with reactive transport in porous media: application to geological sequestration of CO2. ESAIM: Proc Surv 50:21–39
Alkämper M, Dedner A, Klöfkorn R, Nolte M (2016) The DUNE-ALUGrid module. Arch Numer Softw 4:1–28
Bangerth W, Hartmann R, Kanschat G (2007) deal.II – a general-purpose object-oriented finite element library. ACM Trans Math Softw (TOMS) 33:24
Bastian P, Blatt M, Dedner A, Engwer C, Klöfkorn R, Ohlberger M, Sander O (2008b) A generic grid interface for parallel and adaptive scientific computing. I. Abstract framework. Computing 82:103–119
Bastian P, Birken K, Johannsen K, Lang S, Neuß N, Rentz-Reichert H, Wieners C (1997) UG – a flexible software toolbox for solving partial differential equations. Comput Vis Sci 1:27–40
Bastian P, Blatt M, Dedner A, Dreier N-A, Engwer C, Fritze R, Gräser C, Kempf D, Klöfkorn R, Ohlberger M, Sander O (2019) The DUNE framework: basic concepts and recent developments. arXiv e-prints arXiv:1909.13672
Bastian P, Blatt M, Dedner A, Engwer C, Klöfkorn R, Kornhuber R, Ohlberger M, Sander O (2008a) A generic grid interface for parallel and adaptive scientific computing. II. Implementation and tests in DUNE. Computing 82:121–138
Baxendale D, Rasmussen A, Rustad AB, Skille T, Sandve TH (2018) Open porous media flow documentation manual. https://opm-project.org/wp-content/uploads/2018/11/OPM-FlowDocumentation-2018-10-Rev-1.pdf. Online; Accessed 20-April-2019
Bilke L, Flemisch B, Kalbacher T, Kolditz O, Helmig R, Nagel T (2019) Development of open-source porous media simulators: principles and experiences. Transp Porous Media 130:337–361
Blatt M, Bastian P (2007) The iterative solver template library. In: Kågström B, Elmroth E, Dongarra J, Waśniewski J (eds) Applied Parallel Computing. State of the Art in Scientific Computing: 8th International Workshop, PARA 2006, Umeå, Sweden, June 18-21, 2006, Revised Selected Papers. Springer, pp 666–675
Boehm B (1981) Software engineering economics. Prentice-Hall
Bradley C, Bowery A, Britten R, Budelmann V, Camara O, Christie R, Cookson A, Frangi AF, Gamage TB, Heidlauf T, Krittian S, Ladd D, Little C, Mithraratne K, Nash M, Nickerson D, Nielsen P, Nordbø Ø, Omholt S, Pashaei A, Paterson D, Rajagopal V, Reeve A, Röhrle O, Safaei S, Sebastiän R, Stegháfer M, Wu T, Yu T, Zhang H, Hunter P (2011) OpenCMISS: a multi-physics & multi-scale computational infrastructure for the VPH/physiome project. Prog Biophys Mol Biol 107:32–47. ISSN 0079-6107
Class H, Ebigbo A, Helmig R, Dahle HK, Nordbotten JM, Celia MA, Audigane P, Darcis M, Ennis-King J, Fan Y, Flemisch B, Gasda SE, Jin M, Krug S, Labregere D, Naderi Beni A, Pawar RJ, Sbai A, Thomas SG, Trenty L, Wei L (2009) A benchmark study on problems related to CO2 storage in geologic formations. Comput Geosci 13:409–434
EU Recommendation (2018) EU, Commission Recommendation of 25 April 2018 on access to and preservation of scientific information. http://data.europa.eu/eli/reco/2018/790/oj. Online; Accessed 17-April-2019
Fetzer T, Becker B, Flemisch B, Gläser D, Heck K, Koch T, Schneider M, Scholz S, Weishaupt K (2017) Dumux 2.12.0. https://doi.org/10.5281/zenodo.1115500
Flemisch B (2013) Tackling coupled problems in porous media: development of numerical models and an open source simulator. Habilitation thesis, University of Stuttgart


Flemisch B, Berre I, Boon W, Fumagalli A, Schwenck N, Scotti A, Stefansson I, Tatomir A (2018) Benchmarks for single-phase flow in fractured porous media. Adv Water Resour 111:239–258
Flemisch B, Darcis M, Erbertseder K, Faigle B, Lauser A, Mosthaf K, Müthing S, Nuske P, Tatomir A, Wolff M, Helmig R (2011) DuMuX: DUNE for Multi-{Phase, Component, Scale, Physics, ...} flow and transport in porous media. Adv Water Resour 34:1102–1112
Fomel S, Claerbout JF (2009) Reproducible research. Comput Sci Eng 11:5–7
Deutsche Forschungsgemeinschaft (1998) Proposals for Safeguarding Good Scientific Practice. Wiley-VCH
Fuggetta A (2003) Open source software - an evaluation. J Syst Softw 66:77–90
Gaston D, Newman C, Hansen G, Lebrun-Grandié D (2009) MOOSE: a parallel computational framework for coupled systems of nonlinear equations. Nucl Eng Des 239(10):1768–1778
Gläser D, Helmig R, Flemisch B, Class H (2017) A discrete fracture model for two-phase flow in porous media. Adv Water Resour 110:335–348
González-Barahona JM, Pascual JS, Robles G (2013) Introduction to free software. Free Technology Academy
Gräser C, Sander O (2009) The dune-subgrid module and some applications. Computing 86:269–290
Hagemann B, Rasoulzadeh M, Panfilov M, Ganzer L, Reitenbach V (2016) Hydrogenization of underground storage of natural gas. Comput Geosci 20:595–606
Helmig R, Class H, Huber R, Sheta H, Ewing R, Hinkelmann R, Jakobs H, Bastian P (1998b) Architecture of the modular program system MUFTE-UG for simulating multiphase flow and transport processes in heterogeneous porous media. Mathematische Geologie 2:123–131
Huber R, Helmig R (2000) Node-centered finite volume discretizations for the numerical simulation of multiphase flow in heterogeneous porous media. Comput Geosci 4:141–164
Ince D (2010) If you're going to do good science, release the computer code too. The Guardian. www.theguardian.com, Online; Accessed 23-April-2019
Islam AW, Sepehrnoori K (2013) A review on SPE's comparative solution projects (CSPs). J Petrol Sci Res 2:167–180
Keilegavlen E, Fumagalli A, Berge R, Stefansson I, Berre I (2017) PorePy: an open-source simulation tool for flow and transport in deformable fractured rocks. arXiv e-prints, arXiv:1712.00460
Kitzes J, Turek D, Deniz F (2017) The practice of reproducible research: case studies and lessons from the data-intensive sciences. University of California Press
Klöfkorn R, Nolte M (2012) Performance pitfalls in the dune grid interface. In: Dedner A, Flemisch B, Klöfkorn R (eds) Advances in DUNE. Springer, pp 45–58
Koch T, Gläser D, Weishaupt K, Ackermann S, Beck M, Becker B, Burbulla S, Class H, Coltman E, Emmert S, Fetzer T, Grüninger C, Heck K, Hommel J, Kurz T, Lipp M, Mohammadi F, Scherrer S, Schneider M, Seitz G, Stadler L, Utz M, Weinhardt F, Flemisch B (2019) DuMux 3 – an open-source simulator for solving flow and transport problems in porous media with a focus on model coupling. arXiv e-prints, arXiv:1909.05052
Kolditz O, Bauer S, Bilke L, Böttcher N, Delfs JO, Fischer T, Görke UJ, Kalbacher T, Kosakowski G, McDermott CI, Park CH, Radu F, Rink K, Shao H, Shao HB, Sun F, Sun YY, Singh AK, Taron J, Walther M, Wang W, Watanabe N, Wu Y, Xie M, Xu W, Zehner B (2012) OpenGeoSys: an open-source initiative for numerical simulation of thermo-hydro-mechanical/chemical (THM/C) processes in porous media. Environ Earth Sci 67:589–599
Kolditz O, Nagel T, Shao H, Wang W, Bauer S (eds) (2018) Thermo-hydro-mechanical-chemical processes in fractured porous media: modelling and benchmarking: from benchmarking to tutoring. Springer
Lauser A (2014) Theory and numerical applications of compositional multi-phase flow in porous media. PhD thesis, University of Stuttgart
Lichtner PC, Hammond GE, Lu C, Karra S, Bisht G, Andre B, Mills R, Kumar J (2015) PFLOTRAN user manual: a massively parallel reactive flow and transport model for describing surface and subsurface processes. Technical report, Berkeley Lab, Los Alamos National Lab, Sandia National Laboratories, Oak Ridge National Laboratory, OFM Research

References

81

Lie K-A (2019) An introduction to reservoir simulation using MATLAB/GNU Octave: user guide for the MATLAB reservoir simulation toolbox (MRST). Cambridge University Press Lie K-A, Bastian P, Dahle HK, Flemisch B, Flornes K, Rasmussen A, Rustad AB (2009) OPM – open porous media. Unpublished Lie K, Krogstad S, Ligaarden IS, Natvig JR, Nilsen HM, Skaflestad B (2012) Open-source MATLAB implementation of consistent discretisations on complex grids. Comput Geosci 16:297–322 Logg A, Mardal K-A, Wells G (2012) Automated solution of differential equations by the finite element method: the FEniCS book, vol 84. Springer Science & Business Media Maxwell R, Condon L, Kollet S (2015) A high-resolution simulation of groundwater and surface water over most of the continental US with the integrated hydrologic model ParFlow v3. Geosci Model Dev 8:923–937 McDonald M, Harbaugh A (2003) The history of MODFLOW. Ground Water 41:280–283 Morin A, Urban J, Sliz P (2012) A quick guide to software licensing for the scientist-programmer. PLoS Comput Biol 8:e1002598 Müthing S (2015) A flexible framework for multi physics and multi domain PDE simulations. PhD thesis, University of Stuttgart Nordbotten JM, Flemisch B, Gasda SE, Nilsen HM, Fan Y, Pickup GE, Wiese B, Celia MA, Dahle HK, Eigestad GT, Pruess K (2012) Uncertainties in practical simulation of CO2 storage. Int J Greenhouse Gas Control 9:234–242 Open Source Initiative (2018) What is “free software” and is it the same as “open source”? https:// opensource.org/faq#free-software. Online; Accessed 23-April-2019 Prud’Homme C, Chabannes V, Doyeux V, Ismail M, Samake A, Pena G (2012) Feel++: a computational framework for Galerkin methods and advanced numerical methods. In: ESAIM: Proceedings, vol 38. EDP Sciences, pp 429–455 Rasmussen AF, Sandve TH, Bao K, Lauser A, Hove J, Skaflestad B, Klöfkorn R, Blatt M, Rustad AB, Sævareid O, Lie K-A, Thune A (2019) The open porous media flow reservoir simulator. arXiv e-prints, arXiv: 1910.06059 Sander O, Koch T, Schröder N, Flemisch B (2017) The Dune FoamGrid implementation for surface and network grids. Arch Numer Softw 5:217–244 Schwenck N, Flemisch B, Helmig R, Wohlmuth BI (2015) Dimensionally reduced flow models in fractured porous media: crossings and boundaries. Comput Geosci 19:1219–1230 Segol G (1994) Classic groundwater simulations proving and improving numerical models. Prentice-Hall Stadler L, Hinkelmann R, Helmig R (2012) Modeling macroporous soils with a two-phase dualpermeability model. Transp Porous Media 95:585–601 Stallman R (2016) Why open source misses the point of free software. https://www.gnu.org/ philosophy/open-source-misses-the-point.en.html. Online; Accessed 23-April-2019 Stallman R (2018) The GNU project. https://www.gnu.org/gnu/thegnuproject.en.html. Online; Accessed 29-April-2018 Stodden V (2009) The legal framework for reproducible scientific research: licensing and copyright. Comput Sci Eng 11:35–40 Tecklenburg J, Neuweiler I, Carrera J, Dentz M (2016) Multi-rate mass transfer modeling of twophase flow in highly heterogeneous fractured and porous media. Adv Water Resour 91:63–77 Walter L, Binning P, Oladyshkin S, Flemisch B, Class H (2012) Brine migration resulting from CO2 injection into saline aquifers - an approach to risk estimation including various levels of uncertainty. 
Int J Greenhouse Gas Control 9:495–506 Weishaupt K, Bordenave A, Atteia O, Class H (2016) Numerical investigation on the benefits of preheating for an increased thermal radius of influence during steam injection in saturated soil. Transp Porous Media 114:601–621

Chapter 5

The Science-Policy Interface of Subsurface Environmental Modelling

Modelling activities in science are not limited to the scientific community itself, but relate to and impact other domains of society. With this chapter, we conceptually explore matters of modelling at the science-policy interface. Understanding scientific modelling as a tool and school for providing evidence-based knowledge, there are several particularities when modelling enters the sphere of policy-making and public debate. The first sub-chapter outlines knowledge transfer between science and policy, elaborates on the specifics of both the science and policy systems, and identifies knowledge transfer patterns for scientific policy advice. Subsequently, we synthesize the challenges of simulations across science-policy borders in key finding statements: the black-box character of modelling, the (un-)certainty dilemma of modelling, and simulation perception issues regarding hazard and risk assessment. Next, knowledge modes are introduced, elaborating a typology of knowledge production and knowledge communication adapted for the case of modelling. Finally, different types of research use by policy makers provide a structure for discussing processing patterns of modelling at the science-policy interface.

5.1 Knowledge Transfer Between Science and Policy

5.1.1 The Science-Policy Interface

Scientific expertise for advising policy decision makers has become a key factor within the policy and administration procedures of modern democracies. According to Wollmann (2001: 376), scientific policy advice comprises the provision of information and recommended actions for decision makers and policy makers by scientists, experts from business, industry and society. In contrast, internal policy
advice refers to expertise delivered by staff of administration agencies. Taking a scientistic perspective, epistemic policy advice follows the conception that expertise will contribute to better-informed, more rational and more evidence-based policy decisions. The rationale of evidence-based policy-making puts the results of science at centre stage in the policy process. However, scientific policy advice as an information resource competes with other influences during processes of political opinion building and decision-making.

Knowledge transfer at the science-policy interface, which provides input from science to policy decision makers and bureaucrats in government ministries, has been researched within policy science studies at a micro level. Existing studies reveal that the time spent by Members of Parliament on processing scientific expertise has decreased over time. Malbin (1980: 243, cited by Beyme (1997)), for instance, showed that US parliamentarians spent one day per week on it in 1965, while by 1977 the time budget had decreased to eleven minutes per day. Similar results are available for German Members of Parliament, although time budgets are considerably higher compared to the US. Based on self-reporting, German parliamentarians have four to five hours a week at their disposal for reading and processing scientific literature (Beyme 1997). As such, reading scientific expertise and policy advice is not the main task of policy decision makers. Rather, they engage as meeting and event participants, as networkers and consensus builders between policy and society. Administrative staff of Members of Parliament and ministries primarily carry out information processing and prepare relevant expertise by means of memos and excerpts for their superiors. These administrative elites benefit most from scientific policy advice. Heads of ministries and departments, however, read short summaries and assessments elaborated by agency staff.

As an overall result, the reception and processing of scientific expertise in agencies is collaboratively organised as a mixed bottom-up and top-down approach. At lower levels, staff members process in-depth scientific information by condensing the expertise into summaries and recommendations. At a management level, administrative elites assess and review the provided information and recommendations for action. Decision makers thus rely on selected pieces of information. Information intake and processing at the level of decision-making is selective and influenced by the prior knowledge of recipients. Therefore, the process of understanding is not synonymous with the process of learning. In addition, scientific policy advice competes with other sources of information and ways of processing and judgment. Policy makers and decision makers have different ways of arriving at positions and decisions. In this sense, the political and administrative elites seem, in a certain way, similar to lay people in their decision-making procedures. As shown by social psychology for lay people, heuristics and biases play a major role in individual judgements (Kahneman et al. 1982).

Besides focussing on individual and actor-related ways of information reception and processing, research has stressed the so-called structural and functional characteristics of the science and policy system at a macro level.
The main characteristic of the body of literature on the science-policy interface from a functional perspective is an ideal-type description of the most important features inherent in both the subsystems of science and policy. The emphasis is on outlining the functions of these subsystems in relation to sociological system theory. In essence, a functional perspective dominates the description of both subsystems as reciprocal rather than complementary. Bradshaw and Borchers (2000) outline both system rationales in the following way: while science accepts probability, policy seeks certainty. Moreover, the main feature of science is flexibility and problem orientation, while the policy system seeks rigidity and implementation orientation. The corresponding system characteristics have been sharpened towards an ideal-type and dualistic set of contradictory features. Heinrichs (2005) identified the following opposites:

• truth versus power,
• theory versus practice,
• cognition logic versus action logic,
• facts versus values,
• abstraction versus concretion,
• complex language versus simplifying language,
• long-term time horizon versus short-term time horizon,
• modifiable models versus non-recurring life circumstances,
• principles of reproducibility versus principle of irreproducibility,
• substantial rationality versus instrumental rationality.

These juxtapositions reveal a rather difficult relationship between science and policy, showing no clear-cut, simple and transparent exchange relations. Thus, the simple, one-dimensional assumption that science provides knowledge to policy so that policy makers are able and willing to make decisions on objective and rational evidence-based grounds is empirically difficult to find. On the other hand, scholars have hinted at the fact that both subsystems do not work as simply as ideal-type-based descriptions and confrontations may assume. In their view, this does not meet the reality of the complex science-policy interface. Jasanoff (1998), for instance, highlights some specific characteristics of the science system contradicting the above-mentioned ideal-type approach. She claims that science often delivers different explanations for the same phenomena, with no coherent statement given across disciplines. It is quite common that science provides different and, here and there, contradictory results (the so-called expert dilemma). Overall, some scholars have criticized the dualistic approach as too simplistic and empirically unreliable (Mayntz 1994).

Based on the results from comparative policy research in political science, empirical and conceptual work emphasizes cultural and institutional governance styles differing among nation-states and western democracies (Renn 1995). These policy styles rely on interaction patterns based on culturally-institutionalized arrangements and actor relations unique to nation-states or so-called families of nations. Renn (1995) distinguishes four governance approaches (adversarial, fiduciary, consensual and corporatist) to qualify countries within "families of nations". The perspective on historically institutionalized interaction patterns provides an explanation for the great empirical variety of real-world interaction in several policy and advisory fields.



It also supplements assumptions on purely rational and objective interaction patterns with aspects of competing pluralistic interests and rationales of power politics. Within the adversarial policy style, the access of civil society groups to the political arena is relatively open, with an element of competition for influence and power. Scientific expertise serves policy makers and decision makers in legitimizing their positions and power, with high rates of individual fluctuation among decision makers. The adversarial policy style of open power access can be seen in Anglo-American countries. The fiduciary style, in contrast, shows a rather closed and isolated circle of decision makers who fiducially define the social well-being and manage decision-making. The inner circle of decision-making maintains robust access and entry barriers against societal interest groups and stakeholders. Scientific expertise is used very selectively and tied to individual experts, chosen by decision makers according to their personal reputation and institutional embedment. Countries with a fiduciary style of governance are, for instance, France and Japan. The consensual governance style integrates important societal interest groups and stakeholders in policy formulation and decision-making from the outset. However, once access to the circle has been conceded, decision-making takes place behind closed doors. The corporatist governance style is similar to the consensual one, with the difference being that mechanisms of negotiation between stakeholders and interest groups are much more formalized. Germany in the post-war era is a good example of applying a corporatist policy style with its so-called "Rhenish capitalism" approach of close cooperation between management, labour and a comprehensive welfare state.

In conclusion, the differentiated consideration of historically developed relationship patterns between interest groups as well as executive and legislative power seems more adequate to explain empirical varieties of policy advice and decision-making. Against this background, the governance style concept complements the approach of rational trade-offs between science and policy with institutional, cultural and power-related aspects.

Let us now turn towards the science side of the science-policy interface. The science system is currently in a period of transition. Both changing internal science mechanisms and a new role of science in society have been stated as major transition aspects resulting in new forms of knowledge production and communication. The rapid change in science has two major reasons: on the one hand, the tremendous increase in internal processing capacity; and, on the other hand, a clear change in society's expectations towards science. Many scientific disciplines have considerably increased their capacities to integrate enormous quantities of data, theoretically understand and penetrate scientific problems and simulate complex problem areas by means of computer simulations. Interdisciplinary cooperation is encouraged by the newly established processing capacities of science. On the other hand, society's expectations towards science are changing. Alongside—or maybe in place of—scientific and technological reliability and accuracy, other criteria representing societal benefit and relevance are gaining importance (Nowotny 1999).
To describe the new role of science, several notions have been proposed in the literature, such as 'post-normal science' (Funtowicz and Ravetz 1993), 'mandated science' (Salter 1988) or 'Mode 2' (Gibbons 1994).



In particular, the last-mentioned study stimulated a lively debate among scholars of the philosophy of science. It can be stated that new forms of knowledge production have developed that are related neither to classical basic research nor to applied research. Due to a combination of triggering factors for science transition—increased complexity-processing capacity and higher pressure from society's expectations—new forms of science-policy interfaces have emerged. On the one hand, science and research produce new knowledge for decision-making and implementation; on the other hand and at the same time, science exports uncertainties inherent in the research process to society. A manifestation of scientific uncertainties penetrating society is the fact that counter-expertise exists in many areas of research.

Key elements of science transition towards new forms of knowledge production refer to problem orientation and new forms of research organisation. The definition of problems and questions takes place outside the scientific community and requires a translation of societal problems into adequate research questions. Besides scientific goals, non-scientific goals also have to be met and integrated into the science system. When it comes to research organisation and coordination, one may observe a loss of academic predominance towards an increase of inter- and transdisciplinary research, including corresponding institutions. This implies new requirements and criteria for quality assurance and societal accountability. The transition of the science system, in addition, impacts the field of science communication. Knowledge production and transfer are seen as unified, with, for instance, practitioners being integrated into the science process itself, and great significance placed on research dissemination and communication activities.

The transition of science, however, differs across disciplines and fields of research. New forms of knowledge production relate to policy-oriented research, that is, knowledge relevant for policy decision-making with strong interfaces between science, policy and society. Exemplary fields of research are the areas of environment, health, energy and, in a broader sense, technology policy. In these policy- and public-oriented research areas, Weingart (1997) observed a simultaneous scientification of policy and politicisation of science.

5.1.2 Knowledge Transfer Patterns at the Science-Policy Interface

The relation between science and policy ideally relies on evidence-based knowledge transfer from research towards policy-making. This comprises typologies of research use by policy and the service functions research can fulfil. In the following, we summarize the main findings from the literature concerning these aspects. In general, the main emphasis of describing knowledge transfer patterns at the science-policy interface is to highlight the supporting role of science for policy. It is widely seen that science has to contribute to and support policy with the delivery of current state-of-the-art knowledge in order to help policy decision makers debate
and make decisions on an objective and evidence-based basis. As such, the dominant view still sees science as a service provider for policy—and not vice versa. The body of literature on research use typologies is extensive and stems from both empirical work in different fields and conceptual consolidation (e.g. Weiss 1979; Nutley et al. 2007; Renn 1995; Heinrichs 2005). Weiss (1979) extracted seven types of research utilisation based on a literature review at that time. The seven different meanings of research use are shown in the following as summarized by Nutley et al. (2007: 38–40):

• The knowledge-driven model: Basic research identifies knowledge of potential value to the policy or practice community. Applied research tests this knowledge out in real-world contexts, research-based technologies are developed and implemented and research use occurs.

• The problem-solving model: Research helps policy makers find a solution to a particular problem. Researchers and policy makers agree on the nature of the problem and the goal to be achieved, and social science provides evidence and ideas to help clarify a way forward—drawing on existing research or commissioning new work.

• The interactive model: Policy makers actively and interactively search for knowledge to help support their work, drawing on multiple sources of information—including their own experience—alongside research. The relationship between policy and research is typically iterative, messy and dynamic, and progress is gradual, involving 'mutual consultations' between policy makers, researchers and other players in the political process.

• The political model: Where political opinions are long-standing and fixed or where interests have firmly coalesced, research is unlikely to have a direct influence. Instead, it may be used politically to support a particular stance or else destabilise opposing positions.

• The tactical model: Sometimes, the findings from research are irrelevant: what matters is that research is being done. Funding or conducting new research can be a way for policy makers to avoid action. Researchers may be blamed for unpopular policy outcomes; or else research 'experts' can be drafted in to give legitimacy to an agency or its policies.

• The enlightenment model: Over time, research will have a gradual and cumulative influence on the public policy sphere. Ideas, theories and ways of thinking that derive from the broad body of research-based knowledge gradually seep into the policy-making process through diverse and indirect routes, such as interest groups, journalists and the mass media. Research can thereby shape the ways both problems and their solutions are framed and can ultimately lead to fundamental shifts in the prevailing policy paradigm.

• Research as part of the intellectual enterprise of society: New policy interests in an issue may be stimulated by a wider social concern and policy makers offer funds for its further research. Researchers are thereby drawn to study it, and may develop and reconceptualise the issue. This, in turn, shapes ways of thinking by both policy makers as well as at a broader societal level. The process is one of
mutual, ongoing influence between policy, research and the social context within which both are embedded.

Besides functional types of research use by policy, one may ask what the benefits are of scientific policy advice as a service provider. Discussing technology assessment in terms of scientific policy advice, Grunwald (2006) sets out several service arguments. Scientific policy advice may contribute to optimizing the knowledge base among policy makers by delivering state-of-the-art knowledge that contributes to and helps guarantee 'robust' policies. In that sense, the quality criteria for good advice rely exclusively on the state of the art of research. By optimizing the knowledge level, it may also contribute to objectifying the debate on policy formulation and decision-making. This helps to differentiate between evidence-based and value-based policy approaches. Policy advice may, in addition, contribute to a more informed concrete design of policy options and implementation procedures. On this basis, it contributes to 'socially robust' decisions with the broad inclusion of various and diverging societal value positions. Another service benefit of scientific advice is contributing to the avoidance or resolution of conflicts while sounding out potential for consensus and pointing out alternatives for conflict resolution. Finally, and importantly, reflexive meta-knowledge provides assessments and judgments on the level of certainty, risks and underlying premises of the knowledge base. The process is seen as an iterative and recursive one in which no one-way interaction takes place and institutional and personal interchanges are widespread. Following Nutley et al. (2007), these models can be synthesized into a simplified fourfold functional typology of research use in policy (instrumental, conceptual, strategic, procedural), combined with several service-providing benefits, which we take as the substantive framework of this book. Taking both the functional types and service benefits of scientific policy advice together, the following Table 5.1 lays out the relationship at the science-policy interface.

Table 5.1 Types of research use by and service provision for policy (functions and associated services)

Instrumental use:
– Identification and evaluation of policy options
– Design and implementation of policy decisions
– Impact assessment and monitoring

Conceptual use:
– Problem identification and understanding
– Coded knowledge archive
– Early warning tool

Strategic use:
– Legitimacy base for normative positions
– Scientific facade for interests and values
– Technical manipulation of simulations

Procedural use:
– Knowledge communication towards lay people and non-experts
– Conflict avoidance and consensus making
– Networking and actor integration

Source: Adapted from Scheer (2017)



The instrumental use of research refers to a direct impact on policy and practice decisions. The direct impact of research on policy-making can be empirically identified and represents a widely held view of what research use means. Instrumental use provides instrumental knowledge to allow the assessment and evaluation of the likely consequences of each policy option and to embed this type of knowledge in regulatory initiatives. Instrumental use embeds science and its results in acts, rules and, in general, the toolbox of regulatory instruments. Research can be used in rule-making for technology support programmes, regulation and assessment, and for evaluation and control measures.

The conceptual use of research encompasses a more wide-ranging definition of research use and has a more indirect impact on the knowledge, understanding and attitudes of policy makers. In that sense, the impact of research on general orientation and perception is much more difficult to demonstrate. However, conceptual use provides factual insights to help identify and frame problems, and to better understand the (initial) situation, the ensuing cause-impact chains, etc. The conceptual use of research results in better knowledge among decision makers. Thus, simulations contribute to early problem perception (e.g. climate modelling), delivering evidence (e.g. life-cycle assessment) and illustrating the consequences of future policy options.

The strategic and tactical use of research refers to the instrumentalisation of research for non-evidence-based purposes, such as the support of an existing political position or use by competing party governments to push through a political decision or defer an action. Strategic use provides arguments and contextual knowledge to help policy makers reflect on their situation and to improve and sharpen their judgement and strategic positioning. Strategic use may be motivated by party competition for office during election campaigns, by justifying the actions of office holders during political windows of opportunity or by a desire to stall for time.

The procedural use of research refers to the design and process of research rather than the outcomes and results. Thus, the process of knowledge production is more important than the research evidence. The procedural use of research can support technology development and help design procedures for conflict resolution. Procedural use provides knowledge to help design and implement procedures for conflict resolution and rational decision-making, and centres on encouraging networking for technology development, conflict avoidance and consensus making (e.g. technology procurement, collaborative research).

Most often, the instrumental use of research is seen as the most important function of science and represents a widely held view of what research use means. However, Nutley et al. (2007: 36–37) clearly state: "In fact, on the whole it seems that research is much more likely to be used in conceptual than in instrumental ways—changing perceptions and understanding rather than directly influencing policy or practice change". Consequently, empirical research on the impact of research use in policy demonstrates methodological difficulties: while a direct impact is easily demonstrable, the more important indirect conceptual, strategic and procedural use of research is much harder to reveal.
In addition, it should be stressed that these categories are first and foremost of analytical value, since they are sometimes difficult to apply empirically and the boundaries between different categories are often blurred.



5.2 Challenging Simulations Across Borders

With computer simulations at the interface of science and policy, we turn to their role as knowledge provider and knowledge communicator. If simulations have an impact on other areas of society and play a role in science production and scientific policy advice, then simulations are, first of all, matters of communication across the science-policy interface. Are there any outstandingly unique challenges for computer models at the science-policy interface? A short literature review reveals several insights into communicating models across scientific borders. As a general observation, one can state that the research literature on simulations impacting policy is moderate in volume and, for the time being, no systematic meta-studies are at hand, as Thorngate and Tavakoli (2009: 514) emphasize. On the other hand, there is a body of existing research based on case studies embedded in specific policy areas, which lacks synthesis in meta-studies. Taking a closer look at single studies and their conclusions, a considerable amount of research has dealt with simulation at the science-policy interface.

A first outcome of research stresses several functions simulations fulfil at the science-policy interface (Van Daalen et al. 2002; Fisher et al. 2010; Farber 2007; National Research Council 2007; Van der Sluijs et al. 2008). Van Daalen et al. (2002) proposed a fourfold function list for models. Firstly, models serve as eye-openers—that is, in the pre-phase of problem identification, models may help to place relevant issues on the political agenda. Secondly, models play an advocative role—that is, they help to materialize dissenting arguments and as such serve to challenge opposing assessments. Thirdly, models help as a vehicle to create consensus among different stakeholders—that is, they serve to ease consensus finding. And finally, models serve for management support—that is, they help to lay out and identify concrete policy decisions and assessment effects based on the implementation of policies.

A further important result from the literature is the emphasis on possible misinterpretation and misunderstanding. A great variety of literature deals with problems in the area of simulation perception, credibility and misuse by decision makers, resulting in deficient policies. Van Daalen et al. (2002) speak of a "credibility crisis of models used in integrated environmental assessments". Wagner et al. (2010: 293) observe a fundamental and systematic misperception among political decision makers when they state: "computational models are fundamental to environmental regulation, yet their capabilities tend to be misunderstood by policymakers". Brugnach et al. (2007: 1075) emphasize deficient acceptance levels for science simulations among policy makers, while Fisher et al. (2010: 251) argue that "[l]awyers and policy makers have overlooked models and not engaged critically with them". The deficits that have been identified concern both the simulation tool itself and the people dealing with it. On the side of the simulation tool, deficits refer to deficient levels of simulation quality and the lack of adequate outside communication of the role of complexity and uncertainty (Hellström 1996; Ivanovic and Freer 2009; Petersen 2006). The issue of complexity and uncertainty points to the important question of whether simulations adequately match real-world phenomena and reality
(King and Kraemer 1993; Pilkey and Pilkey-Jarvis 2007). Several scholars argue that policy makers understand and perceive models as an opaque black-box and thus are not able to understand and interpret simulation data in the right way (Olsson and Andersson 2007; Evans 2008; Wagner et al. 2010). Contextual deficits are also discussed in the literature. Firstly, on a macro level, scholars hint at the fact that science and politics are too different to perfectly match. They have different modes of operation (e.g. truth seeking versus action-oriented), which leads to different roles for simulations to play (Haag and Kaupenjohann 2001). Secondly, model knowledge deficits among policy makers and the subjectivity of modellers are claimed to be relevant, while, thirdly, there is hardly any exchange between modellers and policy-makers (Walker et al. 2003; Fine and Owen 2004). Putting together these shortcomings of simulations at the science-policy interface, several scholars have come to the conclusion that there is a fundamental misunderstanding and misuse of models in policy-making (Fisher et al. 2010; Wagner et al. 2010). Based on these shortcomings, several scholars have dealt with developing tools, guidelines and recommendations for improving the use of simulations at the science-policy interface (Boulanger and Bréchet 2005). This encompasses encouraging cooperation between researchers and decision makers (Alcamo et al. 1996), developing evaluation tools for adequate model selection (Brenner and Werker 2009; Yücel and van Daalen 2009) and devising methods for enhancing transparency and the involvement and participation of decision makers in the modelling process itself (Schmolke et al. 2010). However, to synthesise findings from the literature, three crucial aspects are central for explaining the challenges of models and simulations at the science-policy interface: firstly, the black-box character of simulations, secondly, the (un-)certainty dilemma of simulations and, thirdly, the perception of simulations regarding hazard and risk assessment.

5.2.1 The Black-Box Character of Simulations

A first challenge for simulations across borders is their black-box character. Simulations are a complex scientific method requiring a high level of detailed simulation expertise in order to design and implement them. People who do not possess simulation expertise—even within the scientific community—are often unable to fully understand the modelling exercise and adequately interpret its results. Thus, simulations appear as a black-box where it is hard to follow what is going on inside. Böhringer et al. (2003: 32), for instance, referring to computable general equilibrium (CGE) models1 state: "without detailed programming knowledge, the CGE approach is doomed to remain a 'black-box' for non-modellers".

1 Computable general equilibrium (CGE) models refer to computational economic models that estimate future economic changes and developments dependent on external policy, technology or other factors.



Carrying out simulations properly involves several consecutive working steps, which can give rise to misunderstandings and misinterpretation among non-modellers. One might call this the "foreign language character" of modelling, where it is hard for non-modellers to check for plausibility and reliability. These potential failures of plausibility checking affect the perceived traceability and reliability of simulation results. An ideal-typical simulation follows a process-oriented approach comprising several working steps, as laid out in simulation textbooks (Banks 1998; Balci 1989). Banks (1998), for instance, differentiates the following working steps to guide model building: (1) problem formulation, (2) setting of objectives and overall project plan, (3) model conceptualization, (4) data collection, (5) model translation, (6) verification, (7) validation, (8) experimental design, (9) production runs and analysis, (10) more runs, (11) documentation and reporting, (12) implementation.

Two fundamental 'translations' take place in the workflow, namely in the "model conceptualization" and "model translation" working steps. First, in model conceptualization, a conceptual model, based on a series of mathematical and logical relationships concerning the components and structure of the system, abstracts the target system under investigation. In essence, the modeller analyses the real-world relations of components in a system and maps components and their relations in a model chain. A decision is made on the relations of interrelated components and system elements (i.e. linear, discrete, chaotic, etc.) based on state-of-the-art scientific knowledge. The relationships need to be weighted according to their approved and/or assumed real-world hierarchies. As a result, a flow diagram is obtained which contains the basic functional principles and performance of the real-world system. Within the conceptualization stage, the modeller must determine fundamental assumptions in the real-world system, the feasibility of mapping these system features in a model chart and the transferability into mathematical codes. Although textbooks hint at the need for model conceptualization to be transparent and communicated, this is not always the case. If non-modelling experts are to understand a simulation better, they must understand the basic assumptions, system boundaries and functional principles of the model. If these are not communicated, or are communicated badly, difficulties of understanding may arise.

A second translation step takes place in transferring the model concept into a formalised mathematical language and into a computer environment, respectively. Variables designed within the model concept need to be transferred to a computer system. At this stage, the computer features play a role in determining whether conceptual variables and relationships are, in principle, programmable. If not, these essential features of the conceptual model do not find their way into the simulation model. In case non-modellers are unable to understand mathematical and/or computational languages, they cannot trace back the plausibility of the modelling. Besides, a prerequisite for understanding is an open and transparent communication of the translations by modellers. The important conclusion, however, is this: given that simulations are not well understood by lay people and non-experts with regard to their function and implementation principles, other factors, such as the manner of model communication and the credibility of and trust in, for example, the modellers, become more relevant.
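
To make the two translation steps more tangible, consider the following minimal sketch of our own (not taken from the textbooks cited above). It assumes a deliberately simple conceptual model, steady-state one-dimensional groundwater flow between two fixed heads in a homogeneous medium without sources, and shows how even this has to be recast as a linear system of equations before a computer can evaluate it; all names and simplifications are hypothetical and chosen for brevity.

# Minimal sketch of the two 'translation' steps (illustrative only).
# Conceptual model: steady-state 1-D groundwater flow between two fixed
# heads, homogeneous medium, no sources.  Model translation: the same
# relations recast as a linear system A h = b that a computer can solve.
import numpy as np

def simulate_heads(n_cells=11, h_left=10.0, h_right=4.0):
    """Solve the translated conceptual model: a discrete Laplace equation."""
    A = np.zeros((n_cells, n_cells))
    b = np.zeros(n_cells)
    A[0, 0] = 1.0                     # fixed head at the left boundary
    b[0] = h_left
    A[-1, -1] = 1.0                   # fixed head at the right boundary
    b[-1] = h_right
    for i in range(1, n_cells - 1):   # interior cells: discrete mass balance
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
    return np.linalg.solve(A, b)

print(simulate_heads())               # a linear head profile between 10 and 4

Even in this toy case, a reader confronted only with the matrix assembly can no longer see the physical assumptions behind it (homogeneity, steady state, no sources), which is precisely the traceability problem described above.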



Donella Meadows—one of the authors of the famous Limits to Growth study from 1972—put it in the following words when summing up, a decade later, experiences with the perception of their World3 model: "One of the main problems with mathematical models is that most people aren't trained to understand them, and so they must usually be accompanied by a translation back into ordinary words. The problem is compounded when the message is further translated from mathematics into the digital kinds of languages that computers can understand. Any process of translation requires a considerable amount of trust. The double-translation process from words to mathematics to computer code and back again makes people suspicious, and it should. It requires very good translators" (1982: 10). Thus, it should be obvious that model communication is a serious and necessary task in itself. This task is facilitated by improving technical possibilities and by the growing acceptance within the scientific communities of the open-source distribution of research software and the reproducibility of computational results (see Chap. 4).

5.2.2 The (Un-)certainty Dilemma of Simulations

A second challenge for models across borders refers to the so-called (un-)certainty dilemma of simulations. On the one hand, there are relevant uncertainties inherent in simulation methods; on the other hand, simulation results may impart certainty due to their unique way of representing results in numbers and figures. A prerequisite for assessing the accuracy and reliability of simulations is a verified and validated model. It is important to distinguish the two steps of this procedure. This is excellently reviewed in Oberkampf and Trucano (2002), according to which verification is a "substantiation that a computerised model represents a conceptual model within specific limits of accuracy", while validation is the "substantiation that a computerised model within its domain of applicability possesses a satisfactory range of accuracy consistent with the intended application of the model". To give an example: verification could be achieved by comparing the model with an analytical solution for the same set of equations, while validation would be the successful comparison of the model's results with data from well-controlled measurements for a certain application problem. In contrast, calibration is usually 'weaker' in the sense that it estimates parameter values in order to achieve agreement between model results and measurements. It is not guaranteed that calibrated parameter values are the 'true' values; they might lead to a mismatch between model and measurements in other configurations.

Models and simulations show uncertainties at different levels. These uncertainties have been reported and analysed extensively in the literature, e.g. Morgan and Henrion (1990), Walker et al. (2003), Petersen (2006), Walter et al. (2012). Walker et al. (2003), for instance, developed an uncertainty categorisation using the following terminology: determinism, statistical uncertainty, scenario uncertainty, recognised ignorance and total ignorance, while Walter et al. (2012) apply this approach to the risk estimation of brine migration resulting from CO2 injection into saline aquifers.
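
The difference between verification and calibration can be illustrated with a deliberately simple, entirely synthetic sketch of our own (not taken from Oberkampf and Trucano 2002). The 'computerised model' is an explicit Euler solution of first-order decay; verification compares it with the analytical solution of the same equation, whereas calibration fits the decay constant to noisy invented 'measurements', with no guarantee that the fitted value is the true one.

# Minimal sketch (illustrative only) contrasting verification and calibration
# for a first-order decay model dc/dt = -k c.  All data are synthetic.
import numpy as np

def numerical_decay(c0, k, dt, n_steps):
    """Explicit Euler solution of dc/dt = -k c (the 'computerised model')."""
    c = np.empty(n_steps + 1)
    c[0] = c0
    for i in range(n_steps):
        c[i + 1] = c[i] - k * c[i] * dt
    return c

# Verification: compare against the analytical solution of the SAME equations.
t = np.linspace(0.0, 5.0, 501)
numeric = numerical_decay(c0=1.0, k=0.7, dt=0.01, n_steps=500)
analytic = np.exp(-0.7 * t)
print("max verification error:", np.max(np.abs(numeric - analytic)))  # small

# Calibration: estimate k from synthetic, noisy 'measurements'; agreement with
# these data does not guarantee that the fitted k is the 'true' value.
rng = np.random.default_rng(42)
t_obs = np.array([0.5, 1.0, 2.0, 4.0])
c_obs = np.exp(-0.7 * t_obs) + rng.normal(0.0, 0.02, size=t_obs.size)
k_grid = np.linspace(0.1, 1.5, 141)
misfit = [np.sum((np.exp(-k * t_obs) - c_obs) ** 2) for k in k_grid]
print("calibrated k:", k_grid[int(np.argmin(misfit))])

A small verification error here only shows that the equations are solved correctly; it says nothing about whether first-order decay is an adequate conceptual model for a given application, which is what validation against well-controlled measurements would have to establish.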



Without going into too much detail, the following are some illustrations. In the field of Integrated Assessment Models within climate modelling, scholars have hinted at the fact that climate models rely on an insufficient real-world knowledge base (Van der Sluijs 2002), meaning that models are based on assumptions of interdependencies and causalities that are, in fact, not yet fully known. A further categorisation of uncertainties relates to the quality of empirical data used as input for model validation and calibration. Most often, these data are simply unavailable or, where they are available, not representative and of poor quality. The example of Carbon Capture and Storage (CCS) shows, for instance, that the simulation of CO2 storage usually relies on selective empirical data—area-wide data mining covering every inch of the subsurface is simply unfeasible. A third example of model uncertainty relates to the time horizon. When modelling environmental problems with a long-term delay of the corresponding damage (e.g. climate change), the empirical data necessary for model validation are usually not available because the corresponding events lie in the future. By contrast, political decisions based on these simulation results take place in the present (Olsson and Andersson 2007). In short: modelling contains a wide set of uncertainties limiting its reliability and validity. It should be obvious that it is difficult to cope with these uncertainties due to their partly fundamental character. However, it is most important to be aware of them and disclose them in a communication process. Ivanovic and Freer (2009: 2551) summarize these findings in the following way: "in summary, it is not that there is model uncertainty that undermines confidence in science, but the ignoring of uncertainty in prediction analysis and an inability to extrapolate the results."

On the other hand, simulation results may communicate certainty to their addressees through their unique feature of presenting results in terms of numbers. In general, simulation results are presented in the format of numbers and their corresponding visualization in graphs and diagrams. What does it mean for a communication process and the perception by addressees to present simulation results in numbers? Porter (1996) argued that numbers serve as a communication medium, cloak their content in objectivity and universality, and become particularly important in the communication process when other procedures for reaching consensus fail. Following his argument, quantitative, number-based statements seek to reduce complexity while simultaneously achieving a high potential for consensus and acceptance. Transferring objectivity towards numbers is a science-based objectivity procedure, which takes place via the standardized observation of experiments and communication (Heintz 2007). Science has been very successful in using formalisation, such as the strongly formalised language of mathematics, to attribute objectivity to its reasoning and evidence statements—that is, evidence by numbers. Heintz points out (2007: 72): "while language always includes a yes and no option and a phrase provides the opportunity of its negation, numbers need to produce their negation actively. In order to contradict numerical statements, one needs alternative numbers and the knowledge of their production". This mechanism of attributing objectivity to numbers is one major reason why models can be seen as objective by policy makers. It is then that policy makers base their decisions on this partly evidence-based and objective information, finding legitimacy in doing so (Jasanoff et al. 1998; Oreskes 2000; Pahl-Wostl et al. 2000; Sarewitz et al.
2000). According to Olsson (2007: 99), the convincing power of numbers is linked to the perception that objective and science-based knowledge allows people to change their attitude and actions. In summary, models and simulations deliver results that are perceived with a high level of certainty, while the production of these results involves (high) levels of uncertainty. In that sense, simulations are special as a matter of communication across borders.

5.2.3 Perception of Simulations Regarding Hazard and Risk Assessment

The third challenge for models across borders is in the field of risk analysis. The scientific understanding of risk assessment terminology differentiates the notions of risks and hazards and their corresponding concepts (Renn and Sellke 2011; Scheer 2013; Scheer et al. 2014). Hazard is associated with the intrinsic ability of an agent or situation to cause adverse effects to a target such as people, the environment, etc. This ability may never materialise if, for example, the targets are not exposed to the hazards or made resilient against the hazardous effect. Risk, on the contrary, takes into account the probability that a harmful event will occur and the scale of damage it could cause. The decisive factor is weighing the possible scale of damage against the probability of exposure and the related harm. Thus, risk is deemed to be the probability of the occurrence of a harmful event. The difference between both analytical approaches, therefore, is that a hazard assessment approach identifies anything that could cause damage and harm, while the risk assessment approach means identifying how likely (i.e. calculating probabilities) it is that a hazard will do harm and how big the caused harm will be.

Computer simulations have become established as important tools for hazard and/or risk assessments. Their great benefit is that assessments can be carried out on a trial-without-error basis. Unlike empirical testing, computers have the ability to use trials without experiencing harmful error consequences. Computer simulations of hypothetical scenarios replace learning from actual mistakes (Aven and Renn 2010). However, what often remains unclear and is interpreted differently among addressees outside the scientific community is whether a modelling exercise refers to the field of hazard identification or risk assessment. In other words: do calculated simulation results hint at an agent that could potentially cause damage, or do they rather indicate the probabilities and extent of damage that may occur?

In subsurface modelling, quantitative risk assessment modelling needs site-specific data input, exposure data, etc. Hazard identification models in geology do not require site-specifics, but may use realistic but not real geological data. From this, one major risk of misinterpreting subsurface modelling results exists: results indicating an identified hazard are interpreted as a definite happening—that is, interpreters add statements about probabilities of occurrence. Hazard and risk are also understood and communicated very differently among stakeholders and the public at large. What can be observed repeatedly in risk
governance are misunderstandings and communication gaps around these crucial terms of risk analysis (Scheer et al. 2014). If one takes a closer look at the public discussion, it seems that many stakeholders from environmental and consumer organizations base their interpretations and efforts mainly on aspects of hazards. Business and industry as well as many public authorities, in contrast, focus their communication more toward exposure and the probability of the hazard's occurrence. Thus, they tend to focus more on the risk issue. The hazard versus risk distinction has some implications for risk communication practices. In the field of risk communication, when communicating the hazard, there is a need to define the guiding parameter or the good to be protected (e.g. an intact environment, the inviolability of human life or compliance with stipulated health standards). When communicating the concept of risk, one needs additional information about probable occurrence and exposure. Probability and exposure are much more difficult to communicate than the hazard issue. Besides the needed information on exposure and probabilities, the addressee (e.g. a consumer) needs to estimate his own exposure and the degree of hazard for his own specific situation in order to judge whether he is in a risky situation. What can often be seen is that the public at large tends to perceive the hazard in a rather intuitive manner, while either underestimating or completely ignoring the probability of exposure. The ambiguity of simulations, which are not clearly recognizable as either hazard- or risk-related, fits well into these perception patterns.
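
The distinction can be reduced to a toy sketch of our own with invented numbers: a hazard statement merely lists what could cause harm, whereas a risk estimate weights each scenario's damage by its probability of occurrence.

# Minimal sketch (illustrative only, synthetic numbers) of the difference
# between a hazard statement and a risk estimate for leakage scenarios.
scenarios = [
    # (description, annual probability of occurrence, damage in arbitrary units)
    ("brine migration into a freshwater aquifer", 1e-3, 50.0),
    ("CO2 leakage along an abandoned well",       1e-4, 200.0),
]

# Hazard identification: what could cause harm, regardless of likelihood.
for name, _, _ in scenarios:
    print("hazard:", name)

# Risk estimate: expected damage = probability x scale of damage.
total_risk = sum(p * d for _, p, d in scenarios)
print("expected annual damage:", total_risk)  # 0.05 + 0.02 = 0.07 units

The two leakage scenarios and all numbers are purely illustrative; as noted above, any real quantitative risk assessment would require site-specific data input and exposure data.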

5.3 Producing Knowledge—Communicating Knowledge: Modes of Modelling Across Borders

The challenges of simulations across borders need some deeper and more systematic consideration. Science-based subsurface modelling—in line with science outputs in general—aims at producing evidence-based knowledge to gain better insights which need to be communicated across borders. The dual function of simulations in producing and communicating knowledge can be seen as a fundamental characteristic of modelling at the science-policy interface. As such, different types of knowledge and communication constitute the role of modelling across borders. Both the knowledge and communication mode can be subdivided into three types, each summarised in Table 5.2. On the knowledge level, one may differentiate the types of secure knowledge, unsecure knowledge and recognised non-knowledge. On the communication level, modelling may enable communication, amplify communication and may provide feedback on the communication process and its communicators respectively.

Scientific simulations serve as a knowledge-production tool and method complementing the two other scientific approaches, namely theory and experimentation. Scholars of epistemology in the area of philosophy of science have illuminated the issue of whether simulations fall within either the area of theory or experimentation or constitute an independent third method approach of science (Durán 2018).



Table 5.2 Types of knowledge and communication modes

Modes of knowledge

Secure knowledge:
− Availability of full target system knowledge
− Transferability of target system to computer environment
− Successful verification and validation procedures

Unsecure knowledge:
− Nature of uncertainty: epistemic and/or ontic uncertainty
− Range of uncertainty: statistical and/or scenario uncertainty
− Methodological unreliability
− Value diversity

Recognised non-knowledge:
− Recognised ignorance: e.g. knowledge deficits
− Early warning tool
− But limited to point on outside phenomena

Modes of communication

Enabling communication:
− Specify and visualise foresight knowledge
− Methodical and thematic interface for communicators
− Simulations as dialogue and communication platform

Amplifying communication:
− Modellers act as political voices
− Institutionalization of modelling community

Feedback on communication:
− Simulation results shape the way we think about the world
− New visual components enter our cognitive patterns

Source: Adapted from Scheer (2017)

So far, no agreement or consensus has been reached as to whether simulations contribute to existing epistemic approaches or go beyond traditional scientific methods. The dividing line runs between emphasizing that simulations are a mere numerical continuation of mathematical models and/or experimentation and stressing that simulations are an independent and original knowledge instrument, largely separate from theory and experimentation (Durán and Arnold 2013). Whatever the answer, there is hardly any discipline of science that does not currently apply simulation and modelling as a knowledge production tool. Thus, it is indisputable that scientific computer simulations are widely used for producing epistemic evidence and knowledge.

5.3.1 The Knowledge Mode of Simulations

The modes of knowledge show a fourfold characteristic, as elaborated by sociology and the philosophy of science. Knowledge can be secure or unsecure and—looked at from the other side—non-knowledge can be recognised or unrecognised. However, only the first three types are scientifically tangible. While we dispose of
tools and methods in order to identify secure and unsecure knowledge and to recognise non-knowledge, we are not able to specify unrecognised non-knowledge. The latter remains a theoretical construct due to its unknown-unknown peculiarity. Taking a closer look at modelling and simulation concerning the three types of knowledge, we find evidence for all of them.

Secure knowledge is the ideal of scientific knowledge production. A high level of solid, uncontested knowledge is what the scientific community strives for and what defines the reputation and legitimacy of science. Referring to the rationality theorem in line with Karl Popper, scientific truth and knowledge are seen as temporarily valid as long as they are not falsified via validity tests. In line with Popper's understanding of rationality, knowledge approximates an objective truth claim as long as it is not falsified, though it will always remain in the twilight of unsecure knowledge since it never reaches the ideal of absolute and perpetual knowledge. Falsification can always happen, no matter how long the knowledge claim is undisputed. Thus, solid knowledge is always relative. Modelling and simulation—like theory and experimentation—contribute to the secure knowledge base when matters of uncertainty and insecurity remain as low as possible and reliability and validity are achieved as completely as possible. This is particularly true for technology-oriented simulations with cause-effect relationships that follow simple deterministic laws and whose initial conditions and system boundaries are easily defined. Simulation science has elaborated a large toolbox of verification and validation methods in order to test for reliability issues.

Unsecure knowledge is an area where knowledge and evidence are delivered but uncertainties still remain. Thus, the body of knowledge is fragmentary and incomplete. It covers the huge area of science where knowledge is produced but relevant questions are still open. Epistemic constraints and limitations of simulations are claimed because they are not able to adequately match reality and the target system (King and Kraemer 1993; Oreskes et al. 1994; Oreskes 2000; Pilkey and Pilkey-Jarvis 2007). Thus, particular ranges of uncertainty limit the validity of scientific results. The scientific community has long placed a focus on uncertainty research and has developed a broad range of methods for researching uncertainty—among them, for instance, probability statements by means of quantifiable confidence intervals. Uncertainty research is also prominent in the field of simulation science. Simulation experts have laid much emphasis on specifying the range of uncertainty when performing simulations. A major distinction can be drawn between objective and subjective methods of uncertainty modelling. The objective concept quantifies ranges of uncertainty with a frequentist approach. In contrast, the subjective concept applies the Bayesian statistics approach for making probability statements. But uncertainty in computer modelling goes beyond this distinction. The great variety of uncertainty in modelling has been systematically researched by Petersen (2006). He distinguishes between several uncertainty categories referring to the nature of uncertainty (epistemic, ontic), the range of uncertainty (statistical, scenario), methodological unreliability and value diversity.
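
As a purely illustrative sketch of our own (not tied to any of the frameworks cited above), the contrast between the two concepts can be shown for a single model parameter: the frequentist route summarises synthetic noisy measurements as a confidence interval, while the Bayesian route updates an assumed prior belief to a posterior distribution; all numbers, priors and names are hypothetical.

# Minimal sketch (illustrative only, synthetic data) of the two ways of
# quantifying uncertainty in a permeability-like parameter theta.
import numpy as np

rng = np.random.default_rng(0)
true_theta = 2.0
data = true_theta + rng.normal(0.0, 0.5, size=20)    # noisy 'measurements'

# Objective / frequentist view: a repeat-sampling confidence interval.
mean = data.mean()
sem = data.std(ddof=1) / np.sqrt(data.size)
print("95% confidence interval:", (mean - 1.96 * sem, mean + 1.96 * sem))

# Subjective / Bayesian view: an assumed prior belief updated to a posterior
# (conjugate normal-normal update with known measurement noise).
prior_mean, prior_var = 1.0, 1.0          # assumed prior belief about theta
like_var = 0.5 ** 2 / data.size           # variance of the sample mean
post_var = 1.0 / (1.0 / prior_var + 1.0 / like_var)
post_mean = post_var * (prior_mean / prior_var + mean / like_var)
print("posterior mean and std:", post_mean, np.sqrt(post_var))

Both routes quantify only the statistical range of uncertainty; the epistemic, scenario-related and value-related categories distinguished by Petersen (2006) are not captured by such calculations.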
Simulations and their results may also contribute to the area of recognised non-knowledge when they indicate existing knowledge deficits and known-unknowns. Although known-unknowns cannot be specified in detail since the unknown part


dominates, it is possible to identify the contours of what we do not know, and simulations have a role to play in that. In doing so, simulations serve as an important early warning tool for the policy system, indicating emerging problems and forthcoming policy issues. However, in contrast to experimentation, simulations have difficulty discovering genuinely new and completely unknown facts that lie beyond their conceptual framework. This is because the cause-effect relationship is pre-determined in the algorithm and leaves no space for fundamental surprises. Simulations are able to specify and predict phenomena by, for instance, detailing specific time-dependent system states. Nevertheless, they have difficulty pointing out and specifying phenomena which are outside their conceptual frame. Simulations are thus far more limited with respect to recognised unknowns than experimentation, which has the capability of referring to phenomena outside the experimental design.

5.3.2 The Communication Mode of Simulations

Communicating knowledge is a further notable function which simulations are supposed to fulfil at the science-policy interface. From that perspective, one may ask whether simulations play a role in what is communicated and, in principle, in how communication proceeds. We distinguish three types of communication modes: modelling and simulation may enable communication, amplify communication and have an impact on the communicators themselves. Simulation modelling may enable communication based on its content and the subject of the modelling exercise. Simulation is often used as a tool to generate knowledge about the future, leading to projective and prognostic knowledge claims. In that sense, modelling is able to sketch, specify and materialise possible future states of a system by means of virtual representation. Via techniques of visualisation, the produced foresight knowledge may initiate and enable communication, providing a platform of discourse, exchange and deliberation for decision makers, stakeholders and society. Potentially differing visualised system states from simulations may help to illustrate and exemplify available options to shape, design and decide the future. Political decision-making in pluralistic democracies and societies always relies on communication about differing and alternative courses of action, while policy outputs and outcomes result from decision-making based on that communication. Based on the paradigm of evidence-based policy, science and its results have a role to play in the communication process. Scientific simulations enable communication by providing a methodical and thematic interface for communicators in order to frame, localise and focus on distinct topics and messages. The future orientation of simulation makes it an important foresight tool for rendering the future tangible, an aspect which is, for instance, not accessible to empirical observation. Hence, 'editing' the future via simulation studies serves as an important communication object to stimulate reflections and deliberations on future developments. Simulation and modelling have the capability of amplifying communication due to their community organisation and institutionalisation. Modelling is carried out by modellers who advocate their


work, by institutions and by communities of common interest. By establishing, institutionalising and networking a scientific simulation community, the communication power of its representatives is strengthened so that they receive a hearing. When simulations address topics relevant in the political debate, they have the potential to amplify and widen the discourse and leave their 'footprint'. In competition with other communication arenas, where actors and topics compete for political attention, simulation-based communication confers an advantage in political discourse. The simulation content (e.g. energy scenarios, subsurface modelling) is indirectly granted a louder voice and receives greater attention compared to competing communication arenas and issues. Thus, simulations support and amplify political agenda-setting by raising awareness for specific topics and increasing their significance compared to competing issues. Amplifying communication goes hand-in-hand with selectively pushing forward suitable topics while holding back unsuitable ones. Simulation, thus, is lobby work. Simulations may, finally, have decisive effects on communication itself and on the communicators, providing feedback on how communication evolves and on how communicators think, speak and act. As such, the modelling exercise influences the cognitive perception and plausibility patterns of how we think about the world. Simulations have several characteristics which might favour a specific view of the world: they are number-, image- and solution-focussed and they foster a certain perception of accuracy (i.e. the (un-)certainty dilemma). It is highly probable that these characteristics shape and influence the way in which we think about issues. The problem has been raised, but no systematic research on its empirical effects and manifestations has been carried out. Warnke (2019) argues that the increasing significance of computer simulation as a method of technology-oriented knowledge production, and the corresponding need to analyse simulation-based visualisations, will bring in new visual components, which will inevitably shape how engineers think through issues. Visualised images and displays will gain importance in cognitive perception patterns. As stated above, Porter (1996) elaborated on the high significance of numbers in communication and deliberation processes, arguing that numbers serve as a communication medium, cloaking their content in objectivity and universality. As such, numbers act as a decisive (scientific) source for ways of worldmaking.

5.3.3 Computerised Policies - Politicised Computers: Types of Simulation Usage

The usage of simulations by the policy system can vary. Following the research use and service matrix developed above (see Table 5.1), we now discuss the role of simulations along functional and service-benefit issues. The conceptual use of simulations by policy is the broadest and most unspecific usage pattern. It refers widely to the area of problem identification and understanding.


In this field, simulation serves as a coded knowledge archive and as an early warning tool. This can be illustrated with climate and environmental science. The famous World3 model used in the groundbreaking analysis of The Limits to Growth by Meadows et al. (1972) illustrates the conceptual use of modelling for a better understanding of the problem dimension. The World3 model was built as a system dynamics approach in order to inform on trade-offs between relevant parameters, namely population, growth, food supply and ecosystem boundaries. The Limits to Growth study attracted enormous attention world-wide and stimulated a deeper perception of environmental problems, both among decision makers and the public at large. A key result of the study referred to ecosystem capacity limitations and the finding that exponential growth of the population would exceed the world's capacity limits. The World3 modelling served as an early warning tool for potential environmental problems and thus encouraged the political agenda-setting of environmental issues that dates back to the early 1970s. This is even more astonishing given that the World3 study received (harsh) criticism right from the beginning (Cole 1973), including from the authors themselves (Meadows et al. 1982). Another example of conceptual use can be taken from earth system and geoscience modelling. Simulations in the field of CCS are an indispensable knowledge instrument, since they are able to reduce complexity, overcome temporal and spatial constraints in situ, and test out several future policy options. In addition, simulations provide a knowledge base for pre-assessing technology potentials and, thus, provide crucial insights before decision makers step into the implementation of pilot and demonstration project stages. Modelling allows the investigation of complex interacting processes as far as they are considered in the conceptual model-building process. On that basis, the influence of model input parameters on model results or predictions can be tested. A thorough understanding of parameter sensitivities can be very valuable, for example, in setting priorities for detailed exploration or for dedicating particular research efforts. In that sense, modelling serves as an 'eye-opener' for both scientists and policy makers to better understand real-world phenomena. The instrumental use of simulations refers to embedding simulations directly into policy decisions and policy instruments as well as into policy impact assessment and monitoring. A striking example reflecting instrumental usage in policy is the European Water Framework Directive (EU Water Framework Directive 2000), where simulations play a significant role. The implementation of the directive follows several consecutive steps with different tools such as pollution quantification algorithms, decision support systems and dynamic simulation models. The usage of simulations helps to better understand how water systems function, provides a framework for data management and validation, and supports water management activities (Kämäri and Rekolainen 2005). In addition, within the monitoring process of the Water Framework Directive, models can be used for several purposes such as surveillance or operational monitoring (Højberg et al. 2007). Strategic and tactical use of simulations fulfils a communication function of modelling. In the sense of a communication act, science is used because of its reputation and power of persuasion.
It is not the evidence claim of science but its legitimacy claim that stimulates strategic usage. Scientific reputation is used to legitimise singular


normative positions and may serve as a scientific façade for individual and/or collective interests, values and strategies. Decision makers then reinterpret and frame scientific results according to their own interests and normative positions. According to Wagner et al. (2010: 336), the strategic use of simulations appears in three intimately related but distinguishable strategies that a devious regulatory participant can deploy to reap benefits from models. The first strategy builds on the widespread misconception that models are fact-generating machines. Regulators tend to portray simulations as answering machines in order to sidestep unpleasant accountability controversies with statements such as 'the model made me do it.' Such a strategy may prevent further scrutiny by institutional authorities and stakeholders. A second strategy finds it beneficial to be opaque about the assumptions and uncertainties incorporated into the model. This opacity helps insulate the agency's many assumptions and modelling decisions from critical review, particularly by adversarial stakeholders. Finally, a third strategy demands an unobtainable level of empirical certainty. Because this demand cannot be fulfilled, both the use of the model and the policy options and decisions derived from it may be blocked. Demanding perfection from the model leads to the delegitimisation of both the model and the political and scientific proponents behind it. Finally, the procedural use of simulation puts emphasis on the process of modelling rather than on the use of its results. On the one hand, the simulation process supports scientific communication with lay people and non-experts through its capability for complexity reduction and visualisation. On the other hand, simulations may be used as vehicles for conflict avoidance and consensus building. Van Daalen et al. (2002) cite the RAINS model (Regional Acidification Information and Simulation) as a corresponding example. The RAINS model was central to the integrated assessment of acid rain and became a key element in the negotiations of the Second Sulphur Protocol of the UN-ECE Convention on Long-Range Transboundary Air Pollution. The model cleared the path for establishing the international political consensus passed in the corresponding UN protocol. Grünfeld (1999) argues that the agreed emission reduction targets fixed in the protocol are largely based on results obtained from RAINS model scenario calculations. Another example illustrating the procedural use of simulation can be attributed to the area of participatory modelling. One major goal of participatory modelling is to integrate both experts and lay people at very early stages of research for learning effects and mutual understanding (Hare et al. 2003; Bots and van Daalen 2008; Dreyer et al. 2015; Scheer et al. 2015). Participatory modelling opens up the modelling process to external actors who do not possess detailed simulation and modelling expertise. In that sense, participatory modelling is a generic term for a large variety of experiments with non-expert involvement in scientific development in order to improve mutual understanding and the quality of modelling.


References Alcamo J, Kreileman E, Leemans R (1996) Global models meet global policy: how can global and regional modellers connect with environmental policy makers? what has hindered them? what has helped? Global Environ Change 6:255–259 Aven T, Renn O (2010) Risk management and governance: concepts, guidelines and applications. Springer Balci O (1989) How to assess the acceptability and credibility of simulation results. In: Proceedings of the 21st conference on Winter simulation. ACM, pp 62–71 Banks J (1998) Handbook of simulation: principles, methodology, advances, applications, and practice. Wiley Beyme K (1997) Der Gesetzgeber: Der Bundestag als Entscheidungszentrum. VS Verlag für Sozialwissenschaften Böhringer C, Rutherford TF, Wiegard W (2003) Computable general equilibrium analysis: opening a black box. Technical report, ZEW Discussion Papers Bots PW, van Daalen CE (2008) Participatory model construction and model use in natural resource management: a framework for reflection. Syst Pract Action Res 21:389–407 Boulanger P-M, Bréchet T (2005) Models for policy-making in sustainable development: the state of the art and perspectives for research. Ecol Econ 55:337–350 Bradshaw GA, Borchers JG (2000) Uncertainty as information: narrowing the science-policy gap. Conserv Ecol 4:7 Brenner T, Werker C (2009) Policy advice derived from simulation models. J Artif Soc Soc Simul 12:2 Brugnach M, Tagg A, Keil F, de Lange WJ (2007) Uncertainty matters: computer models at the science-policy interface. Water Resour Manag 21:1075–1090 Cole HS (1973) Models of doom: a critique of the limits to growth. Universe Pub Dreyer M, Konrad W, Scheer D (2015) Partizipative Modellierung: Erkenntnisse und Erfahrungen aus einer Methodengenese. In: Niederberger M, Wassermann S (eds) Methoden der Experten-und Stakeholdereinbindung in der sozialwissenschaftlichen Forschung. Springer, pp 261–285 Durán JM, Arnold E (2013) Computer simulations and the changing face of scientific experimentation. Cambridge Scholars Publishing Durán JM (2018) Computer simulations in science and engineering. Concept, practices, perspectives. Springer EU Water Framework Directive (2000) EU, Directive 2000/60/EC of the European Parliament and of the Council of 23 October 2000 establishing a framework for Community action in the field of water policy. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX. Online; Accessed 20-April-2019 Evans S (2008) A new look at the interaction of scientific models and policymaking. Technical report, Record of the Workshop within the Policy Foresight Programme Farber DA (2007) Modeling climate change and its impacts: law, policy, and science. Tex Law Rev 86:1655–1699 Fine JD, Owen D (2004) Technocracy and democracy: conflicts between models and participation in environmental law and planning. Hastings Law J 56:901–982 Fisher E, Pascual P, Wagner W (2010) Understanding environmental models in their legal and regulatory context. J Environ Law 22:251–283 Funtowicz SO, Ravetz JR (1993) Science for the post-normal age. Futures 25:739–755 Gibbons M (1994) The new production of knowledge: the dynamics of science and research in contemporary societies. Sage Grünfeld H (1999) Creating favorable conditions for international environmental change through knowledge and negotiation. Lessons from the Rhine Action Program and the Second Sulphur Protocol, implications for Climate Change. 
PhD thesis, Delft University of Technology Grunwald A (2006) Scientific independence as a constitutive part of parliamentary technology assessment. Sci Public Policy 33:103–113


Haag D, Kaupenjohann M (2001) Parameters, prediction, post-normal science and the precautionary principle - a roadmap for modelling for decision-making. Ecol Model 144:45–60 Hare M, Letcher RA, Jakeman AJ (2003) Participatory modelling in natural resource management: a comparison of four case studies. Integr Assess 4:62–72 Heinrichs H (2005) Advisory systems in pluralistic knowledge societies: a criteria-based typology to assess and optimize environmental policy advice. In: Maasen S, Weingart P (eds) Democratization of expertise?. Springer, pp 41–61 Heintz B (2007) Zahlen, Wissen, Objektivität: Wissenschaftssoziologische Perspektiven. In: Mennicken A, Vollmer H (eds) Zahlenwerk: Kalkulation, Organisation und Gesellschaft. Springer, pp 65–85 Hellström T (1996) The science-policy dialogue in transformation: model-uncertainty and environmental policy. Sci Public Policy 23:91–97 Højberg AL, Refsgaard JC, van Geer F, Jørgensen LF, Zsuffa I (2007) Use of models to support the monitoring requirements in the water framework directive. Water Resour Manag 21:1649–1672 Ivanovic RF, Freer JE (2009) Science versus politics: truth and uncertainty in predictive modelling. Hydrol Process 23:2549–2554 Jasanoff S (1998) Skinning scientific cats. In: Conca K, Alberty M, Dabelko G (eds) Green planet blues. Westview Press, pp 153–156 Jasanoff S, Wynne B, Buttel F, Charvolin F, Edwards P, Elzinga A, Haas P, Kwa C, Lambright W, Lynch M, Miller C (1998) Science and decisionmaking. In: Rayner S, Malone E (eds) Human choice & climate change: the societal framework. Battelle Press, pp 1–88 Kahneman D, Slovic SP, Slovic P, Tversky A (1982) Judgment under uncertainty: heuristics and biases. Cambridge University Press Kämäri J, Rekolainen S (2005) Models in the implementation of the Water Framework Directive: benchmarking as part of the modelling proccess. Freshw Forum 23:166–170 King JL, Kraemer KL (1993) Models, facts, and the policy process: the political ecology of estimated truth. Technical report, Center for Research on Information Systems and Organizations Mayntz R (1994) Politikberatung und politische Entscheidungsstrukturen: Zu den Voraussetzungen des Politikberatungsmodells. In: Murswieck A (ed) Regieren und Politikberatung. Leske + Budrich, pp 17–29 Meadows D, Meadows D, Randers J, Behrens W (1972) The limits to growth. A report for the Club of Rome’s project on the predicament of mankind. Universe Books Meadows D, Richardson J, Bruckmann G (1982) Groping in the dark: the first decade of global modelling. Wiley Morgan MG, Henrion M (1990) Uncertainty: a guide to dealing with uncertainty in quantitative risk and policy analysis. Cambridge University Press National Research Council (2007) Models in environmental regulatory decision making. National Academies Press Nowotny H (1999) Es ist so - es könnte auch anders sein: Über das veränderte Verhältnis von Wissenschaft und Gesellschaft. Suhrkamp Nutley SM, Walter I, Davies HT (2007) Using evidence: how research can inform public services. Policy Press Oberkampf W, Trucano T (2002) Verification and validation in computational fluid dynamics. Prog Aerosp Sci 38:209–272 Olsson JA, Andersson L (2007) Possibilities and problems with the use of models as a communication tool in water resource management. Water Resour Manag 21:97–110 Oreskes N (2000) Why believe a computer? models, measures, and meaning in the natural world. In: Schneiderman J (ed) The earth around us. 
Routledge, pp 70–82 Oreskes N, Shrader-Frechette K, Belitz K (1994) Verification, validation, and confirmation of numerical models in the earth sciences. Science 263:641–646 Pahl-Wostl C, Schlumpf C, Büssenschütt M, Schönborn A, Burse J (2000) Models at the interface between science and society: impacts and options. Integr Assess 1:267–280


Petersen A (2006) Simulating nature: a philosophical study of computer-simulation uncertainties and their role in climate science and policy advice. Het Spinhuis Pilkey OH, Pilkey-Jarvis L (2007) Useless arithmetic: why environmental scientists can’t predict the future. Columbia University Press Porter TM (1996) Trust in numbers: the pursuit of objectivity in science and public life. Princeton University Press Renn O (1995) Style of using scientific expertise: a comparative framework. Sci Public Policy 22:147–156 Renn O, Sellke P (2011) Risk, society and policy making: risk governance in a complex world. Int J Perform Eng 7:349–366 Salter L (1988) Mandated science: science and scientists in the making of standards. Kluwer Academic Publishers Sarewitz D, Pielke R, Radford B (2000) Prediction: science, decision making, and the future of nature. Island Press Scheer D (2013) Computersimulationen in politischen Entscheidungsprozessen: Zur Politikrelevanz von Simulationswissen am Beispiel der CO2 -Speicherung. Springer Scheer D, Benighaus C, Benighaus L, Renn O, Gold S, Rder B, Bl G-F (2014) The distinction between risk and hazard: understanding and use in stakeholder communication. Risk Anal 34:1270–1285 Scheer D (2017) Between knowledge and action: conceptualizing scientific simulation and policymaking. In: Resch M, Kaminski A, Gehring P (eds) The science and art of simulation I. Springer, pp 103–118 Scheer D, Konrad W, Class H, Kissinger A, Knopf S, Noack V (2015) Expert involvement in science development: (re-)evaluation of an early screening tool for carbon storage site characterization. Int J Greenhouse Gas Control 37:228–236 Schmolke A, Thorbek P, DeAngelis DL, Grimm V (2010) Ecological models supporting environmental decision making: a strategy for the future. Trends Ecol Evol 25:479–486 Thorngate W, Tavakoli M (2009) Simulation, rhetoric, and policy making. Simul Gaming 40:513– 527 Van Daalen CE, Dresen L, Janssen MA (2002) The roles of computer models in the environmental policy life cycle. Environ Sci Policy 5:221–231 Van der Sluijs JP (2002) A way out of the credibility crisis of models used in integrated environmental assessment. Futures 34:133–146 Van der Sluijs JP, Petersen AC, Janssen PH, Risbey JS, Ravetz JR (2008) Exploring the quality of evidence for complex and contested policy decisions. Environ Res Lett 3:024008 Wagner W, Fisher E, Pascual P (2010) Misunderstanding models in environmental and public health regulation. NYU Environ Law J 18:293–356 Walker WE, Harremoës P, Rotmans J, van der Sluijs JP, van Asselt MB, Janssen P, Krayer von Krauss MP (2003) Defining uncertainty: a conceptual basis for uncertainty management in model-based decision support. Integr Assess 4:5–17 Walter L, Binning P, Oladyshkin S, Flemisch B, Class H (2012) Brine migration resulting from CO2 injection into saline aquifers - an approach to risk estimation including various levels of uncertainty. Int J Greenhouse Gas Control 9:495–506 Warnke P (2019) Computersimulation und Intervention. Eine Methode der Technikentwicklung als Vermittlungsinstrument soziotechnischer Umordnungen. http://tuprints.ulb.tu-darmstadt.de/ epda/000277/DissWarnke_LHB.pdf. Online; Accessed 23-April-2019 Weingart P (1997) Neue Formen der Wissensproduktion – Fakt, Fiktion und Mode. IWT Paper, vol 15 Weiss CH (1979) The many meanings of research utilization. Public Adm Rev 39:426–431 Wollmann H (2001) Politikberatung. In: Nohlen D (ed) Kleines Lexikon der Politik. 
C.H.Beck, pp 376–380 Yücel G, van Daalen E (2009) An objective-based perspective on assessment of model-supported policy processes. J Artif Soc Soc Simul 12:3

Part II

Case Studies of Subsurface Environmental Modelling

Chapter 6

Geologic Carbon Sequestration

6.1 Background

The energy transition has become a key political issue in many countries. The main emphasis of responses to climate change challenges is on transforming the energy system from high to low carbon energy supply and on decoupling energy demand from economic growth. The general principles of energy policy objectives comprise the three paradigms of economic efficiency, security of energy supply and environmental compatibility. These well-established objectives constitute the so-called energy policy triangle in many Western and European countries (Dugstad and Roland 2003; Solorio Sandoval and Morata 2012). However, some scholars argue there is a need to add a fourth energy policy target, namely the public acceptance of energy system change (Devine-Wright 2008; Hauff et al. 2011). Against this background, carbon mitigation techniques such as carbon capture and storage are at the core of current research and policy debates. Carbon Capture and Storage (CCS) or Geologic Carbon Sequestration (GCS) of CO2 appears to be one of the most promising techniques for mitigating carbon emissions from fossil fuels such as coal and natural gas, serving as a bridging technique towards renewable energies. Even after the transition to renewable energies, CCS based on biomass could serve as a carbon sink for the atmosphere. The main idea of CCS is to capture the CO2 emitted from coal or gas firing during energy production and subsequently to store the captured CO2 in a suitable subsurface formation. With permanent CO2 storage, the greenhouse gas is intended to be isolated from the atmosphere. The CCS technique is appropriate for large point-source emissions such as power stations (in particular coal power stations) and plants in specific high carbon dioxide emitting industries (cement and steel production, oil refineries). Evidently, it is storage in the subsurface that is of major interest within the range of topics in this book and which poses huge societal challenges. Since the early 2000s, ideas and plans for utilising vast subsurface reservoirs for a large-scale storage of greenhouse gases, in particular CO2, have


received increasing interest. Mitigating the effects of global warming has become a priority since the relation between increasing atmospheric CO2 concentrations and globally rising temperatures is an accepted fact (IPCC 2005)—at least in science. We have already previously mentioned the widely known Science paper of Pacala and Socolow (2004) who suggest that a large-scale implementation of carbon dioxide capture and sequestration in deep geologic formations can provide a significant contribution towards mitigating increasing atmospheric CO2 concentrations. The CCS-process chain consists of three fundamental stages, namely the capture of CO2 , its transport and the final underground storage of CO2 . There are three ways of capturing CO2 during energy production. CO2 capturing may take place after the combustion process (post-combustion), it can be captured before the combustion (pre-combustion) and, finally, it can be captured while using oxyfuel. A critical factor in efficient capturing processes is the level of CO2 purity. As a general guideline, CO2 should be captured in as pure a form as possible in order to guarantee technical feasibility and to minimise ecological risk in the following process stages of transport and storage. The transport of CO2 from capture to the storage site is the second fundamental process stage. In most cases, capture and storage are spatially separated and the distance can be hundreds (or even more) kilometres. CO2 transport can be done via pipelines, shipping, trucks and railway. The decision on the appropriate means of transport relates to the CO2 quantity and corresponding transportation costs. However, pipeline networks are considered to be a major means for transportation. The third process stage is the long-term underground storage of captured CO2 which is critical to the success of carbon mitigation. Potential storage sites are depleted oil and gas fields, unmineable coal seams, saline formations and saline-filled basalt formations. Basically, there are different options for the storage of CO2 in the deep subsurface, as is illustrated schematically in Fig. 6.1. Depleted oil and gas reservoirs have already proven their capability of containing fluids over geological time periods, but their global storage capacity is rather limited. Using CO2 for enhanced oil and gas recovery, and also for enhanced coal bed methane recovery, clearly has an economically interesting perspective but will likely not contribute on a climate-relevant scale to reduced emissions. Thus, the largest potential is ascribed to deep saline formations, hosting salt water in rocks with sufficient permeability to inject large amounts of fluid at acceptable pressures to not compromise the integrity of the overlying caprocks. The trapping of CO2 in the subsurface can occur via different mechanisms as schematically illustrated in Fig. 6.2. When injected into a saline aquifer, CO2 forms a gaseous or super-critical fluid phase which is lighter than the ambient brine. The multiphase flow CO2 /brine is advection-dominated due to pressure gradient and buoyancy, also affected by capillarity. Safe storage is guaranteed by sealing cap-rocks that enforce a structural or stratigraphic trapping to prevent escape from the target reservoir. CO2 migration will stop upon reaching structural/stratigraphic barriers or if its mobility becomes zero, the latter leading to residual trapping. 
In the long term, increasing quantities of CO2 dissolve in the formation water and are then subject to the movement of the groundwater and the diffusion/dispersion processes therein. Solubility trapping has the great advantage that storage safety is further increased,


Fig. 6.1 Options for the geological storage of CO2 according to IPCC (2005)

Fig. 6.2 Variation of CO2 trapping mechanisms in the subsurface and the corresponding dominating processes on different time-scales (modified after IPCC (2005)). The figure plots the trapping contribution in per cent against the time after stop of CO2 injection (1 to 10,000 years) for structural and stratigraphic trapping, residual trapping, solubility trapping and mineral trapping, with storage security increasing over time; the dominating processes shift from advection-dominated (viscous, buoyant, capillary) multiphase behaviour via dissolution and diffusion to geochemical reactions.


since water that is rich in dissolved CO2 has an increased density and tends to sink towards the bottom of the reservoir. Another term that is often used—and that comprises residual and solubility trapping—is hydrodynamic trapping. The dissolution of CO2 in the water also forms ionic species. This causes changes in the pH and initiates geochemical reactions. If some fraction of the CO2 can be converted into stable carbonate minerals, this mineral trapping is expected to be the most permanent form of geological storage.
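The driving role of buoyancy and the stabilising effect of dissolution can be illustrated with a back-of-the-envelope calculation in the spirit of the processes described above. The following sketch uses assumed, merely representative fluid and rock properties; the density, viscosity, permeability and relative permeability values are illustrative and not taken from any specific site.

# Back-of-the-envelope sketch with assumed, representative properties (no site data):
# buoyant upward Darcy flux of free-phase CO2 and the density gain of CO2-rich brine.
G = 9.81                 # gravitational acceleration, m/s^2

rho_brine = 1100.0       # brine density, kg/m^3 (assumed)
rho_co2 = 700.0          # supercritical CO2 density, kg/m^3 (assumed)
rho_brine_sat = 1110.0   # density of brine with dissolved CO2, kg/m^3 (assumed)
mu_co2 = 6.0e-5          # CO2 viscosity, Pa*s (assumed)
k = 1.0e-13              # intrinsic permeability, m^2 (about 100 mD, assumed)
k_rel = 0.3              # relative permeability of the CO2 phase (assumed)

# Buoyancy-driven Darcy flux of the CO2 phase: v = k * k_rel * (rho_w - rho_c) * g / mu
v_up = k * k_rel * (rho_brine - rho_co2) * G / mu_co2
print(f"buoyant CO2 Darcy flux: {v_up:.2e} m/s (~{v_up * 3600 * 24 * 365:.0f} m/year)")

# Dissolved CO2 makes the brine slightly denser, so it tends to sink (solubility trapping)
delta = rho_brine_sat - rho_brine
print(f"density increase of CO2-rich brine: {delta:.0f} kg/m^3 "
      f"({delta / rho_brine * 100:.1f} %)")

Even such crude numbers indicate why sealing caprocks are indispensable during the early, advection-dominated phase, while the density-driven sinking of CO2-rich brine gradually increases storage security.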

6.1.1 Overview of Selected Commercial/Research CCS Projects Sleipner/Norway, Snøhvit/Norway, In Salah/Algeria, Gorgon/Australia: commercial natural gas projects with CO2 re-injection Natural gas, which contains in all cases a significant amount of CO2 , is produced in these projects. The CO2 is therefore separated and processed for re-injection into the subsurface. Equinor’s1 Sleipner off-shore gas project has re-injected CO2 into the Utsira formation about 700 m underneath the North Sea since 1996. Sleipner was the first industrial CCS project in the world with more than 16 Mt CO2 injected in the first 20 years of operation (Furre et al. 2017). For a description of the geological reservoir characterisation, we refer, for example, to Chadwick et al. (2004). The Utsira formation is a sandstone unit overlaying the Sleipner gas field. The storage project is accompanied by repeated seismics to monitor the development and containment of the CO2 plume (Furre et al. 2017; Furre and Eiken 2014; Eiken et al. 2017). Even though the resolution of seismic images is limited and a transfer into numbers, such as for CO2 saturation, is only qualitative, seismics provide a very good idea of the fate of the CO2 at Sleipner. Many modelling studies have been undertaken using data from the Sleipner project and applying very different modelling approaches, e.g. Ghosh et al. (2015), Cavanagh (2012), Zhu et al. (2015), Bandilla et al. (2014). Eventually, all these studies show that whenever a project provides real data, it becomes very attractive to test modelling approaches since it allows the increase (or decrease) in confidence in models. Real projects like Sleipner show how many—or better: how few—data are typically available, thus leaving a lot of uncertainty that can only be addressed and quantified by using models and exploring the reasonable ranges of parameters and accordingly the ranges of impacts like CO2 plume evolution. Interestingly, even the most sophisticated model cannot reproduce the real shape of the evolving CO2 plume when essential data, as for example the exact topography of the caprock, are not available with sufficient accuracy (Bandilla et al. 2014). Snøhvit is another off-shore CO2 storage project conducted by Equinor/Statoil, situated in the Barents Sea with a storage depth of 2400 m below the seafloor. The injection of CO2 commenced in 2008 in a more deeply buried and tighter formation with lower permeability and at significantly higher pressures compared to Sleipner. 1 Formerly

Statoil and StatoilHydro until 2018.


Downhole pressures are constantly monitored at Snøhvit and frequent stops in the first years of the injection give a valuable series of pressure build-ups and fall-offs (Eiken et al. 2017). Additionally, as in Sleipner, seismic data provides an indication of the evolution of the CO2 plume. Modelling studies on Snøhvit are dealing, for example, with parameters affecting the processes of CO2 migration from the actual reservoir. Tasianas et al. (2016) studied the potential flow of CO2 through gas chimney structures, which can be detected at Snøhvit via seismic surveys. Gas chimneys are the geologic remains of historic gas escape processes. They are considered to be potential preferential pathways. Another study tries to predict the long-term fate of the CO2 based on the available exploration data (Estublier and Lackner 2009). In Salah in Algeria hosts an on-shore CO2 storage project, operated by Equinor/Statoil and BP, at around 500 m altitude above ground level in the Sahara desert. CO2 is collected from multiple gas fields in the area in a central facility, afterwards compressed and stored in an almost 2 km deep Carboniferous sandstone unit (Ringrose et al. 2013). Pressure conditions are similar to those at Sleipner, while the temperatures in the reservoir are significantly higher, so the CO2 is in a supercritical condition. Permeability and porosity are very low due to compaction and diagenesis and horizontal wells for injection were drilled in response to that challenge. Injection commenced in 2004. A broad range of monitoring techniques were deployed (Mathieson et al. 2010). Particularly interesting are the InSAR satellite data (Vasco et al. 2010) which allowed ground elevation monitoring with precision to within millimetres. Uplift of up to a few centimetres has been observed and transferred into subsurface pressure expansion (Vasco et al. 2008). This opens an interesting playground for numerical simulators to use coupled flow, transport and geomechanical models as we have discussed in Sect. 2.2.2. Rinaldi et al. (2017) have performed an inverse modelling study on the ground surface uplift with such an approach. The Gorgon site on Barrow Island, Australia, is operated by Chevron and includes an LNG (liquefied natural gas) plant, a plant for domestic gas (both since 2016) and will include also a CO2 storage project, expected to be the largest in the world with a capacity to store up to four million tonnes per year. The gas in the Gorgon field contains roughly 14 % CO2 which needs to be separated during the liquefaction process, otherwise the CO2 would turn solid. The separated CO2 will be injected into the Dupuy formation at a depth of more than two kilometres beneath the site. Information about the project is found on the websites of Chevron Australia (Gorgon 2019). It is expected that once in operation, the CO2 storage site will attract significant attention from science, industry and politics. This list of commercial projects including CO2 storage activities is, of course, incomplete. One could also mention the Weyburn project in Canada amongst others.

6.1.1.1 The Ketzin Pilot Site (Ketzin 2019)

67,000 tonnes of CO2 were stored at the Ketzin pilot site near Berlin/Germany between 2008 and 2013. Within a research context, the site hosted the first on-shore CO2 storage project in Europe, funded, among others, by the European Union, the


German government and different industrial companies. The CO2 was injected into a saline aquifer at a depth of about 650 m below ground level (Martens et al. 2012); thus, the injection occurred at slightly super-critical pressure and temperature. Before the actual injection, this so-called Stuttgart Formation had been characterised as rather heterogeneous, composed of higher-permeability sand channels embedded in lower-permeability flood-plain facies rock (Förster et al. 2006). The infrastructure at the site comprised three wells into the formation, two of which were used as observation wells. Data sets from a state-of-the-art site characterisation and monitoring included information on the geological structure, seismic surveys, core material and well-log data as well as hydraulic pumping tests. The main task of the modelling work for the Ketzin site has been the development of calibrated and, to some extent, predictive models. A modelling study focusing on the prediction of the arrival time at the observation wells revealed, once the observation data became available, that, irrespective of some deviations between the simulations performed by two groups with three different simulators, all three gave reliable estimates of the arrival of CO2 (Kempka et al. 2010). By intercomparison, the deviations were well understood and could mainly be attributed to small differences in input data and to different choices of grid sizes and model domains. A more accurate match of both the arrival times and the available pressure data in the injection well and in an observation well could be obtained within a history matching by a revision of the static geological model (Kempka et al. 2013). The basic response of the reservoir to the injection of the CO2 in terms of pressure increase could be calibrated satisfactorily, thus allowing predictive simulations of pressure responses for a considerable time period. However, it is evident that the exact spatial distribution of permeabilities and porosities is impossible to determine and, thus, reliable predictions of long-term plume evolution in a highly heterogeneous reservoir are much more difficult to achieve (Class et al. 2015b). An interesting aspect of the Ketzin site is that, although it is still smaller than industrial scale, it is monitored according to the state of the art. Thus, monitoring data from future real storage projects are unlikely to be available with higher reliability or accuracy than at Ketzin.
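The calibration logic behind such history matching can be illustrated in a strongly reduced form. The sketch below is not the workflow used in the Ketzin studies cited above, which rely on full static and dynamic reservoir models; it merely fits one lumped parameter of a radial volume-balance proxy so that the predicted CO2 arrival time at an observation well matches a hypothetical 'observed' value, and then reuses the calibrated proxy for a rough forward estimate. All numbers are invented for illustration.

# Strongly reduced sketch of the history-matching idea (all values are hypothetical;
# the Ketzin studies used full reservoir simulators, not this volume-balance proxy).
# Proxy: a cylindrical plume of thickness H gives r(t) = sqrt(V_inj(t) / (pi * H * phi_eff)),
# where phi_eff lumps porosity, sweep efficiency and heterogeneity into one parameter.
import numpy as np
from scipy.optimize import brentq

H = 10.0                    # effective layer thickness, m (assumed)
rho_co2 = 600.0             # CO2 density at reservoir conditions, kg/m^3 (assumed)
mass_rate = 1.0             # injection rate, kg/s (assumed)
r_obs = 50.0                # distance injector -> observation well, m (assumed)
t_obs = 110.0 * 86400.0     # hypothetical observed arrival time, s (110 days)

def arrival_time(phi_eff, r):
    """Proxy arrival time of the CO2 front at radial distance r."""
    q_vol = mass_rate / rho_co2                   # volumetric injection rate, m^3/s
    return np.pi * phi_eff * H * r**2 / q_vol

# Calibration step: find phi_eff that reproduces the observed arrival time.
phi_eff = brentq(lambda p: arrival_time(p, r_obs) - t_obs, 1e-4, 0.4)
print(f"calibrated lumped porosity/sweep parameter: {phi_eff:.3f}")

# Forward use of the calibrated proxy: rough arrival estimate at a second distance.
print(f"estimated arrival at 100 m: {arrival_time(phi_eff, 100.0) / 86400.0:.0f} days")

In a real project, such a single lumped parameter is replaced by a spatially distributed geological model, which is exactly why a good match of arrival times and pressures does not by itself guarantee reliable long-term plume predictions.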

6.1.1.2 The Sim-SEQ Project: Understanding Model Uncertainties in Geological Carbon Sequestration (Sim-SEQ 2019)

This initiative by the U.S. Department of Energy focussed on the comparison of different models applied to a specific CO2 injection field test site located in the state of Mississippi. The authors of the study viewed it as a response to previous code verification and benchmarking studies (Class et al. 2009; Pruess et al. 2003). Sim-SEQ broadened the model comparison by prescribing far fewer specifications of the models, initial and boundary conditions, model domains, etc. than the previous studies. This allowed a focus on uncertainties arising from conceptual model choices. The participating modelling teams had to build their


own models, considering the requirements arising from the geology and hydrogeology of the site and its environment, much like in a real storage project. The study and its results are summarised in a number of publications, e.g. Mukhopadhyay et al. (2015) and Mukhopadhyay et al. (2012). It was shown that the behaviour of the site can be predicted confidently and that the remaining differences in the predictions are well understood when it is possible to compare and exchange data and knowledge.

6.1.1.3 The CO2 Brim Project

Problems and questions related to the characterisation of potential CO2 storage sites require approaches and methods that can range from rankings based on simple characteristic numbers, e.g. dimensionless numbers, to sophisticated numerical modelling tools. The CO2 Brim research project,2 funded from 2011 to 2015 within the GEOTECHNOLOGIEN R&D programme of the German Federal Ministry of Education and Research, addressed site characterisation at different stages with a particular focus on the North German Basin. The distinction of stages in the process of site characterisation implies, mainly for reasons of saving computational effort, that rather simple approaches are taken at earlier stages, when a multitude of sites needs to be screened, while more complex simulation models can be used after the selection of sites has been narrowed down. CO2 Brim followed an interdisciplinary approach with 'technical scientists', i.e. geologists, environmental and civil engineers for the static and dynamic modelling, and 'social scientists' for the task of bridging the gap towards policy (Class et al. 2015a). A simple screening tool was proposed on the basis of a dimensionless gravitational number (Kissinger et al. 2014); a sketch of this kind of screening is given below. Procedures and results were discussed via the early involvement of external experts and decision makers (Scheer et al. 2015). The same authors have also presented a regional-scale modelling study of brine migration along vertical pathways. The technical details and results of the numerical simulation study in a geologic setup in the North German Basin are reported in Kissinger et al. (2017), while the so-called participatory modelling approach, which allows the involvement of different stakeholders at various stages of the project, is described in Scheer et al. (2017). In particular, it is important to find a commonly accepted understanding of the assumptions taken in a modelling study, e.g. whether assumptions are (overly) conservative (Walter et al. 2013) or rather realistic, and whether the chosen scenarios are relevant. Selected results, such as the participatory approach of the CO2 Brim study, are presented and discussed in more detail in Sect. 6.3.2.

2 The authors of this book were involved in this project among other colleagues.
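To illustrate what screening with a characteristic number can look like, the following sketch ranks three purely hypothetical parameter sets by a generic gravitational number, i.e. the ratio of buoyant to viscous forces. The exact definition used in Kissinger et al. (2014) is not reproduced here; the formula, the property values and the site labels are assumptions for illustration only.

# Illustrative screening sketch with a generic gravitational number (ratio of buoyant
# to viscous forces). Definition, parameter values and 'sites' are assumptions for
# illustration; they do not reproduce the tool of Kissinger et al. (2014).
G = 9.81           # m/s^2
MU_CO2 = 6.0e-5    # CO2 viscosity at reservoir conditions, Pa*s (assumed)

def gravitational_number(delta_rho, permeability, char_velocity):
    """Buoyant over viscous forces; large values indicate gravity-dominated flow."""
    return delta_rho * G * permeability / (MU_CO2 * char_velocity)

# Hypothetical candidates: (brine-CO2 density contrast kg/m^3, permeability m^2,
# characteristic injection velocity m/s)
sites = {
    "Site A": (400.0, 5.0e-13, 1.0e-5),
    "Site B": (300.0, 1.0e-13, 1.0e-5),
    "Site C": (450.0, 1.0e-14, 1.0e-5),
}

ranking = sorted(((name, gravitational_number(*p)) for name, p in sites.items()),
                 key=lambda item: item[1])
for name, gr in ranking:   # low values first: viscous-dominated, more efficient sweep
    print(f"{name}: gravitational number = {gr:.2f}")

Such a ranking is only qualitative; its purpose is to narrow down the set of candidates for which more expensive numerical simulations are then carried out.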


6.2 Modelling Issues

6.2.1 The Scope of Modelling in CCS

The previous remarks reveal that subsurface modelling plays a crucial role within the field of carbon capture and storage. However, the subsurface is not the only realm of CCS-related modelling and corresponding modelling approaches. In the following, subsurface environmental modelling is set in the context of the larger picture of transitioning the energy system towards sustainability and climate friendliness. The idea is to identify and exemplify the diverse set of modelling activities in this wider context. What role do simulations play in worldwide CCS research and development activities? Fig. 6.3 provides an overview of the main topics covered by CCS simulation studies. One can distinguish three types of CCS modelling: firstly, the integration of the technology into the wider socio-technical system; secondly, modelling individual technology components; and, thirdly, modelling technology assessment impacts. The main characteristics of these modelling approaches are outlined in the following, while Table 6.1 summarises the main features comparatively. Modelling of technology integration covers ecosystem impacts, energy modelling and CCS chain-related issues. The research focus is on integration and feedback mechanisms between human, nature and technology systems as well as on future pathways of the energy system as a whole. Energy scenario modelling takes centre stage, based on socio-technical system modelling. Scenario modelling has a long tradition in research and has evolved into a great diversity of applicable methods and research areas. A scenario has been defined by the IPCC as “a

Fig. 6.3 Types of CCS modelling, adapted from Scheer (2013)


Table 6.1 Types of modelling in the area of carbon capture and storage (Source: adapted from Scheer (2013))

Technology integration
• Examples: electricity models; energy system models; energy economy models
• Foci: technology integration in ecosystem and human environment
• Disciplines: economics; environmental sciences; social sciences; system analysis

Technology components
• Examples: flow and transport models for combustion processes
• Foci: technical feasibility; optimizing single pieces of technologies; quasi-experiment
• Disciplines: engineering sciences

Geological impact assessment
• Examples: flow and transport models in the subsurface
• Foci: technology impact on the underground
• Disciplines: geosciences

coherent, internally consistent and plausible description of a possible future state of the world” (IPCC 1994, p. 3). A scenario is a forward-looking tool that provides an outlook on the future by means of a 'constructed' image of how the future may unfold. The 'constructed' image relies on past and present developments and on assumptions about how these developments impact the future. Against this background, scenarios take differing assumptions, boundary conditions, parameters and their values as points of departure. Constructing future images as projections, predictions or outlooks may serve as decision support and as a basis for action (van Notten 2005). By exploring future trajectories, scenarios deliver dynamic views of the future, pointing to problem definitions and problem-solving options (Mahmoud et al. 2009). Energy scenario modelling focuses on the energy system and, thus, “aims at providing a comprehensive view of the impact of different developmental trends on the likely evolution of the energy system and potential outcome of energy systems’ variables and performance indicators” (IRGC-International Risk Governance Council 2015: 9). In the literature, several meta-studies and review analyses are available assessing the state of the art of energy scenario modelling. Jebaraj and Iniyan (2006), for instance, review energy planning and supply-demand models, and discuss forecasting and optimization computer simulations. Scenario-planning research is also analysed and evaluated by Blomgren et al. (2011) for research published over a time period of four decades. Also, Dieckhoff et al. (2011) address energy modelling and energy scenarios from several perspectives. Finally, some scholars undertook comparative


studies of energy models (Bazmi and Zahedi 2011; Bhattacharyya and Timilsina 2010; Pfenninger et al. 2014). According to the literature, a major distinction differentiates energy scenario approaches into forecast-based, exploratory and normative scenario studies (IRGC-International Risk Governance Council 2015). Firstly, forecast scenarios are usually computer-based scenarios with a quantitative design. Forecast scenarios rely on historical data sets which they extrapolate into the future. Extrapolation is based on plausible assumptions about societal developments, policy interventions and their impacts, and other framework parameters. Forecast scenarios can be regarded as point predictions. Point predictions are good for short time periods but tend to err for longer time frames due to the underestimation of complexities and uncertainties in real-world developments. Secondly, exploratory scenarios usually also have a quantitative design, although they can be carried out qualitatively. They are again based on extrapolated trends but consider unexpected developments such as 'sudden events' more systematically. These approaches emphasize disruptive developments as a consequence of accidental occurrences. Thirdly, normative or target scenarios take a contrary approach. They start in the future from a normative, desirable future state and calculate back to the present. The desirable future state is based on specific normative settings such as, for instance, climate change targets or sustainability targets. Using a backcasting approach, the scenario identifies, by calculation, the most efficient and effective trajectories for getting there from the present. Long-term energy modelling is at the core of system analysis research in the field of energy. The CCS-related energy scenario studies identified all use so-called bottom-up models. These are partial models which simulate the technological adaptation processes of a system as a response to exogenous factors (e.g. CO2-reduction policies, price developments of energy carriers) from a cost or price optimization perspective. However, when energy scenarios model the technology integration of CCS, this technology option needs to be defined as being available in the technology-modelling portfolio and attributed with cost factors. In other words, in the conceptual design of the study, the modeller determines which energy technologies to include or exclude as a boundary condition. Examples of energy scenario modelling including the CCS option are, for instance, Spataru et al. (2015) for the UK, Viebahn et al. (2012) for Germany, Lund and Mathiesen (2012) for Denmark and Blesl et al. (2010) for the EU-27. An energy scenario study for Germany where CCS is excluded from the technology portfolio as an ex-ante boundary condition is UBA (2014). Research topics of energy modelling cover the technological energy system, considering primary energy, energy services, etc. The modelling maps plant-related and technological-economic factors in order to identify processes of change in the future energy system. Popular research questions focus on the future development of the energy mix. These studies predict possible technological and economic developments. The main research objective is the minimization of system costs, that is, the identification of an optimised resource allocation within defined system boundaries. According to Möst and Fichtner (2009), the strategic behaviour of actors does not


play a role within these models. Thus, agent-based real-world decisions are not covered in the modelling. Nevertheless, energy system modelling predicts human consequences based on rational-economic decisions, mostly by economic actors. Within these models, the underlying assumptions for human behaviour rely on the rationale of homo economicus. Modelling of technology components comprises process optimization of single technology components as a separate and isolated part of the comprehensive technology chain. The focus here is, from an engineering perspective, on process optimization and technical feasibility. Considering the CCS chain, these single technology components relate to one process stage such as, for instance, the capturing process. Using flow and transport models, simulations can virtualise the combustion process in order to find the optimal configurations of influencing parameters for the most efficient and technically feasible combustion runs. Examples include capture optimization models for identifying the most efficient configuration of technological artefacts considering relevant parameters which influence the capture process, such as temperature and pressure (Dugas et al. 2009). Moreover, process optimization via computer simulations has been carried out for new and innovative capture strategies such as membrane technology (He et al. 2009), chemical looping (Abad et al. 2010) or condensation (Takami et al. 2009). There are also studies available that analyse the process optimization of the pipeline network in the field of CO2 transport. While Essandoh-Yeddu and Gülen (2009) simulated the pipeline network's cost efficiency, taking rights of way and environmental aspects into account, Middleton and Bielicki (2009) focused on the spatial optimization and site selection issues of setting pipelines in place. Hence, the main objective of these studies is cost and spatial efficiency. Even though these studies take factors such as environmentally sensitive areas or property rights into account, the whole set of intermediary variables, such as interdependencies and rebound effects, is not considered. Within this type of modelling, a technical and engineering research objective dominates in terms of identifying the best possible technology configuration considering influencing variables and parameters. Based on (deterministic) natural science laws of cause-impact chains, the modelling exercise is a virtualised quasi-experiment aiming at optimizing the technical processes; a schematic sketch of such a parameter optimization is given below. Both modelling and real experiments often go hand-in-hand in order to validate and verify the simulations. Insights gained from component modelling serve as an important guideline for detailed technology design and development.
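The optimization logic of component modelling can be sketched schematically. The response surface below is entirely hypothetical; the cited capture studies rely on detailed process, flow and transport models, whereas this sketch only shows the generic pattern of sweeping two influencing parameters, temperature and pressure, and picking the most efficient configuration.

# Schematic sketch of component-level process optimization: brute-force sweep over
# two influencing parameters for a *hypothetical* capture-efficiency response surface.
import numpy as np

def capture_efficiency(temp_c, pressure_bar):
    """Hypothetical smooth response surface with an optimum near 45 degC and 1.6 bar."""
    return 0.9 * np.exp(-((temp_c - 45.0) / 15.0) ** 2
                        - ((pressure_bar - 1.6) / 0.8) ** 2)

temps = np.linspace(30.0, 70.0, 41)       # degC
pressures = np.linspace(1.0, 3.0, 21)     # bar
T, P = np.meshgrid(temps, pressures, indexing="ij")
eff = capture_efficiency(T, P)

i, j = np.unravel_index(np.argmax(eff), eff.shape)
print(f"best of {eff.size} configurations: T = {temps[i]:.1f} degC, "
      f"p = {pressures[j]:.2f} bar, efficiency = {eff[i, j]:.2f}")

In practice, the response surface is produced by the process simulation itself, and the search is usually performed with gradient-based or surrogate-assisted optimization rather than a full grid sweep.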

6.2.2 Focussing on Subsurface CCS Modelling

Technology-assessment-oriented modelling focuses on implementation requirements and on the possible consequences for humans and the environment caused by a particular technology. In the case of CCS, modelling for technology assessment and impacts refers primarily to the process stage of geological CO2 storage and comprises a wide range of geoscientific modelling activities. The scope of these studies


is wide, ranging from estimating storage capacities, aiding site selection, predicting impacts of CO2 -injection, short- and long-term CO2 -behaviour in the underground and associated risk scenarios such as, for instance, CO2 -leakages. Existing geoscientific modelling is adequate for several subsurface resource recoveries such as oil and gas production or groundwater management, but is currently often perceived as being insufficient for areas such as nuclear waste disposal or carbon dioxide storage. This is due to the fact that “current practice in inverse modelling tends to decouple processes, to aggregate parameters across scales, and to include only a limited amount of the available real data” (DePaolo et al. 2007). Current shortcomings relate firstly to the importance of coupled processes in hydrothermal systems, which denote the interplay of fluid flow, solute transport, heat transfer as well as chemical and mechanical interactions between rocks and fluids. Secondly, shortcomings relate to the presence of structures and interactions on a vast range of space and time scales, which are not compatible with the space and time resolution of the models (DePaolo et al. 2007; Gessner et al. 2009). We should note that current existing model concepts for problems like geological CO2 storage or nuclear waste disposal are no less sophisticated in terms of the included physical or geochemical complexity compared with those used in hydrocarbon production. While the hydrocarbon industry usually uses models on a relatively small time scale in order to optimise production following economic considerations, the focus of models in environmental modelling is very often on long-term risk evaluations where no history matching is possible and even small uncertainties in parameters and processes, extrapolated in time, can produce huge discrepancies concerning predicted events. In the field of geoscientific carbon dioxide storage, modelling is primarily used for the characterization of potential CO2 storage reservoirs and to secure CO2 storage containment and safety over different time scales. Considering the above-mentioned discrepancies in aspirations and the realism of environmental subsurface models, efforts are needed to develop modelling capabilities for subsurface processes across multiple space and time scales to evaluate hazards and risks that may be associated with the design, operation and monitoring of storage facilities and operations (IPCC 2005; DePaolo et al. 2007). However, the geoscientific modelling of carbon dioxide storage also meets some principle constraints, as pointed out by Oreskes et al. (1994). In their opinion, the verification and validation of numerical models of natural systems is sui generis impossible because natural systems are never closed and model results are always non-unique. The specific literature on geologic carbon sequestration is vast and cannot be covered here exhaustively. Nordbotten and Celia (2012), for example, provide a very general introduction to the topic by describing the challenges of the carbon problem and, very specifically, they introduce simulation methods, both conceptually and with respect to solution approaches. It is important to see how the CO2 behaves during the injection and the postinjection period. A key question is the probability and extent of damage induced from possible CO2 leakages. Related research emphasis is on explanations and predictions, respectively, of transport processes and flow dynamics of CO2 in the subsurface,
Frykman et al. (2009) modelled a specific site in Denmark in order to better understand flow dynamics. Several studies focus on impact parameters for the CO2 injection period (pressure, temperature, etc.), e.g. Gapillou et al. (2009). Bacon et al. (2009) used a geomechanical model to simulate processes associated with injected CO2 in formations and its effect on the hydraulic properties of the host rocks, since these processes are expected to have an impact on porosity and permeability. Research on CO2 behaviour in the subsurface focuses on issues of technological feasibility, such as the impact of impurities in the CO2 stream or the level of injection pressure. Class et al. (2009) summarise the results of a comprehensive benchmarking study in which different modelling groups with different simulators worked on a set of specific scenarios related to the geological storage of CO2. Studies like this one clearly reveal that modelling and simulation can contribute significant and valuable input to the development of the CCS process in general and to individual storage projects in particular, while uncertainties on different levels will always require attention (see also Walter et al. 2012; Walker et al. 2003). This also includes the level of human error in the form of misinterpretations of data by the modellers or ambiguity in selecting model domains, assigning boundary conditions, etc. Thus, quality control and assessment have to be addressed by legislative norms when the simulation results of carbon storage problems are communicated to the public and to political decision makers. We consider it important to guarantee a modelling process that is transparent for all recipients. Regarding the model parameters, there are typically major uncertainties in the geological scenarios, i.e. in the presence of distinct geological features like fault zones, as well as in the enormous variability of properties like porosity and permeability, and in multiphase parameters like capillary pressures and relative permeabilities. Modelling must play a key role in analysing uncertainties systematically and in improving the basis for decision-making in the different phases of a geological carbon storage project. Simulation is of particular importance in the planning phase of a storage project. It is, first of all, essential to perform a screening of candidate storage sites according to certain criteria. For example, Bachu (2003) has defined different criteria like tectonic setting, depth, geothermal gradient, maturity, accessibility, infrastructure and others. He has also defined different classes and suggested assigning weights to each criterion and thus arriving at a ranking of potential storage sites. A subsequent phase of storage-site selection should then include more accurate assessments, for example, of the available storage capacities. In further publications, Bachu and coauthors (Bachu et al. 2007; Bachu 2015) summarise methodologies and review the concept of storage efficiency in terms of the ratio between the volume of CO2 stored in a formation and the volume of pore space theoretically available. By performing a dimensional analysis, Kopp et al. (2009a) stated that the ratio of buoyancy to viscous forces, or of capillary to viscous forces, can already allow a qualitative ranking of achievable storage efficiencies for a selection of sites, while they also show (Kopp et al. 2009b) that only full-scale simulations can provide numerical values for storage capacity.
Szulczewski and Juanes (2009) carried out capacity estimates for the United States based on simulations. Capacity estimates for CO2 sites currently do not follow standardised procedures and definitions (Bachu et al. 2007). One prominent approach is the Reserve-Resource-Pyramid Approach (Bachu et al. 2007), which takes different CO2 storage aspects into account, such as process-dependent time scales, spatial evaluation scales and storage possibilities. Simulations play a key role in research on capacity estimates. The main emphasis is on calculating storage coefficients and the amount of CO2 placed in storage via numerical simulations based on basin models or reservoir models (Kopp 2009). Other simulation studies concentrate on geophysical, geomechanical and geochemical processes in the subsurface, such as the spatial CO2 distribution, the influence on groundwater aquifers, flow dynamics, pressure behaviour and the permeability of sediments. Simulation studies for the risk assessment of potential CO2 leakages clarify factors and impacts such as CO2 buoyancy forces, fractures and faults, impacts of leakages on ecosystems and groundwater, and CO2 dispersion into other spatial environments. Site selection for CO2 storage is a crucial decision-making process for any planned industrial-scale CCS project. Possible CO2 storage sites must have both sufficient capacity and a geological formation that guarantees safe and long-term CO2 storage. Moreover, the CO2 storage site should be as close as possible to the emission source in order to keep transport costs low. The fluid characteristics in the highly complex multi-fluid system we typically have in CCS projects vary, depending, for instance, on the depth. In practice, the detailed geological structure and the corresponding process-relevant physical parameters are hardly known. Thus, modelling comes into play. During the last two decades, models based on numerical simulations have become available to cover these multiphase processes in the subsurface related to CO2 sequestration. On the other hand, researchers can build on experience gained in the field of ground contamination and on the vast simulation-related knowledge in the oil and gas industry. Measurements from the exploration of real sites and monitoring data during and after the injection of CO2 are very attractive for modellers. Even if the data density and data quality from real storage projects mostly do not allow a thorough validation of numerical models, they are still a valuable inspiration for modelling studies, for example for history matching, benchmarking or code and model intercomparison exercises for large-scale heterogeneous problems.
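
To make the notions of storage efficiency and qualitative site ranking discussed above more concrete, the following minimal Python sketch estimates the mass of CO2 that can be stored in a saline formation from the commonly used volumetric relation M = A·h·φ·ρ(CO2)·E, with an efficiency factor E in the spirit of the methodologies reviewed by Bachu et al. (2007), and propagates a simple uncertainty range for porosity and E by Monte Carlo sampling. It also evaluates an illustrative buoyancy-to-viscous force ratio of the kind used by Kopp et al. (2009a) for qualitative ranking. All numerical values are illustrative assumptions and not data from any of the cited studies; such a screening-level estimate is no substitute for full-scale simulation.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Illustrative (assumed) formation properties for a screening-level estimate
area_m2 = 500.0e6           # areal extent of the formation [m^2]
thickness_m = 50.0          # average net thickness [m]
rho_co2 = 650.0             # CO2 density at reservoir conditions [kg/m^3]

n_samples = 10_000
porosity = rng.uniform(0.15, 0.25, n_samples)               # assumed porosity range [-]
efficiency = rng.triangular(0.01, 0.03, 0.06, n_samples)    # assumed storage efficiency E [-]

# Volumetric capacity estimate: M = A * h * phi * rho_CO2 * E
mass_kg = area_m2 * thickness_m * porosity * rho_co2 * efficiency
mass_mt = mass_kg / 1.0e9    # convert kg to megatonnes

p10, p50, p90 = np.percentile(mass_mt, [10, 50, 90])
print(f"Estimated storage capacity [Mt CO2]: P10={p10:.0f}, P50={p50:.0f}, P90={p90:.0f}")

# Qualitative ranking indicator in the spirit of a buoyancy-to-viscous force ratio
# (definitions vary in the literature; this is only one illustrative form)
delta_rho = 1100.0 - rho_co2     # brine-CO2 density contrast [kg/m^3]
g = 9.81                         # gravitational acceleration [m/s^2]
k = 1.0e-13                      # assumed permeability [m^2]
mu_co2 = 6.0e-5                  # assumed CO2 viscosity [Pa s]
darcy_velocity = 1.0e-6          # characteristic injection-driven Darcy velocity [m/s]

gravitational_number = delta_rho * g * k / (mu_co2 * darcy_velocity)
print(f"Buoyancy/viscous force ratio (illustrative): {gravitational_number:.2f}")
```

A lower ratio of buoyancy to viscous forces would, in this simplified reading, indicate a more compact plume and hence a tendency towards higher storage efficiency, which is exactly the kind of qualitative ranking argument referred to above.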

6.3 Science-Policy Issues

6.3.1 Processing of Modelling Results by Policy Makers and Stakeholders

Legislation has already addressed carbon capture and storage in a number of countries worldwide. In most instances, modelling is mentioned explicitly and certain tasks in the regulations are assigned to it.
For example, the Australian Offshore Petroleum and Greenhouse Gas Storage Regulations 2011 (Australian Government 2011) require applications for declaring a storage formation an identified greenhouse gas storage formation to provide detailed modelling of the expected migration of the injected gas. This law further specifies that any modelling undertaken to make predictions must detail the methodology and types of models used as well as any assumptions made in the course of modelling. Furthermore, the law asks for probability distributions associated with the predictions made. Thus, it addresses both points raised above: the quality assessment and the uncertainties. Likewise, we find similar specifications in other legislation. To mention just one more example, we refer to a law that the German federal government adopted in 2012: the ‘Kohlendioxid-Speicherungsgesetz—KSpG’ (Carbon Dioxide Storage Law) (KSpG 2012). In line with its intent to provide a legal framework for the safe geological storage of carbon dioxide, the European Union formulated the so-called CCS Directive in 2009 (EU CCS Directive 2009). Here too, modelling has been assigned a key role, to the extent of detailing the modelling process and the corresponding tasks, sub-divided, for example, into data collection, static geological earth modelling, dynamic modelling for risk assessment and the characterization of the storage behaviour and sensitivities. The dynamic modelling shall provide insight into a large number of specified parameters and processes, like the pressure and temperature of the storage formation, the areal and vertical extent of the CO2 plume, the CO2 trapping mechanisms, the displacement of formation fluids and many other aspects. This EU Directive required member states to implement these measures by 2011. The German law, the KSpG, takes on the specifications of the EU CCS Directive and, particularly in its Annex 1, again addresses the tasks and objectives of static and dynamic modelling in detail.

Empirical studies on the processing and usage of simulation data at the science-policy interface are rare. However, a German case study in the area of carbon capture and storage has investigated how policy makers and stakeholders perceive, process and use simulation data, based on the example of geoscientific CCS. The following presentation is a short overview of the study and delivers some empirical insights into how simulation impacts policy-making. The research objective centred on the identification and analysis of empirically backed use patterns among policy makers and stakeholders dealing with the topic of CCS in Germany. CCS was chosen as a case study because computer simulations are of major importance within the technology development stage of the CCS chain. The political CCS framework has been set by European and national policy-makers and consists, on the one hand, of the EU CCS directive (EU CCS Directive 2009) and, on the other, of the national implementation of the directive within the German federal carbon dioxide storage act (KSpG 2012). Both regulatory acts consider geo-scientific simulation a relevant instrument for CCS site selection and monitoring purposes. The case study was carried out on the basis of 19 semi-structured interviews with stakeholders and experts in the German CCS field. The key research question focussed on finding empirical evidence on how policy makers and stakeholders perceive, process and use data and knowledge from CCS simulation studies. Conceptually, the research approached the impact of geo-scientific simulations in two ways. On the one hand, the key focus was on analysing the processing and use of one specific study within the 19 interviews. The so-called “Regional pressure impact of CO2 storage in saline aquifers” study (Schäfer et al. 2010) was jointly conducted by the Federal Institute for Geosciences and Natural Resources (BGR) and the Department of Hydromechanics and Modelling of Hydrosystems at the University of Stuttgart. It was published online in 2010 as a PDF report. By means of simulations, the study investigated the spatial pressure dispersion caused by the injection of CO2. Spatial dispersion was considered for a diverse set of scenarios with, for instance, differing boundary conditions. At the peak of the CCS debate in Germany in 2011, the report was followed attentively by German stakeholders and the CCS expert community. On the other hand, the interviews addressed CCS simulation in a broader and more general sense in cases where the Schäfer et al. (2010) study was barely or not at all known to the interviewees. The scope was confined to geo-scientific simulations in the area of carbon storage, leaving the capture and transport stages aside. As such, issues like characterizing potentially suitable storage sites, storage capacity, CO2 subsurface behaviour and risk assessments of leakages or brine migration were discussed during the interviews.

Fig. 6.4 Conceptual framework for analysing simulations at the science-policy interface, adapted from Scheer (2015)

The study was based on an analytical framework conceptualizing how policy makers perceive, process and use simulation data. The framework summarizes the outcome of a literature review in the field of policy advice and communication theories (see Fig. 6.4). It considers both the knowledge role and the communication role of simulations as the fundamental features of simulations at the science-policy interface. The knowledge production role emphasizes the fact that computer simulations produce and archive scientific knowledge. The communication role states that all actors directly or indirectly involved with modelling communicate its results across the science border to decision makers and policy makers. The analytical framework covers these functional roles with three axes, differentiating a processing, an evaluating and a use dimension.
Drawn from communication science, the approach is based on a message-receiver perspective. The first axis, the processing of simulations, stresses the physical stages of information processing. It distinguishes the processing steps of information perception, selection and reception. Within the perception stage, simulation results compete with all other types of surrounding information in the policy-making arena—be it scientific or other. The selection stage follows, in which simulation results gain attention among receivers against competing information. Decision makers and stakeholders select modelling results for deeper processing and consideration. Finally, in the reception stage, people process the content through thorough examination. This physical information processing is simultaneously accompanied by the phases of evaluation and usage. While the framework analytically separates these three phases, in reality they are closely interlinked, with repeating iterative loops. Information receivers continuously and iteratively evaluate and use perceived modelling results. The evaluation phase consists of examining simulation as a knowledge instrument, that is, the simulation tool as a method to produce robust knowledge is judged against the two other scientific methods of knowledge production (experimentation, theory). A second evaluation focus is on assessing the process of composing and running simulations, while a third cornerstone evaluates the simulation results against aspects of validity and reliability. The usage phase of the framework asks about the different impact dimensions of modelling. Following the literature review presented above, we consider the four impact categories of conceptual, instrumental, strategic and procedural usage as significant. The interviews delivered empirical data and input for coding categories and specifications for the first axis, the processing of simulations. Table 6.2 shows the results of the physical simulation processing for the stages of perception, selection and reception. The data confirm the information competition hypothesis for the perception stage. Scientific simulation data are in competition with a great variety of other scientific and non-scientific information. We observed specifications of extensive, partial and no perception patterns. Members of Parliament, for instance, did not perceive modelling as a stand-alone scientific method—there was no perception among them. On the other hand, several interviewees representing stakeholders and public administration officers were aware of several specific CCS research studies that, among others, used modelling approaches—thus, there was partial perception among receivers. What also became clear is the fact that a small number of studies were very well known among the interviewees. As it seems, these studies formed a kind of canon of expertise. Experts with a geo-scientific and simulation background extensively perceive and follow the state of science and are fully aware of the relevant simulation studies in the field. Communication channels are decisive for information perception and selection. The data show that internet communication, as a quick and easy approach for searching and collecting information, is crucial. Besides, personal communication, emerging policies and institutional networks are essential for policy makers and stakeholders.
The CCS initiative of the European Union, for instance, which resulted in the CCS directive in 2009, had considerable impact at the national member-state level, with stakeholders and administration staffers informing themselves more extensively about the CCS technology and its impacts.

Table 6.2 Categories and specification of processing simulations among stakeholders

Category                 Empirical evidence
Perception               − Extensive perception
                         − Partial perception
                         − No perception
Communication channels   − Use of ICT and literature references
                         − Political decision-making
                         − Personal/institutional networks
Motivation               − Knowledge
                         − Lack of understanding
                         − Instrumentalization
                         − Prevention of hazards
Extent of reception      − Extensive primary reception
                         − Partial primary reception
                         − Secondary reception
Mechanisms               − Individual
                         − Cooperative
                         − Balancing with pre-knowledge and pre-expectation
                         − Adjustments to one’s own position

Source: Adapted from Scheer (2015)

The next category informs on the different types of motivation for why people care about scientific simulation studies. On the most general level, receivers strive for knowledge for a better understanding of the problem. They are simply interested in better understanding the functioning and impacts of CCS. From a policy maker's point of view, the motivation to avert danger, minimize risks and prevent hazards is among the primary motivations. On the other hand, one type of motivation deals with exploiting scientific expertise for personal and strategic reasons. Simulation results, for instance, can be used to substantiate and argue in favour of a particular policy option. One interviewee's statement illustrates the instrumentalization argument remarkably well: “Of course we select only studies which are in line with our argumentation. We only use those studies that back our position”. A reason for not perceiving simulations is related to a lack of understanding, which does not motivate people to actively search for and differentiate between simulation studies. Provided simulation studies are perceived, the extent of reception is a relevant category. Here, the level of geo-scientific and simulation expertise comes into play. Geo-scientific and simulation experts working in government administration at the department level work through the entire body of studies. On the other hand, higher-ranked heads of departments or directors selectively focus on the conclusions, abstracts or summaries of research papers and reports. Policy makers such as Members of Parliament, in contrast, do not work through studies themselves but prefer face-to-face communication with representatives from business, science and society in order to reflect on the current state of science and technology.

The extent of reception is reflected in the mechanisms of information reception. Patterns of physical information processing can be divided into an individual and a cooperative type. Provided specific knowledge on geo-scientific simulation is given, experts individually acquaint themselves with the current state of research, aiming to fully understand all the details of a simulation study. Experts lacking specific simulation knowledge, on the contrary, use patterns of legwork and division of labour, that is, specialists synthesise and condense relevant simulation results and forward them upstream in the hierarchy. When it comes to the cognitive processing of information, we may distinguish two types: balancing against the level of pre-knowledge and pre-set expectations, and adjustment to one's own position. The first mechanism confronts new information with the existing level of pre-knowledge and pre-expectation. Within the phase of information digestion, a cognitive balancing process is at work in which new information is judged right or wrong against the benchmark of pre-knowledge and expectations. The other mechanism sets personally and institutionally mediated attitudes and opinions as the benchmark. As revealed in the interviews, cognitive processing approaches seem to follow a binary perception pattern. According to one interviewee, receivers assess new information as: “this is right or this is wrong, or this is interesting or this is not interesting according to my current state of knowledge”. The binary evaluation mechanism serves to easily classify new information and knowledge by means of bipolar complexity reduction. What can be observed is that adjusting new information to existing opinions and stands on technologies is very probable in highly politicised areas. In an interview with an expert from a business association it reads like this: “It will hopefully [sic!] be demonstrated by studies carried out, chemists will hopefully [sic!] verify that the substances stored underground will mineralise one day”. The axis of evaluating simulations, fed with the empirical results from the interviews, is shown in Table 6.3. Evaluating simulations relates to either the simulation instrument, the simulation processing or its results. Evaluating simulation as a tool, interviewees emphasized the epistemic added value of simulations and the research object as a stimulus for simulation relevance. In the field of geoscience and subsurface matters, experts see modelling as an indispensable method for knowledge production. Theory and experiments have their limitations in geoscience. Experiments in geology predominantly rely on drill core samples. However, drill core experiments are very expensive, produce only selective subterranean knowledge and are not able to deal with the long time scales of geological processes. Computer simulations are a necessary complement for researching these processes, which are otherwise not explorable. They make it possible to describe, represent and mediate geological processes. Considering types of simulations, interviewees clearly differentiated between natural-science and social-science based simulation. Natural-science based simulations were judged far more reliable and robust than social-science simulations. The category of uncertainty assessment of simulation results is an essential evaluation criterion. Based on the interviews, we developed a threefold assessment pattern, which differentiates between an active, a reactive and a selective assessment.

Table 6.3 Assessment of the simulation methods and tools among stakeholders

Category                   Empirical evidence
Simulation as instrument   − Research object determines knowledge instrument
                           − Epistemic added-value through simulations
Types of simulations       − Natural science simulations
                           − Social science simulations
Uncertainties              − Active uncertainty assessment
                           − Reactive uncertainty assessment
                           − Selective uncertainty assessment
Quality of results         − Model-inherent: data, boundary conditions, assumptions, parameter, model, causality, balancing model versus reality
                           − Model-contextual: source, discourse, study comparison, disciplinary knowledge, participation

Source: Adapted from Scheer (2015)

What again becomes clear is that the level of geo-scientific expertise and simulation expertise is decisive for choosing one or the other assessment pattern. When both types of expertise are given, experts choose an active uncertainty assessment and evaluate a broad range of uncertainty issues in depth on their own initiative. Among these issues, they assess, for instance, the quality of the data, the parameters and their values, the model itself, the consideration of scientific laws within the model, the algorithms, the boundary conditions, etc. In case experts have only a moderate level of expertise, they perform a reactive or selective uncertainty assessment. On the one hand, they are reliant on the disclosure and discussion of uncertainties by the modellers (reactive uncertainty assessment). Interviewees with a very low level of geo-science and modelling knowledge were even more selective. They focused on a few chosen criteria to consider uncertainty, such as basic assumptions, boundary conditions and parameters. Thus, these experts selectively checked the initial conditions of a simulation rather than considering process- or output-related issues of uncertainty. The final evaluation category refers to the quality of simulation results. The empirical data provide a broad variety of different quality variables. They are excellently summarized in the following quotation by an interviewee: “The crucial questions always are: who did the simulation, who participated in it, what about the used methodology, and were all currently known facts considered in setting up the simulation”. In order to discuss the set of quality variables systematically, a distinction between simulation-inherent and simulation-contextual quality specifications seems promising. While experts with specific geo-scientific and simulation knowledge predominantly use simulation-inherent variables, experts without a geo-scientific and simulation background rather focus on contextual quality criteria.

Simulation-inherent criteria cover the broad range of issues dealing with the technique of planning, implementing and running a simulation. Relevant quality criteria comprise the evaluation of the simulation data input, the setting of boundary conditions and underlying assumptions, the parameters and their values, the model used, the natural laws and causalities considered and, finally, the balancing of the model against reality. In this way, the quality assessment is carried out from an inside-modelling perspective. The interviews revealed that good-quality simulation data arise from the availability of empirical data with small error ranges, well-defined boundary conditions and clearly stated main assumptions. The parameters used as impact variables in the simulations need to be validated and tested by sensitivity analyses. The goal is to use only high-impact parameters in the modelling process. The modelling code should be rather simple and usable without conflicting with real-world phenomena and cause-impact chains. To meet these requirements, the modellers need to understand the theory and underlying laws of the target system in order to transfer the system adequately into the virtual environment. Balancing modelling results against the target system is one more important quality aspect. Empirical validation of simulation results—if possible—is essential to make simulation data reliable. In other words, as put by an interviewee: “Simulation results are only reliable when they were compared with the reality. A model without empirical validation is an animated cartoon”. In the area of simulation-contextual assessment, the quality is evaluated based on ‘mediated’ criteria surrounding the modelling. In total, the empirical data provide five contextual criteria, that is: the source of the simulation, the reception discourse as reflected within the scientific community, the comparison of simulation results with similar studies, the level of knowledge in the corresponding scientific discipline, and the participation of stakeholders in the modelling process. Judging the source of a simulation is an essential quality criterion. Trust and credibility, independence and neutrality, and the overall reputation of authors, modellers and affiliated institutions are key aspects of source judgment. Even though science is assigned a high level of trust in society, a scientific affiliation is not per se trustworthy, which the following quote from a Member of Parliament illustrates: “is this a professor who has been foreseen as a forthcoming minister for economic affairs by our political opponent, then we would not rely on him in politically controversial topics”. In polarized and politicized debates, scholars and science institutions are effectively assigned a place on the political spectrum and on the range of technology support or opposition. The second quality aspect refers to the so-called reception discourse stimulated by a specific modelling study. If necessary, policy makers and stakeholders closely follow and observe how other experts position themselves towards a newly published study. As a rule, one can frame the following statement: the less disagreement among researchers and the smaller the degree of the expert dilemma, the more decision makers rate the modelling data as valid and of good quality. A related quality criterion refers to the comparison of specific simulation results with comparable research and data from other studies. The benchmark for quality assessment here is the state of the art of research.

Table 6.4 Quality assessment of the Regional Pressure Study

Simulation-inherent criteria       Relevance    Simulation-contextual criteria   Relevance
− Data                             −            − Source of simulation           +
− Boundary conditions              o            − Reception discourse            −
− Assumptions                      o            − Study comparison               o
− Parameter                        o            − Disciplinary knowledge         o
− Model                            −            − Participation                  −
− Causality                        +
− Balancing model versus reality   +

Explanation: + = high relevance; o = modest relevance; − = no relevance. Source: Adapted from Scheer (2015)

In case the state of the art is perceived as very heterogeneous, simulation data are interpreted as being of lower quality—and vice versa. The interviews also hinted at the fact that the general level of consolidation of a scientific discipline is considered. The more consolidated a scientific discipline is perceived to be, the more reliable its modelling results are judged to be—and vice versa. The final contextual aspect deals with the degree of expert participation and involvement in the modelling exercise. The involvement of critical experts and opponents in the process of modelling is an asset for a good quality assessment. An interviewee put this criterion as follows: “You may present the most beautiful models and simulations studies which are totally correct. However, as far as these simulations are not checked and understood by opponents, mistrust prevails”. In a second step, the two sets of simulation-inherent and simulation-contextual criteria were confronted with the reception of a specific study. The aim was to use this case study for a detailed analysis of which criteria were considered and which were not. This became clear when researching the reception of the Regional Pressure Study (Schäfer et al. 2010). The results of this comparison are shown in Table 6.4. The table reveals a selective assessment pattern in which some criteria are widely used while others were of no interest. We used a threefold qualitative relevance indication, differentiating between high, modest and no relevance, backed by the interviews. High relevance indicates broad references, modest relevance refers to only a few references, and no relevance stands for no mention at all within the interviews. What becomes obvious is the fact that balancing the model results against reality, considering the causality, and source evaluation are the dominant assessment criteria. Balancing the model and its results with the target system is a general quality approach to better understand the (non-)evidence of a simulation exercise. The interviewees also took into account the subsurface causality and mechanisms at work when injecting CO2. By contrast, experts and stakeholders did not reflect on the quality of the data input or on the model itself.

Table 6.5 Assessment of usage types among stakeholders

Category           Empirical evidence
Conceptual use     − Singular knowledge instrument in geosciences
                   − Knowledge base for assessing technology potentials and policy options
Instrumental use   − Control and monitoring instrument in CCS regulation and licensing
Strategic use      − Simulation results as evidence base for risks in order to hide economic interests
Procedural use     − Simulations as reference in public debate and citizen communication
                   − Encourage networking in simulation research to strengthen the competitiveness of national science

Source: Adapted from Scheer (2015)

Source assessment plays the crucial role among the simulation-contextual criteria. The large majority of interviewees referred to the study's authors, namely the Federal Institute for Geosciences and Natural Resources (BGR), reflecting on its scientific status and reputation. As it seems, source assessment is the most important contextual evaluation strategy. Other criteria, such as comparing simulation results with similar studies and assessing the level of disciplinary knowledge, were still considered, but on a much lower level. In contrast, participation and the reception discourse did not play any role. The axis of using simulations is the final dimension of the analytical framework. The results from the interviews are depicted in Table 6.5. The interviews revealed a high significance of conceptual use. Many interviewees mentioned the Regional Pressure Study as relevant for better understanding the impact of pressure dispersion when injecting CO2. Interviewees stressed the conviction that modelling in the area of CCS is indispensable for ex-ante impact assessment, in order to understand possible future developments and to simulate different policy options. Therefore, it is essential to pre-assess technology potentials by means of simulations before stepping into pilot and demonstration projects. Instrumental use of modelling results was identified in several regulatory acts at both the European and the German federal level. Within the European CCS directive (EU CCS Directive 2009), geo-science modelling is an essential feature for site characterization, controlling and monitoring. The same holds for the German CCS legislation, which replicated the EU annexes and brought them into force in 2012. Both regulations detail the necessity and application of modelling requirements. Modelling is required for the characterization and assessment of the potential storage complex and the surrounding area (Annex I), for establishing and updating the monitoring plan and for post-closure monitoring (Annex II), for analyzing the dynamic storage behaviour, for sensitivity characterization and for carrying out the risk assessment. As a quality criterion, the annexes stipulate that predictive simulation results need to be continuously compared with the collected empirical data in order to update the monitoring plan—thus, the quality issue of ‘balancing model with reality’ entered the regulatory framework.
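
As a purely illustrative sketch of what such a 'balancing model with reality' check might look like in practice, the following Python snippet compares a predicted pressure time series at a hypothetical monitoring well with (synthetic) observed data and flags when the mismatch exceeds a chosen tolerance, which would trigger an update of the model and of the monitoring plan. The data, the metric and the threshold are assumptions for demonstration only; the cited regulations prescribe the comparison itself, not this particular implementation.

```python
import numpy as np

# Synthetic example data: predicted vs. observed overpressure [bar] at a
# hypothetical monitoring well, sampled monthly over two years.
months = np.arange(24)
predicted = 12.0 * (1.0 - np.exp(-months / 6.0))          # simulated pressure build-up
observed = predicted * 1.15 + np.random.default_rng(0).normal(0.0, 0.3, months.size)

# Normalised root-mean-square deviation as a simple mismatch metric
nrmsd = np.sqrt(np.mean((predicted - observed) ** 2)) / np.ptp(observed)

TOLERANCE = 0.10  # assumed acceptance threshold (10 % of the observed range)
print(f"NRMSD between prediction and monitoring data: {nrmsd:.2%}")
if nrmsd > TOLERANCE:
    print("Mismatch exceeds tolerance: recalibrate the model and update the monitoring plan.")
else:
    print("Prediction consistent with monitoring data within the chosen tolerance.")
```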

The interviews also shed light on the strategic use of modelling. One example referred to the so-called amine problem in connection with a CCS plant planned at Mongstad in Norway. A decision had been made in 2006 to set up a full-scale capture facility at Mongstad. However, the decision was revised and the construction cancelled in September 2013. The main argument put forward by the responsible Norwegian ministry referred to concerns about possible health and environmental risks related to the amine capture technology. It is interesting to see that the mentioned health risks due to amine dispersion were based on a simulated dispersion model. However, interviewees assumed the amine problem to be an argument put forward to hide economic interests. In their opinion, the true reason for cancelling the project was that the expected costs were too high. The procedural use of simulation refers to modelling as a reference base in public debates and in stakeholder and public communication with authorities. The case of the Regional Pressure Study revealed a procedural use pattern put forward by the contracting authorities. The funding bodies of the study—the Federal Ministry of the Economy and the Federal Ministry of Research—aimed at demonstrating that high-quality research capacity and expertise regarding pressure dispersion is available in Germany. This motive gained importance during the tendering process. In the beginning, the two ministries planned to award the contract to a well-reputed research group in the Netherlands. In the further course of the process, however, they decided to contract a German-based research cooperation in order to strengthen Germany as a centre of research. Taking together the results presented above on how decision makers and stakeholders process, evaluate and use simulation data, we draw the following conclusions. A key conclusion refers to the type of policy makers and decision makers. It obviously makes a difference for the processing patterns whether a high level of expertise is available or not. The level of simulation and geo-scientific expertise and the institutional and organizational involvement are key features determining the pattern of simulation processing. On this basis, we distinguish between what we call knowledge experts and role experts. Knowledge experts have highly developed geo-scientific and simulation-based expertise. They work as heads of departments or senior experts in subordinate state agencies or interest groups, or as external scientific advisors. Knowledge experts deal almost exclusively with the CCS topic or at least with subsurface topics in general. Role experts, on the contrary, usually work as top-level staff in ministries or non-governmental organizations, or as Members of Parliament. As such, they deal with the topic of CCS as one among many others in their daily work, be it climate policy, business development, energy policy, sustainable development or the like. Knowledge experts focus on their geo-scientific and simulation-based expertise, whereas role experts largely consider socially and institutionally mediated interests, world views and values when processing carbon storage simulations. While the former, generally speaking, balance new simulation results against their pre-existing level of knowledge, the latter adjust them to their own positions and institutionally mediated expectations.
A second conclusion can be drawn with reference to the high relevance of subsurface modelling for policy making. The case study on the Regional Pressure Study
clearly revealed that policy makers and stakeholders have to consider scientific environmental subsurface modelling, provided such studies are available and perceived, alongside the great variety of competing information. The truth claim of science, also attributed to scientific modelling, is attractive and cannot be ignored. Scientific results may be contested and disputed by policy makers and stakeholders, but they cannot simply be dismissed. Science as a legitimacy resource remains essential and serves as a key point of reference and orientation. However, it is also clear that science itself is not the one and only benchmark for policy decisions. Scientific results and subsequent policy recommendations need to meet the social acceptability and responsiveness of decision makers. In case simulations are policy relevant, policy makers apply a wide range of reception patterns to process geo-scientific simulations. This includes processing patterns based on, for instance, the division of labour, the use of different uncertainty assessment strategies and the consideration of a wide range of criteria to evaluate the quality of simulation data. One may conclude that an all-embracing, systematic misperception and misunderstanding of simulations among decision makers is hard to find in the case study presented. As a rule, the interviewees did not conceive of models as mere answering machines and truth generators, taking simulation data as clear-cut point predictions. However, non-experts in geoscience and simulation are much more likely to adjust new simulation data to their existing framework of attitudes, values and opinions. This, in fact, leaves considerable room for misconceiving and misunderstanding simulations.

6.3.2 Participatory Modelling Approaches of Brine Migration

Any effort to investigate and develop the CCS process unavoidably interferes with a growing public debate, which has meanwhile led to global phenomena like ‘Fridays for Future’; yet it remains important to direct the focus to the rational options and decisions related to energy policy. Recent research on CCS has often been perceived as supporting decision making on the operational and political level. For example, the AUGE project (Streibel and Schöbel 2013), funded by German funding agencies, was dedicated to compiling the scientific results of CCS-related research in Germany within the political context of the European CCS Directive (EU CCS Directive 2009) and its German implementation. When we talk about modelling, there are various fields where modelling contributes to decision making. On the one hand, modelling of the physical processes in the subsurface allows for risk assessment and the quantification of uncertainties, eventually assisting political frameworks. On the other hand, modelling of economic issues supports investors and operators in their decisions. Scientific advancement in contested and politicised fields such as CCS inherently involves a number of challenges. Scientific results need to be understood, and the value of information and the scope of the respective studies need to be clear to both researchers and decision makers.
When modelling is used as a support tool, the expectations of both groups regarding the value of the actual support need to be balanced. Decision makers need to understand the potential and the limitations of the tool in order to assign a priority to it in their process of decision making. At the same time, the researchers, as the developers of the modelling tool, can use feedback from decision makers to re-design or adapt their tool to the specific requirements. Large-scale projects of societal relevance, such as CCS projects, are impossible without public acceptance. This includes a detailed understanding of the involved risks, possible hazardous events and scenarios as well as the benefits on the other side of the coin (Scheer et al. 2014). Involving stakeholders already at an early stage in the planning phase of a project (Scheer et al. 2014), where modelling is used as a support tool, leads to the concept of participatory modelling. Kissinger et al. (2017) and Scheer et al. (2017) describe the application of this concept in the context of CCS. Participatory modelling essentially fosters the integration of external expertise in the development and the application of models (Bots and van Daalen 2008; Dreyer et al. 2015; Dreyer and Renn 2011; Röckmann et al. 2012). The approach is generic and open to various methods, and its application to the geosciences is still new. Following Hare et al. (2003), we understand participatory modelling as the integration of stakeholders or external experts in the phases of conceptual development of a simulation model and of its usage or application. Effectively, this approach extends the group of people involved in modelling to people without modelling expertise of their own. The involvement of experts, stakeholders or citizens has mainly been practised in decision-making processes for increasing legitimacy and better representing the diversity of conceptions (National Research Council 2008). The demand for such approaches has increased with the advent of more complex technologies associated with significant risks and uncertainties (Jasanoff 2005). Examples are found in the literature, e.g. in Fischer (1995, 2000), National Research Council (2012), Petts et al. (2003). As experts produce ever more data, as in simulation, it becomes more and more challenging to interpret the facts therein. ‘Facts’, expressed in numbers, become softer, while ‘values’, in terms of knowledge contributed through involvement, become harder within participatory modelling (Burgess and Chilvers 2006; Funtowicz and Ravetz 1992; Stirling 2003). Research on participatory modelling thus pursues distinct goals (Dreyer and Renn 2011). The first one is achieving consensus as well as robust conclusions and recommendations for decision makers. Secondly, there is a focus on the benefits of collective learning processes in the group of stakeholders. Recent studies of participatory modelling have addressed water management, forestry, or land use (e.g. Refsgaard et al. 2005; Antunes et al. 2006; Cockerill et al. 2006; Bogner et al. 2011; Webler et al. 2011). A study of participatory modelling applied to CCS, with particular attention to the migration of brine displaced by the injection of CO2, is described in Kissinger et al. (2017) and Scheer et al. (2017), where essentially two approaches of involvement were taken within the interdisciplinary CO2 Brim project (see Sect. 6.1.1.3). The first approach used face-to-face interviews aimed at collecting the knowledge of experts and stakeholders, with a particular focus on the geological issues related to the selection process for a CCS site and on the physical processes related to the migration of displaced brine. Later on, in a second step of involvement, stakeholders
attended a World Café event and evaluated preliminary scientific results of modelled scenarios presented by the project consortium. The starting position of the project was its focus on brine migration as one of the serious potential hazards related to CO2 storage, as well as on storage capacity as a criterion for site selection. Intentionally, leakage of CO2 and seismic events, which are both also very important, were not particularly addressed in this project. Hazards induced by displaced brine might include water production wells contaminated with salt concentrations above regulatory limits. The initial idea of using real geologic data from a site in Northern Germany was abandoned, and a realistic (but not real) virtual site was derived from a geological model with characteristic features and geological structures representative of the North German Basin. According to Knopf et al. (2010), it is in particular this region which has high capacities for storing CO2 in Germany. The geological model included distinct layers, with a deep saline aquifer as the injection horizon, a caprock, as well as a shallow freshwater aquifer which can be viewed as a drinking water resource. The flow and transport model considered the coupling of flow in the deep saline aquifer with the shallow aquifers. Both the construction of the geological model, a static model, and the dynamic numerical simulation of flow and transport are considered here as modelling, a term defined by the KSpG (KSpG 2012), the German national law for the storage of CO2. Stakeholder involvement was thus possible already during the construction of the geological model and the selection of relevant scenarios for the dynamic modelling, which typically takes place at a rather early stage of a storage project. Contributions from stakeholders to various aspects were expected; this included discussions on the physical processes and their relevance for potentially hazardous situations which may be facilitated or prevented by certain geological features. The participation in the modelling process at the early stage of the CO2 Brim project thus addressed primarily (i) the assessment of the geological model as proposed by the ‘technical scientists’, (ii) a reflection on and, where possible, an improvement of the scenarios considered relevant for brine migration, as well as (iii) a review of preliminary results from first scenario simulations. For the first step, the technical scientists prepared a sketch of the geology considered realistic for the study. This was used for the discussion, moderated by the social scientists, and the experts and stakeholders were encouraged to contribute their own knowledge and opinions with respect to the relevant mechanisms and scenarios of brine migration as well as related consequences or hazards. The sketch thereby served as a stimulus for the discussions and also allowed the interviewed experts to supplement their opinions and explanations with drafted illustrations on paper. The sketch is shown in Fig. 6.5, including hand-drawn explanations and improvements by an interviewee. In a next step, the collected expert knowledge from these interviews was used for a re-evaluation by the technical scientists and resulted in a final geological model and a selection of scenarios for the subsequent numerical modelling. The results from these scenario simulations were then later on again discussed with external experts in a workshop. This approach of participatory modelling, employed here for CCS-related brine migration, produced a number of interesting learnings. An overview is presented in the following.

Fig. 6.5 Sketch of a geological model used for interviews with experts

A number of interesting and also controversial issues were raised in the interviews. For example, the concept of ‘damage’ was discussed and questioned. In this case, it refers to brine contaminating drinking water. Some experts preferred a more ‘absolute’ definition of damage in the sense that any intrusion of salt water into fresh water is a damage. Such a position does not tolerate any risk. In contrast, some experts took a more relativistic view and favoured a definition of damage based on threshold values exceeded by intruded brine. In the latter case, the expected volumes of brine in a freshwater aquifer, their salinities and the corresponding probabilities need to be quantified to assess the damage. In the interviews, there was no consensus on the concept of ‘damage’. Different backgrounds or interests on the part of the experts may have shaped their perception and thinking. The learning from this is that there exist distinctly different concepts of damage, hazard and risk amongst the stakeholders and experts. They need to be considered in moderating the debate. A rather unanimous agreement among the experts emerged regarding the potential pathways of brine migration. Pathways induced by the geology need to be distinguished from anthropogenic influences, like boreholes or wells. Amongst the geological features mentioned were faults, salt domes or diapirs, imperfect seals and caprocks and, specifically for the North German Basin, a non-continuous Rupelian clay barrier. The relevance of hazards related to geological features was considered higher. It was argued that man-made features, like boreholes, are usually better known and thus easier to deal with. Also, the amounts of brine migrating through a leaky well were thought to be rather small.
The major hazards associated with displaced brine were named as vertical migration into shallower aquifers and the corresponding contamination of fresh groundwater, as well as pressure increases that may induce uplifting. Individual arguments and opinions in the interviews stated, for example, that the site-specific context needs to be considered instead of declaring vertical brine migration a general hazard, or that brine migration should be seen more in a legal context, along with definitions of contamination and the collection of data. The question of prioritising different values was raised, where water is one amongst others, like people or biota, etc. Most experts agreed that one should seek consensus over a definition of target variables. Mentioned in this context were, for example, the TDS (total dissolved solids) or the electrical conductivity (which depends on the salt content) as cumulative variables to measure increased salt concentrations, or specific ions, or thresholds for pressure changes. Regarding the sketch itself, there were a number of concerns raised by the interviewed experts. Some considered it too strong a simplification to be useful for discussing realistic and complex problems. Yet, they admitted that at an early stage of a project the geological model can only be simplified. Another issue addressed the boundaries of the model. While the geological sketch did not give information on model boundary conditions, it was unanimously agreed that the results of simulated brine migration will be very strongly affected by whether the boundary is open or closed. Then, some experts pointed out that one abandoned well, as in the sketch, would not be sufficient, or that water should not be produced from a well close to windows in the Rupelian clay barrier. The Rupelian barrier itself was addressed in more detail. For example, pressure changes may induce pathways along the barrier or through it where it is non-continuous. The missing third dimension of the salt diapir raised concern among some experts, since they were not able to determine its shape, which would influence the expected flow of CO2 and water. Furthermore, the fault zones were noticed to be in the wrong place, not big enough and not specified well enough. It was mentioned that, according to the new German law on CCS (KSpG 2012), evidence of at least a two-fold hydraulic barrier is required. Finally, it was proposed by some experts that the point of injection might be better placed underneath the Zechstein, i.e. below the bottom layer of the model that was presented to the experts. Alternatively, injection might be done at the top of the anticline instead of at the position in the flank which was suggested in the sketch. Let us note once again that the interviews with the experts were led by the social scientists, with the technical scientists intentionally absent. The social scientists, in their role as moderators, summarised the collected recommendations from the expert interviews and passed them on to the technical scientists, i.e. the modellers. Subsequently, every single expert recommendation was reviewed and the modellers decided how to deal with it in continuing their work in the project, i.e. where to revise the initial approach. The recommendations of the experts were grouped into four categories:
(1) Recommendations that were already considered in the modelling prior to the expert input.
(2) Recommendations that had been considered before but were implemented only after the input from the experts.
(3) Recommendations which were not on the original agenda but were included during the involvement process.
(4) Recommendations without implementation, for one of two reasons: too much effort within the frame of the project, or not prioritised after evaluation by the modellers.

Table 6.6, continued in Table 6.7, summarises the feedback and recommendations collected from the experts and gives a brief comment on each. What are the main findings and conclusions from this participatory modelling approach? First of all, we can state that the concept of joint social-science and natural-science research has fostered mutual scientific benefits and learnings, and it has proven a useful approach to advance CCS-related research. The ‘technical’ results of the brine migration study within the CO2 Brim project were published in Kissinger et al. (2017) after a number of revisions of the geological model and of the scenarios for the dynamic modelling. Thus, we can view the results of the project as two-fold: (i) the learnings from the participatory modelling and (ii) the scientific insights on brine migration due to CO2 injection. Among the main conclusions belonging to (ii) are the following. Increased values of salt concentration in freshwater aquifers are typically a local phenomenon in regions where the salt concentration was already high. This means that the initial state needs to be well characterised in order to predict increases in salt concentration. A further clear conclusion can be drawn with respect to boundary conditions. Defining a boundary as open or closed has a huge impact on the pressure field that develops during the injection, which is eventually the driving force for the directions of brine displacement. More details and a discussion of the scientific results are given in Kissinger et al. (2017). There are a number of insights related to the process of participatory modelling itself, denoted above with (i), some of them very specific to the particular topic and its timing. The CO2 Brim project started in 2012. At that time, the public debate on CCS was clearly beyond its peak, and the informed public and relevant stakeholders were no longer expecting to see large-scale CCS projects in Germany in the foreseeable future. The regulatory framework (KSpG 2012), passed in 2012 whilst fierce opposition from many sides against the implementation of CCS was dominating the discourse, was not adequate for investors to advance the technology. Of course, this reduced the incentives for stakeholders to participate in an involvement exercise on CCS-related research. Nevertheless, the project consortium managed to receive the required attention for the project to finally achieve its scientific goals. Helpful in this regard was also an article in a renowned German newspaper (Schrader 2014). The topic of CCS was highly politicised at the time, which effectively prevented the recruitment of certain groups of stakeholders. In this case, the field of drinking water was not represented due to unwillingness to participate. Another difficult issue was the decision whether to use a reference to a real site or, as was done, a virtual but realistic site. The expert group included people from different fields and backgrounds within the CCS and geosciences community: scientists, regulators, public authorities, and representatives from industry and from NGOs. Not included were stakeholders with a regional or local relation to the project, as would have been necessary had it been a real site. Thus, there were no actors such as citizen initiatives or environmental groups with local interests. The reason for this choice was the expected maximisation of valuable scientific input and the minimisation of politicised bias.


Table 6.6 Overview of recommendations from stakeholders and category of revision, complemented by brief comments

Recommendation | Category | Comment

Brine migration: general issues
Absolute versus relative damage | 1 | Interpretation of results needs to be relative to initial salt concentrations; zero impact not possible
Man-made versus geological hazards | 4 | Stakeholders: anthropogenic features less important

Geological model issues
Model simplicity | 1 | Decision by modellers: pathways representative for the NGB are considered, i.e. fault zone at flank, 'windows' in the Rupelian barrier; further pathways, e.g. leaky wells, render the setup too complex for meaningful participatory modelling
Model boundaries | 3 | Extended domain (100 km), thus conditions as in an infinite aquifer
Rupelian clay barrier | 1 | Relevant barrier layer with discontinuities at hydrogeological windows
Fault zones and fractures | 1 | Include permeable fault connecting the injection horizon with the shallow aquifer
Injection point | 4 | For large-scale brine migration, the exact position of the injector has minor influence, thus fixed position

Scenario issues
Boundary conditions | 1+3 | Variation of lateral boundary conditions (infinite aquifer, no-flow and constant pressure)
Space dimensions | 1+3 | Extended model domain (100 km) allows more realistic lateral boundary conditions (infinite aquifer)
Variable layer permeabilities | 1 | Permeability variations of the important Upper Buntsandstein barrier
Injection points and volumes | 4 | No variable injection volumes/rates considered; inter- or extrapolation (superposition) from results sufficient
Pressure management | 4 | Complex, many degrees of freedom, thus beyond the scope of this study
Grid discretization | 4 | Refinement near geological weak points desirable, but computationally too costly; comparing results to an analytical solution using a simplified geological model showed acceptable agreement for this grid resolution

The expert group included people from different fields and backgrounds within the CCS and geosciences community: scientists, regulators, public authorities, and representatives from industry and NGOs. Not included were stakeholders with a regional or local relation to the project, as would have been necessary if it had been a real site. Thus, there were no actors such as citizen initiatives or environmental groups with local interests. The reason for this choice was to maximise valuable scientific input and to minimise politicised bias.


Table 6.7 Continued: overview of implementation and revision of stakeholder input

Recommendation | Category | Comment

Numerical simulation issues
Spatial dimension | 3 | Extended model domain (100 km) allows more realistic lateral boundary conditions (infinite aquifer)
Permeable salt wall flank (fault zone) | 3 | Permeability variations along the salt wall make sense; sensitivity of leakage depending on fault zone permeability is investigated
Brine injection | 1 | Volume-equivalent injection rate of brine instead of CO2
Pressure evolution | 1 | Considering compressibility of solid and fluids; infinite aquifer
Identification of areas prone to salinization | 1+2 | Spatial distribution of flow rates per unit area and increased salt concentration
Variable-density flow | 2 | Density and viscosity are both functions of salt concentration
Groundwater recharge | 3 | Groundwater recharge for the top aquifers to establish more realistic flow conditions in the shallow formations
Multiple injection sites with overlapping pressure | 4 | Beyond the scope of this study; computationally costly; requires a basin-scale model of the North German Basin which is not available yet

Source: Adapted from Scheer et al. (2017)

It remains a matter of discussion to what extent external experts can be involved in the processes of design and construction of the models used in a participatory approach. Typically, the dynamic flow model already exists and is rarely developed specifically for the participatory modelling, while the geological model is usually a specific effort for a project. In this particular study, the involvement in the construction of the geological model comprised recommendations for the improvement of the initial idea of a virtual but realistic geology that was provided to the experts. The recommendations and comments of the stakeholders clearly helped to shape the final version of the geological model. Later on, this became the basis for the brine migration scenarios, which were also improved, starting from initial proposals of the project consortium, by the expert knowledge contributed in the participatory process. While this might be perceived as 'only a fine-tuning' without real influence on the model construction, it still affected the major decisions on scenarios, boundary conditions, and geology. Uncertainties due to geological features or due to different choices of scenarios are usually as important as, or sometimes even more important than, the choice of the 'correct' dynamic flow model, e.g. Walter et al. (2012, 2013), Class et al. (2009).

To conclude, participatory modelling has been successful in improving a modelling approach on brine migration. Knowledge was exchanged in both directions, from external experts to project-internal scientists and the other way round. The framework consisted of three basic groups of actors:


(i) the social scientists as moderators, who were here part of the project consortium; (ii) the technical scientists, i.e. the modellers; and (iii) the stakeholders and experts. In order to make this constellation of actors beneficial for a project, the modellers need to be open to the feedback and input they receive from the experts. This is not always easy, since modellers typically have very profound insights into their models and their limitations. Therefore, the involved experts need to have a certain level of expertise in order to be able to contribute; this sets limits to the group of potential experts. On the other hand, the stakeholders need to accept the framework of the participatory process, the moderators, and also the constraints on their impact on the outcome of the process. It is clear that these conditions need to be known beforehand by all actors. Importantly, the social scientists as moderators need to keep their two-fold role well balanced. They have to set the framework with profound knowledge and expertise in the social-science methods that are employed, and, where necessary, they need to act as translators between the stakeholders and the modellers. Research questions are sometimes a matter of tricky detail, which needs to be translated into an appropriate topic of discussion for stakeholder involvement.

6.4 Selected Model-Based Illustrations: CO2 Plume Shape Development and Convective Mixing

The aim of this section—and Sects. 7.4, 8.4 and 9.2.4—is to illustrate some of the relevant processes of the topics that are addressed in this book. Strongly simplified generic scenarios have been chosen, as they are implemented for educational purposes in the dumux-lecture module that was briefly introduced in Sect. 4.3.3. These examples are reduced in complexity in order to allow very fast calculations, so that students (or, more generally, users) can explore the influence of initial conditions, boundary conditions and all kinds of model parameters on the basic system behaviour. As outlined at the end of Chap. 1, these examples can all be downloaded and tested by the interested reader. In the following, the locations of code and parameter files are given as subfolders of the module dumux-lecture.

The topic we address here is geological carbon sequestration or, more precisely, the injection of CO2 into saline aquifers. We want to highlight two generic scenarios. The first focusses on the development of the CO2 plume in the formation, which is relevant for different reasons, e.g. to estimate storage capacities. The second scenario is concerned with convective mixing and solubility trapping, which is extremely relevant for the permanent immobilisation of the injected CO2. All the parameter values, initial and boundary conditions of these examples can be found in the subfolders lecture/mm/co2plume for Sect. 6.4.1 and lecture/mm/convectivemixing for Sect. 6.4.2. We summarise them here for convenience and a better overview. Disclaimer: not all the numerical values for parameters and boundary conditions are applicable to realistic sites.


Table 6.8 Boundary conditions

Section | Range (m) | Boundary type | Water mass balance | CO2 mass balance | Energy balance
Left | 0–30 | Neumann (inj.-rate) | q_w = 0 | q_CO2 = 1/(2·30) g/(s m) | q_E(313.15 K)
Left | 30–60 | Neumann (no-flow) | q_w = 0 | q_CO2 = 0 | q_E = 0
Right | 0–60 | Dirichlet | p_w = const | x_CO2 = 0 | T = 283.15 K
Bottom | 0–200 | Neumann (no-flow) | q_w = 0 | q_CO2 = 0 | q_E = 0
Top | 0–200 | Neumann (no-flow) | q_w = 0 | q_CO2 = 0 | q_E = 0

Table 6.9 Initial conditions (t = 0)

Subdomain | p_w | x_CO2 | Temperature
A | 1.5e7 Pa (top) to 1.56e7 Pa (bottom) | 0 | 283.15 K

Table 6.10 Model input parameters

Parameter | Symbol | Unit | Value
Permeability | k | m^2 | 1e-13
Porosity | φ | – | 0.3
Entry pressure | p_e | Pa | 5e3
Brooks-Corey λ | λ | – | 2
Salinity | – | kg/kg | 0.1
Injection temperature | T_inj | K | 313.15
Grid size | – | cells | 100 × 30

6.4.1 CO2 Plume Shape Development: Injection in a Saline Formation

This scenario assumes a 2D saline aquifer in a rectangular model domain of 200 m × 60 m, which is initially fully saturated with brine. CO2 is injected at the lower part of the left boundary, implemented as a Neumann boundary condition. For details and a complete overview of the boundary and initial conditions see Tables 6.8 and 6.9. A two-phase two-component (compositional), non-isothermal model is applied using the FluidSystem brineco2, see Sect. 4.3.8. Table 6.10 lists the values of the most relevant model parameters.
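The entry pressure and the Brooks-Corey λ in Table 6.10 enter the constitutive relations for capillary pressure and relative permeability. As a minimal, hedged illustration of these standard Brooks-Corey relations—a sketch, not code taken from the dumux-lecture module—the following Python snippet evaluates them for the parameter values of Table 6.10; the residual saturations s_wr and s_nr are assumptions chosen only for this example.

```python
# Minimal sketch of the standard Brooks-Corey constitutive relations with the
# parameters of Table 6.10 (p_e = 5e3 Pa, lambda = 2). The residual saturations
# s_wr and s_nr are illustrative assumptions, not values from the
# dumux-lecture input files.

def effective_saturation(s_w, s_wr=0.2, s_nr=0.05):
    """Rescale the water saturation to the effective saturation S_e."""
    return (s_w - s_wr) / (1.0 - s_wr - s_nr)

def pc_brooks_corey(s_w, p_e=5e3, lam=2.0):
    """Capillary pressure p_c(S_w) = p_e * S_e^(-1/lambda) in Pa."""
    s_e = effective_saturation(s_w)
    return p_e * s_e ** (-1.0 / lam)

def kr_brooks_corey(s_w, lam=2.0):
    """Relative permeabilities (k_rw, k_rn) after Brooks and Corey."""
    s_e = effective_saturation(s_w)
    k_rw = s_e ** ((2.0 + 3.0 * lam) / lam)
    k_rn = (1.0 - s_e) ** 2 * (1.0 - s_e ** ((2.0 + lam) / lam))
    return k_rw, k_rn

if __name__ == "__main__":
    for s_w in (0.3, 0.5, 0.7, 0.9):
        print(s_w, pc_brooks_corey(s_w), kr_brooks_corey(s_w))
```

Plotting these curves over the full saturation range is a quick way for users of the lecture example to see how the entry pressure and λ shape the two-phase behaviour before running the actual simulation.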


Fig. 6.6 Gas saturation (left) and CO2 mass fraction in the liquid water phase (right) after 1.55 · 10^8 s simulated time. Total CO2 injection rate was 0.001 kg/s

Fig. 6.7 Gas saturation (left) and CO2 mass fraction in the liquid water phase (right) after 1.55 · 10^7 s simulated time. Total CO2 injection rate was 0.01 kg/s, thus ten times higher than the value given in Table 6.8

Exemplary model results for this scenario, with the values and boundary conditions as described here, are given in Figs. 6.6 and 6.7, where the latter figure shows results for an increased injection rate. The left plots of Figs. 6.6 and 6.7 show, in both cases, the saturation of CO2 as a separate fluid phase. Both figures show the state of the CO2 distribution after the same amount of CO2 has been injected (at different simulated times!). First of all, it is obvious that the CO2 plumes differ strongly. In Fig. 6.6, for the lower injection rate, the CO2 phase rapidly rises to the top of the formation and then spreads right underneath the caprock—a phenomenon usually denoted as gravity segregation. This looks different in Fig. 6.7: there, the CO2 phase spreads more cylindrically around the injection well and shows less tendency towards gravity segregation. This behaviour is very important, for example, for considerations of storage safety and storage capacity. Regarding the safety of storage, it is clear that the amount of mobile CO2 accumulating just underneath the caprock should be minimised. Concerning storage capacity, it is important to use the available pore space of a geological formation efficiently, see Kopp et al. (2009a, b). The injection regime with the higher injection rate is clearly more efficient here. The more gravity segregation dominates the spread of the CO2 plume, the less efficient is the utilisation of the available pore space. An analysis of the driving forces leads to the finding that it is the ratio of gravitational forces to viscous forces that characterises this behaviour. This can be expressed by using a dimensionless number, the gravitational number, which is defined here as


\[
\mathrm{Gr} = \frac{(\varrho_w - \varrho_{\mathrm{CO_2}})\, g\, K}{\mu_{\mathrm{CO_2}}\, v_{cr}} = \frac{\text{gravitational forces}}{\text{viscous forces}},
\tag{6.1}
\]

where the characteristic velocity of the injection is related to the injection rate (a model input parameter!) via

\[
v_{cr} = \frac{\text{injection rate}}{\phi\, \varrho_{\mathrm{CO_2}}\, A}
\tag{6.2}
\]

with A = 30 m^2 denoting the surface area of the injection well. By viscous forces, we here denote the resistance to flow exerted by the porous medium, where the no-slip condition at each pore surface leads to viscous shear. Viscous forces are therefore increased by an increased injection rate or a reduced permeability. In the comparison of Figs. 6.6 and 6.7, it was the injection rate that was varied and thus changed the gravitational number. The gravitational number depends on fluid properties, which are a function of depth via pressure and temperature as well as of salinity, and on properties of the formation. It can be shown (Kissinger et al. 2014; Scheer et al. 2015; Class et al. 2015a) that the gravitational number can be used as a screening criterion for ranking different available potential storage sites according to the expected efficiency of pore-space utilisation and storage capacity. It is important to note that the gravitational number in a realistic 3D scenario is not constant. With increasing distance from the injection well, the surface area of the CO2 plume increases. Due to the continuity condition, this leads to a decreasing flow velocity of the injected CO2 towards the front of the plume. Thus, the gravitational forces, which stay more or less the same, will always dominate over the diminishing viscous forces with increasing distance to the injection well. This is discussed in great detail, e.g. in Kopp et al. (2009a, b), Kissinger et al. (2014), Scheer et al. (2015).

A second aspect that can be seen in Figs. 6.6 and 6.7 (plots on the right) is that the dissolved amount of CO2 does not depend on the saturation of CO2, but only on whether CO2 as a phase has already reached a certain location. The region where dissolved CO2 is present at the time of these plots is more or less the same as the region where CO2 as a phase is present. There, CO2 dissolves according to the equilibrium conditions that are assumed to be valid; note that we do not consider kinetics in the dissolution. The time scale for dissolved CO2 to spread beyond this region is much larger and will be addressed in the following section.
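To get a feeling for the orders of magnitude in Eqs. (6.1) and (6.2), the following Python sketch evaluates the gravitational number for the two injection rates of Figs. 6.6 and 6.7, using the permeability and porosity of Table 6.10. The brine and CO2 densities and the CO2 viscosity are not given in the tables; the values used below are rough, assumed fluid properties chosen only for illustration and are not taken from the dumux-lecture input files.

```python
# Sketch: gravitational number Gr (Eq. 6.1) for the two injection regimes of
# Figs. 6.6 and 6.7. Density and viscosity values are illustrative assumptions.

G = 9.81            # gravity [m/s^2]
K = 1e-13           # permeability [m^2], Table 6.10
PHI = 0.3           # porosity [-], Table 6.10
A = 30.0            # surface area of the injection well [m^2], see Eq. (6.2)

RHO_BRINE = 1100.0  # brine density [kg/m^3]       (assumed)
RHO_CO2 = 700.0     # CO2 density [kg/m^3]         (assumed, supercritical)
MU_CO2 = 6e-5       # CO2 dynamic viscosity [Pa s] (assumed)

def characteristic_velocity(injection_rate):
    """Eq. (6.2): v_cr = injection rate / (phi * rho_CO2 * A), in m/s."""
    return injection_rate / (PHI * RHO_CO2 * A)

def gravitational_number(injection_rate):
    """Eq. (6.1): ratio of gravitational to viscous forces."""
    v_cr = characteristic_velocity(injection_rate)
    return (RHO_BRINE - RHO_CO2) * G * K / (MU_CO2 * v_cr)

for rate in (0.001, 0.01):  # total injection rates [kg/s] of Figs. 6.6 and 6.7
    print(f"rate = {rate} kg/s  ->  Gr = {gravitational_number(rate):.2f}")

# Check that both figures indeed correspond to the same injected CO2 mass:
print(0.001 * 1.55e8, 0.01 * 1.55e7)  # both 1.55e5 kg
```

With these assumed fluid properties, the lower injection rate yields a gravitational number roughly ten times larger than the higher rate, consistent with the stronger gravity segregation visible in Fig. 6.6; the last line confirms that both figures show the state after the same injected CO2 mass.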

6.4.2 Convective Mixing

The convective-mixing scenario follows on from the previous scenario for cases where gravity has segregated the fluid phases water (brine) and CO2. In such a situation, we typically find an extended thin layer of CO2 as a phase right underneath the caprock.

Table 6.11 Boundary conditions

Section | Range (m) | Boundary type | Water mass balance | CO2 mass balance
Left | 0–5 | Neumann (no-flow) | q_w = 0 | q_CO2 = 0
Right | 0–5 | Neumann (no-flow) | q_w = 0 | q_CO2 = 0
Bottom | 0–5 | Dirichlet | p_w = const | S_CO2 = 0
Top | 0–5 | Neumann (no-flow) for water, Dirichlet for CO2 | q_w = 0 | S_CO2 = const

Table 6.12 Initial conditions (t = 0)

Subdomain | p_w | S_CO2
A | linear between 1.51547e7 Pa (top) and 1.52087e7 Pa (bottom) | 0

Over longer time scales (see Fig. 6.2), the dissolution of CO2 into brine becomes a trapping mechanism of increasing significance. Dissolved CO2 leads to a slight increase in the brine density, resulting in a layer of denser brine—with dissolved CO2—resting above brine of lower density. This layering is unstable, and a fingering regime develops, driven by the density difference induced by the CO2 concentration in the brine and counteracted by diffusion, since diffusion reduces the concentration gradients and thus the driving force for the fingers. This mechanism leads to significantly enhanced mixing compared to diffusion alone and is accordingly denoted as convective mixing.

For the example here, a single-phase multi-component model is applied, again using the FluidSystem brineco2. The chosen setup resembles the one used by Pruess and Zhang (2008). It is assumed that, at the top of the domain, CO2 is present as a phase (not modelled!) and therefore acts as a Dirichlet boundary condition for CO2 dissolved in equilibrium at constant concentration. Another Dirichlet condition is applied for the pressure, with constant pressure at the bottom boundary. All other boundary conditions are Neumann no-flow, as listed in Table 6.11; the initial conditions are given in Table 6.12. A list of relevant model input parameters is provided in Table 6.13.

Figure 6.8 shows the mole fraction of dissolved CO2 at two different times. The wavelike curve in the left plot indicates the beginning of the instabilities and the onset of the fingering regime, which is then displayed more clearly in the right plot at a later time. These results were obtained with a mesh of 30 × 30 grid cells. In comparison, Fig. 6.9 shows results of the same scenario at the same simulated times, but with a refined mesh of 100 × 100 grid cells. There are two important differences to note. The first difference is the onset time of the instabilities. The coarser mesh seems to catch the onset of the instabilities rather exactly at the simulated time of 1.7 · 10^8 s.


Table 6.13 Parameters

Parameter | Symbol | Unit | Value
Depth | d | m | 1400
Permeability | k | m^2 | 3e-13
Porosity | φ | – | 0.2
Tortuosity | τ | – | 0.706
Dynamic viscosity | μ | mPa s | 0.63
Diffusion coefficient | D | m^2/s | 2e-9
Density difference | Δρ | kg/m^3 | 9.16
Grid size | – | cells | 30 × 30 / 100 × 100

Fig. 6.8 The mole fraction of CO2 in brine after a time of 1.7 · 10^8 s (left) and after 2.8 · 10^8 s (right). A mesh with 30 × 30 cells was used

Fig. 6.9 The mole fraction of CO2 in brine after a time of 1.7 · 10^8 s (left) and after 2.8 · 10^8 s (right). A mesh with 100 × 100 cells was used

However, at the same simulated time, the fingers are already clearly developed in the simulation with the finer mesh; thus, the onset time was earlier there. This is indeed remarkable. In reality, small heterogeneities trigger the onset of the instabilities. In the model, there is no heterogeneity, so instabilities can only be triggered by 'artificial' numerical errors, i.e. round-off errors, which are not damped but are rather self-enhancing. The literature provides detailed (in-)stability analyses and estimates for onset times and wavelengths of the developing fingers, e.g. Ennis-King and Paterson (2003a, b).


Yet, numerical models have severe difficulties in reproducing this, as these small examples show. The second difference between the plots concerns the number of fingers that develop. In fact, this shows an even stronger dependence on the discretization length than the predicted onset time. Grid convergence studies show that relatively small discretization lengths are required to achieve convergence in the number of developing fingers. The grid resolution required to properly reproduce such convective mixing regimes usually leads, in 3D field-scale applications, to numbers of unknowns far too large to be computed. It is therefore often more reasonable for long-term simulations to account for convective mixing by effective long-term mass-flux rates of dissolved CO2 in the vertical direction. Such effective rates, depending on permeability, density difference (or concentration) and fluid viscosity, can be found, for example, in the article by Green and Ennis-King (2018). Undoubtedly, a 'correct' representation of the CO2 mass flux induced by convective mixing is crucial for predicting solubility trapping as a mechanism acting on large time scales. Besides the uncertainties in the onset time and the resolution of the fingers, there is also an uncertainty introduced by assuming equilibrium dissolution, which is aggravated further when coarse grids are used.
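A simple order-of-magnitude comparison based on the parameters of Table 6.13 illustrates why convective mixing is so much more effective than diffusion alone: the buoyancy-driven Darcy velocity of the CO2-enriched brine can be compared with the time diffusion would need to spread over the domain height. The Python sketch below is an illustration under stated assumptions only—in particular, the viscosity entry of Table 6.13 is interpreted as 0.63 mPa s, and the domain height of 5 m is taken from Table 6.11—and it is not a substitute for the stability analyses cited above.

```python
# Order-of-magnitude comparison of convective and diffusive transport for the
# convective-mixing scenario (parameters from Table 6.13; viscosity interpreted
# as 0.63 mPa s). This is an illustration, not the analysis of Ennis-King and
# Paterson (2003a, b).

K = 3e-13        # permeability [m^2]
PHI = 0.2        # porosity [-]
MU = 0.63e-3     # brine dynamic viscosity [Pa s] (interpreted value)
D = 2e-9         # diffusion coefficient [m^2/s]
DRHO = 9.16      # density increase of CO2-enriched brine [kg/m^3]
G = 9.81         # gravity [m/s^2]
H = 5.0          # domain height [m], cf. Table 6.11

# Characteristic buoyancy-driven Darcy velocity of the denser brine fingers
u_conv = K * DRHO * G / MU                 # [m/s]
t_conv = H * PHI / u_conv                  # time to sink through the domain [s]

# Time for diffusion alone to spread over the same distance, t ~ H^2 / D
t_diff = H**2 / D                          # [s]

print(f"convective velocity ~ {u_conv:.2e} m/s")
print(f"convective time scale ~ {t_conv:.2e} s, diffusive ~ {t_diff:.2e} s")
print(f"ratio diffusive/convective ~ {t_diff / t_conv:.0f}")
```

With these numbers, density-driven convection transports dissolved CO2 across the 5 m domain a few hundred times faster than diffusion alone, which is precisely why convective mixing is the decisive mechanism for solubility trapping.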

References Abad A, Adánez J, García-Labiano F, Luis F, Gayán P (2010) Modeling of the chemical-looping combustion of methane using a Cu-based oxygen-carrier. Combust Flame 157:602–615 Antunes P, Santos R, Videira N (2006) Participatory decision making for sustainable development - the use of mediated modelling techniques. Land Use Policy 23:44–52 Australian Government (2011) Offshore petroleum and greenhouse gas storage (greenhouse gas injection and storage) regulations 2011. https://www.legislation.gov.au/Details/F2011L01106. Online; Accessed 23-April-2019 Bachu S (2003) Screening and ranking of sedimentary basins for sequestration of CO2 in geological media in response to climate change. Environ Geol 44:277–289 Bachu S (2015) Review of CO2 storage efficiency in deep saline aquifers. Int J Greenhouse Gas Control 40:188–202 Bachu S, Bradshaw J, Bonijoly D, Burruss R, Holloway S, Christensen N, Mathiassen O-M (2007) CO2 storage capacity estimation: methodology and gaps. Int J Greenhouse Gas Control 1:430–443 Bacon DH, Sass BM, Bhargava M, Sminchak J, Gupta N (2009) Reactive transport modeling of CO2 and SO2 injection into deep saline formations and their effect on the hydraulic properties of host rocks. Energy Procedia 1:3283–3290 Bandilla K, Celia M, Leister E (2014) Impact of model complexity on CO2 plume modeling at sleipner. Energy Procedia 63:3405–3415 Bazmi AA, Zahedi G (2011) Sustainable energy systems: role of optimization modeling techniques in power generation and supply - a review. Renew Sustain Energy Rev 15:3480–3500 Bhattacharyya SC, Timilsina GR (2010) A review of energy system models. Inte J Energy Sect Manag 4:494–518 Blesl M, Kober T, Bruchof D, Kuder R (2010) Effects of climate and energy policy related measures and targets on the future structure of the European energy system in 2020 and beyond. Energy Policy 38:6278–6292


Blomgren H, Jonsson P, Lagergren F (2011) Getting back to scenario planning: strategic action in the future of energy Europe. In: Energy Market (EEM), 2011 8th international conference on the European. IEEE, pp 792–801 Bogner A, Gaube V, Smetschka B (2011) Partizipative Modellierung. Beteiligungsexperimente in der sozialökologischen Forschung. Österreichische Zeitschrift für Soziologie 36:74–97 Bots PW, van Daalen CE (2008) Participatory model construction and model use in natural resource management: a framework for reflection. Syst Pract Action Res 21:389–407 Burgess J, Chilvers J (2006) Upping the ante: a conceptual framework for designing and evaluating participatory technology assessments. Sci Public Policy 33:713–728 Cavanagh A (2012) Benchmark calibration and prediction of the Sleipner CO2 plume from 2006 to 2012. Energy Procedia 37:3529–3545 Chadwick R, Zweigel P, Gregersen U, Kirby G, Holloway S, Johannessen P (2004) Geological reservoir characterization of a CO2 storage site: the Utsira sand, Sleipner, northern North Sea. Energy 29:1371–1381 Class H, Mahl L, Ahmed W, Norden B, Kühn M, Kempka T (2015b) Matching pressure measurements and observed CO2 arrival times with static and dynamic modelling at the Ketzin storage site. Energy Procedia 76:623–632 Class H, Ebigbo A, Helmig R, Dahle HK, Nordbotten JM, Celia MA, Audigane P, Darcis M, EnnisKing J, Fan Y, Flemisch B, Gasda SE, Jin M, Krug S, Labregere D, Naderi Beni A, Pawar RJ, Sbai A, Thomas SG, Trenty L, Wei L (2009) A benchmark study on problems related to CO2 storage in geologic formations. Comput Geosci 13:409–434 Class H, Kissinger A, Knopf S, Konrad W, Noack V, Scheer D (2015a) Combined natural and social science approach for regional-scale characterisation of CO2 storage formations and brine migration risks CO2 Brim. In: Liebscher A, Münch U (eds) Geological storage of C O2 - long term security aspects, GEOTECHNOLOGIEN Science Report No. 22. Springer, pp 209–227 Cockerill K, Passell H, Tidwell V (2006) Cooperative modeling: building bridges between science and the public 1. JAWRA J Am Water Res Assoc 42:457–471 DePaolo D, Orr F, Benson S, Celia M, Felmy A, Nagy K, Fogg G, Snieder R, Davis J, Pruess K et al (2007) Basic research needs for geosciences: facilitating 21st century energy systems. Technical report, DOESC (USDOE Office of Science (SC)) Devine-Wright P (2008) Reconsidering public acceptance of renewable energy technologies: a critical review. In: Grubb M, Jamasb T, Pollitt M (eds) Delivering a low carbon electricity system: technologies, economics and policy. Cambridge University Press, pp 443–461 Dieckhoff C, Fichtner W, Grunwald A, Meyer S, Nast M, Nierling L, Renn O, Voß A, Wietschel M (eds) (2011) Energieszenarien: Konstruktion, Bewertung und Wirkung - “Anbieter” und “Nachfrager” im Dialog. KIT Scientific Publishing, Karlsruhe Dreyer M, Renn O (2011) Participatory approaches to modelling for improved learning and decisionmaking in natural resource governance: an editorial. Environ Policy Gov 21:379–385 Dreyer M, Konrad W, Scheer D (2015) Partizipative Modellierung: Erkenntnisse und Erfahrungen aus einer Methodengenese. In: Niederberger M, Wassermann S (eds) Methoden der Experten-und Stakeholdereinbindung in der sozialwissenschaftlichen Forschung. Springer, pp 261–285 Dugas R, Alix P, Lemaire E, Broutin P, Rochelle G (2009) Absorber model for CO2 capture by monoethanolamine - application to CASTOR pilot results. 
Energy Procedia 1:103–107 Dugstad E, Roland K (2003) So far so good: experiences and challenges in the Scandinavian power market. Int J Regul Gov 3:135–160 Eiken O, Ringrose P, Hermanrud C, Nazarian B, Torp T, Høier L (2017) Lessons learned from 14 years of CCS operations: Sleipner, In Salah and Snøhvit. Energy Procedia 4:5541–5548 Ennis-King J, Paterson L (2003a) Rate of dissolution due to convective mixing in the underground storage of carbon dioxide. Greenhouse Gas Control Technol 1:507–510 Ennis-King J, Paterson L (2003b) Role of convective mixing in the long-term storage of carbon dioxide in deep saline formations. In: Society of petroleum engineers annual fall technical conference and exhibition, SPE-84344, Denver


Essandoh-Yeddu J, Gülen G (2009) Economic modeling of carbon dioxide integrated pipeline network for enhanced oil recovery and geologic sequestration in the texas gulf coast region. Energy Procedia 1:1603–1610 Estublier A, Lackner A (2009) Long-term simulation of the Snøhvit CO2 storage. Energy Procedia 1:3221–3228 EU CCS Directive (2009) EU, Directive 2009/31/EC of the European Parliament on the geological storage of carbon dioxide. http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX: 32009L0031. Online; Accessed 23-April-2019 Fischer F (1995) Hazardous waste policy, community movements and the politics of nimby: participatory risk assessment in the USA and Canada. In: Fischer F, Black M (eds) Greening environmental policy. Springer, pp 165–182 Fischer F (2000) Citizens, experts, and the environment: the politics of local knowledge. Duke University Press Förster A, Norden B, Zinck-Jorgensen K, Frykman P, Kulenkampff J, Spangenberg E, Erzinger J, Zimmer M, Kopp J, Borm G, Juhlin C, Cosma C-G, Hurter S (2006) Baseline characterization of the CO2SINK geological storage site at Ketzin, Germany. Environ Geosci 13:145–161 Frykman P, Bech N, Sørensen AT, Nielsen L, Nielsen C, Kristensen L, Bidstrup T (2009) Geological modeling and dynamic flow analysis as initial site investigation for large-scale CO2 injection at the vedsted structure, NW Denmark. Energy Procedia 1:2975–2982 Funtowicz SO, Ravetz JR (1992) Three types of risk assessment and the emergence of post-normal science. In: Krimsky S, Golding D (eds) Social Theories of risk. Praeger, pp 251–274 Furre A, Eiken O (2014) Dual sensor streamer technology used in sleipner CO2 injection monitoring. Geophys Prospect 62:1075–1088 Furre A, Eiken O, Alnes H, Vevatne J, Kiær A (2017) 20 years of monitoring CO2 -injection at Sleipner. Energy Procedia 114:3916–3926 Gapillou C, Thibeau S, Mouronval G, Lescanne M (2009) Building a geocellular model of the sedimentary column at rousse CO2 geological storage site (Aquitaine, France) as a tool to evaluate a theorical maximum injection pressure. Energy Procedia 1:2937–2944 Gessner K, Kühn M, Rath V, Kosack C, Blumenthal M, Clauser C (2009) Coupled process models as a tool for analysing hydrothermal systems. Surv Geophys 30:133–162 Ghosh R, Sen M, Vedanti N (2015) Quantitative interpretation of CO2 plume from Sleipner (North Sea), using post-stack inversion and rock physics modeling. Int J Greenhouse Gas Control 32:147– 158 Gorgon (2019) Gorgon CO2 injection project. https://australia.chevron.com/our-businesses/ gorgon-project. Online; Accessed 23-April-2019 Green C, Ennis-King J (2018) Steady flux regime during convective mixing in three-dimensional heterogeneous porous media. Fluids 3:1–21 Hauff J, Heider C, Arms H, Gerber J, Schilling M (2011) Gesellschaftliche Akzeptanz als Säule der energiepolitischen Zielsetzung. ET. Energiewirtschaftliche Tagesfragen 61:85–87 He X, Lie JA, Sheridan E, Hägg M-B (2009) CO2 capture by hollow fibre carbon membranes: experiments and process simulations. Energy Procedia 1:261–268 IPCC (1994) Technical guidelines for assessing climate change impacts and adaptations. https:// www.ipcc.ch/report/ipcc-technical-guidelines-for-assessing-climate-change-impacts-andadaptations-2/. Online; Accessed 23-April-2019 IPCC (2005) Special report on carbon dioxide capture and storage. In: Metz B, Davidson O, de Coninck HC, Loos M, Meyer LA (eds) Prepared by working group III of the intergovernmental panel on climate change. 
Cambridge University Press IRGC - International Risk Governance Council (2015) Assessment of future energy demand: a methodological review providing guidance to developers and users of energy models and scenarios. https://irgc.org/issues/energy-transitions/demand-anticipation/. Online; Accessed 23-April2019


Jasanoff S (2005) Technologies of humility: Citizen participation in governing science. In: Bogner A, Torgersen H (eds) Wozu Experten? Ambivalenzen der Beziehung von Wissenschaft und Politik. Springer, pp 370–389 Jebaraj S, Iniyan S (2006) A review of energy models. Renew Sustain Energy Rev 10:281–311 Kempka T, Kühn M, Class H, Frykman P, Kopp A, Nielsen C, Probst P (2010) Modelling of CO2 arrival time at Ketzin - Part i. Int J Greenhouse Gas Control 4:1007–1015 Kempka T, Class H, Görke U-J, Norden B, Kolditz O, Kühn M, Walter L, Wang W, Zehner B (2013) A dynamic flow simulation code intercomparison based on the revised static model of the Ketzin pilot site. Energy Procedia 40:418–427 Ketzin (2019) Pilot site Ketzin. http://www.co2ketzin.de/en/home/. Online; Accessed 23-April2019 Kissinger A, Noack V, Knopf S, Scheer D, Konrad W, Class H (2014) Characterization of reservoir conditions for CO2 storage using a dimensionless gravitational number applied to the North German basin. Sustain Energy Technol Assess 7:209–220 Kissinger A, Noack V, Knopf S, Konrad W, Scheer D, Class H (2017) Regional-scale brine migration along vertical pathways due to CO2 injection - part 2: a simulated case study in the North German basin. Hydrol Earth Syst Sci 21:2751–2775 Knopf S, May F, Müller C, Gerling JP (2010) Neuberechnung möglicher Kapazitäten zur CO2 Speicherung in tiefen Aquifer-Strukturen. ET. Energiewirtschaftliche Tagesfragen 60:76–80 Kopp A (2009) Evaluation of C O2 injection processes in geological formations for site screening. PhD thesis, University of Stuttgart Kopp A, Class H, Helmig R (2009a) Investigations on CO2 storage capacity in saline aquifers: Part 1. dimensional analysis of flow processes and reservoir characteristics. Int J Greenhouse Gas Control 3:263–276 Kopp A, Class H, Helmig R (2009b) Investigations on CO2 storage capacity in saline aquifers: Part 2. estimation of storage capacity coefficients. Int J Greenhouse Gas Control 3:277–287 KSpG (2012) Kohlendioxid-Speicherungsgesetz - Gesetz zur Demonstration der dauerhaften Speicherung von Kohlendioxid. https://www.gesetze-im-internet.de/kspg/BJNR172610012.html. Online; Accessed 23-April-2019 Lund H, Mathiesen BV (2012) The role of carbon capture and storage in a future sustainable energy system. Energy 44:469–476 Mahmoud M, Liu Y, Hartmann H, Stewart S, Wagener T, Semmens D, Stewart R, Gupta H, Dominguez D, Dominguez F et al (2009) A formal framework for scenario development in support of environmental decision-making. Environ Model Softw 24:798–808 Martens S, Kempka T, Liebscher A, Lüth S, Möller F, Myrttinen A, Norden B, Schmidt-Hattenberger C, Zimmer M, Kühn M (2012) Europe’s longest-operating on-shore CO2 storage site at Ketzin, Germany: a progress report after three years of injection. Environ Earth Sci 67:323–334 Mathieson A, Midgley J, Dodds K, Wright I, Ringrose P, Saoula N (2010) CO2 sequestration monitoring and verification technologies applied at Krechba, Algeria. Lead Edge 29:216–221 Middleton RS, Bielicki JM (2009) A comprehensive carbon capture and storage infrastructure model. Energy Procedia 1:1611–1616 Möst D, Fichtner W (2009) Einführung zur Energiesystemanalyse. In: Möst D, Fichtner W, Grundwald A (eds) Energiesystemanalyse: Tagungsband des Workshops “Energiesystemanalyse” vom 27. November 2008 am KIT Zentrum Energie, Karlsruhe. Universitätsverlag Karlsruhe, pp 11–31 Mukhopadhyay S, Birkholzer J, Nicot J, Hosseini S (2012) Comparison of selected flow models of the s-3 site in the Sim-SEQ project. 
J Environ Earth Sci 67:601–611 Mukhopadhyay S, Doughty C, Bacon D, Li J, Wei L, Yamamoto H, Gasda S, Hosseini S, Nicot J, Birkholzer J (2015) The Sim-SEQ project: comparison of selected flow models for the s-3 site. Transp Porous Media 108:207–231 National Research Council (2008) Public participation in environmental assessment and decision making. National Academies Press National Research Council (2012) Understanding risk: informing decisions in a democratic society. National Academies Press, Washington


Nordbotten J, Celia M (2012) Geological storage of CO2 - modeling approaches for large-scale simulation. Wiley Oreskes N, Shrader-Frechette K, Belitz K (1994) Verification, validation, and confirmation of numerical models in the earth sciences. Science 263:641–646 Pacala S, Socolow R (2004) Stabilization wedges: solving the climate problem for the next 50 years with current technologies. Science 305:968–972 Petts J, Homan J, Pollard S (2003) Participatory risk assessment: involving lay audiences in environmental decisions on risk. Research and Development Technical Report E2-043/TR/01. The University of Birmingham, Birmingham, UK Pfenninger S, Hawkes A, Keirstead J (2014) Energy systems modeling for twenty-first century energy challenges. Renew Sustain Energy Rev 33:74–86 Pruess K, Bielinski A, Ennis-King J, Fabriol R, Le Gallo Y, Garcia J, Jessen K, Kovscek T, Law D-S, Lichtner P, Oldenburg C, Pawar R, Rutqvist J, Steefel C, Travis B, Tsang C-F, White S, Xu T (2003) Code intercomparison builds confidence in numerical models for geologic disposal of CO2 . In: Gale J, Kaya Y (eds) GHGT-6 conference proceedings: greenhouse gas control technologies, pp 463–470 Pruess K, Zhang K (2008) Numerical modeling studies of the dissolution-diffusion-convection process during CO2 storage in aquifers. https://escholarship.org/uc/item/2fc5v69p. Lawrence Berkely National Laboratory Refsgaard JC, Henriksen HJ, Harrar WG, Scholten H, Kassahun A (2005) Quality assurance in model based water management-review of existing practice and outline of new approaches. Environ Model Softw 20:1201–1215 Rinaldi A, Rutqvist J, Finsterle S, Liu H-H (2017) Inverse modeling of ground surface uplift and pressure with iTOUGH-PEST and TOUGH-FLAC: the case of CO2 injection at In Salah, Algeria. Comput Geosci 108:98–109 Ringrose P, Mathieson A, Wright I, Selama F, Hansen O, Bissell R, Saoula N, Midgley J (2013) The In Salah CO2 storage project: lessons learned and knowledge transfer. Energy Procedia 37:6226–6236 Röckmann C, Ulrich C, Dreyer M, Bell E, Borodzicz E, Haapasaari P, Hauge KH, Howell D, Mäntyniemi S, Miller D et al (2012) The added value of participatory modelling in fisheries management - what has been learnt? Mar Policy 36:1072–1085 Schäfer F, Walter L, Class H, Müller C (2010) Regionale Druckentwicklung bei der Injektion von CO2 in salinare Aquifere. BGR Abschlussbericht Scheer D (2013) Computersimulationen in politischen Entscheidungsprozessen: Zur Politikrelevanz von Simulationswissen am Beispiel der CO2 -Speicherung. Springer Scheer D (2015) In silico science for climate policy: how policy-makers process and use carbon storage simulation data. Environ Sci Policy 47:148–156 Scheer D, Benighaus C, Benighaus L, Renn O, Gold S, Röder B, Böl G-F (2014) The distinction between risk and hazard: understanding and use in stakeholder communication. Risk Anal 34:1270–1285 Scheer D, Konrad W, Class H, Kissinger A, Knopf S, Noack V (2015) Expert involvement in science development: (re-)evaluation of an early screening tool for carbon storage site characterization. Int J Greenhouse Gas Control 37:228–236 Scheer D, Konrad W, Class H, Kissinger A, Knopf S, Noack V (2017) Regional-scale brine migration along vertical pathways due to CO2 injection - part 1: the participatory modeling approach. Hydrol Earth Syst Sci 21:2739–2750 Schrader C (2014) Expressfahrstuhl für Salzwasser. Süddeutsche Zeitung, 13 Oct 2014, p 16 Sim-SEQ (2019) The Sim-SEQ project - understanding model uncertainties in geological carbon sequestration. 
https://eesa.lbl.gov/projects/sim-seq/. Online; Accessed 23-April-2019 Solorio Sandoval I, Morata F (2012) Introduction: the re-evolution of energy policy in Europe. In: Solorio Sandoval I, Morata F (eds) European energy policy. An environmental approach. Edward Elgar Cheltenham, pp 1–22


Spataru C, Drummond P, Zafeiratou E, Barrett M (2015) Long-term scenarios for reaching climate targets and energy security in UK. Sustain Cities Soc 17:95–109 Stirling A (2003) Risk, uncertainty and precaution: some instrumental implications from the social sciences. In: Berkhout F, Leach M, Scoones I (eds) Negotiating change: new perspectives from the social sciences. Edward Elgar, pp 33–76 Streibel M, Schöbel B (2013) The AUGE project: compilation of scientific results to undermine the implementation of the German transposition of the European CCS directive. In: Abstracts, 7th Trondheim CCS conference. TCCS-7 Szulczewski M, Juanes R (2009) A simple but rigorous model for calculating CO2 storage capacity in deep saline aquifers at the basin scale. Energy Procedia 1:3307–3314 Takami KM, Mahmoudi J, Time RW (2009) A simulated H2 O/CO2 condenser design for oxy-fuel CO2 capture process. Energy Procedia 1:1443–1450 Tasianas A, Mahl L, Darcis M, Buenz S, Class H (2016) Simulating seismic chimney structures as potential vertical migration pathways for CO2 in the Snøhvit area, SW Barents Sea: model challenges and outcomes. Environ Earth Sci 75:504 UBA - Umweltbundesamt (2014) Treibhausgasneutrales Deutschland im Jahr 2050. https://www. umweltbundesamt.de/sites/default/files/medien/378/publikationen/07_2014_climate_change_ dt.pdf. Online; Accessed 23-April-2019 van Notten PW (2005) Writing on the wall: scenario development in times of discontinuity. Thela Thesis & Dissertation.com Vasco D, Ferretti A, Novali F (2008) Reservoir monitoring and characterization using satellite geodetic data: interferometric synthetic radar observations from the Krechba field, Algeria. Geophysics 73:WA113–WA122 Vasco D, Rucci A, Ferretti A, Novali F, Bissell R, Ringrose P, Mathieson A, Wright I (2010) Satellitebased measurements of surface deformation reveal fluid flow associated with the geological storage of carbon dioxide. Geophys Res Lett 37:L03303 Viebahn P, Daniel V, Samuel H (2012) Integrated assessment of carbon capture and storage (CCS) in the German power sector and comparison with the deployment of renewable energies. Appl Energy 97:238–248 Walker WE, Harremoës P, Rotmans J, van der Sluijs JP, van Asselt MB, Janssen P, Krayer von Krauss MP (2003) Defining uncertainty: a conceptual basis for uncertainty management in model-based decision support. Integr Assess 4:5–17 Walter L, Binning P, Oladyshkin S, Flemisch B, Class H (2012) Brine migration resulting from CO2 injection into saline aquifers - an approach to risk estimation including various levels of uncertainty. Int J Greenhouse Gas Control 9:495–506 Walter L, Binning P, Class H (2013) Predicting salt intrusion into freshwater aquifers resulting from CO2 injection - a study on the influence of conservative assumptions. Adv Water Res 62:543–554 Webler T, Tuler S, Dietz T (2011) Modellers’ and outreach professionals’ views on the role of models in watershed management. Environ Policy Gov 21:472–486 Zhu C, Zhang G, Lu P, Meng L, Ji X (2015) Benchmark modeling of the sleipner CO2 plume: calibration to seismic data for the uppermost layer and model sensitivity analysis. Int J Greenhouse Gas Control 43:233–246

Chapter 7

Hydraulic Fracturing

7.1 Background

Hydraulic fracturing—fracking, for short—refers to the stimulation of rock via the injection of fluids, typically water with additives, aiming at increasing the rock's permeability. For hydrocarbon-bearing rocks, this facilitates the extraction and production of natural gas or oil. It is also common to denote such hydrocarbons as shale gas or shale oil when they are stored in shale with such low permeabilities that they cannot be produced with conventional technologies. Hydraulic fracturing typically requires a horizontal wellbore which is used to inject a liquid, the so-called fracking fluid, in stages over intervals into the formation. These fracking fluids consist of water with a cocktail of chemicals and so-called proppants. The chemicals serve different purposes, e.g. anti-corrosion, biocides, etc., and many of the additives are considered to be toxic and harmful to humans and the environment. Proppants are added in order to keep the newly created fractures open so that gas flow is possible during hydrocarbon production. The stimulation period, where fractures are generated in the shale layers, is relatively short—a few hours, for example—while the subsequent gas production period can continue for a long time.

Potentially hazardous events related to fracking are manifold, and many of them do not need our flow and transport models to be prevented and managed, like the spill of fluids and accidents at the surface, improperly sealed wells, etc. However, there are hazards that are inherently related to subsurface flow processes and hydrogeological settings, and these are the ones associated with enormous uncertainties. This makes risk assessment a difficult task and a delicate issue for communication to stakeholders and public opinion. It is precisely during the short period of stimulation that these hazards are initiated and triggered. They can roughly be distinguished into three categories: (i) the release of fracking fluid in the subsurface, which may harm groundwater resources, (ii) the uncontrolled migration of methane released from the shale into the overburden, eventually also into aquifers or even to the surface, and (iii) induced seismicity, i.e. micro-seismic events, small earthquakes, triggered by high pressure during the stimulation period (Fig. 7.1).


Fig. 7.1 Exemplary illustration of hydraulic fracturing. The fracked shale layer is strongly exaggerated in the vertical direction in order to show the typical features of fracked shale: fractures perpendicular to the horizontal well, a network of small fractures, methane released from the shale and potentially escaping, and fracking fluid pumped into the shale

Applied on a large scale, in particular in the United States since the first decade of the 21st century, fracking is considered a major driver of the falling oil prices of that period. Environmental concerns have led to an intense political and societal debate over hydraulic fracturing in other parts of the world, particularly in Europe. Fracking is among the most controversially discussed topics of environmental concern. Public opposition to fracking has been emotional rather than rational in nature, likely driven by mistrust in proper surveillance as well as in transparent and comprehensive regulations. Data for the examination of risks usually stem from explorations by the companies, who, of course, have an interest in turning a very costly exploration phase into an economically successful hydrocarbon production period. Howarth et al. (2011) entitled their 2011 Nature article "Should fracking stop?", in which they focus the discussion concisely on the area of conflict between economic/strategic interests and environmental risks. There is currently, as of 2019,


no binding regulation for fracking in the European Union or its member states, while a recent (November 2019) suspension of fracking activities in the UK indicates a tendency towards a stricter policy. A Recommendation (EU Recommendation 2014) was issued in 2014 which refers to the application of other EU Directives, for example the water and groundwater framework Directives (EU Water Framework Directive 2000; EU Groundwater Directive 2006) or the Directive on the granting of authorisations for hydrocarbon production (EU Hydrocarbon Directive 1994).

7.1.1 Overview of Selected Commercial/Research Fracking Projects

Several EU-funded H2020 projects on fracking were completed by 2018. They aimed at contributing new findings and possibly allowing further progress towards the provision of binding and reliable regulations.

7.1.1.1 The EU-FracRisk-Project

The so-called FracRisk project (https://www.fracrisk.eu/) has been an EU-funded research project for developing a knowledge base for understanding, preventing and mitigating the potential impacts of fracking. Since significant shale gas reservoirs are available all over Europe, the project intended to contribute science-based recommendations for decision makers in order to minimise the environmental footprint of shale gas extraction via a regulatory framework that also addresses public concerns. In detail, FracRisk had several objectives: an assessment of the environmental impact (footprint), expressed in terms of seismic activity and substances released to the environment during the exploration and production of shale gas resources; the further development of mathematical models to predict the migration of chemicals and gases and the mechanical (seismic) effects, together with a risk and uncertainty assessment; the development and testing of a framework for risk assessment to be used both by regulators and contractors, based on the well-known ASTM RBCA (Risk Based Corrective Action) paradigm; the development of criteria for appropriate monitoring strategies to measure baseline conditions; and the provision of scientific recommendations and a knowledge base of best practices for shale gas development, with direct application and relevance for the provision of consistent regulation. A key document gathered best-practice recommendations (Carrera et al. 2018) on shale gas development, oriented towards the provision of consistent regulation. The recommendations differentiate between pre-operation procedures to address impacts and risks, and operation procedures with a focus on well drilling and fracturing procedures. Taking a closer look at pre-operation procedures, the following best-practice recommendations were developed (Carrera et al. 2018).


• Insisting on public concerns and acknowledging the right of affected communities to benefit directly from shale gas extraction.
• The need for a step forward in Risk Assessment, making it a tool for effective management, including extraction operations, monitoring and mitigation. Specifically, a proper Risk Assessment demands site-specific numerical models that take not just parametric uncertainty into account but also model uncertainty, and that are oriented towards improving site management, increasing the understanding of geohazards and anticipating possible problems.
• The need to treat monitoring not as a passive exercise mainly oriented towards measuring impact. Instead, it should be a proactive task oriented towards guaranteeing that the operation proceeds as expected and, if not, towards helping to define corrective and mitigation actions before impacts occur.
• The need to explicitly include corrective and mitigation actions in future regulations.
• The need to acknowledge the risks of induced seismicity.
• The need for further research on improving modelling tools for real-time monitoring, on using monitoring data to improve site understanding and to update reservoir properties during production, and on the early detection of anomalies, such as leakage events, for the early definition of corrective and/or mitigation actions.
• The need for further research towards a portfolio of leakage remediation techniques that can be implemented rapidly.

Altogether, the best-practice recommendations emphasize a fair and equal distribution of benefits between profit-makers and risk-takers, advanced risk assessment methods via computer simulations, the development of adaptive risk-governance regulations with corresponding monitoring techniques, and the acknowledgement of the hazards and risks associated with fracking.

7.1.1.2 The EU-SHEER-Project

The EU funded the SHEER project (http://www.sheerproject.eu/) in order to understand, prevent and mitigate the potential environmental impacts and risks of shale gas exploration and exploitation. Its focus on environmental issues covered assessing the short- and long-term risks linked to groundwater contamination, air pollution and induced seismicity. The analysis of impacts and risks was tied to a pilot test site of shale gas exploration close to the village of Wysin in the central-western part of the Peribaltic syncline in Pomerania, Poland. In this area, the Polish Oil and Gas Company (PGNiG) drilled several boreholes for prospecting and exploring oil and natural gas. Besides the impact assessment related to the pilot test bed, the SHEER project gathered knowledge on a wider range of risk categories, such as the economic and social problems associated with US shale gas experiences. Positive economic impacts of shale gas extraction result from direct increases in employment and incomes in the mining sector. In contrast, analyses from the US show that negative economic impacts relate to decreased housing prices and the potential for increased crime rates, traffic accidents and health issues (e.g. headaches, throat and eye irritation).


However, the main outcome of the SHEER project was a set of guidelines based on the complex, multidisciplinary environmental monitoring undertaken before, during and after the drilling and hydro-fracturing operations at the Wysin pilot site (SHEER 2018). As general guidelines, the persons responsible for the project stressed the need to gather the basic geological, structural and seismotectonic information on the monitored area in order to understand the dynamics and characteristics of the faults existing in the area, as well as to create a velocity model of the subsurface. During site set-up and operation, close cooperation between the operator and the scientific community is a key success factor. In addition, public engagement strategies targeting both local residents and the public at large are important. The best-practice document further elaborated specific guidelines for seismic monitoring and water monitoring as well as generic guidelines for the risk management of shale gas exploration and exploitation.

7.1.1.3 The EU-M4ShaleGas-Project

The M4ShaleGas project (http://www.m4shalegas.eu/) stands for measuring, monitoring, mitigating and managing the environmental impact of shale gas exploration. As such, it addresses research on the impacts of subsurface activities (hydraulic fracturing, induced seismicity, well integrity), of shale gas activities at the surface (water, soil, well site), and on air quality and climate, as well as public perception. The project elaborated a large number of factsheets with specific recommendations on a broad range of issues. Numerical modelling takes centre stage in several impact assessment tasks and has its role to play in minimizing risks and assuring safe shale gas operation. Taking the example of hydraulic fracturing, the M4ShaleGas scientific recommendations emphasize the following with regard to modelling (M4ShaleGas 2017):

• The full role of interfaces and pre-existing fractures should be determined so as to improve available modelling approaches. The physics driving the deviation, arrest, slowing, branching and lithological bounding of hydrofractures needs to be determined and incorporated into modelling approaches.
• Many numerical approaches exist; modelling should work towards a common approach for describing fracture propagation in shale, with a consensus sought on the minimum requirement for baseline data that should be captured in order to gain the most comprehensive understanding of the mechanical behaviour of shale systems. Numerical models tend to over-predict the length of the hydraulic fractures that have formed and to predict that propagation is sub-vertical. The current understanding of fracture arrest in a complex geological unit, such as shale, needs to improve in order to numerically represent the hydraulic fracturing process.
• A hydraulic fracture propagation model should be used to predict the initiation and propagation of hydrofractures. This should be continually revised as new information becomes available, using observations from micro-seismic monitoring during stimulation to improve the model of hydraulic fracture growth.


• Regulation should include an independent assessment of all available data, their applicability and observations of operations. Hydraulic fracture stimulation predictions should be scrutinised and an assessment made of the likely breaches of sealing units. A thorough appraisal should be made of the differences between what was predicted and what was observed by microseismic monitoring in real time during operations. The independent observer should have discretionary power to halt operations if the deviation from the prediction is above a set threshold.

7.2 Modelling Issues

7.2.1 The Importance of Risk Assessment

Fears over environmentally hazardous effects need to be addressed by measures that increase confidence in decision makers and their decisions. Therefore, the perceived risks associated with fracking need to be well identified and then quantified. The identification of contaminant pathways and their representation in appropriate modelling approaches has been addressed in a vast number of recent publications in this field, e.g. Myers (2012), Birdsell et al. (2015), Pfunt et al. (2016). Every regulation on hydraulic fracturing demands, or will demand, risk assessment. The task is enormously complex due to the huge diversity of frame conditions for fracking projects, while a clear procedure for risk assessment is currently lacking, despite efforts to improve this situation, e.g. Carrera et al. (2018). It is evident that risk assessment should include both a deterministic and a probabilistic assessment. Site exploration must provide fundamental information on geological features and their properties. While it might be possible to include, in a deterministic manner, fault zones and different geological layers as identified from exploration in the static site model, it is the great variability of their properties, like permeability or porosity, that inevitably also demands probabilistic approaches to the dynamic modelling of flow and transport.

The above-mentioned EU H2020 projects all addressed the topic of risk assessment in one way or another. The FracRisk project undertook particular efforts to address risk assessment assisted by modelling. A stochastic model was linked with the forward flow model to allow the evaluation of probability density functions of certain consequences, e.g. the amount of fracking fluid or methane reaching a target (or receptor) such as a freshwater aquifer used for drinking water production; details on the procedure are given in Class et al. (2018). It is furthermore useful to subdivide the task of quantifying potentially hazardous fluid migration into subtasks of source-zone modelling and migration along pathways. This can provide a number of benefits. For example, the different compartments (source/pathway) can be addressed, where necessary, with models of different levels of complexity, which can be viewed as a multi-physics approach to the overall problem.

7.2 Modelling Issues

159

to model the release of fracking fluid or methane with a rather complex model that considers effects like sorption, chemical reactions, multiphase multicomponent processes, etc., while the pathway scenario can use the computed outputs of the source scenario and calculate flow and transport along the pathway with a physically simpler model. Another benefit of the source-pathway-receptor approach is that probabilistic assessments on parameter sensitivities and uncertainties are more flexible. They can be made for the source zone alone, for the pathway alone or for both together, which allows a better in-depth understanding of the processes; an example of this issue will be given below.
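As a purely illustrative sketch of how such a compartmentalised probabilistic assessment can be organised in practice, the following Python fragment chains a placeholder 'source' model and a placeholder 'pathway' model in a Monte-Carlo loop. The model functions and their formulas are invented stand-ins for the actual simulation models, and the parameter ranges are only loosely inspired by the FracRisk example discussed in the next subsection; the fragment merely shows how the output of one compartment becomes the probabilistic input of the next and how a shared parameter links both compartments.

import numpy as np

rng = np.random.default_rng(42)
n_samples = 10_000

# Placeholder "source" model: maps variable input parameters to a methane
# flux into the overburden. In practice this would be a (surrogate of a)
# full multiphase flow simulation; here it is an arbitrary illustrative function.
def source_model(injection_rate, fracture_spacing, anisotropy):
    return 1e-4 * injection_rate * anisotropy / fracture_spacing

# Placeholder "pathway" model: attenuates the source flux on its way through
# the overburden/fault towards the target aquifer (again purely illustrative).
def pathway_model(source_flux, anisotropy, fault_distance):
    attenuation = np.exp(-0.5 * fault_distance) * anisotropy
    return source_flux * attenuation

# Sample the variable parameters from assumed uniform ranges.
injection_rate   = rng.uniform(100.0, 300.0, n_samples)   # [kg/s]
fracture_spacing = rng.uniform(1.0, 5.0, n_samples)       # [m]
anisotropy       = rng.uniform(0.2, 0.5, n_samples)       # [-], shared parameter
fault_distance   = rng.uniform(0.2, 0.5, n_samples)       # [km]

# Chain the compartments: the source output becomes a pathway input,
# while the shared parameter (anisotropy) links both compartments.
flux_into_overburden = source_model(injection_rate, fracture_spacing, anisotropy)
flux_into_aquifer    = pathway_model(flux_into_overburden, anisotropy, fault_distance)

# Probabilistic output: empirical statistics of the quantity of interest.
print("mean flux into aquifer [kg/s]:", flux_into_aquifer.mean())
print("95th percentile        [kg/s]:", np.quantile(flux_into_aquifer, 0.95))

Because the compartments are separate functions, sensitivities and uncertainties can be evaluated for the source alone, for the pathway alone, or for the chained system, exactly as argued above.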

7.2.2 Example: Scenario-Related Risk Assessment in the EU H2020 FracRisk Project

A brief illustration of such a source/pathway-related risk assessment,¹ as discussed above, is provided in the following. The scenarios are restricted to the risk of methane released from the shale—the source—eventually reaching the target aquifer by migrating through the pathways—the overburden and the fault in the overburden. This setup is one of the FEPs (Features, Events, Processes) that were ranked highly by experts in the FracRisk consortium in a comprehensive list of FEPs (Wiener et al. 2015). Figure 7.2 shows the schematic setup of the source scenario, which effectively puts the focus on the fracking region in the shale. Along the 10 m fracking region at the bottom boundary, an injection rate is prescribed over two hours. Obviously, the corresponding overpressure due to the injection acts as a driving force for fracking fluid and methane migration; in the worst case, the fluids might reach the overburden. In a worst-case type of assumption, a hydraulic connection is established here between the shale and the overburden through a number of fractures, as shown in Fig. 7.2. Of course, such a situation should be avoided in practice. The source scenario employs a certain number of input parameters which are considered as variable within a certain range, e.g. as listed in Table 7.1. All other parameters were assumed to be known and accordingly assigned fixed values. Sampling the parameter space spanned by the parameters and their ranges in Table 7.1 allows the calculation of probabilities for any desired output quantity; of course, these probabilities are conditional on all the fixed model parameters, which are not listed here. Thus, this procedure only makes sense if there is sufficient trust in the values of the fixed parameters. As output quantities, the maximum methane flux rate into the overburden M_F [kg/s] and the maximum pressure value p_max at the interface between the shale and the overburden were chosen.

¹ The methods and results we present in this section were achieved by colleagues from Politecnico di Milano and Universität Stuttgart within work package 5; more details can be found in the deliverables of FracRisk at www.fracrisk.eu, in particular in Class et al. (2018).


Fig. 7.2 Schematic of the source scenario, including the shale layer and overburden. Boundary conditions and dimensions are shown in the figure. The fracking region is displayed zoomed. The scenario employs a variable number of fractures which, in a worst-case assumption, all connect the injection well (bottom boundary) to the overburden

Table 7.1 Model input parameters considered as variable in the source scenario. Values of their ranges are given exemplarily
Parameter | Unit | Range
Injection rate during fracking operation | [kg/s] | [100–300]
Distance between the fractures, i.e. number of fractures connecting shale with overburden | [m] | [1–5]
Permeability anisotropy in the overburden, i.e. at a given (horizontal) permeability this affects the vertical permeability | [–] | [0.2–0.5]
Residual methane saturation in the overburden | [–] | [0.0–0.1]
van Genuchten's α for scaling capillary pressures in the overburden | [1/Pa] | [1.3e−2 – 1.3e−4]

These output quantities and their probability density functions were subsequently transferred as input quantities to the pathway/target scenarios. The pathway scenario with the fault zone in the overburden, as shown in the left part of Fig. 7.3, first of all receives the methane flux rate M_F and the maximum pressure p_max from the source scenario. This pathway setup then yields a total methane flux which finally reaches the top of the overburden. Accordingly, this methane flux can in turn be considered an input for the target scenario, which is represented by the aquifer on the right of Fig. 7.3. As in the source scenario, there are also variable parameters in the pathway and target scenarios; they are listed in Tables 7.2 and 7.3. It is important to note that the pathway scenario shares fixed and variable input parameters with the source scenario, in this case the permeability anisotropy in the overburden, which thus links the two scenarios. As an output parameter of the pathway scenario, the mass of methane reaching the target aquifer is calculated.


Fig. 7.3 Schematic of the pathway (left) and target (right) scenarios. The pathway features a fault zone at a variable distance from the fracking region and receives data from the results of the source scenario of Fig. 7.2. Likewise, the target aquifer receives an inflow of methane as calculated from the pathway scenario

Table 7.2 Model input parameters considered as variable in the pathway scenario. Values of their ranges are given exemplarily
Parameter | Unit | Range
Methane influx from the source scenario | [kg/s] | [0.03–0.23]
Pressure at the interface shale/overburden (from the source scenario) | [Pa] | [280–1600]
Permeability anisotropy in the overburden, i.e. at a given (horizontal) permeability this affects the vertical permeability | [–] | [0.2–0.5]
Distance between the fault and the source zone | [km] | [0.2–0.5]

Table 7.3 Model input parameters considered as variable in the target scenario. Values of their ranges are given exemplarily
Parameter | Unit | Range
Methane influx from the pathway scenario | [kg/s] | [0.03–0.23]
Horizontal permeability of the aquifer | [m²] | [5e−12 – 20e−12]
Permeability anisotropy | [–] | [0.1–0.3]

With respect to the global sensitivity analysis, this provides us with quite some flexibility. Each of the scenarios (source and pathway) includes variable parameters which are exclusive to the respective scenario, like the distance between the fractures connecting shale and overburden in the source scenario or the distance between the source zone and the fault in the pathway scenario. On the other hand, there are shared parameters. This allows a separate assessment of the relevance of each input parameter for the individual scenarios (source or pathway) or of their relevance in the joint consideration of the source scenario connected to the pathway scenario. Eventually, the methane flux reaching the target might also be investigated in more detail with respect to its consequences for the target aquifer. There, another output quantity might be formulated which then serves as a measure to quantify a potentially hazardous event.


Fig. 7.4 Source scenario: Histograms of the methane mass flux (left) and the maximum pressure at the interface shale/overburden (right) as obtained from a Monte-Carlo analysis of the surrogate model using PCE with 4th order polynomials as well as 50 runs of the full-complexity model for comparison

This could be, for example, the breakthrough time of methane at a hypothetical well at a certain distance (here: 250 m) from the point where the methane flows in. A surrogate model has been constructed for all these output quantities, i.e. for the methane flux and the maximum pressure from the source scenario, the methane flux into the aquifer from the pathway scenario and the breakthrough time of methane in the target aquifer scenario. The comparatively low complexity of the surrogate model allows very rapid model runs and thus enables a Monte-Carlo analysis which would not be possible with the full-complexity model due to exploding computational demands. For this study, the surrogate model was constructed based on the Polynomial Chaos Expansion (PCE) method and was then applied within a global sensitivity analysis as explained, e.g., by Dell'Oca et al. (2017). The PCE-based surrogate model essentially consists of a set of polynomials which need to be trained using a certain number of runs of the full model, where the parameter ranges are defined by Tables 7.1, 7.2 and 7.3. The number of full-complexity model runs required to construct the polynomials depends on the degree of the polynomials: the higher their degree, the higher the number of runs, which in turn is limited by the available computational time. Within the global sensitivity analysis, the relevance of each varied input parameter with respect to the corresponding output quantities of interest can be evaluated. Figure 7.4 shows histograms of the source scenario outputs M_F and p_max as obtained from many evaluations of the polynomials and, for comparison, from 50 runs of the full-complexity DuMux model. From the histogram based on the polynomials, a probability density function (pdf) is determined. It can be clearly seen that the full-complexity model and the polynomials are in good agreement, while it is also obvious that the 50 full-model runs are statistically insufficient. The figures further show that the shapes of the two probability density functions for the two different output quantities differ strongly. This is essentially a very valuable piece of information from such a risk assessment and a particular strength of this approach.


Fig. 7.5 Results of the AMAE index evaluation for the source scenario (see Dell’Oca et al. (2017)). The highest sensitivity is seen for the overburden permeability anisotropy in this setup

In fact, the shape of the pdf of p_max is strongly skewed: it shows a distinct peak value at around 230 bar and a strong tail towards higher pressure values. Thus, the pdfs of the methane flux and of the maximum pressure differ distinctly in the third and fourth statistical moments, the skewness and the kurtosis. In summary, an evaluation of the first four statistical moments yields more information than only the expected value or the variance (as given in Fig. 7.4), which can, for example, be used in operational decisions and project management. Figure 7.5 shows exemplary evaluations of the AMAE index (Dell'Oca et al. 2017), which reveals parameter sensitivity in terms of the influence on the expected values of the considered output quantities, in this case the predicted methane fluxes reaching the target aquifer after one year. Figure 7.5 indicates that the overburden permeability anisotropy is the main factor of influence. Since the horizontal permeability in the overburden is a fixed value, this simply means that the vertical permeability in the overburden determines the amount of methane migrating in the overburden. It may be surprising that the distance to the fault, which supposedly acts as a flow path, does not seem very important. We therefore need to conclude that, in this setup, the flow of methane occurs predominantly through the overburden and not so much through the fault. The driving force for the flow is buoyancy, and the velocity of upward migration of the injected methane is proportional to the vertical permeability of the overburden. Moreover, we should note that here we are evaluating the accumulated mass of methane after one year: with both the time of observation and the height of the overburden fixed, the accumulated mass depends strongly on this travelling speed of the methane. Simulations where the fault is relatively close to the fracking source region only showed an earlier arrival of methane but no significantly different accumulation over the period of one year. This changes when the amount of methane that can leak from the shale is limited by assuming a less conservative boundary condition for the fracking source region (details are found in Class et al. (2018)).


Fig. 7.6 Variation of the expected value of methane flux into the overburden from the source scenario dependent on fixed input parameter values

Then, the permeability in the overburden is much less important relative to the methane flux coming from the source scenario, while the distance to the fault is still not very significant. Figure 7.6 illustrates in another way how the output quantity for the methane mass flux from the source scenario depends on the variable parameters of this scenario. It shows that the injection flux at the fracking well into the shale during the assumed two-hour stimulation period, as well as the distance between the connecting fractures into the overburden (or: the number of fractures connecting shale and overburden), dominate the expected rate of methane flux leaving the shale and entering the overburden. Also of high relevance is the residual methane saturation in the overburden. It acts like a buffer, since methane is only mobile after exceeding this residual saturation.
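To make the workflow described above more tangible, the following minimal Python sketch mimics the three steps: training a polynomial surrogate on a limited number of expensive model runs, performing a Monte-Carlo analysis with the cheap surrogate, and estimating a simple moment-based sensitivity measure in the spirit of the AMAE index (influence of each parameter on the expected value of the output). It is a sketch under stated assumptions, not the FracRisk implementation: the 'full model' is an arbitrary test function standing in for a DuMux simulation, the surrogate is a generic least-squares polynomial fit rather than an orthogonal PCE basis, and the sensitivity measure only follows the idea of comparing conditional and unconditional means.

import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(0)

# Stand-in for the full-complexity model (e.g. a DuMux run): maps three
# normalised input parameters in [0, 1] to a scalar output of interest.
def full_model(x):
    return 2.0 * x[:, 0] ** 2 + 0.5 * x[:, 1] + 0.1 * x[:, 0] * x[:, 2]

# Build a monomial design matrix up to a given total degree.
def design_matrix(x, degree):
    n, d = x.shape
    cols = [np.ones(n)]
    for deg in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), deg):
            cols.append(np.prod(x[:, idx], axis=1))
    return np.column_stack(cols)

# 1) Train the surrogate on a limited number of expensive model runs.
x_train = rng.uniform(0.0, 1.0, size=(50, 3))
y_train = full_model(x_train)
A = design_matrix(x_train, degree=4)
coeffs, *_ = np.linalg.lstsq(A, y_train, rcond=None)

def surrogate(x):
    return design_matrix(x, degree=4) @ coeffs

# 2) Monte-Carlo analysis with the cheap surrogate.
x_mc = rng.uniform(0.0, 1.0, size=(100_000, 3))
y_mc = surrogate(x_mc)
print("mean:", y_mc.mean(), " variance:", y_mc.var())

# 3) Simple moment-based sensitivity: average absolute shift of the
#    conditional mean E[Y | x_i] away from the unconditional mean,
#    estimated by binning the Monte-Carlo sample along each parameter.
for i in range(3):
    bins = np.quantile(x_mc[:, i], np.linspace(0, 1, 21))
    which = np.clip(np.digitize(x_mc[:, i], bins) - 1, 0, 19)
    cond_means = np.array([y_mc[which == b].mean() for b in range(20)])
    index = np.mean(np.abs(cond_means - y_mc.mean())) / abs(y_mc.mean())
    print(f"parameter {i}: sensitivity index = {index:.3f}")

In an actual application, the training samples would come from full simulator runs over the parameter ranges of Tables 7.1, 7.2 and 7.3, and an orthogonal polynomial basis matched to the input distributions would replace the plain monomial basis used here.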

7.2.3 Challenges for Fracking-Related Predictive Modelling

Models can cover different aspects of fracking-related risks. This includes the reactive transport of fracking fluid and additives in the subsurface, the flow and transport of methane in fractured porous media, as well as the generation of fractures, fracture propagation, fault reactivation or geomechanical processes in general. The state of the art in modelling these kinds of processes is at very different levels of maturity with respect to useful application in engineering practice. While flow and transport through porous media employing Darcy's Law is well understood and well established, there are still major challenges when fractured porous media are considered. Recent developments have produced robust and efficient discretisation methods, for example, Discrete Fracture Models (DFM) (Gläser et al. 2017; Flemisch et al. 2018). Still, the geometrical description of discrete fractures is not practical in most cases of near-field investigations around the borehole, since many fractures are generated and their orientation and interconnectedness are not accessible in detail. Therefore, DFM models for fracking-related risks are limited to generic scenarios or rather far-field applications where discrete faults are


identified via site exploration, while upscaled approaches or so-called MINC (Multiple INteracting Continua) approaches are deemed more appropriate for densely fractured media. Lovell et al. (2018) highlight the significance of matrix diffusion for the gas production phase by partly considering the shale in a discrete fracture model. Including geomechanical processes in the conceptual models is required in order to predict effects like fracture generation and propagation or the reactivation of pre-existing faults. This is, in fact, a field of intensive research where the conceptual models themselves are heavily debated in the scientific community. It is not yet clear to what extent such models can be derived from first principles or should rather take advantage of empirical relationships. Fault reactivation and induced slip as a result of fluid injection and pressure increase, for example, are now better understood, and modelling approaches for such effects have recently emerged, e.g. Beck et al. (2016), Beck and Class (2019), Simone et al. (2017). We should note that much of the effort in model development on geomechanical effects, fracture generation, fracture propagation and induced seismicity also applies to deep geothermal reservoirs, where hydraulic fracturing is required to create transmissivity for the circulation of water (or other fluids) for heat production. A very basic problem of modelling studies related to hydraulic fracturing is the lack of information and data on the geological/hydraulic properties of shale and its surroundings. Edwards and Celia (2018) have therefore collected a large amount of such data, including operational data, and have summarised them, always naming their sources, in tabulated form in order to enable the modelling of fracking-related processes on a more substantiated basis. In summary, the use of state-of-the-art forward models for flow and transport, geomechanical effects or geochemistry, combined with a probabilistic approach to parameter sensitivities and uncertainties, is clearly recommended for a proper risk assessment. It supports decisions on risk-based corrective actions and it can quantify the uncertainty reduction due to newly acquired knowledge; thus, it can provide guidance for exploration and monitoring efforts.

7.3 Science-Policy Issues

7.3.1 Fracking as a Contested Technology

The hydraulic fracturing technique for unconventional oil and gas production started, as briefly mentioned above, to boom in the US in the post-millennium decade, producing considerable impacts on fuel deliveries and prices and shifts in oil and natural gas import-export shares. Stimulated by the US shale gas success, fracking was seen as a potential "game changer" in Europe, as stated in 2013 by Günther Oettinger, then European Commissioner for Energy. However, after fierce public debate, fracking in Europe turned from hope to silence. Total bans were imposed in France, Bulgaria and Scotland, whilst a temporary moratorium on fracturing practices was imposed in Denmark and Germany, mainly due to public concerns (Tawonezvi 2017).


In July 2016, a legal package on fracking came into force, so that commercial fracking in Germany was no longer permitted. The regulatory framework establishes a strict commercial fracking ban. The ban holds until 2021 at the earliest, when the German Bundestag is to decide whether or not the regulation will remain in place; if parliament takes no further action, the ban will continue. However, four exploratory boreholes carried out within research projects will be permitted, in compliance with existing regulations and with the authorisation of the relevant mining authorities, in order to gain scientific expertise. As such, the case of modelling hydraulic fracturing is set within the sphere of contested debates on emerging technologies. In order to carve out the role of subsurface modelling in contested domains, we will retrace the situation in Germany concerning the rise and fall of fracking from a modelling perspective. Since the early 1960s, the fracking technique has been used in German oil and gas exploitation as well as in drinking water production. However, after the US success, initial thoughts on using fracking in Europe for unconventional shale gas, based on fracking fluids containing chemicals, stimulated public attention and debate from 2010 onwards. Several expert reports and research projects addressing the environmental safety and the potential of fracking were carried out during that period (reports mostly in German). In the following, we list the most important projects:

• The "Risk Study Fracking" as an outcome of the information-and-dialogue process titled the "InfoDialog Fracking" (Ewen et al. 2012), funded by ExxonMobil Germany and published in 2013.
• A study by the Federal Institute for Geosciences and Natural Resources (BGR) to assess the shale gas potential in Germany (BGR - Bundesanstalt für Geowissenschaften und Rohstoffe 2012), with an update in 2016 (BGR - Bundesanstalt für Geowissenschaften und Rohstoffe 2016).
• The "Fracking for Shale Gas Production" statement by the German Advisory Council on the Environment (SRU) (Buchholz et al. 2013), published in 2013.
• An expert report by the German Environment Agency (UBA) on the environmental impacts of fracking during exploration and production from unconventional sources, including an assessment of risks, recommendations for best practices and an evaluation of existing regulations (UBA 2012).
• The "Environmental Impacts of Fracking Related to Exploration and Exploitation of Unconventional Natural Gas Deposits" expert report commissioned by the German Environment Agency (UBA) and the Federal Ministry for the Environment, Nature Conservation and Nuclear Safety (BMU) (Meiners et al. 2012).
• The "Fracking in unconventional natural gas deposits in North Rhine-Westphalia" expert report commissioned by the State Ministry of the Environment in North Rhine-Westphalia (Meiners et al. 2012).

In the following, we will briefly summarise the first three reports and review them from a modelling perspective. ExxonMobil Germany initiated the so-called InfoDialog Fracking in 2011, since the company had intended since the late 2000s to concentrate on gas exploration with a focus on tight gas, shale gas and coal gas, with sites all located in northern Germany.


The company proactively initiated a so-called information-and-dialogue process based on an independent expert committee consisting of seven scientific experts from the fields of geology, water, hydromechanics, toxicology, etc. The InfoDialog Fracking started in April 2011 and lasted one year, with the results published in 2013 (Ewen et al. 2012). The expert committee elaborated scientific results within several working groups, addressing, among others, risks in the geological system, toxicology and groundwater, technical risks, and the modelling of multi-phase transport. ExxonMobil contributed the financial resources, the competent authorities observed the elaborated results and matched them against state-of-the-art knowledge, and the scientific community examined the findings. The study included numerical simulations based on very conservative assumptions to estimate worst-case scenarios; these studies did not include probabilistic approaches. Details on these modelling efforts are reported in Kissinger et al. (2013). Affected citizens and stakeholders in the region contributed local knowledge and raised questions for the expert committee. One may call this format a privately organised business dialogue with public access limited to questions of information (Saretzki and Bornemann 2014). In its final report, the InfoDialog Fracking concluded: "The independent expert panel does not see any objective reasons for a general ban of fracking" (Ewen et al. 2012)—a judgement widely quoted in the German media as well as in stakeholder and public communities. Saretzki and Bornemann (2014: 78) criticized this judgment as not adequate for scientists since, with such a statement, they depart from objective, scientific grounds. They argue that a general ban of a technology can never be based on purely objective reasoning and fact statements but needs to refer to (legal) norms and normative positions. In addition, the statement implies that proponents of a ban base their judgement on non-objective reasons. Consequently, opponents of fracking did not agree with the overall conclusion of the panel. Subsurface modelling was centre stage in the "risks in the geological system" working group. Based on multi-phase transport modelling, three scenarios were modelled in order to assess the impact of fracking fluids and methane on shallow groundwater aquifers. The modelling used a potential but highly improbable worst-case scenario approach. The driving forces differed among the scenarios: pressure gradients due to fracturing, natural horizontal and vertical pressure gradients, and gravitational and capillary forces initiate the migration of fluids as a consequence of hydraulic fracturing activities (Kissinger et al. 2013). The results show some vertical migration. However, the authors strongly emphasised that the approach is not a quantitative risk assessment, but a qualitative evaluation of certain, conservative scenarios. Against this background, combinations of unfavourable assumptions and parametrisations that may lead to hazards were identified as a first step towards a quantitative risk assessment. Future quantitative risk assessment studies have much higher demands, especially regarding site-specific data for statistical parameter uncertainty assessments. Based on these results, the "risks in the geological system" working group derived a rough estimate of a minimum distance of 1,000 m between fracking activities and the earth's surface (Ewen et al. 2012).
Other modelling activities did not play a fundamental role within the "InfoDialog Fracking" project.


The Federal Institute for Geosciences and Natural Resources (BGR) carried out an assessment of the shale gas potential in Germany on behalf of the Federal German Government, published in 2012 with an update in 2016. The assessment provides the resource potential for shale gas and shale oil considering all clay and mudrock formations. For shale gas, a distinction is made between tight gas, shale gas and coal bed methane. Assuming a recovery rate of 10%, the BGR estimated Germany's technically recoverable shale gas resources at between 320 and 2,013 billion m³ in a depth range from one thousand to five thousand metres. Adding the resource potentials in the depth range from 500 to 1,000 m, the estimate increases from 380 up to 2,340 billion m³. The estimated resources of unconventional shale gas and oil are much higher than the currently exploited annual conventional gas and oil production in Germany (factor 100) and the annual natural gas consumption (factor 10). Compared with other European countries, Germany ranks fourth for its shale gas resources and fifth for its shale oil resources. However, compared to countries such as China, Argentina, Algeria and the US, which are considered to hold the largest amounts, Germany plays a minor role. Besides the resource capacity estimates, the study investigated the environmental impacts of hydraulic fracturing, focussing on the dispersion of fracking fluids, crack propagation and induced seismicity. Computer modelling was used for several research outputs. For the assessment of the resource potential of shale gas and oil, a volumetric in-place approach was used in combination with Monte Carlo simulations to account for the variability inherent in the input; the Monte Carlo approach served to calculate the probability of the results given uncertain input parameters. In addition, a petroleum-geological basin modelling approach served as a starting point to model the formation, migration and accumulation of hydrocarbons in order to assess shale gas and oil capacities, eventually leading to results similar to those of the volumetric in-place approach. In the area of environmental impact assessment, simulations were used to model the short- and long-term impacts of fracking fluid flow and transport processes, crack propagation and induced seismicity. In 2013, the German Advisory Council on the Environment (SRU) published a statement on fracking for shale gas production. The SRU, established in 1972, is an independent advisory body for policy making, responsible for evaluating the environmental situation in Germany, drawing attention to particular undesirable developments and conditions, and recommending ways and strategies to cope with them. Within the report, the SRU laid emphasis on considering the broad overall picture, not just addressing energy policy aspects but also taking into account a whole range of environmental risks. In addition, the report discusses fracking in the broader picture of the German Energiewende, since it is important to assess whether and under what conditions shale gas can in fact make a positive contribution to the German energy transition or may run counter to its objectives (Buchholz et al. 2013). With regard to the Energiewende policy goals, the report highlights shale gas' potential support for the Energiewende, since gas-fired power stations are seen as a good supplement to renewable energy sources.
Natural gas has a better climate balance than coal, and gas-fired power stations benefit from shorter payback periods.


However, long-term energy scenario modelling calculates a considerable drop in natural gas demand as renewables expand. Based on the so-called Lead Study (Nitsch et al. 2012), demand for natural gas will halve in comparison to the demand in 2010. The report estimates no impact of German (and European) shale gas production on fuel prices—at least in the short term—due to its small potential quantities compared to the quantities world-wide. The advisory experts draw the central conclusion within their report that "German shale gas will not bring any benefits for the transformation of the energy system and that society can therefore have no overriding interest in promoting this source of energy" (Buchholz et al. 2013) (p. 20). The environmental risk analysis addresses several environmental impacts caused by fracking, posing considerable challenges for the long-term conservation of water, health, air, soil, biodiversity and climate. Table 7.4 depicts the need for action and research on environmental impacts identified in the SRU report, grouped according to general environmental risks, regulation and environmental management, as well as site-specific issues to be dealt with in each project. The identified challenges cover a great variety of needs for action and research. The SRU report did not carry out new research but rather reviewed the existing literature on fracking, discussing its results in a broader socio-technical context. Thus, computer simulations and modelling are not centre stage in the report. However, modelling is addressed in several respects.

• The SRU report refers to the modelling results of the "InfoDialog Fracking" study, which showed "that the fracking fluids could only rise about 50 m, even on the basis of conservative assumptions" (Buchholz et al. 2013) (p. 26). However, it is important to know that the InfoDialog report continues its interpretation of this result as follows: "They [N.B.: the fracking fluids] can only rise as long as the fracking pressure is maintained. That means: over this pathway no pollutants will enter the groundwater" (Ewen et al. 2012) (p. 39). The SRU report also references another result: "In the case of coalbed, horizontal transport within formation water would be possible. Thus depending on geological conditions, horizontal movement of pollutant plumes could reach about 20 m a year, which would permit a long-term range running into kilometres" (Buchholz et al. 2013) (p. 26), and leaves out the original interpretation in the InfoDialog Fracking report, which reads: "In the very long term, the transport towards thermal bath is highly unlikely but theoretically conceivable" (Ewen et al. 2012) (p. 39). It is interesting to see that the quotation by the SRU (Buchholz et al. 2013) frames the results quite differently, since it leaves out the conclusions drawn in the original text of Ewen et al., which are quoted above. While the advisory council alludes to interpreting the modelling results in terms of the probability that harm is caused, the original authors qualified the results as very unlikely to lead to damages. The example shows how proponents and sceptics of a technology frame and adjust conclusions to their own normative positions.


Table 7.4 Need for action and research on environmental impacts due to fracking according to the SRU report

Research needs or knowledge gaps concerning general environmental risks (that need to be filled to allow the assessment of the basic risks):
− Impacts of the special technical features of shale gas production (such as horizontal drilling, pipe stress due to high pressure and chemicals, a large number of boreholes) and, where appropriate, further development of technical safety standards
− The long-term effects of fracking on the stability of the strata in the rock formation and in relation to potential microbial processes along the fissures created
− The probability and intensity of seismic events
− The suitability of existing safety assessments for the subsurface use of the additives and mixtures used
− Information about the effects, behaviour and whereabouts of the chemical additives in fracking, over and above the assessment of the chemicals under the classification of the CLP Regulation. For instance, it is unclear what secondary products may form in chemical reactions between the additives and brine components of the formation water at high temperatures and pressures
− Search for technically adequate alternatives to the chemicals used
− Summary of experience with injection of formation water from conventional oil and gas production in Germany, systematic evaluation (location of wells, drilling depth, rock, quantities, monitoring and evidence of permanent integrity)
− Possibilities for processing and reuse of flowback
− Extent of diffuse losses of volatile components (methane and other hydrocarbons) and means of minimising them
− Greenhouse gas balance of shale gas taking account of conditions specific to German reservoirs (drilling depth, production volume, technology used, etc.) and in comparison with other fuels
− Assessment of land use to be expected in Germany against the background of the National Sustainability Strategy's objective of 30 hectares per day by 2020 and more far-reaching land conservation objectives

Need for regulation and appropriate environmental management concepts to minimise the environmental impacts:
− Define areas to be excluded on precautionary grounds
− Ensure complete access to and exchange of decision-relevant facts and figures between the actors (companies, water and mining authorities, scientists, the public); archive information for long-term use; prepare data for modelling and long-term monitoring
− Select suitable parameters for a monitoring programme capable of registering possible events at depth
− Draw up strategy for and further develop safety monitoring for occupational and environmental protection at the production facilities and the associated infrastructure. Prepare an early warning plan, including the relevant parameters for decisions
− Impose requirements to justify the need for additives
− Define a safety level for flowback disposal and devise an authorisation procedure that ensures the appropriate integration of the water authority and weighs up conservation interests and conflicts of use
− Ensure the use of closed systems, so that volatile (methane) emissions in the flowback are captured by technical means and not released

Source: Adapted from Buchholz et al. (2013)


• The report states that "reliable models that describe the possible routes of contaminated water depend on detailed information about the geological and hydrogeological conditions. This also includes information about the hydrochemical situation and the target formations, and information about existing legacy wells and disturbances including their hydraulic function" (Buchholz et al. 2013) (p. 26). To this end, the authors see an "urgent need for a publicly accessible register to bring together all the existing data on boreholes and geological data from the investigations conducted during the long history of drilling" (Buchholz et al. 2013) (p. 26). Against this background, the report calls for the responsible technical authorities to be in, or be put in, a position to collect this technical knowledge both across authorities and across federal states (Buchholz et al. 2013) (p. 26).
• The report acknowledges that modelling is the only way to produce knowledge about long-term consequences, since the long-term effects of fracking operations cannot be observed directly and can only be modelled. However, in the absence of practical experience, the report concludes that no reliable forecasting models exist for the geological formations found in Germany which are able to predict, for instance, potential pathways and connections between saline deep water or injected fracking fluids and groundwater-bearing strata (Buchholz et al. 2013) (p. 36).

Currently, there is no evidence that computer simulations have played an outstanding role in the discourse and conflict on fracking in Germany. Anti-fracking initiatives and campaigns started in the US in 2008 with the journalist network ProPublica providing information on a planned fracking site in a drinking water catchment area for New York. In 2010, the wide success and reception of the movie Gasland amplified anti-fracking movements in the US, spreading to Europe and Germany (Yang 2015). From 2010 onwards, German anti-fracking movements emerged locally around areas where exploration permits were issued, mainly in northern Germany. The decentralised groups rapidly gained media and policy attention while building a network of German and Europe-wide anti-fracking support. Proponents and opponents of fracking have framed the technology in the context of the German Energiewende since the very beginning (Bornemann and Saretzki 2018). The advocates argue in favour of fracking to guarantee a secure energy supply from unconventional domestic natural gas in order to stabilise the energy grid at times of increasing volatility due to renewable expansion. Opponents argue against fracking based on the technology's problematic climate balance and the corresponding environmental risks concerning drinking water, seismicity, land use, biodiversity, etc. Technology conflict analysis uses conflict typologies to specify what is at the core of a dispute. A common distinction differentiates between conflicts of interests, values, knowledge and power (Benighaus et al. 2010). Bornemann and Saretzki (2018) discussed fracking with regard to these conflict types, finding a dominance of conflicts of interest and knowledge. Fracking incorporates a conflict of interest since different actors lay claim to the use of interlinked environmental goods. The energy industry seeks to exploit natural gas from unconventional gas fields; the water industry claims drinking water sources and aquifers; the agricultural industry seeks intact soil and water resources; local residents attach importance to the integrity of health, property and lifestyle. Fracking is also in the conflict of knowledge domain, where disputes arise about truth and uncertainty—and about the corresponding methods for establishing their validity and reliability.


While fracking advocates point to longstanding experience with fracking in conventional gas exploitation, opponents point to considerable knowledge deficits, notably in the area of environmental impacts. Conflicts of knowledge typically extend into expert communities, giving rise to the so-called experts' dilemma (Grunwald 2003; Wassermann 2015). The experts' dilemma of fracking can be judged considerable, as Bergmann et al. (2014) conclude "that currently missing knowledge and data prevent a profound assessment of the risks and their technical controllability in Germany".

7.4 Selected Model-Based Illustration: Methane Migration Induced by Hydraulic Fracturing

Hydraulic fracturing is a big challenge for numerical modellers. As the discussion in this chapter has shown, the hydraulic and mechanical processes are very complex. This includes flow through extremely complex geometries in fractured and fracturing porous media as well as geomechanical processes during the generation of fractures and the propagation of existing fractures during the fracturing period. A comprehensive description of all this complexity in a single model concept is currently unavailable. The example we present here is contained in the subfolder lecture/mm/fracture of the dumux-lecture module (Sect. 4.3.3) and has been simplified in several respects, in particular with respect to the fracture network, which is described here as static and pre-existing with discretely described fractures, using the model of Gläser et al. (2017). The scenario comprises the injection of water over a period of 4 h into a fracture-matrix system whose pore space has an initial methane saturation of 0.25 in a 'shale' formation and no methane in the overlying overburden, see Table 7.7. The setup is two-dimensional. The total model domain has a size of 50 m length by 50 m height. It includes the 'matrix subdomain' and the 'fracture subdomain', which overlap in the lower part of the total model domain. The boundary conditions, see Tables 7.5 and 7.6, and the initial conditions, see Table 7.7, are assigned separately to each subdomain. The matrix subdomain represents a 'shale' layer and has, accordingly, very low permeability and porosity as well as high capillary pressures. The fractures have an aperture of 0.1 m and are highly permeable with high porosity and low capillary pressures. The corresponding model parameters are summarised in Table 7.8. Fluid properties are evaluated as functions of pressure at a constant temperature of 10 °C. Initially, the pressure is hydrostatic. During the first 4 h, an injection interval of 10 m at the centre of the bottom boundary is used for the injection of water. In the real application, this represents the fracturing process and generates the injectivity in the form of new fractures or the propagation of existing ones. In this case, the injection period is simply a time period during which water enters the domain under high pressure and then displaces methane.

Table 7.5 Boundary conditions in the matrix domain
Section | Boundary type | qwater [kg/(s m²)] | qmethane [kg/(s m²)]
Left, right, bottom (except injection region) | Neumann | 0 | 0
Injection region (10 m at the centre of the bottom boundary), for 14400 s | Neumann | 0.04 | 0
Top | Dirichlet | Sn [–] = 0 | pw [Pa] = 1e5

Table 7.6 Boundary conditions in the fracture domain, existing only in the lower part of the total model domain and 'overlapping' with the matrix domain
Section | Boundary type | qwater [kg/(s m)] | qmethane [kg/(s m)]
Left, right, top (lies within the total model domain!), bottom (except injection region) | Neumann | 0 | 0
Injection region (10 m at the centre of the bottom boundary), for 14400 s | Neumann | 0.04 | 0

Table 7.7 Initial conditions in the matrix and the fracture domain
Subdomain | Sn [–] | pw [Pa]
Fracture domain (lower part of the total model domain) | 0.25 | 1e5 − ρw g (domainHeight − z)
Matrix domain (in overlap with fracture domain) | 0.25 | 1e5 − ρw g (domainHeight − z)
Matrix domain (above fracture domain) | 0 | 1e5 − ρw g (domainHeight − z)

The high overpressures diminish after the injection has been stopped, while more methane is displaced, partly driven by this overpressure and partly by gravity, since methane has a much smaller density than water. Figure 7.7 shows the distribution of the pressure after 4 h and after ≈28 h. It should be noted that the injection stops exactly after 4 h; thus, the left plot shows the highest pressures occurring in this scenario in the injection zone. It can be seen that the pressure signal propagates quickly along the highly permeable fractures, where the pressures are visibly elevated compared to the surrounding matrix regions. In the plot on the right, one can observe that the pressure quickly declines from its peak values after a few hours.
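As a quick plausibility check of the magnitudes involved, the injected water mass can be estimated directly from the boundary conditions in Table 7.5. The short calculation below assumes a unit (1 m) out-of-plane thickness for the two-dimensional setup and neglects the additional lower-dimensional source term of the fracture domain (Table 7.6).

# Injected water mass over the matrix-domain injection boundary (Table 7.5):
# flux 0.04 kg/(s m^2) over a 10 m injection interval for 14400 s.
# Assumes a 1 m out-of-plane thickness and neglects the fracture-domain source.
flux = 0.04          # kg/(s m^2)
length = 10.0        # m, injection interval
thickness = 1.0      # m, assumed out-of-plane thickness
duration = 4 * 3600  # s (= 14400 s)

injected_mass = flux * length * thickness * duration
print(f"injected water mass: {injected_mass:.0f} kg")   # 5760 kg

This corresponds to roughly 5.8 m³ of water per metre of out-of-plane thickness.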


Table 7.8 Model parameters in the fracture-matrix setup; all residual saturations are zero everywhere
Parameter | Unit | Value
Matrix (y ≤ 35 m); shale:
Permeability k | m² | 1e−14
Permeability k (for 30 m < y < 32 m) | m² | 1e−18
Porosity φ | – | 0.1
Van Genuchten VGα | 1/Pa | 5e−3
Van Genuchten VGn | – | 2
Matrix (y > 35 m); overburden:
Permeability k | m² | 5e−10
Porosity φ | – | 0.3
Van Genuchten VGα | 1/Pa | 5e−3
Van Genuchten VGn | – | 2
Fractures:
Aperture a | m | 0.1
Permeability k | m² | 1e−7
Porosity φ | – | 0.85
Van Genuchten VGα | 1/Pa | 1e−2
Van Genuchten VGn | – | 2

Fig. 7.7 Pressure distribution after 14400 s (4 h, left) and 10^5 s (≈28 h, right). After 4 h, the injection is stopped. Note that the max. pressure in the matrix after 4 h reaches a value of ≈1.1e7 Pa

The saturation of gas (methane) can be seen in Fig. 7.8 for the same points in time as the pressure in Fig. 7.7. Thus, the left plot shows the methane saturation exactly at the end of the injection. During the injection period, water has displaced methane in the injection region, as can be seen in the distribution of saturations. The fractures have already taken up a small amount of methane in exchange with water from the matrix, which is mainly driven by the difference in capillary pressures, see Table 7.8.


Fig. 7.8 Gas (methane) saturation after 4 h (left) and after ≈28 h (right)

Fig. 7.9 Methane saturation after ≈30 days—or: ≈2.6e6 s—(left). The plot on the right shows the amount of gas that escaped into the overburden over time. Crosses correspond to the times of the example graphs in Fig. 7.8

A comparison with the right plot in Fig. 7.8 reveals that there is not much change in the injection region, while the processes in the highly conductive fractures continue. Methane is driven upwards by gravity. Therefore, the methane accumulates in those fractures which are not further connected towards the overburden, while the part of the fracture network that connects the left part of the model domain to the overburden clearly acts as a preferential flow path. The left plot in Fig. 7.9 is a snapshot of the methane saturation at a much later time than those in Fig. 7.8. Note that the legend has now changed and red indicates saturation values of about 40%, whereas it was previously about 25%. After 30 days, the methane remains in the shale mainly in those regions where it accumulated in or underneath fractures which act as local traps for the upwards migrating methane. We also see at this point in time that the layer 30 m < y < 32 m, which has a very low permeability (see Table 7.8), can still hold back its initially trapped methane. Most of the methane escapes through the fracture at x ≈ 7 m which reaches into the overburden. The right plot in the same figure presents the accumulated escape of methane into the overburden, i.e. out of the domain.


Two crosses mark the points in time when the plots in Figs. 7.7 and 7.8 are taken, i.e. after 4 and 28 h. It can be observed that, after a sharp increase during and shortly after the injection period with high (and then declining) overpressures in the shale, the slope of the curve later turns into an almost constant release rate of methane from the shale, which is driven by gravity at later times. It is important to note that this scenario is not very realistic for practical shale gas production and likely overestimates the gas release. In practice, the gas is produced through the production well. This could be modelled, for example, by applying an underpressure at the well (bottom boundary). Furthermore, the gravity-driven release of methane in this scenario is not expected to be as strong in a realistic setup. The methane is trapped in the shale by different mechanisms like adsorption or capillary trapping. Adsorption is not considered here and the residual methane saturation is zero, both to keep the setup simple. A more realistic setup for estimating the risk of methane release should take this into account.
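The uptake of methane by the fractures discussed above is driven by the capillary pressure contrast between matrix and fractures. The following short sketch, assuming the standard van Genuchten relation pc(Sw) = (1/α)·(Sw^(−1/m) − 1)^(1/n) with m = 1 − 1/n (and effective saturation equal to Sw, since all residual saturations are zero in this setup), evaluates the capillary pressure for the shale-matrix and fracture parameters of Table 7.8; it is an illustrative calculation, not part of the dumux-lecture example itself.

import numpy as np

def van_genuchten_pc(sw, alpha, n):
    """Capillary pressure [Pa] for water saturation sw; residual saturations
    are zero in this setup, so sw equals the effective saturation."""
    m = 1.0 - 1.0 / n
    return (1.0 / alpha) * (sw ** (-1.0 / m) - 1.0) ** (1.0 / n)

sw = np.linspace(0.2, 0.95, 4)  # a few water saturations away from the extremes

pc_shale    = van_genuchten_pc(sw, alpha=5e-3, n=2.0)  # matrix, y <= 35 m (Table 7.8)
pc_fracture = van_genuchten_pc(sw, alpha=1e-2, n=2.0)  # fractures (Table 7.8)

for s, pm, pf in zip(sw, pc_shale, pc_fracture):
    print(f"Sw = {s:.2f}: pc matrix = {pm:8.0f} Pa, pc fracture = {pf:8.0f} Pa")

With these table-derived parameters, the matrix capillary pressure is twice that of the fractures at any given water saturation, which is consistent with the observation that gas preferentially accumulates in the fractures while water is retained in the matrix.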

References

Beck M, Class H (2019) Modelling fault reactivation with characteristic stress-drop terms. Adv Geosci 49:1–7
Beck M, Seitz G, Class H (2016) Volume-based modelling of fault reactivation in porous media using a visco-elastic proxy model. Transp Porous Media 114:505–524
Benighaus C, Kastenholz H, Renn O (2010) Kooperatives Konfliktmanagement für Mobilfunksendeanlagen. In: Feindt PH, Saretzki T (eds) Umwelt- und Technikkonflikte. VS Verlag für Sozialwissenschaften, pp 275–296
Bergmann A, Weber F-A, Meiners HG, Müller F (2014) Potential water-related environmental risks of hydraulic fracturing employed in exploration and exploitation of unconventional natural gas reservoirs in Germany. Environ Sci Europe 26:10
BGR - Bundesanstalt für Geowissenschaften und Rohstoffe (2012) Abschätzung des Erdgaspotentials aus dichten Tongesteinen (Schiefergas) in Deutschland. http://www.bgr.bund.de/DE/Themen/Energie/Downloads/BGR_Schiefergaspotenzial_in_Deutschland_2012.pdf?__blob=publicationFile&v=7. Online; Accessed 20 April 2019
BGR - Bundesanstalt für Geowissenschaften und Rohstoffe (2016) Schieferöl und Schiefergas in Deutschland - Potenziale und Umweltaspekte. https://www.bgr.bund.de/DE/Themen/Energie/Downloads/Abschlussbericht_13MB_Schieferoelgaspotenzial_Deutschland_2016.pdf?__blob=publicationFile&v=5. Online; Accessed 23 April 2019
Birdsell DT, Rajaram H, Dempsey D, Viswanathan HS (2015) Hydraulic fracturing fluid migration in the subsurface: a review and expanded modeling results. Water Resour Res 51:7159–7188
Bornemann B, Saretzki T (2018) Konfliktfeldanalyse: Das Beispiel 'Fracking' in Deutschland. In: Holstenkamp L, Radtke J (eds) Handbuch Energiewende und Partizipation. Springer, pp 563–581
Buchholz G, Droge S, Fritsche U, Ganzer L, Herm-Stapleberg H, Meiners G, Muller J, Ruske H, Uth H, Weyand M (2013) Fracking for shale gas production: a contribution to its appraisal in the context of energy and environment policy. Technical report, Statement of the German Advisory Council on the Environment
Carrera J, McDermott C, Parry S, Couples G, Yoxtheimer D, Jung R, Edlmann K, Class H, O'Donnel M, Bensabat J, Guadagnini A, Riva M, de Simone S, Maier U, Gouze P, Bokelmann G, Tatumir A, Sauter M, Vilarrasa V, Suess M (2018) FracRisk: best practice document: recommendations from the FracRisk project for European legal guidelines on shale gas development. https://cordis.europa.eu/project/id/636811/results. Online; Accessed 13 July 2020


Class H, Gläser D, Dell'Oca A, Riva M, Guadagnini A, Tatomir A, Beck M (2018) FracRisk: results of the flow and transport simulations. https://cordis.europa.eu/project/id/636811/results. Online; Accessed 13 July 2020
Dell'Oca A, Riva M, Guadagnini A (2017) Moment-based metrics for global sensitivity analysis of hydrological systems. Hydrol Earth Syst Sci 21:6219–6234
Edwards R, Celia M (2018) Shale gas well, hydraulic fracturing, and formation data to support modeling of gas and water flow in shale formations. Water Resour Res 54:3196–3206
EU Groundwater Directive (2006) EU, Directive 2006/21/EC of the European Parliament and of the Council of 12 December 2006 on the protection of groundwater against pollution and deterioration. https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:32006L0021. Online; Accessed 20 April 2019
EU Hydrocarbon Directive (1994) EU, Directive 94/22/EC of the European Parliament and of the Council of 30 May 1994 on the conditions for granting and using authorisations for the prospection, exploration and production of hydrocarbons. https://eur-lex.europa.eu/legal-content/GA/TXT/?uri=CELEX:31994L0022. Online; Accessed 20 April 2019
EU Recommendation (2014) EU, Commission Recommendation of 22 January 2014 on minimum principles for the exploration and production of hydrocarbons (such as shale gas) using high-volume hydraulic fracturing. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex. Online; Accessed 20 April 2019
EU Water Framework Directive (2000) EU, Directive 2000/60/EC of the European Parliament and of the Council of 23 October 2000 establishing a framework for Community action in the field of water policy. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX. Online; Accessed 20 April 2019
Ewen C, Borchardt D, Richter S, Hammerbacher R (2012) Risikostudie Fracking: Sicherheit und Umweltverträglichkeit der Fracking-Technologie für die Erdgasgewinnung aus unkonventionellen Quellen. Technical report, Informations- & Dialogprozess der ExxonMobil über die Sicherheit und Umweltverträglichkeit der Fracking-Technologie für die Erdgasgewinnung, Darmstadt
Flemisch B, Berre I, Boon W, Fumagalli A, Schwenck N, Scotti A, Stefansson I, Tatomir A (2018) Benchmarks for single-phase flow in fractured porous media. Adv Water Resour 111:239–258
Gläser D, Helmig R, Flemisch B, Class H (2017) A discrete fracture model for two-phase flow in porous media. Adv Water Resour 110:335–348
Grunwald A (2003) Technology assessment at the German Bundestag: 'expertising' democracy for 'democratising' expertise. Sci Public Policy 30:193–198
Howarth RW, Ingraffea AR, Engelder T (2011) Should fracking stop? Nature 477:271–275
Kissinger A, Helmig R, Ebigbo A, Class H, Lange T, Sauter M, Heitfeld M, Klünker J, Jahnke W (2013) Hydraulic fracturing in unconventional gas reservoirs: risks in the geological system, part 2. Environ Earth Sci 70:3855–3873
Lovell A, Srinivasan S, Karra S, O'Malley D, Makedonska N, Viswanathan H, Srinivasan G, Carey J, Frash L (2018) Extracting hydrocarbon from shale: an investigation of the factors that influence the decline and the tail of the production curve. Water Resour Res 54:3748–3757
M4ShaleGas (2017) Recommendations from the M4ShaleGas project on minimizing the environmental footprint of shale gas exploration and exploitation. http://www.m4shalegas.eu/downloads/SP5/M4ShaleGas. Online; Accessed 9 August 2019
Meiners G, Fernández J, Deißmann G, Filby A, Barthel R, Cramer T, Bergmann A, Hansen C, Weber F, Dopp E (2012) Fracking in unkonventionellen Erdgas-Lagerstätten in NRW. Kurzfassung zum Gutachten. Ministerium für Klimaschutz, Umwelt, Landwirtschaft, Natur- und Verbraucherschutz des Landes Nordrhein-Westfalen (ed.), 74
Myers T (2012) Potential contaminant pathways from hydraulically fractured shale to aquifers. Ground Water 50:872–882


Nitsch J, Pregger T, Naegler T, Heide D, de Tena DL, Trieb F, Scholz Y, Nienhaus K, Gerhardt N, Sterner M et al (2012) Langfristszenarien und Strategien für den Ausbau der erneuerbaren Energien in Deutschland bei Berücksichtigung der Entwicklung in Europa und global. Technical report, Schlussbericht im Auftrag des BMU, bearbeitet von DLR (Stuttgart), Fraunhofer IWES (Kassel) und IfNE (Teltow)
Pfunt H, Houben G, Himmelsbach T (2016) Numerical modeling of fracking fluid migration through fault zones and fractures in the North German Basin. Hydrogeol J 24:1343–1358
Saretzki T, Bornemann B (2014) Die Rolle von Unternehmensdialogen im gesellschaftlichen Diskurs über umstrittene Technikentwicklungen: Der 'InfoDialog Fracking'. Forschungsjournal Soziale Bewegungen 27:70–82
SHEER (2018) Guidelines for the monitoring of shale gas exploration and exploitation induced environmental impacts. http://www.sheerproject.eu/images/deliverables/SHEER-Deliverable-8.2.pdf. Online; Accessed 9 August 2019
Simone SD, Carrera J, Vilarrasa V (2017) Superposition approach to understand triggering mechanisms of post-injection induced seismicity. Geothermics 70:85–97
Tawonezvi J (2017) The legal and regulatory framework for the EU's shale gas exploration and production regulating public health and environmental impacts. Energy Ecol Environ 2:1–28
UBA (2012) Umweltbundesamt: Umweltauswirkungen von Fracking bei der Aufsuchung und Gewinnung von Erdgas aus unkonventionellen Lagerstätten - Risikobewertung, Handlungsempfehlungen und Evaluierung bestehender rechtlicher Regelungen und Verwaltungsstrukturen. https://www.umweltbundesamt.de/sites/default/files/medien/461/publikationen/4346.pdf. Online; Accessed 20 April 2019
Wassermann S (2015) Expertendilemma. In: Niederberger M, Wassermann S (eds) Methoden der Experten- und Stakeholdereinbindung in der sozialwissenschaftlichen Forschung. Springer, pp 15–32
Wiener H, Goren Y, Bensabat J, Tatomir A, Edlmann K, McDermott C (2015) FracRisk: Ranked FEP list. https://www.fracrisk.eu/sites/default/files/Deliverable4.1_rankedFEPlist_UPDATED.pdf. Online; Accessed 23 April 2019
Yang M (2015) Anti-Fracking Kampagnen und ihre Mediennutzung. In: Speth R, Zimmer A (eds) Lobby Work: Interessenvertretung als Politikgestaltung. Springer, pp 283–299

Chapter 8

Nuclear Energy and Waste Disposal

8.1 Background

A field of subsurface environmental engineering which is likely to (re-)receive enormous societal attention is the fate of the vast amounts of nuclear waste world-wide. One of the most likely options, given today's state of knowledge, is permanent geological storage. Thus, we have another subsurface topic where flow and transport processes play a dominant role in risk assessments. Public opinion on nuclear energy is not easy to evaluate. However, when it comes to hosting a nuclear waste repository in the neighbourhood, local opposition has to be expected. It will take convincing concepts and good arguments to achieve local acceptance (Sjöberg 2004), and modelling is at the forefront of the science-policy interface once again, as we will discuss below. While significant volumes of nuclear waste also originate from medical as well as industrial and research applications, the major source of radiotoxicity in nuclear waste arises from the many nuclear power stations worldwide. The International Atomic Energy Agency (IAEA) has devised a classification of radioactive waste with corresponding disposal options, as illustrated in Fig. 8.1. The waste classification takes the activity content of the waste (vertical axis) and the half-lives of the radionuclides (horizontal axis) as the defining parameters. Both parameters are checked against long-term safety as the main consideration for defining waste classes. The IAEA (2009: 21) (IAEA Safety Standards Series No. GSG-1 2009) explains that: "Waste is classified according to the degree of containment and isolation required to ensure its safety in the long term, with consideration given to the hazard potential of different types of waste. This reflects a graded approach towards the achievement of safety, as the classification of waste is made on the basis of the characteristics of the practice or source, taking account of the magnitude and likelihood of exposures". In Germany, the main distinction between classes of nuclear waste is made between heat-generating radioactive waste and radioactive waste with negligible heat generation (Röhlig 2016).


Fig. 8.1 Conceptual illustration of the waste classification scheme according to the IAEA in 2009 (IAEA Safety Standards Series No. GSG-1 2009)

Heat-generating nuclear waste comprises, for instance, so-called spent nuclear fuel from nuclear power stations and waste from reprocessing, which belong to the class of high-level waste destined for deep geological disposal. In contrast, radioactive waste with low heat generation is classified as either "intermediate level waste" (ILW) or "low level waste" (LLW) according to the IAEA. Volumes predicted in Germany for the year 2080 comprise approx. 28,100 cubic metres of heat-generating radioactive waste and 304,000 cubic metres of radioactive waste with negligible heat generation (Röhlig 2016).

The current state of German nuclear policy-making is based on the Site Selection Act (StandAG) adopted in 2013 and amended in 2017. The Act contains detailed provisions for searching for and exploring a disposal site, particularly for high-level radioactive waste. It requires a search to be performed throughout Germany in order to determine a site that guarantees the best-possible safety for one million years. Before a site is decided on, several potential sites should undergo surface and underground exploration (StandAG 2017). Alongside this, the Commission on the "Storage of Highly Radioactive Waste Material" was set up in 2014 in order to elaborate a report containing an analysis and evaluation of all the fundamental questions pertaining to radioactive waste disposal that arise in connection with the site selection procedure.

Fig. 8.2 Generic model for a deep geological repository, modified after Stahlmann et al. (2016) (labelled elements: shaft, monitoring drift, host rock, emplacement drift, plug, backfill, canister, emplacement depth, emplacement drift length)

The report was published in July 2016 (Deutscher Bundestag: Kommission Lagerung hoch radioaktiver Abfallstoffe gemäß §3 Standortauswahlgesetz 2016). According to the final report, Germany intends to separate nuclear waste disposal according to waste classes: low heat-generating waste is planned to be put underground in the Konrad disposal facility. The Konrad iron ore mine near Salzgitter in North Germany was kept open on behalf of the Federal Government after iron ore mining was halted in 1976, so that investigations could be performed into whether the mine would be suitable as a disposal facility; after a long licensing procedure, it is now being redeveloped into a disposal facility for low-heat-generating waste (Deutscher Bundestag: Kommission Lagerung hoch radioaktiver Abfallstoffe gemäß §3 Standortauswahlgesetz 2016). For high-level radioactive waste, disposal in a deep geological repository with reversibility is foreseen. The 'deep repository with reversibility' option will allow a high degree of flexibility with regard to the utilisation of newly acquired bodies of knowledge. It will remain possible to switch over to other disposal pathways for a long time. Within the ENTRIA research project (https://www.entria.de), a generic model for a deep geological repository with a 'reversibility' option has been elaborated (see Fig. 8.2). The overall objective is to guarantee that the waste can be retrieved if necessary without ruining the integrity of the host rock units in the deep underground.

Granites, clays and salt formations are the main types of host rocks in the nuclear waste debate, each of which has characteristic properties that can be categorised as advantageous or disadvantageous based mainly on considerations of flow and transport. Crystalline rock such as granite is very impermeable and mechanically rigid, but it is typically brittle and, thus, in spite of its low-permeability matrix, has many potential pathways which may be interconnected. Salt also has a very low permeability and is not brittle, but it is prone to plastic deformation and is water soluble. Finally, clay also has a very low permeability, which might, however, be endangered by desiccation processes, e.g. due to heat development in the near-field of buried nuclear-waste containers.

8.2 Modelling Issues

Modelling has become an important, if not the most important, discipline in shaping the energy systems of the future and in dealing with the burdens and damages the different technologies have inflicted. Modelling in the context of energy systems is typically used for scenario analyses, where the developments of different technologies and options are evaluated under technical and economic considerations. Energy scenario models can be used as planning tools to predict related factors over a certain time period at the level of a city, a region or a whole nation. For models on a global scale, questions around climate change naturally play an important role. The transformation of the global energy systems towards substituting fossil resources in the long run is receiving ever higher priority. From a climate point of view, nuclear energy is a low-carbon technology and thus an option to limit greenhouse-gas emissions. Many countries, like France, rely strongly on nuclear power as a major or their main energy source, while others, like Germany, plan to phase out nuclear energy in the near future, even faster than abandoning coal.

As in hydraulic fracturing (see Chap. 7) and in geologic carbon sequestration (see Chap. 6), a major environmental concern associated with the permanent storage of nuclear waste affects the subsurface, and the quantification of risks related to subsurface nuclear-waste disposal requires sophisticated flow and transport models. As mentioned above, potential host rocks are salt formations, granite or clay formations, and the questions around deciding for one or another host rock focus on the possible impact on the biosphere from emissions out of the subsurface containments. The time scales which need to be considered are far beyond any experimentally observable scale. Due to very long-lived radionuclides, the safety of containments for nuclear waste has to be ensured for thousands of years. Therefore, all risk assessment related to nuclear waste storage is almost solely based on numerical simulations. The modelling capabilities required for simulating the spreading of radionuclides are basically covered by the models and equations we have provided in Chap. 2, as long as the nuclides are considered as conservative tracers. Processes like the decay of radionuclides or retardation require, of course, further specific relationships.

Modelling scenarios are often distinguished as near-field and far-field scenarios. Near-field models allow for an evaluation of the performance of the barrier around the containment, which includes, besides flow and transport, also geomechanical aspects, for example, when gas pressure builds up during the corrosion of containers (Xu et al. 2008).


Other near-field examples are swelling materials exposed to saturation changes or temperature increases that induce stress-field alterations. The topic of gas generation and migration in deep geological radioactive waste repositories is extensively covered in the book edited by Shaw (2015), which focuses on understanding the behaviour of gases emitted from such sites. Referring to different underground research laboratories, the articles in this book address, for example, the performance of bentonite buffers and backfill materials used for encapsulating waste containers in crystalline formations or in clay, or, in several modelling studies, the fate of radionuclides whose migration is driven by gas generation. High temperatures in the near-field of a container may also induce heat pipes, which are complex thermo-hydraulic systems that can be modelled with the non-isothermal multiphase multi-component models introduced in this book (see the selected model-based illustration of the heat-pipe effect in Sect. 8.4 and the conceptual models explained in Chap. 2). Far-field models on the multiple-kilometre scale consider the site-specific hydrogeological conditions and are typically used for risk assessment.

The processes studied in the near field around a disposal site also include geomechanical aspects which interact with hydraulic processes. For example, it is obvious that the excavation of a disposal site leads to a severe disturbance of the rock, be it granite, salt rock or clay, and such excavation disturbed zones are likely to cause a redistribution of in-situ stresses as well as changes in the permeability to flow (Tsang et al. 2005). The impact of an excavation disturbed zone is, of course, different in the different host rocks, as Tsang et al. (2005) discuss. This is studied in underground rock laboratories like the Mont Terri laboratory in Switzerland, which is situated in a clay formation, e.g. Martin and Lanyon (2003); Shao et al. (2015). Model development is very active in this field and also aims at a better understanding of healing processes in plastic rocks like salt and clay.

In general, modellers often distinguish the description of thermal, hydrological (or hydraulic), mechanical and chemical processes (THMC). Accordingly, benchmark or comparison studies are set up, like the DECOVALEX project, which has a strong focus on processes around nuclear waste repositories. DECOVALEX stands for DEvelopment of COupled models and their VALidation against EXperiments and includes partners like the French National Radioactive Waste Management Agency (ANDRA), the German Federal Institute for Geosciences and Natural Resources (BGR), the Canadian Nuclear Safety Commission (CNSC), the Department of Energy in the United States, and many others. Just as an example, Rutqvist et al. (2017) summarise a recent DECOVALEX simulation study on THM processes in which they conclude that current models and modelling teams can achieve good agreement in their predictions of thermal and mechanical responses, while discrepancies in hydrological responses require further studies. Another study, using the TOUGH-FLAC and TOUGHREACT-FLAC codes (Rutqvist et al. 2014), links THM processes also with chemical processes (C) and puts a focus on engineered barrier systems (EBS) like the Swiss concept for a repository in Opalinus clay (the same formation as the rock in the Mont Terri laboratory), where temperatures above 100 °C are expected for the canisters, and effects on the bentonite buffer and the surroundings are studied.


This study suggests that fluid flow from the repository could, under certain circumstances, last for up to 10,000 years. Yet, it also shows that near-field and far-field THMC processes are strongly interactive and, in particular, that far-field effects require coupled simulations for site-specific rather than generic setups. A similar study on bentonite is described by Romero and Gonzalez-Blanco (2017) in a report on engineered barriers for Nagra (the Swiss National Cooperative for the Disposal of Radioactive Waste). They used the finite-element code CODE_BRIGHT (Olivella et al. 1996; Olivella and Alonso 2008) to model air transport processes dependent on dilatant embedded pathways. The GRS (Gesellschaft für Anlagen- und Reaktorsicherheit) has developed its own code, VIPER, that particularly addresses the micro-structural properties of bentonite and aims at describing the very complex re-saturation of compacted air-dry bentonite, including swelling, under changing hydraulic, mechanical and thermal conditions (Kröhn 2011). This brief overview of modelling issues related to nuclear waste storage in geologic formations has addressed only topics relevant for this book and was still very selective, since the literature and work on this important topic is abundant.
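As long as a radionuclide can be treated as a dissolved tracer with linear sorption and first-order decay, the basic far-field question of how far it migrates can be explored with a very simple scoping calculation before any of the coupled THMC codes mentioned above are employed. The following sketch is not taken from any of the cited codes; the velocity, dispersion coefficient, retardation factor and half-life are purely illustrative assumptions. It integrates the corresponding 1D advection-dispersion-decay equation with an explicit finite-difference scheme.

```python
import numpy as np

# Minimal 1D advection-dispersion-decay sketch for a dissolved radionuclide
# with linear sorption (retardation factor R); explicit finite differences.
# All parameter values are illustrative assumptions, not site data.
L, nx = 20.0, 400                  # domain length [m], number of cells
dx = L / nx
v = 1.0e-8                         # seepage velocity [m/s] (assumed)
D = 1.0e-9                         # dispersion/diffusion coefficient [m^2/s] (assumed)
R = 50.0                           # retardation factor (assumed, strong sorption)
half_life = 30.0 * 3.15e7          # e.g. a 30-year half-life, in seconds
lam = np.log(2.0) / half_life      # first-order decay constant [1/s]

c = np.zeros(nx)
c[0] = 1.0                         # constant-concentration source at the left boundary

# explicit scheme: respect advective and diffusive stability limits
dt = 0.4 * min(dx / (v / R), dx**2 / (2.0 * D / R))
t, t_end = 0.0, 1000.0 * 3.15e7    # simulate 1000 years
while t < t_end:
    adv = -(v / R) * (c - np.roll(c, 1)) / dx                        # upwind advection
    disp = (D / R) * (np.roll(c, 1) - 2.0 * c + np.roll(c, -1)) / dx**2
    c = c + dt * (adv + disp - lam * c)                              # decay acts on total (sorbed + dissolved) mass
    c[0], c[-1] = 1.0, c[-2]                                         # boundary conditions
    t += dt

front = dx * np.argmax(c < 1e-6)   # distance at which c drops below 1e-6 of the source
print(f"concentration falls below 1e-6 of the source within {front:.1f} m after 1000 years")
```

With strong retardation and a half-life of a few decades, the nuclide effectively travels no more than a few metres in a thousand years; substantiating this kind of statement for site-specific conditions and much longer time scales is exactly what the far more sophisticated models discussed in this chapter are used for.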

8.3 Science-Policy Issues

8.3.1 Subsurface Modelling as a Key Factor for Site Selection Processes

For the time being, the most convincing option for high-level nuclear waste disposal is seen in deep geological repositories (with or without retrievability). However, so far not a single deep geological disposal site is in operation. Several disposal sites plan to start operation in the 2020s. Finland is expected to be the first country to operate a disposal site, burying used nuclear fuel at a depth of around 400 m in the Onkalo bedrock on Olkiluoto Island, around 230 km northwest of Helsinki. Further selected sites are located in Sweden (Forsmark in the province of Uppsala) and France (Bure in the region of Alsace-Champagne-Ardenne-Lorraine) (Röhlig 2016; Brunnengräber et al. 2015). Many countries, in contrast, are still in different phases of site selection procedures, such as Germany, as already described above. We will now illustrate the role of subsurface modelling in the German site selection process.

The German site selection process is based on the Site Selection Act and the recommendations given by the Commission on the Storage of High-Level Radioactive Waste. The search process comprises a stage model and a corresponding decision-making criteria approach to ensure the safe disposal of radioactive waste for one million years. The stage model is distinguished by the following three phases according to the Commission's final report (Deutscher Bundestag: Kommission Lagerung hoch radioaktiver Abfallstoffe gemäß §3 Standortauswahlgesetz 2016):


• Phase 1: Start with a 'blank map' of Germany. Exclusion of regions in accordance with the agreed exclusion criteria and minimum requirements. Comparative analysis on the basis of available data in accordance with the specified consideration criteria and the representative preliminary safety analyses.
• Phase 2: Surface exploration of the potentially suitable siting regions identified in Phase 1. Comparative analysis and consideration in accordance with the agreed exclusion criteria, minimum requirements and consideration criteria, as well as further developed preliminary safety analyses.
• Phase 3: Underground exploration of the disposal sites selected at the outcome of Phase 2. In-depth study informed by the requirements placed on final safe disposal. Comprehensive preliminary safety analyses. Comparative consideration of possible disposal sites with the aim of identifying the site with the best possible safety.

While the three-phase model primarily intends to assess subsurface suitability, it is accompanied by an extensive public participation process. The public participation approach aims at a transparent information policy characterised by its breadth and depth, at shaping the public interest with the participation of affected parties, at successful participation through collaboration and re-examination, at the joint development of future prospects for the regions affected, and at holding course with an adaptive, self-healing procedure. The institutional governance approach foresees the establishment of a National Societal Commission, a series of regional conferences and further platforms for dialogue and participation.

The site selection process within the proposed three-stage model includes a decision-making criteria approach to ensure safe high-level radioactive waste disposal. The Commission has proposed five types of criteria for substantiating decision-making: so-called geoscientific exclusion criteria, geoscientific minimum requirements, geoscientific consideration criteria, safety requirements and requirements placed on safety analyses, as well as spatial planning criteria. The set of exclusion criteria prohibits site selection in cases where large-scale vertical movements, active faults, seismic activity or volcanic activity exist. The geoscientific minimum criteria stipulate minimum requirements a potential site must meet and are thus a sine qua non for site selection. The five recommended criteria refer to rock permeability, the thickness and depth of the isolating rock zone, the area of the disposal facility and information concerning the isolating rock zone over the reference period.

The application and appraisal of the minimum and exclusion criteria will identify geological areas which are, in principle, favourable and suitable for site selection. These areas will be assessed further by applying the so-called consideration criteria. In total, the Commission proposed eleven consideration criteria divided into three criteria groups, as shown in Table 8.1. Once the geological areas to be searched have been identified, with the geoscientific exclusion criteria and minimum requirements applied, they are to be appraised with the help of the consideration criteria as to whether a generally favourable overall geological situation is found in a subarea and/or siting region. In this respect, it is accepted as a matter of principle that one individual consideration criterion is not enough to provide evidence of, or rule out, an overall favourable geological situation.

Table 8.1 Consideration criteria for the German site selection process

Group 1: Quality of isolation capacity and reliability of evidence
– No or slow transportation through groundwater at the repository level
– Favourable configuration of rock bodies, in particular host rock and isolating rock zone
– Ease of spatial characterisation
– Good predictability of the long-term stability of favourable conditions

Group 2: Protection of isolation capacity
– Favourable rock-mechanical preconditions
– Low tendency to the formation of water flow paths in the host rock body/isolating rock zone
– Good conditions for the prevention and/or minimisation of gas generation
– Good temperature resistance

Group 3: Further safety-relevant properties
– High radionuclide retention capacity of the isolating rock zone
– Favourable hydrochemical conditions
– Protection of the isolating rock zone by the favourable structure of the overburden

Source: Adapted from Deutscher Bundestag: Kommission Lagerung hoch radioaktiver Abfallstoffe gemäß §3 Standortauswahlgesetz (2016)

Such an overall favourable geological situation will therefore not depend on the particularly good fulfilment of a single criterion, but on the sum of the requirements and associated consideration criteria fulfilled, or the extent to which all the requirements and associated consideration criteria are fulfilled (Deutscher Bundestag: Kommission Lagerung hoch radioaktiver Abfallstoffe gemäß §3 Standortauswahlgesetz 2016).

Subsurface environmental modelling plays a fundamental role in the German site selection process and is a fundamental basis for decision-making and implementation towards the deep geological waste disposal option. The integration of modelling in the policy-making process hints at a predominantly instrumental use of subsurface simulations and addresses the threefold services specified in Table 5.1. The first service of the instrumental use of modelling relates to the identification and evaluation of policy options. In its review of past and currently available options for nuclear waste disposal, the Commission discussed and reflected on ideas not to be pursued further, such as disposal in outer space, in the Antarctic or Greenland ice sheets or in the oceans, or indefinite storage on or close to the Earth's surface. Concerning possible alternatives to final disposal in an underground facility, options such as long-term interim storage, transmutation and deep borehole disposal were considered. However, the Commission finally recommended the option of 'deep repository with reversibility' to the German Bundestag. It is interesting to see the grounds for this decision, which are as follows (2016: 28) (Deutscher Bundestag: Kommission Lagerung hoch radioaktiver Abfallstoffe gemäß §3 Standortauswahlgesetz 2016):


"The Commission has studied these options thoroughly. The central arguments for recommending the option termed 'deep repository with reversibility' to the German Bundestag are as follows: Final disposal in a deep geological formation is the only option that, in the opinion of the Commission, offers the prospect of permanent, safe disposal of radioactive waste for the reference period of one million years. The long-term reliability of the containment function and the integrity of the geological characteristics that will ensure its safety can be scientifically proven by means of empirical surveys and modelling exercises (N.B.: emphasis added by authors)." Thus, modelling serves to evaluate different nuclear waste disposal options. Empirics and modelling are expected to prove the containment and the integrity of the geological characteristics for safe long-term disposal and thereby to demonstrate safety advantages over the other disposal options.

The second service of instrumental use addresses the design and implementation of policy decisions. Modelling plays a fundamental role in the implementation and evaluation process towards site selection. The criteria approach outlined above structures the site selection process in order to identify a generally favourable overall geological situation in a subarea and/or siting region. The Site Selection Act stipulates a two-step approach combining a qualitative assessment with subsequent detailing via subsurface modelling. The qualitative assessment serves as a pre-assessment as long as no site-specific geoscience data for modelling are available. It distinguishes a favourable, a relatively favourable and a less favourable assessment group, linked to the assessment-relevant property of the respective criterion. Taking the example from Table 8.1 of a favourable configuration of rock bodies, in particular host rock and isolating rock zone, several properties of the criterion are specified: the barrier effectiveness, for instance, is indicated by the barrier thickness in metres, while the volume of the isolating rock zone is expressed by the areal extent at a given thickness. However, the Site Selection Act clearly stipulates that the qualitative assessment is only the first step, taken when no geoscience data for long-term modelling are available. Numerical modelling is essential for evaluating the long-term safety of any disposal facility, which rests on the safe isolation of the radioactive waste being permanently guaranteed and on the prevention of any impermissible release of radionuclides into the biosphere within the reference period of one million years. Numerical modelling also has to prove this safety as soon as the necessary geoscience data become accessible. At the latest, the evidence-based modelling has to be delivered when the 'final site comparison and proposal' is provided, according to Section 18 paragraph 3 of the Site Selection Act.

This short overview exemplifies the instrumental use of simulation in the German site selection process for nuclear waste disposal. What is still unclear are other use patterns of subsurface modelling, because site selection is still at a very early phase. The Site Selection Act aims to specify a site in the year 2031. However, the Commission does not agree with this ambitious timeline, and abstains from presenting its own timetable. Against this background, it is hard to detail other use patterns of simulations, such as tactical or procedural use. What can be assumed, however, is a heavy tactical use of simulation results once site-specific decisions are taken: numerical modelling exercises that substantiate the comparative ranking of sites in order of eligibility will then feature prominently in public debates and policy-making processes.

Fig. 8.3 Basic setup and processes involved in a heat-pipe effect (sketched along the length of the heat pipe: the heat source, the temperature curve, a single-phase water region with conduction, the two-phase region of the heat pipe with convection, counter-current vapour (gas) flow with condensation and water (liquid) flow with evaporation, and a single-phase vapour (gas) region with conduction)

8.4 Selected Model-Based Illustration: Heat-Pipe Effect

The development of heat pipes is a scenario which has been discussed in the context of nuclear waste storage in the subsurface. Doughty and Pruess (1985, 1988) provide a comprehensive review of heat-pipe effects in nuclear waste isolation and also present a relatively simple semi-analytical solution to describe such effects near high-level nuclear waste containers in partially saturated geological media. Essentially, the requirements for a heat pipe to develop are (i) a heat source, which can be the heat from hot waste containers, (ii) a volatile/condensable fluid, which is simply the water present near the heat source, and (iii) a mechanism to facilitate a counter-current flow system with water vapour flowing away from the heat source and liquid water, upon condensation, returning to it. Capillary forces in the porous subsurface are usually sufficient to act as the driving force for the returning liquid water, see also Fig. 8.3.

A heat-pipe problem can be modelled as a compositional two-phase two-component problem using Eq. (2.11) in terms of mass balances for the components water and air in the aqueous phase and the gas phase. It is, in particular, a non-isothermal problem, and a single thermal energy balance equation is required in the case of thermal equilibrium, as given by Eq. (2.12). The system is dominated by coupled processes of convection and heat conduction. Diffusion and, in particular, capillary forces play an essential role. Udell and Fitch (1985) provide a semi-analytical solution for the setup we present in the following, which allows, for example, model verification by comparing model results with those of the semi-analytical solution.

Table 8.2 Boundary conditions

Section | Boundary type | Mass balance: water | Mass balance: air | Energy balance
Left    | Dirichlet     | pg = 101300 Pa      | Sw = 0.99         | T = 68.6 °C
Right   | Neumann       | qw = 0              | qa = 0            | qh = 100 J/s

The setup is illustrated in Fig. 8.3 and implemented in the subfolder lecture/mm/heatpipe of the module dumux-lecture. It comprises a horizontal 1D section of porous medium. Neumann boundary conditions in terms of a constant heat flux of qh = 100 W and no-flow conditions for the mass components are applied to the right-hand boundary. The left-hand boundary is modelled with Dirichlet conditions, where constant values for the gas-phase pressure (pg = 101300 Pa), the effective water saturation (Sw,e = 1) and the temperature (T = 68.6 °C) are given. Note that the given pressure and temperature also fix the value of the air mole fraction in the gas phase to x_g^a = 0.71 for this case, where water vapour is in equilibrium with the liquid water which is also present at this boundary. The boundary conditions are summarised in Table 8.2.

The heat pipe is a system that, depending on the initial conditions, establishes after a certain period of time a steady state in which the counter-current flow system perpetuates itself. Therefore, the initial conditions are not really important. One could start, for example, with 50% saturation and ambient pressure. In that case, the heat flux at the right-hand boundary heats the sand until the boiling point is reached. Liquid water evaporates and the saturation decreases near the heat source. Due to the large volume expansion from liquid water to steam, the gas-phase pressure increases and drives the steam away from the heat source into cooler parts of the sand column. There, the steam re-condenses and increases the water saturation locally. This, in turn, leads to a gradient in water saturation and, thus, to a gradient in capillary pressure that brings the condensed water back towards the heat source. The circulation of water perpetuates itself with a continuous, efficient transfer of heat away from the heat source and into the sand column. Once steady-state conditions are established, three distinct zones can each be associated with a dominant heat transport process: (i) the single-phase region with liquid water, where conduction dominates, (ii) the two-phase region, the actual heat pipe, where convection dominates, and (iii) a single-phase region with gas or water vapour, where all the liquid water has evaporated and heat is transported via conduction. The temperature gradient is established accordingly.

The plots in Figs. 8.4 and 8.5 show the steady-state results of the simple 1D setup described above. Exemplarily, the results are compared for two different capillary pressure-saturation relationships. It is interesting to see how capillary forces and other parameters (not shown here) affect the extension of the heat pipe. Since one end of the heat pipe is fixed by the Dirichlet boundary condition at the left-hand boundary, the length of the heat pipe is visible in the region towards the right-hand boundary. The smaller the heat pipe, the more it moves away from the heat source.
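The boundary value x_g^a = 0.71 quoted above can be reproduced with a short hand calculation: where liquid water is present, the mole fraction of water vapour in the gas equals the ratio of the saturation vapour pressure at 68.6 °C to the total gas pressure, and the air mole fraction is its complement. The sketch below uses the Antoine correlation for the vapour pressure of water, which is our own assumption for this check; the simulator relies on its own water property relations.

```python
import math

# Equilibrium air mole fraction at the left (Dirichlet) boundary of the
# heat-pipe setup: p_g = 101300 Pa, T = 68.6 degC, liquid water present.
T_celsius = 68.6
p_g = 101300.0                               # gas-phase pressure [Pa]

# Antoine correlation for water (roughly valid for 1-100 degC), p_sat in mmHg;
# using this correlation instead of the simulator's water tables is an assumption.
A, B, C = 8.07131, 1730.63, 233.426
p_sat_mmHg = 10 ** (A - B / (C + T_celsius))
p_sat = p_sat_mmHg * 133.322                 # convert mmHg to Pa

x_water = p_sat / p_g                        # mole fraction of water vapour in the gas
x_air = 1.0 - x_water                        # mole fraction of air in the gas phase

print(f"p_sat(68.6 degC) = {p_sat/1000:.1f} kPa")
print(f"x_g^a = {x_air:.2f}")                # approx. 0.71, as stated in the text
```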


Fig. 8.4 Spatial distribution of the molar fraction of water in the gaseous phase (left) and the liquid saturation (right) for a 1D heat-pipe problem. The heat source is placed at the right end of the model area. Continuous curves represent a simulation with a capillary pressure factor of 1 while the dashed curves represent simulations where it is 0.5. In the left diagram, the two curves cannot be distinguished

Fig. 8.5 Spatial distribution of the gaseous pressure and liquid pressure (left) and temperature (right) for a 1D heat-pipe problem. The heat source is placed at the right end of the model area. Continuous curves represent a simulation with a capillary pressure factor of 1 while the dashed curves represent simulations where it is 0.5


Higher capillary pressures lead to lower aqueous-phase pressures at the dry end of the heat pipe and thus force more liquid water from the fully saturated boundary on the left towards the heat source. The heat pipe is therefore larger in this case, as can be seen in the plots. Similarly, an increased heat flux would cause a shorter heat pipe since, in that case, the evaporation rate of the liquid water delivered by capillary forces is increased. One might further investigate the influence of parameters like permeability, porosity or heat conductivity, which are all important for the design of a barrier around a containment of nuclear waste, in the sense that they alter the relative influence of the heat transport processes, conduction and convection, as well as the transport of water in both the aqueous and the gaseous phase.
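Two back-of-the-envelope numbers illustrate why the heat pipe is such an efficient heat-transfer mechanism: the steam mass flow needed to carry the injected 100 W as latent heat, and the temperature gradient that conduction alone would require for the same heat flux. The effective thermal conductivity and the cross-sectional area assumed below are illustrative values, not parameters of the setup described above.

```python
# Back-of-the-envelope comparison of latent-heat (heat-pipe) versus conductive
# heat transport for the 100 W injected at the right-hand boundary.
Q = 100.0            # injected heat [W], as in the setup above
h_fg = 2.26e6        # latent heat of vaporisation of water [J/kg], approx. near 100 degC
lambda_eff = 2.5     # effective thermal conductivity of wet sand [W/(m K)] (assumed)
A = 1.0              # cross-sectional area of the 1D column [m^2] (assumed)

m_dot = Q / h_fg                    # steam mass flow that carries 100 W as latent heat
dT_dx = Q / (lambda_eff * A)        # temperature gradient needed by conduction alone

print(f"steam mass flow in the heat pipe: {m_dot*1000:.3f} g/s")
print(f"conduction-only gradient: {dT_dx:.0f} K/m")
```

A few hundredths of a gram of circulating steam per second thus replaces a conductive gradient of several tens of kelvin per metre, which is consistent with the comparatively flat temperature profile in the two-phase region.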

References

Brunnengräber A, Di Nucci MR, Losada AMI, Mez L, Schreurs MA (eds) (2015) Nuclear waste governance: an international comparison. Springer, Berlin
Deutscher Bundestag: Kommission Lagerung hoch radioaktiver Abfallstoffe gemäß §3 Standortauswahlgesetz (2016) Verantwortung für die Zukunft - Ein faires und transparentes Verfahren für die Auswahl eines nationalen Endlagerstandortes. https://www.bundestag.de/resource/blob/434430/f450f2811a5e3164a7a31500871dd93d/drs_268-data.pdf. Accessed 23 Dec 2019
Doughty C, Pruess K (1985) Heat pipe effects in nuclear waste isolation - a review. Technical report, Lawrence Berkeley Laboratory, Earth Sciences Division. https://digital.library.unt.edu/ark:/67531/metadc698151/m2/1/high_res_d/60603.pdf. Accessed 20 April 2019
Doughty C, Pruess K (1988) A semianalytical solution for heat-pipe effects near high-level nuclear waste packages buried in partially saturated geological media. Int J Heat Mass Transfer 31:79–90
IAEA Safety Standards Series No. GSG-1 (2009) Classification of radioactive waste. General safety guide. Technical report, International Atomic Energy Agency, Vienna
Kröhn K-P (2011) Code VIPER - theory and current status. Technical report, GRS 269, Gesellschaft für Anlagen- und Reaktorsicherheit
Martin C, Lanyon G (2003) Measurements of in-situ stress in weak rocks at Mont Terri rock laboratory. Int J Rock Mech Mining Sci 40:1077–1088
Olivella S, Alonso E (2008) Gas flow through clay barriers. Géotechnique 58:157–176
Olivella S, Gens A, Carrera J, Alonso E (1996) Numerical formulation for a simulator (CODE_BRIGHT) for the coupled analysis of saline media. Eng Comput 13:87–112
Röhlig K-J (2016) Techniken – Konzepte – Herausforderungen: Zur Endlagerung radioaktiver Reststoffe. In: Brunnengräber A (ed) Problemfalle Endlager: Gesellschaftliche Herausforderungen im Umgang mit Atommüll. Nomos Verlagsgesellschaft, pp 33–54
Romero E, Gonzalez-Blanco L (2017) Hydro-mechanical processes associated with gas transport in MX-80 bentonite in the context of Nagra's RD&D programme. Technical report, Nagra - National Cooperative for the Disposal of Radioactive Waste
Rutqvist J, Zheng L, Chen F, Liu H-H, Birkholzer J (2014) Modeling of coupled thermo-hydro-mechanical processes with links to geochemistry associated with bentonite-backfilled repository tunnels in clay formations. Rock Mech Rock Eng 47:167–186
Rutqvist J, Barr D, Birkholzer J, Chijimatsu M, Kolditz O, Liu Q, Oda Y, Wang W, Zhang C (2017) Results from an international simulation study on coupled thermal, hydrological, and mechanical processes near geological nuclear waste repositories. Nuclear Technol 163:101–109
Shao H, Xu W, Marschall P, Kolditz O, Hesser J (2015) Numerical interpretation of gas-injection tests at different scales. In: Shaw R (ed) Gas generation and migration in deep geological radioactive waste repositories. Geological Society of London
Shaw R (ed) (2015) Gas generation and migration in deep geological radioactive waste repositories, vol 415. Geological Society of London, London
Sjöberg L (2004) Local acceptance of a high-level nuclear waste repository. Risk Anal 24:737–749
Stahlmann J, Vargas RL, Mintzlaff V (2016) Geotechnische und geologische Aspekte für Tiefenlagerkonzepte mit der Option der Rückholung der radioaktiven Reststoffe. Bautechnik 93:141–150
StandAG (2017) Standortauswahlgesetz, Gesetz zur Suche und Auswahl eines Standortes für ein Endlager für hochradioaktive Abfälle. https://www.gesetze-im-internet.de/standag_2017/BJNR107410017.html. Accessed 13 Aug 2019
Tsang C-F, Bernier F, Davies C (2005) Geohydromechanical processes in the excavation damaged zone in crystalline rock, rock salt, and indurated and plastic clays - in the context of radioactive waste disposal. Int J Rock Mech Mining Sci 42:109–125
Udell K, Fitch J (1985) Heat and mass transfer in capillary porous media considering evaporation, condensation and non-condensible gas effects. In: 23rd ASME/AIChE national heat transfer conference, Denver, CO, August 1985
Xu T, Senger S, Finsterle S (2008) Corrosion-induced gas generation in a nuclear waste repository: reactive geochemistry and multiphase flow effect. Appl Geochem 23:3423–3433

Chapter 9

Further Subsurface Environmental Modelling Cases

The scope of this book does not allow all related subsurface environmental engineering applications to be covered in as much detail as the topics of the previous chapters. Yet, many fields and problems remain where, on the one hand, flow and transport processes play a dominant role and, on the other hand, enormous societal implications are inherent. Just a few are briefly touched upon below. While all the modelling-related parts of this book can be directly transferred and applied, we do not elaborate on hydrocarbon (oil and gas) production. The latter is already covered in many textbooks and, although it affects the environment immensely, it is not usually considered an environmental engineering topic. In fact, environmental engineering related to subsurface flow and transport problems has benefited enormously from the petroleum industry, which has greatly contributed to the modelling concepts we apply today. This holds true particularly for the modelling of contaminated soils.

9.1 Energy-Related Subsurface Engineering Applications

9.1.1 Geothermal Energy Exploitation

Deep geothermal energy projects exploit the heat in the Earth's crust for heating, hot water, electricity or combined heat and power. The biggest part of the geothermal heat resource originates from the radioactive decay of long-lived nuclides in the Earth's mantle. Thus, geothermal energy is a very reliable and sustainable source of energy. Geothermal energy can be used, for example, for heating buildings, by being produced locally just underneath the surface (shallow geothermal energy). For large-scale applications, also with the aim of producing electricity, there are basically two different options to be distinguished: hydrothermal systems and petrothermal systems.


Hydrothermal systems can be installed where aquifers with high water temperatures are found. Such water can be produced, cooled down above ground and subsequently re-injected into the subsurface. Petrothermal systems, often known as hot-dry-rock systems, usually target hot rocks whose low permeability does not allow the direct production of hot water. If the rock has insufficient water transmissivity, an artificial fracture system is generated or a natural fracture system is expanded, which then allows the circulation of water between two wells. This is usually denoted as an Enhanced Geothermal System (EGS). Essentially, the hot rock acts as a heat exchanger for heating the injected cold water, while warm water can then be produced. The major engineering challenge in petrothermal systems is the stimulation of the rock to generate the required permeability, usually via fluid injection, optionally followed by chemical stimulation where, for example, acids are added to remove minerals that impede fluid flow in the fracture networks. A further possibility for geothermal heat exploitation is to use single-well systems that employ closed-system down-hole heat exchangers; however, the achievable heat output is much lower than in open systems due to the much smaller area for heat exchange.

A multitude of mathematical/numerical modelling studies is available in the literature. They address issues like technical feasibility, parameter sensitivities, etc., e.g. Yang and Yeh (2009). An example of an industrially exploited hot fractured rock is the site at Soultz-sous-Forêts in the Rhine Valley in France. Temperatures at about 5 km depth are more than 200 °C in a crystalline rock. Conditions of temperature, stress fields and rock characteristics were described by Genter et al. (2003). In Soultz-sous-Forêts, chemical stimulation in a three-well EGS system has also been applied (Portier et al. 2009). Despite mixed success, it has encouraged the further development of chemical stimulation as a possibility of reducing the need for hydraulic stimulation, since the latter may induce seismic events, as will be discussed in more detail further below.

The development of renewable and sustainable energy resources is an overarching, high-level goal which has broad acceptance in our society. Yet, most of the renewable energies also face strong criticism in different respects. Geothermal energy exploitation is one of the renewables that falls within the topics of this book on subsurface flow and transport problems, and it is viewed critically mainly for two reasons. Deep geothermal energy projects bear the risk of induced seismic events due to the high pressures and large fluid volumes that are applied in order to fracture the rocks and create a sufficient permeability and heat exchange surface. We will elaborate a bit more on fluid injection and induced seismicity below, in Sect. 9.1.3. A prominent example of a hazardous event related to deep geothermal energy is the Basel geothermal project, an enhanced geothermal system which, in 2007, was stopped a few days after the start of the rock stimulation as earthquakes of magnitude 3 occurred, e.g. Deichmann and Giardini (2009), Ellsworth (2013). In this context, one should note that Basel sits atop a historically very active fault which saw an earthquake above magnitude 6 in 1356 that largely destroyed the city.
The aftermath of the Basel event in the technical and socio-political landscape shows how quickly huge investments can be lost, and it provides further proof of the significance of the interface between science, policy and society. As Giardini (2009) put it, "Geothermal quake risks must be faced", as otherwise "society risks a public backlash that could unnecessarily quash a promising alternative energy technology."

9.1.2 Aquifer Thermal Energy Storage (ATES)

The storage and recovery of thermal energy in the subsurface can be achieved by extracting and re-injecting groundwater in a seasonal mode of operation. Groundwater used in summer, e.g. for the cooling of buildings, is injected back into the aquifer at elevated temperatures. The heat accumulated during the summer can then be exploited in winter by reversing the system and using the elevated groundwater temperatures for heating the buildings. More generally, such energy storage allows gaps to be bridged between periods of surplus production of renewable thermal energy, for instance from industrial processes or solar systems, and periods of excess demand.

While the potential benefits of a cost-effective and climate-friendly technology are evident, there is concern about the impacts of elevated groundwater temperatures on drinking water resources or about interference with soil contamination, for example with NAPLs as discussed below. Soil and groundwater contamination is found particularly in urban areas, which is exactly where ATES has the highest potential. Groundwater pumping and injection will certainly affect the distribution and spread of such substances (Willemsen and Groeneveld 1987). Where several ATES systems operate, interaction is likely, and a once local contaminant source might be shifted and become scattered over a large area.

Essentially, ATES systems are controlled by thermo-hydraulic processes, and modelling can contribute to estimating their performance. Fluid displacement due to injection involves strong viscous forces which interact with buoyant forces in the aquifer arising from the temperature-dependent fluid densities. Multiple aquifer layers can lead to thermal losses. Important factors are, for example, the distance between the boreholes, the thermo-hydraulic properties of the aquifer and its surroundings, and the pumping/injection rate. A number of mathematical/numerical studies on ATES systems have been published in recent years with a focus on case studies, e.g. Kim et al. (2010), Bozkaya et al. (2017), and on the development of specific numerical and analytical models, e.g. Lee (2010), Nordbotten (2017), Zhu et al. (2015).
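A rough idea of the quantities involved can be obtained by estimating the sensible heat stored in a body of injected warm water from its volume, heat capacity and temperature difference to the ambient groundwater. The volume and temperature difference below are purely illustrative assumptions, and losses to the surrounding rock are ignored.

```python
# Illustrative estimate of the sensible heat stored in an ATES system.
rho_w = 1000.0      # density of water [kg/m^3]
c_p = 4186.0        # specific heat capacity of water [J/(kg K)]
V = 100000.0        # injected water volume over one summer [m^3] (assumed)
dT = 10.0           # temperature difference to ambient groundwater [K] (assumed)

E = rho_w * c_p * V * dT            # stored sensible heat [J]
print(f"stored heat: {E:.2e} J = {E/3.6e9:.0f} MWh")
```

Only part of this heat can be recovered in practice; quantifying the recovery efficiency and the thermal interference between neighbouring systems is precisely what the thermo-hydraulic models cited above are used for.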

9.1.3 Fluid Injection and Induced Seismicity

The disposal of waste water in deep geological formations is common practice in the context of conventional oil and gas production as well as in hydraulic fracturing operations. The topic of induced seismicity as a fluid-flow-related risk in subsurface activities has already been touched upon in Chap. 7 in the context of hydraulic fracturing.


Earthquakes have been observed in the vicinity of fracking operations in Oklahoma, where, e.g., Holland (2013) discusses the temporal correlation between fracking operations and a sequence of more than 100 earthquakes in close proximity to the well. A similar study by Skoumal et al. (2015) reports on fracking-induced earthquakes in a township in Ohio. The disposal of waste fluids by injecting them into deep geological formations has been linked to triggering swarms of earthquakes, as discussed by Horton (2012). Events that received a great deal of attention were the Mw 5.7 earthquake in 2011 in central Oklahoma, which has been linked to wastewater injections from oil production in the Wilzetta North field (Keranen et al. 2013), and an east Texas earthquake in 2012 (Frohlich et al. 2014).

Fluid injection has an effect on the stress state of a geological formation in the subsurface. However, the stress state prior to injection is rarely known and the potential for earthquakes is often unknown. It is also important to note that induced earthquakes need to be distinguished from triggered earthquakes. Shear stresses can be close to the strength limit, and fluid injection can then be the small perturbation sufficient to trigger large seismic events even over large distances, as in the Mw 3.9 earthquake in Paradox Valley, Colorado, which occurred in 2013 several kilometres away from a long-term injection of salt water and more than 15 years after initial seismic responses were observed (Ake et al. 2005; Ellsworth 2013). Such earthquakes can be caused by a further increase of the shear stress or a decrease of the normal stress, the latter being the case when pore pressures are increased as a result of fluid injection. Shear failure can be simply explained and evaluated with the Mohr circle and the Mohr-Coulomb failure criterion. Ellsworth (2013), who reviews injection-induced earthquakes in his Science paper, uses the term "induced" to refer to both induced and triggered events as long as they are related to human activity. He relates the risk of large seismic events particularly to long-term, high-volume injections.

It is therefore obvious that induced or triggered earthquakes will raise questions of responsibility when they cause damage. Therefore, modelling capabilities for predicting the potential magnitude of seismic events induced by waste-water injection are immediately relevant for the science-policy interface. Ellsworth (2013) suggests regulation for the improved timely collection of hydrological data related to seismic events, for example in terms of daily injection volumes, peak and mean injection pressures, but also measurements of the pressure in the formation prior to fluid injection. The predictive modelling of seismic events is currently (and in the foreseeable future) impossible. However, there are efforts to construct simulation models that help to interpret the available data and relate them to the possible expected magnitudes of induced seismic events. Beck (2018) has worked on a conceptual model for fault reactivation which allows evaluations of altered stress states due to fluid injection, including the effects of pressures on normal and shear stresses, which can be compared with slip-failure criteria. It is concluded there, for example, that slower pressure increases with lower maximum pressures can sometimes, in fact, be worse in terms of the resulting seismicity. Therefore, numerical models, e.g. by Beck (2018), Rutqvist et al. (2013), Simone et al. (2017), even with a limited capability to predict events, are definitely also an important tool for this very complex engineering application, although the transfer from science to policy becomes more challenging.
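The role of the pore pressure in the Mohr-Coulomb argument can be made explicit in a few lines: slip on a pre-existing fault is expected once the resolved shear stress exceeds the frictional resistance computed from the effective normal stress. The stress state, friction coefficient and pressure levels below are illustrative assumptions and are not taken from any of the cited studies.

```python
# Mohr-Coulomb slip criterion on a pre-existing fault under fluid injection.
# Slip is expected when tau >= c + mu * (sigma_n - p), i.e. when the shear
# stress exceeds the frictional strength based on the effective normal stress.
mu = 0.6           # friction coefficient (assumed)
c = 0.0            # cohesion [Pa], often neglected for pre-existing faults
sigma_n = 50e6     # total normal stress on the fault plane [Pa] (assumed)
tau = 16e6         # resolved shear stress on the fault plane [Pa] (assumed)

def slips(p):
    """Return True if the Mohr-Coulomb criterion predicts slip at pore pressure p."""
    return tau >= c + mu * (sigma_n - p)

for p in (20e6, 25e6, 30e6):   # pore pressure before/after injection [Pa]
    margin = c + mu * (sigma_n - p) - tau
    print(f"p = {p/1e6:.0f} MPa: slip = {slips(p)}, strength margin = {margin/1e6:.1f} MPa")
```

A pore-pressure increase of a few megapascals can thus be enough to push a critically stressed fault across the failure line, which is the mechanism behind the triggered events discussed above.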


9.2 Contamination in the Subsurface: Spreading and Remediation

9.2.1 Contaminant Spreading in the Unsaturated Zone

Contaminants can enter the subsurface in many different ways. "Contaminant" is used here as an umbrella term for substances that are potentially harmful to humans and the environment. The sources of contamination are manifold; agriculture, industry and mining are among the most important ones. Entry into the soil can be diffuse or occur locally in spills. A comprehensive and general discussion of contaminant spreading is, of course, far beyond the scope of this book. Thus, in the following, we will focus on a particular class of contaminants denoted as NAPLs (non-aqueous phase liquids), i.e. organic substances that typically form separate liquid phases and have only very limited solubility in water.

The unsaturated (or vadose) zone is the region in the subsurface that represents the interface between the saturated groundwater zone and the atmospheric environment; usually, it is the unsaturated zone that first comes into contact with contaminants. The unsaturated zone is very important, for example, for agriculture, and it interacts strongly with the local climate and water cycle. Many contaminants reside in the unsaturated zone for a long time after they become trapped by different mechanisms. They therefore pose a serious threat to the aquifers below, since they may be washed out by the natural groundwater recharge due to precipitation and, in this way, shift into the saturated zone. Unless they are only present in very small amounts, NAPLs exist in a separate liquid phase which is in equilibrium with dissolved contaminants in the water phase and with contaminant vapour in the soil air. To model contaminant spreading, it is essential to understand the basic trapping mechanisms, which depend on the contaminant's phase state:

(i) Contaminants like NAPLs in a separate liquid phase typically leave a trace of residually trapped liquid behind after their passage through the unsaturated zone. Capillary forces, which depend on the soil type and grain size distribution, can hold liquids back and immobilise them. Furthermore, depending on their viscosity, fluids can become retarded and, thus, in the absence of strong pressure gradients, remain as long-term contamination. Heterogeneities in the soil structure have a strong influence on this phase-trapping. Low-permeability lenses are typically associated with higher capillary pressures; thus, they can act both as a hydraulic barrier (due to the lower permeability) and as a capillary barrier for a non-wetting fluid if the lenses are filled with a wetting fluid.

(ii) Dissolved contaminants in the water phase are trapped by different mechanisms. Soils have a certain capacity to adsorb components which are dissolved in the water. Adsorption/desorption is a complex physical and/or chemical process and is commonly described by isotherms as a function of the component's concentration or partial pressure. Variations of pressure, temperature or concentrations can disturb the adsorption/desorption equilibrium and, thus, trigger the immobilisation or re-mobilisation of substances.


Fig. 9.1 The dominating processes of contaminant spreading after a NAPL spill in the subsurface can vary with the time-scale of interest; eventually, the same holds true for a subsequent steam injection

Generally, soils with higher carbon contents have higher sorption capacities. Evaporated contaminants in the soil air, like NAPL vapours, cannot really be considered as trapped, since the soil air is, in general, a rather mobile phase. However, these vapours may dissolve into the soil water and afterwards be trapped, as explained above.

(iii) Chemical reactions or bio-degradation are ultimately the most desired and, at the same time, most convenient ways to remove contaminants. However, the time-scales of these processes, which depend on the substance and the availability of reaction partners and microbes, can be very long. Currently, a lot of research is underway to go beyond naturally occurring processes (→ natural attenuation) and to exploit and enhance them for new remediation technologies, like accelerating redox processes with catalysts, nanoparticles or heat (NanoRem 2019; Bordenave 2015).

Figure 9.1 schematically illustrates how the different processes during and after a spill of a contaminant (in this case a NAPL) can have very different significance. This can have strong implications for modelling. Following the basic principle of model efficiency, that a model should be as simple as possible and only as complex as necessary, we can take advantage of these changes over time and, for example, adapt the model complexity accordingly (Class et al. 2007). Such a decision would lead to recognised ignorance (see Table 5.2) in the context of uncertainties. Class et al. (2007) presented a sequential coupling of a three-phase non-compositional model with a three-phase three-component model which is used after the multiphase system comes close to rest. Their results confirmed the hypothesis that the processes can be considered separately on different time-scales. The NAPL plume takes less than 50 days to reach an almost immobile state, while the further ongoing diffusive spread of volatilised or dissolved NAPL would take several orders of magnitude longer.
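The statement that the subsequent diffusive spreading takes far longer than the initial migration of the liquid plume can be made plausible with a simple time-scale estimate; the effective diffusion coefficient and the spreading distance used below are illustrative assumptions.

```python
# Time-scale estimate: diffusive spreading of dissolved/volatilised NAPL
# versus the roughly 50 days needed for the liquid plume to become immobile.
D_eff = 1.0e-9      # effective diffusion coefficient in the pore water [m^2/s] (assumed)
L = 1.0             # characteristic spreading distance [m] (assumed)

t_diff = L**2 / D_eff                    # characteristic diffusion time [s]
print(f"diffusion time over {L} m: {t_diff/3.15e7:.0f} years")
print(f"ratio to the ~50 days of plume migration: {t_diff/(50*86400):.0f}x")
```

Larger spreading distances or smaller effective diffusion coefficients quickly push this ratio to several orders of magnitude, which is why the compositional, diffusion-controlled stage can be treated separately from the initial multiphase stage.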


Under these circumstances, it is justifiable to first use a simpler multiphase model with less computational effort and then couple it with a more complex compositional model. However, if, for some reason, compositional effects are already of interest at an earlier stage, it is certainly necessary to apply the compositional model with its full complexity from the very beginning of the NAPL spill. This depends on the demands on the results, which are specified by regulators, site management or other stakeholders.

9.2.2 Remediation of NAPL-Contaminated Sites

The remediation of contaminated land is a very broad field with many different options for different types of contamination. In general, all decisions on which technology is most appropriate for a given site need to be embedded in a comprehensive risk management context. The toxicity and size of the contamination, the accessibility, the availability of cost-effective technologies, the urgency of re-use and the type of re-use, in addition to many other factors, need to be considered. One can distinguish, in general, between in-situ and ex-situ remediation; here, we will only look at in-situ methods, since these are the ones related to subsurface flow and transport.

The criteria for the choice of an in-situ remediation technique are manifold. One of the most important is the density of the NAPL, i.e. whether it is a DNAPL (denser than water) or an LNAPL (lighter than water). A DNAPL can potentially submerge in the groundwater, while an LNAPL pools on top of the groundwater table. Furthermore, the volatility of the NAPL is crucial. Highly volatile NAPLs vaporise more easily and can therefore be extracted via soil-air extraction, while less volatile NAPLs might be stimulated to evaporate via heat before being recovered by soil-air extraction; this is usually denoted as thermally enhanced soil-air extraction. Another important question is where the source of the contamination resides: in the unsaturated zone (LNAPL or DNAPL) or in the saturated zone (typically only DNAPL).

9.2.2.1 Steam or Steam/Air Injection in the Unsaturated Zone

Figure 9.2 illustrates the characteristic processes during a scenario of thermally enhanced soil remediation in the unsaturated zone, distinguishing between steam injection (left) and steam/air injection (right). The most basic differences between steam and steam/air injection are due to (i) steam being condensable while air is not and (ii) steam delivering much more thermal energy than air. The steam front during steam injection can only propagate when the soil and the fluids have been heated up to the boiling point of water (approx. 100 °C at atmospheric pressure) or, in the two-phase region, to the boiling point of the water-NAPL mixture, which depends on the saturation vapour pressures of the two liquids. Before that, the arriving steam is fully condensed together with NAPL vapours and delivers its latent heat of vaporisation.
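The second of the two differences named above, namely that steam delivers much more thermal energy than air, follows directly from comparing the latent heat released by condensing steam with the sensible heat released by cooling air; the temperature drop assumed for the air below is an illustrative value.

```python
# Heat delivered per kilogram of injected fluid: condensing steam vs. cooling air.
h_fg = 2.26e6       # latent heat of vaporisation of water [J/kg]
c_p_air = 1005.0    # specific heat capacity of (dry) air [J/(kg K)]
dT = 80.0           # assumed temperature drop of the air in the subsurface [K]

q_steam = h_fg                   # heat per kg of condensing steam [J/kg]
q_air = c_p_air * dT             # heat per kg of cooling air [J/kg]
print(f"steam: {q_steam/1e6:.2f} MJ/kg, air: {q_air/1e6:.2f} MJ/kg, "
      f"ratio approx. {q_steam/q_air:.0f}")
```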

Fig. 9.2 Schematic diagram: comparison between steam injection and steam/air injection, combined with soil-air extraction for contaminant recovery in the unsaturated zone (sketched: steam and steam/air injection, liquid-phase and gas-phase extraction, the region of contamination with NAPL flowing downward, and the temperature profiles, with a steep and sharp temperature front for steam injection and a broader temperature front for steam/air injection)

This leads to an accumulation of liquid NAPL in the steam/condensation front and, due to gravity, the NAPL can flow downward and possibly reach the groundwater table. The addition of air to the injected steam can have important benefits for limiting this downward shift of NAPL into the groundwater, which must be absolutely avoided in the case of a DNAPL contamination. Air is non-condensing; thus, it can maintain a continuous gas flow towards the extraction well and, depending on the temperature, can continuously transport a certain amount of NAPL vapour. This reduces the risk of liquid NAPL accumulations and, therefore, limits downward shifts of NAPL; it also leads to a broadening of the temperature front, as indicated in Fig. 9.2.

Modelling has always been an important pillar for the development of this technology. In the 1990s, there was a high research incentive in the field of remediation; a milestone was the establishment of the VEGAS groundwater research facility (Versuchseinrichtung zur Grundwasser- und Altlastensanierung, University of Stuttgart) in 1995 (Barczewski and Koschitzky 1996), which was funded directly by the German Federal Ministry for Education and Research (BMBF, at that time BMFT) through its Karlsruhe Project Management Agency in the frame of the PWAB (Projekt Wasser-Abfall-Boden) initiative. In the early years of VEGAS, there was intensive experimental research on the development of thermally enhanced soil-air extraction, and numerical modelling has always been an essential support to the experiments and decisive for understanding the complex non-linear multiphase multi-component flow and transport processes.

zur Grundwasser- und Altlastensanierung, University of Stuttgart.

9.2 Contamination in the Subsurface: Spreading and Remediation

201

Fig. 9.3 This diagram illustrates the significantly different processes when injecting steam into the unsaturated zone (left half) and below the groundwater table (right half). In this figure, the thermal radius of influence would be too small to reach the NAPL spill

has strongly benefited from the availability of well-controlled experimental data on different scales for model validation, e.g. Helmig et al. (1998), Class et al. (2002), Class and Helmig (2002), Falta et al. (1992).

9.2.2.2 Steam Injection Below the Groundwater Table

Fig. 9.3 This diagram illustrates the significantly different processes when injecting steam into the unsaturated zone (left half) and below the groundwater table (right half). In this figure, the thermal radius of influence would be too small to reach the NAPL spill

Below the groundwater table, steam injection can still be a proper means of delivering thermal energy to an NAPL-contaminated zone, but it has strong limitations. The large density difference between steam and liquid water leads to gravity segregation, with the steam striving to override the water and eventually break through the water table if there is no hydraulic barrier on top. This limits the range of the steam zone to the so-called thermal radius of influence (TRI), as shown in Fig. 9.3, where the TRI is too small to reach the contamination. The TRI depends on several factors, which can be well reflected by the dimensionless gravity number

$$
\mathrm{Gr}_{\mathrm{lin}} = \frac{\mu_s\, q_s}{b\, K_s\, \rho_s\, h\, g\, (\rho_w - \rho_s)} = \frac{\text{viscous forces}}{\text{buoyant forces}}. \qquad (9.1)
$$

The dynamic viscosity μ_s of the steam and the densities ρ_s and ρ_w are fluid properties, g denotes the gravitational acceleration, K_s represents the soil's permeability to steam, q_s is the steam-injection rate, and h and b are characteristic lengths of the transient steam zone. Since h appears in the denominator of Eq. (9.1), we can recognise that the gravity number Gr_lin becomes smaller at later times (increasing h). Thus, buoyancy effects gain influence with
increasing steam zones. Van Lookeren (1983) explains how the shape and inclination of the steam zone in the saturated zone depend on the ratio between viscous and buoyant forces as expressed by the gravity number. While Van Lookeren was considering an aquifer confined by an overlying impermeable cap-rock, Ochs et al. (2010) investigated the unconfined spreading of steam and developed a scheme that uses the gravity number for predicting the TRI and designing remediation scenarios accordingly. For cost-effective remediation, it is important to maximise the TRI in order to minimise the costly drilling of injection wells. While the gravity number gives a very cheap rough estimate of the TRI, more complex modelling is required to optimise the details of the remediation scenario. Preheating the soil with hot water in the desired direction to increase the range of the steam zone is discussed by Weishaupt et al. (2016), who conducted full-complexity 3D numerical simulations on generic scenarios. This modelling study showed that an increase of the TRI of more than 10 % can be achieved, which makes preheating an interesting option depending on the costs for thermal energy and well drilling.
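As a small numerical illustration of Eq. (9.1), the following standalone Python sketch evaluates the gravity number for a growing steam zone. All parameter values are hypothetical, order-of-magnitude choices made up for this illustration (they are not taken from Ochs et al. 2010 or from a specific site), and the injection rate is interpreted here as a mass rate; the point is merely that Gr_lin decreases as the steam-zone height h grows, i.e. buoyancy increasingly dominates.

```python
# Illustrative evaluation of the gravity number Gr_lin of Eq. (9.1).
# All numbers below are assumed, order-of-magnitude values for demonstration only.

MU_S = 1.2e-5      # dynamic viscosity of steam (Pa s), approx. at ~100 degC
RHO_S = 0.6        # steam density (kg/m^3), approx. at ~1 bar
RHO_W = 958.0      # liquid water density (kg/m^3) at ~100 degC
G = 9.81           # gravitational acceleration (m/s^2)
K_S = 1.0e-10      # permeability to steam (m^2), hypothetical coarse sand
Q_S = 5.0e-3       # steam injection rate, taken as a mass rate (kg/s), hypothetical
B = 1.0            # characteristic width of the steam zone (m), hypothetical


def gravity_number(h):
    """Gr_lin = (mu_s * q_s) / (b * K_s * rho_s * h * g * (rho_w - rho_s))."""
    return (MU_S * Q_S) / (B * K_S * RHO_S * h * G * (RHO_W - RHO_S))


if __name__ == "__main__":
    # as the steam zone grows in height h, Gr_lin drops and buoyancy
    # (steam override) gains influence, limiting the TRI
    for h in (0.5, 1.0, 2.0, 5.0):
        print(f"h = {h:4.1f} m  ->  Gr_lin = {gravity_number(h):.2e}")
```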

9.2.2.3 Further In-situ Technologies

Many in-situ technologies have been developed or are under development. While in steam or steam/air injection it is only (hot) water or air that is introduced into the subsurface, other technologies often use chemical agents to achieve NAPL re-mobilisation and recovery. For example, the injection of cosolvents, surfactants or alcohol cocktails renders non-polar organic contaminants miscible with the polar water. Thus, by reducing the interfacial tensions, residual contaminants can also be mobilised, and the liquid mixtures can be extracted via "classical" pump-and-treat wells. After the extraction, the liquids need extensive processing and treatment (Sellers 1999). Another technology, preferably applied in the saturated zone, is the injection of nano-scale zero-valent iron (nZVI) (Bardos et al. 2015). These small particles have a large reactive surface where redox processes can take place. nZVI is a strong reductant, for example for chlorinated solvents, and its ecotoxicity is considered low. The challenge for this technology is the transport of the nano-scale particles into the target zone. Model concepts for this niche technology have been developed, e.g. within the NanoRem project (NanoRem 2019) and in subsequent research. In particular, these models need to predict the spreading behaviour of the nano-particles, which tend to be retarded by different adsorption mechanisms. Among the techniques aiming at oxidising contaminants are ISCO (in-situ chemical oxidation) and HPO (hydrous pyrolysis oxidation). They are applied, for example, to treat chlorinated solvents like trichloroethene or tetrachloroethene, or less volatile components such as polycyclic aromatic hydrocarbons (PAHs), often in combination with (or after) a thermally enhanced soil vapour extraction, which can remove the less volatile components of a spill. Experiences with these techniques (e.g. Bordenave 2015; United States Department of Energy 2000) show
that a total mineralisation of the contaminants is not always achieved but, in many cases, the reactions produce less toxic metabolites.

9.2.3 Contaminated Lands Between Stakeholders, Policy and Science

Contaminated lands pose environmental as well as socio-economic problems and, therefore, involve a diversity of stakeholders such as land owners, government experts, consultants, scientists and technology developers. The general public and politics became increasingly aware of contaminated lands in the 1980s, and the perception of the associated risks and the approach to managing them have changed over the years. The problem of contaminated land is widespread in all industrialised countries. An estimate by Ditterich (1996) mentions around 200,000 NAPL-contaminated sites in Germany alone, and a report of the European Environment Agency (2014) reckons with 2.5 million potentially contaminated sites in the EU. It is clear from such numbers that cleaning up all these sites is not feasible, neither technically nor economically. Thus, the management of contaminated sites requires clear strategies that differentiate remediation efforts depending on the intended land use (industrial versus residential) and implement them in accordance with the immediate threat to subjects of protection such as groundwater resources. The management of contaminated sites involves different steps and actions: sites need to be identified and mapped; they require a preliminary survey, if necessary followed by an intensive site investigation; finally, they require a decision on remediation measures. Today, there are a number of national and international efforts to coordinate actions in the field of contaminated lands between stakeholders, regulators, technology developers and others. As early as 1980, the United States Congress established Superfund, a programme of the US EPA (Environmental Protection Agency). The decision for Superfund was a reaction to the detection of individual hazardous sites like Love Canal and the Valley of the Drums in the late 1970s. Since then, Superfund has provided the EPA with funds and the authorisation to clean up contaminated sites (Superfund 2019). On the European level, networks were established in the 1990s, supported through concerted actions within the Environment and Climate Research and Development Programme of the European Commission, such as NICOLE (Network for Industrially Contaminated Land in Europe) (NICOLE 2019a), CARACAS (Concerted Action on Risk Assessment for Contaminated Sites in the European Union) (Ferguson et al. 1998; Ferguson and Kasamas 1998; Ferguson 1999) and CLARINET (Contaminated Land Rehabilitation Network for Environmental Technologies) (CLARINET 2019). The NICOLE network dates its history back to 1996; it provides a forum that promotes co-operation between industry, academia and service providers. Its overall objective is to enable European industry to identify, assess and manage contaminated land efficiently and cost-effectively by supporting the dissemination and
exchange of knowledge and ideas, stimulating interdisciplinary projects and networking (NICOLE 2019a). Recent offshoots of NICOLE have been founded for Africa in 2014 (NICOLA 2019) and for Brazil in 2015 (NICOLE 2019b). The risk assessment network CARACAS (1996–1998) addressed topics like human toxicology, ecological risk assessment, the fate and transport of contaminants, and site screening; it was coordinated by the German Environment Agency. The CLARINET risk management network followed immediately, from 1998 to 2001. It focused primarily on the impact of contaminated land on water resources, brownfield redevelopment, health, remediation technologies and decision support tools, and was coordinated by the Austrian Environment Agency. CARACAS and CLARINET involved partners from around 20 European countries, among them environment ministries and agencies, national research organisations and industry. On the national level in Germany, there are organisations and associations like the ITVA (Ingenieurtechnischer Verband für Altlastenmanagement) (ITVA 2019), established in 1990 to foster expert dialogue, which has regional sub-groups across the country. An independent regional group (since 1997) is the Baden-Württemberg Altlastenforum (Altlastenforum Baden-Württemberg 2019); it defines itself as a platform for information exchange and communication between politics, industry and administration in the fields of contaminated land, brownfield redevelopment as well as soil and groundwater protection. The Baden-Württemberg Altlastenforum has strong ties to the VEGAS groundwater research facility, which was founded in 1995. All these joint efforts of stakeholders, politicians, technology developers and others involved aim to improve the approaches to treating the problem of contaminated land, which differ from country to country. One should keep in mind that the funds for dealing with contaminated lands are not the same in each country, which, in practice, leads to many compromises in public health and water quality. Furthermore, binding guidelines are needed from the regulators' side to give stakeholders economic confidence in the measures required of them.

9.2.4 Selected Model-Based Illustration: Remediation of Contaminated Soils

In this case study on contaminated soils, we would also like to illustrate a small example problem. It is based on a laboratory experiment in the VEGAS groundwater research facility (Barczewski and Koschitzky 1996), carried out in the framework of a research project on thermally enhanced soil-air extraction. The VEGAS facility allows experiments at scales ranging from small laboratory setups to near-field-scale 3D containers. In this case, we have a quasi-2D flume as shown in Fig. 9.4. The flume was filled with coarse sand with a lens of fine sand embedded into it, as seen in the schematic figure. An NAPL contamination in residual saturation was placed partly into the fine-sand lens and partly into the coarse sand above the lens. This resembles a realistic scenario where NAPL has leaked into the subsurface through the unsaturated zone and has finally accumulated on top of a fine-sand lens with low permeability. Thus, this scenario can be considered a typical trapping scenario for an NAPL in the unsaturated zone.

Fig. 9.4 Setup of the 2D VEGAS experiment: a flume of 110 cm width and 74 cm height, filled with coarse sand and an embedded fine-sand lens; the NAPL contamination in residual saturation is located partly in and partly above the lens, at ambient conditions of p = 101300 Pa and T = 20 °C

The flume setup shown in Fig. 9.4 was used to show the characteristic differences between steam injection, steam injection with co-injection of air, and pure air injection. In the following, we present only some exemplary results of pure steam injection, discussing the characteristic processes, advantages and, in particular, disadvantages, and thereby motivating the use of the co-injection of air under certain circumstances. The code and parameter files can be found in the subfolder lecture/mm/remediationscenarios of the module dumux-lecture.

Table 9.1 Boundary conditions
Left (Neumann):            q_water = 0.3435 mol/(s m), q_air = 10^-7 mol/(s m), q_mesitylene = 10^-7 mol/(s m), q_h = 15150.0 J/(s m)
Top and bottom (Neumann):  no-flow, i.e. all mass and heat fluxes equal to 0
Right (Dirichlet):         S_w = 0.12, p_g = 10^5 Pa, x_g^c = 10^-6, T = 20 °C

Tables 9.1 and 9.2 provide the data for the boundary and initial conditions assigned to the steam-injection scenario. The flow of steam into the flume is controlled by a Neumann boundary condition on the left side of the flume, while the flume is open to the environment at its right boundary, where Dirichlet boundary conditions equal to the initial conditions are applied. Essentially, the Neumann fluxes at the left boundary are 0.3435 mol/(s m) for the water component, which is interpreted, in combination with the heat flux of 15150 J/(s m), as an influx of steam.
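A quick back-of-the-envelope check, sketched below in standalone Python, shows why these Neumann fluxes can be read as a steam influx: dividing the heat flux by the corresponding water mass flux yields a specific enthalpy of roughly 2.45 MJ/kg, i.e. of the order of the enthalpy of saturated steam at atmospheric pressure. The reference enthalpy values used for comparison are rounded textbook numbers, and the simulator's internal enthalpy reference state may differ.

```python
# Plausibility check of the left-boundary Neumann fluxes in Table 9.1:
# does the prescribed heat flux correspond to injecting the water component
# as steam? (Only an order-of-magnitude check; the enthalpy reference state
# of the simulator may differ from the rounded values used here.)

M_WATER = 0.018015          # kg/mol
Q_WATER = 0.3435            # mol/(s m), water-component flux at the left boundary
Q_HEAT = 15150.0            # J/(s m), heat flux at the left boundary

H_FG_100C = 2.257e6         # J/kg, latent heat of vaporisation at ~100 degC
H_SAT_STEAM = 2.676e6       # J/kg, enthalpy of saturated steam at ~1 atm (ref. 0 degC liquid)

mass_flux = Q_WATER * M_WATER            # kg/(s m)
specific_enthalpy = Q_HEAT / mass_flux   # J per kg of injected water

print(f"injected mass flux     : {mass_flux:.3e} kg/(s m)")
print(f"specific enthalpy      : {specific_enthalpy / 1e6:.2f} MJ/kg")
print(f"latent heat (100 degC) : {H_FG_100C / 1e6:.2f} MJ/kg")
print(f"sat. steam enthalpy    : {H_SAT_STEAM / 1e6:.2f} MJ/kg")
# ~2.45 MJ/kg lies between the latent heat and the saturated-steam enthalpy,
# i.e. the boundary fluxes indeed represent an influx of steam rather than
# liquid water.
```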

The small fluxes for air and mesitylene are prescribed only for the sake of numerical robustness, to avoid values of zero in the respective balance equations of these components. The scenario is situated in the unsaturated zone of the soil, i.e. above the groundwater table, which is reflected in the initial conditions, where a residual water saturation is prescribed. Pressure and temperature are equal to ambient conditions, and contaminants are only present in the initially contaminated area as residual NAPL saturation.

Table 9.2 Initial conditions
Gas phase pressure                                                             p_g = 1.0e5 Pa
Temperature                                                                    T = 20 °C
Water saturation                                                               S_w = 0.12
NAPL saturation in the contaminated area                                       S_n = 0.07
Gas molar fraction of contaminant (in the initially non-contaminated region)   x_g^c = 1e-6

The two materials in the flume, the fine sand and the coarse sand, have different properties with respect to permeability and capillary pressure. Typically, less permeable materials, like the fine sand in this case, show higher capillary pressures. For this scenario, a van Genuchten model (Van Genuchten 1980) is used to parameterise relative permeabilities and capillary pressures. The van Genuchten model uses the parameter VGα as a scaling parameter; the capillary pressure values are inversely proportional to VGα. Therefore, the fine material has a smaller VGα than the coarse material. Table 9.3 lists the parameters relevant for the hydraulic properties in an overview.

Table 9.3 Model parameters related to the hydraulic properties of the fine and coarse sand in the flume filling
Parameter            Symbol   Unit    Fine sand    Coarse sand
Permeability         k        m^2     6.28e-12     9.14e-10
Porosity             φ        –       0.42         0.42
Van Genuchten alpha  VGα      1/Pa    5e-4         1.5e-3
Van Genuchten n      VGn      –       4            4
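To illustrate the effect of the VGα scaling parameter listed in Table 9.3, the following short Python sketch evaluates the standard van Genuchten (1980) capillary pressure-saturation relation for the fine and the coarse sand. It is a standalone plausibility check, not an excerpt from the DuMux parameter files; using the effective saturation without residual saturations is a simplifying assumption made here for brevity.

```python
# Van Genuchten capillary pressure p_c(S_e) for the two sands of Table 9.3.
# Simplification (our assumption): the effective saturation S_e is taken equal
# to the water saturation, i.e. residual saturations are neglected.

def van_genuchten_pc(s_e, alpha, n):
    """p_c = (1/alpha) * (S_e**(-1/m) - 1)**(1/n) with m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return (s_e ** (-1.0 / m) - 1.0) ** (1.0 / n) / alpha


SANDS = {
    "fine sand":   {"alpha": 5.0e-4, "n": 4.0},   # VGalpha in 1/Pa, VGn from Table 9.3
    "coarse sand": {"alpha": 1.5e-3, "n": 4.0},
}

if __name__ == "__main__":
    for s_e in (0.2, 0.5, 0.9):
        for name, params in SANDS.items():
            pc = van_genuchten_pc(s_e, params["alpha"], params["n"])
            print(f"S_e = {s_e:3.1f}  {name:11s}  p_c = {pc:8.1f} Pa")
    # the fine sand (smaller VGalpha) yields capillary pressures three times
    # higher than the coarse sand at every saturation, as stated in the text
```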


Fig. 9.5 Spatial distribution of NAPL saturation at time t = 0 s (left) and after t = 4320 s (right). The area marked with the blue square-shaped dots is the lens of lower permeability

Fig. 9.6 Spatial distribution of temperature (left) and contaminant mole fraction in the gas phase (right) after t = 4320 s

Some results of the steam scenario are shown in Figs. 9.5 and 9.6. The left plot of Fig. 9.5 shows the initial conditions of the NAPL saturation. All other plots are snapshots of different variables after 4320 s of steam injection. Figure 9.5 (right) provides, in direct comparison with the left part of this figure, the NAPL saturation after 4320 s. It is clearly visible that the initial saturation of the NAPL is already strongly influenced by the propagating steam front. Essentially, one can observe that the NAPL, which was initially placed in the coarse sand above the lens, is now shifted into the lens and accumulates there to an increasing saturation. The propagation of the front can be seen in Fig. 9.6 (left) in the form of the temperature front. The injected steam has a temperature of around 100 °C, corresponding to 373 K in the legend of this plot. The front is very sharp. When steam comes in contact with the cold sand, it condenses and transfers its latent heat of evaporation to the sand, thus heating the sand efficiently to steam temperature. Thereupon, the steam can penetrate further. It is also very clear that the steam flows around the fine-sand lens with low permeability. However, the lens is subsequently also heated via heat conduction. Thus, the heating of such a lens occurs with some delay in time, depending on its size. Finally, in Fig. 9.6 (right) we can see a plot of the mole fraction of the contaminant in the gas phase. While Fig. 9.5 (right) shows the progress of remediating the NAPL plume as a liquid phase, Fig. 9.6 (right) provides further insight into the remaining NAPL concentrations in the gas phase. The NAPL evaporates when the steam front arrives; the vapour is then transported with the steam to the front and recondenses there. This process leads to the accumulation of (condensed) liquid NAPL at the steam front, resulting in an increased relative permeability; due to gravity, the NAPL is then shifted vertically downward in the unsaturated zone. Note that the mole fraction of mesitylene in the gas phase is in equilibrium with its liquid phase as a function of temperature in those regions where the liquid
is still present. It is ultimately the evaporated contaminants in the gas phase that are extracted in practical applications by soil-air extraction. The shifting of the NAPL phase from its initial position in the coarse sand into the fine-sand lens is an unwanted effect since, from there, it is actually more difficult to recover; yet, it is a characteristic feature of the steam-injection scenario. At the front, both steam and the evaporated contaminants condense and soon reach saturations above residual, thus with increased mobility. To avoid this, it would be necessary to transport them beyond the sharp steam front. In fact, this can be achieved to some extent when a non-condensable gas is added; therefore, the co-injection of air is an interesting option. Although the energy transported by hot air is much less than that of steam at the same temperature, the air still has the capacity to carry a certain amount of contaminant vapour beyond the steam front, which reduces the downward shift of the NAPL. In engineering practice, this requires a continuous adaptation of the ratio of injected air to steam. In the early phase of the remediation scenario, the share of co-injected air should be rather high in order to transport the easily accessible parts of the contamination out of the highly permeable regions. Later on, the steam share should be increased in order to also reach the parts with low permeability via convective/conductive heating, as explained above.
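The statement above, that the mole fraction of mesitylene in the gas phase is in local equilibrium with the liquid NAPL and increases with temperature, can be made concrete with a short standalone Python sketch. It assumes ideal gas and Raoult behaviour for a pure NAPL phase, x_g^c = p_sat(T)/p_g, and uses a simple Clausius-Clapeyron extrapolation with assumed, illustrative property values for a mesitylene-like NAPL (the normal boiling point and enthalpy of vaporisation are rounded, literature-order values, not taken from the simulator's fluid system).

```python
import math

# Equilibrium gas-phase mole fraction of a pure NAPL in contact with gas:
#   x_g^c = p_sat(T) / p_g          (ideal gas, Raoult's law for a pure liquid)
# p_sat(T) from a Clausius-Clapeyron extrapolation around the normal boiling
# point; the property values below are rounded, illustrative assumptions for a
# mesitylene-like NAPL.

R = 8.314            # J/(mol K)
T_BOIL = 438.0       # K, assumed normal boiling point (~165 degC)
DH_VAP = 39.0e3      # J/mol, assumed enthalpy of vaporisation near the boiling point
P_ATM = 1.013e5      # Pa


def p_sat(temperature_k):
    """Clausius-Clapeyron: p_sat = p_atm * exp(-dHvap/R * (1/T - 1/T_boil))."""
    return P_ATM * math.exp(-DH_VAP / R * (1.0 / temperature_k - 1.0 / T_BOIL))


def x_gas_contaminant(temperature_k, p_gas=P_ATM):
    return p_sat(temperature_k) / p_gas


if __name__ == "__main__":
    # the hotter the gas that sweeps the contaminated zone, the more
    # contaminant vapour it can carry towards the extraction well
    for t_c in (20.0, 60.0, 100.0):
        x = x_gas_contaminant(t_c + 273.15)
        print(f"T = {t_c:5.1f} degC  ->  x_g^c (equilibrium) = {x:.3f}")
```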

References Ake J, Mahrer K, O’Connell D, Block L (2005) Deep-injection and closely monitored induced seismicity at Paradox Valley, Colorado. Bull Seismol Soc Am 95:664–683 Altlastenforum Baden-Württemberg e.V. (2019) Flächenrecycling, Boden- und Grundwasserschutz. http://www.altlastenforum-bw.de. Accessed 23 Apr 2019 Barczewski B, Koschitzky H-P (1996) The VEGAS research facility. Technical equipment and research projects. In: Kobus H, Barczewski B, Koschitzky H-P (eds) Groundwater and subsurface remediation. Springer, Berlin, pp 129–157 Bardos P, Bone B, Cernik M, Elliot D, Jones S, Merly C (2015) Nanoremediation and international environmental restoration markets. Remediat J 25:83–94 Beck M (2018) Conceptual approaches for the analysis of coupled hydraulic and geomechanical processes. PhD thesis, University of Stuttgart Bordenave A (2015) Traitement in situ des HAPs par co-injection air-vapeur: mécanismes physicochimiques et optimisation énergétique. PhD thesis, Université Michel de Montaigne - Bordeaux III, Sciences de la Terre. https://tel.archives-ouvertes.fr/tel-01340836 Bozkaya B, Li R, Labeodan T, Kramer R, Zeiler W (2017) Development and evaluation of a building integrated aquifer thermal storage model. Appl Therm Eng 126:620–629 CLARINET (2019) Contaminated land rehabilitation network for environmental technologies. http://www.umweltbundesamt.at/en/clarinet. Accessed 23 Apr 2019 Class H, Helmig R (2002) Numerical simulation of nonisothermal multiphase multicomponent processes in porous media - 2. Applications for the injection of steam and air. Adv Water Resour 25:551–564 Class H, Helmig R, Bastian P (2002) Numerical simulation of nonisothermal multiphase multicomponent processes in porous media - 1. An efficient solution technique. Adv Water Resour 25:533–550 Class H, Helmig R, Neuweiler I (2007) Sequential coupling of models for contaminant spreading in the vadose zone. Vadose Zone J 7:721–731


Deichmann N, Giardini D (2009) Earthquakes induced by the stimulation of an enhanced geothermal system below Basel (Switzerland). Seismol Res Lett 80(5):784–798 Ditterich E (1996) Wirbel um den Bodenschutz. Umweltmagazin Nr. 4. Vogel Verlag, Würzburg Ellsworth WL (2013) Injection-induced earthquakes. Science 341 European Environment Agency (2014) Progress in management of contaminated sites. Technical report, European Environment Agency, 2014. http://www.eea.europa.eu/data-and-maps/ indicators/progress-in-management-of-contaminated-sites-3/assessment. Accessed 20 Apr 2019 Falta R, Pruess K, Javandel I, Witherspoon P (1992) Numerical modeling of steam injection for the removal of nonaqueous phase liquids from the subsurface. 1. Numerical formulation. Water Resour Res 28:433–449 Ferguson C (1999) Assessing risks from contaminated sites: policy and practice in 16 European countries. Land Contam Reclam 7:33–54 Ferguson C, Kasamas H (eds) (1998) Risk assessment for contaminated sites in Europe; volume 2, policy frameworks. LQM Press, Nottingham Ferguson C, Darmendrail D, Freier K, Jensen B, Jensen J, Kasamas H, Urzelai A, Vegter J (eds) (1998) Risk assessment for contaminated sites in Europe; volume 1, scientific basis. LQM Press, Nottingham Frohlich C, Ellsworth W, Brown W, Brunt M, Luetgert J, MacDonald T, Walter S (2014) The 17 May 2012 m 4.8 earthquake near Timpson, East Texas: an event possibly triggered by fluid injection. J Geophys Res: Solid Earth 119:581–593 Genter A, Guillou-Frottier L, Feybesse J-L, Nicol N, Dezayes C, Schwartz S (2003) Typology of potential hot fractured rock resources in Europe. Geothermics 32:701–710 Giardini D (2009) Geothermal quake risks must be faced. Nature 462:848–849 Helmig R, Class H, Färber A, Emmert M (1998) Heat transport in the unsaturated zone - comparison of experimental results and numerical simulations. J Hydraul Res 36:933–962 Holland A (2013) Earthquakes triggered by hydraulic fracturing in South-Central Oklahoma. Bull Seismol Soc Am 103:1784–1792 Horton S (2012) Disposal of hydrofracking waste fluid by injection into subsurface aquifers triggers earthquake swarm in Central Arkansa with potential for damaging earthquake. Seismol Res Lett 83(2):250–260 ITVA (2019) Ingenieurtechnischer Verband für Altlastenmanagement und Flächenrecycling e.v. (itva). http://www.itv-altlasten.de/der-itva.html. Accessed 23 April 2019 Keranen K, Savage H, Abers G, Cochran E (2013) Potentially induced earthquakes in Oklahoma, USA: links between wastewater injection and the 2011 mw 5.7 earthquake sequence. Geology 41:699–702 Kim J, Lee Y, Yoon W, Jeon J, Koo M-H, Keehm Y (2010) Numerical modeling of aquifer thermal energy storage system. Energy 35:4955–4965 Lee KS (2010) A review on concepts, applications, and models of aquifer thermal energy storage systems. Energies 3:1320–1334 NanoRem (2019) NanoRem - nanotechnology for contaminated land remediation. http://www. nanorem.eu. Accessed 20 Apr 2019 NICOLA (2019) NICOLA - network for industrially contaminated land in Africa. http://www. nicola-org.com. Accessed 23 Apr 2019 NICOLE (2019a) NICOLE - network for industrially contaminated land in Europe. http://www. nicole.org. Accessed 23 Apr 2019 NICOLE (2019b) NICOLE BRASIL - Latin America network for soil and water management. http://www.ekobrasil.org/nicole.html. Accessed 23 Apr 2019 Nordbotten JM (2017) Analytical solutions for aquifer thermal energy storage. 
Water Resour Res 53:1354–1368 Ochs S, Class H, Färber A, Helmig R (2010) Methods for predicting the spreading of steam below the water table during subsurface remediation. Water Resour Res 46:W05520


Portier S, Vuataz F-D, Nami P, Sanjuan B, Gerard A (2009) Chemical stimulation techniques for geothermal wells: experiments on the three-well EGS system at Soultz-sous-Forêts, France. Geothermics 38:349–359 Rutqvist J, Rinaldi A, Cappa F, Moridis G (2013) Modeling of fault reactivation and induced seismicity during hydraulic fracturing of shale-gas reservoirs. J Pet Sci Eng 107:31–44 Sellers K (1999) Fundamentals of hazardous waste site remediation. CRC Press, Boca Raton Simone SD, Carrera J, Vilarrasa V (2017) Superposition approach to understand triggering mechanisms of post-injection induced seismicity. Geothermics 70:85–97 Skoumal R, Brudzinski M, Currie B (2015) Earthquakes induced by hydraulic fracturing in Poland Township, Ohio. Bull Seism Soc Am 105: 189–197. https://doi.org/10.1785/0120140168 Superfund (2019) US Environmental Protection Agency. https://www.epa.gov/superfund. Accessed 23 Apr 2019 United States Department of Energy (2000) Hydrous pyrolysis oxidation/dynamic underground stripping, 2000. Innovative Technology Summary Report DOE/EM-0504. https://frtr.gov/ costperformance/pdf/itsr1519.pdf. Accessed 20 Apr 2019 Van Genuchten R (1980) A closed-form equation for predicting the hydraulic conductivity of unsaturated soils. Soil Sci Soc Am J 44:892–898 Van Lookeren J (1983) Calculation methods for linear and radial steam flow in oil reservoirs. Soc Pet Eng J 23:427–439 Weishaupt K, Bordenave A, Atteia O, Class H (2016) Numerical investigation on the benefits of preheating for an increased thermal radius of influence during steam injection in saturated soil. Transp Porous Media 114:601–621 Willemsen A, Groeneveld G (1987) Environmental impacts of aquifer thermal energy storage (ATES): modelling of the transport of energy and contaminants from the store. In: Proceedings of the international conference on groundwater contaminations: use of models in decision-making, Amsterdam, The Netherlands, 26–29 October 1987, Organized by the International Ground Water Modeling Center (IGWMC), Indianapolis - Delft Yang S-Y, Yeh H-D (2009) Modeling heat extraction from hot dry rock in a multi-well system. Appl Therm Eng 29:1676–1681 Zhu C, Zhang G, Lu P, Meng L, Ji X (2015) Benchmark modeling of the Sleipner CO2 plume: calibration to seismic data for the uppermost layer and model sensitivity analysis. Int J Greenh Gas Control 43:233–246

Chapter 10

Conclusions and Outlook

Conclusions: subsurface modelling confronting complexity, uncertainty and ambiguity

In this book, we have focused on the theory and practice of subsurface environmental modelling at the science-policy interface from an interdisciplinary perspective. The Earth's subsurface has always played a crucial role through its intense utilisation by mankind, and this will continue in the future. With fossil resource extraction dominating the past, we nowadays see the development of further energy-related subsurface applications such as storage and disposal (CO2, other gases, nuclear waste), contamination remediation related to industrial activity, and geothermal energy exploitation. Against the background of the great variety of existing and emerging, and partly competing, subsurface applications, a responsible and solid handling of these activities seems essential. The main line of argumentation which has unfolded in this book takes this as the guiding principle and has undertaken to elaborate on both the knowledge production and the knowledge communication of subsurface environmental modelling between science, policy and society.

In the first part of the book, we illustrate conceptual issues of subsurface modelling at the science-policy interface. Producing scientific knowledge for subsurface modelling builds on conceptual models for environmental engineering, corresponding mathematical and numerical solutions, as well as software concepts and implementation. Theorems, definitions and scientific laws, in short the physics of subsurface processes and mechanisms, are laid out at the beginning of the book. The governing equations of single- and multiphase flow through porous media, flow-induced geomechanics and subsurface flow and transport mark the starting point for a better understanding of underground dynamics. The governing equations cover the fields of multiphase flow, including compositional effects in the flow and transport of components as well as corresponding thermal processes.
In a next step towards numerical modelling, we exemplified mathematical and numerical solution methods for solving partial differential equations. Since these equations are typically solved numerically, discretization in time and space is necessary, and corresponding discretization examples are provided. In order to deal with the high degree of nonlinearity, an appropriate linearization scheme, Newton's method, is presented to cope with the large set of nonlinear equations at each time step. In addition, choices of appropriate variables for compositional models are essential for the solution process. Several primary variables are introduced and discussed for non-isothermal water-gas-NAPL systems and for steam injection in the unsaturated and saturated zone. Within the area of software concepts and implementation of numerical solutions, we touched upon both the knowledge production and the communication area of subsurface modelling. With a focus on the open-source principle, software solutions open up in terms of transparency so that users, developers and decision makers are able to follow up and retrace the modelling exercise. As such, open-source modelling seeks legitimacy through transparency. The open-source principle is detailed with the different stages of its infrastructure and an in-depth introduction into the porous-media simulator DuMux. For this simulator, developed by the Department of Hydromechanics and Modelling of Hydrosystems (LH2) at the University of Stuttgart, we outline historical developments, visions and concepts as well as its relevant modules, models and structure. Next, we focused on the science-policy interface, where subsurface modelling is contextualized as scientific advice for policy. Applying differing approaches, we illustrated the main characteristics and current developments of both the science and the policy system and identified knowledge transfer and usage patterns among them. These results are then discussed in relation to subsurface modelling issues, that is, the black-box character, the (un)certainty dilemma and the risk/hazard perception dilemma. The conceptual analysis of the science-policy interface ends by elaborating on knowledge production and communication modes of modelling across borders.

In the second part of the book, we discuss and detail the generic concepts and results of subsurface environmental modelling at the science-policy interface by means of several case-study applications. The carbon capture and storage (CCS) case ranked high on the political agenda in the late 2000s, with intensive research and political stimulus positioning it as a bridging technology for mitigating climate change. Modelling issues tackle several decisive stages of future storage projects such as capacity estimates, site selection, CO2 injection and plume behaviour, storage monitoring and closure. Concerning science-policy issues, it is interesting to see the processing patterns of modelling results by policy makers and stakeholders. In addition, participatory modelling approaches to brine migration in the field of CCS gave insights into how to manage and implement cooperative subsurface modelling initiatives. Selected model-based illustrations put CO2 plume-shape development and convective mixing into the spotlight. Hydraulic fracturing is another interesting case study for subsurface modelling. Being one of the most controversial technologies in today's public debates, it nevertheless has revolutionized oil and gas supply, at least in the US, with considerable impacts on the global market.
Impressed by the success in the US, the EU intended to stimulate a European fracking market but, for the time being, has left its present and
future ambitions shipwrecked. Modelling issues stress the importance of risk assessment, and we provide an example of a scenario-related risk assessment approach. The science-policy discussion treats fracking as a contested technology within a sample of expert reports, revealing an apparent expert dilemma. The selected model-based illustration covers a basic scenario of methane migration in a typical setting after hydraulic fracturing, while modelling the fracking process itself and the generation of fractures is not yet state of the art in numerical modelling. Nuclear waste is a further case of interest and is likely to see strongly increasing public attention in the decades to come. Being a longstanding area of public conflict and fierce opposition, nuclear energy and its waste have been on the public agenda since the 1960s, with the waste disposal problem still unsolved. However, fresh waste disposal efforts are underway, with deep geological repositories being planned in several countries. Modelling issues tackle both energy scenario modelling for transforming the energy system and risk assessment for deep geological disposal. Science-policy issues identify subsurface modelling as a key factor for site selection and operation processes. Finally, the selected model-based illustration presents the heat-pipe effect. With less detail, further subsurface environmental modelling cases related to energy (e.g. geothermal energy, aquifer thermal energy storage) and contamination issues close the book.

Drawing conclusions, we intend to give a broad overview of several substantial facets of subsurface modelling within and beyond the scientific community. Subsurface modelling is closely linked to subsurface engineering applications with a focus on benefiting from the underground in terms of exploitation (e.g. fossil or renewable resources, CO2 or energy storage) and its impact (e.g. waste disposal, contamination). Thus, subsurface modelling serves practical issues regarding technical feasibility and risk assessment statements. From a scientific perspective, modelling is also about epistemic robustness (Lentsch and Weingart 2011). Epistemic robustness relates to the quality of scientific knowledge that claims universal validity for its statements. Universal validity is established through critical examination of scientific statements of fact within the scientific community. There are several general quality criteria which are used to assess and evaluate scientific statements and epistemic methods, namely objectivity, reliability and validity. The first aspect is objectivity, which refers to the independence of the method and its results from the individual scientist who applies the method. Secondly, reliability refers to the extent to which a scientific method yields consistent and precise results in a given domain (Petersen 2006). That is not to say that the results are close to the 'true' value, but that they are consistently precise, i.e. the individual results (e.g. calculations, measurements) are close to and agree with one another, independent of their relation to the 'true' value. Thirdly, validity refers to the degree to which a scientific tool or method measures or indicates what it claims to indicate. Validity therefore relates to accuracy, where accuracy refers to the closeness of the result to the 'true' value of the sought physical quantity. Thus, accuracy
is different from precision. Accuracy thus implies precision, but the converse is not necessarily true (Hon 1989). Within the field of subsurface modelling, we illustrated the high degree of objectivity by detailing the governing equations and their transfer into mathematical and numerical solutions. Although considerable solid knowledge is already available for modelling flow and related processes in the subsurface, we also pointed to remaining uncertainties and to research which is currently ongoing. A crucial issue in the field of software implementation and its objectivity concerns the principle of open source. The independence of subsurface modelling solutions from the individual scientist is closely linked to open and transparent review possibilities that are achievable only with open-source modelling. Therefore, open-source modelling is an essential quality aspect not only for external credibility and understanding, but also for scientific development itself. The reliability of subsurface modelling is based on a great variety of quality-control approaches, as mentioned and illustrated in several parts of the book. Among them, approaches such as comparative benchmark modelling, sensitivity analyses, parametrization and model coupling are widespread and systematically applied in subsurface modelling. These techniques aim to improve the reliability of modelling results by extending their consistency and plausibility with inter- and intra-modelling quality-control measures. The validity of results from subsurface modelling refers to how well they match the true values of real-world underground phenomena. That is to say, modelling should resemble the target system's processes and predict its outcomes as 'truly' as possible. Common approaches in subsurface modelling are to match modelling results with experimental data and to compare, ex post, the degree of (dis)similarity.

From a beyond-science perspective, that is, at the science-policy interface, modelling has become a key issue concerning the design and implementation of subsurface technologies and engineering applications. Evaluating the pros and cons of an emerging technology within a political and societal debate should be centred on solid, evidence-based scientific grounds. Modelling has its role to play by contributing to prospective risk-assessment knowledge. The compatibility of scientific simulations with the policy-making system relies on key characteristics of models meeting policy's reasoning, forward-thinking and decision-oriented needs, namely the capability to reduce complexity, compare options, analyse intervention effects, deliver results in numbers and pictures, and carry out trials without error. From a systemic perspective, simulations serve as a knowledge instrument, contributing to secure and uncertain knowledge and, to a lesser degree, to recognized non-knowledge. As a communication instrument, simulations enable, amplify and provide feedback for communication. Taking an impact perspective, the use of simulations differs in terms of instrumental, conceptual, strategic and procedural use patterns. Finally, the evaluation and assessment of simulations by decision makers and stakeholders follows simulation-inherent and simulation-contextual criteria. However, by crossing scientific borders, modelling as policy advice enters the field of responsiveness and political legitimacy.
Labelled as the political robustness of knowledge (Lentsch and Weingart 2011), it reflects the acceptability of knowledge, achieved by including the knowledge and preferences of affected
stakeholders, decision makers and citizens, and by embedding decision-making within accepted institutional and organizational routines and mechanisms. However, what has become clear is that decision makers use scientific knowledge differently. Simulations seem much more vulnerable to misunderstandings and misconceptions than other scientific approaches due to their inherent black-box character, (un)certainty dilemmas and further context-related variables such as trust in the source of simulations, the reception of the discourse and the degree of external participation.

Outlook: the way forward to improve models at the science-policy interface

What seems essential for the future of subsurface modelling is to cope with both the challenges identified and the lessons learnt in order to meet scientific and non-scientific requirements and expectations. Based on the main findings of the book, we have defined several theses to summarize the main starting points for improving models at the science-policy interface.

Thesis 1: The subsurface will gain attention in the policy arena, as will subsurface modelling.

The subsurface plays an increasingly important role for the transformation of the energy system, in Germany in particular for the Energiewende. In an energy system focused on reducing CO2 emissions and expanding the share of renewable energies, the subsurface hosts a great variety of uses. Among them, use types exist in the fields of primary production (e.g. gaseous and liquid hydrocarbons and other resources, groundwater use and geothermal energy), storage (e.g. hydrogen and methane, compressed air and heat) and disposal (e.g. brines, waste repositories). The different types of underground demand have led to increasing utilisation of and competition for suitable geological sites. Against this background, the idea of subterranean spatial planning as a legal framework instrument has been proposed and discussed among policy makers (Kahnt et al. 2015; Schulze et al. 2015). Subsurface modelling will play a key role in subterranean spatial planning for exploring use-type-specific geological suitability criteria based on risk-assessment modelling studies. Good-quality modelling is hence a necessary, though not sufficient, precondition.

Thesis 2: In contested policy domains, dissent and conflict dominate, with subsurface modelling in the focus.

Subsurface modelling is an essential basis for decision-making within politics, business and industry. It contributes to assessing generic and site-specific technical feasibility and technology impacts. Most often, modelling is the one and only tool to prospectively assess potential and/or probable future outcomes. However, we also observe that decision makers and the public at large fiercely dispute and debate current and emerging subsurface applications. All the case studies discussed in this book point in the same direction. Carbon capture and storage, hydraulic fracturing and nuclear waste disposal all share the same story: they are subject to highly controversial debates and have so far not been successfully implemented in most instances. The more that subsurface applications are contested, the more subsurface modelling enters the spotlight of controversy, with tactical and strategic uses to make a point
about the adequacy and reliability of assumptions, boundary conditions and related uncertainties. Under these circumstances, the capacity of modelling to contribute to an objectification of the debate, to conflict avoidance and to consensus building is limited. On the contrary: modelling might become the centrepiece of a controversy in which technology proponents and sceptics try to make their arguments. Modellers and the scientific community at large need to be aware of this two-edged role of computational models at the science-policy interface.

Thesis 3: Modelling needs to be transparent, both in terms of the modelling process and the models themselves.

In order to overcome the black-box character of simulations, it is of the utmost importance to create the highest achievable transparency in all stages of the modelling process. This starts with the inclusion of stakeholders from the very beginning, for example in the form of participatory modelling; it continues with a clear documentation of the model's physics, including all limitations and regularisations; furthermore, the resulting source code and the employed data should be published in containerised, executable environments by means of open-science principles; finally, detailed insights must be provided into the interpretation of the simulation results. While tools already exist for the individual stages, such as scientific papers for the model description or internet portals like GitHub for joint open-source code development, it remains a tremendous challenge to integrate all these stages. Several questions arise, especially at the interfaces between the stages. For example:

• How to technically connect a paper with the associated source code and data?
• How to ensure that the source code is indeed an implementation of the discretised mathematical model?
• How to guarantee that the results will be reproducible over a certain period of time?

Thesis 4: The most complex model is not always the most appropriate, but complex models are useful for uncertainty quantification.

This is not causing a sensation in the subsurface environmental modelling community, but it deserves some highlighting in the context of the science-policy interface. Whether a modelling approach for a given problem is appropriate or not depends on:

• The expected model output. For example, both estimating CO2 dissolution in a reservoir and predicting the clean-up time in a thermally enhanced soil remediation project require a compositional multiphase flow model and not merely a multiphase flow model with immiscible phases; in the latter case, even a non-isothermal model is indispensable.
• The spatial and temporal scale of the problem in consideration of the available computational resources. Even with constantly improving computational capabilities, this will remain a core challenge for the foreseeable future in the application of flow and transport models to real-life problems.
• The availability or the lack of data. Model complexity needs to be justified by the availability of the required data.
• The expertise of the modeller, his/her skills or familiarity with certain approaches, and the availability of a model.

Having all this in mind, a model can be chosen to be "as simple as possible, but not simpler" (Albert Einstein). And, recalling once again the British statistician George Box and his "all models are wrong, but some are useful", we find usefulness in those approaches which allow an interpretation of the available data while still being easy enough to comprehend for all parties involved in the assessment, evaluation and usage of the modelling results. Yet, every model falls short of depicting the truth, since it is only an abstraction of reality and inevitably neglects certain processes. Thus, it makes sense to assess the uncertainty that is introduced by the simplifications in a model. More complex models allow the quantification of "recognised ignorance"; see e.g. Walker et al. (2003), who describe a conceptual basis for uncertainty management in model-based decision support. We recommend raising awareness of these trade-offs in model complexity and alerting persons at the science-policy interface to the limitations and possibilities of modelling subsurface environmental systems.

Thesis 5: Addressing and categorizing the typical sources of uncertainty in subsurface flow and transport modelling will help to communicate the (un-)certainty dilemma of simulations in this field.

Uncertainties in subsurface flow problems arise first and foremost from two aspects, both related to the exploration of the geologic circumstances: (i) Do we know about the major geologic features like fault zones, preferential flow paths, hydraulic barriers, etc.? (ii) Are we able to assess permeability and porosity distributions for the identified geologic units? Following the above-cited concept of uncertainty management by Walker et al. (2003), (i) falls into the category of scenario uncertainty, while (ii) rather addresses statistical uncertainty. These two categories are the unavoidable uncertainties, while the recognized ignorance introduced by the chosen model (model concept, discretization methods, solution strategies) can be well treated and quantified by proper quality management and transparency, as outlined in the previous theses; see also Class et al. (2009), Sim-SEQ (2020), Walter et al. (2012), Nordbotten et al. (2012). We do not claim that the (un-)certainty dilemma can be fully solved, but communicating the categories of uncertainty related to subsurface environmental modelling facilitates the assessment and comprehension of model outputs and can eventually increase confidence in decisions based upon numerical simulation results.

Thesis 6: Model coupling for problems of high complexity should be reviewed carefully.

Highly complex interactions like flow with geomechanics, flow with geochemical reactions of many components, etc. are often modelled by coupling specifically tailored models. If different software packages are used, an iterated solution is usually not practical. Using an example of flow coupled to geomechanics in Sect. 3.3.2, we have shown that non-iterated solutions of models for different physics (or chemistry, etc.) usually compromise the accuracy of the results. In cases where no reference
solution is available, it is particularly necessary to review the accuracy that can be achieved.

Thesis 7: Uncertainty-aware validation metrics are required to assure the quality and reliability of modelling.

Today, the necessity of verification and validation (V&V) for ensuring the credibility of simulation results is acknowledged by all scientific disciplines that deal with computational simulations. While the philosophical position towards V&V is rather discouraging (Popper and Hudson 1963), more pragmatic discipline-specific guidelines and standards have evolved over the last decades (Oberkampf 2010). However, some scientific communities have not formalised the V&V process to such an extent. This particularly holds true for the porous-media community, where model validation is extremely difficult due to the usually very limited knowledge of the material parameters and constitutive relations. Concerning validation, the most important scientific question to be addressed is how to quantitatively compare computational results with experimental/observational data. For the development of such validation metrics, it is particularly challenging to systematically address the uncertainties which are present in both the experimental/observational data and the model-based predictions. A rigorous development and application of uncertainty-aware validation metrics for subsurface applications will be of the utmost importance for assessing the reliability and improving the social acceptance of corresponding computational models.
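As a minimal sketch of what an uncertainty-aware comparison between simulation and observation could look like, the following Python snippet computes a standard-deviation-weighted discrepancy for a set of observation points. It is only one simple possibility among many proposed validation metrics, the numbers are invented for illustration, and it does not account for uncertainty in the model predictions themselves.

```python
import math

# One simple, illustrative uncertainty-aware comparison: a reduced chi-square
# style discrepancy that weights each residual by the observational standard
# deviation. All values below are invented for demonstration purposes.

def weighted_discrepancy(simulated, observed, obs_std):
    """Return the root mean square of ((sim - obs) / std) over all data points.

    Values around 1 indicate agreement within the observational uncertainty;
    much larger values indicate a discrepancy that the data cannot explain.
    """
    terms = [((s - o) / e) ** 2 for s, o, e in zip(simulated, observed, obs_std)]
    return math.sqrt(sum(terms) / len(terms))


if __name__ == "__main__":
    # hypothetical breakthrough temperatures (degC) at three sensors
    observed = [21.5, 54.0, 96.0]
    obs_std = [0.5, 2.0, 3.0]       # assumed measurement standard deviations
    simulated = [22.0, 57.5, 93.0]

    d = weighted_discrepancy(simulated, observed, obs_std)
    print(f"uncertainty-weighted discrepancy = {d:.2f}")
```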

References Class H, Ebigbo A, Helmig R, Dahle HK, Nordbotten JM, Celia MA, Audigane P, Darcis M, EnnisKing J, Fan Y, Flemisch B, Gasda SE, Jin M, Krug S, Labregere D, Naderi Beni A, Pawar RJ, Sbai A, Thomas SG, Trenty L, Wei L (2009) A benchmark study on problems related to CO2 storage in geologic formations. Comput Geosci 13:409–434 Hon G (1989) Towards a typology of experimental errors: an epistemological view. Stud Hist Philos Sci Part A 20:469–504 Kahnt R, Gabriel A, Seelig C, Freund A, Homilius A (2015) Unterirdische Raumplanung. Vorschläge des Umweltschutzes zur Verbesserung der über- und untertägigen Informationsgrundlagen, zur Ausgestaltung des Planungsinstrumentariums und zur nachhaltigen Lösung von Nutzungskonflikten, Teil 1. Umweltbundesamt, Dessau-Roßlau Lentsch J, Weingart P (2011) The politics of scientific advice: institutional design for quality assurance. Cambridge University Press, Cambridge Nordbotten JM, Flemisch B, Gasda SE, Nilsen HM, Fan Y, Pickup GE, Wiese B, Celia MA, Dahle HK, Eigestad GT, Pruess K (2012) Uncertainties in practical simulation of CO2 storage. Int J Greenhouse Gas Control 9:234–242 Oberkampf WLC (2010) Verification and validation in scientific computing. Cambridge University Press, Cambridge Petersen A (2006) Simulating nature: a philosophical study of computer-simulation uncertainties and their role in climate science and policy advice. Het Spinhuis Popper KR, Hudson GE (1963) Conjectures and refutations: the growth of scientific knowledge. AIP Schulze F, Keimeyer F, Schöne R, Westphal I, Janssen G, Bartel S, Seiffert S (2015) Unterirdische Raumplanung-Vorschläge des Umweltschutzes zur Verbesserung der über-und untertägigen


Informationsgrundlagen, zur Ausgestaltung des Planungsinstrumentariums und zur nachhaltigen Lösung von Nutzungskonflikten, Teil 2. Umweltbundesamt, Dessau-Roßlau Sim-SEQ. The Sim-SEQ project - understanding model uncertainties in geological carbon sequestration. https://eesa.lbl.gov/projects/sim-seq/. Accessed 23 Apr 2019 Walker WE, Harremoës P, Rotmans J, van der Sluijs JP, van Asselt MB, Janssen P, Krayer von Krauss MP (2003) Defining uncertainty: a conceptual basis for uncertainty management in model-based decision support. Integr Assess 4:5–17 Walter L, Binning P, Oladyshkin S, Flemisch B, Class H (2012) Brine migration resulting from CO2 injection into saline aquifers - an approach to risk estimation including various levels of uncertainty. International Journal of Greenhouse Gas Control 9:495–506