Advances and New Trends in Environmental Informatics 2023: Sustainable Digital Society (Progress in IS) [1 ed.] 3031469011, 9783031469015

This book is an outcome of the 37th International Conference EnviroInfo 2023, held at the Leibniz Supercomputing Centre in Garching near Munich, Germany.


Language: English | Pages: 285 [275] | Year: 2024


Table of contents:
Preface
Contents
Part I Environmental Modeling and Monitoring
Reviewing Explainable Artificial Intelligence Towards Better Air Quality Modelling
1 Introduction
2 Materials, Methods, and Concepts
2.1 Explainable Artificial Intelligence Basic Methods
2.2 Air Quality and ML Related Explainable Methods
3 Explainable AI in the Field of Air Quality
4 Discussion
5 Conclusions
References
Commonalities and Differences in ML-Pipelines for Air Quality Systems
1 Introduction
2 Methods
2.1 The Fog Computing Approach in AQ Systems of the WSB Merito University in Gdansk
2.1.1 Considerations for Pre-processing Data in the Fog Layer
2.1.2 Preliminary Studies of AQ Data Processing to Assess the Possibility of Their Use in the Fog Layer
2.2 The Aristotle University Thessaloniki ML Pipeline
2.3 University of Applied Sciences Bielefeld
2.3.1 Data Pre-processing
2.3.2 Feature Engineering
2.3.3 Data Modelling
3 Comparison and Discussion
4 Conclusions
References
Optimal Stacking Identification for the Machine Learning Assisted Improvement of Air Quality Dispersion Modeling in Operation
1 Introduction
2 Materials and Methods
2.1 Machine Learning Models and Genetic Algorithm
2.2 Genetic Algorithm Hybrid Stacking
3 Results
4 Discussion
5 Conclusions
References
Concepts for Open Access Interdisciplinary Remote Sensing with ESA Sentinel-1 SAR Data
1 Introduction
2 State-of-the-Art/Research
2.1 Remote Sensing
2.2 Algorithm/Software Development in Science
3 Previous Work
4 Concepts for New Solutions
4.1 The Sentinel-1 SAR Processing Tool Software Architecture
4.2 Main Focus
5 Challenges and Limitations
6 Conclusion
References
Part II Technological Advances and Sustainability
Developing a Digitisation Dashboard for Industry-Level Analysis of the ICT Sector
1 Introduction
2 Literature Review
2.1 Overview
2.2 Industry Analysis
2.3 Patent Analysis for Industry Analysis
2.4 Dashboards for Visualization
3 Research Methodology
4 Dashboard Design
5 Evaluation
5.1 Heuristic Evaluation
5.2 Results
6 Conclusion and Future Work
References
The Bike Path Radar: A Dashboard to Provide New Information About Bicycle Infrastructure Quality
1 Introduction
2 State of the Art and Research Gap
3 Website
3.1 Structure and Technologies
3.2 KPIs
4 Data
4.1 Time Series Data
4.2 API
5 AI Approach
5.1 Surface Types and Damages on Bicycle Paths
5.2 AI Model for Damage Detection
6 Conclusion and Outlook
References
Tactics for Software Energy Efficiency: A Review
1 Introduction
2 Related Work
3 Definitions
3.1 System and Software Levels
3.2 Energy Efficiency Optimization
3.3 Platforms
3.4 Abstraction Levels
3.5 Software Development Life Cycle
4 Study Design
4.1 Research Goal
4.2 Research Questions
4.3 Search Strategy
4.3.1 Initial Search
4.3.2 Impurity Removal
4.3.3 Application of Selection Criteria
4.3.4 Snowballing
4.3.5 Use of Secondary Studies for Snowballing
4.4 Data Extraction
4.4.1 Characteristics of Tactics for Software Energy Efficiency
4.4.2 Potential for Industrial Adoption
4.4.3 Use of Extended Papers
4.5 Data Synthesis
4.6 Study Replicability
5 Results RQ1: Characteristics of Tactics for Software Energy Efficiency
5.1 Publication Trends
5.1.1 Publication Year
5.1.2 Publication Types
5.1.3 Publication Venues
5.2 Tactic Properties
5.2.1 Execution Environment
5.2.2 Tactic Goal
5.2.3 Execution Environments and Tactic Goals
5.2.4 Abstraction Level
5.2.5 Software Development Stage
5.2.6 Platform
6 Results RQ2: Potential for Industrial Adoption
6.1 Industrial Involvement
6.2 Rigor and Industrial Relevance
7 Discussion
8 Threats to Validity
9 Conclusion
References
everWeather: A Low-Cost and Self-Powered AIoT Weather Forecasting Station for Remote Areas
1 Introduction
2 Related Work
3 everWeather System
3.1 Hardware
3.2 Forecasting Algorithm
3.3 System Configuration
3.4 API for Data Monitoring and Storing
4 Experimentation
4.1 Deployment
4.2 Data Analysis
4.3 Forecasting Results
5 Conclusions and Future Work
Funding
References
Part III Data-Driven Approaches to Environmental Analysis
News from Europe's Digital Gateway: A Proof of Concept for Mapping Data Centre News Coverage
1 Introduction
2 Public Debates Towards Data Centres
3 Research Methodology
3.1 Data Collection
3.2 Article Content Analysis
3.3 Data Extraction
3.4 Mapping of News Articles
4 Results
4.1 Identified Categories
4.2 Spatial Differences and Category Co-occurrence
4.2.1 Municipal Findings
4.2.2 Categorical Findings
5 Discussion
5.1 Amsterdam
5.2 Zeewolde
5.3 Groningen
5.4 Hollands Kroon
6 Limitations
7 Future Work
References
GAEA: A Country-Scale Geospatial Environmental Modelling Tool: Towards a Digital Twin for Real Estate
1 Introduction
2 Literature Review
3 The GAEA Tool
3.1 GAEA User Interface
3.2 Software Architecture
3.3 Implementation Details
3.4 GAEA Environmental Services Overview
3.5 Evaluation and Results
4 Discussion
5 Conclusion
References
Detecting Effects on Soil Moisture with Guerilla Sensing
1 Introduction
2 The Microclover Challenge
3 Related Work
4 Guerilla Sensing as Soil Moisture Observation System
5 Soil Moisture Sensor Selection
5.1 Sensor Exploration
5.2 Sensor Experiments
6 Microclover G-Boxes Setup
7 Microclover SMOS Evaluation and Parameterization
8 Conclusion
References
Data Management of Heterogeneous Bicycle Infrastructure Data
1 Introduction
2 Introduction Data Sources
2.1 Data Use in Cycling Planning
2.2 The INFRASense Project
2.3 Time Series Data
2.4 Non-time Series Data
3 Data Lake and Data Pipeline
4 Data Models
4.1 Time Series Data
4.2 Non-time Series Data
5 Conclusion and Outlook
References
Part IV Sustainable Planning and Infrastructure
Evaluation of Incentive Systems in the Context of SusCRM in a Local Online Retail Platform
1 Introduction
1.1 Motivation
1.2 State of Art
2 SusCRM in a Local Online Retail Platform
2.1 Alignment of the Online Retail Platform
2.2 Incentive Systems in the Context of a Local Online Retail Platform
2.3 SusCRM in the Online Customer Journey
3 Evaluation of Different Incentive Systems
3.1 Research Hypothesis and Method
3.2 Results
4 Discussion
5 Future Outlook
References
Geospatial Data Processing and Analysis of Cross-Border Rail Infrastructures in Europe
1 Introduction
1.1 Background
1.2 Aim of the Work
1.3 Structure
2 The Search for Influencing Factors
3 Collection and Processing of the Geographic Data
3.1 Creation of the Base Map
3.2 Data on Railway Infrastructure
3.3 Identification of the Intersections
3.4 Preliminary Result
3.5 Filtering Border Regions with CBRCs
3.6 Identification of all European Border Regions
4 Analysis of Potential Influencing Factors
4.1 Analysis of Language as an Influencing Factor
4.2 Analysis of the Economy as an Influencing Factor
4.3 Analysis of Population Size as an Influencing Factor
4.4 Analysis of Tourism as an Influencing Factor
4.5 Analysis of Natural Borders as an Influencing Factor
5 Results
5.1 Language
5.2 Tourism
5.3 Economy
5.4 Population
5.5 Natural Borders
6 Discussion and Outlook
References



Progress in IS

Progress in IS encompasses the various areas of Information Systems in theory and practice, presenting cutting-edge advances in the field. It is aimed especially at researchers, doctoral students, and advanced practitioners. The series features research monographs, edited volumes, and conference proceedings that make substantial contributions to our state of knowledge, as well as handbooks and other edited volumes in which a team of experts is organized by one or more leading authorities to write individual chapters on various aspects of the topic. Individual volumes in this series are supported by a minimum of two external reviews.

Volker Wohlgemuth • Dieter Kranzlmüller • Maximilian Höb Editors

Advances and New Trends in Environmental Informatics 2023: Sustainable Digital Society

Editors Volker Wohlgemuth HTW Berlin, University of Applied Sciences Berlin, Germany

Dieter Kranzlmüller Ludwig-Maximilians-Universität München München, Germany

Maximilian Höb Leibniz Supercomputing Centre Garching, Germany

ISSN 2196-8705    ISSN 2196-8713 (electronic)
Progress in IS
ISBN 978-3-031-46901-5    ISBN 978-3-031-46902-2 (eBook)
https://doi.org/10.1007/978-3-031-46902-2

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2024

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Paper in this product is recyclable.

Preface

This book presents the key research findings from the 37th edition of the long-standing and established international and interdisciplinary conference series on environmental information and communication technologies, EnviroInfo 2023. The conference was held from 11 to 13 October 2023 in Garching near Munich, Germany. It was organized by the Technical Committee on Environmental Informatics of the Gesellschaft für Informatik e.V. (German Informatics Society, GI) and the Leibniz Supercomputing Centre (LRZ) in Garching.

Featuring a selection of peer-reviewed research papers, this book describes innovative scientific approaches and ongoing research in the thriving field of environmental informatics. Combining and shaping national and international activities in the realm of applied informatics and sustainability, the EnviroInfo conference series aims to present and discuss the latest state-of-the-art developments in Information and Communication Technology (ICT) for sustainability-related fields.

The articles included in this book encompass a wide array of scientific aspects. These span subjects like sustainable computing, software engineering, and digital transformation in general, along with topics like artificial intelligence applications, sustainable mobility, green coding, ICT's role in the circular economy, and other pertinent areas within the sphere of environmental informatics. Notably, the conference places a particular emphasis on exploring how environmental informatics can contribute to achieving the United Nations' Sustainable Development Goals (SDGs) and which of these goals are directly addressed by the environmental informatics community.

Our heartfelt gratitude extends to all contributors for their valuable submissions. Additionally, we extend special thanks to the dedicated members of the program and organizing committees for their diligent review of all submissions. In particular, we would like to express our appreciation to our colleagues at the Leibniz Supercomputing Centre (LRZ) in Garching for their support in organizing the event. We also warmly thank our sponsors, whose support helped make the conference possible.


To conclude, our profound appreciation goes to Mrs. Barbara Bethke from Springer and the entire Springer production team for their invaluable assistance and guidance in successfully producing this book.

Berlin, Germany    Volker Wohlgemuth
Garching, Germany    Dieter Kranzlmüller
Garching, Germany    Maximilian Höb

Contents

Part I Environmental Modeling and Monitoring

Reviewing Explainable Artificial Intelligence Towards Better Air Quality Modelling (Thomas Tasioulis and Kostas Karatzas) ... 3

Commonalities and Differences in ML-Pipelines for Air Quality Systems (Cezary Orlowski, Grit Behrens, and Kostas Karatzas) ... 21

Optimal Stacking Identification for the Machine Learning Assisted Improvement of Air Quality Dispersion Modeling in Operation (Evangelos Bagkis, Theodosios Kassandros, Lasse Johansson, Ari Karppinen, and Kostas Karatzas) ... 39

Concepts for Open Access Interdisciplinary Remote Sensing with ESA Sentinel-1 SAR Data (Jennifer McClelland, Tanja Riedel, Florian Beyer, Heike Gerighausen, and Burkhard Golla) ... 57

Part II Technological Advances and Sustainability

Developing a Digitisation Dashboard for Industry-Level Analysis of the ICT Sector (Timothy Musharu and Jorge Marx Gómez) ... 75

The Bike Path Radar: A Dashboard to Provide New Information About Bicycle Infrastructure Quality (Michael Birke, Florian Dyck, Mukhran Kamashidze, Malte Kuhlmann, Malte Schott, Richard Schulte, Alexander Tesch, Johannes Schering, Pascal Säfken, Jorge Marx Gómez, Kathrin Krienke, and Peter Gwiasda) ... 95

Tactics for Software Energy Efficiency: A Review (Jose Balanza-Martinez, Patricia Lago, and Roberto Verdecchia) ... 115

everWeather: A Low-Cost and Self-Powered AIoT Weather Forecasting Station for Remote Areas (Sofia Polymeni, Georgios Spanos, Dimitrios Tsiktsiris, Evangelos Athanasakis, Konstantinos Votis, Dimitrios Tzovaras, and Georgios Kormentzas) ... 141

Part III Data-Driven Approaches to Environmental Analysis

News from Europe's Digital Gateway: A Proof of Concept for Mapping Data Centre News Coverage (Anna Bessin, Floris de Jong, Patricia Lago, and Oscar Widerberg) ... 161

GAEA: A Country-Scale Geospatial Environmental Modelling Tool: Towards a Digital Twin for Real Estate (Asfa Jamil, Chirag Padubidri, Savvas Karatsiolis, Indrajit Kalita, Aytac Guley, and Andreas Kamilaris) ... 177

Detecting Effects on Soil Moisture with Guerilla Sensing (Johannes Hartkens, Florian Schmalriede, Marvin Banse, Dirk C. Albach, Regine Albers, Oliver Theel, and Andreas Winter) ... 201

Data Management of Heterogeneous Bicycle Infrastructure Data (Johannes Schering, Pascal Säfken, Jorge Marx Gómez, Kathrin Krienke, and Peter Gwiasda) ... 219

Part IV Sustainable Planning and Infrastructure

Evaluation of Incentive Systems in the Context of SusCRM in a Local Online Retail Platform (Benjamin Buchwald, Mattes Leibenath, Richard Schulte, Sascha Heß, Linus Krohn, and Benjamin Wagner vom Berg) ... 239

Geospatial Data Processing and Analysis of Cross-Border Rail Infrastructures in Europe (Peter Paul Theisen and Benjamin Wagner vom Berg) ... 257

Part I Environmental Modeling and Monitoring

Reviewing Explainable Artificial Intelligence Towards Better Air Quality Modelling

Thomas Tasioulis and Kostas Karatzas

Abstract The increasing complexity of machine learning models used in environmental studies necessitates robust tools for transparency and interpretability. This paper systematically explores the transformative potential of Explainable Artificial Intelligence (XAI) techniques within the field of air quality research. A range of XAI methodologies, including Permutation Feature Importance (PFI), Partial Dependence Plots (PDP), SHapley Additive exPlanations (SHAP), and Local Interpretable Model-Agnostic Explanations (LIME), have been effectively investigated to achieve robust, comprehensible outcomes in modeling air pollutant concentrations worldwide. The integration of advanced feature engineering, visual analytics, and methodologies like DeepLIFT and Layer-Wise Relevance Propagation further enhances the interpretability and reliability of deep learning models. Despite these advancements, a significant proportion of air quality research still overlooks the implementation of XAI techniques, resulting in biases and redundancies within datasets. This review highlights the pivotal role of XAI techniques in addressing these challenges, thus promoting precision, transparency, and trust in complex models. Furthermore, it underscores the necessity for a continued commitment to the integration and development of XAI techniques, pushing the boundaries of our understanding and usability of Artificial Intelligence in environmental science. The comprehensive insights offered by XAI can significantly aid decision-making processes and lead to transformative strides within the fields of Internet of Things and air quality research.

Keywords XAI · Explainable modelling · Air quality

T. Tasioulis · K. Karatzas
Environmental Informatics Research Group, School of Mechanical Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece
e-mail: [email protected]; [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
V. Wohlgemuth et al. (eds.), Advances and New Trends in Environmental Informatics 2023, Progress in IS, https://doi.org/10.1007/978-3-031-46902-2_1


1 Introduction

Air quality (AQ) is one of the most pressing environmental challenges facing the world today. It constitutes a significant threat to public health and ecosystems and has a direct effect on climate change. Estimating air pollution requires sufficient data for the studied area, as well as methods that can transform the data into interpretable results, either as indices or as pollutant concentrations that show the extent and severity of the current situation [1–3]. Predicting air pollution levels is likewise fundamental for better environmental management and informed decisions. Currently, there are numerous methods that can predict air pollution concentration levels, fuse data from various sources, and extract insights and information from databases [4]. These methods can be divided into two main categories, deterministic methods and empirical or data-driven methods, where the latter include statistical approaches and Machine Learning/Deep Learning methods. Nowadays, empirical methods account for a large proportion of approaches in current research, as they tend to be cheaper and faster while remaining comparably accurate [5].

A deterministic model in AQ employs physical and chemical principles to simulate the behavior of air pollutants in the atmosphere and predict their concentrations at specific locations and times, making use of a system of nonlinear differential equations solved via numerical methods [6]. These equations take into account factors such as emission sources, atmospheric stability, wind speed and direction, and chemical reactions between pollutants and other substances in the atmosphere, and they reflect transportation, dispersion, and diffusion mechanisms. The process constitutes a simulation of how each pollutant would behave in a real-time scenario. The main advantage of such models is their ability to simulate the real world in a holistic manner and therefore to provide explainable results on the basis of their inputs. However, these methods have certain limitations and weaknesses; the main weakness of deterministic approaches is the number of factors required for accurate predictions [7, 8]. The simulation process tends to be expensive both practically and computationally, and a higher level of expertise is needed to use such models. Other approaches compete with these methods, as they can produce results of equal quality, sometimes with less effort and expertise.

Empirical methods use historical data and statistical approaches to derive and evaluate their predictions. These methods do not explicitly model the underlying physical and chemical processes, but rather rely on correlations between measured pollutant concentrations and other factors such as meteorology, emissions, land use, etc. [6]. However, empirical methods span a wide range of complexity, from low-complexity models (ARIMA, ANOVA, etc.) to high-complexity models such as Deep Learning models (e.g., Convolutional Neural Networks, Hidden Markov Models). Among empirical methods there is an underlying trade-off between complexity, accuracy, and interpretability. Low-complexity models are easy to apply and explain, as most of the time they are self-explanatory; such


examples include linear and logistic regression, decision trees, and empirical models related to weather conditions. However, such models may suffer from inaccurate results and high bias, and they are severely vulnerable to extreme cases. On the other hand, high-complexity models such as multi-layer Artificial Neural Networks (ANNs) and Deep Learning techniques (Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), etc.) can usually produce better results in both spatial and temporal dimensions [9–11]. Accordingly, the more complex a model becomes, the harder it is to interpret; by adding new parameters and factors, the process of predicting air pollution levels becomes ambiguous, and therefore the results cannot directly support the decision-making process of policymakers.

To tackle this problem, methods of Explainable Artificial Intelligence (XAI) can be used. As in real life, people will not trust a decision that is not backed by proper arguments and explanations, especially when those decisions are accountable for serious health-related issues such as immunotherapy [12], allergic reactions [13], etc. Explainable AI systems try to bridge the gap between a "black box" (commonly data-driven models) and a "white box" (commonly deterministic models), without significant loss of prediction accuracy, by modifying or explaining models in simpler forms and reporting on the decision-making process of the model in a way that can be understood by humans [14, 15]. These methods either sit on top of existing Machine Learning (ML) models to enhance interpretability or are independent methods that produce such results in a comprehensible manner. An explainable result should therefore provide clear and transparent information about how the AI system arrived at its decision, including the factors and variables that were used, the importance of each factor (usually in the form of an assigned weight), and a form of reasoning behind the decision (e.g., assuming a linear relation). By making AI results explainable, users can better understand how the system works and have confidence in the decisions it produces.

In this study, we reviewed the international literature, trying to answer three major research questions: "How can one reach an explainable result?", "How does explainable AI modelling fit environmental and AQ data?", and "What extra functionalities can explainable methods offer to an air quality analysis?"

2 Materials, Methods, and Concepts

To answer the proposed research questions, we conducted a study applying a keyword-driven search methodology to various scholarly databases, including Google Scholar, Scopus, and ScienceDirect. Our main criterion was the combination of the keywords "XAI," "Air quality," and "Explainable models," ensuring a comprehensive review of the field of Explainable AI and its relation to AQ. Furthermore, our research was an iterative process; findings from some papers led us to examine similar papers that might not have surfaced in our initial keyword exploration, allowing us to cross-reference our findings with other fields


that intersect with AQ research and have a strong impact on it. This process ensured a deeper dive into the field and into the current state of the art around Explainable AI and AQ.

Before embarking on an exploration of XAI principles as applied to the domain of AQ, it is essential to delineate the key distinctions among fuzzy logic methodologies, factor significance (or importance) in ML-powered modeling, and XAI. In these "non-XAI" approaches, the output of a model is not always confined to a specific value (be it binary, categorical, numerical, or other) but rather presents varying degrees of truth or probability concerning the prediction. Yet, a stark difference emerges when comparing the operational parameters of these "non-XAI" approaches. Fuzzy logic systems primarily cope with the challenges of uncertainty and imprecision by employing a structured methodology based on predefined rules. These rules, grounded in fuzzy set theory, enable a system to deal with approximate values and ambiguous or incomplete data, in contrast to the binary true-or-false paradigm traditionally observed in logical systems. As a result, fuzzy logic systems are uniquely capable of operating effectively within complex environments with imperfect information, or in environments where non-binary approaches should be used, providing nuanced outputs that consider degrees of truth rather than absolutes [16–18], and also helping in the comparison and interpretation of AQ model results [19]. By contrast, artificial intelligence systems operate under the presumption that each data point or input is absolutely reliable, thereby steering decisions that are strictly data-driven. Interestingly, a combination of these two theories frequently proves instrumental in crafting the final prediction when integrated into more intricate systems [20–22]. By synthesizing the strengths of both approaches, the computational robustness and decision-making capacity of these models are significantly enhanced. Therefore, a comprehensive understanding of these distinct methodologies is vital when delving into the subject of XAI within the context of AQ.

The notion of feature importance in ML models, often referred to as factor significance, provides a nuanced understanding of the relative contribution of each input variable towards the predictive outcome of the model. Such significance is typically observed through the application of particular metrics or algorithms, thereby establishing a quantitative framework to measure the ranking of each variable within the context of the predictive task. This notion is largely instrumental within the decision-making process, offering a means to enhance model performance by concentrating on the variables exerting the most substantial influence. However, when dealing with feature significance, it is important to note that every predictor invariably contributes to the predictive outcome, even though certain predictors may exhibit a negative contribution. Consequently, feature importance, as an interpretive tool, is limited in its capacity to discern hidden relationships or address collinearity among the features within the model, even though it provides a reasonably comprehensive insight into the model's decision-making process. Moreover, the applicability of feature importance is not universal across all models. Models based on Deep Learning architectures, for example, frequently struggle to



identify significant features with a high degree of confidence. This limitation often compromises the interpretability of the model. Thus, amalgamating the strengths of various methodologies can significantly bolster the computational robustness and decision-making capacity of these models. A thorough understanding of these discrete methodologies (fuzzy logic, feature importance, and Explainable Artificial Intelligence) is therefore crucial in the exploration of XAI within the context of AQ modeling and prediction.

Visual analytics is another method adding explainability to ML models, demonstrating its potential in various applications such as AQ monitoring. It enhances our grasp of ML models by representing data visually, thereby increasing interpretability and facilitating user interaction. The authors of [23] propose an architecture for the design of custom interactive visual analytics dashboards, in which each component of a dashboard, visible or otherwise, performs a focused set of operations with well-defined inputs and outputs. These interactive dashboards use features like heatmaps and line charts, which allow users to visually navigate and manipulate data at specific geographical points or time instances. A unique feature of this architecture is the provision for linking different components, forming reactive workflows that define the underlying logic of a visual analytics dashboard. In addition, to aid in achieving explainability in AI models, multiple components are usually implemented, such as the Annotated Line Chart (ALC)² and the SHAP Chart.³ A SHAP Chart is a typical feature importance chart based on Shapley values⁴ calculated by the SHAP algorithm. The chart can contain negative values if a predictor not only contributes to the model but also distorts the output during the prediction process. The ALC, on the other hand, is the most basic form of visual analytics; it is mainly used to spot trends, anomalies, or outliers in the data that are visible and distinct. Its main drawback is the curse of dimensionality; the more variables included in the analysis, the more infeasible it becomes to find such outliers and draw significant results from them. These visualizations preserve transparency in how the analysis outcome is derived, fostering trust in the models. The proposed architecture allows for the inclusion of any functionality that can be structured in component form, and the combination of these two charts can provide higher levels of transparency and interpretability of the outcome. Although this design provides a solid foundation for future development, including the creation of new visualization components, it remains limited by the interpretation capability of the user who analyses the data and by the existing biases in the data (e.g., collinearity, intercorrelation, sampling bias).

² An Annotated Line Chart is a line chart with additional annotations and clarifications, used to explain the results of a ML model.
³ A SHAP (SHapley Additive exPlanations) chart is a visualization method making use of game theory concepts. It is used to improve the interpretability of ML models.
⁴ A Shapley value is a measure (expressed as a positive or negative value) of the contribution of each feature to the overall performance of an AI model.


Therefore, a systematic approach to explainable ML models is needed, one that can enhance the explainability of more complex AI models, such as deep neural networks, through methodologies that can independently assure the explainability of the model regardless of the user. The application of visual analytics to ML models within the realm of AQ monitoring thus holds substantial promise, with strong advantages as well as potential drawbacks. The development of interactive dashboards in combination with XAI methods that validate the findings, and their potential for increasing transparency, interpretability, and user engagement, positions visual analytics as an indispensable tool in managing and interpreting complex environmental data.
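To make the Annotated Line Chart idea concrete, the following minimal Python sketch plots a synthetic, entirely hypothetical PM2.5 series and annotates one pollution episode; the threshold rule, variable names, and data are our own illustration, not part of the architecture proposed in [23].

import numpy as np
import matplotlib.pyplot as plt

# Synthetic hourly PM2.5 series with an injected pollution episode (illustrative only).
rng = np.random.default_rng(0)
hours = np.arange(240)
pm25 = 20 + 5 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)
pm25[130:140] += 35  # the episode an analyst would want the chart to flag

fig, ax = plt.subplots(figsize=(9, 3))
ax.plot(hours, pm25, lw=1, label="PM2.5")

# Annotate the most extreme point, as an Annotated Line Chart would highlight it.
threshold = pm25.mean() + 3 * pm25.std()
peak = int(np.argmax(pm25))
ax.axhline(threshold, ls="--", color="grey", label="3-sigma threshold")
ax.annotate("pollution episode", xy=(peak, pm25[peak]),
            xytext=(peak + 10, pm25[peak] + 5),
            arrowprops=dict(arrowstyle="->"))
ax.set(xlabel="hour", ylabel="PM2.5 (µg/m³)")
ax.legend()
plt.tight_layout()
plt.show()

Even this simple chart illustrates the dimensionality caveat above: with a single variable the episode is obvious, but with dozens of sensor channels such visual outlier-hunting quickly becomes infeasible.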

2.1 Explainable Artificial Intelligence Basic Methods

In the field of ML, the quest for decipherable and interpretable results has engendered the development of numerous methods and frameworks. The Local Interpretable Model-Agnostic Explanations (LIME) algorithm, introduced by Ribeiro et al. in 2016 [24], is a prominent XAI approach designed to elucidate the predictions of any ML model, irrespective of its opacity or so-called "black-box" nature. LIME operates by crafting a locally interpretable model, which approximates the behaviour of the original model in the vicinity of a given prediction. This approximation, executed through linear and stochastic processes, permits insights into the decision-making rationale of the model and facilitates the identification of the most salient features underpinning a given prediction. The strength of LIME lies in its model-agnostic approach, allowing it to be utilized in conjunction with any ML model, regardless of complexity. However, it is essential to acknowledge certain limitations inherent to this method. Specifically, LIME's focus on generating local models may result in an inability to capture global trends comprehensively. Additionally, its reliance on linear interpretations may restrict its capacity to encapsulate non-linear relationships among data points. This nuanced understanding of LIME is imperative for its effective and informed application in the field of XAI.

SHapley Additive exPlanations (SHAP), developed by Lundberg and Lee (2017), is a renowned method in the field of XAI with the capacity to elucidate the output of any given ML model. Unlike LIME, SHAP's theoretical foundation is game theory, wherein Shapley values are deployed to measure the contribution of each feature towards the final prediction. SHAP embraces an intriguing conceptualization wherein all variables are considered as "players" in the predictive "game". Accordingly, each "player" or variable contributes to a certain degree to the final prediction. This analogy helps us understand the interconnectedness of variables in a model and their collective influence on the output. Subsequent adaptations of the SHAP algorithm have been proposed in studies [25, 26]. These versions of SHAP have been explicitly designed to cater to tree-based and random forest models. Notably, these targeted adaptations have demonstrated substantial performance improvements, as they not only enhance the efficiency of the original SHAP algorithm but also extend its applicability to tree and random


forest models. By further refining the interaction and contribution of features, these adaptations promote an advanced level of interpretability and accuracy, hence making a significant contribution to the field of XAI.
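As a concrete illustration of both methods, the sketch below fits a random forest on a synthetic, hypothetical air quality dataset (the feature names and data-generating process are invented for demonstration) and explains it globally with SHAP's tree-specific explainer and locally with LIME. It assumes the third-party shap and lime packages are installed; this is a minimal sketch, not the setup of any study cited here.

import numpy as np
import pandas as pd
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestRegressor

# Synthetic AQ-style dataset: meteorology and a traffic proxy predicting NO2.
rng = np.random.default_rng(42)
n = 500
X = pd.DataFrame({
    "temperature": rng.normal(15, 8, n),
    "wind_speed": rng.gamma(2.0, 1.5, n),
    "humidity": rng.uniform(20, 95, n),
    "traffic_density": rng.poisson(40, n).astype(float),
})
y = 5 + 0.8 * X["traffic_density"] - 2.5 * X["wind_speed"] + rng.normal(0, 3, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Global explanation: TreeExplainer is the tree-specific SHAP adaptation.
shap_values = shap.TreeExplainer(model).shap_values(X)
global_importance = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(global_importance.sort_values(ascending=False))

# Local explanation: LIME fits an interpretable linear surrogate around one sample.
lime_explainer = LimeTabularExplainer(
    X.values, feature_names=list(X.columns), mode="regression")
explanation = lime_explainer.explain_instance(X.values[0], model.predict, num_features=4)
print(explanation.as_list())  # (feature condition, local weight) pairs

Note the complementary scopes: the mean absolute Shapley values summarize the model globally, while the LIME weights describe a linear surrogate that is valid only in the neighbourhood of the single explained sample.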

2.2 Air Quality and ML Related Explainable Methods

Within the domains of AQ research, ML, and Deep Learning, numerous methodologies have been developed to enhance the interpretability of results. However, these techniques do not seamlessly fit within the framework of XAI. Often, they come with distinct requirements related to data or model architectures. Furthermore, their ability to provide comprehensive explainability is somewhat limited compared to the state of the art in XAI. For Artificial Intelligence methods focusing on the air quality/air pollution domain, one should take into account that this is an area where data take the form of time series that also carry a spatial id (usually comprising a latitude and longitude pair).

For time series data, Time-Correlation-Partitioning [27] showed prominent results in the AQ domain. In this approach, the whole time series is partitioned into multiple segments to capture the correlation between the target value and secondary variables. The time-correlation-partitioned segments are used to identify causal relationships between the variables using statistical techniques such as Granger causality, dynamic regression models, or Bayesian networks. These techniques aim to establish the direction and the strength of the causal relationships between the variables. Furthermore, this method incorporates information entropy among the segments to capture significant changes in the trendlines across each secondary variable.

Layer-wise Relevance Propagation (LRP), proposed by Bach et al. [28], is another prominent method mainly used to explain Deep Neural Network models (DNNs). This method sits on top of the existing model; the process follows a form of backpropagation, where specific instances of data (or segments of them) are fed into the neural network and the prediction is explained through an algorithmic sequence of steps. The explanation can take the form of a heatmap or saliency map, which highlights the input features that are most relevant to the decision. This type of algorithm has shown rigorous results in both explainability and interpretability across multiple domains, for both time-series data and images [29].

Feature selection constitutes a pivotal technique that can yield interpretable results, particularly in the realm of AQ forecasting. Integral to the pre-processing phase, feature selection and engineering are crucial for enhancing predictive accuracy. Yet, the pursuit of explicability often relies on domain-specific expertise rather than on the technique itself. For example, the relevance of a seven-day lag variable for ozone, while high in the context of forecasting precision, necessitates a comprehensive understanding of the behavior of ozone and of the temporal lag effect on pollutants (see the sketch below). This underscores the importance of expert knowledge in illustrating the complex relationships inherent in these data.
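As a small, hypothetical illustration of such lag-based feature engineering (the add_lag_features helper and the ozone values below are invented for demonstration), one- and seven-day lags can be added with pandas as follows.

import pandas as pd

def add_lag_features(df: pd.DataFrame, column: str, lags: list) -> pd.DataFrame:
    """Append lagged copies of one pollutant column (hypothetical helper)."""
    out = df.copy()
    for lag in lags:
        out[f"{column}_lag{lag}d"] = out[column].shift(lag)
    return out.dropna()  # the first max(lags) rows have no lagged history

# Hypothetical daily ozone record indexed by date.
ozone = pd.DataFrame(
    {"o3": [42.0, 45.1, 39.8, 55.3, 60.2, 48.7, 44.0, 52.9, 58.1, 61.5]},
    index=pd.date_range("2023-06-01", periods=10, freq="D"),
)
print(add_lag_features(ozone, "o3", lags=[1, 7]))

Whether o3_lag7d improves forecasts is a question the model can answer; why it does so still requires the domain expertise discussed above.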


Additionally, methods such as Deep Learning Important FeaTures (DeepLIFT) and Layer-Wise Relevance Propagation (LRP) offer distinct approaches to discerning the relationship between predictive variables and forecasted outcomes [30]. Both have been developed to enhance the interpretability of Deep Learning models. DeepLIFT is an algorithm that assigns contribution scores to each input feature by comparing the activation of each neuron to a "reference activation" and computing the differences [30]. These differences are then backpropagated through the network, thereby attributing the output of the network to its input features. In essence, DeepLIFT helps identify which features were critical in making a particular prediction, rendering deep neural networks more interpretable and aiding the identification of patterns that might otherwise be obscured. Conversely, LRP works by distributing the output of the network back to its input layer, attributing relevance scores to individual neurons in the process [28]. This creates a heatmap of relevance across the input features, offering a visual interpretation of which features contribute most significantly to a given output. Through this process, LRP provides a nuanced understanding of the intricate inner workings of deep learning models, effectively supporting the interpretation of complex models and bridging the gap between accuracy and explainability.
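To make the relevance-redistribution idea tangible, the following NumPy sketch implements the epsilon-rule variant of LRP for a tiny two-layer network. The random weights and input values are placeholders for demonstration; this is a didactic sketch of the rule described in [28], not the original authors' implementation.

import numpy as np

def lrp_epsilon(weights, biases, x, eps=1e-6):
    """Epsilon-rule LRP for a small MLP with ReLU hidden layers and a linear output."""
    # Forward pass, storing the activation entering each layer.
    activations = [x]
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = activations[-1] @ W + b
        a = z if i == len(weights) - 1 else np.maximum(0.0, z)
        activations.append(a)

    # Backward pass: redistribute the output relevance layer by layer.
    relevance = activations[-1]
    for i in range(len(weights) - 1, -1, -1):
        a, W, b = activations[i], weights[i], biases[i]
        z = a @ W + b
        z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilised denominator
        relevance = a * ((relevance / z) @ W.T)
    return relevance  # one relevance score per input feature

# Random placeholder network: 4 inputs -> 8 hidden units -> 1 output.
rng = np.random.default_rng(1)
weights = [rng.normal(0, 0.5, (4, 8)), rng.normal(0, 0.5, (8, 1))]
biases = [np.zeros(8), np.zeros(1)]
x = np.array([0.8, 0.1, 0.5, 0.3])  # e.g. scaled meteorological inputs

print(lrp_epsilon(weights, biases, x))  # heatmap-style scores over the inputs

Up to the share absorbed by the bias terms and the stabilising epsilon, the returned input relevances sum approximately to the network output, which is the conservation property that makes LRP heatmaps interpretable.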

3 Explainable AI in the Field of Air Quality

The exploration of the different methods makes evident that a plethora of approaches can be utilized to aid the explainability of results, each one leveraging different components of the model. Interestingly, certain subdomains of AQ and environmental analysis are more aligned with specific methods and tasks.

Within the realm of the Internet of Things (IoT), various methods from the field of XAI have been utilized, exhibiting their respective strengths and limitations across diverse research themes. For instance, SHAP techniques (introduced in Sect. 2.1) have been employed to assess space occupancy and thermal comfort using low-cost sensor models [31]. These techniques have proven instrumental in deriving interpretable results that facilitate informed decision-making processes [32]. Moreover, SHAP methodologies have been extended to the application of sound and vibration sensors, demonstrating superior performance in identifying false predictions while simultaneously reducing the number of features used [33]. This ability of SHAP techniques to accurately identify false predictions has been the subject of extensive investigation in various domains [34]. These applications highlight the versatility and efficacy of SHAP techniques within the IoT field, particularly their capacity to reduce feature complexity while maintaining accuracy. Such findings reinforce the value of XAI methods in advancing our understanding and use of ML models in IoT, underlining the potential for future research in this cross-disciplinary field.

In the field of AQ, XAI methods are currently less frequently utilized than deep learning models. However, this trend is showing a significant increase in recent


years, due to the need for transparent and interpretable solutions as models become more sophisticated. For instance, Gu et al. [35] employed SHAP-based approaches adapted for random forest and linear regression to evaluate air pollutants spatially and temporally. The study employed both linear regression (LR) and random forest (RF) models, with the latter demonstrating superior predictive capabilities due to its ability to capture complex and non-linear relationships. The study also underscored the impact of the specific division of training and test data sets on model performance, an aspect that led to the implementation of repeated cross-validations (RCVs) as well as the calculation of a stable mean and standard deviation as more comprehensive evaluation metrics. Using SHAP, the contribution of each predictor was evaluated; significant variables that emerged from this analysis included vehicle density, which consistently played a critical role across models, as well as population density. Although both LR and RF models highlighted the importance of vehicle density, geographic location, and population density in predicting nitrogen oxide (NO) concentrations, differences were observed in the ranking of these variables' contributions across models. These observations elucidate how different XAI techniques can yield divergent insights and potentially guide disparate strategies for mitigating NO concentrations.

Stadtler et al. [36] successfully used XAI methods to enhance underrepresented samples, identify redundancies, and flag inaccurate predictions in the spatial resolution of ozone measurements. To understand the functioning of their ML models, the researchers examined the nearest neighbors of the entire test set, utilizing the k-nearest neighbor algorithm. This allowed them to categorize all test samples into three cases, and they concentrated on the inaccurate predictions. The analysis employed various assumptions and leveraged XAI techniques, resulting in key insights. While both random forest and neural network models demonstrated similar performance on the test set, they encountered difficulties in accurately predicting some test samples due to several constraints. By utilizing XAI methods, important patterns within the models' activations, linked with both accurate and inaccurate predictions, were discerned. Furthermore, these techniques enabled the identification of underrepresented test samples and non-influential training samples, thus providing actionable recommendations, such as proposing new ozone observation stations. A subset of unexpected inaccurate predictions, considered untrustworthy, highlighted unique feature-target relationships not captured in the models, demonstrating the valuable insights derived from XAI techniques.

Rahardja et al. [37] applied XAI (TensorFlow built-in functions) to model the decision process along the temporal axis of PM2.5 modeled by multi-layered Recurrent Neural Networks. The study utilized the Permutation Feature Importance (PFI) method to identify globally significant features in the random forest model but, recognizing its limitation in assessing multi-feature interactions or the directionality of impact, further employed the Partial Dependence Plot (PDP) method. This served to provide deeper insights into the interactive effects of variables on clinic visits. To further substantiate these findings and better understand feature contributions, the researchers adopted the SHAP method, which provided a detailed understanding of both the magnitude and direction of feature impacts.


They also leveraged Local Interpretable Model-Agnostic Explanations (LIME) for local explanations of specific cases, delivering an easy-to-understand and reliable model interpretation. This multifaceted use of various XAI methods equipped the research with robust and interpretable results.

In more complex AQ systems, Palaniyappan Velumani et al. [38] utilized XAI methods to estimate the importance of variables used in both deterministic and ML models across spatial and temporal dimensions, combining multiple pollutants (PM2.5, PM10, O3, NO2, etc.) and meteorological data. Similarly, Ji et al. [39] used XAI methods to evaluate the AQ index and its relation to pediatric respiratory diseases using ozone and PM10, identifying the most important predictors affecting the number of pediatric respiratory outpatients in Taizhou, China. The analysis of the random forest model using XAI methods demonstrated that AQI, O3, PM10, and the current month are the most important factors affecting the number of outpatients. Furthermore, a combination of rough set theory and XAI methods was implemented by Dutta and Pal [40] to establish AQI thresholds for a greater area, using a threshold for each AQI category (very poor to high). Additionally, that research introduced Z-number-based information to deal with the unreliability and vagueness of real-world data. Z-numbers are used to quantify the linguistic description of air quality (AQI) for different threshold ranges of PM2.5 and PM10. The certainty and coverage values, along with corresponding linguistic descriptions, are presented to provide a reliable representation of AQI. Thereby, the authors validated the decisions made by the condition-decision support system using inverse-decision rules. These rules allow for the validation and explanation of the predictions by traversing from the output to the input side. The study also highlights the significance of flow graphs, explainability, and Z-number-based information in the development of an intelligent environmental decision support system. These elements contribute to the interpretability, reliability, and validation of the system's predictions, enhancing its usefulness in air quality management. The overall findings of the different studies, organized by method, subdomain, and task, are listed in Table 1.

To sum up, the successful application of XAI methods has been demonstrated within the realm of environmental studies. However, it is noteworthy that a substantial portion of AQ research currently lacks the inclusion of XAI techniques for result interpretation. This omission often culminates in the persistence of bias and redundancy within datasets, which can in turn impact factor estimation, even in "smaller" ML models like random forest and regression models. The value of XAI becomes particularly pronounced in spatio-temporal analysis, a setting where the quantity of factors can increase exponentially. In such contexts, efficient decision-making processes are paramount for the enhancement of AQ. The incorporation of XAI techniques into AQ research has the potential to identify and eradicate biases and redundancies within the data. This rectification can pave the way for more precise and interpretable results, which can in turn significantly aid decision-making processes.


Table 1 Findings across different studies for the encapsulation of XAI methods in environmental and air quality analyses

| Method | Subdomain | Task | Source |
| SHAP | Thermal comfort prediction, smart city applications in IoT, spatial and temporal predictions of air pollutants | Interpretability | [31, 34, 35] |
| SHAP | Space occupancy measuring in smart buildings | Interpretability and informed decision-making process | [32] |
| SHAP | Sound and vibration measuring | Identify false predictions, eliminate redundant features | [33] |
| SHAP | NO evaluation | Assess the ranking compared to RF importance | [35] |
| SHAP | Ozone measurements | Identify redundant samples, underrepresented samples augmentation | [36] |
| SHAP | PM2.5 measurements | Interpretability, factor estimation | [37] |
| SHAP | NOx measurements | Interpretability on Deep Learning models | [41] |
| Visual Analytics and Feature Importance in ML and deterministic models | Several pollutants (PM2.5, NOx, O3) | Factor importance on both spatial and temporal dimensions | [38] |
| Combined Rough Set Theory with Visual Analytics (Flowgraph) | Air quality index assessment | Interpretability, predictors evaluation | [40] |
| LIME | NO evaluation; PM2.5 evaluation | Model explanation and factor estimation | [35]; [37] |
| PDP Plot | Evaluate the air quality index using ozone and PM10; PM2.5 evaluation | Interpretability, factor estimation, factor ranking | [37]; [39] |
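As a generic illustration of the PFI-then-PDP workflow that recurs in Table 1, scikit-learn's model-inspection utilities suffice. The dataset and feature names below are synthetic and invented; the sketch shows the generic sequence, not the setup of any cited study.

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay, permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic dataset: meteorology and yesterday's PM2.5 predicting today's PM2.5.
rng = np.random.default_rng(7)
n = 600
X = pd.DataFrame({
    "temperature": rng.normal(12, 7, n),
    "wind_speed": rng.gamma(2.0, 1.5, n),
    "pm25_lag1": rng.gamma(3.0, 8.0, n),
})
y = 0.6 * X["pm25_lag1"] - 1.8 * X["wind_speed"] + rng.normal(0, 4, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# PFI: global importance as the score drop when one feature is shuffled.
pfi = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
print(pd.Series(pfi.importances_mean, index=X.columns).sort_values(ascending=False))

# PDP: marginal effect of each feature, adding the directionality PFI lacks.
PartialDependenceDisplay.from_estimator(model, X_te, features=["wind_speed", "pm25_lag1"])
plt.show()

PFI ranks features globally; the PDP curves then add the directionality PFI lacks, for example showing predicted PM2.5 falling as wind speed rises.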

4 Discussion

XAI techniques are emerging as a pivotal tool in the arena of predictive modeling. Several studies have shown that their potential to both amplify the precision of predictions and preserve the clarity of results makes them valuable throughout the entire research trajectory. Figure 1 offers a schematic representation of how XAI methods can be incorporated within a typical research case in the AQ domain.


Fig. 1 A custom adaptation of the concept of XAI in AQ modelling to provide interpretability and to support the decision-making process, based on a similar modelling approach [37]

The main difference from traditional approaches is that, rather than producing predictions and subsequently interpreting them, it suggests undertaking these phases simultaneously, leveraging models that inherently clarify their predictions and enabling the derivation of meaningful insights with less human interpretation.

The versatility of XAI techniques across various disciplines, particularly within IoT and AQ research, illustrates the range of research questions and challenges these techniques can address. For instance, in IoT research, SHAP techniques have proven effective in feature reduction while maintaining prediction accuracy, offering insights into space occupancy, thermal comfort, and false-prediction identification [31–34]. In the field of AQ, the application of XAI methods has expanded as model sophistication has increased, warranting transparent and interpretable solutions. The use of SHAP, alongside other models, in evaluating air pollutants and their spatial and temporal patterns demonstrates this [35]. Comparatively, the use of XAI methods to identify redundancies, enhance underrepresented samples, and flag inaccurate predictions improved the spatial resolution of ozone measurements [36]. Furthermore, different XAI techniques, such as PFI, PDP, SHAP, and LIME, were utilized to model and interpret PM2.5 concentrations modeled by multi-layered Recurrent Neural Networks, thus providing robust and interpretable results [37]. Other studies leveraged XAI methods in estimating factor importance in AQ models and evaluating AQ indexes and their impacts on respiratory health [38, 39]. For example, a combination of rough set theory and XAI methods was employed to determine AQI thresholds in a larger area using PM2.5 and PM10 thresholds [40]. Using Z-numbers, the linguistic description of AQ was quantified for different threshold ranges, and inverse-decision rules validated the predictions, thereby enhancing the interpretability, reliability, and validation of the system's predictions. Each of these studies demonstrates the valuable application of XAI in providing nuanced insights, facilitating the reduction of feature complexity,


enhancing model transparency and interpretability, and ultimately guiding informed decision-making.

The importance of feature engineering and visual analytics in XAI is becoming increasingly recognized in the realm of AQ modeling and prediction [23]. The applicability of feature importance, though not universal across all models, is pivotal in discerning significant features, particularly in models based on deep learning architectures. However, these models often encounter challenges in confidently identifying salient features, a limitation that can compromise model interpretability. As such, integrating various methodologies, including fuzzy logic and feature importance, can substantially enhance the computational robustness and decision-making capability of these models. This calls for a profound understanding of these discrete methodologies, reinforcing their role in the pursuit of XAI.

Visual analytics further lends explainability to ML models, illustrating complex data patterns and facilitating user interaction through visual representations. Custom interactive visual analytics dashboards, with their ability to perform focused, on-demand operations and their use of features like heatmaps and line charts, enable users to navigate and manipulate complex data effectively. Notably, this architecture allows the creation of reactive workflows through the linking of different components, thus defining the underlying logic of a visual analytics dashboard. Specifically designed components such as the Annotated Line Chart and the SHAP Chart contribute to achieving explainability in AI models, preserving transparency in how the analysis outcome is derived and fostering trust in the models. The architecture's design also provides a robust foundation for future development, allowing for the inclusion of new visualization components that further enhance explainability, especially in complex AI models.

The application of feature engineering and visual analytics plays an important role in the evolution of XAI, particularly within AQ monitoring. Their capacity to boost transparency, interpretability, and user engagement makes them integral tools in managing and interpreting complex environmental data. As such, their continued development and integration will be crucial in pushing the boundaries of our understanding and usability of AI in environmental science. As illustrated by the example of a seven-day lag variable for ozone, an understanding of the pollutant's behavior and the temporal lag effect is necessary to fully appreciate the complex relationships in the data. This underscores the importance of expert knowledge in discerning relationships between predictive variables and forecasted outcomes.

Emerging XAI methodologies offer innovative avenues for enhancing the interpretability of Deep Learning models. For instance, DeepLIFT assigns contribution scores to input features based on their deviation from a "reference activation", backpropagating these differences through the network to discern which features are crucial for a particular prediction. This process aids in uncovering patterns that might otherwise be obscured, enhancing the transparency and interpretability of deep neural networks. In contrast, LRP distributes the network's output back to its input layer, attributing relevance scores to individual neurons and creating a heatmap of relevance across input features. This type of attribution underscores the importance of clear and trustworthy inputs

16

T. Tasioulis and K. Karatzas

during the creation of a model in order to produce explainable and understandable results. This visual representation aids in discerning which features contribute most significantly to an output, providing a refined understanding of the complex inner workings of deep learning models. These methodologies collectively contribute to the interpretability of complex models, bridging the gap between accuracy and explainability. Reflecting upon our research questions the following elucidations are derived. With respect to the question: “How to achieve an explainable result?” our findings provide a partial answer. Several models can produce explainable results depending on the task, the nature of data (images, quantitative, qualitative) and the extent that is needed. The findings showed that especially across the factor estimation, the current state can offer robust solutions that not only outperform conventional approaches such as feature importance but also increase the quality of results. However, there are instances of ad-hoc adaptations of these techniques that become imperative for each analysis. A more exhaustive exploration of methods is requisite within the realm of AQ and environmental studies. This is paramount to ascertain whether there are universally accepted methods for each specific case and additional research is needed. For the second question, “How does the explainable AI modelling fit into AQ and environmental data?” our findings offer a complete answer. The research brings to the fore numerous studies that either adapt AI models or customize visual analytics methods to confer explainability. Specifically, no instances of cases were identified where AQ modelling data posed a constrain factor in the adaptation of those methods. In contrast, findings showed that conventional methods can provide misleading results (i.e., factor estimation) by carrying inherent biases regardless of the type of the data. In response to “What other functionalities can explainable methods offer to air quality analysis?” our findings provided an answer yet not fully complete. The evidence showed instances where underrepresented samples were identified; other research illustrated the identification of redundant features, emphasizing the versatility of explainable methods and opening a window to some extra unique potential appliances. However, the need for more extensive research of additional capabilities is essential to conclude on definitive answers about all the possible uses of XAI methods and their correspondence to specific tasks.
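To make the preceding discussion concrete, the following minimal sketch shows how SHAP values can be computed for a tree-based AQ regressor; the feature names, the synthetic data and the toy target are illustrative assumptions, not taken from the reviewed studies.

```python
# Minimal sketch: SHAP feature contributions for an illustrative AQ model.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
X = pd.DataFrame({
    "temperature": rng.normal(15, 8, 500),
    "wind_speed": rng.gamma(2, 2, 500),
    "relative_humidity": rng.uniform(20, 95, 500),
    "pm10_lag_24h": rng.gamma(3, 8, 500),   # lagged pollutant feature
})
# Toy target: tomorrow's PM10 depends on its lag and is diluted by wind.
y = 0.6 * X["pm10_lag_24h"] - 2.0 * X["wind_speed"] + rng.normal(0, 3, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: mean absolute contribution per feature.
print(pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
      .sort_values(ascending=False))
```

A bar chart of the printed mean absolute SHAP values is essentially the kind of "SHAP Chart" component discussed above.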

5 Conclusions

In essence, the widespread adoption of XAI methods shows transformative potential within the IoT and AQ domains. In IoT research, XAI assists in identifying and rectifying false predictions and mitigating feature complexity, critical challenges in the field. Simultaneously, it can shine in AQ studies, offering a rigorous approach to handling data complexities. Advanced techniques such as feature selection and visual analytics are employed, bolstering the understanding of data behavior, including intricate aspects such as the temporal lag effect of pollutants, without requiring prerequisite knowledge of the effect itself. Consequently, biases and redundancies within datasets are reduced, facilitating precision and interpretability.

Moreover, XAI serves a dual purpose, enhancing accuracy while bridging the gap between complex models and their interpretability. Tools like SHAP, LIME, DeepLIFT, PDP plots, Layer-Wise Relevance Propagation and custom adaptations of those can provide detailed insight into the inner workings of deep learning and ML models, supporting the interpretation of these complex entities and promoting a balance between accuracy and explainability. The potential of XAI is therefore profound, offering a comprehensive and interpretable framework for analysis across diverse research themes. As we progress, it becomes pivotal for researchers to continue harnessing these methods. Embracing XAI ushers in an era of enhanced clarity, actionable insights, and transformative strides within IoT and AQ research.

On top of this, it is interesting that the choice of XAI method is closely related to the foundational modelling method. This paradigm suggests a shift in the traditional dynamics of research, as the nature of the (modelling) problem now outlines the methodological components and protocols. Thus, the XAI approach becomes a reflection of the problem-solving structure already in play. Consequently, a multi-stage analytical framework is illustrated: the nature of the data prescribes the modeling methodology, which in turn determines the appropriate XAI technique to illustrate and interpret the results. Hence, based on current research, XAI methods in the AQ domain remain underrepresented, without any significant limitations identified, while a comprehensive framework for addressing AQ challenges utilizing XAI technologies has yet to be established. These findings point to the ability to leverage knowledge of methodological approaches into deeper understanding and, eventually, more efficient problem-solving applications.

References

1. Delle Monache, L., Alessandrini, S., Djalalova, I., Wilczak, J., Knievel, J.C., Kumar, R.: Improving air quality predictions over the United States with an analog ensemble. Weather Forecast. 35(5), 2145–2162 (2020). https://doi.org/10.1175/WAF-D-19-0148.1
2. Castelli, M., Clemente, F.M., Popovič, A., Silva, S., Vanneschi, L.: A machine learning approach to predict air quality in California. Complexity 2020 (2020). https://doi.org/10.1155/2020/8049504
3. Riga, M., Kontopoulos, E., Karatzas, K., Vrochidis, S., Kompatsiaris, I.: An ontology-based decision support framework for personalized quality of life recommendations. In: Decision Support Systems VIII: Sustainable Data-Driven and Evidence-Based Decision Support: 4th International Conference, ICDSST 2018, Heraklion, Greece, May 22–25, 2018, Proceedings, pp. 38–51. Springer (2018)
4. Represa, N.S., Fernández-Sarría, A., Porta, A., Palomar-Vázquez, J.: Data mining paradigm in the study of air quality. Environ. Processes 7(1) (2020). https://doi.org/10.1007/s40710-019-00407-5
5. Sokhi, R.S., et al.: Advances in air quality research – current and emerging challenges. Atmos. Chem. Phys. 22(7), 4615–4703 (2022)
6. Seinfeld, J.H., Pandis, S.N.: Atmospheric Chemistry and Physics: From Air Pollution to Climate Change. Wiley (2016)
7. Westerlund, J., Urbain, J.P., Bonilla, J.: Application of air quality combination forecasting to Bogota. Atmos. Environ. 89, 22–28 (2014). https://doi.org/10.1016/j.atmosenv.2014.02.015
8. Zhang, Y., Bocquet, M., Mallet, V., Seigneur, C., Baklanov, A.: Real-time air quality forecasting, part I: History, techniques, and current status. Atmos. Environ. 60, 632–655 (2012)
9. Ayturan, Y.A., Ayturan, Z.C., Altun, H.O.: Air pollution modelling with deep learning: a review. Int. J. Environ. Pollut. Environ. Model. 1(3), 58–62 (2018)
10. Liao, Q., Zhu, M., Wu, L., Pan, X., Tang, X., Wang, Z.: Deep learning for air quality forecasts: a review. Curr. Pollut. Rep. 6, 399–409 (2020)
11. Zaini, N., Ean, L.W., Ahmed, A.N., Malek, M.A.: A systematic literature review of deep learning neural network for time series air quality forecasting. Environ. Sci. Pollut. Res., 1–33 (2022)
12. Kavya, R., Christopher, J., Panda, S., Lazarus, Y.B.: Machine learning and XAI approaches for allergy diagnosis. Biomed. Signal Process. Control 69, 102681 (2021)
13. Fortino, V., et al.: Machine-learning-driven biomarker discovery for the discrimination between allergic and irritant contact dermatitis. Proc. Natl. Acad. Sci. 117(52), 33474–33485 (2020)
14. Doran, D., Schulz, S., Besold, T.R.: What does explainable AI really mean? A new conceptualization of perspectives. arXiv preprint arXiv:1710.00794 (2017)
15. Xu, F., Uszkoreit, H., Du, Y., Fan, W., Zhao, D., Zhu, J.: Explainable AI: a brief survey on history, research areas, approaches and challenges. In: Natural Language Processing and Chinese Computing: 8th CCF International Conference, NLPCC 2019, Dunhuang, China, October 9–14, 2019, Proceedings, Part II, pp. 563–574. Springer (2019)
16. Freksa, C.: Fuzzy Systems in AI: An Overview. Springer (1994)
17. Ai, A.I.: Fuzzy logic and artificial intelligence: a special issue on emerging techniques and their applications. IEEE Trans. Fuzzy Syst. 28(12), 3063 (2020)
18. Yen, J.: Fuzzy logic – a modern perspective. IEEE Trans. Knowl. Data Eng. 11(1), 153–165 (1999)
19. Kyriakidis, I., Kukkonen, J., Karatzas, K., Papadourakis, G., Ware, A.: New statistical indices for evaluating model forecasting performance, Skiathos Island, Greece (2015)
20. Ai, C., Jia, L., Hong, M., Zhang, C.: Short-term road speed forecasting based on hybrid RBF neural network with the aid of fuzzy system-based techniques in urban traffic flow. IEEE Access 8, 69461–69470 (2020)
21. Wang, J., Li, H., Lu, H.: Application of a novel early warning system based on fuzzy time series in urban air quality forecasting in China. Appl. Soft Comput. 71, 783–799 (2018)
22. Kolokotsa, D., Tsiavos, D., Stavrakakis, G.S., Kalaitzakis, K., Antonidakis, E.: Advanced fuzzy logic controllers design and evaluation for buildings' occupants thermal–visual comfort and indoor air quality satisfaction. Energ. Buildings 33(6), 531–543 (2001)
23. Kalamaras, I., et al.: Visual analytics for exploring air quality data in an AI-enhanced IoT environment. In: Proceedings of the 11th International Conference on Management of Digital EcoSystems, pp. 103–110 (2019)
24. Ribeiro, M.T., Singh, S., Guestrin, C.: 'Why should I trust you?' Explaining the predictions of any classifier. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144. Association for Computing Machinery (2016). https://doi.org/10.1145/2939672.2939778
25. Lundberg, S.M., et al.: From local explanations to global understanding with explainable AI for trees. Nat. Mach. Intell. 2(1), 56–67 (2020)
26. Lundberg, S.M., et al.: Explainable machine-learning predictions for the prevention of hypoxaemia during surgery. Nat. Biomed. Eng. 2(10), 749–760 (2018)
27. Guo, F., et al.: Visual exploration of air quality data with a time-correlation-partitioning tree based on information theory. ACM Trans. Interact. Intell. Syst. 9(1) (2019). https://doi.org/10.1145/3182187
28. Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., Samek, W.: On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS One 10(7), e0130140 (2015)
29. Kohlbrenner, M., Bauer, A., Nakajima, S., Binder, A., Samek, W., Lapuschkin, S.: Towards best practice in explaining neural network decisions with LRP. In: 2020 International Joint Conference on Neural Networks (IJCNN), pp. 1–7. IEEE (2020)
30. Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. In: International Conference on Machine Learning, pp. 3145–3153. PMLR (2017)
31. Sirmacek, B., Riveiro, M.: Occupancy prediction using low-cost and low-resolution heat sensors for smart offices. Sensors 20(19), 5497 (2020)
32. Diallo, A.B., Nakagawa, H., Tsuchiya, T.: An explainable deep learning approach for adaptation space reduction. In: 2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C), pp. 230–231. IEEE (2020)
33. Mansouri, T., Vadera, S.: A deep explainable model for fault prediction using IoT sensors. IEEE Access 10, 66933–66942 (2022)
34. Kabir, M.H., Hasan, K.F., Hasan, M.K., Ansari, K.: Explainable artificial intelligence for smart city application: a secure and trusted platform. In: Explainable Artificial Intelligence for Cyber Security: Next Generation Artificial Intelligence, pp. 241–263. Springer (2022)
35. Gu, J., Yang, B., Brauer, M., Zhang, K.M.: Enhancing the evaluation and interpretability of data-driven air quality models. Atmos. Environ. 246, 118125 (2021)
36. Stadtler, S., Betancourt, C., Roscher, R.: Explainable machine learning reveals capabilities, redundancy, and limitations of a geospatial air quality benchmark dataset. Mach. Learn. Knowl. Extr. 4(1), 150–171 (2022). https://doi.org/10.3390/make4010008
37. Rahardja, U., Aini, Q., Sunarya, P.A., Manongga, D., Julianingsih, D.: The use of tensorflow in analyzing air quality artificial intelligence predictions PM2.5. Aptisi Transactions on Technopreneurship (ATT) 4(3), 313–324 (2022)
38. Palaniyappan Velumani, R., Xia, M., Han, J., Wang, C., Lau, A.K., Qu, H.: AQX: Explaining air quality forecast for verifying domain knowledge using feature importance visualization. In: 27th International Conference on Intelligent User Interfaces, pp. 720–733 (2022)
39. Ji, Y., et al.: Regression analysis of air pollution and pediatric respiratory diseases based on interpretable machine learning. Front. Earth Sci. (Lausanne) 11, 263 (2023)
40. Dutta, D., Pal, S.K.: Z-number-based AQI in rough set theoretic framework for interpretation of air quality for different thresholds of PM2.5 and PM10. Environ. Monit. Assess. 194(9), 653 (2022)
41. García, M.V., Aznarte, J.L.: Shapley additive explanations for NO2 forecasting. Ecol. Inform. 56, 101039 (2020)

Commonalities and Differences in ML-Pipelines for Air Quality Systems

Cezary Orlowski, Grit Behrens, and Kostas Karatzas

Abstract This paper compares three ML-pipelines in Air Quality (AQ) systems, namely a fog layer management model for IoT systems, a low-cost AQ sensor system with sensor calibration and data fusion competences, and ML-method research based on the low-cost OpenSensorMap. The three ML-pipelines are described, commonalities and differences are worked out, and the advantages of each approach are carried over into the outline of a combined ML-pipeline, which could be realised in a scientific cooperation of the three groups.

Keywords ML-pipelines · Fog layer management model · IoT-system · Low-cost AQ sensor

1 Introduction

AQ systems may be defined as informatics systems that address the whole life cycle of air pollution monitoring and modelling. On this basis, such systems include capabilities for:

• the (usually automatic and continuous) monitoring of the concentrations of air pollutants in the ambient or indoor air (depending on the application),

C. Orlowski
WSB Merito University Gdansk, Gdańsk, Poland
e-mail: [email protected]

G. Behrens
HSBI/Bielefeld University of Applied Sciences, Bielefeld, Germany
e-mail: [email protected]

K. Karatzas
Environmental Informatics Research Group, School of Mechanical Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
V. Wohlgemuth et al. (eds.), Advances and New Trends in Environmental Informatics 2023, Progress in IS, https://doi.org/10.1007/978-3-031-46902-2_2


• processing of observations, together with the collection and processing of relevant data and information (meteorological conditions, emission data, human activity data, etc.),
• modelling of AQ levels, with the aim of first simulating and then being able to forecast these levels,
• information provision towards society (public, decision makers, policy makers, regulators, etc.) in aid of better management of air pollution as well as of human activities having an impact on it or affected by it.

These distinct characteristics emerge not only from the abundant literature in the field but also from best practices reflected in various relevant projects and activities worldwide [1, 2]. Moreover, these characteristics can be understood as the backbone of an information technology-oriented pipeline that runs through the whole process of AQ monitoring, modelling and management. In the current paper we address this pipeline from the Machine Learning (ML) point of view, considering the way that it is approached and materialised by three different, independent research groups from three different European countries. The aim is to understand, compare and discuss the methods used, to identify commonalities and differences, and to relate them to different goals and outcomes. This will allow us to better map the requirements of the AQ systems field in terms of ML methods and tools and to also suggest specific approaches in accordance with application-related requirements.

The rest of this paper is structured as follows: Sect. 2 presents the ML methods and relevant AQ system procedures used by each research group, Sect. 3 compares and discusses these methods, and Sect. 4 draws the conclusions toward a combined use of the presented approaches.

2 Methods

The methods employed by the three research groups WSB (Merito University Gdansk), AUTh (Aristotle University of Thessaloniki) and HSBI (University of Applied Sciences Bielefeld) provide a complementary view concerning the way that the ML pipeline is addressed in the field of AQ systems.

2.1 The Fog Computing Approach in AQ Systems of the WSB Merito University in Gdansk

An important characteristic of an AQ system is where its data and its processing, modelling and application capabilities abide. For this reason, the pre-processing of AQ data is addressed here in order to build the fog layer management model of the AQ-related IoT system. To achieve this goal, this part of the paper has been divided into two sections. The first presents the conditions of AQ data processing methods; also discussed are the steps that become necessary for data to be processed at the Fog level (instead of at the Cloud level). The second (experimental) section presents research carried out at WSB University in Gdansk, the purpose of which is to analyse the pre-processing of data at the Edge level and the possibility of their processing at the Fog level.

2.1.1 Considerations for Pre-processing Data in the Fog Layer

The AQ data processing methods presented in [3, 4] focus mainly on pre-processing data in the Cloud layer. However, the development of the Internet of Things is changing the concept of data processing, moving away from processing based on client-server architectures towards three-layer Edge-Fog-Cloud architectures [5, 6]. To assess the usefulness of three-tier architectures, we first show how data is processed in client-server architectures. For two-layer architectures, it is assumed that the data processing process includes the following stages [7, 8]:

• Sensor calibration for high measurement accuracy.
• Regular download of data such as PM2.5 or PM10 concentrations based on sensor readings.1
• Data filtering to eliminate potential errors or extreme values.
• Data visualisation based on time graphs and heat maps to show AQ changes depending on time or location.
• Trend analysis using statistical analysis or ML techniques to identify patterns and predict future values.
• Sharing data with the community.

The analysis of these stages shows that it is possible to transfer the data filtration process, which is so important from the point of view of subsequent processing, to the Fog layer, assuming that the following methods of classifying data from low-cost AQ sensors will be used [9, 10] (a minimal sketch of such a classification is given after the footnote below):

• Defining threshold values for various AQ parameters, such as PM2.5 or PM10 concentrations.
• Data classification based on standards: measured values from low-cost sensors can be compared to national or international AQ standards.
• Using ML models on measurement data with AQ class labels (for example, from reference stations) to train a classification model.
• A hybrid approach combining several classification methods.

1 PM2.5 and PM10 stand for particulate matter of mean aerodynamic diameter of 2.5 and 10 μm respectively, and are part of the regulated air pollutants in ambient air. Their size corresponds to their penetration potential in the human breathing system (smaller particles penetrate deeper in the lungs).
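A minimal sketch of the first of these options, threshold-based classification on a Fog node, is given below; the threshold values and the forwarding rule are illustrative assumptions, not regulatory limits or the WSB implementation.

```python
# Illustrative Fog-node classification: only critical results are forwarded.
def classify_pm(pm25: float, pm10: float) -> str:
    """Assign a coarse AQ class from PM2.5/PM10 readings (example thresholds)."""
    if pm25 <= 10 and pm10 <= 20:
        return "good"
    if pm25 <= 25 and pm10 <= 50:
        return "moderate"
    return "poor"

reading = {"pm25": 31.2, "pm10": 58.0}
label = classify_pm(reading["pm25"], reading["pm10"])
if label == "poor":
    # Send only classification results or exceedance events to the Cloud,
    # reducing the transferred data volume as argued below.
    print("exceedance event ->", reading, label)
```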


Considering the given stages, it is possible to transfer the data filtration process to the Fog layer, obtaining the following benefits [11, 12]:

• Classification of AQ data in the Fog layer will allow for immediate decision making based on classification results.
• Uploading all measurement data from AQ sensors to the Cloud can generate a significant amount of data, which can lead to network capacity issues. Using the Fog layer instead may alleviate such problems.
• Fog data classification allows only critical information to be sent, such as classification results or events that exceed AQ standards, which can significantly reduce the amount of data transferred.
• Fog processing requires less data to be transferred and can lead to lower power consumption compared to sending all data to the Cloud. This is important, especially for IoT systems with low-energy protocols.

Considering these benefits, careful thought should be given to whether data pre-processing should take place at the level of the Fog layer. There is also the question of how to develop the Fog layer so that the processing is not burdened with significant loads.

2.1.2 Preliminary Studies of AQ Data Processing to Assess the Possibility of Their Use in the Fog Layer

The requirements of the Fog layer may be used as guiding steps for the requirements related to data pre-processing. On this basis, it is important to identify issues related to data types, the types of classifiers used, the computing power of the devices used, as well as the delay resulting from the classifier's operating time. Therefore, this part of the work presents research conducted at WSB University in Gdansk on the processing of data from hybrid networks involving low-cost AQ measurement sensors, in order to determine, using basic classification processes, to what extent these data can be useful for processing in the Fog layer. The following stages were adopted for the implementation of these studies:

• Selection of Edge nodes that can provide the required measurement data.
• Carrying out processing on a device simulating the operation of the Fog node.
• Use of AQ data classifiers such as decision trees, logistic regression, or neural networks.
• Assessment of the possibility of optimising the use of classifiers from the point of view of the computing power of the Fog node and the classifiers' working time.
• Updating classification models to include new training data and improve classification accuracy.

In the first stage, 26 measuring sensors were selected, following Pereira dos Santos et al. [13]. Figure 1 presents the way that the AQ data were obtained with the aid of the Node-RED data and API interconnection tool, using the API available for accessing the Airly database (i.e., the database with the AQ measurements).


Fig. 1 The 26 AQ measuring sensors in the Gdansk area

The data is then stored in another database and converted to an Excel spreadsheet, which is the basis for the subsequent processing. Data from the measurement stations shown in Fig. 1 are downloaded via the API into the Node-RED system and converted to an Excel spreadsheet. The detailed data pipeline (measuring stations, API, Node-RED flow, Postgres database, Excel spreadsheet) is shown in Fig. 2.

Fig. 2 Data pipeline from measuring stations in Node-RED flow

In the second stage, pre-processing was carried out on a notebook with 4 GB RAM and a quad-core processor. Conducting the research on a notebook resulted from the need to assess the initial computing power necessary for processing. This initial assessment of computing power allowed us to replace the notebook computers with Raspberry Pi microcomputers in the next stage of our work and move the work to the Fog layer. The choice of computing power also resulted from the analysis of the work of the classifiers. The following types of classifiers were used: decision trees, support vector machines, k-means clustering, random forest, artificial neural networks and Bayesian classifiers. Those classifiers were selected that the research team had used many times before [13]. Figure 3 shows the accuracy of the classifiers for different numbers of measurement data in five measurement sessions carried out over two days, using two types of flows in two development environments: Node-RED and Python.

Fig. 3 Accuracy of classifier operation

The aim of this work was, on the one hand, to assess the accuracy of the classifiers and, on the other hand, to compare flows in two different development environments. This assessment indicates a significant usefulness of flows implemented in the Node-RED environment.
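The following hedged sketch illustrates the kind of accuracy-versus-time comparison behind Fig. 3, using scikit-learn counterparts of the listed classifiers on synthetic data; it is not the code used in the WSB experiments, and k-means, being unsupervised, is omitted here.

```python
# Compare accuracy and wall-clock training time of several classifiers,
# as relevant for Fog-node deployment (e.g., on a Raspberry Pi).
import time
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))                 # e.g., PM2.5, PM10, T, RH, wind
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy exceedance label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
classifiers = {
    "decision tree": DecisionTreeClassifier(),
    "SVM": SVC(),
    "random forest": RandomForestClassifier(),
    "naive Bayes": GaussianNB(),
    "ANN (MLP)": MLPClassifier(max_iter=500),
}
for name, clf in classifiers.items():
    t0 = time.perf_counter()
    clf.fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)
    # Operating time translates directly into Fog-layer processing delay.
    print(f"{name}: accuracy={acc:.3f}, time={time.perf_counter() - t0:.2f}s")
```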

2.2 The Aristotle University Thessaloniki ML Pipeline

The use of Machine Learning (ML) methods and tools in the AQ domain is a long-standing scientific practice, dealing with all the problems of data analysis and modelling related to the operation of an Air Quality Monitoring and Modelling System (AQMMS). For this reason, it is important to first identify the basic elements of such a system, and to map them to ML-powered methods and processes. Considering the common structure of an AQMMS [1, 2, 14], the AUTh team identifies its basic characteristics as visualised in Fig. 4 and clarified below:

1. An AQ monitoring infrastructure, which may include reference instruments and/or low-cost devices and aims at estimating the concentration levels of air pollutants on an hourly basis, with an adequate spatial representation for the area of interest. The use of IoT technologies is common in this aspect.


Fig. 4 Basic elements of an AQMMS on the basis of the AUTh approach

2. AQ simulation capabilities, which address the question of modelling, with the aim of forecasting the AQ levels of interest.
3. An AQ information dissemination module, which is responsible for presenting information on the current and forecasted levels of air pollution to interested parties.
4. IT infrastructure and supporting functionalities, responsible for materialising data flows between each of the components, as well as between other data sources that are used to support the operation of said components. Examples include meteorological and city activity (commonly traffic) data, which are used as additional information sources to support the task of estimating the AQ levels. This may be done with the aid of proper data fusion methods, where ML also plays an important role. The IT infrastructure is also responsible for materialising data pre-processing, analytics and modelling tasks, safeguarding data storage and communication when necessary.

On the other hand, the typical ML pipeline includes (at least) the following steps (a minimal sketch is given at the end of this passage):

• Data pre-processing (data cleaning, missing value and outlier handling, data harmonisation and transformation).
• Feature engineering (correlation analysis, feature extraction and generation, feature prioritisation).
• Data analytics and modelling.

The aforementioned AQMMS components and the relevant ML pipeline can be depicted with the aid of the KASTOM project example [15, 16]. This was a project led by the Aristotle University of Thessaloniki in Greece (from 2019 to 2022), which created an advanced system for monitoring and modelling of AQ. During the project, low-cost AQ sensor systems (LCAQSS) known as KASTOM nodes were deployed in the Greater Thessaloniki Area (GTA, Fig. 5). These nodes provided hourly data concerning the concentration levels of CO, NO2, O3, PM10 and PM2.5, as well as temperature, relative humidity and atmospheric pressure. Data transmission was achieved through LoRaWAN, and The Things Network's global collaborative IoT ecosystem facilitated the data transfer via installed LoRa gateways.
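A minimal sketch of the three pipeline steps on an hourly low-cost node series follows; the file name, column names and the specific imputation and feature choices are illustrative assumptions, not the KASTOM implementation.

```python
# Minimal three-step ML pipeline sketch for an hourly AQ time series.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical hourly export from one node, indexed by timestamp.
df = pd.read_csv("kastom_node.csv", parse_dates=["time"], index_col="time")

# 1. Pre-processing: fill short gaps, drop gross outliers.
df = df.interpolate(limit=3)
df = df[(df["pm25"] >= 0) & (df["pm25"] < 1000)]

# 2. Feature engineering: lags, rolling statistics, an engineered fraction.
df["pm25_lag1"] = df["pm25"].shift(1)
df["pm25_roll24"] = df["pm25"].rolling(24).mean()
df["pm_ratio"] = df["pm25"] / df["pm10"]    # physically motivated fraction
df = df.dropna()

# 3. Data analytics and modelling: predict the next hour's concentration.
features = ["pm25_lag1", "pm25_roll24", "pm_ratio", "temperature", "rh"]
target = df["pm25"].shift(-1).dropna()
model = RandomForestRegressor().fit(df[features].iloc[:-1], target)
```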


Fig. 5 Map depicting the KASTOM low-cost AQ sensor nodes installed in the GTA. A picture of the node is embedded in the lower left side

Before their deployment to 33 locations in the area, the KASTOM nodes underwent a three-month collocation with reference instruments as part of the project. It was during this period that the first ML procedure was developed, for a device-level computational calibration, which aimed to use the low-cost sensor readings as inputs and produce measurements with improved accuracy, using reference data as the target. It should be underlined that this procedure was not aiming only at the improvement of the statistical indices of the nodes' performance but was tailored to address and improve the uncertainty of the low-cost AQ measurements, therefore being in line with the recently published standard CEN/TS 17660-1:2021 [17].

The second ML procedure was developed to fuse all available information, improving the representation of AQ information. For this reason, a Universal Kriging-based spatial interpolation method was developed that takes into account (a) low-cost sensor network data, (b) gridded AQ modelling data, (c) traffic data, (d) land use and local climate zone data, and (e) meteorological data [18, 19]. The results allow AQ levels to be reflected at a street and neighbourhood level without any initial information at such a fine spatial scale being available at the beginning of the process (Fig. 6), and make this available for nowcasting (current time) or forecasting (future time) time periods.

Fig. 6 An example of the fine spatial resolution of AQ level representation achieved via the ML-powered data fusion approach for the GTA. NO2 levels (in μg/m3) are presented for a typical summer noon (greater area left, zoom area right)

Moving one step ahead of what has been operationally achieved in the KASTOM project, a third ML procedure was created to generalise the method initially developed for individual LCAQSS calibration. The new procedure allows for the calibration of a LCAQSS network that has already been deployed, where only a few of the network nodes are collocated with reference instruments, while the whole network suffers from sensor drifts (a common problem for these sensor technologies). In this case, a single calibration function is built with the aid of all available data vectors coming from collocated devices, which is then seamlessly applied to all the stand-alone nodes of the network. Therefore, relocation of all nodes is not necessary anymore, while the learning process capitalises on the availability of multiple data vectors to learn from. Nevertheless, one should pay attention to the capabilities and limitations of individual ML algorithms and of spatiotemporal validation procedures. For this reason, the use of ensemble models, as well as the employment of stacking in combination with optimization methodologies (like a genetic algorithm) concerning feature and model selection of the stacking ensemble, was the preferred way to deal with the challenges of spatiotemporal interpolation and forecasting [20]. Apart from the aforementioned ML-oriented procedures, Table 1 summarises relevant methods that the AUTh team employs in various steps of the overall pipeline for data analytics and modelling in the AQ domain. Overall, the main differences of the AUTh approach in comparison to the "standard" AQMMS approach are the following:


Table 1 Comparison of methods and tools in the ML-Pipeline for the AQ-domain. Black color indicates AUTh, red italics color indicates HSBI and green underline color indicates WSB

AQ monitoring – Data pre-processing: correlation analysis (Pearson, Spearman); mutual information; basic descriptive statistics calculation; cyclic value transformation; missing value imputation (polynomial, ANNs, k-NN); outlier identification (3-std rule, additional criteria); raw mV to μg/m3 conversion; diurnal profiles; decision trees; support vector machines; Bayesian classifiers; artificial neural networks

AQ monitoring – Feature engineering: add rolling statistics; introduce lagged parameters; introduce engineered features reflecting the physical characteristics of the system under investigation (e.g., fractions, products and differences); rolling window feature vector generation

AQ monitoring – Data modelling: Self-Organizing Maps (SOM) for identifying interdependencies between variables; kNN and k-means for recognizing possible data clusters; t-SNE

AQ modeling – Data pre-processing: data merging; downscaling; disaggregation; normalization (zero mean, std = 1); standardization (between 0 and 1)

AQ modeling – Feature engineering: add rolling statistics; introduce lagged parameters; use of feature selection methods (PCA, Random Forest feature importance, genetic algorithms, particle swarm optimization, correlation-based feature selection, etc.); window generation for test and training data sets

AQ modeling – Data modelling: linear regression and regularized versions (LASSO, RIDGE, ELASTIC NET); multivariate regression; decision tree algorithms and ensembles (xgboost, lgbm, random forest); KNN, LSTM, CNN; graph neural networks; online/incremental learning; stacking (GAHS); genetic algorithms for model selection; long short-term memory neural networks; convolutional neural networks

IT characteristics – Data pre-processing: APIs, MongoDB, Docker, AWS, Python, NetCDF, pandas, seaborn, scipy, HTTP requests; Node-RED; Kubernetes; Apache Kafka

IT characteristics – Feature engineering: WEKA, Python weka wrapper, pandas, sklearn, tensorflow

IT characteristics – Data modelling: Matlab, WEKA, Python weka wrapper, sklearn, tensorflow, keras, pytorch, pytorch geometric, xgboost, LightGBM


• Moving beyond the current state-of-the-art in IoT-oriented AQ monitoring, by improving, via ML, individual AQ node performance as well as the performance of the whole AQ monitoring infrastructure (a sketch of the calibration idea follows below).
• Use of existing AQMMS components (both AQ monitoring and simulation infrastructure) in a data fusion approach, improving the spatial and temporal capabilities of AQ estimations at urban scales and under operational conditions.
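A hedged sketch of the device-level computational calibration described above is given below: a regressor maps raw readings and the meteorological variables driving sensor drift to collocated reference values, and the fitted function is then applied to deployed nodes. File and column names are hypothetical, and the actual KASTOM procedure is more elaborate (uncertainty-oriented, per CEN/TS 17660-1:2021).

```python
# Sketch of low-cost sensor calibration against collocated reference data.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

colloc = pd.read_csv("collocation_period.csv")   # hypothetical collocation data
X = colloc[["pm25_raw", "temperature", "rel_humidity", "pressure"]]
y = colloc["pm25_reference"]                     # reference instrument target

cal = GradientBoostingRegressor()
# Time-ordered CV respects the temporal structure of the collocation period.
scores = cross_val_score(cal, X, y, cv=TimeSeriesSplit(n_splits=5), scoring="r2")
print("calibration R2 per fold:", scores.round(2))

cal.fit(X, y)
# The fitted function is then applied to the deployed (non-collocated) nodes.
deployed = pd.read_csv("deployed_node.csv")      # hypothetical deployed data
deployed["pm25_calibrated"] = cal.predict(
    deployed[["pm25_raw", "temperature", "rel_humidity", "pressure"]])
```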

2.3 University of Applied Sciences Bielefeld

Methods in the ML pipeline of the AQ domain are a research focus of the Applied Computer Science group in Minden. The overall aim is the most accurate possible forecast of the numerical values of the PM10 and PM2.5 concentration levels for the next day. We address the problem of AQ forecasting based on low-quality data from the open data portal https://maps.sensor.community/de/, mixed with high-quality data from the German Federal Environment Agency (UBA, https://umweltbundesamt.de/en/data/air/air-data/) and consolidated with weather data received with the Meteosat Python library. The three pipeline steps of data pre-processing, feature engineering and data modelling are investigated:

2.3.1 Data Pre-processing

In the Bielefeld area, within a circle of 25 km around the city centre, data from 71 low-quality sensors are collected in the sensor community portal, while 2 UBA reference stations are available, as shown in Fig. 7. Data from January 2020 until April 2023 are taken into consideration. Some of the low-quality sensors began sending data later than January 2020, have downtime periods of more than 50 days without sending valid data, or have sent invalid data for periods longer than 50 days. In this way, 31 low-quality sensors were filtered out. The hourly mean of the measured PM10 and PM2.5 data was calculated and consolidated with weather data sets (precipitation, temperature, wind direction, wind speed and relative humidity). Data gaps of up to 50 days were filled with the measurements of the nearest-distance neighbour, with the offsets of the PM10 and PM2.5 data shifted to match the receiving sensor. Outlier detection was performed for every 2-month period applying a 5σ rule (5 times the standard deviation), as shown in Fig. 8. Outliers were interpolated with the mean of the previous and next neighbours in the time series.
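A minimal pandas sketch of this outlier treatment, assuming an hourly series with a DatetimeIndex, could look as follows; the exact windowing and interpolation details of the HSBI implementation may differ.

```python
# 5-sigma outlier removal per 2-month window, then neighbour interpolation.
import pandas as pd

def remove_outliers_5sigma(pm: pd.Series) -> pd.Series:
    """pm: hourly PM10 or PM2.5 series indexed by timestamps."""
    pm = pm.copy()
    for _, window in pm.groupby(pd.Grouper(freq="2MS")):  # 2-month windows
        mu, sigma = window.mean(), window.std()
        mask = (window - mu).abs() > 5 * sigma
        pm.loc[window[mask].index] = float("nan")
    # Linear interpolation of an isolated gap equals the mean of its
    # previous and next neighbours in the time series.
    return pm.interpolate(method="linear", limit_direction="both")
```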

Fig. 7 Positions of all 71 available low-cost sensors (red and green) in a circle of 25 km around Bielefeld and the 40 usable sensors (green) remaining after considering downtime periods and outlier detection

2.3.2 Feature Engineering

Features were converted and/or normalised for better understanding and equal inclusion by the ML methods. The timestamp of the time series vectors was converted to the sine values of the three single numbers of day, month and year. Additionally, based on the holiday and workday calendar from the Python library, all vectors are marked with a Boolean data field for work-free days. Wind direction in degree values is converted into radians. The autocorrelation (ACF) and partial autocorrelation (PACF) of the 2020–2023 time series of every sensor were calculated (Fig. 9).
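These conversions might be sketched as follows; the use of the 'holidays' package (and its German subdivision argument) and the exact sine encodings are assumptions about the implementation, not the HSBI code.

```python
# Sine encoding of calendar components, free-day flag, wind direction in radians.
import numpy as np
import pandas as pd
import holidays  # assumed holiday/workday calendar library

def engineer_time_features(df: pd.DataFrame) -> pd.DataFrame:
    idx = df.index  # hourly DatetimeIndex assumed
    df["day_sin"] = np.sin(2 * np.pi * idx.day / 31)
    df["month_sin"] = np.sin(2 * np.pi * idx.month / 12)
    df["year_sin"] = np.sin(2 * np.pi * idx.dayofyear / 366)
    de_holidays = holidays.Germany(subdiv="NW")  # North Rhine-Westphalia
    # Boolean flag for work-free days (public holidays and weekends).
    df["free_day"] = [(d in de_holidays) or (d.weekday() >= 5)
                      for d in idx.date]
    df["wind_dir_rad"] = np.deg2rad(df["wind_direction_deg"])
    return df
```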

2.3.3 Data Modelling

Investigating related work on AQ forecast models, we found that among the most accurate ML models for short-term hourly AQ forecasting is the BiLSTM algorithm developed by Zhang et al. [21]. Slightly better accuracy, also for 24-hourly forecasts, can be achieved by the hybrid models CNN-LSTM and/or CNN-BiLSTM, as shown by Bekkar et al. [18]. The investigated models will be CNN-LSTM and CNN-BiLSTM, combined with feature-engineered training and test data sets based on window generation techniques with flexible adjustment of the window length, using the TimeseriesGenerator from the Python library keras.preprocessing.sequence. The window length can be optimised by ML experiments. Results are still pending as this is work in progress, but the analysis so far suggests a promising outcome.

Fig. 8 Example of outlier detection for the PM2.5 (upper plot) and PM10 (lower plot) concentration levels; outliers marked with blue dots

Fig. 9 ACF and PACF calculated for PM10 for all 40 sensors around Bielefeld (January 2020–April 2023)
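A hedged sketch of this windowing and modelling setup is shown below; the layer sizes, the window length and the placeholder data are illustrative, not the tuned configuration of the HSBI experiments.

```python
# Windowed CNN-BiLSTM sketch using TimeseriesGenerator.
import numpy as np
from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Bidirectional, LSTM, Dense

n_features = 8                          # PM + weather + calendar features
series = np.random.rand(5000, n_features).astype("float32")  # placeholder data
target = series[:, 0]                   # e.g., PM10 as the forecast target

window = 48                             # window length, tunable by experiments
gen = TimeseriesGenerator(series, target, length=window, batch_size=64)

model = Sequential([
    Conv1D(32, kernel_size=3, activation="relu",
           input_shape=(window, n_features)),
    Bidirectional(LSTM(32)),            # BiLSTM on the convolved windows
    Dense(1),                           # next-hour concentration
])
model.compile(optimizer="adam", loss="mse")
model.fit(gen, epochs=2)                # short run for illustration only
```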

3 Comparison and Discussion

The approaches used by the three research groups demonstrate the vitality and multidimensionality of the way that ML is involved in the AQ field. To better demonstrate these characteristics, Table 1 summarises the methods and tools used in the ML pipeline for the AQ domain. In doing so, the emphasis is put on the fields of AQ monitoring, AQ modelling and relevant IT, while for each field the dimensions presented include data processing, feature engineering and data modelling. In this way, the basic contents of the AQ field are matched with the ML methods and tools used in the well-established dimensions of the ML pipeline.

It is evident that there are commonalities as well as differences in the way that each research group addresses the AQ field via its own ML arsenal, reflecting its expertise and domain of emphasis. Common features include the use of different pre-processing processes to increase data certainty. Differences in approach include focusing only on pre-processing processes for the WSB group, individual and network calibration processes as well as fusion for nowcasting and forecasting for the Greek partner, and pre-processing for forecasting for the German partner. As a result, some pipelines have more steps, as in the case of the Greek and the German partner. Each partner thus sees the desirability of pre-processing but uses a different pipeline for its implementation.

The AUTh group, having long expertise in the AQ field, has chosen to address it in a more holistic manner, aiming at being able to respond to core questions concerning the way that air pollution measurements may be analysed and interpreted, and relevant modelling requirements may be developed. In this way they can address real-world operational requirements, making use of a hybrid set of reference and low-cost sensors, and incorporating various information sources in a fusion approach towards spatially improved and temporally enhanced AQ estimations. In their approach, innovative ML methods, including new stacking-ensemble modelling, spatial and temporal cross validation, uncertainty-oriented computational calibration of hybrid sensor networks and operationally viable data fusion methods, are among the main characteristics.

The WSB group, having long expertise in IoT and fog/cloud architectures, emphasises the latter, making use of the advantages of fog computing and bringing a new perspective to the way that environmental information is processed and modelled. The main emphasis here is the improvement of network independence and of the system response capabilities, in a decentralised way, which results in a robust and flexible architecture capable of serving various operational requirements in AQ management. In the WSB studies, attention is also drawn to the need to use various methods of data pre-processing before their subsequent use for forecasting. Therefore, processing methods in the Edge and the Fog layers are analysed, and these methods are then compared to indicate the importance of the Fog layer in data pre-processing. These studies are also the beginning of joint projects carried out by the three centres, in which data from individual partners can be processed in the Fog layer.

The HSBI group, having long experience in dealing with interdisciplinary environmental and renewable energy challenges, has decided to develop a ML pipeline reflecting modern developments as reported in the literature, able to serve multiple application domains. In feature engineering, techniques like rolling-window feature vector generation and window generation for test and training data sets are used, while for the AQ forecast long short-term memory neural networks (in the BiLSTM variation) and convolutional neural networks are employed.

4 Conclusions

The current work presents a differentiated approach to data processing and AQ via ML. Since each of the partners sees the use of AQ systems differently, they use different methods of data pre-processing. These diverse approaches are an impulse to change the processing processes in individual research centres. Three different research groups with different levels of interest in the AQ field, employing ML methods in the data analytics and modelling pipeline, serve as the basis to address interdisciplinarity as well as complementarity towards real-world applications.

Pre-processing of data carried out in the IoT laboratory of WSB University in Gdansk confirmed the advisability of transferring pre-processing from the Cloud layer to the Fog layer. This was possible by using different types of classifiers and different libraries for processing. The highest classification accuracy was obtained by using decision trees and the k-means algorithm. The change in computing power had a significant impact on the operating time of the classifiers. This indicates that it is possible to develop, for different research groups, different (yet technologically complementary) implementations of the Edge measurement nodes for Fog-node-based data processing. It was pointed out that the key elements for the ability to transfer data processing from Edge to Fog and to use a classifier are its accuracy and operating time. These parameters translate directly into the delay as well as the certainty of the data, which is so important from the point of view of making potential decisions.

The work performed by the AUTh team demonstrated a holistic approach with strong applicability orientations, emphasising the challenges related to real-world low-cost sensor network dynamic calibration and to the fusion-powered nowcasting and forecasting of AQ. The new methods being developed contribute to the state of the art in the field and reflect the deep knowledge of the group concerning AQMMS. The work done by the HSBI group demonstrated the high capabilities of state-of-the-art ML modelling methods in combination with publicly available low-cost sensor data.

Individual-sensor and versatile sensor-network calibration as well as fusion-powered nowcasting and forecasting, as in the case of the partner from Greece, focusing on forecasting, as in the case of the partner from Germany, or focusing on ensuring data certainty in the Fog layer, as implemented by the partner from Poland, show that it is worth presenting these diverse approaches and looking for hybrid solutions. Therefore, this work serves as a starting point for the implementation of future research in which the approaches used will be exchanged and combined.

References

1. Karatzas, K., Dioudi, E., Moussiopoulos, N.: Identification of major components for integrated urban air quality management and information systems via user requirements prioritisation. Environ. Model. Softw. 18, 173–178 (2003)
2. Sokhi, R.S., Moussiopoulos, N., Baklanov, A., Bartzis, J., Coll, I., Finardi, S., Friedrich, R., Geels, C., Grönholm, T., Halenka, T., Ketzel, M., Maragkidou, A., Matthias, V., Moldanova, J., Ntziachristos, L., Schäfer, K., Suppan, P., Tsegas, G., Carmichael, G., Franco, V., Hanna, S., Jalkanen, J.-P., Velders, G.J.M., Kukkonen, J.: Advances in air quality research – current and emerging challenges. Atmos. Chem. Phys. 22, 4615–4703 (2022). https://doi.org/10.5194/acp-22-4615-2022
3. Mahajan, S., et al.: Translating citizen-generated air quality data into evidence for shaping policy. Humanit. Soc. Sci. Commun. 9(1), 1–18 (2022)
4. Van, N.H., et al.: A new model of air quality prediction using lightweight machine learning. Int. J. Environ. Sci. Technol. 20(3), 2983–2994 (2023)
5. Popovic, I., et al.: Building low-cost sensing infrastructure for air quality monitoring in urban areas based on fog computing. Sensors 22(3), 1026 (2022)
6. Serdaroglu, K.C., Baydere, Ş., Saovapakhiran, B.: Real time air quality monitoring with fog computing enabled IoT system: an experimental study. In: 2022 IEEE International Conference on Internet of Things and Intelligence Systems (IoTaIS), pp. 147–152. IEEE (2022)
7. Karatzas, K., Katsifarakis, N., Orlowski, C., Sarzynski, A.: Revisiting urban air quality forecasting: a regression approach. Vietnam J. Comput. Sci. 5, 177–184 (2018). https://doi.org/10.1007/s40595-018-0113-0
8. Sadri, A.A., et al.: Data reduction in fog computing and internet of things: a systematic literature survey. Internet of Things, 100629 (2022)
9. Mishra, A., Jalaludin, Z.M., Mahamuni, C.V.: Air quality analysis and smog detection in smart cities for safer transport using Machine Learning (ML) regression models. In: 2022 IEEE 11th International Conference on Communication Systems and Network Technologies (CSNT), pp. 200–206. IEEE (2022)
10. Solemani, A., Farhang, Y., Babazadeh Sangar, A.: An intelligent control method for urban traffic using fog processing in the IoT environment based on cloud data processing of big data. Comput. Knowl. Eng. (2023)
11. Pereira dos Santos, J.P., Wauters, T., De Turck, F.: Efficient management in fog computing. In: NOMS 2023, the IEEE/IFIP Network Operations and Management Symposium (2023)
12. Rani, A., Prakash, V., Darbari, M.: Fog computing paradigm with internet of things to solve challenges of cloud with IoT. In: Advancements in Interdisciplinary Research: First International Conference, AIR 2022, Prayagraj, India, May 6–7, 2022, Revised Selected Papers, pp. 72–84. Springer Nature Switzerland, Cham (2023)
13. Alicki, P., Janicki, W., Lubocki, R., Łalyko, S., Romanowski, M., Samsel, S.: Unpublished materials, Gdansk (2023)
14. Gulia, S., et al.: Urban air quality management – a review. Atmos. Pollut. Res. 6(2), 286–304 (2015). https://doi.org/10.5094/APR.2015.033
15. Bagkis, E., Kassandros, T., Vogiatzi, E., Karatzas, K.: Simultaneous genetic optimization concerning feature and model selection of a stacking ensemble for the spatiotemporal interpolation of air quality measurements. In: International Conference on Mathematical Models, Engineering and Environment, May 2023, Thessaloniki, Greece (https://conference-auth.gr/)
16. Melas, D., Papadogiannaki, S., Liora, N., Kontos, S., Parliari, D., Cheristanidis, S., Poupkou, A., Kassandros, T., Bagkis, E., Karatzas, K.: Development of a monitoring and forecasting air quality modelling system. Poster presentation, 11th International Conference of the Balkan Physical Union (BPU11 Congress), Belgrade, Serbia (2022). https://indico.bpu11.info/event/1/contributions/23/ (last accessed: 01 Aug. 2023)
17. Bagkis, E., Kassandros, T., Karteris, M., Karteris, A., Karatzas, K.: Analyzing and improving the performance of a particulate matter low cost air quality monitoring device. Atmosphere 12(2), 251 (2021). https://doi.org/10.3390/atmos12020251
18. Bekkar, A., Hssina, B., Douzi, S., et al.: Air-pollution prediction in smart city, deep learning approach. J. Big Data 8, 161 (2021). https://doi.org/10.1186/s40537-021-00548-1
19. Kassandros, T., Bagkis, E., Karatzas, K.: Data fusion for the improvement of the spatial resolution of air quality modelling. In: Proceedings of Abstracts, 13th International Conference on Air Quality: Science and Application, p. 67. Aristotle University of Thessaloniki, Greece and University of Hertfordshire, UK. https://doi.org/10.18745/PB.25560
20. Kassandros, T., Bagkis, E., Johansson, L., Kontos, Y., Katsifarakis, K.L., Karppinen, A., Karatzas, K.: Machine learning-assisted dispersion modelling based on genetic algorithm-driven ensembles: an application for road dust in Helsinki. Atmos. Environ. 307, 119818 (2023). https://doi.org/10.1016/j.atmosenv.2023.119818
21. Zhang, J., Peng, Y., Ren, B., Li, T.: PM2.5 concentration prediction based on CNN-BiLSTM and attention mechanism. Algorithms 14(7), 208 (2021). https://doi.org/10.3390/a14070208

Optimal Stacking Identification for the Machine Learning Assisted Improvement of Air Quality Dispersion Modeling in Operation

Evangelos Bagkis, Theodosios Kassandros, Lasse Johansson, Ari Karppinen, and Kostas Karatzas

Abstract Air quality modeling plays a crucial role in understanding and predicting the dispersion of pollutants in the atmosphere, aiding in the development of effective strategies for mitigating the adverse impacts of air pollution. Traditional air quality modeling commonly relies on deterministic models that simulate pollutant transport and dispersion based on physical and chemical principles, leading to analytical numerical simulations towards the identification of pollutant concentrations in ambient air. However, these models often face challenges in accurately capturing the complex and dynamic nature of pollutant behavior due to uncertainties in emission inventories, meteorological conditions, and local-scale variations in terrain and land use. ENFUSER is a local-scale air quality model that operates in the greater Helsinki area in Finland and successfully addresses most of the mentioned challenges. In previous research (Kassandros et al., Atmospheric Environment 307:119818, 2023) we formalized a machine learning-based methodology to assist the operational ENFUSER dispersion model in estimating the coarse particle concentrations. Here, we continue this line of research and evaluate the genetic algorithm hybrid stacking with a novel validation procedure coined spatiotemporal cross validation. The development of the validation procedure was deemed necessary to closely simulate the operational requirements of ENFUSER. Furthermore, we introduce a fitness function based on robust statistics (median and standard deviation) that forces the predictions to follow the distribution of the reference stations. Results obtained using the greater Helsinki area (including Vantaa and Espoo) as a testbed suggest that the combination of ENFUSER with the proposed framework can provide estimations with higher confidence and improves the correlation from 0.61 to 0.71, the coefficient of determination from 0.34 to 0.50, and reduces the RMSE by 2.2 μg/m3.

Keywords Spatiotemporal cross validation · Coarse particles · Dispersion · ENFUSER · Genetic algorithm hybrid stacking · Ensemble · Operational · Machine learning

E. Bagkis · T. Kassandros · K. Karatzas
Environmental Informatics Research Group, School of Mechanical Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece
e-mail: [email protected]; [email protected]; [email protected]

L. Johansson · A. Karppinen
Atmospheric Composition Research, Finnish Meteorological Institute, Helsinki, Finland
e-mail: [email protected]; [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
V. Wohlgemuth et al. (eds.), Advances and New Trends in Environmental Informatics 2023, Progress in IS, https://doi.org/10.1007/978-3-031-46902-2_3

1 Introduction Air quality (AQ) remains at the forefront of environmental concerns for urban agglomerations. In Europe, poor AQ contributes to more than 350.000 premature deaths (EEA report).1 Society’s most vulnerable (children, people with comorbidities, etc.) are more susceptible to air pollution. Moreover, healthcare costs to treat cardiovascular diseases and strokes are putting pressure on the European healthcare systems and economies. AQ also adds strain when observed along with the urban heat island effect [1] and the combination might even contribute non-linearly to adverse health effects. Therefore, high resolution and accurate AQ estimations can help identify sources of air pollution and hotspots where AQ degradation is also synergetic in adding strain with other coexisting factors. AQ modeling at a detailed spatial and temporal scale, poses a significant challenge due to its dependency on multiple heterogeneous data sources, the modeling area meteorology, high computational demand, local emissions and more. For instance, in Helsinki, the overall AQ is quite good when aggregated into yearly values. However, during the Spring period, high levels of coarse particle (PM10 – PM2.5 ) episodic pollution are often experienced by the citizens [2]. Such episodic events may last several weeks severely affecting the AQ in the area. To model such events, additional data sources such as data on sanding, dust binding, salt application and cleaning operations on road surfaces, are required. Those are available in Helsinki but are rather rare for other areas. ENFUSER [3], a local scale operational dispersion model that runs for the greater Helsinki area is envisioned and designed to run on open access data for other areas too therefore, a data-driven methodology that can assist to capture such events without depending on the aforementioned detailed data is an appealing option for further development. Machine learning (ML) is a data-driven paradigm that has shown great potential in the field of air quality. A vast number of studies concentrated on using measurement data from official reference monitoring stations and low-cost sensors to estimate [4], impute [5], calibrate [6] and forecast [7] pollutant concentrations in recent years. However, very few have explored the potential of combining ML with deterministic AQ chemical transport models (CTM) or dispersion models and even fewer concentrated on operationally applying and validating such combinations

1 https://www.eea.europa.eu/en/topics/in-depth/air-pollution.

Optimal Stacking Identification for the Machine Learning Assisted. . .

41

Xu et al. [8] formulated a hybrid forecasting system combining the LOTOS-EUROS CTM with a multilayer perceptron (MLP) to forecast PM2.5 twelve hours into the future and compared their approach with data assimilation approaches. They targeted the bias produced by LOTOS-EUROS at 9 sites in Shanghai and managed to significantly improve the predictions. Xiong et al. [9] employed a random forest (RF) ensemble model in conjunction with the Community Multi-Scale Air Quality (CMAQ) CTM in the Yangtze River Delta region. The study again aimed at predicting and correcting the CMAQ bias and led to improved metrics. Babaan et al. [10] relied on geospatial artificial intelligence to combine a pure land-use regression (LUR) with a hybrid kriging-LUR model. Furthermore, several tree-based ensembles were employed to reduce the bias of the former models, and finally the latter models were combined into a stacking ensemble. To the best of our knowledge, the only study that combined CTMs with ML models operationally is that of Debry and Mallet [11]. Their operational platform, Prev'Air, provided 5- and 8-member ensemble predictions, and ridge regression was applied to enhance the performance of the CTM ensemble for NO2, O3, PM10, and PM2.5 estimations.

Vela et al. [12] recently highlighted the importance of accounting for the temporal degradation of ML models. They introduced "AI aging", a highly relevant issue when applying ML operationally for air quality modeling and forecasting. Contrary to concept drift, which refers to variations in the statistical properties of a time series over time, AI aging refers to the change in the operational behavior of a trained model relative to the time it was last trained. This points to embracing adaptive strategies and model libraries instead of using a single ML model, while also calling for operational cross-validation strategies. The problem at hand, dispersion modeling of coarse particles, already poses a significant challenge due to concept drifts (high concentrations are rare), and adding the AI aging phenomenon further complicates the situation. Therefore, to apply an ML system and maintain high accuracy, the practitioner has to carefully consider the temporal aspect during validation.

Stacked generalization (stacking) has been proven to be an ensemble approach that leads to improved metrics. It comprises several ML models, called base-learners, which are combined via another ML model called the meta-learner. The genetic algorithm (GA) is an optimization algorithm that has previously been incorporated together with stacking in the AQ modeling literature. González-Enrique et al. [13] used the GA as a feature selection method for the base-learners of a stacked ensemble. In another study [14], the GA was employed to find the best lag features, again for the base-learners. Zhai and Chen [15], on the other hand, utilized the GA to overcome the suboptimal random initialization of the weights and biases of a multilayer perceptron (artificial neural network), to avoid local minima and improve forecasting accuracy, and stacking was applied afterward. Here, we combine stacking with the GA to optimize three objectives (feature selection, base-learner selection, and meta-learner selection) simultaneously. Unlike bagging and boosting, stacking is an effective framework for combining algorithms that differ from one another; therefore, any type of model can be stacked. We aim to exploit this advantage by stacking ENFUSER, a dispersion model, with several ML models. Finally, the GA is applied to the meta-learner learning stage (often called level 1); thus, our framework effectively combines stacking and the GA into a new algorithm called genetic algorithm hybrid stacking (GAHS).

The aim of the study is to evaluate an ML-powered post-processing methodology that indirectly (learned from data instead of assumptions) accounts for the uncertainties in dispersion modeling and provides refined predictions for the target pollutant concentrations. Overall, the system has the following attributes: it is (1) operational, (2) adaptive, (3) target agnostic, (4) automatic, (5) optimizing different stages of an ML pipeline, and (6) applicable to gridded datasets. The study offers a first evaluation of combining an operational dispersion model, ENFUSER, with an operational ML system to improve the spatiotemporal extrapolation of pollutant concentrations. Moreover, we introduce spatiotemporal cross-validation (STCV), which is optimized for spatiotemporal extrapolation during operation.
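As an aside for readers less familiar with stacking, a minimal sketch with scikit-learn illustrates the base-learner/meta-learner structure; the models and data below are illustrative placeholders, not the GAHS configuration:

```python
# A minimal stacked-generalization sketch with scikit-learn.
# Models and data are illustrative placeholders, not the GAHS setup.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import ElasticNet
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)

base_learners = [
    ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
    ("knn", KNeighborsRegressor(n_neighbors=9, p=1)),  # p=1: Manhattan distance
]
# The meta-learner is fitted on cross-validated base-learner predictions.
stack = StackingRegressor(estimators=base_learners, final_estimator=ElasticNet())
stack.fit(X, y)
print(stack.predict(X[:5]))
```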

2 Materials and Methods

ENFUSER runs operationally for the greater Helsinki area to provide two-dimensionally resolved AQ estimations with a 13 × 13 m spatial resolution, covering approximately a 40 × 30 km area, at a height close to breathing level, as well as AQ measurements. The model is updated hourly and provides estimations for lung deposition surface area (LDSA),2 PM2.5, PMcoarse, NO2, NO, O3 and BC, as well as the derived air quality index (AQI). Access to 19 reference station measurements was granted by the Finnish Meteorological Institute (FMI); the locations are displayed in Fig. 1. The measuring and modeling period starts on 17/11/2022 and ends on 31/5/2023. It should be noted that not all the reference stations measured from the beginning of this interval: the stations whose ID is encoded as T####### (e.g., T4850559) became available on 5/3/2023. Furthermore, there is a missing interval from 28/2/2023 to 5/3/2023, when the new stations were added to the incoming data (Table 1).

To accommodate the application and evaluation of the combined models, GAHS is implemented as an additional computational layer that accepts the output of ENFUSER as input and estimates refined values for the target variables. The input dataset includes (1) the ENFUSER predictions for all the studied pollutants, (2) regional-scale AQ forecasts from the chemical transport model SILAM [16] for the same pollutants and (3) HARMONIE Numerical Weather Prediction (NWP) data with 1 × 1 km resolution for meteorology [17], adding up to 35 initial features.

2 LDSA is a measure of the total particle surface, a parameter that has been proven to be highly relevant to the health impacts that particles have when inhaled.


Fig. 1 Map of the AQ reference stations in Helsinki. The red pins correspond to stations measuring from the start of the study, while the blue pins correspond to stations that were added later

Table 1 Details of the reference AQ monitoring network in the greater Helsinki area

ID                   Latitude   Longitude  Height (m)  Measured pollutants
T4850559             60.186000  24.967384  3           PM2.5, PMcoarse, NO2, NO, O3
T4930725             60.260765  24.856367  3           PM2.5, PMcoarse, NO2, NO, O3
T4940338             60.226536  24.825218  3           PM2.5, PMcoarse, NO2, NO, O3
T4940412             60.169980  24.749440  3           PM2.5, PMcoarse, NO2, NO, O3
T4940533             60.158270  24.921844  3           PM2.5, PMcoarse, NO2, NO, O3
T4950778             60.189346  24.977900  3           PM2.5, PMcoarse, NO2, NO, O3
T5010284             60.271927  24.874481  3           PM2.5, PMcoarse, NO2, NO, O3
T5010328             60.204945  24.899426  3           PM2.5, PMcoarse, NO2, NO, O3
station_60169_24939  60.169640  24.939240  3           PM2.5, PMcoarse, NO2, NO, BC
station_60187_24950  60.187390  24.950600  3           PM2.5, PMcoarse, NO2, NO, O3, BC
station_60196_24951  60.196470  24.952034  3           PM2.5, PMcoarse, NO2, NO, O3, BC
station_60209_24729  60.209570  24.729940  3           PM2.5, PMcoarse, NO2, NO
station_60220_24811  60.220240  24.811330  3           PM2.5, PMcoarse, NO2, NO, BC
station_60223_25102  60.223930  25.102440  3           PM2.5, PMcoarse, NO2, NO, O3
station_60263_25024  60.263780  25.024060  3           PM2.5, PMcoarse, NO2, NO, BC
station_60264_25140  60.264590  25.140440  3           PM2.5, PMcoarse, NO2, NO
station_60271_24874  60.271960  24.874510  3           PM2.5, PMcoarse, NO2, NO, BC
station_60289_25039  60.289950  25.039530  3           PM2.5, PMcoarse, NO2, NO
station_60314_24684  60.314390  24.684860  3           PM2.5, PMcoarse, NO2, NO, O3, BC


2.1 Machine Learning Models and Genetic Algorithm

A brief description of the incorporated machine learning algorithms follows:

ElasticNet [18] is a linear regression model that combines the advantages of both ridge and Lasso regression, namely shrinkage and sparsity. Its minimization criterion combines two components: the least squares error term and a regularization term. The least squares error term measures the discrepancy between the predicted values of the model and the actual observed values; the goal is to minimize this term, which represents the model's ability to fit the data. The regularization term consists of two parts: the L1 norm (Lasso penalty) and the L2 norm (ridge penalty). The L1 norm encourages sparsity by promoting some coefficients to be exactly zero, effectively selecting a subset of predictors. The L2 norm promotes small coefficient values, helping to stabilize the model and reduce the impact of multicollinearity. In a common parameterization (e.g., the one used by scikit-learn), the criterion reads

$\min_{\beta}\; \frac{1}{2n}\sum_{i=1}^{n}\bigl(y_i - x_i^{\top}\beta\bigr)^2 + \lambda\Bigl(\rho\,\lVert\beta\rVert_1 + \frac{1-\rho}{2}\lVert\beta\rVert_2^2\Bigr)$,

where $\lambda$ controls the overall regularization strength and $\rho \in [0, 1]$ the mix between the L1 and L2 penalties.

K-nearest neighbors (KNN) [19] is a non-linear algorithm that uses the similarity of input vectors in high-dimensional spaces to make predictions. Given the parameter K, KNN finds the K vectors that are most similar to the instance requiring a prediction and averages the known values of the target. The similarity is usually the Euclidean or the Manhattan distance or, more generally, the Minkowski metric with different power values. Here, KNN is instantiated with K = 9 and the Manhattan distance.

Random forest (RF) [20] is based on the concept of decision trees, binary tree-like structures that recursively partition the data based on feature values to make predictions. However, a single decision tree often suffers from high variance and low bias, leading to overfitting or poor generalization. RF overcomes this limitation by aggregating multiple decision trees and leveraging the power of their collective predictions. The key advantages of RF are (1) robustness against overfitting, (2) high predictive accuracy, (3) handling of high-dimensional data, (4) feature importance estimation, and (5) robustness to outliers and noise.

Boosting is an ensemble technique that is very successful for tabular data. Similarly based on the decision tree algorithm, boosting is an additive model that trains each tree sequentially by targeting the residual errors of the previous tree at each step. Gradient boosting (GB) is an extension of boosting in which the process of additively generating weak models is formalized as gradient descent over an objective function. LightGBM (LGB) is a light, very fast implementation that incorporates a histogram-based approach, binning feature values into discrete bins to speed up the training process. The LGB algorithm utilizes two novel techniques, Gradient-Based One-Side Sampling (GOSS) and Exclusive Feature Bundling (EFB), which allow the algorithm to run faster while maintaining a high level of accuracy [21]. XGBoost (XGB) [22] is another popular implementation of gradient boosting that offers regularization, allowing better control of overfitting by introducing L1/L2 penalties on the weights and biases of each tree. It also supports parallelization and out-of-core computation (training even if the memory is not enough to hold the full dataset) and has been established as one of the best algorithms for structured datasets due to its high accuracy.


Fig. 2 An illustration of the candidate solution and its constituent parts

Both algorithms are employed in this study to extract the benefits of each and combine them into a stacking ensemble. ElasticNet, KNN and LGB were employed as base-learners as well as for implementing the meta-learner; this decision was made considering the speed of the algorithms, given the high computational requirements of the GA. XGB and RF were realized only for base-learner generation.

The GA is a meta-heuristic search algorithm that simulates the biological propagation of genes throughout generations (iterations), aiming to minimize/maximize an optimization criterion. The genetic operators, including selection, crossover, and mutation, are applied to evolve the population of candidate solutions (CS). Through the generations, the GA converges towards an optimal solution (the best CS) that maximizes the prediction accuracy. It is often employed as a feature selection or hyperparameter tuning method in ML. GAHS employs the GA and modifies its inner workings to optimally search the combined space of features, base-learners and meta-learner, in order to identify the stacking ensemble that performs best. Analytically, GAHS operates on a binary (true/false) vector that represents a CS. A CS (Fig. 2) defines (1) which features, (2) which base-learner predictions, and (3) which meta-learner will comprise the stacking.
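To make the CS encoding concrete, the following minimal sketch shows how such a binary vector could be decoded into its three parts; the segment sizes match the numbers reported in this study, while the function names are illustrative:

```python
import numpy as np

# Segment sizes taken from this study: 267 engineered features,
# 10 base-learners, 3 candidate meta-learners (LGB, KNN, ElasticNet).
N_FEATURES, N_BASE, N_META = 267, 10, 3

def decode(cs: np.ndarray):
    """Split one binary candidate solution into its three masks."""
    feat_mask = cs[:N_FEATURES].astype(bool)
    base_mask = cs[N_FEATURES:N_FEATURES + N_BASE].astype(bool)
    meta_mask = cs[N_FEATURES + N_BASE:].astype(bool)
    return feat_mask, base_mask, meta_mask

def is_valid(meta_mask: np.ndarray) -> bool:
    """A CS is valid only when exactly one meta-learner is switched on;
    invalid CSs receive an artificially high fitness value in GAHS."""
    return int(meta_mask.sum()) == 1

rng = np.random.default_rng(0)
cs = (rng.random(N_FEATURES + N_BASE + N_META) < 0.3).astype(int)
features, bases, metas = decode(cs)
print(features.sum(), bases.sum(), is_valid(metas))
```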

2.2 Genetic Algorithm Hybrid Stacking

The proposed approach, GAHS, consists of several steps for air quality prediction. The methodology can be summarized as follows:

Feature Engineering (FE) FE is performed as the initial step to enhance the input vector. Given the temporal nature of the operational procedure, at time t we compute descriptive statistics and the difference between the minimum and the maximum of the last 24 hours on a rolling basis, to inform the input vector and, consequently, the learners about the long-term statistical behavior of the system (as predicted by ENFUSER) and to provide the models with smoother, less noisy versions of the initial features. For instance, the following features are created from temperature (T): mean(T,24), std(T,24), min(T,24), max(T,24), median(T,24), min_max_difference(T,24). The same features are extracted for all the initial features, adding up to 267 features in total.
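A minimal pandas sketch of this rolling scheme, assuming an hourly-indexed DataFrame (the column-naming convention is illustrative):

```python
import pandas as pd

def rolling_features(df: pd.DataFrame, window: int = 24) -> pd.DataFrame:
    """Append the 24-h rolling statistics described above for every column,
    e.g. mean(T,24), std(T,24), ..., min_max_difference(T,24) for T."""
    out = {}
    roll = df.rolling(window)
    for stat in ("mean", "std", "min", "max", "median"):
        agg = getattr(roll, stat)()
        out.update({f"{stat}({c},{window})": agg[c] for c in df.columns})
    diff = roll.max() - roll.min()
    out.update({f"min_max_difference({c},{window})": diff[c] for c in df.columns})
    return pd.concat([df, pd.DataFrame(out)], axis=1)

# Example: an hourly temperature series gains six engineered columns.
idx = pd.date_range("2022-11-17", periods=48, freq="h")
print(rolling_features(pd.DataFrame({"T": range(48)}, index=idx)).columns.tolist())
```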


Feature Selection (FS) To reduce the dimensionality of the input vector and identify the most relevant features, feature selection is conducted. This step becomes necessary due to the high collinearity resulting from the feature engineering procedure. Two feature selection methods are employed in this study: correlation-based feature selection (CFS) [23] and RF feature importance (RFFI). By employing multiple FS methods, GAHS introduces different representations of the initial dataset. This diversification is essential to build base-learners that exploit different parts of the feature space and, thus, a robust ensemble.

Base-Learner Generation Once the reduced representations are obtained, the base-learners are generated. Hyperparameter optimization is not performed at this stage for three reasons: (1) to reduce computational costs, (2) irrelevant models will be discarded by the GA in subsequent steps, and (3) better performance gains are expected from improved features obtained through the introduction of additional FS methods. In this study, all the base-learners are evaluated with the STCV scheme that was developed to accompany operational ML applications. Furthermore, the STCV predictions are used as input features for the meta-learner in the stacking modeling stage. This part is essential because the model will be applied at locations that are never part of the input datasets; therefore, performing STCV safeguards against obtaining overconfident evaluation metrics.

Hybrid Stacking Stacking is a technique that can combine any model into an ensemble. It does so by employing strictly the predictions of the base-learners and does not rely on the algorithms per se. We have observed that hybridizing the input space of the meta-learner with the initial features as well as the base-learner predictions often leads to better metrics (see the sketch below). The meta-learner is identified using the GA optimization. The GA evaluates different combinations based on their fitness, which is determined by the performance of the meta-learner.

Operational Evaluation To produce the base-learners without introducing any data leaks, the dataset is initially divided into two parts, preserving the temporal dimension. The first part serves as the training set and the second part (validation set) is used to evaluate the base-learners with STCV and to produce predictions for the meta-learner. In the next stage, the predictions of the base-learners are concatenated with the initial features to create the hybrid dataset. The validation set is further divided into a train-validation-test split. The train and validation sets are used to run the GAHS algorithm and identify the optimal combination of base-learners, features, and meta-learner. Finally, the obtained combination is tested on the test set to obtain the test metrics.
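A sketch of the hybridization step, in which the meta-learner's input concatenates the initial features with base-learner predictions; the shapes and the choice of meta-learner here are illustrative:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

# Shapes and the meta-learner choice are illustrative. P_base holds the
# STCV predictions of the selected base-learners on the validation period.
n_val, n_init, n_base = 1000, 35, 3
rng = np.random.default_rng(0)
X_init = rng.normal(size=(n_val, n_init))  # initial (ENFUSER/SILAM/NWP) features
P_base = rng.normal(size=(n_val, n_base))  # base-learner predictions
y = rng.normal(size=n_val)                 # reference measurements

# Hybrid input space: initial features concatenated with predictions.
X_hybrid = np.hstack([X_init, P_base])
meta = ElasticNet().fit(X_hybrid, y)
print(meta.predict(X_hybrid[:3]))
```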


In the GA module of the GAHS framework, the following operators are implemented:

Initialization The procedure starts by randomly creating a population of CSs. Each CS is initiated with a random percentage (between 10% and 50%) of its values set to true and all other values set to false.

Validation GAHS uses the mean absolute error (MAE) as the fitness function (FF). Each CS defines the features and base-learners to be included as input to train a meta-learner and validate it with STCV. One necessary modification here is that if more than one value of the meta-learner vector becomes true, two meta-learners would be included in one CS and the algorithm would break (Fig. 2). Therefore, in such cases GAHS simply counts the true values of the meta-learner vector and assigns an artificially high value (9999 for the MAE FF) to the FF without evaluating the CS, effectively discarding solutions that include more than one meta-learner.

Selection Tournament The best candidate is kept without any modifications (elitism), while all the rest compete in triads via a simple selection tournament. Out of three competing CSs, randomly drawn from the population, the one with the best fitness function result is selected to continue to the next stage.

Crossover Apart from the best candidate, which is kept unchanged (elitism), all other candidates are combined via a single-point crossover operation. Given the candidate vector, an index is chosen randomly, splitting each of two CSs into two parts. The first part of the first CS is combined with the second part of the second CS to create an offspring of the two; likewise, the second part of the first CS is combined with the first part of the second CS to create a second offspring. The offspring become CSs in the population of the next generation.

Mutation After elitism, the final operator draws a random value in the range [0, 1] for each gene (true/false), iterating over all genes of each CS. If the value is below a specified threshold probability, the operator flips the gene's value from true to false or from false to true.

ENFUSER and the meta-learner (GAHSv1) are combined in the following way: during the validation stage of the GA, for each CS, the meta-learner predicts the validation set. The predictions from both models are averaged to provide a new prediction P = P(GAHSv2):

$P(\mathrm{GAHSv2}) = \bigl[P(\mathrm{ENFUSER}) + P(\mathrm{GAHSv1})\bigr]/2$    (1)
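The three non-elitist operators can be sketched as follows (a minimal illustration, not the exact GAHS implementation):

```python
import random

def tournament(population, fitness, k=3):
    """Select the best of k randomly drawn CSs (lower MAE-based FF wins)."""
    contenders = random.sample(range(len(population)), k)
    return population[min(contenders, key=lambda i: fitness[i])]

def crossover(a, b):
    """Single-point crossover producing two offspring from two parent CSs."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def mutate(cs, p=0.01):
    """Flip each gene independently with probability p."""
    return [gene ^ (random.random() < p) for gene in cs]
```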


Afterwards, the daily median and the daily standard deviation (std) of P, as well as of the target pollutant concentration (T), are calculated. The following two-factor FF defines the optimization criterion:

$\text{Optimization criterion} = \arg\min(\mathrm{FF})$    (2)

$\mathrm{FF} = \mathrm{MAE}(P_{\mathrm{median}}, T_{\mathrm{median}}) + \mathrm{MAE}(P_{\mathrm{std}}, T_{\mathrm{std}})$    (3)
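A minimal sketch of Eqs. (1)-(3), assuming hourly pandas Series indexed by timestamp:

```python
import pandas as pd

def gahs_v2(p_enfuser: pd.Series, p_gahs_v1: pd.Series) -> pd.Series:
    """Eq. (1): average the ENFUSER and meta-learner predictions."""
    return (p_enfuser + p_gahs_v1) / 2

def fitness(p: pd.Series, t: pd.Series) -> float:
    """Eqs. (2)-(3): MAE between the daily medians plus MAE between the
    daily standard deviations of prediction p and target t."""
    mae = lambda a, b: (a - b).abs().mean()
    daily_p, daily_t = p.resample("D"), t.resample("D")
    return mae(daily_p.median(), daily_t.median()) + mae(daily_p.std(), daily_t.std())
```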

It should be noted that the predictions from both models are hourly, but the FF is calculated on daily values. Therefore, the FF does not directly target the reconstruction of the raw reference values but rather tries to approximate the daily statistical behavior. This helps (1) reduce overfitting and (2) force GAHS to learn a meta-learner that is dependent on ENFUSER and can work together with it as an ensemble. Moreover, since ENFUSER incorporates data assimilation, the bias is not consistent across locations: there may be locations with underestimations and locations with overestimations. Thus, targeting the median and std helps the meta-learner avoid local minima and adapt to the behavior of ENFUSER. Finally, this procedure does not guarantee that GAHSv1 will perform better than ENFUSER, but the average of P(GAHSv1) and P(ENFUSER) will.

ENFUSER provides products with complete gridded datasets hourly. Consequently, at time t, the future datasets that will be obtained at times t + n are unknown to the GAHS models. Moreover, GAHS is applied to all the gridded points in the datasets, which are completely unknown (in both the spatial and temporal dimension) to the models. If past values of an evaluation location are included in the training data, the GAHS internal models will have acquired knowledge about the characteristics of this location, and this will leak information, unrealistically improving the performance metrics. Furthermore, if future values of the training locations are included in the training data, temporal leaks for the validation location are introduced due to spatial autocorrelation between locations. Therefore, classical CV approaches are inadequate for estimating operational performance. STCV was developed to address operational evaluation in a consistent way. Figure 3 demonstrates that the validation location is spatially and temporally isolated from all the training locations, so that unbiased metrics can be obtained for applying GAHS to unknown locations and future steps.

Three commonly used performance metrics, namely the root mean squared error (RMSE), the coefficient of determination (R2) and the Pearson correlation coefficient (R), were employed to estimate the performance of the models. The metrics are calculated on the combined predictions from all the locations (Table 2).


Fig. 3 Visual illustration of the spatiotemporal cross-validation (STCV)

Table 2 Statistical metrics; $y_i$ stands for the observed values, $\hat{y}_i$ for the predicted values, $\bar{y}$ is the mean of the observed values and $n$ is the number of test samples

Coefficient of determination (R2):
$R^2 = 1 - \frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2}$

Root mean squared error (RMSE):
$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}$

Mean absolute error (MAE):
$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\bigl|y_i - \hat{y}_i\bigr|$

Pearson correlation coefficient (R):
$R = \frac{\sum_{i=1}^{n} y_i \hat{y}_i - n\,\mu\,\hat{\mu}}{\sqrt{\sum_{i=1}^{n} y_i^2 - n\mu^2}\;\sqrt{\sum_{i=1}^{n} \hat{y}_i^2 - n\hat{\mu}^2}}$, where $\mu$ and $\hat{\mu}$ denote the means of the observed and predicted values.
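The Table 2 metrics translate directly into a few lines of NumPy (a sketch for reference):

```python
import numpy as np

def metrics(y, y_hat):
    """RMSE, R2, MAE and Pearson R as defined in Table 2."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    err = y - y_hat
    rmse = np.sqrt(np.mean(err ** 2))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y - y.mean()) ** 2)
    mae = np.mean(np.abs(err))
    r = np.corrcoef(y, y_hat)[0, 1]
    return rmse, r2, mae, r
```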

3 Results

Initially, for each station that undergoes STCV as a validation location, the range of timestamps for which measurements are present at that station is identified. To train the base-learners, this time range is divided into four equal intervals; the first interval is kept for training while the other three are considered as validation. Therefore, at this stage of STCV, the training set consists of the data from all stations (except the validation station) up to the specific timestamp, and the validation set consists of data from the validation location after the specific timestamp. The process repeats and finishes when all the locations have been evaluated (see the sketch below).

The GA runs with the following hyperparameters: 100 generations; 20 CSs; meta-learners [LGB, KNN, ElasticNet]; selection constant = 3; crossover probability = 0.9; mutation probability = 0.01. In Fig. 4, the FF and the coefficient of determination of the best CS from each generation are depicted for GAHSv2 and ENFUSER.
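A minimal sketch of this STCV splitting logic, assuming a long-format DataFrame with a DatetimeIndex and a 'station' column:

```python
import pandas as pd

def stcv_splits(df: pd.DataFrame):
    """Yield spatiotemporally isolated train/validation splits: for each
    validation station, train on all other stations up to the cut (the end
    of the first of four equal intervals) and validate on the held-out
    station after the cut."""
    for station in df["station"].unique():
        span = df.index[df["station"] == station]
        cut = span.min() + (span.max() - span.min()) / 4
        train = df[(df["station"] != station) & (df.index <= cut)]
        valid = df[(df["station"] == station) & (df.index > cut)]
        yield station, train, valid
```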


Fig. 4 Progression of the best CS's FF and coefficient of determination for daily averages, as computed with STCV inside the GA

Table 3 Original variables that were selected by GAHS

ENFUSER: PM2.5, PMcoarse, O3, CO, NO2, BC
HARMONIE meteorology: Temperature, Precipitation 1 h, North wind component, East wind component, MO_len_inv, Boundary layer height, Wind direction, Wind speed, Atmospheric stability class
SILAM regional: PM2.5, PMcoarse, O3, CO, NO2, LDSA, BC
Temporal (local): Hour, Day of week, Month

The progression through the generations shows that reductions in the FF do not necessarily translate into better linearity. The overall error as measured by the FF is gradually reduced, and the coefficient of determination is very close to the ENFUSER value, again for daily averages. In Tables 3 and 4, the features that were selected by GAHSv2 are presented. From an initial pool of 267 features and 10 base-learners, the algorithm reduced the number of features to 82 (25 initial and 57 engineered) and included the following base-learners: XGB_CFS, LGB_RFFI and RF_RFFI. Finally, the LGB algorithm was selected as the best meta-learner. The abbreviation for the base-learners is structured as the name of the regressor followed by the feature selection method.


Table 4 Engineered variables that were selected by GAHS, categorized by the aggregation operation

Mean: PM2.5, SO2, O3, NO2, NO, LDSA, DPT, MO_len_inv, Air pressure, Wind direction, ASC, SO2(R), LDSA(R)
Median: SO2, NO2, LDSA, DPT, EWC, Air pressure, Wind speed, CO(R), ASC, NO(R), PM2.5(R), SO2(R), O3(R), CO(R), LDSA(R)
Std: PM2.5, CO, NO, BC, MO_len_inv, ASC
Min: Precipitation 1 h, Relative humidity, Wind direction, PM2.5(R), PMcoarse(R), CO(R)
Max: DPT, NWC, EWC, MO_len_inv, Wind direction, ASC, NO(R)
Difference: PM2.5, NO2, NO, BC, MO_len_inv, Long-wave radiation, ASC, O3(R), CO(R), NO(R)

All the features were calculated from the 24-hour window in a rolling manner. ASC atmospheric stability class, DPT dew point temperature, NWC north wind component, EWC east wind component. Features ending with (R) represent the regional concentrations from SILAM

Once the best CS from the final generation is obtained, the validation metrics are computed at the hourly resolution on the validation set. GAHSv1 has RMSE = 17.13, R2 = 0.45, R = 0.72; ENFUSER has RMSE = 17.74, R2 = 0.41, R = 0.66; and the combined model GAHSv2 improves the metrics to RMSE = 15.87, R2 = 0.53, R = 0.73. Interestingly, even though the coefficient of determination is slightly lower for daily averages, it is greatly improved (12%) at the hourly resolution. To estimate the final performance in operation, the test metrics were computed on the withheld test set. Note that during the testing phase the validation set is excluded completely from training; thus, the training and test data are separated by the validation time interval, which guarantees that data leaks are not introduced. GAHSv1 has RMSE = 17.99, R2 = 0.21, R = 0.70; ENFUSER has RMSE = 16.49, R2 = 0.33, R = 0.61; and GAHSv2 improves the metrics to RMSE = 14.29, R2 = 0.50, R = 0.71. GAHSv2 improves the RMSE by 2.2 μg/m3 and improves the modeled variance by 17% as estimated by the R2. Moreover, the correlation rises substantially from 0.61 to 0.71. Operational conditions dictate the need for models to improve the estimations at specific times; therefore, the validation and testing intervals were selected so that part of the Spring period is included in the training datasets (until 2023-03-27 for the base-learners and until 2023-04-29 for the meta-learner). Since the time series are not long enough (data started from 17/11/2022) to contain previous Spring periods, this was deemed necessary.


However, this indicates that the seasonal transitions can be monitored with appropriate metrics and retraining can be triggered to adapt to seasonal and other changes even when the time series are short.

Fig. 5 Starting from the top, time series plots of GAHSv2, GAHSv1 and ENFUSER estimations compared with the reference measurements. The data from all the stations are concatenated and depicted as a single time series


Fig. 6 Bar plots of the performance metrics (RMSE and R) on the test set for all base-learners, the meta-learner, GAHSv2 and ENFUSER

From the visualization in Fig. 5, it can be observed that the developed FF manages to produce a meta-learner, GAHSv1, that better estimates the peak concentrations, whereas ENFUSER better estimates the lower concentrations. Therefore, their combination, GAHSv2, performs better on average, exploiting the advantages of the other two models in terms of errors and linearity. Furthermore, this also explains the increase in the RMSE of GAHSv1: since the model estimates higher values, larger errors are expected. In contrast, the correlation of GAHSv1 increases compared to ENFUSER, indicating that the model learns to overshoot the estimations so that the combination provides better estimations overall. Finally, in Fig. 6, the performance metrics for all the models are depicted. Notably, none of the base-learners manages to outcompete ENFUSER, partly due to the complex bias at different locations. Moreover, ElasticNet outperforms the tree-based ensembles, but it was not selected by GAHS. This indicates that a priori knowledge of the best performing base-learner does not necessarily translate into the inclusion of this model in the stacking ensemble. This is probably because linear model estimations are expected to be very close to the ENFUSER estimations, and since ENFUSER already provides better estimations than ElasticNet, GAHS selects ENFUSER over ElasticNet.

4 Discussion

Given the complexity of modeling the dispersion of pollutants in the urban atmosphere, several operational issues should be taken into consideration when applying ML frameworks similar to GAHS. Firstly, the bias of a model like ENFUSER is not consistent across all locations.


A simple ML model that corrects consistent biases will not work in such a case unless this inconsistency is also modeled. Secondly, an ML computational layer would ideally be independent of external data sources (other than the ones provided by the traditional model) so as to be as general as possible and easily integrable with more traditional models. Compared to our previous study [2], no external datasets were used here; hence, we demonstrated that GAHS is more general in its ability to model uncertainties. Thirdly, when validating ML models, the temporal dimension is of utmost importance and should be treated explicitly due to concept drifts and AI aging issues that arise solely in operation. Fourthly, retraining of the models should be held at specific intervals that can be derived from experimentation or triggered by monitoring the performance with appropriate metrics. Finally, information leakage, whether due to temporal autocorrelation or spatial correlation, as well as overfitting, should be addressed at every stage of the ML pipeline to ensure models operate as expected. The practitioner should also be able to maintain feature databases and model libraries, to monitor relevant performance metrics, and to visualize the predictions. It can become cumbersome to control all these aspects; however, there are tools that can automate and greatly simplify these procedures. Analyzing how the features and models behave over time could additionally offer valuable insight into identifying better models without relying so heavily on metrics obtained on historical datasets. Moreover, the computational requirements can often scale disproportionately with data volume, but GPU computation is becoming easier to incorporate with newer ML tools, an option that should be considered when working with high-resolution spatiotemporal data. Finally, since GAHS was built with Python, we can comment on the management of library versions (e.g., scikit-learn, scipy, etc.). Practically, there is the need to keep track of the versions, which is usually done with virtual environments, but a better option is to use Docker containers or similar tools that keep development and execution software environments consistent automatically.

An ML framework that can improve the performance of CTM and dispersion gridded models at unknown locations could be of high relevance to the modeling community. Importantly, models like ENFUSER, SILAM, and others are grounded in physics and chemistry and should not be abandoned in favor of purely learned models. Building ML frameworks on top of such models to account for their built-in uncertainties has the advantage of combining physical and chemical principles with the increased accuracy of ML. The stacking ensemble approach opens up the potential to ensemble traditional models and ML models together. Furthermore, there is also the opportunity to employ adaptive ML models such as incremental models [24] and increase the operability of the framework, but this was out of the scope of this study.

Limitations of the study: one limitation is the lack of longer time series; thus, we could not explore whether including previous similar periods would be enough to model upcoming target periods. Moreover, since the aim was to model a specific period, we did not allow the validation procedure to be continuous. An extension of the STCV can be defined as follows.


Given that the models are retrained at specific intervals, define the temporal dimension of STCV to match this interval. Next, apply STCV in a forward manner on all the collected data and identify the combinations that perform consistently better across all the historical data. This approach also incorporates the retraining phase of an ML lifecycle and can optimize even the temporal interval, so that the models remain highly accurate for longer.

5 Conclusions

Aiming to fill the gap of combining deterministic AQ models with advanced ML models in operation, we presented the GAHS framework and validated it for the greater Helsinki area during the hard-to-model Spring period. The PMcoarse concentrations at 19 locations were modeled, and the framework managed to improve the estimations substantially. An extension of the GAHS framework was presented and validated within operational constraints. By incorporating STCV, this study made it possible to provide robust validation for the operational application of ML-based models in conjunction with more traditional statistical and deterministic models. Moreover, a more robust strategy of combining models was demonstrated by forcing the GA, through the fitness function, to include predictions that obey robust statistics. Finally, this study demonstrated the difficulty, but also the added value, of applying ML frameworks alongside traditional models for AQ modeling.

References

1. Ulpiani, G.: On the linkage between urban heat island and urban pollution island: three-decade literature review towards a conceptual framework. Sci. Total Environ. 751, 141727 (2021). https://doi.org/10.1016/j.scitotenv.2020.141727
2. Kassandros, T., Bagkis, E., Johansson, L., Kontos, Y., Katsifarakis, K.L., Karppinen, A., Karatzas, K.: Machine learning-assisted dispersion modelling based on genetic algorithm-driven ensembles: an application for road dust in Helsinki. Atmos. Environ. 307, 119818 (2023). https://doi.org/10.1016/j.atmosenv.2023.119818
3. Johansson, L., Karppinen, A., Kurppa, M., Kousa, A., Niemi, J.V., Kukkonen, J.: An operational urban air quality model ENFUSER, based on dispersion modelling and data assimilation. Environ. Model Softw. 156, 105460 (2022). https://doi.org/10.1016/j.envsoft.2022.105460
4. Fan, K., Dhammapala, R., Harrington, K., Lamb, B., Lee, Y.: Machine learning-based ozone and PM2.5 forecasting: application to multiple AQS sites in the Pacific Northwest. Front. Big Data 6 (2023). https://doi.org/10.3389/fdata.2023.1124148
5. Ferrer-Cid, P., Barcelo-Ordinas, M., Garcia-Vidal, J.: Graph signal reconstruction techniques for IoT air pollution monitoring platforms. IEEE Internet Things J. 9, 25350–25362 (2022). https://doi.org/10.1109/JIOT.2022.3196154
6. De Vito, S., Di Francia, G., Esposito, E., Ferlito, S., Formisano, F., Massera, E.: Adaptive machine learning strategies for network calibration of IoT smart air quality monitoring devices. Pattern Recogn. Lett. 136, 264–271 (2020). https://doi.org/10.1016/j.patrec.2020.04.032
7. Yang, J., Ismail, A.W.: Air quality forecasting using deep learning and transfer learning: a survey. 2022 IEEE Global Conference on Computing, Power and Communication Technologies (GlobConPT) (2022). https://doi.org/10.1109/GlobConPT57482.2022.9938230


8. Xu, M., Jin, J., Wang, G., Segers, A., Deng, T., Lin, H.X.: Machine learning based bias correction for numerical chemical transport models. Atmos. Environ. 248, 118022 (2021). https://doi.org/10.1016/j.atmosenv.2020.118022
9. Xiong, K., Xie, X., Mao, J., Wang, K., Huang, L., Li, J., Hu, J.: Improving the accuracy of O3 prediction from a chemical transport model with a random forest model in the Yangtze River Delta region, China. Environ. Pollut. 319, 120926 (2023). https://doi.org/10.1016/j.envpol.2022.120926
10. Babaan, J., Hsu, F.-T., Wong, P.-Y., Chen, P.-C., Guo, Y.-L., Lung, S.-C.C., Chen, Y.-C., Wu, C.-D.: A geo-AI-based ensemble mixed spatial prediction model with fine spatial-temporal resolution for estimating daytime/nighttime/daily average ozone concentrations variations in Taiwan. J. Hazard. Mater. 446, 130749 (2023). https://doi.org/10.1016/j.jhazmat.2023.130749
11. Debry, E., Mallet, V.: Ensemble forecasting with machine learning algorithms for ozone, nitrogen dioxide and PM10 on the Prev'Air platform. Atmos. Environ. 91, 71–84 (2014). https://doi.org/10.1016/j.atmosenv.2014.03.049
12. Vela, D., Sharp, A., Zhang, R., Nguyen, T., Hoang, A., Pianykh, O.S.: Temporal quality degradation in AI models. Sci. Rep. 12 (2022). https://doi.org/10.1038/s41598-022-15245-z
13. González-Enrique, J., Ruiz-Aguilar, J.J., Moscoso-López, J.A., Van Roode, S., Urda, D., Turias, I.J.: A genetic algorithm and neural network stacking ensemble approach to improve NO2 level estimations. Adv. Comput. Intell., 856–867 (2019). https://doi.org/10.1007/978-3-030-20521-8_70
14. Surakhi, O.M., Zaidan, M.A., Serhan, S., Salah, I., Hussein, T.: An optimal stacked ensemble deep learning model for predicting time-series data using a genetic algorithm—an application for aerosol particle number concentrations. Computers 9, 89 (2020). https://doi.org/10.3390/computers9040089
15. Zhai, B., Chen, J.: Development of a stacked ensemble model for forecasting and analyzing daily average PM2.5 concentrations in Beijing, China. Sci. Total Environ. 635, 644–658 (2018). https://doi.org/10.1016/j.scitotenv.2018.04.040
16. Sofiev, M., Vira, J., Kouznetsov, R., Prank, M., Soares, J., Genikhovich, E.: Construction of the SILAM Eulerian atmospheric dispersion model based on the advection algorithm of Michael Galperin. Geosci. Model Dev. 8, 3497–3522 (2015). https://doi.org/10.5194/gmd-8-3497-2015
17. Bengtsson, L., Andrae, U., Aspelien, T., Batrak, Y., Calvo, J., de Rooy, W., Gleeson, E., Hansen-Sass, B., Homleid, M., Hortal, M., Ivarsson, K.-I., Lenderink, G., Niemelä, S., Nielsen, K.P., Onvlee, J., Rontu, L., Samuelsson, P., Muñoz, D.S., Subias, A., Tijm, S., Toll, V., Yang, X., Køltzow, M.Ø.: The HARMONIE-AROME model configuration in the ALADIN-HIRLAM NWP system. Mon. Weather Rev. 145, 1919–1935 (2017). https://doi.org/10.1175/MWR-D-16-0417.1
18. Zou, H., Hastie, T.: Regularization and variable selection via the elastic net. J. R. Statist. Soc. Ser. B (Statist. Methodol.) 67(2), 301–320 (2005). https://www.jstor.org/stable/3647580
19. Kramer, O.: K-nearest neighbors. In: Dimensionality Reduction with Unsupervised Nearest Neighbors, pp. 13–23 (2013). https://doi.org/10.1007/978-3-642-38652-7_2
20. Breiman, L.: Random forests. Mach. Learn. 45, 5–32 (2001). https://doi.org/10.1023/A:1010933404324
21. Machado, M.R., Karray, S., de Sousa, I.T.: LightGBM: an effective decision tree gradient boosting method to predict customer loyalty in the finance industry. 2019 14th International Conference on Computer Science & Education (ICCSE) (2019)
22. Chen, T., Guestrin, C.: XGBoost: a scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2016). https://doi.org/10.1145/2939672.2939785
23. Hall, M.: Correlation-based feature selection for machine learning. Ph.D. Dissertation, University of Waikato, Hamilton, New Zealand. https://www.cs.waikato.ac.nz/~mhall/thesis.pdf, last accessed 2023/6/11
24. Bagkis, E., Kassandros, T., Karatzas, K.: Learning calibration functions on the fly: hybrid batch online stacking ensembles for the calibration of low-cost air quality sensor networks in the presence of concept drift. Atmosphere 13, 416 (2022). https://doi.org/10.3390/atmos13030416

Concepts for Open Access Interdisciplinary Remote Sensing with ESA Sentinel-1 SAR Data Jennifer McClelland, Tanja Riedel, Florian Beyer, Heike Gerighausen, and Burkhard Golla

Abstract Earth observation with advanced, large-scale technologies such as satellite-based Synthetic Aperture Radar (SAR) appears essential for monitoring agricultural ecosystems in the near future. Radar backscatter, for example, allows insights into crop conditions, soil properties and the direct mapping of vegetation growth. Precise SAR pre-processing is a substantial prerequisite for performing machine learning on SAR data, e.g. for the early prediction of optimal sowing, harvesting and fertilization time points. This holds not only for successful, resource-efficient and environmentally friendly farming but also for a wide range of other fields concerned with environmental observation. Open access technologies offer the best solutions for collaborative efforts, minimizing financial and legal constraints in comparison to technologies residing in the commercial sector. Here, we combine expertise from the areas of computer science, data science, software engineering, agriculture and geo-information systems to build on state-of-the-art, open source (OS) tools and technologies in Germany. Our goal is to provide an easy-to-employ Sentinel-1 SAR pre-processing tool as well as a Germany-wide, open access, pre-processed, analysis-ready database of Sentinel-1 SAR data. With the employment of modern software development methods, including the Model View Controller (MVC) architecture and a procedural and object-oriented design, these solutions can be extended, adapted and tested. This solution is available and accessible here (Jennifer, JenniferMcCl/Sentinel-1_SAR-Data-Processing: Sentinel-1_SAR-Data-Processing_V.1.0-beta, https://zenodo.org/record/8214935). Keywords Sentinel-1 SAR · Analysis ready data · Agriculture · Open access technologies · Geoinformation systems · Software engineering

J. McClelland () · T. Riedel · B. Golla JKI-SF, JKI-FLF, Kleinmachnow, Germany e-mail: [email protected]; [email protected]; [email protected] https://www.julius-kuehn.de/sf/ F. Beyer · H. Gerighausen JKI-PB, JKI-FLF, Braunschweig, Germany © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 V. Wohlgemuth et al. (eds.), Advances and New Trends in Environmental Informatics 2023, Progress in IS, https://doi.org/10.1007/978-3-031-46902-2_4



1 Introduction

Tackling the consequences of climate change has become a global issue. Climate change will clearly influence our common lifestyle. This involves increasingly frequent sudden weather changes and extreme temperatures, as well as drastic changes in water quality and availability [37]. Because of our constantly growing global population, nutritional habits and agricultural practices, agriculture accounts for an estimated 10–12% of global anthropogenic greenhouse gas emissions [17]. At the same time, fulfilling the agricultural demand is becoming increasingly challenging due to unpredictable farming conditions [45]. Without immediate collaborative efforts, including focused research and the employment and adaptation of state-of-the-art technologies, this issue will not be tackled soon enough to avoid massive limitations and enormous losses.

Since the first personal computer and satellite in the 1950s [15, 19, 38], technology has generally grown exponentially in developed economies worldwide [1, 9, 25]. While, on the one hand, internationally wide engagement with technological issues has led to a multitude of creative perspectives, approaches and solutions, on the other hand, the focus on gaining a large market share in promising discoveries and on owning a new technology has led to strong competition. While competition and market share provide an important drive for investment and dedication, by now this has led to an overwhelming complexity of interfaces, protocols, languages and formats [4, 16], where the same problems are solved over and over again, which has become a strong limitation to progress [6, 41, 42]. Unfortunately, this phenomenon resides not only in business fields but also in science, which is often indirectly bound to and strongly influenced by economic growth. Although science targets the public for non-commercial reasons, the open source paradigm, offering technological and general usage solutions for the community, was driven by the commercial software industry beginning in 1997, and only over the last few years has its necessity been proven and its support extended, finding its place in the open market [27, 29, 30].

The open source paradigm as well as diverse online platforms offer technical solutions to combine and extend technical knowledge and expertise. However, the gap lies not only in the communication between technical tools and interfaces; there also lies a gap between fields of expertise. Interaction is limited by the lack of trust in sharing data and solutions and by the lack of technical skills within increasingly critical fields, such as geographic information systems and remote sensing. A lot of expertise from purely digital fields has not yet found its place in natural science, although the demand for it has grown [52]. Thus, our approach attempts to combine expertise from modern software engineering, as practiced in industrial software companies, classic agricultural knowledge and the typical approach of a geographic information system data scientist. Our main focus lies on selecting and combining suitable OS technologies and freely accessible databases in a modular, adaptable and extendable way to derive and pre-process up-to-date, Europe-wide SAR data.


2 State-of-the-Art/Research

2.1 Remote Sensing

Remote sensing offers the unique possibility of monitoring the Earth's surface by providing up-to-date and reliable information at different scales with high temporal resolution. In regions with frequent cloud cover, such as Central Europe, the amount of suitable optical data is often limited. The all-weather capability is a major advantage of SAR data over optical systems. In addition, radar sensors provide information complementary to that contained in visible-infrared imagery. In the optical range of the electromagnetic spectrum, the information depends on the reflective and emissive characteristics of the Earth's surface, whereas the radar backscatter coefficient is primarily determined by the structural and dielectric properties of the surface target [14].

Ever since the deployment of the ERS satellites in 1991 [10], the SAR technique has strongly matured, and with advancing technologies and large financial investments in the deployment of the Sentinel-1 (S1) satellite infrastructure within the Copernicus project [47], SAR data across the planet has been publicly accessible since 2014. The open-source Sentinel Application Platform (SNAP, [48]) offers tools to process the Sentinel data provided by the ESA. While the "Sentinel Hub" platform, operated by the GIS IT company Sinergise headquartered in Slovenia, offers a commercial processing engine and "free" user-friendly access to Sentinel databases, the Copernicus Data and Exploitation Platform – DE (CODE-DE, [49]), with its data center situated in Frankfurt am Main (Germany), is a German solution with a stronger orientation towards research and development. CODE-DE likewise offers direct access to all Sentinel satellite data as well as "free", user-specific, dedicated virtual machines (VMs) for further processing of the data.

2.2 Algorithm/Software Development in Science

While Python has become the most common programming language in the fields of data science [46], its easy-to-learn design and wide scope of software libraries lead to a broad range of quality in its usage. Non-programmers can apply the language without any application of software programming paradigms such as procedural or object-oriented programming. The language defers the disclosure of syntax, memory-access and logic errors to run time, and the initial awareness of ensuring memory and logic safety is not implied as it is in, e.g., Java or C++ [3, 32].

In science, the general approach of initially designing a suitable, modular, and thus scalable and maintainable software architecture when approaching a technical problem is often not taken. "Clean code", code documentation and reviewing principles are not made use of, nor is the corresponding abundance of tools, e.g. Git, that simplify these procedures.


Regular structures and infrastructures to document and process technical errors, bug reports and desired features are not necessarily established. Project management systems such as agile Scrum methodologies are not common or applied. As painfully learned in a long history of software development and legacy code, the lack of these practices leads to error-prone, non-transparent, non-reusable and non-transferable algorithms [20, 26, 34]. Furthermore, no thorough testability is possible, leaving the results subject to human error and limiting the reliability of the outcome as well as the scope of possible complexity [12, 13]. The lack of incremental feedback on results and usability, as otherwise given within agile systems, allows long-term work efforts to go in the wrong direction and the production of large datasets and algorithms to remain incomplete and faulty [21].

While from a short-term perspective, establishing such globally standard practices and tools from the tech industry might appear to be a large overhead in relation to the lifetime of time-limited, externally funded scientific projects, from a long-term perspective these practices can be transferred to a multitude of projects, allowing better interaction between researchers with similar topics, integration of results, higher efficiency within the project lifetime and better, more reliable results. This has long been observed and discussed, even from a financial point of view, e.g. in the chapter "Managing the Cost of Change" of the book "Agile Development & Business Goals" [53].

3 Previous Work

The easy, free, and open access to Sentinel-1 data, as well as to processing toolboxes and pre-processed datasets, has encouraged the widespread use of S1 data in research, industry, and practice. The application scenarios are manifold and include the usage of S1 SAR data for maritime (ocean winds, sea ice, ship and oil spill monitoring) and land (forestry, agriculture, urban deformation mapping) monitoring applications as well as emergency management tasks (flood monitoring, earthquake analysis, landslide and volcano monitoring) [11].

The S1 processing tool presented in this study was developed at the JKI. As the Federal Research Centre for Cultivated Plants in Germany, the JKI is interested in the exploitation of remote sensing data for agricultural tasks, such as the derivation of crop type information, the monitoring of soil and crop growth conditions (e.g. growth stage, LAI, biomass) and yield assessment. Recent publications demonstrating the potential of S1 data for agricultural applications include [24, 33] and [44]. The most relevant and demanded analysis-ready data (ARD) are the normalized radar backscatter (gamma nought/gamma0, [35]) and derived data products. The benefit of the eigenvector-based dual-polarization decomposition vegetation index (DpRVI) and of interferometric coherence for agricultural applications has also been shown in various studies [24, 28, 39, 40]. The coherence is often used to estimate sowing and harvest dates. In addition, it shows high correlation with the NDVI derived from optical data and could be used to improve crop growth monitoring and growth stage assessment [40]. The DpRVI can be used to assess crop biophysical parameters (biomass, LAI, [50]) until the canopy is fully developed.


The results obtained for the DpRVI outperform other, commonly used SAR parameters, namely the cross-pol ratio and the radar vegetation index [24]. Nevertheless, most of the technical solutions for the derivation of these parameters remain solely within the scope of specific application scenarios and, apart from the work of [35], there are no open source solutions or insights into the processes. Likewise, there have been multiple approaches to providing pre-processed SAR ARD cubes, as previously noted, by CODE-DE, by researchers in Australia [35] and in Switzerland [7], as well as by further commercial suppliers. Nevertheless, pre-processing steps important to the application fields are partially missing and not extendable, the time series or spatial coverage is not complete, or the data is not processed for Germany. All of this previous work has been done with the Sentinel-1 toolboxes, including SNAP. SNAP not only offers the operators for the mathematical calculations to derive the ARD but also a series of further operators, including spatial fusion of the data. Although there has been one publication including the latter [23], with an introduction and insight into the workflow of deriving the dual-pol covariance matrix using spatial operators, to our knowledge none of the other solutions include the spatial data fusion operators. Despite all previous dedication to the topic, the re-use of open source solutions to derive Sentinel-1 ARD still remains a time-consuming technical challenge.

4 Concepts for New Solutions

For this reason, we propose two solutions for public use. The first solution is the publication of an S1 SAR pre-processing tool developed at the Julius Kuehn Institute (JKI) on GitHub [51]. This tool is designed to be user-friendly, generalized for diverse use cases, and offers a range of convenient functionality. It includes processing capabilities for the calculation of the normalized radar backscatter (gamma nought, γ0), the eigenvector-based dual-polarization decomposition vegetation index (DpRVI) and the interferometric coherence. The individual processing chains (Fig. 1) consider recommendations of the CEOS Analysis Ready Data for Land (CARD4L) initiative [57] and recent publications (e.g. [24, 35, 36]).

Fig. 1 Processing workflow


Details on processing parameter settings can be found in the README document in the GitHub repository [51]. S1 Ground Range Detected (GRD) and Level-1 Single Look Complex (SLC) data, both in Interferometric Wide Swath (IW) mode and obtained via the CODE-DE platform, were used as input for the processing of the SAR ARD products.

The second solution is to provide easy and open access to an S1 data cube of pre-processed ARD products structured into 4212 non-overlapping 10 × 10 km square tiles over Germany, similar to the S2-GermanyGrid described in [5].
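For illustration, such a non-overlapping 10 × 10 km tiling can be generated in a metric CRS; the CRS and bounding box below are assumptions for demonstration only, not the exact grid definition of the data cube:

```python
import numpy as np

# Approximate bounding box of Germany in ETRS89-LAEA (EPSG:3035), metres;
# these corner values are assumptions for demonstration only.
XMIN, YMIN, XMAX, YMAX = 4_030_000, 2_680_000, 4_680_000, 3_560_000
TILE = 10_000  # 10 km

tiles = [
    (x, y, x + TILE, y + TILE)
    for x in np.arange(XMIN, XMAX, TILE)
    for y in np.arange(YMIN, YMAX, TILE)
]
print(len(tiles), "candidate tiles before clipping to the national border")
```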

4.1 The Sentinel-1 SAR Processing Tool Software Architecture

Alongside the deployment of Python 3.9, SNAP 9.0 and CODE-DE, the processing tool was developed and tested with extensive use of JupyterLab 3.2, Gitea and Git (Fig. 2, orange). The tool is roughly designed following a modular Model View Controller (MVC) software architecture (Fig. 3). All functional logic is contained in the "Controller" area, where all modules remain and can be easily extended. "Model" files, accessed by the controller logic for processing and display, as well as the SNAP graph operator files, are held separately with a clear interface to the controller. The programming is done in a procedural, object-oriented manner with a flat hierarchy. The internal communication is based on XML, the file type already introduced by the SNAP toolboxes and the S1 metadata files (Fig. 4). The internal SNAP graph operator files are likewise held in a modular manner and can be easily adapted and extended. The "View" user interfacing and execution of the tool can be done via a minimal GUI and any desktop VM tool, e.g. X2Go, directly via SSH and any command line interface, or via any regular browser in combination with JupyterLab (Fig. 2, input tools). The geo-specific input and the processed analysis-ready data (ARD) output are held in the GeoJSON and GeoTIFF formats, which are common and widely supported (Fig. 4).

Fig. 2 Applied OS tools and technologies within the infrastructure


Fig. 3 Software architecture

Fig. 4 Applied common internal formats within the infrastructure

Since the installation of all the necessary Python packages as well as the SNAP toolboxes can be quite challenging and time-consuming, a Dockerfile has been created, along with the structure necessary to create a Docker container in which the processing can be performed. Our pre-processing tool (Fig. 2, red) runs on CODE-DE VMs (green), and with the use of Python and the Python → SNAP interfacing package PyroSar, the Sentinel-1 SNAP toolboxes are triggered to execute the predefined SNAP operator XML graphs on the continuously updated underlying CODE-DE database (Fig. 2, green) by means of intermediate communication files (Fig. 4).
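For orientation, a minimal sketch of triggering SNAP from Python via PyroSar's convenience interface (the package is imported as pyroSAR); the paths and parameter values are placeholders, and the JKI tool executes its own predefined SNAP operator XML graphs rather than this shortcut:

```python
# Placeholder paths; parameter names follow pyroSAR's geocode() helper as
# we understand it. The JKI tool instead executes its own predefined SNAP
# operator XML graphs through pyroSAR/SNAP.
from pyroSAR.snap import geocode

geocode(
    infile="/codede/Sentinel-1/S1A_IW_GRDH_...zip",  # a CODE-DE scene (truncated)
    outdir="/data/ard",
    t_srs=32632,                  # target CRS, e.g. UTM zone 32N
    spacing=10,                   # output pixel spacing in metres
    polarizations=["VV", "VH"],
    scaling="dB",
)
```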


4.2 Main Focus

Complex processing performed on data sets as voluminous as the SAR data demands high computational power and storage. This is why the availability of VMs, as offered by CODE-DE, is essential for regular users to be able to generate ARD. Nevertheless, despite the high performance of the CODE-DE VMs at the JKI, processing times can surpass an acceptable maximum. The provided memory and internal cache size of the VMs can also become a bottleneck, additionally extending the processing times enormously. Due to this, one of our main points of focus was on optimizing the performance of our workflows. This we achieved with the following solutions (a parallelization sketch follows below):

1. Deployment of full multiprocessing on all available cores.
2. Temporary storage of intermediate results during the workflow.
3. Limiting the file size to small data units, resulting from the desired area of interest (AOI) for the SAR data, as early as possible.

For (1) and (2) we used the functionality already available in the Python package PyroSar. For (3) we integrated the SNAP spatial operators with the general calculation operators in the SNAP graph XML structure. Here, our intention was to stage any spatial operator that reduces the size of the original scenes to the size of the AOI as early as possible. This is demonstrated in Fig. 5, where the induced spatial operators within the workflow are marked in blue. However, since we discovered that some of these SNAP operators can be prone to error in specific cases, depending on the geolocation and size of the desired AOI, and are not well documented, we included an operation mode as a feature of the tool in which some of this functionality can be deactivated.
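To illustrate optimization (1), a minimal sketch of parallelizing scene processing with Python's standard library; the actual tool relies on PyroSar's built-in multiprocessing, and the worker below is a placeholder:

```python
from multiprocessing import Pool

def process_scene(scene_path: str) -> str:
    """Placeholder for one SNAP graph execution: clip to the AOI early,
    store intermediate results on disk, and return the output path."""
    ...

scenes = ["scene_001.zip", "scene_002.zip"]  # illustrative input list
if __name__ == "__main__":
    with Pool() as pool:  # one worker per available core by default
        results = pool.map(process_scene, scenes)
```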

Fig. 5 Workflow with induced spatial operators (blue colored)


Fig. 6 Processing time backscatter per scene clipping

The processing times for each individual clipping for the years 2019–2021, in reference to an AOI over Rhineland-Palatinate in Germany, are displayed in Figs. 6, 7 and 8. In these figures, the x-axis gives the scene/clipping number within the set of data pieces to be processed, and the y-axis the individual processing time for each data piece. AOI "RLP1" covers 19,153.612 km² and is the upper half of Rhineland-Palatinate; the other, "RLP2", covers 21,843.920 km², the lower half of Rhineland-Palatinate; a regular scene covers approximately 47,783.981 km². Processing times fluctuate mainly because of the size of the remaining clipping, which depends on the position of the processed original scene in reference to the AOI. Further fluctuation derives from the number of necessary intermediate processing steps (dependent on the position in reference to the AOI) and the current workload of the CODE-DE VM servers. The obvious spikes in the processing times of the backscatter and DpRVI around the 500th and 800th scene, respectively, clearly display a stagnation in processing due to maintenance work done on the VMs by CODE-DE during this period (Figs. 6 and 8). With PyroSar's internally implemented multiprocessing, the performance improved significantly (from approx. 10–12 hours to approx. 2–3 hours) in comparison to using the ESA SNAP/Python interface package "Snappy". Likewise, with the temporary storage of intermediate data (to bypass the random access memory (RAM) becoming a bottleneck), also provided by PyroSar, the performance could be enhanced even further (from approx. 2–3 hours to approx. 16–25 minutes). By then reducing the intermediate file sizes at an early stage, we achieved even better performance: for example, reducing the data size from a full scene to an AOI 2% of its size cut the backscatter calculation time from 16 to 1.33 minutes.


Fig. 7 Dual-pol vegetation index processing time per scene clipping

Fig. 8 Coherence processing time per scene clipping

By subsequently reducing the intermediate file sizes at an early stage, we achieved even better performance: for example, reducing the data size from a full scene to an AOI of 2% of its size cut the backscatter calculation time from 16 to 1.33 minutes. Curiously, when comparing processing times on VMs with different CPUs (16 cores vs. 32 cores) within the JKI CODE-DE infrastructure, no significant improvement was detected.

Another of our main points of focus was to enhance the precision of the ARD by analyzing, adapting, and extending the parameters of the workflow in comparison to those we had extracted and applied from previous work.


For example, when generating the backscatter output, we use the ESA pre-processed GRD data on the one hand and, on the other, integrate the Terrain-Flattening operator into the processing graph. Further adaptations of the parameters are displayed in Fig. 1. A sketch of executing such a predefined graph from Python follows below.
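As an illustration, the following hedged sketch calls ESA's gpt command-line tool from Python. The graph file name and the parameter names passed with -P are examples only and must match the ${...} placeholders defined in the actual graph XML.

```python
import subprocess

def run_snap_graph(graph_xml: str, scene: str, output: str) -> None:
    # Execute a predefined SNAP graph with the gpt command-line tool.
    # -P substitutes values into ${input}/${output} placeholders in the XML;
    # -q limits the parallelism gpt uses for tile computation.
    cmd = [
        "gpt", graph_xml,
        f"-Pinput={scene}",
        f"-Poutput={output}",
        "-q", "16",
    ]
    subprocess.run(cmd, check=True)

# Example: a backscatter graph whose operator chain includes
# Terrain-Flattening between calibration and terrain correction.
run_snap_graph("backscatter_tf.xml", "S1A_scene.zip", "S1A_scene_gamma0")
```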

5 Challenges and Limitations

While developing the S1 SAR pre-processing tool and generating ARD for diverse observation requests and applications, we loosely applied an agile, user- and outcome-driven procedure. However, this was only possible within the given JKI infrastructure and without an automated testing framework.

When evaluating the available tools and packages for S1 SAR data access, these partially appeared as black boxes, owing to the sometimes limited documentation and transparency of their internal functioning, which led to time-consuming trial-and-error attempts at combining the available features. Both the ESA STEP forum for exchange on the use of the S1 tools and the CODE-DE forum, with its small active community, still lack content as a sufficient source of information on all relevant topics. The most thorough information source is the open SNAP project development platform, which is, however, very developer-specific and not adequate for general use. pyroSAR is well documented and allows good insight into its internals; its interface, though simple and clear, is nevertheless narrow and not suitable for the direct execution of predefined SNAP graph XML files, for which we had to make adaptations. While Copernicus offers a multitude of handbooks and reports on the satellite infrastructure and products [54–56], the exact behavior of the satellites and the structure of the resulting products can likewise only be determined through extensive use and observation.

We also found that the list of available Copernicus scenes over a specific area and time differs between the Sentinel-Hub and CODE-DE databases. When accessing the CODE-DE database to derive pre-processed ARD, a lot of pre-filtering is still necessary beforehand, e.g. the removal of duplicates, to receive a uniform dataset suitable for the SNAP graph operator workflow (a sketch of such a deduplication step closes this section). Due to the lack of precise documentation on how to integrate the SNAP graph spatial fusion operators, and to ensure correctness, we limited the functionality of the tool to an upper AOI size of 200% of the original S1 scene size.

When executing the tool to process ARD, the performance depends not only on our algorithms and the applied tools but also on the performance of CODE-DE. We noticed that the performance varies strongly with time and day, independent of our internal JKI workload. At times the processing stagnates completely, making the expected calculation times unpredictable, as seen in the outstanding maximum peaks in Figs. 6, 7, and 8.

In general, as in software development at large, ensuring the completeness and correctness of the algorithms and, in this case, of the resulting ARD without a testing infrastructure is a demanding task.


Up until now, we have only randomly checked individual samples of the data for each processing sequence we performed. Despite all efforts, we cannot ensure completeness and correctness for all application scenarios, and further testing and adaptations will be necessary.
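As an example of the pre-filtering mentioned above, the following sketch deduplicates a scene list by acquisition. It assumes standard Sentinel-1 product naming, in which the underscore-separated fields include the mission identifier and the acquisition start timestamp; the field index and the tie-breaking rule are simplifying assumptions.

```python
from pathlib import Path

def deduplicate_scenes(scene_paths):
    # Identify each acquisition by (mission, acquisition start time); for
    # standard S1 names such as S1A_IW_GRDH_1SDV_20200101T053000_..., the
    # start timestamp is the fifth underscore-separated field (index 4).
    unique = {}
    for p in map(Path, scene_paths):
        fields = p.stem.split("_")
        key = (fields[0], fields[4])
        # Keep the lexicographically newest duplicate, assuming it stems
        # from the later processing run.
        if key not in unique or p.name > unique[key].name:
            unique[key] = p
    return sorted(unique.values())

scenes = deduplicate_scenes(Path("scenes").glob("S1*_GRD*.zip"))
print(f"{len(scenes)} unique acquisitions")
```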

6 Conclusion

Earth observation with SAR data and farming predictions with machine learning (ML) methods remain a promising response to upcoming environmental changes, and these solutions should be publicly available. Despite substantial investment in the deployment and provision of SAR-data-relevant technologies, applying them still requires strong technical or Sentinel-1-specific expertise and can result in error-prone data. In addition to the challenge of acquiring adequate reference data for ML, which is often limited by a lack of trust in data sharing and a lack of agreements on standards in diverse fields, the topic of open source SAR ARD remains open. More collaborative work must be done to ensure the prerequisite of a free, continuously up-to-date, worldwide, complete and ready-to-use database of SAR data.

Nonetheless, we have shown that by combining interdisciplinary skills and knowledge and applying solely open source, commonly practiced, modern software development formats, methods and tools, our derived solutions are shareable, adaptable and extendable to more than one field and/or problem and allow open source community activity. This approach has proven to address many of the issues raised in Sect. 2.2. Our S1 SAR processing tool, designed with the architecture/infrastructure described in Sect. 4, is available open source and can serve as inspiration for further development, error handling, and tool/database evaluation.

Hands-on interaction and communication between fields of expertise have led to field-specific, yet re-applicable results. Integrating technical knowledge into the non-digital field of agriculture has allowed a more novel and generalized approach to finding solutions and integrating supportive technologies. With these new insights, continuing to integrate technical knowledge into such scientific fields appears a necessity. For this, its importance must be made more public, and further interdisciplinary cooperations and collaborations must be created. We see this as a promising way to speed up the process of performing real-time, reliable, convincing and applicable analyses of geo-observations and offering adequate farming advice, particularly in view of the sudden, long-lasting weather changes resulting from environmental change. To motivate such collaborations, working in these fields must become more attractive to technology experts, and the open source paradigm, the idea of solution re-usability, and the agile approach must become more common topics among researchers. Assigned time and on-the-job training options could allow scientists from these fields to develop and update their technical skills and experience the support these methods can offer.


References

1. Agency, R.: Exponential Growth in Technology. https://www.rehabagency.ai/insights/exponential-technology-growth (2022). Accessed 13 Jun 2023
2. Altamira, T.: InSar At a Glance. https://site.tre-altamira.com/insar/. Accessed 13 Jun 2023
3. Ascher, D.: Dynamic Languages – Ready For The Next Challenges, By Design. https://people.dsv.su.se/~beatrice/DYPL/ascher.pdf (2004). Accessed 13 Jun 2023
4. Bayılmış, C., Ebleme, M.A., Çavuşoğlu, Ü., Küçük, K., Sevin, A.: A survey on communication protocols and performance evaluations for Internet of Things. Digital Commun. Netw. 8(6), 1094–1104 (2022). ISSN: 2352-8648. https://www.sciencedirect.com/science/article/pii/S2352864822000347
5. Beyer, F., Brandt, P., Schmidt, M., Stahl, U., Golla, B., Gerighausen, H., Möller, M.: A paradigm shift towards decentralized cloud-integrated spatial data infrastructures: lessons learned and solutions provided for public authorities. Preprint. https://doi.org/10.31223/X53H3N
6. Kamp, P.-H.: The software industry is still the problem. https://queue.acm.org/detail.cfm?id=3489045 (2021). Accessed 13 Jun 2023
7. Chatenoux, B., Röösli, C., Wingate, V., Poussin, C., Rodila, D., Peduzzi, P., Steinmeier, C., Ginzler, C., Psomas, A., Schaepman, M., Giuliani, G.: The Swiss data cube, analysis ready data archive using earth observations of Switzerland. Sci. Data 8, 295 (2021)
8. De Petris, S., Sarvia, F., Gullino, M., Tarantino, E., Borgogno-Mondino, E.: Sentinel-1 polarimetry to map apple orchard damage after a storm. Remote Sens. 13(5) (2021). ISSN: 2072-4292. https://www.mdpi.com/2072-4292/13/5/1030
9. Denning, P.J., Lewis, T.G.: Exponential laws of computing growth. https://turing.plymouth.edu/~zshen/Webfiles/notes/CS322/mooreCACM012017.pdf (2017). Accessed 13 Jun 2023
10. ERS: ERS At a Glance. https://www.esa.int/Applications/Observing_the_Earth/ERS_at_a_glance. Accessed 13 Jun 2023
11. European Space Agency ESA: Sentinel Online: Applications. https://sentinels.copernicus.eu/web/sentinel/user-guides/sentinel-1-sar/applications. Accessed 18 Aug 2023
12. Gao, J., Gupta, K., Gupta, S., Shim, S.: On building testable software components. In: Dean, J., Gravel, A. (eds.) COTS-Based Software Systems, pp. 108–121. Springer, Berlin (2002). ISBN: 978-3-540-45588-2
13. Gears of Testing: Testable Architecture. https://gearsoftesting.org/testable-architecture.html. Accessed 13 Jun 2023
14. Gibson, P.J., Power, C.H.: Introductory Remote Sensing. Digital Image Processing and Applications, 249 pp. Routledge, London (2000). https://doi.org/10.1017/S0016756801244951
15. Government, U.S.: The Launch of Sputnik. https://2001-2009.state.gov/r/pa/ho/time/lw/103729.htm (2001). Accessed 13 Jun 2023
16. Haltian: Wireless IoT communication protocols comparison. https://haltian.com/resource/iot-communication-protocols-comparison/ (2019). Accessed 13 Jun 2023
17. Hegener, K.: Agriculture and climate change. https://www.giz.de/expertise/html/60132.html. Accessed 13 Jun 2023
18. Hoja, D., Reinartz, P., Schroeder, M.: Comparison of DEM generation and combination methods using high resolution optical stereo imagery and interferometric SAR data (Jan 2006)
19. HOPE: Der erste Computer der Welt: Wer war der Erfinder des Computers? https://www.computerhope.com/issues/ch000984.html (2022). Accessed 13 Jun 2023
20. Huttunen, A.: How to Prevent Legacy Code From Emerging. https://www.arhohuttunen.com/prevent-legacy-code-from-emerging/ (2023). Accessed 13 Jun 2023
21. Inflectra: What is Agile Scrum Methodology? https://www.inflectra.com/Methodologies/Scrum.aspx. Accessed 13 Jun 2023
22. Janse van Rensburg, G., Kemp, J.: The use of C-band and X-band SAR with machine learning for detecting small-scale mining. Remote Sens. 14(4) (2022). ISSN: 2072-4292. https://www.mdpi.com/2072-4292/14/4/977


23. Mandal, D., Vaka, D.S., Bhogapurapu, N., Vanama, V.S.K., Kumar, V., Rao, Y., Bhattacharya, A.: Sentinel-1 SLC preprocessing workflow for polarimetric applications: a generic practice for generating dual-pol covariance matrix elements in SNAP S-1 Toolbox (Nov 2019)
24. Mandal, D., Kumar, V., Ratha, D., Dey, S., Bhattacharya, A., Lopez-Sanchez, J.M., McNairn, H., Rao, Y.S.: Dual polarimetric radar vegetation index for crop growth monitoring using Sentinel-1 SAR data. Remote Sens. Environ. 247, 111954 (2020). ISSN: 0034-4257. https://www.sciencedirect.com/science/article/pii/S0034425720303242
25. McCain, A.: How Fast Is Technology Advancing? https://www.zippia.com/advice/how-fast-is-technology-advancing/ (2023). Accessed 13 Jun 2023
26. McNeilly, A.: Creating a better developer experience by avoiding legacy code. https://dev.to/adammc331/creating-a-better-developer-experience-by-avoiding-legacy-code-22dc (2020). Accessed 13 Jun 2023
27. Mijinyawa, K.: Acceptance of Open Source Software. https://doi.org/10.13140/RG.2.1.1905.8400 (Aug 2015)
28. Mishra, D., Pathak, G., Singh, B.P., Mohit, Sihag, P., Rajeev, Singh, K., Singh, S.: Crop classification by using dual-pol SAR vegetation indices derived from Sentinel-1 SAR-C data. Environ. Monit. Assess. 195(1), 115 (2022). ISSN: 1573-2959. https://doi.org/10.1007/s10661-022-10591-x
29. Opensource.org: History of the OSI. https://opensource.org/history/ (2018). Accessed 13 Jun 2023
30. Partida, D.: Top Open Source Companies 2023. https://www.datamation.com/open-source/35-top-open-source-companies/ (2023). Accessed 13 Jun 2023
31. Plank, S.: Rapid damage assessment by means of multi-temporal SAR – a comprehensive review and outlook to Sentinel-1. Remote Sens. 6(6), 4870–4906 (2014). ISSN: 2072-4292. https://www.mdpi.com/2072-4292/6/6/4870
32. Prechelt, L.: Are scripting languages any good? A validation of Perl, Python, Rexx, and Tcl against C, C++, and Java. Adv. Comput. 57, 205 (2003)
33. Salma, S., Keerthana, N., Dodamani, B.: Target decomposition using dual-polarization Sentinel-1 SAR data: study on crop growth analysis. Remote Sens. Appl. Soc. Environ. 28, 100854 (2022). ISSN: 2352-9385. https://www.sciencedirect.com/science/article/pii/S2352938522001628
34. Schweighofer, A.: What is legacy code and how to avoid it? https://andreschweighofer.com/tech/what-is-legacy-code-and-how-to-avoid-it/. Accessed 13 Jun 2023
35. Ticehurst, C., Zhou, Z.-S., Lehmann, E., Yuan, F., Thankappan, M., Rosenqvist, A., Lewis, B., Paget, M.: Building a SAR-enabled data cube capability in Australia using SAR analysis ready data. Data 4(3) (2019). ISSN: 2306-5729. https://www.mdpi.com/2306-5729/4/3/100
36. Truckenbrodt, J., Freemantle, T., Williams, C., Jones, T., Small, D., Dubois, C.: Towards Sentinel-1 SAR analysis-ready data: a best practices assessment on preparing backscatter data for the cube. Data 4(3), 93 (2019). https://doi.org/10.3390/data4030093
37. UNICEF: Water and the global climate crisis. https://www.unicef.org/stories/water-and-climate-change-10-things-you-should-know (2023). Accessed 13 Jun 2023
38. Verlag, B.: When was the first computer invented? https://bmu-verlag.de/der-erste-computer-der-welt/. Accessed 13 Jun 2023
39. Villarroya-Carpio, A., Lopez-Sanchez, J.M.: Multi-annual evaluation of time series of Sentinel-1 interferometric coherence as a tool for crop monitoring. Sensors 23(4) (2023). ISSN: 1424-8220. https://www.mdpi.com/1424-8220/23/4/1833
40. Villarroya-Carpio, A., Lopez-Sanchez, J.M., Engdahl, M.E.: Sentinel-1 interferometric coherence as a vegetation index for agriculture. Remote Sens. Environ. 280, 113208 (2022). ISSN: 0034-4257. https://www.sciencedirect.com/science/article/pii/S0034425722003169
41. Weinberg, G.M.: The Psychology of Computer Programming, Annual edn. Dorset House, New York (1998)
42. Weinberg, G.M.: Gerald M. Weinberg About Software. http://geraldmweinberg.com/Site/Software.html. Accessed 13 Jun 2023


43. Xun, Z., Zhao, C., Kang, Y., Liu, X., Liu, Y., Du, C.: Automatic extraction of potential landslides by integrating an optical remote sensing image with an InSAR-derived deformation map. Remote Sens. 14(11) (2022). ISSN: 2072-4292. https://www.mdpi.com/2072-4292/14/11/2669
44. Yadav, V.P., Prasad, R., Bala, R., Srivastava, P.K., Vanama, V.S.K.: Appraisal of dual polarimetric radar vegetation index in first order microwave scattering algorithm using Sentinel-1A (C-band) and ALOS-2 (L-band) SAR data. Geocarto Int. 37(21), 6232–6250 (2022). https://doi.org/10.1080/10106049.2021.1933209
45. Yohannes, H.: A review on relationship between climate change and agriculture. J. Earth Sci. Clim. Change 7, 1–8 (2015)
46. Zaveria: Top 10 Programming Languages in 2023. https://www.analyticsinsight.net/top-10-programming-languages-in-2023-with-the-largest-developer-communities/ (2023). Accessed 13 Jun 2023
47. ESA. https://scihub.copernicus.eu/ (2023). Accessed 4 Aug 2023
48. SNAP. https://earth.esa.int/eogateway/tools/snap (2023). Accessed 4 Aug 2023
49. CODE-DE. https://code-de.org/de/ (2023). Accessed 4 Aug 2023
50. Karlmarx, T.: Derivation of crop parameters using Sentinel-1 SAR data: a case study for winter wheat in northern Germany (2023). https://doi.org/10.5073/20230612-103122-0. Published: 27 Jun 2023
51. McClelland, J.: JenniferMcCl/Sentinel-1_SAR-Data-Processing: Sentinel-1_SAR-Data-Processing_V.1.0-beta (2023). Published: 04 Aug 2023. https://zenodo.org/record/8214935
52. Wilgenbusch, et al.: Big data promises and obstacles: agricultural data ownership and privacy. https://acsess.onlinelibrary.wiley.com/doi/epdf/10.1002/agj2.21182 (2023). Accessed 4 Aug 2023
53. Holtsnider, B., et al.: Agile Development and Business Goals. https://www.sciencedirect.com/book/9780123815200/agile-development-and-business-goals#book-description (2023). Accessed 04 Aug 2023
54. Copernicus: Sentinel Online. https://sentinels.copernicus.eu/web/sentinel/missions/sentinel-1 (2023). Accessed 04 Aug 2023
55. Copernicus: SciHub. https://scihub.copernicus.eu/userguide/ (2023). Accessed 04 Aug 2023
56. ESA: SNAP Command Line Tutorial. http://step.esa.int/docs/tutorials/SNAP_CommandLine_Tutorial.pdf (2023). Accessed 04 Aug 2023
57. CEOS: ARD. https://ceos.org/ard/ (2023). Accessed 04 Aug 2023

Part II

Technological Advances and Sustainability

Developing a Digitisation Dashboard for Industry-Level Analysis of the ICT Sector

Timothy Musharu and Jorge Marx Gómez

Abstract The digital revolution in the Information and Communications Technology (ICT) sector necessitates advanced analytical tools to understand industry dynamics and support strategic decision-making. This article presents the development of a digitization Dashboard for industry-level analysis of the ICT sector. The study aims to fill the research gap in comprehensive industry-level analytical instruments and provide valuable insights for managers, policymakers, and industry stakeholders. The research questions focus on identifying technological advancements, understanding interconnections between technologies, and predicting industry growth. A comprehensive literature review was conducted, covering various sectors related to ICT, digitization trends, and industry-level analysis. The review highlighted the need for a specialized Dashboard to integrate and visualize data across diverse technological domains within the ICT sector. The methodology employed a hybrid approach using Design Science Research, combining quantitative data analysis with software development. Industry data, including patent analysis and technological trends, were collected and processed during the analysis phase. Prototypes of the Dashboard were developed in the design and development phase, based on requirements from the literature and industry standards, and the Dashboard underwent iterative improvements based on user feedback and usability testing. The evaluation of the digitization Dashboard assessed its functionality, usability, and effectiveness in providing industry-level insights. The results demonstrate that the Dashboard offers valuable visual representations, trend analysis, and forecasting capabilities, empowering stakeholders to make informed decisions. Limitations of the study include the reliance on quantitative data analysis, which limits the inclusion of qualitative insights, and the need for further validation of the Dashboard's impact in real-world scenarios and with diverse groups of users.

T. Musharu () · J. Marx Gómez
Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
e-mail: [email protected]


Future research should explore the integration of further machine learning techniques on patent data sources and conduct user-centric evaluations to enhance the comprehensiveness and applicability of the digitization Dashboard. Continuous updates and expansions of the Dashboard's functionalities are needed to accommodate emerging technological trends and evolving industry dynamics.

Keywords Digitization · Industry analysis · Information communication technology · Patent analysis

1 Introduction

The Information and Communications Technology (ICT) sector is undergoing a digital revolution, necessitating advanced analytical tools to understand industry dynamics and support strategic decision-making [1]. The development of a digitization Dashboard for industry-level analysis of the ICT sector is a significant step in this direction. This paper presents the development and evaluation of such a Dashboard, aiming to fill the research gap in comprehensive industry-level analytical instruments and to provide valuable insights for managers, policymakers, and other industry stakeholders.

The need for a specialized Dashboard to integrate and visualize data across diverse technological domains within the ICT sector has been highlighted in various studies. For instance, Kotarba's study "Measuring Digitalization: Key Metrics" discusses the importance of metrics in evaluating digital progress across different levels, from the digital economy to society, industry, enterprise, and clients [2]. This study underscores the need for a specialized tool that can integrate and visualize these metrics in a meaningful and accessible way, which is precisely what the digitization Dashboard aims to achieve. Moreover, Bharadwaj et al. [3] highlighted the role of digitization in transforming business processes and enabling new digital business models. They argued that digitization is not just about the use of technology, but also about how technology is integrated and managed. This perspective underlines the importance of a tool like a digitization Dashboard, which can help integrate and visualize data across diverse technological domains within the ICT sector.

The digitization Dashboard is developed using a hybrid approach, combining quantitative data analysis with software development. It incorporates industry data, specifically patent analysis and technological trends, and undergoes iterative improvements based on user feedback and usability testing. The Dashboard's effectiveness in providing industry-level insights is evaluated, demonstrating its valuable visual representations, trend analysis, and forecasting capabilities.

The paper is structured as follows: following the introduction, a comprehensive literature review on the ICT sector, digitization trends, and industry-level analysis is presented.


This is followed by a description of the Design Science Research methodology employed in this study, which explicates the problem and iteratively refines the artefact; this methodology combines quantitative data analysis with software development. The subsequent section details the development of the digitization Dashboard, including its requirements analysis and design. The results and findings of the study are then presented, followed by a discussion of their implications. The paper concludes with a recap of the research objectives and main findings, the contributions of the study, and recommendations for future research. The study acknowledges certain limitations, such as the reliance on quantitative data analysis, which limits the inclusion of qualitative insights, and it recognizes the need for further validation of the Dashboard's impact in real-world scenarios. Future research should explore the integration of qualitative data sources and develop user-centric evaluations to enhance the comprehensiveness and applicability of the digitization Dashboard.

2 Literature Review

2.1 Overview

The Information and Communication Technology (ICT) sector has experienced significant evolution, driven by key technological advancements. The sector's growth has been propelled by three fundamental laws: Moore's Law, Butters' Law, and Kryder's Law, which describe the exponential growth in processing power, data transfer performance, and storage capacity, respectively. However, recent studies indicate that technological progress in processing power has plateaued due to physical constraints [4, 5]. This could potentially impact the growth and development of other fields such as artificial intelligence, big data, and the Internet of Things, which rely heavily on these advancements [6].

For most firms, the motivation to digitize processes stems from a firm belief in achieving improved organizational performance and gaining a competitive edge, which is essential for both survival and growth [1, 7, 8]. Current digitization trends in the ICT sector include the rise of artificial intelligence, big data, blockchain, virtual reality, augmented reality, cloud computing, and the Internet of Things [9]. These trends have significantly influenced the sector, leading to the transformation of traditional approaches into ones that rely on digital assets and information flow [10]. However, privacy and security concerns have emerged as significant inhibitors of digitization, even more so than costs, indicating the need for robust strategic security measures and legal frameworks to facilitate digitization [1, 7, 9, 10]. This is because, despite the growth of these technologies, the primary driver is data: everything boils down to data being the propellant of digitization.


As a fundamental component of the digital age, data is indeed the propellant of digitization. This assertion is supported by various studies across different sectors. For instance, in the construction sector [11], the demand for simplification and transparency in information management, as well as the rationalization and optimization of fragmented processes, has been identified as a key driver for digitization. Similarly, in the agricultural sector [12], the phenomenon of Big Data, which involves capturing, analyzing, and using massive volumes of data for decision-making, has been recognized as a crucial element in the development of Smart Farming.

Moreover, the role of data in driving digitization is not limited to specific sectors but extends to the broader context of digital transformation in companies. Zhu et al. [13] argue that the use of data and digital technologies can drive real-time operational decisions, redesign business processes, and even lead to game-changing business models. Coreynen et al. [14] also highlight the importance of digital assets, including data, in achieving higher service levels and integrating into customers' processes.

In the context of the ICT sector, the importance of data becomes even more pronounced. The advancements in processing power, data transfer performance, and storage capacity described by Moore's Law, Butters' Law, and Kryder's Law have not only driven the growth of the ICT sector but have also enabled the development of new technologies such as Artificial Intelligence, Big Data, Cloud Computing, Blockchain, and the Internet of Things. Table 1 lists key technological trends identified across different sectors of ICT. These technologies, in turn, rely heavily on the availability and analysis of large volumes of data. Therefore, it can be argued that data is not just a by-product of digitization but a fundamental driver that fuels the growth and evolution of the ICT sector. Figure 1 shows the effect of Moore's Law on computers: as they have increased in power and shrunk in size, a new class of machines has emerged every ten years.

2.2 Industry Analysis

Industry analysis is a critical aspect of strategic market planning. It involves the examination of the economic, political, and market forces that influence the way an industry operates. The goal is to understand the strengths and weaknesses of the industry, the opportunities for growth, and the threats posed by or to industry players or by market changes. This analysis provides a clear understanding of the industry's attractiveness, trends, and prospects. Table 2 presents a range of models available for conducting industry analysis within the ICT sector, with a concise overview of each model's benefits, drawbacks, and applicability to patent data.

Industry analysis remains a method used by businesses and their stakeholders to understand their position relative to other businesses within the same field [23].

Table 1 A summary of studies on ICT trends, key technologies, and relevance

Paper Title: An Overview of Trends in Information Systems: Emerging Technologies that Transform the Information Technology Industry [9]
Main Focus: Trends in information systems and technology
Key Technologies: Cloud computing, IoT, AI, blockchain, big data analytics, virtual and augmented reality, 5G network
Relevance: The paper emphasizes the need for organizations to quickly adapt to the rapid changes in technology to stay competitive. It discusses the most widely used trends in information systems and technology and their contribution to meeting technology-enabled consumer demands.

Paper Title: Emerging Technologies to Combat the COVID-19 Pandemic
Main Focus: Adoption of emerging technologies to combat COVID-19
Key Technologies: Artificial Intelligence, Cloud Computing, Big Data, Telemedicine, Blockchain, 5G, Internet of Things (IoT), Drones, Robotics, Modern Enterprise Video Communications Platforms, Additive Manufacturing, Smartphone Apps
Relevance: The paper discusses the urgent need for the adoption of the latest emerging technologies at the global level to combat the COVID-19 pandemic. It emphasizes the role of these technologies in population screening, infection tracking, vaccine development, effective quarantine, prioritizing the use and allocation of resources, and designing targeted responses.

Paper Title: Blockchain Research in Information Systems: Current Trends and an Inclusive Future Research Agenda
Main Focus: Information Systems (IS) research on trends in the blockchain
Key Technologies: Blockchain, IoT, smart contracts
Relevance: The paper is relevant to researchers and practitioners in the field of Information Systems and blockchain technology, as it gives insight into blockchain, IoT, and smart contracts.

Paper Title: The Role of Information Systems in Healthcare: Current Research and Future Trends
Main Focus: The role of information systems in the healthcare sector, focusing on current research and future trends
Key Technologies: Electronic health records (EHR), personal health records (PHR), social media platforms for health-oriented communities, and healthcare analytics
Relevance: The paper is relevant to researchers, practitioners, policymakers, and stakeholders in the healthcare sector. It provides insights into the role of information systems in healthcare, the factors influencing the sharing of health information, and the potential of digital technology in improving healthcare outcomes.


Fig. 1 The progression of computer size over the years, coupled with the increase in processing power, in line with Moore's Law

It involves examining the economic, political, and market dynamics that affect the way an industry operates. This analysis is often conducted as part of the strategic planning process to identify opportunities and threats within the industry. There are several key components of industry analysis, as identified by [23–25]:

• Market Size: This refers to the total sales volume or number of customers in the industry. It helps businesses understand the potential for growth in the industry.
• Competitors: This involves identifying other businesses in the industry that offer similar products or services. It includes understanding their strengths, weaknesses, market share, and strategies.
• Industry Trends: These could be technological advancements, changes in consumer behaviour, or regulatory changes.
• Barriers to Entry: This refers to the challenges new entrants face when trying to enter the industry. These could be high startup costs, regulatory requirements, or strong brand loyalty for existing businesses.
• Profitability: This involves assessing the potential for businesses in the industry to make a profit. It includes understanding the industry's cost structure and the businesses' pricing power.
• Regulation: This involves understanding the laws and regulations that govern the industry. These can affect how businesses operate and their growth potential.

Table 2 Existing possible industry analysis models

Model: Porter's Five Forces [15]
Advantages: A comprehensive view of competitive forces; considers suppliers, buyers, substitutes, new entrants, and industry rivalry
Limitation: Mostly static; doesn't consider macro-environmental factors
Suitability for Patent Data: High; patent data can provide insights into all five forces

Model: PESTEL Analysis [16]
Advantages: Considers macro-environmental factors
Limitation: Doesn't consider competitive forces; mostly static
Suitability for Patent Data: Medium; patent data can provide insights into technological factors, but less so into political, economic, sociocultural, environmental, and legal factors

Model: SWOT-Analysis [17]
Advantages: Considers both internal and external factors
Limitation: Mostly static; doesn't provide a detailed view of competitive forces
Suitability for Patent Data: Medium; patent data can provide insights into technological strengths and opportunities, but less so into weaknesses and threats

Model: Value Chain Analysis [18]
Advantages: Detailed view of a company's activities
Limitation: Mostly internal focus; doesn't consider competitive forces
Suitability for Patent Data: Low; patent data doesn't directly relate to most value chain activities

Model: VRIO-Framework [19]
Advantages: Considers resources and capabilities that can provide a competitive advantage
Limitation: Mostly internal focus; doesn't consider competitive forces
Suitability for Patent Data: Medium; patent data can provide insights into valuable and rare resources (patents), but less so into imitability and organization

Model: Ansoff Matrix [20]
Advantages: Provides growth strategies
Limitation: Doesn't consider competitive forces or internal capabilities
Suitability for Patent Data: Low; patent data doesn't directly relate to market penetration, market development, product development, or diversification strategies

Model: BCG-Matrix [21]
Advantages: Provides a view of a company's product portfolio
Limitation: Doesn't consider competitive forces or internal capabilities
Suitability for Patent Data: Low; patent data doesn't directly relate to market growth or market share

Model: McKinsey 7S Framework [22]
Advantages: Considers both hard (strategy, structure, systems) and soft (shared values, skills, style, staff) elements
Limitation: Mostly internal focus; doesn't consider competitive forces
Suitability for Patent Data: Low; patent data can provide insights into systems (patent systems), but less so into other elements



One popular framework for conducting an industry analysis is Porter's Five Forces, which considers the bargaining power of suppliers, the bargaining power of buyers, the threat of new entrants, the threat of substitute products or services, and the intensity of competitive rivalry [26]. By conducting industry analysis, businesses can gain a better understanding of the dynamics of their industry, which can help them make more informed strategic decisions.

This paper focuses on using patents for industry analysis due to the inherent value and wealth of information that patents can provide about technological advancements, innovation trends, and competitive landscapes within industries [22]. In the context of patent data, Porter's Five Forces provides a comprehensive view of the competitive forces that a company or an industry might face. Patent data can provide insights into the threat of new entrants (through new patents), the bargaining power of suppliers (through patents owned by suppliers), the bargaining power of buyers (through patents owned by buyers), the threat of substitute products or services (through patents for potential substitutes), and the intensity of competitive rivalry (through patents owned by competitors). Therefore, Porter's Five Forces is highly suitable for industry analysis using patent data.

2.3 Patent Analysis for Industry Analysis

Patent data holds immense potential for industry analysis due to its unique and valuable insights into technological advancements and competitive dynamics. As outlined by Wang et al. [27], patents serve as indicators of innovation, representing the culmination of research and development efforts. They offer a tangible record of a company's technological progress and can shed light on emerging trends and competitive strategies. Furthermore, patent data enables a proactive understanding of technological shifts, as highlighted by Miric et al. [28]. By analysing patent trends, industry players can anticipate changes in technology and market demand, gaining a competitive edge in adapting to these shifts.

A variety of commercial tools are available to facilitate patent analysis for industry analysis purposes [28]. Noteworthy among these are PatSnap, offering a comprehensive platform for patent data search, analysis, and visualization; Thomson Innovation (now Clarivate Analytics), providing patent intelligence tools for research and decision-making; Derwent Innovation, specializing in IP research, competitor analysis, and technology trend assessment; and Questel Orbit, featuring patent searching, analysis, and collaborative capabilities. IP.com offers a platform for patent data analysis and intellectual property insights, while Anaqua focuses on patent portfolio analysis and competitive intelligence. Aureka by Minesoft offers patent family analysis and legal status tracking, and Innography, now part of CPA Global, delivers patent portfolio optimization and valuation tools. PatentSight provides data-driven insights for portfolio benchmarking, and Cipher (formerly Aistemos) specializes in IP strategy, competitive intelligence, and technology trends analysis.


The only drawback of these tools is that none of them has a specific focus on ICT-sector-related patents.

Patent analysis is a crucial aspect of industry analysis, especially in industries where innovation and technological advancement are key competitive factors. It involves examining the patents held by companies in an industry to gain insights into their research and development activities, their strategic direction, and their potential for future growth. Patent analysis can be linked with Porter's Five Forces in the following way to enable industry analysis:

• Threat of New Entrants: Patents can function as a significant barrier to entry. If existing companies in the industry hold key patents, it can be difficult for new entrants to compete without infringing on these patents. A thorough patent analysis can help identify these barriers and assess the threat of new entrants.
• Bargaining Power of Suppliers: If suppliers hold patents for key technologies or processes, they may have increased bargaining power. A patent analysis can help identify these patents and assess their potential impact on the industry.
• Bargaining Power of Buyers: In some cases, buyers may hold patents that impact the industry. For example, in the pharmaceutical industry, healthcare providers may hold patents for certain treatments or procedures. Patent analysis in the ICT industry can help identify strategic patents and assess their impact on the bargaining power of buyers.
• Threat of Substitute Products or Services: Patents can protect against the threat of substitute products or services. If a company holds a patent for a unique product or technology, it can prevent competitors from offering similar products or services. A patent analysis can help identify these patents and assess the threat of substitutes.
• Competitive Rivalry: Patent analysis can provide insights into the competitive dynamics of an industry. By examining the patents held by competitors, a company can gain insights into their research and development activities, their strategic direction, and their potential for future growth. This can help the company develop strategies to compete more effectively.

Figure 2 highlights how patent analysis can interlink with Porter's five forces for industry analysis. In summary, patent analysis can provide valuable insights that inform a company's strategic planning process. By understanding the patents held by companies in an industry, stakeholders can better understand the competitive landscape and make more informed strategic decisions. The key to such analysis is being able to draw insights, which can be aided by a Dashboard that visualizes metrics related to patents and the industry, as illustrated in the sketch below.
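As an illustration, the sketch below maps raw patent records onto two of the five forces. The input file and its columns (assignee, filing_date) are assumptions about an exported patent data set, not a prescribed schema, and the cut-off date used for "new entrants" is arbitrary.

```python
import pandas as pd

# Hypothetical export with one row per patent; column names are assumed.
patents = pd.read_csv("ict_patents.csv", parse_dates=["filing_date"])

# Competitive rivalry: filings per assignee and year show who is filing
# most actively and how that activity develops over time.
rivalry = (
    patents.assign(year=patents["filing_date"].dt.year)
    .groupby(["assignee", "year"])
    .size()
    .rename("filings")
    .reset_index()
)

# Threat of new entrants: assignees whose first filing is recent.
first_seen = patents.groupby("assignee")["filing_date"].min()
new_entrants = first_seen[first_seen >= "2021-01-01"].index.tolist()

print(rivalry.sort_values("filings", ascending=False).head(10))
print("Recent entrants:", new_entrants[:10])
```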


Fig. 2 Porter's five forces and how patent analysis can inform insights on each force

2.4 Dashboards for Visualization

The strategic importance of industry analysis in business decision-making is well-established in the literature [26]. Porter's Five Forces framework has been widely adopted as a tool for understanding the competitive dynamics of an industry. However, as highlighted, the traditional Five Forces model does not explicitly consider the role of patents, which are increasingly recognized as a critical factor in industry competition, especially in technology-intensive industries [8, 23]. Patents represent a firm's technological capabilities and can provide insights into its strategic direction, research and development activities, and potential for future growth. Patent analysis can also reveal the technological trends in an industry, the key players, and the competitive landscape [8]. Therefore, integrating patent analysis into industry analysis can provide a more comprehensive understanding of industry dynamics and a concrete pathway for business growth [8].

Despite the potential value of patent analysis, it is a complex task that requires specialized skills and resources. The sheer volume of patent data, the complexity of patent documents, and the rapid pace of technological change make it challenging for firms to effectively analyse patent data [8]. This has led to calls for tools that can facilitate patent analysis and make it more accessible to decision-makers [29, 30]. Dashboards are a promising tool for this purpose. They can visually represent patent data, making it easier to understand and interpret. Dashboards can also enable near real-time monitoring of patent trends, helping firms to stay abreast of the latest developments in their industry [31].


However, there is a lack of research on how Dashboards can be used for patent analysis in the context of industry analysis for ICT. Moreover, while there is a growing body of literature on the use of Dashboards in various business functions, such as marketing and supply chain management [31], there is a gap in the literature on their use in strategic management, particularly in industry analysis for ICT. In conclusion, there is a need for research on Dashboards that can aid industry analysis using patent data. This study introduces a Digitization Dashboard that uses ICT patent data supported by Porter's framework. Such research could provide valuable insights for firms seeking to leverage patent data for strategic decision-making and could contribute to the literature on strategic management tools.

3 Research Methodology

The Design Science Research (DSR) methodology was employed for this research. The DSR methodology encompasses several activities, which are outlined below [32]:

1. Problem Explication: This involves identifying and analysing the problem through a comprehensive literature review [32] of ICT trends, industry analysis, patent analysis, and visualization, as discussed in the preceding Sect. 2. The problem for this research was how to facilitate industry analysis in the ICT sector by visualizing patent data.
2. Requirements Definition: The requirements for the proposed solution were gathered from the literature review [32] and are summarized in the subsequent section.
3. Artefact Design and Development: This paper primarily focuses on the design aspect of this activity, which is discussed in Sect. 4 below. It involves multiple iterations of the design and development of an artefact that addresses the problem and meets the specified requirements [32]. Each design iteration aimed to improve upon the previous one.
4. Artefact Demonstration: This activity includes using the developed artefact in an illustration or a real-life case (proof of concept) to demonstrate its feasibility [1]. The demonstration was conducted during the evaluations of the artefact, as discussed in the subsequent section.
5. Problem Evaluation: The evaluation activity determines the extent to which the developed artefact meets the specified requirements and solves the identified problem. A heuristic evaluation was conducted, as described in the following sections, and future work will include further user studies.

Design Science Research (DSR) is an ideal methodology for designing a Dashboard due to its problem-solving orientation, iterative nature, and emphasis on real-world demonstration and evaluation.


DSR focuses on creating and refining innovative artefacts, like a Dashboard, to address identified problems, such as the need for a tool that facilitates industry analysis using patent data. The iterative process allows for continuous improvement based on user feedback and usability testing. Furthermore, DSR requires demonstrating the artefact's feasibility in a real-life context and evaluating its effectiveness, which aligns well with the practical needs of Dashboard design. These characteristics make DSR particularly well suited to this study, which seeks to introduce the design of a Digitization Dashboard.

4 Dashboard Design

In the context of this research, a Dashboard is defined as a visual display of essential information, incorporating both graphics and text, needed to achieve one or more objectives. This information is consolidated and arranged on a single screen for immediate visibility, aiding in identifying patterns, trends, and anomalies, and guiding users in effective decision-making. When designed appropriately, Dashboards can effectively communicate large volumes of information [30].

Existing Dashboard systems were reviewed to understand the current state of Dashboards designed to monitor patent analysis in the ICT industry [25, 29, 30, 33]. Suggestions from researchers on the design of these systems were identified and considered in the design of the proposed Dashboard. These suggestions included using visual metaphors for easy comprehension of information, providing different levels of detail for quick decision-making and intuitive understanding, and implementing interactive features such as hover and zoom effects.

The purpose of a Dashboard informs the choices in its visual design and functionality. Dashboards serve as a means of communication, and to effectively deliver a message to the user, the message needs to be concise, goal-oriented, and clear. Effective Dashboards ensure consistency, facilitate planning, enhance communication, and enable monitoring. Based on their roles, Dashboards can be categorized into strategic, analytical, and operational types. Operational Dashboards focus on the current performance or day-to-day operations of an organization and make use of real-time or near-real-time data. Analytical Dashboards provide drill-down functionality, enabling users to explore data in detail, and can show key data sets highlighted against previous data. Strategic Dashboards aim to show key performance indicators for an organization or domain. The Dashboard required for this research is a combination of all three types, with requirements derived from the literature and from the need to address Porter's five forces for industry analysis. The Dashboard must:

1. Display patent data related to the competitive rivalry within the industry. This could include the number of patents filed by different companies, the rate of patent filings, and the areas of technology where most patents are being filed.


2. Provide insights into the threat of new entrants. This could be achieved by visualizing the number of patents filed by new companies in the industry, or the areas of technology where new companies are filing patents.
3. Visualize patent data that could indicate the threat of substitute products or services. This could include patents related to alternative technologies or solutions.
4. Provide insights into the bargaining power of suppliers. This could be achieved by visualizing patents related to key components or technologies that are supplied by a small number of companies.
5. Visualize patent data that could indicate the bargaining power of customers. This could include patents related to technologies that increase customer choice or bargaining power.
6. Allow users to interact with the visualizations, such as by zooming in on specific data points, filtering data, or viewing additional details on demand.
7. Allow users to customize the visualizations to meet their specific needs. This could include selecting which data to display, changing the period for the data, or adjusting the layout of the Dashboard.
8. Allow the user to search for and flag patents for alerts on changes in the patent applications.
9. Overall, enable the user to make an easy comparative analysis of the patent landscape in ICT.

The design of the Dashboard, as shown in Fig. 3, underwent multiple iterations to address these requirements and to develop more effective designs with each iteration. The design followed Shneiderman's Visual Information Seeking Mantra, "Overview first, zoom and filter, then details on demand", by showing the overview data first and then allowing the user to drill down to specific data [23, 31]; a minimal interactive sketch of this pattern follows below.
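The following is a minimal sketch of the "overview first, zoom and filter" interaction using Dash and Plotly (assumed version 2.x); it illustrates the interaction pattern only, not the Dashboard's actual implementation, and the toy data frame stands in for the patent extract.

```python
import pandas as pd
import plotly.express as px
from dash import Dash, Input, Output, dcc, html

# Toy data standing in for the patent extract used by the Dashboard.
df = pd.DataFrame({
    "assignee": ["A Corp", "A Corp", "B Inc", "B Inc", "C Ltd"],
    "year": [2021, 2022, 2021, 2022, 2022],
    "filings": [12, 15, 8, 11, 5],
})

app = Dash(__name__)
app.layout = html.Div([
    # Zoom and filter: restrict the overview to a single year on demand.
    dcc.Dropdown(sorted(df["year"].unique()), None, id="year",
                 placeholder="Filter by year"),
    dcc.Graph(id="chart"),
])

@app.callback(Output("chart", "figure"), Input("year", "value"))
def update(year):
    # Overview first: without a filter, all years are shown stacked per assignee.
    view = df if year is None else df[df["year"] == year]
    title = f"Patent filings in {year}" if year else "Patent filings (overview)"
    return px.bar(view, x="assignee", y="filings", title=title)

if __name__ == "__main__":
    app.run(debug=True)
```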

Fig. 3 Patent dashboard design showing the patent overview screen


Fig. 4 Snapshot of the country order statistics screen

The Dashboard was designed for the ICT industry, with patent data from Google Patents. The patents chosen for building the Dashboard were derived from search queries consisting of the search strings "processing power", "data transfer performance", and "storage capacity", as described by Moore's Law, Butters' Law, and Kryder's Law in the literature. Designs of the Dashboard were created using prototyping, with some of the designs shown in Figs. 3 and 4. A sketch of assembling such a query-based corpus is shown below.
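A hedged sketch of that corpus-building step: the export file and its title/abstract columns are assumptions about the shape of a patent metadata export, and the matching here is a simple case-insensitive substring test rather than the search engine's own query logic.

```python
import pandas as pd

# The three search strings derived from Moore's, Butters', and Kryder's laws.
QUERIES = ["processing power", "data transfer performance", "storage capacity"]

# Hypothetical export of patent metadata; title/abstract columns are assumed.
patents = pd.read_csv("google_patents_export.csv")

text = (patents["title"].fillna("") + " " + patents["abstract"].fillna("")).str.lower()
for q in QUERIES:
    patents[q] = text.str.contains(q, regex=False)

# A patent enters the dashboard corpus if it matches at least one query.
corpus = patents[patents[QUERIES].any(axis=1)]
print(corpus[QUERIES].sum())  # number of matches per query
```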

5 Evaluation

To ensure the effectiveness and usability of the proposed Digitization Dashboard, an evaluation was conducted before its release to actual users. Specifically, a Formative Evaluation was carried out, i.e. an evaluation focused on identifying areas for improvement during the development process. Formative Evaluation is particularly useful for interactive systems like Dashboards, as it allows usability issues to be identified and rectified before the system is fully deployed. This type of evaluation typically involves a combination of methods, including heuristic evaluation, cognitive walkthroughs, and usability testing with representative users [34]. In the case of the Digitization Dashboard, the Formative Evaluation included a heuristic evaluation, in which the Dashboard was assessed against established usability principles, or "heuristics". This helped to identify design elements that could potentially cause usability issues.


5.1 Heuristic Evaluation

A heuristic evaluation, a type of formative evaluation, was conducted on the designs to find issues so they could be addressed as part of the iterative design activity of the DSR methodology. In a heuristic evaluation, a small group of evaluators, usually experts, examines the user interface and judges its conformity with accepted usability standards. For the evaluation of the Digitization Dashboard, a group of five evaluators was selected, in line with research suggesting that a group of this size identifies nearly as many usability issues as testing with larger groups. The evaluators were a diverse mix of postgraduate students, graphic designers, and front-end designers, providing a range of perspectives on the Dashboard designs.

The evaluators were provided with the Dashboard designs and briefed on the research, its aims, and the requirements that the Dashboard was designed to meet. They were then asked to evaluate the designs based on Nielsen's ten heuristics [34], which are widely accepted principles for assessing usability. These heuristics cover key aspects of usability such as the visibility of system status, the match between the system and the real world, user control and freedom, consistency and standards, error prevention, recognition rather than recall, flexibility and efficiency of use, aesthetic and minimalist design, helping users recognize, diagnose, and recover from errors, and help and documentation.

After reviewing the designs, the evaluators completed a questionnaire that included both the heuristic questionnaire and additional open-ended questions seeking suggestions for design improvements. The feedback was then used to refine the design of the Digitization Dashboard, in keeping with the iterative nature of the Design Science Research methodology. This process ensured that the final design was both effective and user-friendly, meeting the identified requirements and enhancing the usability of the Dashboard.

5.2 Results

The data in Fig. 5 represent the heuristic evaluation ratings for the Digitization Dashboard by five users across ten usability principles. Overall, the Dashboard received high ratings, indicating a generally well-designed system that meets user needs. Most heuristics received ratings of 4 or above, suggesting strong performance in areas potentially representing "Match Between System and Real World", "User Control and Freedom", "Error Prevention", "Recognition Rather Than Recall", "Flexibility and Efficiency of Use", "Aesthetic and Minimalist Design", "Help Users Recognize, Diagnose, and Recover from Errors", and "Help and Documentation". However, some variability was observed in the ratings for heuristics potentially representing "Visibility of System Status" and "Consistency and Standards", marking these as areas for potential improvement in future design iterations. A sketch of the underlying aggregation follows below.
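The aggregation behind Fig. 5 can be reproduced with a few lines of pandas. The ratings below are illustrative stand-ins, not the study's raw data, with five evaluators scoring ten heuristics on a 1–5 scale.

```python
import pandas as pd

# Illustrative ratings: rows are evaluators, columns the ten heuristics.
ratings = pd.DataFrame(
    [
        [4, 5, 4, 3, 5, 4, 4, 5, 4, 4],
        [3, 4, 5, 4, 4, 5, 4, 4, 5, 4],
        [4, 4, 4, 4, 5, 4, 5, 4, 4, 5],
        [5, 4, 4, 3, 4, 4, 4, 5, 4, 4],
        [4, 5, 4, 4, 4, 5, 4, 4, 4, 4],
    ],
    columns=[f"H{i}" for i in range(1, 11)],
    index=[f"evaluator_{i}" for i in range(1, 6)],
)

# The mean identifies weak heuristics; the standard deviation flags
# disagreement between evaluators, such as the variability observed
# for "Visibility of System Status".
summary = ratings.agg(["mean", "std"]).T.sort_values("mean")
print(summary)
```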


Fig. 5 Average rating of each heuristic

Table 3 Heuristics report summary of issues

Issue: Filters
Description: The filter labels (day, week, etc.) need to be more visible to make comparison easier
Recommendation: Improve the labels and make them clearer; line up grids with equal spacing

Issue: Layout
Description: White space and clarity of charts
Recommendation: Stack cards to show bigger graphs and remove white space

Issue: Menu
Description: Menu breadcrumbs
Recommendation: Add a hover function for menu buttons

Issue: Charts
Description: Replace some bar charts with a pie chart showing patent assignees, as this improves the clarity of reading
Recommendation: For country maps, use colour intensity to indicate hotspots and use more tooltips to aid the user

Issue: Other
Description: Add/remove widget button; logo
Recommendation: Categorise widgets and enable users to add/remove components; add the logo on top of the menu

The concerns highlighted by the evaluators in the questionnaire replies are displayed in Table 3. In addition to the issues presented in Table 3, the design of the Dashboard received the following compliments: "The design is simple and sleek, the infographics and data representation makes for a good Dashboard," "Visuals are crisp and make sense," and "Could be useful if combined with other business metrics and industry monitoring and notification systems."


6 Conclusion and Future Work

In this paper, we presented the design and evaluation of the Digitization Dashboard for industry analysis. The Dashboard provides users with vital information and insights to support decision-making processes in various aspects of ICT industry analysis. Through a heuristic evaluation, several usability issues were identified, and corresponding recommendations were made to improve the Dashboard's design. The evaluation results indicate that the Dashboard generally meets user needs and shows promise in enhancing industry analysis capabilities. By addressing the identified issues and implementing the recommended improvements, the Digitization Dashboard has the potential to become a valuable tool for managers, policymakers, and other industry stakeholders in conducting industry analysis for ICT.

Based on the findings and recommendations of this study, future research can be conducted in several areas. First, conducting user testing sessions with a larger and more diverse user group can yield additional information regarding the Dashboard's usability and efficacy. This will aid in validating the provided recommendations and ensuring that design enhancements effectively address user needs. Incorporating advanced data analytics techniques, such as machine learning algorithms, can also improve the Dashboard's ability to analyse patent data and extract meaningful insights. Moreover, given the dynamic nature of industries, it will be essential to continuously update and improve the Dashboard's data sources and functionalities. Lastly, a comparative study with existing industry analysis tools and Dashboards can provide a thorough evaluation of the Digitization Dashboard and highlight its unique strengths and benefits. These future research avenues will contribute to the ongoing development and refinement of the Digitization Dashboard as a valuable industry analysis tool. By addressing these factors, the Digitization Dashboard can become a potent and indispensable tool for industry stakeholders, allowing them to utilize patent analysis and Porter's Five Forces framework for comprehensive and insightful industry analysis in the ICT sector and beyond.


The Bike Path Radar: A Dashboard to Provide New Information About Bicycle Infrastructure Quality

Michael Birke, Florian Dyck, Mukhran Kamashidze, Malte Kuhlmann, Malte Schott, Richard Schulte, Alexander Tesch, Johannes Schering, Pascal Säfken, Jorge Marx Gómez, Kathrin Krienke, and Peter Gwiasda

Abstract Data can support the decision making process in bicycle infrastructure planning. Dashboards may make a positive contribution to learning more about infrastructure shortcomings if they provide relevant Key Performance Indicators (KPIs) and visualizations. Existing dashboards do not reflect the perspectives of different types of users, provide only limited data sources, and do not provide much information about bike path damages. The Bike Path Radar (Radweg Radar) should fill this research gap by providing relevant information about cycling infrastructure. The frontend enables end users to create the KPIs of highest interest regarding cycling accidents, citizen reportings, traffic volume etc. A role concept enables the provision of a suitable degree of information to traffic planning experts and citizens. The most important KPIs were identified based on expert interviews. The dashboard is connected by an API to a database in the background that includes heterogeneous cycling and bicycle infrastructure data. In addition, the dashboard offers new opportunities for citizen engagement: users can upload images of bike path damages in a reporting tool. The images are processed by an object detection algorithm, and the detected damages are displayed on a map by a marker to find locations with surface shortcomings. This contribution gives a short overview of the current state of development of the Bike Path Radar. The outlook provides additional information about the forthcoming working steps.

M. Birke · F. Dyck · M. Kamashidze · M. Kuhlmann · M. Schott · R. Schulte · A. Tesch · J. Schering · P. Säfken · J. M. Gómez
Department of Business Informatics VLBA, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
K. Krienke · P. Gwiasda
Planungsbüro VIA eG, Cologne, Germany
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
V. Wohlgemuth et al. (eds.), Advances and New Trends in Environmental Informatics 2023, Progress in IS, https://doi.org/10.1007/978-3-031-46902-2_6




Keywords Dashboard · Cycling planning · KPIs · Time series data · Object detection · Road damages · Machine learning · AI

1 Introduction

Cycling has become increasingly popular in the past few years. During the Corona pandemic, German cities such as Cologne perceived a strong uptake in cycling [1]. Because of the growing number of cyclists, there are more conflicts with pedestrians and more accidents with cars at intersections. Municipalities need to invest in bike infrastructure [2]. Cyclists are not satisfied with the current infrastructure: bike paths should become more comfortable with more space, conflicts with cars should decrease, and the alignment at construction sites should become safer [3]. According to the German National Cycling Plan 3.0, bicycle infrastructure should become more convenient and safe [4]. Therefore, new knowledge about bike path quality is required to support the practical work in cycling planning.

In this context, Key Performance Indicators (KPIs) and visualizations can support the decision making process. KPIs are aggregated, business-relevant data that can be used for assessment, decision making or support for different purposes [5]. Business Intelligence (BI) tools can be suitable to support data analysis and the creation of reports. BI tools may include different types of visualizations, interaction opportunities and analyses [6]. A dashboard in the domain of cycling planning may provide the opportunity to identify problem points in the infrastructure that need to be improved. Relevant information may be filtered according to the needs of the users. An unsolved problem in cycling planning is that existing dashboards do not consider indicators based on real bike trips (e.g. waiting times, surface quality) or the perspective of different types of users (experts, citizens), and that there is only little information available about surface quality or the type of bike path.

One goal of the research project INFRASense [7] is the development of an interactive dashboard that supports the bike infrastructure planning process. The dashboard Bike Path Radar (Radweg Radar) [8], which is being developed by a project group of seven master's students, consists of different functions to support cycling planning with data. The dashboard should include the Key Performance Indicators (KPIs) that are most relevant for stakeholders from the bicycle infrastructure planning domain. The requirements were collected as part of expert interviews with Planungsbüro VIA [9], which has supported this contribution. The dashboard should enable the dynamic visualization of relevant data sets on maps, as graphs or as charts. To display the data on a website, an API as a connection to the database is required. Another function of the website deals with object detection and machine learning: bike path damages on images should be detected and classified by an AI model. This function should be part of a citizen reporting portal where inhabitants of a city can send images of damages to the city administration. The process should be supported by automatic processing of the images.



The student project can be divided into three sub-groups. One group deals with the frontend (website). The question is how to visualize the relevant information on a web interface so that it is understandable and makes a positive contribution for cycling planning experts. Different analysis results require different types of visualizations (graphs, charts etc.) to make the results understandable. The data group provides relevant data sources and preprocessed data to the website via an API. Part of the work of the AI group is the research of suitable AI methods for analyzing the state of the bike infrastructure. A machine learning model is developed and trained for the detection of damages on images.

The contribution starts with a literature overview about cycling dashboards, AI methods to identify damages on cycling paths, and citizen science approaches in this field, in order to identify the research gap that is filled by the Bike Path Radar. This contribution will present the development state of the working groups and provide an outlook on the further working steps. A prototype of the website is already available; the functions that have been developed will be presented. The data part will provide an overview of the database and the API that provides the information to the website. The AI section gives an overview of potential AI approaches that can support the understanding of the state of the bike infrastructure. As we will see, an object detection algorithm is able to detect different types of damages. The results of the contribution will be summarized in the end, and an outlook and potential limitations will be discussed.

2 State of the Art and Research Gap

To determine the bicycle friendliness of cities, quite a number of indices have been developed in the past few years. The Copenhagenize Index is one of the most famous digital tools to support the assessment of bike path quality in European cities [10]; however, the assessment is created only by a closed expert group. The Bike-Friendly Cities Rating (ADFC Fahrradklimatest) [11] follows a citizen-supported approach and is based on surveys to find strengths and weaknesses of the biking network in different locations. According to its results, the bike paths in Oldenburg are too narrow and the surface quality is not sufficient. The results of the survey are subjective and hardly comparable, and potential measures for the improvement of specific locations are not provided. In addition, some municipalities publish local cycling KPIs; as an example, the city of Dortmund has published local bicycle accident statistics [12].

Beyond this, some interactive dashboards are available that include many different cycling data sets. One big project is the Fietsbarometer, hosted by the Province of Antwerp [13]. Its content may be adapted to the interests of the users: information about the cycling network, bicycle countings, accident locations, speed and volume of motorized traffic, green times of traffic lights, and problem points (damages, barriers etc.) may be visualized. FixMyBerlin provides a map where users can find cycling promotion measures in Berlin that are planned or realized, together with different types of diagrams. Its Happy Bike Index provides KPIs on cycling accidents: the quality of intersections is defined by the number of slight, serious and deadly accidents [14]. These dashboards do not differentiate between different types of users such as traffic experts or citizens. The KPIs were not developed under consideration of real bike trips (waiting times, surface quality based on bike sensor measurements). AI methods for road damage detection were not considered in the technical implementation.

AI methods for road damage detection have already been discussed in the literature. Pham et al. address road damage detection and classification using YOLOv7 and AI. Their approach collects labeled road damage data from Google Street View, achieving high F1 scores in the CRDDC2022 competition [15]. Terven et al. thoroughly analyze YOLO's evolution, emphasizing the importance of real-time object detection. The authors explore advancements in YOLO versions, architecture changes and training techniques, offering insights for enhancing real-time systems [16]. Jeong focuses on road damage detection using YOLOv5x and smartphone images. Their lightweight solution, submitted to the IEEE BigData Cup 2020, combines deep learning and real-time capabilities, showcasing good accuracy [17]. Doshi's paper presents an ensemble model for road damage detection using YOLO-v4. Their submission to the IEEE BigData Cup 2020 highlights improved resource management through efficient detection and classification [18]. None of these approaches is specialized for the bike path domain, and the presented AI approaches are not combined with a web interface that allows the reporting of road or bike path damages.

Nowadays, citizens have many opportunities for improved digital participation. Many cities provide reporting platforms that enable inhabitants or visitors of a city to inform the administration about broken glass or rubbish on the road. The reportings can be displayed on a website, making the locations with existing or solved problems in the city area visible. Examples that enable more citizen engagement are the platforms of Oldenburg (Stadtverbesserer) [19] and Osnabrück (EMSOS) [20]. There are also reporting systems with a focus on bike path quality: the ADFC Bremen provides a reporting system (Mängelmelder) for cyclists [21]. Riders may submit free text and pictures about blocked bike paths, parking violators, disturbing traffic lights or bad surface quality. Both approaches have in common that no AI methods are applied to the citizen-generated data to support data processing; the websites only display the images and the text as provided.

Based on the literature review, some of the requirements for a new dashboard to support cycling planning can be collected. The overview of the current state of research shows that there is no holistic cycling data dashboard available that combines many different data sources. The existing solutions provide different KPIs or visualizations but do not consider a role concept for different types of users, do not integrate KPIs that are based on real bike trips, and limit the analysis of surface quality to free text provided by the users. The Bike Path Radar fills this research gap. The dashboard integrates many more data sources compared to the other solutions, and the cycling KPIs were developed in collaboration with domain experts from cycling planning. A platform for reporting bike path damages using AI is not part of the current state of research. Our system automatically classifies uploaded images, which is a novel approach absent from current road damage detection methods. This combination fills a gap by integrating AI-driven classification and reporting for bike path issues.

The Bike Path Radar: A Dashboard to Provide New Information About Bicycle. . .

99

Fig. 1 Architecture of the Bike Path Radar project

Figure 1 shows the overall architecture of the Bike Path Radar project as a data flow diagram using the notation of Tom DeMarco [22]. The work can be divided into three interrelated sub-groups. The website part (green section) provides a website frontend to interested users, which includes the selection of different KPIs and the dynamic visualization of diagrams and maps. This interface allows different types of users (citizens, experts) to gain insights into the data. The website is connected by an API to different types of cycling databases that will be presented in more detail in the following chapters. The data group (blue section) is responsible for uniting data from different data sources and aggregating it where necessary. For this purpose, an API to access the data was provided. To visualize AI-processed images of bike path damages, a reporting platform including a user interface for uploading images was developed (red section). The reporting platform, where users can report damages via images of bike paths, should provide new opportunities for citizen participation. The dashboard provides a unique cycling data collection, including cycling accidents, infrastructure conditions, bike trips, weather conditions, concerns (from citizen reporting portals), traffic volume and road damages, to support the quality assessment of bike paths. The development steps and the functions of each of the three Bike Path Radar components (website, data, AI approach) will be presented in the following chapters.

3 Website

The main goal of the website group is to develop an interactive dashboard that allows users to visualize various Key Performance Indicators (KPIs) related to the bike infrastructure in an understandable way. The website aims to provide users with informed insights and information that support the understanding of cycling and of potential problems of the bike infrastructure. The dashboard serves as a central hub where users can select and visualize the KPIs they are interested in through charts and maps. The approach should empower users with different perspectives (citizens, experts) to easily interpret and analyze data related to bike infrastructure, enabling them to make informed decisions. Additionally, the dashboard is customizable, allowing users to set their preferences and create individualized dashboards based on their specific needs. Another goal is to give experts the ability to get additional information about the bike infrastructure with a higher degree of detail. The website follows a role concept: interested citizens who want to learn more about the bike infrastructure get a basic version of the website with simplified graphs, while an expert version allows more specific views and the visualization of more advanced information.

3.1 Structure and Technologies

Figure 1 (green section) visualizes the structure of the website project part. As a first step, a WordPress website was implemented as an online presence for the project. It contains general information about the project as well as updates in the form of blog posts. This page functions as a central hub for interested users to get an overview of the Bike Path Radar and stay updated on the latest developments. In parallel, the core component of the website frontend, an interactive dashboard, was developed. To provide users with a personalized experience, a selection mask was implemented. Using this mask, users can choose the KPIs they want to display on the dashboard (KPI selection, Fig. 1). This function allows users to visualize the information that is relevant to them. To retrieve and provide real-time data to the users, an API (further explained below) was developed to get data from the database. The data returned by the API, i.e., the selected KPI, is then presented using various chart types. For the implementation of the dashboard, state-of-the-art web technologies such as React [23], Recharts [24] and Leaflet [25] were used. React is a JavaScript library for building dynamic and reactive user interfaces. Recharts is a charting library that provides a wide range of chart types to visually represent the KPIs. Leaflet is a JavaScript library that offers interactive map functionalities to visualize the geographical aspects of the bike infrastructure.
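To illustrate the interaction between the frontend and the query API (see Sect. 4.2), the following minimal sketch shows how a client could request a KPI. The endpoint URL, table and field names are assumptions; the real frontend issues the equivalent request from React and renders the response with Recharts or Leaflet.

```python
import requests

# Hypothetical JSON query: count accidents per month for one city.
# The endpoint URL and field names are assumptions, not the project's actual API.
query = {
    "table": "accidents",
    "group_by": ["month"],
    "aggregate": {"id": "count"},
    "filters": {"city": "Oldenburg"},
}
response = requests.post("https://radweg-radar.example/api/query", json=query)
print(response.json())  # rows that the frontend would render as a chart
```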

3.2 KPIs

The dashboard aims at displaying key indicators for specific roads and route sections from different cities, currently Oldenburg and Osnabrück. The development is based on the traffic planning guideline HEBRA (Hinweise zur einheitlichen Bewertung von Radverkehrsanlagen, Suggestions for the Consistent Assessment of Bike Paths) of the Forschungsgesellschaft für Straßen- und Verkehrswesen e. V. [26]. These key indicators are intended to identify disturbing factors in bicycle use, for example through an evaluation of waiting times, time loss and traffic volume, a comparison of poor route quality with the number of accidents, or the correlation of accidents with weather conditions. As part of expert interviews, some of the most important KPIs for bicycle infrastructure planning were collected and structured according to the needs of the relevant stakeholders (Planungsbüro VIA). Table 1 displays some of the most relevant KPIs.

Table 1 Potential KPIs to support decision making in bicycle infrastructure planning [28]

Type of data | KPI/visualization
Bike sensor data | Shortest, longest and average trip length; highest and average speed; average time per trip; graphical visualization of the speed of single trips
Accident data | Seriousness of injuries according to years and months; accident rate; accidents according to accident type; visualization on a map
Bicycle counting data | Number per day per station; hourly values
GIS data | Visualization on a map of bus lines, bus stations, bike path types, slopes, bike routes etc.

The data sources were provided by different departments of the city administration and the police department (accident data). The bike sensor data was collected as part of the INFRASense research project with a crowdsourcing approach based on the active participation of several hundred interested citizens. The images for the training of the AI approaches were collected internally; this data source will be expanded in the future under citizen participation. Further information about the data providers can be found in the publication of Schering et al. [27].

Based on the individualized KPI selection, the user receives specific diagrams and maps (see Fig. 1). Key indicators in the areas of accident, citizen reporting, bicycle volume or traffic volume analysis are available and can be evaluated along several dimensions. For example, accident types can be evaluated and visualized according to years, months, days, hours or 15-min intervals. This makes danger locations and times visible for traffic planning experts or other stakeholders and thus enables targeted roadway planning. Figure 2 presents an example of how an individual dashboard may look.

Fig. 2 Exemplary graphs of the website: bicycle volume at different locations in Oldenburg (left), locations of traffic counters, accidents, concerns and road quality (middle), filter for accident data (right)
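As a rough illustration of the aggregation behind such views, the following sketch groups accident records by month and by hour of the day with pandas. The column names and sample values are assumptions; the production system performs such aggregations in SQL via the query API.

```python
import pandas as pd

# Assumed simplified accident records; the real tables contain many more fields.
accidents = pd.DataFrame({
    "timestamp": pd.to_datetime(["2021-06-01 08:10", "2021-06-15 17:45",
                                 "2022-06-03 08:05"]),
    "accident_type": ["turning", "crossing", "turning"],
})

# Evaluate accident types along time dimensions (here: month and hour).
by_month = accidents.groupby([accidents["timestamp"].dt.to_period("M"),
                              "accident_type"]).size()
by_hour = accidents.groupby(accidents["timestamp"].dt.hour).size()
print(by_month)
print(by_hour)  # peaks make dangerous times of day visible
```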




4 Data

The website needs suitable data and a connection to these data sources to enable the user to create an individual dashboard. The second sub-group is responsible for providing an API which enables queries on the data from the available sources. This includes fetching data from the other APIs and databases into the target database. Additionally, the data needs to be transformed in a way that allows fast and efficient querying where this is not possible on the original data. This is especially relevant when the amount of data in a table is huge.

4.1 Time Series Data

Several data sources are combined in the target database that is connected to the website. The blue section of Fig. 1 displays the connected databases and related data sources. A GraphQL API provides bike rides that are recorded by a bike sensor system; details about the sensor can be found in the publication of Schering et al. [29]. A REST API connects the project to a weather database: the weather data API of Visual Crossing [30] provides historical weather data worldwide. A PostgreSQL [31] database is connected that contains time series data (traffic counting data, citizen reportings, bicycle accidents) from two different cities and police departments. A PostGIS [32] database integrates geospatial data with coordinates of streets and their segments. The two databases and the two APIs are queried using Python [33], making use of its many libraries to easily access the different sources. The data sources are transformed and stored in the target PostgreSQL database. Figure 3 shows the schema of the Bike Path Radar database. The following section presents the data sources in more detail.

PostgreSQL: Bicycle Accidents The police department provides two different types of bicycle accident data, namely EUSKa [34] and Cognos [35]. In general, both tables include accidents that occurred between 2019 and 2022. Each table includes some accidents that cannot be found in the other one. There is a lot of duplicated information, but some attributes are only considered in one of the tables; as an example, only Cognos includes a free-text description of the accident. As part of the project, another table was created that consolidates the accidents from both sources (see match of Cognos and EUSKa accidents, Fig. 1). All accidents which differ in at most one shared field were joined. There are many entries in the database which describe the same accident in EUSKa and Cognos but have some differences (e.g. the number of slightly or seriously injured accident participants). Therefore, we also join accidents when at most one person's injury level differs between the records (and no other field is mismatched).
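A simplified sketch of this matching rule is shown below. The field names are assumptions, as the real tables contain many more attributes, and the actual consolidation runs as a SQL join in the target database.

```python
# Hypothetical shared fields of an accident record in EUSKa and Cognos.
FIELDS = ["date", "time", "street", "accident_type",
          "slightly_injured", "seriously_injured"]

def mismatches(euska: dict, cognos: dict) -> int:
    """Count the shared fields on which two accident records differ."""
    return sum(1 for f in FIELDS if euska.get(f) != cognos.get(f))

def same_accident(euska: dict, cognos: dict) -> bool:
    """Join rule: two records describe the same accident if at most one
    shared field differs, e.g. because one person's injury severity was
    recorded differently in the two systems."""
    return mismatches(euska, cognos) <= 1
```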

Fig. 3 Schema of the Bike Path Radar database

PostgreSQL: Traffic Volume Multiple tables contain different types of traffic volumes. These are based on stationary sensor systems that count the number of bicycles, cars or trucks at different locations in the city. There may be different columns for different types of vehicles, depending on the station. Sometimes stations at the same place but in different directions are stored as different stations. The traffic volume is recorded at different measurement frequencies (5 min, 15 min, 60 min etc.).

PostgreSQL: Concerns (Citizen Reportings) This table contains time, place and text description of the reportings from Stadtverbesserer and EMSOS that were mentioned in the state of the art. Some of the categories of the citizen reportings are related to traffic (e.g. road infrastructure). Another column (topic text) defines a category related to cycling or bicycle infrastructure based on the text description. The categories are determined by a machine learning model from the field of natural language processing (NLP). The results should support experts from the bicycle infrastructure domain. Accidents, concerns and traffic data are all provided by the Bictimese PostgreSQL DB.

Infrastructure Data The infrastructure database (Infrastructure PostgreSQL DB) contains information about the bicycle, car and bus network (bike path type, road names, bus lines and stops) but, most importantly, the road segmentation. The segmentation divides all roads in Oldenburg and Osnabrück into segments of lengths up to about 50 m. The database also includes specific information about topography (slope) and cleaning frequencies.

Bike Ride Data The bike rides of the BIQEmonitor [36] website can be accessed via the GraphQL API of worldiety. Tens of thousands of bike rides are available, most with a measurement frequency of 24 data points per second; the data points include time, speed, acceleration and some further information. In total, there are tens of millions of data points. To access the data in a reasonable amount of time, aggregated data (e.g. locations of road segments or traffic intersections) is also fetched.

Weather Data The weather data of the REST API (Visual Crossing Weather API) [30] is used to fetch historical weather for all data points which contain information about both time and location. The weather data for the cities of the project (Oldenburg and Osnabrück) is stored in quarter-hour intervals for the time periods that are available for the other time series data sources. This enables a join of the weather data with other types of time series data.
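A minimal sketch of such a join with pandas is shown below; the frame layouts and column names are assumptions. Each data point is floored to its quarter hour and then matched with the weather record for the same city and interval.

```python
import pandas as pd

# Assumed minimal frames: rides with a raw timestamp, weather stored
# in quarter-hour intervals per city (column names are assumptions).
rides = pd.DataFrame({"ts": pd.to_datetime(["2023-05-04 08:07:31"]),
                      "city": ["Oldenburg"], "speed_kmh": [17.4]})
weather = pd.DataFrame({"ts": pd.to_datetime(["2023-05-04 08:00:00"]),
                        "city": ["Oldenburg"], "temp_c": [11.2],
                        "precip_mm": [0.0]})

# Align each data point to its quarter hour, then join on city and time.
rides["ts_q"] = rides["ts"].dt.floor("15min")
joined = rides.merge(weather, left_on=["city", "ts_q"],
                     right_on=["city", "ts"], suffixes=("", "_weather"))
print(joined[["ts", "city", "speed_kmh", "temp_c", "precip_mm"]])
```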

4.2 API

After the preprocessing steps, all data sources are stored in the target PostgreSQL database, which needs to be connected to the Bike Path Radar. To enable easy and secure access, an API was implemented. The Query API was created with FastAPI [37]. The Python library stands out for its speed, simplicity, built-in support for generating documentation, and its ability to work with JSON. We use FastAPI to accept POST requests carrying a JSON query, which provides a more customizable interface. While this increases the complexity of individual queries, it prevents an exponential blowup of endpoints. This is achieved by specifying in the JSON query, transmitted with the POST request, which columns to group by, which to aggregate and by which to filter. We validate the JSON query, build a matching SQL query, execute it on the database, aggregate the result into a JSON answer and return it.
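The following sketch indicates how such a JSON-query endpoint might look with FastAPI. The query fields, table whitelist and response shape are assumptions, and the execution against PostgreSQL is omitted.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class KPIQuery(BaseModel):
    table: str                      # e.g. "accidents"
    group_by: list[str] = []        # columns to group by
    aggregate: dict[str, str] = {}  # column -> aggregation function
    filters: dict[str, str] = {}    # column -> required value

ALLOWED_TABLES = {"accidents", "traffic_volume", "concerns"}  # assumed whitelist

@app.post("/query")
def query(q: KPIQuery) -> dict:
    # Validate against a whitelist before building SQL.
    if q.table not in ALLOWED_TABLES:
        return {"error": "unknown table"}
    select = q.group_by + [f"{fn}({col}) AS {col}_{fn}"
                           for col, fn in q.aggregate.items()]
    sql = f"SELECT {', '.join(select)} FROM {q.table}"
    if q.filters:
        sql += " WHERE " + " AND ".join(f"{col} = %s" for col in q.filters)
    if q.group_by:
        sql += " GROUP BY " + ", ".join(q.group_by)
    # Executing the query (e.g. via psycopg) and serializing the result
    # rows into the JSON answer is omitted in this sketch.
    return {"sql": sql, "params": list(q.filters.values())}
```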

5 AI Approach

The dashboard should become more interactive through an additional function that allows users to contribute their own data related to bike infrastructure to the website. In general, bad bike infrastructure may influence cycling safety [38], and road damages are often disturbing on a bike trip. Therefore, the research and implementation of a robust artificial intelligence (AI) method for the quality assessment of bicycle paths is also part of the dashboard development. The focus is on identifying and analyzing damages, including surface types, bicycle path alignments, and specific types of damages found on these paths. In particular, this part concentrates on developing an AI model capable of detecting damages in images of bike paths. The aim is to provide insights into the data collection process, the training of the AI model, and the implementation of a reporting platform that enhances citizen engagement.

5.1 Surface Types and Damages on Bicycle Paths

Bicycle paths can have various surface types and forms. Surface types include asphalt, pavement with or without chamfer, or concrete, while bicycle path forms can include painted lines, curbs, or physical barriers [39]. Damages on bicycle paths are of particular interest for this research. These damages can include unevenness, longitudinal or transverse cracks, edge deformations and potholes [40]. Tree root damage occurs when the roots of nearby trees uplift the surface of the path, leading to uneven surfaces and potential hazards for pedestrians [41] or cyclists. Several factors, such as wear and tear, bad weather conditions, bad maintenance management or the use of low-quality material, can affect the overall smoothness of the surface [42]. Cracks may vary in size and severity and reduce driving or riding comfort. Cracks should be repaired as soon as possible to avoid decreasing surface quality and increasing maintenance costs [43]. Potholes are more severe forms of damage that result from the progression of cracks and can cause accidents and significant damage to road users [44].



5.2 AI Model for Damage Detection

To develop an effective AI model for damage detection, a dataset of labeled images is required for training. The dataset used in this research was created and labeled manually within the project. The images were first reviewed by human annotators, who identified and categorized the damages. The labeling was performed using the open-source program Labelstudio.io [45], with bounding boxes drawn around the damages. The choice of bounding boxes over masks is due to the selection of object detection with Yolov8 [46]. YOLO has evolved considerably and has become a central real-time object detection system for robotics, autonomous cars and video monitoring [47]. Several studies on detecting road damages have been conducted in the past [17, 48]. Table 2 shows the most important libraries for the AI models that are used for bike path damage detection. Two customized models are applied: Yolov8 is used as the core technology for object detection (Custom_Yolov8Model) and depends on Ultralytics [49]. The second model (Custom_efficientNetModel) is used for the classification of objects; the core classes of bike path damages implemented as part of the Bike Path Radar are cracks and potholes. Because efficientNet [50] is not shipped as a standalone Python package, pytorch [51] and Torchvision [52] are used to run the model in Python. The Yolov8 algorithm was employed to train the AI model on the labeled dataset. The Yolov8 medium architecture was used, and the model was trained for 1000 epochs to achieve the best possible performance; for Yolov8 and efficientNet, the standard parameters were used. Metrics for evaluating the model's performance are presented in Fig. 4. Furthermore, manual practical tests on pictures that were not included in the dataset were used to test the model and get a subjective overview of model performance. In the next step, newer models with bigger datasets will be trained to achieve even better results. Different methods to classify the damage on a picture are being tested and will be implemented if they prove useful. Figure 5 shows an exemplary picture with successful damage detection.

Table 2 Models and libraries that were used for bike path damage detection

Model | Libraries | Description
Custom_Yolov8Model | Yolov8 | Core
Custom_Yolov8Model | Ultralytics | Dependency
Custom_efficientNetModel | efficientNet | Core
Custom_efficientNetModel | Pytorch | Python module
Custom_efficientNetModel | Torchvision | Python module
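As a minimal sketch of the training and inference setup described above, using the Ultralytics API (the dataset configuration file and image path are assumptions):

```python
from ultralytics import YOLO

# Medium YOLOv8 architecture, starting from pretrained weights;
# "bike_damage.yaml" is an assumed dataset configuration file.
model = YOLO("yolov8m.pt")
model.train(data="bike_damage.yaml", epochs=1000)  # otherwise default parameters

# Inference on a new image yields bounding boxes with class and confidence.
results = model("bike_path.jpg")
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)
```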

Fig. 4 Metrics for evaluating model performance




Fig. 5 Bike infrastructure image with successful bike path damage detection

5.3 Reporting Platform for Citizen Engagement

Citizen science approaches have become popular in the past few years [53]. They are applied to increase knowledge in different fields of interest, with the goal of providing data that can be used for scientific research [54]. Citizen engagement is also applied in the data collection process of road damage images. In this specific context, a reporting platform was developed to provide geo-referenced images that can be used for further training of the AI model. Citizens are enabled to submit images of damages they encounter on a bike route. These citizen-contributed images are integrated into the training process of the AI model, enhancing the diversity and comprehensiveness of the dataset and improving the general applicability of the AI model. The platform implementation, which is part of the Bike Path Radar website, includes the development of both frontend and backend components. Figure 1 (red section) shows the data pipeline of the AI part of the Bike Path Radar project. The frontend of the reporting platform was developed using React, providing a user-friendly interface for uploading images and reporting damages on bicycle paths. The backend, responsible for the AI image detection, was implemented using Flask [55]. It handles the detection of damages with the trained AI model and the data transformation required for storage. The metadata of the uploaded images (time of data collection and upload, location, bounding box, class) is stored in the PostgreSQL databases of the data section. To ensure efficient storage and retrieval of images, a MinIO [56] object storage is used to store raw and processed data; images are physically stored in the object storage. Detected damages will be displayed on the website frontend.
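A condensed sketch of how such an upload endpoint could combine Flask, the trained model and MinIO is given below. The weights file, bucket name and form fields are assumptions, and the PostgreSQL metadata insert as well as unique object naming are omitted.

```python
import io
from flask import Flask, request, jsonify
from minio import Minio
from PIL import Image
from ultralytics import YOLO

app = Flask(__name__)
model = YOLO("custom_yolov8_model.pt")  # assumed path to the trained weights
storage = Minio("localhost:9000", access_key="...", secret_key="...", secure=False)

@app.route("/report", methods=["POST"])
def report():
    raw = request.files["image"].read()
    lat, lon = request.form["lat"], request.form["lon"]  # geo-reference

    # Detect damages on the uploaded image.
    result = model(Image.open(io.BytesIO(raw)))[0]
    detections = [{"class": result.names[int(box.cls)],
                   "confidence": float(box.conf),
                   "bbox": box.xyxy[0].tolist()} for box in result.boxes]

    # Store the raw image in the object storage; the metadata (location,
    # bounding boxes, classes) would be written to PostgreSQL instead.
    storage.put_object("reports", "report.jpg", io.BytesIO(raw),
                       length=len(raw), content_type="image/jpeg")
    return jsonify({"location": [lat, lon], "detections": detections})
```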

6 Conclusion and Outlook

As the literature review has revealed, some dashboards to support the cycling planning process are already available, but their data basis is not sufficient to fulfill all demands. Indicators based on real bike trips and bike infrastructure related data are often missing, and the perspectives of different types of users (experts, citizens) are not reflected. As the literature review has also shown, existing reporting systems are limited to free text or provide images without applying any AI approaches. The Bike Path Radar dashboard fills this research gap to support the bike infrastructure planning process.

For the implementation of the dashboard, state-of-the-art web technologies such as React, Recharts and Leaflet were used. React makes it possible to create dynamic and reactive user interfaces; Recharts is a charting library that provides a wide range of chart types to visually represent the KPIs; Leaflet offers interactive map functionalities to visualize the geographical aspects of the bike infrastructure. The website provides general information and real-time updates, while the dashboard allows users to visualize selected KPIs in an intuitive way. The requirements regarding the KPIs and how to display them were gathered in several expert interviews with stakeholders from traffic planning, which makes the approach practically relevant. In the near future, further KPIs will be created and visualized on the dashboard via Leaflet maps, and the API will be expanded for this purpose. If there is data which should only be visible to certain users, an authentication for these queries will be introduced. A secure login system will be put into place to let experts log in, access more detailed information and save their created diagrams and maps. Different target groups may thus be differentiated, which is another step forward in comparison to existing dashboards.

The AI approach is another big part of the project. Existing AI models are not embedded into reporting systems and are mainly focused on road (and not bike path) damages. Ongoing work in the Bike Path Radar focuses on improving both the frontend and the AI model's performance. The development team is actively working on enhancing the visualization of damages on a map within the frontend. This involves processing the GPS data associated with the uploaded images to accurately locate and display damages on a map interface. Continued research and development will contribute to further improving the accuracy and usability of the system. One perspective goal is to create a holistic ranking of different roads based on the created metrics and AI results. The website will be evaluated in the near future with further external experts from the municipal administration and planning domain regarding relevance and usability. As an overall result, the Bike Path Radar dashboard should make a valuable contribution to improving bike infrastructure by providing users and experts with informed insights and information.

Acknowledgements INFRASense is funded by the Bundesministerium für Digitales und Verkehr (BMDV, German Federal Ministry of Digital and Transport) as part of the mFUND program (project number 19F2186E) with a funding amount of around 1.2 million euros. As part of mFUND, the BMDV supports research and development projects in the field of data-based and digital mobility innovations. Part of the project funding is the promotion of networking between stakeholders in politics, business, administration and research as well as the publication of open data on the Mobilithek portal.

References

1. City of Cologne: Radverkehr legte 2020 deutlich zu. https://www.stadt-koeln.de/leben-in-koeln/verkehr/radfahren/aktuelles/70864/index.html. Accessed 2023/08/19
2. Thurau, J.: Fahrrad-Boom durch Corona. https://www.dw.com/de/fahrrad-boom-durch-corona/a-57754435. Accessed 2023/06/18
3. Gengenbach, S.: Fahrradklimatest zeigt viel Schatten und wenig Licht. VELOPLAN. 2, 551–560 (2023)
4. Nationaler Radverkehrsplan 3.0. https://bmdv.bund.de/SharedDocs/DE/Artikel/StV/Radverkehr/nationaler-radverkehrsplan-3-0.html. Accessed 2023/06/18
5. Vollmuth, J.H., Zwettler, R.: Kennzahlen. Haufe-Lexware GmbH & Co. KG, Freiburg (2020)
6. Leßweng, H.P.: Einsatz von Business Intelligence Tools (BIT) im betrieblichen Betriebswesen. Controlling. 16(1), 41–50 (2004)
7. Entwicklung einer Softwareanwendung zur Qualitätsbestimmung kommunaler Radverkehrsanlagen auf Basis von Crowdsourcing-Daten – INFRASense. https://bmdv.bund.de/SharedDocs/DE/Artikel/DG/mfund-projekte/infrasense.html. Accessed 2023/05/25
8. University of Oldenburg: Radweg Radar. www.radweg-radar.de. Accessed 2023/05/29
9. Planungsbüro VIA. https://www.viakoeln.de/home. Accessed 2023/06/15
10. Copenhagenize: About. https://copenhagenizeindex.eu/about/the-index. Accessed 2023/08/17
11. Allgemeiner Deutscher Fahrrad-Club ADFC: ADFC-Fahrradklima-Test. https://fahrradklima-test.adfc.de/. Accessed 2023/08/17
12. City of Dortmund: Masterplan Mobilität Dortmund 2030: Verkehrssicherheitsstrategie – Teilkonzept Radverkehr und Verkehrssicherheit. https://www.dortmund.de/media/p/masterplan_mobilitaet/downloads_24/strategien/Strategie_Verkehrssicherheit.pdf. Accessed 2023/08/17
13. Province of Antwerp: Cycle Data Hub. https://cycle-data-hub-provincieantwerpen.hub.arcgis.com/. Accessed 2023/08/17
14. FixMyBerlin: Happy Bike Index. https://fixmyberlin.de/zustand/. Accessed 2023/02/15
15. Pham, V., Nguyen, D., Donan, C.: Road Damages Detection and Classification with YOLOv7. arXiv preprint arXiv:2211.00091 (2022)
16. Terven, J., Cordova-Esparza, D.: A comprehensive review of YOLO: From YOLOv1 to YOLOv8 and beyond. arXiv preprint arXiv:2304.00501 (2023)
17. Jeong, D.: Road damage detection using YOLO with smartphone images. 2020 IEEE International Conference on Big Data (Big Data). IEEE (2020)
18. Doshi, K., Yilmaz, Y.: Road damage detection using deep ensemble learning. 2020 IEEE International Conference on Big Data (Big Data). IEEE (2020)
19. Stadtverbesserer. https://gemeinsam.oldenburg.de/oldenburg/de/flawRep/54305. Accessed 2023/08/17



20. EMSOS – Ereignismeldesystem der Stadt Osnabrück. https://geo.osnabrueck.de/emsos/. Accessed 2023/08/17
21. ADFC Bremen Mängelmelder. https://adfc-bremen.mängelmelder.de/#pageid=2. Accessed 2023/08/17
22. DeMarco, T.: Structured Analysis and System Specification. Yourdon Press, New York (1978)
23. React. https://react.dev/. Accessed 2023/06/15
24. Recharts. https://recharts.org/en-US/. Accessed 2023/06/15
25. Leaflet. https://leafletjs.com/. Accessed 2023/06/12
26. Forschungsgesellschaft für Straßen- und Verkehrswesen (FGSV): Hinweise zur einheitlichen Bewertung von Radverkehrsanlagen (Ausgabe 2021). FGSV Verlag, Cologne (2021)
27. Schering, J., Säfken, P., Marx Gómez, J., Krienke, K., Gwiasda, P.: Data Management of Heterogeneous Bicycle Infrastructure Data. In: EnviroInfo 2023 (in publication process)
28. Tasyer, D.: Eignung klassischer BI Tools für die Analyse und Darstellung von heterogenen Fahrraddaten. University of Oldenburg (unpublished Bachelor thesis), Oldenburg (2022)
29. Schering, J., Janßen, C., Kessler, R., Dmitriyev, V., Marx Gómez, J., Stehno, C., Pelzner, K., Bankowsky, R., Hentschel, R.: ECOSense and its preliminary findings: Collection and analysis of bicycle sensor data. In: Kamilaris, A., Wohlgemuth, V., Karatzas, K., Athanasiadis, I. (eds.) EnviroInfo 2020 Environmental Informatics – New Perspectives in Environmental Information Systems: Transport, Sensors, Recycling. Adjunct Proceedings of the 34th EnviroInfo Conference, pp. 145–153. Shaker Verlag, Düren (2021)
30. Visual Crossing. https://www.visualcrossing.com/. Accessed 2023/08/17
31. PostgreSQL. https://www.postgresql.org/. Accessed 2023/06/15
32. PostGIS. https://postgis.net/. Accessed 2023/06/15
33. Python. https://www.python.org/. Accessed 2023/08/19
34. Police Northrhine-Westphalia: Unfallhäufungsstellen erkennen – mit EUSKa. https://polizei.nrw/artikel/unfallhaeufungsstellen-erkennen-mit-euska. Accessed 2023/06/12
35. Ministry of Interior and Sports in Lower Saxony: Polizeiliche Kriminalstatistik. https://www.mi.niedersachsen.de/startseite/aktuelles/presseinformationen/-61569.html. Accessed 2023/06/12
36. Worldiety: BIQEmonitor. www.biqemonitor.de. Accessed 2023/08/18
37. Ramirez, S.: FastAPI. https://fastapi.tiangolo.com/. Accessed 2023/06/12
38. Shoman, M.M., Imine, H., Acerra, E.M., Lantieri, C.: Evaluation of cycling safety and comfort in bad weather and surface conditions using an instrumented bicycle. IEEE Access. 11, 15096–15108 (2023)
39. Forschungsgesellschaft für Straßen- und Verkehrswesen (FGSV): Empfehlungen für Radverkehrsanlagen (ERA). FGSV Verlag, Cologne (2012)
40. Larsson, M., Niska, A., Erlingsson, S.: Degradation of cycle paths – a survey in Swedish municipalities. CivilEng. 3(2), 184–210 (2022)
41. Smiley, E.T.: Comparison of methods to reduce sidewalk damage from tree roots. Arboricult. Urban For. 34(3), 179–183 (2008)
42. Saisree, C., U, K.: Pothole detection using deep learning classification method. Procedia Comput. Sci. 218, 2143–2152 (2023)
43. Li, B.-L., Qi, Y., Fan, J.-S., Liu, Y.-F., Liu, C.: A grid-based classification and box-based detection fusion model for asphalt pavement crack. Comput. Aided Civ. Inf. Eng. 38, 2279–2299 (2022)
44. Eaton, R.A., Joubert, R.H., Wright, E.A.: Pothole Primer – A Public Administrator's Guide to Understanding and Managing the Pothole Problem. US Army Corps of Engineers, Cold Regions Research & Engineering Laboratory, Special Report 81-21, September 1981 (Revised December 1989). https://idot.illinois.gov/Assets/uploads/files/TransportationSystem/Manuals-Guides-&-Handbooks/T2/P009.pdf
45. Labelstudio. https://labelstud.io/. Accessed 2023/06/17
46. YOLOv8. https://yolov8.com/. Accessed 2023/06/17
47. Terven, J., Cordova-Esparza, D.: A Comprehensive Review of YOLO: From YOLOv1 and Beyond. arXiv preprint arXiv:2304.00501 (2023)



48. Ma, H., Sekimoto, Y., Seto, T., Kashiyama, T., Omata, H.: Road damage detection and classification using deep neural networks with smartphone images. Comput. Aided Civ. Inf. Eng. 33(12), 1127–1141 (2018)
49. Ultralytics. https://ultralytics.com/. Accessed 2023/08/18
50. Tan, M., Le, Q.V.: EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. International Conference on Machine Learning. arXiv:1905.11946v5 (2019). https://doi.org/10.48550/arXiv.1905.11946
51. PyTorch. https://pytorch.org/. Accessed 2023/08/18
52. Torchvision. https://pytorch.org/vision/stable/index.html. Accessed 2023/08/18
53. Hecker, S., Bonney, R., Haklay, M., Hölker, F., Hofer, H., Goebel, C., Gold, M., Makuch, Z., Ponti, M., Richter, A., Robinson, L., Iglesias, J.R., Owen, R., Peltola, T., Sforzi, A., Shirk, J., Vogel, J., Vohland, K., Witt, T., Bonn, A.: Innovation in citizen science – perspectives on science-policy advances. Citizen Sci. 3(1), 1–14 (2018). https://doi.org/10.5334/cstp.114
54. Kosmala, M., Wiggins, A., Swanson, A., Simmons, B.: Assessing data quality in citizen science. Front. Ecol. Environ. 14(10), 551–560 (2016)
55. Flask. https://flask.palletsprojects.com/en/2.3.x/. Accessed 2023/06/17
56. MinIO: High Performance Object Storage for AI. https://min.io/. Accessed 2023/06/17

Tactics for Software Energy Efficiency: A Review

Jose Balanza-Martinez, Patricia Lago, and Roberto Verdecchia

Abstract Over the years, software systems have experienced growing popularization. With it, the energy they consume has witnessed exponential growth, surpassing that of the entire aviation sector. Energy efficiency tactics can be used to optimize software energy consumption. In this work, we aim at understanding the state of the art of energy efficiency tactics, in terms of activities in the field, tactic properties, tactic evaluation rigor, and potential for industrial adoption. We leverage a systematic literature review based on a search query and two rounds of bidirectional snowballing. We identify 142 primary studies, reporting on 163 tactics, which we extract and analyze via a mix of qualitative and quantitative research methods. The research interest in the topic peaked in 2015 and then steadily declined. Tactics on source code static optimizations and application level dynamic monitoring are the most frequently studied. Industry involvement is limited. This potentially creates a vicious cycle in which practitioners cannot apply tactics due to low industrial relevance, and academic researchers struggle to increase the industrial relevance of their findings. Although the energy consumed by software is a growing concern, the future of energy efficiency tactics research does not look bright. From our results emerges a call for action: the need for academic researchers and industrial practitioners to join forces to create real impact.

Keywords Systematic literature review · Software energy efficiency · Tactics

J. Balanza-Martinez · P. Lago
Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
R. Verdecchia
Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
Università degli Studi di Firenze, Florence, Italy
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
V. Wohlgemuth et al. (eds.), Advances and New Trends in Environmental Informatics 2023, Progress in IS, https://doi.org/10.1007/978-3-031-46902-2_7




1 Introduction

The energy consumption of software systems is an ever-increasing concern. Information and Communication Technology (ICT) consumes a staggering amount of electricity, estimated to produce between 2.1% and 3.9% of global greenhouse gas (GHG) emissions annually [6]. Data centers alone are estimated to produce 3% of global GHG emissions, rivaling aviation at 2.5%, and have doubled their share of the global energy supply over the past 10 years [16]. Although hardware is the direct consumer of energy, software drives its energy consumption. As defined in a work by Jelschen et al. [13], which focuses on reengineering software to optimize its energy efficiency, at the highest level of abstraction of a computer system we find application software. Application software is the level of software that most software engineers address, namely the software developed for end-users. Software engineers, now more than ever, must be aware of the energy consumption of the software they create, as the societal impact of their decisions can no longer be neglected.

A prime example of how application software drives energy consumption is google.com, Google's flagship search engine. Based on the latest figures officially released by Google, a search request on google.com consumes approximately 0.3 Wh.¹ Although this might seem like a negligible amount of energy, the search engine serves around 45.41 billion requests per month,² i.e., it consumes in total a staggering 13.62 GWh per month. With this amount of energy, one could power 44,173 European homes per month.³ Increasing the energy consumption of a search engine request by 0.1 Wh would drive the total monthly energy consumption up to 18.16 GWh per month, a 4.54 GWh increase. At such a massive scale, a seemingly negligible energy consumption increase at the application software level can dramatically increase the total energy consumption of a software application.

¹ https://googleblog.blogspot.com/2009/01/powering-google-search.html
² https://www.statista.com/statistics/1201880/most-visited-websites-worldwide
³ https://www.odyssee-mure.eu/publications/efficiency-by-sector/households/electricity-consumption-dwelling.html
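The figures above follow from simple arithmetic, reproduced here as a check (input values as cited):

```python
wh_per_request = 0.3                   # Wh per google.com search request
requests_per_month = 45.41e9           # requests served per month

monthly_gwh = wh_per_request * requests_per_month / 1e9
print(monthly_gwh)                     # ~13.62 GWh per month

increased_gwh = (wh_per_request + 0.1) * requests_per_month / 1e9
print(increased_gwh - monthly_gwh)     # ~4.54 GWh additional per month
```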

Tactics for Software Energy Efficiency

117
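As a sanity check, the back-of-envelope arithmetic above is easy to reproduce. The sketch below recomputes the cited figures; the per-request and per-household estimates are the publicly reported numbers referenced in the footnotes, not measurements of ours:

```python
# Reproduce the back-of-envelope figures cited above.
requests_per_month = 45.41e9   # google.com requests per month (footnote 2)
wh_per_request = 0.3           # Wh per search request (footnote 1)

total_gwh = requests_per_month * wh_per_request / 1e9  # Wh -> GWh
print(f"Monthly consumption: {total_gwh:.2f} GWh")     # ~13.62 GWh

# Effect of a seemingly negligible 0.1 Wh increase per request:
delta_gwh = requests_per_month * 0.1 / 1e9
print(f"+{delta_gwh:.2f} GWh -> {total_gwh + delta_gwh:.2f} GWh/month")
# ~+4.54 GWh, i.e. ~18.16 GWh per month in total
```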

Despite the concerning global trends of software energy consumption, the literature on the topic is divided [26]. Software for mobile and embedded devices has received most of the attention [20], while application software has received only a fraction of it [3, 26]. To worsen the situation, misleading biases and preconceptions often mentioned in the academic literature can confound readers. For example, a frequent yet wrongful assumption present in the literature is that software execution time and energy consumption are directly proportional, i.e., that reducing execution time will reduce energy consumption [27]. This assumption has been disproved multiple times in the recent literature [3, 18, 19]. Similarly, moving to the cloud or using green resources are often mentioned as holistic strategies to address software sustainability, while in reality such strategies do not provide any guarantee on the environmental sustainability of software applications [33].

Considering the state of practice, the outlook is also not promising. Most developers are not aware (yet) of the energy consumption of their software [25]. Developers who are aware of energy efficiency do not fully understand the energy consumption of their software, and lack concrete optimization examples and off-the-shelf solutions to address it [24, 33]. Developers who offer help on software energy optimization in popular knowledge bases tend to provide misinformed advice [29].

The goal of this paper is to identify the existing energy efficiency tactics for application software in the literature, in order to study the activity of the field, the characteristics of tactics, and their applicability in industrial settings. To achieve this goal, we conduct a systematic literature review [15] focusing on application software energy efficiency tactics. Via a mix of automated search and snowballing, we identify 142 primary studies (9 of which are extended versions). The primary studies are then analyzed by considering a classification framework comprising 9 parameters, such as execution environment, tactic type, abstraction level, and software development life cycle (SDLC) stage.

The main contributions of this paper are:
– A rigorous review of the current tactics for application software energy optimization;
– A data-driven framework to classify application software energy optimization tactics;
– An assessment of the industrial relevance of energy efficiency software tactics.

The audience of this paper comprises (i) researchers interested in understanding the current state of the art of tactics for software energy efficiency and (ii) practitioners who consider applying tactics to improve the energy efficiency of their software applications.

2 Related Work

While the literature includes several studies presenting reviews of software energy efficiency tactics, to the best of our knowledge, none of these studies specifically targets the level of application software. In addition, none of the related literature takes into account the platforms, software development stages, and abstraction levels of tactics. Further considerations on the related work are reported below.

Energy-efficient software research frequently considers embedded systems [13]. This line of research focuses on low-level software and hardware optimizations, e.g., dynamic voltage and frequency scaling, low-energy power modes, hardware architecture techniques, and the utilization of unconventional cores, e.g., field-programmable gate array circuits [22].

One area closely related to application software is mobile computing, which has received plenty of attention in the academic body of literature. Zaman and Almusalli [35] provide a review of multiple energy efficiency tactics for smartphones at different system levels. However, the tactics they propose are mostly targeted towards tuning hardware components, low-level software, and operating systems to make them more energy efficient, without considering the application software domain. Naik and Chavan [23] focus on increasing the energy efficiency of smartphone hardware components, e.g., camera and GPS. Similarly, Hans et al. [10] also present energy efficiency tactics for mobile applications. In both papers, the presented tactics focus exclusively on mobile applications, and are not applicable to devices that lack mobile-specific hardware, e.g., movement sensors.

The studies closest to ours in reviewing energy efficiency tactics for application software are those of Georgiou et al. [8] and Paradis et al. [26]. Georgiou et al. [8] present a review on techniques and tools to improve software energy efficiency. The authors describe the different techniques and tools applicable at each development stage, and study the empirical evaluations conducted. In contrast to our work, that of Georgiou et al. [8] (i) does not consider software abstraction levels, and (ii) does not consider the industrial relevance of tactics. Ignoring the abstraction level makes it harder for developers working at a specific level to select tactics, while presenting the existing empirical evaluations supports developers in understanding the efficacy and industrial applicability of tactics.

Paradis et al. [26] present the systematic literature review most closely related to our work. The authors perform a review of energy efficiency tactics, outlining strategies and identifying potential issues for future work. There are two main differences w.r.t. this work. Firstly, similarly to the work of Georgiou et al. [8], Paradis et al. categorize tactics based solely on their definition; in this work, we classify tactics based on the context in which they can be applied, e.g., abstraction level, target platform, and SDLC stage. Secondly, the work of Paradis et al. differs in the number of primary studies considered: Paradis et al. review 39 primary papers, while this work is more encompassing, drawing results from 142 primary studies. In addition, Georgiou et al. do not report the individual tactic categorization, but rather a high-level overview of each tactic type. Finally, the authors focus mostly on the verification stage of the SDLC, and do not conduct a systematic evaluation of the industrial relevance of tactics.

3 Definitions

3.1 System and Software Levels

To carefully delineate the tactics we target in this study, we provide below a brief overview of the system and software levels presented by Jelschen et al. [13]. Each level presents different opportunities to optimize the energy efficiency of software systems:

Hardware: Optimizations at this level focus on improving the hardware of computer systems to increase their energy efficiency by promoting better utilization of resources.

Low-Level Software: Optimizations at this level focus on improving the translation of source code into machine code, mainly via compiler optimizations.

Operating System: Optimizations at this level focus on improving the management of energy consumption by adjusting operating system functioning and settings, e.g., by putting resources to sleep or by scheduling resources.

Application Software: Optimizations at this level focus on improving the energy efficiency of software developed for end-users, independently of operating systems and hardware capabilities. Such optimizations take into consideration application information that is unavailable at lower levels.

3.2 Energy Efficiency Optimization

We refer to the energy efficiency optimization of software as green in software, where the goal is reducing the energy consumption of the software itself, and not green by software, where the goal is to use software to deliver energy efficient systems in other domains [4].

3.3 Platforms

In this paper, we consider the platforms targeted by application software. To avoid potential ambiguities, a definition of the different platforms, if any, considered by tactics is provided below. The definitions are based on the classification framework emerging from the coding process of this literature review.

Agnostic: Tactics that can be applied regardless of any platform, e.g., selecting the most energy-efficient thread-safe data structure for the Java language.

Workstation: Tactics that can be applied on a single workstation such as a desktop or server, e.g., measuring the energy consumption of the CPU of a single machine.

Distributed: Tactics that can be applied in a distributed setting, e.g., cloud architectural patterns that reduce the energy consumption of cloud-native applications.

Mobile: Tactics that can be applied on mobile devices, e.g., bundling the sensor requests of an application to reduce the energy consumption of the device.


3.4 Abstraction Levels

In this paper, we consider different abstraction levels of application software. These abstraction levels are defined by Buschmann et al. [1] as follows:

Architectural Level: The highest abstraction level. This level is concerned with overarching software components, layers, and their relation to the given context. Decisions at the architectural level express the fundamental structural organization of software systems.

Design Level: Decisions at the design level are smaller in scope than those at the architectural level, but are at a higher level than programming language-specific idioms. The application of a design level decision has no effect on the fundamental structure of a software system, but may have a strong influence on the architecture of a subsystem. Examples are the "Gang of Four" design patterns [7].

Code Level: The lowest abstraction level. This level is concerned with implementation details at the source code granularity and describes how to implement particular aspects of components (and the relationships between them) with the features of programming languages. An example of this level of granularity are the refactoring techniques proposed in Martin Fowler's book, Refactoring [5].

3.5 Software Development Life Cycle

In this paper, we categorize tactics based on the Software Development Life Cycle stages as defined by ISO-24748 [11], namely:

Requirements: Stage concerned with how requirements will be identified, traced, and managed.

Design: Stage concerned with defining, modeling, and describing the software system architecture and design.

Implementation: Stage concerned with how the various inputs into the software development effort will be implemented.

Verification: Stage concerned with how requirements, including non-technical requirements such as safety and security, will be verified and validated for the software system.

Maintenance: Stage concerned with how software defects and technical problems will be identified, recorded, and resolved.

4 Study Design

In this section, we present the study design employed for this systematic literature review. We begin with our research goal and research questions, followed by complementary out-of-focus questions to paint a clearer picture of the targeted scope. Next, we detail the employed search strategy, as well as the data extraction and data synthesis procedures, followed by our study replicability package.

4.1 Research Goal

This study focuses on understanding the state of the art of energy efficiency tactics, in terms of the activity of the field, the properties of tactics, the rigor of tactic evaluations, and their potential for industrial adoption. More specifically, following the Goal-Question-Metric approach [2], our goal can be formalized as follows:

Analyze tactics for software energy efficiency
For the purpose of classification and analysis
With respect to publication trends, properties, and potential for industrial adoption
From the viewpoint of software engineering researchers and practitioners

4.2 Research Questions

The main research question we address is:

RQ: What is the state of the art of tactics for software energy efficiency?

We refine the main research question into two research sub-questions:

RQ1: What are the characteristics of tactics for software energy efficiency?
RQ2: Are tactics for software energy efficiency ready to be applied in industry?

With respect to RQ1, as detailed in Sect. 2, characterizations of tactics for software energy efficiency targeted at several domains, such as cloud architectures, embedded systems, hardware, and low-level software, already exist. Hence, in this study, we explicitly focus on tactics for software energy efficiency targeting application software, i.e., tactics that can be deployed independently of the underlying hardware or operating system. With respect to RQ2, we are interested in the potential applicability of tactics in an industrial setting. For a tactic to be applicable in practice, we need more than just empirical evaluations performed by researchers in a controlled environment; we need empirical evidence in real-world settings to fully understand the prospective effects of each tactic.


Fig. 1 Search strategy: Google Scholar query (388 studies) → impurity removal (298) → initial selection criteria (81) → updated selection criteria (25) → snowballing → 142 primary studies (of which 9 extended papers)

4.3 Search Strategy

This study follows the literature search strategy shown in Fig. 1. The selected strategy allows us to better control the characteristics and number of the potential primary papers at each stage. A description of each stage is presented below. The search strategy was executed by one researcher, while two others supervised the process and revised the results.

4.3.1 Initial Search

For this investigation, we perform an initial automated search by leveraging the Google Scholar digital library. We opt for Google Scholar based on multiple factors, namely (i) it aggregates a vast body of literature compiled across several publishers, e.g., IEEE, ACM, Elsevier, and Springer; (ii) systematic literature review guidelines suggest using this digital library to conduct an initial automated search followed by a snowballing process [34]; (iii) it produces a higher yield of possible primary studies than other digital libraries, e.g., IEEE Xplore, ACM Digital Library, and Scopus; and (iv) query results can be automatically extracted. The initial search is conducted by executing on Google Scholar the search query presented in Listing 1. The end date is set to the date the query is executed, namely March 2022. The start date is left unbounded, to mitigate potential threats to validity.

Listing 1 Google Scholar search query

TITLE: ("(power OR energy) (efficient OR efficiency OR consumption))" OR environmental OR green) AND
TITLE: (tactics OR strategies OR techniques OR tools OR patterns) AND
("software (architecture OR development OR engineering)")

The query targets the keywords "power" or "energy" and "efficient", "efficiency", or "consumption" in the title of the papers to identify studies focusing on energy efficiency. The "environmental" and "green" keywords are likewise used to identify papers presenting energy efficiency tactics. The second query line targets the title keywords "tactics", "strategies", "techniques", "tools", and "patterns" to identify papers presenting software tactics. The design decision of utilizing such a broad range of synonyms for the keyword "tactics" stems from the multitude of definitions of architectural tactics present in the academic literature, and from the literature's lack of a standardized characterization of architectural tactics [21]. Lastly, we include the keywords "software" and "architecture", "development", or "engineering", searched throughout the full text of the papers, to identify papers focusing on software engineering. We refrained from filtering papers by the keyword "application software", in order not to exclude papers that target application-level software under a more specific definition, like mobile application software or cloud computing. Instead, we assessed this criterion manually, by leveraging a specific inclusion criterion (I3, see Sect. 4.3.3).

The papers identified via the initial search could be in the form of primary studies, systematic literature reviews, systematic mapping studies, or loosely structured literature reviews. The purpose of including secondary studies in the automated query is to be as encompassing as possible, in order to lay a solid and comprehensive foundation for the subsequent snowballing process. While utilized for the snowballing, secondary studies are not considered for the data extraction, as further documented in Sect. 4.3.5.

4.3.2 Impurity Removal

After the 388 potential primary studies are obtained via the automated query, an impurity removal procedure is performed to filter out entries which are not scientific peer-reviewed papers, e.g., standards, patents, and master/doctoral theses. This procedure concludes with the identification of 298 papers.

4.3.3 Application of Selection Criteria

After the impurities are removed from the initial search results, we filter the remaining candidate papers through a set of rigorous, a priori defined inclusion and exclusion criteria. A paper is included if it satisfies all of our inclusion criteria and none of the exclusion criteria. Several exclusion rounds are performed by first reading titles, then abstracts and conclusions, and finally the full text of the paper, following an incremental reading depth [14]. Our inclusion (I) and exclusion (E) criteria are defined as follows (a minimal sketch of the resulting selection logic is given at the end of this subsection):

I1 Studies focusing on energy efficiency optimization. This criterion is used to include only studies focusing on the optimization of software energy efficiency.
I2 Studies presenting software tactics. This criterion is utilized to include only studies focusing on software tactics.
E1 Studies not focused on the perspective of a software engineer or software engineering researcher. This exclusion criterion ensures that the tactics found are independently applicable by software engineers, without depending on external actors, e.g., cloud providers or hardware manufacturers.
E2 Studies in the form of editorials, tutorials, short papers, and posters, as they do not provide enough details for a thorough analysis.
E3 Studies that have not been published in English, as their analysis is unfeasible in a timely manner without a translator specializing in software engineering.
E4 Studies that have not been peer reviewed, e.g., pre-prints, technical reports, or gray literature, to ensure high quality of the considered papers.
E5 Duplicates or extensions of already included papers. When an extension of a paper is found, both papers are considered for the demographic analysis, but only the most mature version of the work is considered for data extraction.
E6 Papers that are not accessible, as, other than title and authors, we cannot analyze the content of the paper.

After the application of the inclusion and exclusion criteria, we identify 81 papers which satisfy all criteria. However, we find that many papers target specific hardware and operating systems, e.g., embedded systems and wireless sensor networks. Therefore, we introduce a third inclusion criterion to further narrow down the selected papers exclusively to the application software level:

I3 Studies focused on application software, as defined by Jelschen et al. [13]. This inclusion criterion is utilized to select exclusively studies focusing on software developed for end-users, and hence applicable by a large number of software engineers.

Once this third criterion is applied, the number of primary studies resulting from the initial search amounts to 25.
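The selection rule itself is simple to state: a paper survives only if every inclusion criterion holds and no exclusion criterion applies. The sketch below encodes this logic; the Paper record and its field names are illustrative, not part of the review protocol:

```python
from dataclasses import dataclass, field

@dataclass
class Paper:
    title: str
    optimizes_sw_energy: bool      # I1
    presents_tactics: bool         # I2
    targets_app_software: bool     # I3 (introduced after the first pass)
    exclusion_flags: set = field(default_factory=set)  # subset of {"E1", ..., "E6"}

def selected(paper: Paper) -> bool:
    """A paper is kept only if it meets all inclusion and no exclusion criteria."""
    inclusion = (paper.optimizes_sw_energy
                 and paper.presents_tactics
                 and paper.targets_app_software)
    return inclusion and not paper.exclusion_flags
```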

4.3.4 Snowballing

To mitigate potential biases due to the search query used, and to expand the primary study set, a recursive bidirectional snowballing procedure is adopted until theoretical saturation is reached [34]. In total, two rounds of backward and forward snowballing are executed. The snowballing terminates with the inclusion of 133 additional studies.

4.3.5 Use of Secondary Studies for Snowballing

In this work, we design our automated query and selection criteria to include as many papers reporting secondary studies on tactics for software energy efficiency as possible. As discussed in Sect. 2, these literature reviews have a different focus w.r.t. this work. Nevertheless, given that these reviews address related subject matters, they could incidentally capture primary studies containing tactics to be included in this review. This presents an opportunity to expand the search via a snowballing procedure. The secondary studies are exclusively used to enhance the snowballing process. In total, 15 secondary studies are identified. The complete list of secondary studies is documented in the replication package of this work (see Sect. 4.6).

4.4 Data Extraction

The purpose of the data extraction procedure is to create a classification framework for application software tactics, and to study the different facets of the tactics through this framework. The classification framework takes into account the properties and industrial relevance of the tactics, following the research questions proposed in Sect. 4.2.

4.4.1 Characteristics of Tactics for Software Energy Efficiency

Tactic characteristics can be separated into two groups: publication trends and tactic properties. Publication trends help us visualize the state of the art in application software energy efficiency research, while tactic properties help us categorize each tactic based on specific criteria.

Publication Trends To identify the publication trends of tactics we evaluate three attributes, namely publication year, publication venue, and publication type.

Tactic Properties To categorize types of tactics we employ a keywording process [28]. The keywording process consists of two steps, namely (i) collecting keywords from the full text of primary studies via open coding, and combining keywords via constant comparison [9] to identify the context and nature of each tactic, and (ii) clustering keywords into categories via axial coding to build a classification framework. The result of this process is a classification framework and the categorization of each tactic. The parameters of the classification framework resulting from the keywording are: Tactic Category, Execution Environment, Abstraction Level, Platform, and Software Development Stage. As one paper might present more than one tactic, the total number of tactics might be higher than the number of primary studies. Similarly, as a tactic can be mapped to more than one value of a classification framework parameter (e.g., a tactic might be mapped to more than one development stage), the total number of tactics per parameter might be higher than the number of tactics identified in the primary studies.
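Conceptually, each extracted tactic becomes a record over these parameters, and multi-valued fields are what allow per-parameter totals to exceed the overall tactic count. A minimal sketch of such a record follows; the field names mirror the framework described above, the value sets anticipate the categories reported in Sect. 5.2, and the Python encoding itself is illustrative:

```python
from dataclasses import dataclass, field
from enum import Enum

class Environment(Enum):
    STATIC = "static"      # applied during development
    DYNAMIC = "dynamic"    # applied at runtime

class Goal(Enum):
    MONITORING = "monitoring"
    OPTIMIZATION = "optimization"

class Abstraction(Enum):
    ARCHITECTURAL = "architectural"
    DESIGN = "design"
    CODE = "code"

@dataclass
class Tactic:
    name: str
    environment: Environment
    goal: Goal
    abstraction: Abstraction
    platforms: set[str] = field(default_factory=set)    # {"agnostic", "mobile", ...}
    sdlc_stages: set[str] = field(default_factory=set)  # a tactic may map to >1 stage
```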

4.4.2 Potential for Industrial Adoption

To evaluate the potential for industrial adoption of each software tactic, we analyze the empirical evaluation of each primary study by applying the well-defined classification model introduced by Ivarsson et al. [12]. To objectively evaluate rigor, we analyze the quality of the description of the context, of the study design and execution, and of the validity of each study. To evaluate industrial relevance, we consider the description of the industrial context, the subjects, the application scale, and the research method, as further detailed in Sect. 6.
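As a rough illustration of how such a rubric translates into data extraction, the sketch below encodes the value scales used in this review. The three-level rigor scale and the binary relevance scale follow Ivarsson et al.'s model; the numeric aggregation is our illustrative simplification, not part of the original model:

```python
from enum import Enum
from typing import Optional

class Rating(Enum):
    NO_EXPERIMENT = "no experiment"
    WEAK = "weak"
    MEDIUM = "medium"
    STRONG = "strong"

RIGOR_SCORE = {Rating.WEAK: 0.0, Rating.MEDIUM: 0.5, Rating.STRONG: 1.0}

def rigor(context: Rating, study_design: Rating, validity: Rating) -> Optional[float]:
    """Sum the three rigor aspects (0..3); None if no evaluation was reported."""
    parts = (context, study_design, validity)
    if Rating.NO_EXPERIMENT in parts:
        return None
    return sum(RIGOR_SCORE[p] for p in parts)

def relevance(subjects: bool, context: bool, scale: bool, method: bool) -> int:
    """Count the 'contributing' relevance aspects (0..4)."""
    return sum((subjects, context, scale, method))
```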

4.4.3 Use of Extended Papers

To limit potential conclusion bias, we exclude from data extraction papers for which an extended version was found (see also Sect. 4.3.3). This process leads to the exclusion of 9 extended papers, resulting in a final set of 134 primary studies considered for the data extraction process. Note that, while not used for data extraction, extended papers are considered when studying publication trends (see Sect. 5.1).

4.5 Data Synthesis

For our data synthesis procedure, we collate and summarize the extracted data to understand the current state of tactics for software energy efficiency [15]. In particular, we use a combination of descriptive synthesis (a descriptive analysis of the results) and content analysis (a categorization of results based on common characteristics).

4.6 Study Replicability

To ensure the replicability and scrutiny of this study, all raw and processed data resulting from each of the research phases is made available online.4 In addition, we have included in the supplementary material a reference table of selected papers exemplifying each of the tactic groups discussed in Sect. 5.2.3.

4 https://github.com/ee-application-software/SEIS-2023-ee-application-tactics-rep-pkg.


5 Results RQ1: Characteristics of Tactics for Software Energy Efficiency

In this section, we present the results corresponding to the characteristics of tactics for software energy efficiency, in terms of publication trends and tactic properties.

5.1 Publication Trends

5.1.1 Publication Year

An overview of the publication trends is presented in Fig. 2, where we can observe a rapid growth of publications from 2009, a peak in 2015, and a steady decline in subsequent years. Caution is needed when interpreting the low number of publications in 2022, as the search query for this study was executed at the beginning of that year, and hence the results might not be representative of the actual research output in 2022. The publication years of the primary studies range from 2004 to March 2022. The earliest paper found, published in 2004, focuses on the potential energy savings of offloading mobile computations to a cloud server compared to executing those computations locally. It is worth noticing that this paper considers mobile applications in the Android ecosystem, but does not directly mention application software, highlighting the dominance and maturity mobile computing has witnessed in application software energy efficiency research.

Fig. 2 Publication trends (number of workshop, conference, and journal publications per year, 2004–2022)


5.1.2 Publication Types

From Fig. 2 we can observe that the majority of the studies are published in conferences (91/142), while a substantial minority appears in journals (40/142), and only a handful in workshops (11/142).

5.1.3 Publication Venues

The publication venues where three or more primary studies were published are listed in Table 1. The venues with the most publications are the International Conference on Software Engineering (ICSE, 9 papers), followed by the International Workshop on Green and Sustainable Software (GREENS, 5 papers). The collected data shows a high variety of publication venues, with only a small minority of venues presenting more than one publication on tactics for software energy efficiency (22/100).

5.2 Tactic Properties

In this section we present the properties of the tactics for software energy efficiency identified in our review.

Table 1 Most popular venues (publications ≥ 3). W = Workshop, C = Conference, J = Journal

Venue | Type | # of studies
IEEE/ACM International Conference on Software Engineering (ICSE) | C | 9
International Workshop on Green and Sustainable Software (GREENS) | W | 5
International Green and Sustainable Computing Conference (IGSC) | C | 4
Information and Software Technology | J | 4
IEEE Int. Conf. on Software Analysis, Evolution, and Reengineering (SANER) | C | 3
IEEE Int. Conf. on Software Maintenance and Evolution (ICSME) | C | 3
IT Professional | J | 3
Journal of Systems and Software (JSS) | J | 3
ACM International Conference on Object Oriented Programming Systems Languages & Applications (OOPSLA) | C | 3
ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI) | C | 3
Other | Various | 102


In total 163 tactics are identified from the primary studies. Out of the 134 studies considered (i.e., by excluding the 9 extended papers for the data extraction), 24 proposed more than one tactic.

5.2.1 Execution Environment

The first parameter of tactics for software energy efficiency found through the keywording process is the execution environment. The execution environment of a tactic can be categorized as either dynamic, if the tactic needs to be run during the execution of software applications, or static, if the tactic needs to be executed during the development of software applications (i.e., outside of its runtime environment).

5.2.2 Tactic Goal

The second parameter found during the keywording process is the tactic goal. The tactic goal can be either monitoring, if the tactic allows developers to estimate the energy impact of their software, or optimization, if the tactic consists of changing software characteristics (code, configurations, development environments) to improve energy efficiency.

5.2.3 Execution Environments and Tactic Goals

Supported by the parameters defined above (Sects. 5.2.1 and 5.2.2) we classify tactics into four different groups, namely dynamic monitoring, dynamic optimization, static monitoring, and static optimization. An overview of the distribution of tactics among such groups is depicted in Fig. 3, and is further characterized below.

Fig. 3 Tactic execution environments and goals (static optimization: 85, dynamic optimization: 25, dynamic monitoring: 42, static monitoring: 11)


Static Optimization Static optimization focuses on improving the energy efficiency of software outside runtime execution. This category represents the majority of tactics (85/163). Most tactics focus on the code level (43/85), followed by the architectural (26/85) and the design level (24/85). At the code level, tactics focus on data-structure implementation and/or low-granularity code optimizations (e.g., control flow changes). At the architectural level, static optimization focuses on improving high-level design decisions, e.g., using the most energy-efficient programming language, software libraries, and development environments. At the design level, tactics focus on modifying existing design patterns, or on selecting the most appropriate concurrency constructs. Regarding the software development stages, the majority of tactics are applied during the implementation stage (37/85), followed by the maintenance stage (26/85), the design stage (23/85), the requirements stage (6/85), and the verification stage (1/85). Tactics at the design and implementation stages include novel programming practices, such as the use of genetic algorithms, search-based optimizations, and evolutionary computing. Tactics at the requirements stage propose to add energy efficiency as a quality attribute, and suggest how to balance it with other functional and quality attributes. Tactics at the verification stage propose improving testing techniques to make them more energy efficient.

Dynamic Optimization This group of tactics focuses on optimizing the energy efficiency of software at runtime. It contains a smaller fraction of tactics (25/163) compared to the static optimizations (85/163). Regarding the abstraction level of dynamic optimizations, the majority of tactics are employed at the architectural level (13/25), followed by the code level (11/25) and the design level (1/25). Tactics at the architectural level focus on dynamically choosing runtime environments for software components. Tactics at the code level self-adapt software components, e.g., data structures, to increase energy efficiency based on runtime measurements. Tactics at the design level allow developers to change design elements during execution depending on runtime information, such as input size. Regarding software development stages, these tactics only appear at the design stage (14/25) or the implementation stage (12/25).

Dynamic Monitoring This group of tactics focuses on monitoring application software at runtime to estimate its energy consumption; a minimal sketch of such a runtime measurement is given at the end of this subsection. It is the largest group of monitoring tactics (42/163). Tactics in this group most commonly measure entire applications (25/42), followed by code sections (14/42) and design elements (3/42). Across development stages, since dynamic measuring is performed by measuring the energy consumption of application software at runtime, all instances occur at the verification stage (42/42).

Static Monitoring This group of tactics focuses on measuring the energy consumption of application software outside of its runtime via static analysis and/or energy models. Only a handful of tactics are proposed to monitor applications based on static source code analysis (11/163). Analyses in this group can use information previously collected at runtime, e.g., the runtime data upon which an energy model is built. Nevertheless, these tactics are exclusively executed during the development stage, e.g., by showing developers the energy consumption of lines of code via IDE extensions backed by an energy model. Regarding the considered abstraction level, static monitoring tactics mostly consider the code level (7/11), followed by the architectural (3/11) and the design level (1/11). Regarding the software development stages, most tactics are used at the verification stage (9/11), and only a few at the design stage (2/11).

Fig. 4 Tactic abstraction levels (code: 76, architectural: 67, design: 29)
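To make the dynamic monitoring group concrete, the sketch below measures the CPU package energy a function consumes via the Linux powercap (Intel RAPL) interface, one common substrate for such tactics on workstations. This is a minimal, Linux- and Intel-specific illustration, not a tactic drawn from a specific primary study; reading the counter may require elevated permissions, and the counter can wrap around on long runs:

```python
import time
from pathlib import Path

# Energy counter (microjoules) of CPU package 0 via the Linux powercap interface.
RAPL = Path("/sys/class/powercap/intel-rapl:0/energy_uj")

def measure(fn, *args):
    """Return fn's result plus the package energy (J) and wall time (s) it used."""
    e0, t0 = int(RAPL.read_text()), time.monotonic()
    result = fn(*args)
    e1, t1 = int(RAPL.read_text()), time.monotonic()
    return result, (e1 - e0) / 1e6, t1 - t0  # counter wrap-around not handled

_, joules, seconds = measure(sorted, list(range(1_000_000))[::-1])
print(f"{joules:.2f} J over {seconds:.3f} s")
```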

5.2.4 Abstraction Level

Regarding the abstraction level considered by the tactics identified in our review, an overview of their distribution is depicted in Fig. 4. As can be observed in the figure, most tactics consider the code level (76/163), followed by the architectural (67/163), and the design one (29/163).

5.2.5 Software Development Stage

An overview of the software development stages of the tactics is depicted in Fig. 5. Most tactics are employed at the verification stage (51/163), followed by the implementation (50/163), design (39/163), and maintenance stages (26/163). Energy efficiency tactics for the requirements stage appear to be only marginally explored (6/163).5

5 The sum of tactics across development stages is higher than 163, as some tactics are mapped to more than one development stage.


Fig. 5 Software development stages (verification: 51, implementation: 50, design: 39, maintenance: 26, requirements: 6)

Fig. 5 Software development stages 80 60 40

68

67

20 17 0

Agnostic

Mobile

Cloud

11 Workstation

Fig. 6 Tactic targeted platforms

5.2.6 Platform

An overview of the platforms considered by the tactics is depicted in Fig. 6. Most tactics are either platform agnostic (68/163) or target mobile computing (67/163). Only a smaller number of tactics regard cloud environments (17/163) and single workstations (11/163).


Main findings RQ1: Energy Efficiency Tactic Characteristics
– Starting from 2004, publications on energy efficiency tactics reached a peak of popularity in 2015, and then steadily declined.
– Static optimization tactics are the most frequent, followed by dynamic monitoring tactics.
– The majority of tactics consider either the source code or the architectural level, and only a smaller portion the design level.
– Most tactics can be used during the verification or implementation stage; fewer during the design or maintenance stage.
– Most tactics are platform agnostic or target mobile computing. Far fewer exist for cloud and workstation environments.

6 Results RQ2: Potential for Industrial Adoption

In this section we document the potential for industrial adoption of the tactics identified in this review.

6.1 Industrial Involvement

To evaluate the industrial involvement in tactics for software energy efficiency, we leverage three categories: academic (if all the authors are affiliated with academia), industrial (if all authors are affiliated with industry), and mixed (if co-authors are from both academia and industry). Primary studies produced exclusively by academic authors are the vast majority (119/134), while only a handful of papers have both academic and industrial authors (12/134), and few have exclusively industrial authors (3/134).

6.2 Rigor and Industrial Relevance

To assess the readiness for industrial adoption of the software tactics, we analyze the evaluation of each primary study based on rigor and industrial relevance as defined by Ivarsson et al. [12] (see also Sect. 4.4.2).

Rigor [12] is defined as the precision of the research approach and its documentation. It is measured via three parameters, namely: (i) context, i.e., how well the context is presented, and whether its description is sufficient to make objective comparisons with other contexts, (ii) study design, i.e., the products, resources, and processes used in the evaluation, and (iii) validity, i.e., any limitations or threats to the validity of the evaluation and the measures taken to limit them. The three parameters are measured with the values "weak", "medium", and "strong". We also include a "no experiment" value to categorize studies not including any evaluation.

Industrial relevance [12] is defined as the realism of the environment in which the results are obtained and of the research method used to produce them. It is measured via four parameters: (i) subjects, i.e., the subjects used in the evaluation, (ii) context, i.e., the context in which results are obtained, (iii) scale, i.e., the type of application used in the evaluation, and (iv) research method, i.e., the research method used, e.g., laboratory experiments or case studies. The four parameters are measured with two values, "contributing" and "non-contributing", each representing whether the characteristic under scrutiny contributes to industrial relevance or not. To limit conclusion bias, we introduce a "no experiment" value to categorize papers that do not perform any type of evaluation.

Rigor An overview of the primary study rigor is depicted in Fig. 7. Out of all primary studies, only a small portion does not report any type of tactic evaluation (7/134). Regarding the context, most studies describe it accurately (83/134). A smaller portion reports the context with insufficient detail to allow comparison with other contexts (40/134), while only a few papers do not describe the context at all (4/134). Regarding the study design, primary studies present a medium or strong study design: most either describe the study design to an extent that lets the reader compare the results to other studies (61/134), or mention the steps taken without allowing the reader to understand the measurements and statistical analyses performed (61/134). Only a few studies do not mention the study design at all (5/134). Regarding validity, studies are either a hit or a miss: the majority does not discuss any threat to validity (66/134), while a similar number thoroughly describes them (50/134). Few papers mention threats without properly documenting them (11/134).

Fig. 7 Rigor (context, study design, and threats to validity rated as no experiment, weak, medium, or strong)

Fig. 8 Industrial relevance (subjects, context, scale, and research method rated as no experiment, non-contributing, or contributing)

Industrial Relevance Figure 8 paints a different picture when considering the industrial relevance of primary studies. Regarding subjects, the vast majority of studies present evaluations where the researchers themselves are the subjects applying the tactics (122/134). Only a few papers present evaluations where the subjects applying the tactics are industrial practitioners (5/134). Regarding context, the vast majority of papers present evaluations performed in laboratory settings or controlled environments (122/134). Only a few studies consider a real-world industrial setting (5/134). The industrial relevance category showcasing the highest number of "contributing" studies is scale: most papers present evaluations on real-world applications (77/134), while a smaller portion utilizes ad hoc software, mock-ups, or benchmarks (50/134). Finally, regarding the research method used, most primary studies use laboratory experiments and theoretical mathematical analyses (119/134), while only a much smaller fraction conducts evaluations using a research method that facilitates the investigation of real situations, such as case studies and field studies (8/134).

Main findings RQ2: Potential for Industrial Adoption
– The field showcases very limited industry involvement.
– The rigor used to evaluate tactics is usually sound. Most studies document their context and study design with care, albeit threats to the validity of evaluations are often not discussed.
– The industrial relevance of tactics is scarce. Most tactics are developed and applied by academic researchers; they consider in vitro contexts and use controlled experiments. Adequate scale of evaluations is mostly driven by the use of open source projects.


7 Discussion

The results collected via this systematic literature review provide a clear picture of the landscape of tactics for software energy efficiency. The topic began to attract attention in 2009 and, after reaching a peak of popularity in 2015, experienced a loss of interest (see Sect. 5.1). Multiple conjectures can be made about this trend, but the exhaustion of software energy efficiency tactics cannot be one of them: to date the topic still displays vast improvement possibilities, and is characterized by only marginally explored areas (e.g., tactics for cloud and edge environments [31]). An explanation could be the increasing complexity and fragmentation software applications have experienced over the years, making tactics harder to design and evaluate. Another explanation could be the lack of a consolidated research foundation driving the topic. This conjecture is further corroborated by the scattered publication venues (see Sect. 5.1.3), potentially indicating a missing unified research effort.

Regarding tactic properties (see Sect. 5.2), the high occurrence of static optimization and dynamic monitoring tactics might be due to the relative ease of developing such tactics compared to dynamic optimization and static monitoring ones. The trend towards focusing on low-hanging fruit is further supported by the most common abstraction levels used. Considering static optimizations, the vast majority focuses on code optimizations, i.e., isolated adjustments that do not involve other software components or dependencies. Similarly, most dynamic monitoring tactics focus on monitoring entire applications, i.e., they do not require additional analyses to identify which software elements, or sections of code, are more energy greedy.

From the development stages of tactics (see Sect. 5.2.5) we can observe how their distribution is driven by the most recurrent tactic types. The high number of tactics utilizable during the verification stage is primarily driven by dynamic monitoring tactics. Similarly, static optimizations mostly contribute to the implementation stage tactics.

From the distribution of tactics across platforms (see Sect. 5.2.6), most tactics are either platform agnostic or mobile-centric. This trend can be attributed to the emphasis on energy efficiency in mobile contexts. With the adoption of open source operating systems, e.g., Android, and mobile hardware advancements, mobile platforms are becoming ever more similar to workstations. Therefore, the majority of tactics are either platform agnostic (i.e., can be applied across platforms) or consider mobile software. The low number of cloud tactics could be driven by the relatively recent consolidation of this platform, for which we expect a growing number of tactics in the future.

From the results of RQ2 it emerges that, while the rigor of tactic evaluations is high, there is a prominent lack of industrial involvement. The topic remains primarily an academic interest, with industrial parties displaying little investment. This trend is reflected in tactic evaluations, which, due to the low representativeness of their subjects, contexts, and research methods, possess little industrial relevance. This creates a vicious cycle, in which academic efforts are hindered by the lack of industrial involvement, and industry cannot adopt academic solutions due to the low representativeness of the solutions. To break this vicious cycle, as suggested in recent literature [33], policy makers need to steer the way via regulations that make software sustainability a primary concern of industry. As a first possible step toward engaging practitioners, an open-source and open-access archive of tactics was recently made available online [17].

8 Threats to Validity

Despite our best efforts, the presented results might have been influenced by threats to validity. Following the threat classification of Runeson et al. [30], we discuss four aspects.

Construct Validity Our results could be influenced by the search query and digital library used. As a mitigation strategy, we used exhaustive bi-directional snowballing. To mitigate threats related to the literature selection and data extraction processes, we adopted consolidated guidelines for literature reviews [15] and snowballing [34], and used a systematic evaluation process to judge the rigor and industrial relevance of tactics [12].

Internal Validity The identification of primary studies, the data extraction, and the keywording processes might have been influenced by subjective interpretations, and hence be prone to biases. This threat is heightened by the fact that the first author of this study was primarily responsible for these processes. To mitigate related threats, all steps were supervised by two other researchers, and any doubt or impediment was jointly discussed. In addition, all intermediate and final data was scrutinized and reviewed by all three researchers. Finally, the scientific quality of the primary studies could have influenced the results of the study. As done in similar work [32], rather than conducting a manual, and potentially subjective, quality assessment, we opted to include only primary studies published in venues employing peer-review processes. Therefore, we do not deem the quality of the primary studies to have noticeably influenced the results.

External Validity A common threat to the external validity of SLRs is the representativeness of the identified literature. To mitigate related threats, our search query was purposely designed to be as encompassing as possible, accepting the higher effort required for manual scrutiny of the results. Augmented via bi-directional snowballing, our final dataset comprises 142 primary studies, and acknowledges the presence of 15 secondary studies. While not claiming completeness, we conjecture that the identified literature is representative of the current state of the art.

Reliability To promote reliability, all intermediate and final data of our literature selection, data extraction, and data analysis is made available online (see Sect. 4.6).


9 Conclusion

In this paper, we characterize the state of the art of tactics for software energy efficiency. To achieve our goal, we conducted a rigorous systematic literature review, leading to the identification of 142 primary studies and a total of 163 tactics. Although the energy consumed by software systems is a growing concern, the state of the research area, as studied through the lens of the literature, is not bright. The topic reached considerable popularity in 2015, but steadily declined afterwards. Potentially driven by ease of analysis, most tactics focus on static optimization or dynamic monitoring, and consider either the energy consumed by entire applications or by lines of code. Industry involvement is scarce, as reflected by the low industrial relevance of tactics. Tactics are, to date, mostly an academic concern that does not transfer to practice. From the results emerges a call to action for academic researchers and industrial practitioners to join forces and study how software sustainability can be improved, a task needed now more than ever before. Only by joining forces, by firmly bridging the current gap between academic research and industrial adoption, and by merging the current academic fragmentation, can the sustainability of software truly be addressed.

References

1. Buschmann, F., Meunier, R., Rohnert, H., Sommerlad, P., Stal, M.: Pattern-Oriented Software Architecture: A System of Patterns, vol. 1. Wiley, New York (2008)
2. Basili, V.R., Caldiera, G., Rombach, H.D.: The goal question metric approach. In: Encyclopedia of Software Engineering, pp. 528–532. Wiley, New York (1994)
3. Capra, E., Francalanci, C., Slaughter, S.A.: Is software "green"? Application development environments and energy efficiency in open source applications. Inf. Softw. Technol. 54(1), 60–71 (2012)
4. Condori Fernandez, N., Lago, P.: The influence of green strategies design onto quality requirements prioritization. In: International Working Conference on Requirements Engineering: Foundation for Software Quality, pp. 189–205. Springer, Berlin (2018)
5. Fowler, M.: Refactoring. Addison-Wesley Professional, Reading (2018)
6. Freitag, C., Berners-Lee, M., Widdicks, K., Knowles, B., Blair, G.S., Friday, A.: The real climate and transformative impact of ICT: a critique of estimates, trends, and regulations. Patterns 2(9), 100340 (2021)
7. Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design Patterns: Elements of Reusable Object-Oriented Software. Pearson Deutschland GmbH, München (1995)
8. Georgiou, S., Rizou, S., Spinellis, D.: Software development lifecycle for energy efficiency: techniques and tools. ACM Comput. Surv. 52(4), 1–33 (2019)
9. Glaser, B.G., Strauss, A.L.: The Discovery of Grounded Theory: Strategies for Qualitative Research. Routledge, New York (2017)
10. Hans, R., Burgstahler, D., Mueller, A., Zahn, M., Stingl, D.: Knowledge for a longer life: development impetus for energy-efficient smartphone applications. In: 2015 IEEE International Conference on Mobile Services, pp. 128–133. IEEE (2015)


11. ISO/IEC/IEEE: International Standard - Systems and Software Engineering–Life Cycle Management–Part 5: Software Development Planning. IEEE (2017)
12. Ivarsson, M., Gorschek, T.: A method for evaluating rigor and industrial relevance of technology evaluations. Empir. Softw. Eng. 16(3), 365–395 (2011)
13. Jelschen, J., Gottschalk, M., Josefiok, M., Pitu, C., Winter, A.: Towards applying reengineering services to energy-efficient applications. In: 2012 16th European Conference on Software Maintenance and Reengineering, pp. 353–358. IEEE (2012)
14. Keshav, S.: How to read a paper. ACM SIGCOMM Comput. Commun. Rev. 37(3), 83–84 (2007)
15. Kitchenham, B., Charters, S.: Guidelines for performing systematic literature reviews in software engineering. Technical report, EBSE Technical Report EBSE-2007-01 (2007)
16. Knowles, B.: ACM TechBrief: Computing and Climate Change. Association for Computing Machinery, New York (2021)
17. Lago, P.: An Archive of Awesome and Dark Tactics (2022). https://s2group.cs.vu.nl/AwesomeAndDarkTactics/
18. Li, D., Hao, S., Gui, J., Halfond, W.G.: An empirical study of the energy consumption of android applications. In: 2014 IEEE International Conference on Software Maintenance and Evolution, pp. 121–130. IEEE (2014)
19. Lima, L.G., Soares-Neto, F., Lieuthier, P., Castor, F., Melfe, G., Fernandes, J.P.: Haskell in green land: analyzing the energy behavior of a purely functional language. In: International Conference on Software Analysis, Evolution, and Reengineering, vol. 1. IEEE (2016)
20. Manotas, I., Pollock, L., Clause, J.: Seeds: a software engineer's energy-optimization decision support framework. In: International Conference on Software Engineering (2014)
21. Márquez, G., Astudillo, H., Kazman, R.: Architectural tactics in software architecture: a systematic mapping study. J. Syst. Softw. 197, 111558 (2023)
22. Mittal, S.: A survey of techniques for improving energy efficiency in embedded computing systems. Int. J. Comput. Aided Eng. Technol. 6(4), 440–459 (2014)
23. Naik, B.A., Chavan, R.K.: Optimization in power usage of smartphones. Int. J. Comput. Appl. 119(18), 7–13 (2015)
24. Ournani, Z., Rouvoy, R., Rust, P., Penhoat, J.: On reducing the energy consumption of software: from hurdles to requirements. In: ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM) (2020)
25. Pang, C., Hindle, A., Adams, B., Hassan, A.E.: What do programmers know about the energy consumption of software? PeerJ PrePrints 3 (2015)
26. Paradis, C., Kazman, R., Tamburri, D.A.: Architectural tactics for energy efficiency: review of the literature and research roadmap. In: Hawaii International Conference on System Science (2021)
27. Pereira, R., Couto, M., Ribeiro, F., Rua, R., Cunha, J., Fernandes, J.P., Saraiva, J.: Energy efficiency across programming languages: how do energy, time, and memory relate? In: ACM SIGPLAN International Conference on Software Language Engineering (2017)
28. Petersen, K., Feldt, R., Mujtaba, S., Mattsson, M.: Systematic mapping studies in software engineering. In: International Conference on Evaluation and Assessment in Software Engineering (2008)
29. Pinto, G., Castor, F., Liu, Y.D.: Mining questions about software energy consumption. In: Proceedings of the 11th Working Conference on Mining Software Repositories (2014)
30. Runeson, P., Höst, M.: Guidelines for conducting and reporting case study research in software engineering. Empir. Softw. Eng. 14(2), 131–164 (2009)
31. Toczé, K., Madon, M., Garcia, M., Lago, P.: The dark side of cloud and edge computing: an exploratory study. In: Workshop on Computing within Limits 2022. LIMITS (2022)
32. Verdecchia, R., Ricchiuti, F., Hankel, A., Lago, P., Procaccianti, G.: Green ICT research and challenges. In: Advances and New Trends in Environmental Informatics, pp. 37–48. Springer International Publishing, Cham (2017)


33. Verdecchia, R., Lago, P., de Vries, C.: The future of sustainable digital infrastructures: a landscape of solutions, adoption factors, impediments, open problems, and scenarios. Sustainable Comput. Inf. Syst. 35, 100767 (2022)
34. Wohlin, C.: Guidelines for snowballing in systematic literature studies and a replication in software engineering. In: International Conference on Evaluation and Assessment in Software Engineering (2014)
35. Zaman, N., Almusalli, F.A.: Smartphones power consumption & energy saving techniques. In: International Conference on Innovations in Electrical Engineering and Computational Technologies. IEEE (2017)

everWeather: A Low-Cost and Self-Powered AIoT Weather Forecasting Station for Remote Areas

Sofia Polymeni, Georgios Spanos, Dimitrios Tsiktsiris, Evangelos Athanasakis, Konstantinos Votis, Dimitrios Tzovaras, and Georgios Kormentzas

Abstract Weather constitutes a crucial factor that impacts many human outdoor activities, whether related to obligations or pleasure. In the contemporary era, due to climate change, the weather is more unstable and the forecasting task is more challenging than ever. By combining the Internet of Things (IoT) with Artificial Intelligence (AI), a new research field emerges, called the Artificial Intelligence of Things (AIoT), which could offer significant possibilities for the research community to efficiently tackle short-term weather forecasting. Renewable energy sources constitute solutions for the achievement of sustainable development goals and could also offer power autonomy to a weather forecasting station. In the present research study, everWeather is proposed as a low-cost, self-powered weather forecasting station based on the AIoT paradigm and renewable energy. The proposed solution combines a variety of low-cost environmental sensors, the prowess of solar energy, and an appropriate lightweight Machine Learning (ML) algorithm, namely Multiple Linear Regression (MLR), in order to forecast physical weather for the next half hour. Preliminary experiments have been conducted to validate the proposed solution, and the corresponding results highlight that the performance of the everWeather station is quite satisfactory in terms of reliability and forecasting accuracy.

Keywords Internet of Things · Machine Learning · Weather forecasting · Artificial Intelligence of Things

S. Polymeni
Information Technologies Institute, Centre for Research and Technology Hellas, Thessaloniki, Greece
Department of Information and Communication Systems Engineering, University of the Aegean, Samos, Greece
e-mail: [email protected]

G. Spanos () · D. Tsiktsiris · E. Athanasakis · K. Votis · D. Tzovaras
Information Technologies Institute, Centre for Research and Technology Hellas, Thessaloniki, Greece
e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]

G. Kormentzas
Department of Information and Communication Systems Engineering, University of the Aegean, Samos, Greece
e-mail: [email protected]

1 Introduction

Undoubtedly, weather is one of the main factors that affect human life, both from a physical (e.g., air temperature, relative humidity) and a chemical (e.g., greenhouse gases, PM2.5) aspect, and from farming to safe transportation, due to its unstable and unpredictable nature. Weather forecasting refers to the process of predicting the atmospheric conditions at a given location and timestamp by collecting quantitative environmental data about the state of the atmosphere and using scientific understanding to estimate how the atmosphere will evolve in the near future [1]. As a result, weather forecasting will always remain at the forefront of the research community's interests as a way to best describe the unknown nature of the atmosphere. This indispensable need for weather forecasting is of utmost importance in the modern era due to climate change, which can cause various natural hazards such as floods and wildfires [13].

Weather stations are the most common systems that help predict future fluctuations of the atmosphere by monitoring its current status, thus helping humans take preventive measures against any instabilities. For a long time, the weather forecasting process was supervised by specialized forecasters, requiring qualified human labor and regular equipment maintenance. Over the last few years, with technology's burgeoning development, weather stations have become more automated, requiring little or even zero supervision [31]. In their most basic form, automated weather stations monitor atmospheric conditions by collecting real-time physical environmental values, including, among others, air temperature, relative humidity and atmospheric pressure. More complex automated weather forecasting stations can even include short-term forecasting for various weather variables.

In the digital era, the Internet of Things (IoT) and Artificial Intelligence (AI), which encompasses the ever-growing field of Machine Learning (ML), could offer powerful and sophisticated solutions to a wide variety of contemporary problems. The combination of IoT and AI has created a new research field, the Artificial Intelligence of Things (AIoT) [25, 33]. This research field has been utilized successfully in the recent literature, from health [7, 10] and transportation [18] to agriculture [16] and cybersecurity [28, 29].

Nowadays, renewable energy sources, such as solar power, wind power and hydro-power, are in high demand, since they are considered powerful tools of vital importance against ever-increasing climate change. Wind and solar power constitute the main pillars of renewable energy sources in G20 countries towards decarbonization [26].


Solar energy is one of the planet's most freely available energy sources and can generate electricity by capturing sunlight on solar panels in a joint chemical and physical reaction [9]. However, the amount of gathered light can differ greatly depending on the time of day, the location and the season. For this reason, solar energy is a particularly popular renewable energy source among Mediterranean countries [6].

Considering all the aforementioned, in this research work the "everWeather" low-cost and self-powered weather forecasting station is presented, which is based on AIoT and renewable energy sources. More specifically, the proposed solution is a complete system comprising both hardware and software for monitoring physical and chemical atmospheric parameters and forecasting short-term physical weather conditions. Regarding the hardware implementation of the proposed weather station, a variety of low-cost sensors and two ESP32 micro-controllers are employed, whereas solar energy is exploited for the power autonomy of the proposed system. Finally, with respect to the everWeather software, a lightweight ML model, Multiple Linear Regression (MLR), is adopted in order to meet the resource constraints of micro-controllers.

The remainder of this paper is organized as follows. In Sect. 2, related works regarding the development of similar AIoT systems are presented. In Sect. 3, the system's overall architecture is described, from hardware components, such as the environmental sensors and the sender-receiver circuits, to software components, such as the forecasting algorithm and the Application Programming Interface (API) for monitoring and storing the collected data. In Sect. 4, the experimentation part of the present work is analyzed, from the system deployment in a real-life scenario to data analysis and forecasting results. Finally, in Sect. 5, the conclusions of the present research work are summarized and future work directions are outlined.

2 Related Work

As described in the previous section, weather forecasting stations constitute powerful tools for monitoring and collecting environmental data. With the advent of AIoT, many research works have investigated its possibilities in the weather forecasting context, as reported in a recent related literature review [21]. In this section, similar approaches from the recent literature on physical weather forecasting IoT systems are presented.

In this context, a related work was presented by Parashar [20], who proposed an AIoT-based automated system that monitors and reports weather conditions by collecting data including rainfall, dust, carbon dioxide (CO2), carbon monoxide (CO), luminosity, air temperature, relative humidity and atmospheric pressure. The suggested system predicts the temperature from the other environmental factors, in case of a temperature sensor defect, using an MLR algorithm with a variance and a Mean Absolute Error (MAE) equal to 0.95 and 1.10, respectively.


Similarly, Popa et al. [24] developed an AIoT-based weather and pollution reporting system that collects data from air temperature, CO, nitrogen dioxide (NO2), sulphur dioxide (SO2) and PM2.5 sensors and offers temperature predictions by implementing a Gaussian process regression with a Root Mean Square Error (RMSE) equal to 6.9. Ponce and Montoya [23] proposed an IoT-based system that collects and predicts air temperature values using a supervised Artificial Neural Network (ANN), achieving an MAE of 0.55. Another approach, by Hidayat and Soekirno [12], is based on AIoT for temperature monitoring and prediction to support Urban Heat Island (UHI) information. The proposed solution predicts upcoming temperature values to anticipate the impact of the UHI index with an accuracy of 85.1% and a Mean Square Error (MSE) of approximately 0.11. Additionally, Barthwal and Sharma [5] developed a location-aware system for monitoring, sensing and analyzing the presence of the UHI effect by collecting air temperature, relative humidity, latitude and longitude values and predicting the mean surface temperature with a Support Vector Regression (SVR) algorithm, with MAE and RMSE equal to 0.8 and 1.18, respectively.

Balamurugan and Manojkumar [3] proposed an AIoT-based system for short-term rain forecasting using Long Range (LoRa) communications that collects air temperature, relative humidity and atmospheric pressure data. The authors used public data to validate the forecasting accuracy of the proposed logistic regression; their model achieved an RMSE of approximately 0.11 and an accuracy of 84%. Karvelis et al. [15] proposed a lightweight onboard solution for real-time weather prediction on ships, called "PortWeather", which consists of custom-made software and a ready-to-use weather station, collecting wind direction and wind speed in addition to air temperature, relative humidity and atmospheric pressure. The proposed solution offers both wind speed and wind direction forecasting using a lightweight MLR model, with an RMSE equal to 2.01 and 34.45, and an MAE equal to 1.24 and 17.19, respectively. Finally, Fowdur et al. [8] proposed a real-time AIoT-based weather station that is able to provide short-term weather forecasting. The proposed system monitors weather parameters including air temperature, relative humidity, atmospheric pressure, rainfall, wind speed and wind direction and offers forecasts for these factors at intervals ranging from 20 minutes to 1 hour, using an MLR algorithm with an overall percentage error of 6.58%.

Most of the AIoT systems presented above utilize low-cost sensors for short-term forecasting of various weather factors. The proposed work is closest to that of Fowdur et al. [8] and Karvelis et al. [15], since it investigates ways to forecast various weather variables with a lightweight multiple regression model. However, everWeather is differentiated from these two works with respect to (a) the communication protocol, by utilizing the one appropriate for an AIoT system, LoRa; (b) the overall cost, which remains extremely low (since everWeather uses two ESP32 micro-controllers) in comparison with the two other systems (especially that of Karvelis et al. [15], who used the ready-to-use Airmar


200WX weather station); (c) the power autonomy of the system, exploiting solar energy; and (d) the ML algorithm, which includes, apart from the environmental factors, a seasonality feature.

3 everWeather System

The proposed everWeather AIoT-based weather forecasting station is a complete system that takes the financial aspect [22] into consideration in its design and comprises the corresponding hardware and software components. The hardware components are the sender circuit, which also contains the environmental sensors, the receiver circuit, and the server used for data storage. The software components include the system configuration, the weather forecasting algorithm and the API. In the following subsections, the everWeather components are described.

3.1 Hardware

The sender circuit is the main weather station, including all the sensors needed for the implementation. The main processing unit of the circuit is the ESP32 (NodeMCU-32s) development board, which features both Bluetooth and Wi-Fi connectivity and supports development in various programming languages, including Arduino and MicroPython, thus enabling fast prototyping of IoT applications. The circuit collects a variety of weather data, including air temperature, relative humidity, luminosity, wind speed, rainfall and atmospheric pressure, as well as a few key air quality indices, such as CO, CO2 and PM2.5 concentrations. All the sensors needed for the weather forecasting station, along with their respective sensed values, are presented in Table 1.

Table 1 Sensors included in the sender circuit of the everWeather station along with their respective sensed values

  Sensor type         Sensed values
  DHT22               Air temperature, relative humidity
  BH1750FVI           Luminosity
  Analog anemometer   Wind speed
  YL-83               Raindrops
  BMP280              Atmospheric pressure
  MQ-7                CO concentration
  MQ-135              CO2 concentration
  PMS5003             PM2.5 concentration


Fig. 1 Complete system architecture of the outdoor module (sender circuit) of the everWeather station

In addition, in Fig. 1 the connections between all of the implemented components are depicted in full detail. It should be noted that the anemometer is the only sensor that requires an input voltage higher than the 5 V maximum output of the ESP32 micro-controller; therefore, an MT3608 DC-DC step-up booster was used to boost the 5 V output from the ESP32 to a 12.8 V input for the anemometer.

The sender circuit is the only component of the everWeather station that has to be placed in an outdoor environment, thus requiring it to be self-powered. For this purpose, an 18650 Li-ion battery was used as a power supply for the ESP32 micro-controller. As presented in Fig. 1, the battery supplies the ESP32 micro-controller through a DC-DC step-up 5 V booster with a micro USB cable operating as a charger, by boosting the input voltage from the battery, which ranges from 3.7 to 4.2 V, to a 5 V input for the ESP32.


However, as the sender circuit needs to operate all day for a prolonged period of time without any other external power supply, the battery also needs to be charged. For this reason, a 6 V/3.5 W solar panel was also installed, which charges the battery through a TP4056 lithium battery charging module. Furthermore, in order to monitor the voltage level of the battery, it is also connected to GPIO32 of the micro-controller, which tracks its output voltage during each measurement. It is worth mentioning that two resistors were used as a voltage divider for the battery, because each analog pin of the micro-controller can only read analog values between 0 and 4095. A detailed description of the aforementioned connections is presented in Fig. 1.

In order to keep the system's power consumption to a minimum, both the ESP32 micro-controller and all the implemented sensors have to be deactivated between measurements. For this purpose, the deep sleep mode of the ESP32 micro-controller was used, which reduces the power consumption of the system to a minimum by cutting out the most power-hungry activities. To deactivate all the sensors of the system, two techniques were applied. The sensors that require a 3.3 V input voltage (i.e., DHT22 and BMP280) were connected to GPIO27 of the ESP32 micro-controller for power supply, rather than to the usual 3V3 pin. For the 5 V input sensors and the anemometer, a two-channel relay module was placed between the sensors and the ESP32 micro-controller. This way, while the ESP32 is in sleep mode, the relay module uses the control signal from the micro-controller to power down the sensors until the next measurement.

Finally, the proposed everWeather station is not only self-powered, but is also proposed as a solution for remote area monitoring. In order to allow the weather station remote connectivity, two LoRa SX1278 modules were implemented, one for the sender and one for the receiver circuit. Two fully detailed diagrams of the connections of the LoRa modules in the sender and receiver circuits are presented in Figs. 1 and 2, respectively. After establishing communication between the two circuits, the sensed values from the outdoor station are sent in real time to the indoor receiver station. This way, the outdoor station could be placed at a distance ranging from a few meters up to 5 km in an urban area, approximately 15 km in a suburban area and almost 45 km in a rural area, under perfect conditions.
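To make the battery-monitoring step concrete, the following MicroPython sketch shows how such an ADC reading could be taken on GPIO32. It is a minimal illustration, not the everWeather firmware: the divider ratio of 2 and the 11 dB attenuation setting are assumptions, since the actual resistor values are not given in the paper.

    # Minimal MicroPython sketch for reading the battery voltage on GPIO32.
    # Assumptions: a 1:2 resistor divider and 11 dB ADC attenuation
    # (approx. 3.6 V full scale); the real everWeather values may differ.
    from machine import ADC, Pin

    DIVIDER_RATIO = 2.0   # assumed: the divider halves the battery voltage
    FULL_SCALE_V = 3.6    # approximate full-scale voltage at 11 dB attenuation

    adc = ADC(Pin(32))          # GPIO32, as described in the text
    adc.atten(ADC.ATTN_11DB)    # widen the measurable input range

    raw = adc.read()            # 12-bit reading between 0 and 4095
    battery_voltage = raw / 4095 * FULL_SCALE_V * DIVIDER_RATIO
    print("battery voltage: %.2f V" % battery_voltage)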

Fig. 2 System architecture of the indoor module (receiver circuit) of the everWeather station



3.2 Forecasting Algorithm

One of the core components of the everWeather system is the software module implementing the short-term weather forecasting algorithm. Considering (a) the resource constraints of the proposed system; (b) the findings of a recent systematic literature review on AIoT systems in the environmental context [21], supporting the appropriateness of MLR models in weather forecasting; and (c) the most related works in the field by Fowdur et al. [8] and Karvelis et al. [15], which utilize MLR models for short-term weather forecasting, a lightweight MLR model is also utilized in the proposed everWeather station as the main forecasting algorithm. The details of the forecasting procedure are analyzed in the following paragraphs.

MLR models are simple ML models that have been used in various regression problems in the literature over the last few decades. Although MLR models are not considered particularly efficient regarding their prediction accuracy in comparison with more sophisticated ML models, they remain extremely popular with the research community due to their simplicity, explainability and low variability [14]. The main goal of an MLR model is to predict the value of a target variable Y from the predictors X by finding linear relations between them. The general form of an MLR model is shown below; the respective coefficients (β0 to βn), which in the everWeather case are computed at the training phase due to resource constraints, can be obtained with the least squares estimation method.

Y = β0 + β1·x1 + … + βn·xn

Weather forecasting is a typical time-series problem, since the target is to predict a weather variable, such as temperature or humidity, for the next time instance. In the proposed system, each of the weather variables (namely air temperature, relative humidity, luminosity, wind speed, rainfall and atmospheric pressure) is predicted from all the weather and air quality (CO, CO2 and PM2.5) variable values at the previous time instance, plus a time variable. The inclusion of the daily seasonality in everWeather constitutes a differentiation of the suggested system compared to the works of Fowdur et al. [8] and Karvelis et al. [15]. Moreover, as mentioned in the introduction (see Sect. 1), the purpose of the everWeather system is the monitoring and short-term forecasting of weather parameters; this short period for weather monitoring and forecasting has been selected to be 30 minutes, following a similar consideration to the respective literature [8, 15]. According to all the aforementioned, the weather forecasting equation for temperature (similar equations exist for the other five weather variables) is shown below, where the difference between two consecutive instances t and t−1 is 30 minutes.



Tmp_t = β0 + β1·Tmp_(t−1) + β2·Hmd_(t−1) + β3·Lmn_(t−1) + β4·Wnd_(t−1) + β5·Rn_(t−1) + β6·Prs_(t−1) + β7·CO_(t−1) + β8·CO2_(t−1) + β9·PM2.5_(t−1) + β10·time
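Because the coefficients are fitted offline and only the linear combination is evaluated on the device, the forecast reduces to a weighted sum of the previous measurement vector plus the seasonality term. The Python sketch below illustrates this; the coefficient values and field names are placeholders, not the trained everWeather model.

    # Evaluating one MLR forecast step (illustrative coefficients only; in
    # everWeather the coefficients are fitted offline with least squares).
    COEFFS = {"intercept": 1.2, "tmp": 0.9, "hmd": -0.01, "lmn": 0.0003,
              "wnd": 0.05, "rn": -0.1, "prs": 0.002, "co": 0.01,
              "co2": 0.001, "pm25": 0.02, "time": 0.05}

    def forecast_temperature(previous, time_of_day):
        """Forecast the temperature 30 minutes ahead from the previous
        measurement vector plus the daily-seasonality feature."""
        y = COEFFS["intercept"] + COEFFS["time"] * time_of_day
        for name in ("tmp", "hmd", "lmn", "wnd", "rn", "prs",
                     "co", "co2", "pm25"):
            y += COEFFS[name] * previous[name]
        return y

    sample = {"tmp": 21.0, "hmd": 45.0, "lmn": 1800, "wnd": 0.0, "rn": 0,
              "prs": 1006, "co": 0.4, "co2": 410, "pm25": 9.0}
    print(forecast_temperature(sample, time_of_day=10.5))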

3.3 System Configuration

Both of the ESP32 micro-controllers of the two circuits (sender and receiver) can be configured with the C programming language. These configurations are related to the following steps:

– Sender circuit:
  • Read weather and air pollution data from each sensor
  • Read the battery level from the battery
  • Establish a LoRa connection only with the receiver circuit
  • Send all collected values to the receiver circuit
  • Go to deep sleep for 30 minutes after each measurement

– Receiver circuit:
  • Establish a Wi-Fi connection with the local network
  • Parse the received data from the sender circuit
  • Compute weather forecasts from the MLR models
  • Post the parsed data and forecasts to the API
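While the paper configures the boards in C, the ESP32 also supports MicroPython (see Sect. 3.1), in which the sender cycle above can be sketched very compactly. The following is a hedged illustration only: read_sensors(), read_battery() and lora_send() are placeholder stubs standing in for the actual sensor and SX1278 driver code, which is not given in the paper.

    # Minimal MicroPython sketch of the sender cycle (stubs, not the
    # actual everWeather firmware).
    import machine

    def read_sensors():
        # placeholder: real code would query the DHT22, BMP280, MQ-7, ...
        return {"tmp": 21.0, "hmd": 45.0}

    def read_battery():
        return 3.98  # placeholder: see the ADC sketch in Sect. 3.1

    def lora_send(payload):
        print("sending", payload)  # placeholder for the SX1278 driver call

    payload = read_sensors()
    payload["battery"] = read_battery()
    lora_send(payload)
    machine.deepsleep(30 * 60 * 1000)  # deep sleep for 30 minutes (ms)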

3.4 API for Data Monitoring and Storing

As mentioned above, the parsed data and forecasts from the receiver circuit are wirelessly transmitted through a Wi-Fi connection to the developed API in real time, for monitoring purposes but also for storage and possible further data processing. For each record, the receiver circuit transmits a JavaScript Object Notation (JSON) object that contains the ID of the measurement ("id"), the ID of the node that collected the measurement ("node") and the timestamp of the measurement ("dt"), in addition to the sensed values and the forecasts of the weather variables for the next half hour. Finally, it is worth mentioning that, since the everWeather forecasting station is self-powered, exploiting solar energy, information related to the battery is of crucial importance. For this reason, the transmitted JSON object also includes the "battery_voltage" field, which denotes the remaining battery voltage, and the "battery_level" field, which denotes the charging percentage of the battery.
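To illustrate the record shape just described, a hypothetical payload is sketched below as a Python dictionary. Only the field names "id", "node", "dt", "battery_voltage" and "battery_level" come from the text; all values and the names of the sensed/forecast fields are assumptions.

    # Illustrative shape of one transmitted record (values are made up).
    record = {
        "id": 1024,
        "node": 1,
        "dt": "2023-04-27 10:30:00",
        "temperature": 21.4,           # sensed values ...
        "humidity": 43.0,
        "temperature_forecast": 22.1,  # ... and 30-minute forecasts
        "humidity_forecast": 41.5,
        "battery_voltage": 3.98,
        "battery_level": 87,
    }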


4 Experimentation

In this section, the preliminary experiments performed to configure and validate the everWeather forecasting station are analyzed. The section begins with the system deployment, describing all the steps needed for real-time operation of the system. Next comes the data analysis, while the final subsection presents the weather forecasting results.

4.1 Deployment

In this subsection, a real-life scenario for the everWeather station is presented along with the testbed description. More specifically, various tests were conducted in both outdoor and indoor environments in order to emulate a real-world scenario to a fair extent and train the forecasting algorithm accordingly. Additionally, extensive experimentation was performed on powering the weather forecasting station using sunlight. In Fig. 3a and b, the sender component of the everWeather station is depicted in the two outdoor environments in which it was placed, while in Fig. 3c the receiver component is presented. Both circuits were installed in a controlled environment in Thessaloniki, Greece, during the month of April for 5 consecutive days.

For the outdoor component of everWeather, which is actually the main weather station, two different deployment scenarios were taken into consideration. In the first scenario, as depicted in Fig. 3a, the weather station was deployed in an outdoor open space. Here, the sensed values from the outdoor module were representative of the corresponding weather conditions in the area. However, this scenario was rejected, as the weather station was not protected well enough to be left for a prolonged period of time. For this reason, a second scenario was adopted, depicted in Fig. 3b, where the outdoor station was placed in a protected environment for long-term deployment. In spite of the better location, in this scenario the wind speed data were not as accurate as those collected in the previous location, because the device's position relative to the building compromised the representativeness of the wind speed measurements. For this reason, these data were not taken into consideration in the training of the forecasting algorithm. Similarly, it should be noted that within that period no rainfall data were collected; therefore, no training was performed for rain forecasting. A sample of the transmitted data to the API can be seen in Fig. 4. The "dry" variable is the digital output of the YL-83 sensor, which indicates whether it is raining or not by returning a Boolean value of 0 or 1, respectively.

Finally, it was also found that the selected antennas placed in both the sender and receiver circuits for the LoRa transmissions were not adequate to achieve very long distances, while the placement of the outdoor station's antenna inside the box was also unsuitable, as the box obstructed the line of sight that needed to be established between the sender and the receiver.


Fig. 3 everWeather system deployment in a real-world outdoor scenario: (a) outdoor module (sender circuit) in the first scenario, in an open space, (b) outdoor module (sender circuit) in the second scenario, in a protected area, and (c) indoor module (receiver circuit)

Fig. 4 Sample of transmitted data to the API by the receiver circuit of the everWeather station

In addition, as mentioned in a previous section, the building density also plays a pivotal role in choosing the appropriate distance between the two circuits. As a result, further experimentation needs to be conducted in the future. Therefore, in these preliminary experiments, the two components (sender and receiver) were deployed at a distance of approximately 20 m.


4.2 Data Analysis

In order to investigate possible relations between the weather and air quality variables, a correlation analysis was conducted. Figure 5 displays the corresponding scatterplot of the variables sensed by the everWeather forecasting station during 5 consecutive days. Several useful insights can be drawn from this graph. First of all, humidity and temperature demonstrate a strong negative correlation, indicating an opposite flow direction between sensible and latent heat fluxes [17]. As expected, luminosity and temperature show a strong positive correlation, since during the day the air temperature is higher than at night. Furthermore, battery voltage, which is higher during the day due to solar energy, has an anticipated strong positive correlation with temperature and luminosity. This correlation also implies that temperature and luminosity can potentially be used to forecast the battery levels of the weather station. This opens up the potential to create, in the future, a smart battery management system like the one demonstrated in [30], but fully autonomous in the everWeather case, since the forecasts are produced by the system itself.

Regarding the air quality variables, prominent correlations can be observed between the primary air pollutants measured (CO and PM2.5) and humidity. Such relationships have also been observed in studies such as [32]. In the latter case, this correlation might be attributed to hygroscopic growth, which has been observed in studies like [4, 11], where a positive correlation between humidity and PM is observed while humidity is relatively low, but the behavior changes beyond a specific value. Moreover, atmospheric pressure shows a strong negative correlation with CO2, which is not a common behavior. This relationship might be attributed to the dense vegetation in the area where the sensor was placed, leading to low CO2 concentrations. On the other hand, the strong negative correlations between temperature and PM2.5, which are also common, are not observed in the present experiments.

It is worth mentioning that the position of the sensors is of utmost importance for any related system; for this reason, different positions were tested for the everWeather station, as mentioned in Sect. 4.1. Therefore, in these initial experiments, some values are not fully accurate. Indeed, apart from the wind values, which are almost always equal to 0 due to the protected position of the system, the air temperature and humidity might also be affected, since the solar panel requires a sunny position. Finally, PM2.5 patterns might be more easily observed in city centers, where polluting sources are more common.

The aforementioned data analysis highlights that the values of the everWeather forecasting station are reasonable, and most of the expected relations are also found in the respective correlation analysis. However, some other correlations are missing, revealing possible inaccuracies in the measurements. Hence, more research should be done in the future to find the optimal position of the everWeather station, which inevitably affects the quality of the sensed values.
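The correlation analysis behind Fig. 5 can be reproduced with a few lines of Python; the sketch below uses pandas on a toy series, not the actual 5-day everWeather dataset (which, per Sect. 4.3, is available upon request).

    # Pairwise Pearson correlations between sensed variables (toy data).
    import pandas as pd

    df = pd.DataFrame({
        "temperature": [15.0, 18.0, 22.0, 20.0, 16.0],
        "humidity":    [70.0, 60.0, 45.0, 50.0, 65.0],
        "luminosity":  [100, 900, 2400, 1500, 300],
    })
    print(df.corr())  # e.g. temperature vs humidity should be negative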


Fig. 5 Scatter plot of the features measured by the everWeather station


4.3 Forecasting Results

The results of the weather forecasting are presented in this subsection. More specifically, temperature, humidity, luminosity and atmospheric pressure were forecast by the MLR model. The dataset used for the validation (available upon request) contains 232 records, representing measurements at 30-minute intervals. For the training of the MLR model, 193 measurements were used (essentially, four full days), while the remaining 39 records were utilized for the validation, a common split between training and testing sets. According to the model presented in Sect. 3.2, each of the 39 values of the testing set for the four weather variables is predicted by the corresponding MLR model, built on the 193 records of the training set. As already noted, the estimations are based only on the previous time instance.

For the MLR model validation, well-established evaluation metrics were used: MAE, Median Absolute Error (MdAE), RMSE, and the respective normalized versions of these metrics [2, 27]. Here, for the normalized metrics, the difference between the maximum and minimum value is utilized as a normalization factor, in contrast to the work of Aivatoglou et al. [2] and Spanos and Angelis [27], in which the mean value was utilized.

The results of the weather forecasting MLR model for temperature, humidity, luminosity and atmospheric pressure are displayed in Table 2. These results highlight that the everWeather system is capable of achieving quite satisfactory forecasts, since the normalized MAE for temperature and luminosity is below 10% (4.9% and 8.4%, respectively) and around 30% for humidity and atmospheric pressure (32.7% and 33.4%, respectively). The other two normalized evaluation metrics present a similar picture (a slightly lower value for the normalized MdAE and a slightly higher value for the stricter normalized RMSE, for all the weather indices). Finally, from Fig. 6 it is evident that the proposed simple and lightweight MLR algorithm succeeded in capturing the patterns present in all the weather variables, almost perfectly in the cases of temperature and luminosity and fairly well for humidity and atmospheric pressure. However, due to the limited data sample, the proposed algorithm shows some weakness in immediately capturing radical changes in values, such as in the luminosity, and changes in level, such as in the atmospheric pressure.

Table 2 Weather forecasting results

                         MAE     MdAE    RMSE    Nor. MAE  Nor. MdAE  Nor. RMSE
  Temperature            0.91    0.58    1.35    4.9%      3.1%       7.2%
  Humidity               16.68   16.56   18.11   32.7%     32.5%      35.5%
  Luminosity             252.81  146.16  406.14  8.4%      4.9%       13.6%
  Atmospheric pressure   4.64    4.45    4.74    33.4%     32.2%      34.1%
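The range-normalized metrics in Table 2 divide each absolute-error metric by max(y) − min(y) of the observed series. A minimal Python sketch of this computation follows (a generic illustration, not the authors' script):

    # Absolute-error metrics and their range-normalized versions.
    import numpy as np

    def metrics(y_true, y_pred):
        y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
        err = np.abs(y_true - y_pred)
        mae, mdae = err.mean(), np.median(err)
        rmse = np.sqrt((err ** 2).mean())
        rng = y_true.max() - y_true.min()  # normalization factor
        return {"MAE": mae, "MdAE": mdae, "RMSE": rmse,
                "Nor. MAE": mae / rng, "Nor. MdAE": mdae / rng,
                "Nor. RMSE": rmse / rng}

    print(metrics([20.1, 21.5, 23.0, 19.4], [20.8, 21.0, 22.1, 20.0]))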


Fig. 6 Weather forecasting time-plot. (a) Temperature forecasting. (b) Humidity forecasting. (c) Luminosity forecasting. (d) Atmospheric pressure forecasting

5 Conclusions and Future Work

In the present research work, an AIoT-based short-term weather forecasting station, called everWeather, has been presented. The system combines low-cost environmental sensors and ESP32 micro-controllers, the LoRa communication protocol for data transmission, power autonomy exploiting solar energy, and a lightweight MLR model for weather forecasting.

The preliminary experiments that have been presented highlighted that the everWeather system adequately achieved its purpose, since the corresponding results showed that the system managed to accurately forecast four weather indices, namely air temperature, relative humidity, luminosity and atmospheric pressure. These results highlight the significance of lagged values in weather forecasting, since the results are very encouraging even though only the values of the previous time instance and a simple MLR model are used. Furthermore, the correlation analysis of the dataset revealed common relations between the weather and air quality variables, indicating that the proposed system produces reasonable environmental values. However, some other correlations are missing, revealing possible inaccuracies in the measurements. Hence, more research work should be conducted in the future towards the optimal positioning of everWeather in order to mitigate the related issues.

In the future, apart from the experimentation with respect to the everWeather placement, more research work could be done on the forecasting part, either by keeping the lightweight MLR model but improving it with feature selection


algorithms, among other techniques, or by introducing novel and sophisticated ML algorithms, appropriately adjusted for the ESP32 micro-controller. Moreover, in order to improve the forecasting performance of everWeather, the collection of much more data and the corresponding experiments are of crucial importance. With regard to the communication protocol, more experimentation with various antennas is required in order to fully exploit the potential of the LoRa protocol for long-range data transmission. Toward the aforementioned improvements of the proposed solution, the opinion of the end users (namely, practitioners and customers) will play a prominent role in specifying the requirements. Finally, regarding the monitoring, the API component of the proposed system could become supplementary and serve only storage purposes if an LCD screen were integrated into the receiver circuit, making the everWeather system a complete weather forecasting station with no need for an Internet connection.

Funding This work is supported by the IoTFeds project, co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH—CREATE—INNOVATE (project code: T2EDK02178).

References

1. Abhishek, K., Singh, M., Ghosh, S., Anand, A.: Weather forecasting model using artificial neural network. Procedia Technol. 4, 311–318 (2012)
2. Aivatoglou, G., Anastasiadis, M., Spanos, G., Voulgaridis, A., Votis, K., Tzovaras, D., Angelis, L.: A RAkEL-based methodology to estimate software vulnerability characteristics & score - an application to EU project ECHO. Multimed. Tools Appl. 81(7), 9459–9479 (2022)
3. Balamurugan, M., Manojkumar, R.: Study of short term rain forecasting using machine learning based approach. Wirel. Netw. 27, 5429–5434 (2021)
4. Barmpadimos, I., Hueglin, C., Keller, J., Henne, S., Prévôt, A.S.H.: Influence of meteorology on PM10 trends and variability in Switzerland from 1991 to 2008. Atmos. Chem. Phys. 11(4), 1813–1835 (2011)
5. Barthwal, A., Sharma, K.: Analysis and prediction of urban ambient and surface temperatures using internet of things. Int. J. Syst. Assur. Eng. Manag. 13(Suppl. 1), 516–532 (2022)
6. Delfanti, L., Colantoni, A., Recanatesi, F., Bencardino, M., Sateriano, A., Zambon, I., Salvati, L.: Solar plants, environmental degradation and local socioeconomic contexts: a case study in a Mediterranean country. Environ. Impact Assess. Rev. 61, 88–93 (2016)


7. Farhad, A., Woolley, S., Andras, P.: Federated learning for AI to improve patient care using wearable and IoMT sensors. In: 2021 IEEE 9th International Conference on Healthcare Informatics (ICHI), pp. 434–434. IEEE (2021)
8. Fowdur, T.P., Beeharry, Y., Hurbungs, V., Bassoo, V., Ramnarain-Seetohul, V., Lun, E.C.M.: Performance analysis and implementation of an adaptive real-time weather forecasting system. Internet Things 3, 12–33 (2018)
9. Guney, M.S.: Solar power and application methods. Renew. Sustain. Energy Rev. 57, 776–785 (2016)
10. Han, T., Yang, F., Deng, K.: Application and development prospect of artificial intelligence in healthy pension industry. In: Proceedings of the 2020 Conference on Artificial Intelligence and Healthcare, pp. 79–83 (2020)
11. Hernandez, G., Berry, T.A., Wallis, S., Poyner, D.: Temperature and humidity effects on particulate matter concentrations in a sub-tropical climate during winter. In: International Proceedings of Chemical, Biological and Environmental Engineering, vol. 102, pp. 41–49 (2017)
12. Hidayat, D., Soekirno, S.: Development of temperature monitoring and prediction system for urban heat island (UHI) based on the internet of things. J. Phys. Conf. Ser. 1816, 012054 (2021)
13. Ismail-Zadeh, A.: Natural hazards and climate change are not drivers of disasters. Nat. Hazards 111(2), 2147–2154 (2022)
14. James, G., Witten, D., Hastie, T., Tibshirani, R.: An Introduction to Statistical Learning, vol. 112. Springer, Berlin (2013)
15. Karvelis, P., Mazzei, D., Biviano, M., Stylios, C.: PortWeather: a lightweight onboard solution for real-time weather prediction. Sensors 20(11), 3181 (2020)
16. Lee, M., Hwang, J., Yoe, H.: Agricultural production system based on IoT. In: 2013 IEEE 16th International Conference on Computational Science and Engineering, pp. 833–837. IEEE (2013)
17. Lüdi, A., Beyrich, F., Mätzler, C.: Determination of the turbulent temperature–humidity correlation from scintillometric measurements. Boundary Layer Meteorol. 117, 525–550 (2005)
18. Lytras, M.D., Chui, K.T., Liu, R.W.: Moving towards intelligent transportation via artificial intelligence and internet-of-things. Sensors 20(23), 6945 (2020)
19. Nwachukwu, A.N., Anonye, D.: The effect of atmospheric pressure on CH4 and CO2 emission from a closed landfill site in Manchester, UK. Environ. Monit. Assess. 185(7), 5729–5735 (2013)
20. Parashar, A.: IoT based automated weather report generation and prediction using machine learning. In: 2019 2nd International Conference on Intelligent Communication and Computational Techniques (ICCT), pp. 339–344. IEEE (2019)
21. Polymeni, S., Athanasakis, E., Spanos, G., Votis, K., Tzovaras, D.: IoT-based prediction models in the environmental context: a systematic literature review. Internet Things, 100612 (2022)
22. Polymeni, S., Skoutas, D.N., Kormentzas, G., Skianis, C.: Findeas: a fintech-based approach on designing and assessing IoT systems. IEEE Internet Things J. 9(24), 25196–25206 (2022)
23. Ponce, H., Gutiérrez, S., Montoya, A.: Predicting climate conditions using internet-of-things and artificial hydrocarbon networks. In: 7th IMEKO TC19 Symp. Environ. Instrum. Meas. EnvIMEKO 2017 (2017)
24. Popa, C.L., Dobrescu, T.G., Silvestru, C.I., Firulescu, A.C., Popescu, C.A., Cotet, C.E.: Pollution and weather reports: using machine learning for combating pollution in big cities. Sensors 21(21), 7329 (2021)
25. Seng, K.P., Ang, L.M., Ngharamike, E.: Artificial intelligence internet of things: a new paradigm of distributed sensor networks. Int. J. Distrib. Sens. Netw. 18(3), 15501477211062835 (2022)
26. Sokulski, C.C., Barros, M.V., Salvador, R., Broday, E.E., de Francisco, A.C.: Trends in renewable electricity generation in the G20 countries: an analysis of the 1990–2020 period. Sustainability 14(4), 2084 (2022)


27. Spanos, G., Angelis, L.: A multi-target approach to estimate software vulnerability characteristics and severity scores. J. Syst. Softw. 146, 152–166 (2018)
28. Spanos, G., Giannoutakis, K.M., Votis, K., Tzovaras, D.: Combining statistical and machine learning techniques in IoT anomaly detection for smart homes. In: 2019 IEEE 24th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD), pp. 1–6. IEEE (2019)
29. Spanos, G., Giannoutakis, K.M., Votis, K., Viaño, B., Augusto-Gonzalez, J., Aivatoglou, G., Tzovaras, D.: A lightweight cyber-security defense framework for smart homes. In: 2020 International Conference on INnovations in Intelligent SysTems and Applications (INISTA), pp. 1–7. IEEE (2020)
30. Spertino, F., Ciocia, A., Leo, P.D., Malgaroli, G., Russo, A.: A smart battery management system for photovoltaic plants in households based on raw production forecast. In: Enescu, D. (ed.) Green Energy Advances. IntechOpen, Rijeka (2018)
31. Tanner, B.D.: Automated weather stations. Remote Sens. Rev. 5(1), 73–98 (1990)
32. Yang, Q.: The relationships between PM2.5 and meteorological factors in China: seasonal and regional variations. Int. J. Environ. Res. Public Health 14(12), 1510 (2017)
33. Zhang, J., Tao, D.: Empowering things with intelligence: a survey of the progress, challenges, and opportunities in artificial intelligence of things. IEEE Internet Things J. 8(10), 7789–7817 (2020)

Part III Data-Driven Approaches to Environmental Analysis

News from Europe's Digital Gateway: A Proof of Concept for Mapping Data Centre News Coverage

Anna Bessin, Floris de Jong, Patricia Lago, and Oscar Widerberg

Abstract The Netherlands has been described as Europe's Digital Gateway, owing in part to the disproportionate number of data centres located in the relatively small country. Data centres have become a much-discussed issue in Dutch media, with 11,842 news articles about data centres having been published between 1 January 1990 and 23 January 2023. This study explores this news coverage to identify possible sustainability concerns experienced in society as a result of data centre operation and construction. Identifying such concerns could help in informing discussion and future decisions regarding location, design, and operation of data centres as well as potential measures to mitigate sustainability concerns. This study explores Dutch data centre news coverage by combining manual and automated content analysis to determine commonly discussed themes. The results are subsequently spatialised using GIS software, which was later adapted into a PowerBI tool, allowing for an interactive exploration of the data. Through this exploration, we identify a strong trend towards increased public attention and debate about data centres in the Netherlands, underscored by a significant increase in media coverage since 2020. Most notably, the topic "space" is prominent throughout the entire study period, receiving the highest number of mentions each year and quadrupling in coverage from 2015 to 2021. Furthermore, matters relating to the categories "technology" and "environment" experienced the highest relative growth in the same time period. Overall, our results indicate an increasing importance of data centres in public discourse.

Keywords Data centre · News coverage · Content analysis

A. Bessin · F. de Jong · P. Lago () · O. Widerberg Vrije Universiteit Amsterdam, Amsterdam, the Netherlands e-mail: [email protected]; [email protected]; [email protected]; [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 V. Wohlgemuth et al. (eds.), Advances and New Trends in Environmental Informatics 2023, Progress in IS, https://doi.org/10.1007/978-3-031-46902-2_9


1 Introduction

The Netherlands has been described as a "Digital Gateway to Europe" [6], illustrating the country's important role in European digital infrastructure. Apart from hosting one of the world's largest internet exchange points (AMS-IX) [1], the country also hosts a relatively large number of data centres given its small size: in 2021, the Netherlands ranked sixth in terms of the number of data centres hosted per country [5]. In the past decade, the trend of increasing demand on digital infrastructure has resulted in the construction of several so-called hyperscale data centres in the Netherlands. Examples of such facilities include Microsoft's AMS06 facility in Middenmeer and Google's data centre in Eemshaven. These data centres are the largest and most energy-intensive of their kind, taking up more than five hectares of land and consuming over 50 MW of electricity, according to the definition of "hyperscale" set by the Dutch Data Centre Association [7].

While data centres are becoming increasingly important, some scholars have raised concerns about the sustainability challenges which arise from increasing societal reliance on data centres and the associated digital infrastructure [13]. Examples of such concerns include increasing energy consumption [14] and associated greenhouse gas emissions [9]. Other scholars have identified so-called dark patterns, i.e. deceptive and unsustainable outcomes for end-users in the ongoing move towards cloud computing, and have warned of its tendency to prioritise short-term economic imperatives over long-term sustainability [20]. Responding to such concerns, some literature proposes opportunities to enable greater data centre sustainability. This includes studies proposing environmental impact assessment methods [3, 11, 12], as well as methods to reduce energy consumption through hardware or software optimisation [4, 18].

However, the current body of scientific literature on data centre sustainability appears to focus largely on energy efficiency. As such, other possible sustainability challenges resulting from data centre operation or construction remain only sparsely researched. Toczé and colleagues find that sustainability aspects of data centres are overlooked in scientific research, with data centre operation often prioritising economic gain over long-lasting sustainability [20].

This study serves to further investigate which other sustainability challenges might arise from increasing reliance and demand on data centres. Therefore, the study aims to explore the topics discussed around data centres in the Netherlands to help determine which perceived challenges are discussed in society. Doing so could help identify gaps in the current body of scientific literature and aid decision-makers in effective planning. To this end, this study aimed to answer the following research question:

RQ1: What themes are discussed in print media news coverage on data centres in the Netherlands?


In order to gain further insight into how and where data centres are covered by news media, this study explores spatial differences in the occurrence of the identified themes, posing the research question:

RQ2: What spatial differences can be observed relating to the occurrence of identified themes?

Moreover, to investigate how the categories are related to each other, this study applies a co-occurrence network analysis to identify:

RQ3: Which categories have a tendency to co-occur and to what degree?

2 Public Debates Towards Data Centres

Due to the increasing reliance on digital technologies and the growing amounts of generated and processed data, data centres are likely to grow in importance in the future. However, in several instances, the construction of data centres has been met with criticism from local communities. One example of such criticism was the case of Zeewolde, the Netherlands, where Meta announced plans to build a new hyperscale data centre [16]. This data centre was expected to be Europe's largest at the time of completion, potentially taking up 166 hectares of space; in addition, the energy usage of the facility was expected to total 1380 GWh per year. News about the proposed construction of this data centre sparked intense media coverage of Meta's plans, which led to the matter being discussed in parliament [19]. In addition, activists expressed their concerns over the proposed data centre, with the activist group Extinction Rebellion briefly occupying the local city hall to voice their opposition to Meta's plans in Zeewolde [15].

Though there was clearly a societal debate about whether the construction of a hyperscale data centre in Zeewolde was desirable, little attention has been paid in scientific literature to such cases of societal contestation around the planning and construction of new data centres. As such, there is reason to believe that concerns about data centres voiced in civil society are being inadequately assessed in the scientific community.

3 Research Methodology

To answer the research questions, we follow a four-step approach consisting of partly automated content analysis, which includes both qualitative and quantitative methods. As a first step, the newspaper articles are retrieved. Subsequently, the content analysis is initiated by manually coding a set of the articles, followed by validation iterations, in order to uncover the thematic composition of all the articles. Upon validation of the final categories and keywords, they are applied to quantify their occurrence within the articles, and we retrieve the article metadata as the third step. In the fourth and last step, the results are visualised in an interactive tool that allows for data exploration. This tool includes the mapping of the news articles to their respective municipalities, which provides spatially differentiated insights.


Table 1 The search settings used to retrieve the newspaper corpus

  Setting                Input
  Search term            "datacenter" OR "data center" OR "data centrum" OR "datacentre" OR "data centre"
  Time period            1 January 1990 through 20 January 2023
  Language               Dutch
  Place of publication   Kingdom of the Netherlands


3.1 Data Collection

In order to identify the sustainability challenges in newspaper coverage of data centres, newspaper articles were collected from NexisUni¹ on 20 January 2023, using the search settings described in Table 1. This yielded a total corpus of 11,482 news articles published between 1 January 1990 and 20 January 2023 containing the term "datacenter" or a synonym thereof.

3.2 Article Content Analysis

First, we applied manual inductive content analysis as a means to identify recurring categories and themes within the newspaper articles [2, 10]. We identified initial themes (also referred to as categories) and applied open coding to a subset of 35 articles, which involved manually analysing the articles to identify common categories [8]. The next step in this process served to verify the categories and identify new ones. For this purpose, a subset of 500 articles was selected. Using a proprietary Python script, an automated frequency analysis was performed to extract all individual words which occurred at least 50 times in this subset; this step also served to validate the established categories. Subsequently, the set of categories was used to identify associated keywords. Keywords were identified by manually revising and categorising words which occurred over ten times in the corpus of 500 articles. In addition, the identified keywords were also used to identify new categories. These steps resulted in the establishment of an initial set of categories and associated keywords.

1 https://internationalsales.lexisnexis.com/products/nexis-uni.


This set of categories then served as the basis of a deductive coding technique to identify associated keywords [8]. Another round of frequency analysis was applied: we retrieved all sentences in which a category keyword occurred and then manually evaluated the counted re-occurring words² to decide whether they constituted a keyword for the category. This process yielded an initial set of categories and associated keywords. Once these categories and keywords were defined, the occurrence of each individual keyword was determined for every article in the full data set of 11,842 articles. If an article was found to contain zero keywords, thus not relating to any of the identified categories, it was re-evaluated by performing an additional round of code extraction to extract new codes and keywords. This reiteration was performed until no additional categories could be defined. At this stage, the extraction of categories and associated keywords was considered complete.
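The keyword-counting step can be sketched as follows. This is a generic illustration, not the authors' proprietary script; the two category keyword lists are abbreviated examples taken from Table 2.

    # Count category keyword occurrences in one article (illustrative).
    import re
    from collections import Counter

    CATEGORIES = {
        "energy": ["energieverbruik", "energietransitie"],
        "water": ["drinkwater", "drinkwaterverbruik"],
    }

    def keyword_counts(text):
        words = Counter(re.findall(r"\w+", text.lower()))
        return {cat: sum(words[kw] for kw in kws)
                for cat, kws in CATEGORIES.items()}

    article = "Het datacenter verhoogt het energieverbruik en het drinkwaterverbruik."
    print(keyword_counts(article))  # {'energy': 1, 'water': 1}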

3.3 Data Extraction

Beyond keyword counting, the Python script written for this study extracted metadata from the news articles retrieved from NexisUni. This metadata included information such as the publication date, article title, and source of the article. These details were extracted from the text files and subsequently added to the data set containing all news articles and their keyword frequency counts. To map the news articles to specific municipalities in the next step, the Python script identified the names of the Dutch municipalities mentioned in each article. The municipality that is referenced most frequently is considered the municipality that the article is associated with.
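The assignment rule just described (the most frequently referenced municipality wins) can be sketched in a few lines. The municipality list below is truncated for illustration; the actual script would use the full list of Dutch municipality names.

    # Assign an article to its most frequently mentioned municipality.
    MUNICIPALITIES = ["Amsterdam", "Zeewolde", "Groningen", "Hollands Kroon"]

    def assign_municipality(text):
        counts = {name: text.count(name) for name in MUNICIPALITIES}
        best = max(counts, key=counts.get)
        return best if counts[best] > 0 else None  # None: no mention found

    print(assign_municipality("De gemeente Zeewolde stemde in; Zeewolde ..."))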

3.4 Mapping of News Articles

The final data resulting from the aforementioned steps is visualised using PowerBI, in order to allow for an interactive exploration of the data. The tool includes a map that displays the distribution of articles by category across the Netherlands, allowing for thematic hot-spot identification across time and space.

2 A word was considered "re-occurring" if it appeared at least ten times in the body of extracted sentences.


Table 2 Identified categories and a selection of associated keywords

  Category     # articles  % articles  Selection of keywords                                      Translated keywords
  Spatial      10,263      86.7%       Bedrijventerrein, bestemmingsplan, bouw...                 Business park, zoning plan, construction...
  Energy       5,309       44.8%       Energieslurpers, energietransitie, energieverbruik...      Energy guzzler, energy transition, energy usage...
  Technical    4,832       40.8%       server, cyberaanval, clouddiensten...                      server, cyber attack, cloud services...
  Economy      4,521       38.2%       banen, arbeidsplaatsen, belasting...                       jobs, vacancies, taxes...
  Political    4,076       34.4%       beleid, beslissingen, besluit...                           policy, decisions, decision...
  Environment  2,237       18.9%       Milieuaspecten, milieueffecten, milieueffectrapportage...  Environmental aspects, environmental effects, environmental impact report...
  Heat         1,333       11.3%       glastuinbouw, restwarmte, verwarmd...                      Greenhouse [agriculture], residual heat, heated...
  Water        677         5.7%        drinkwater, drinkwaterverbruik, drinkwatervoorziening...   drinking water, drinking water usage, drinking water provisioning...

4 Results

4.1 Identified Categories

Table 2 displays the identified categories as well as a selection of associated keywords. It also indicates the number of articles that mention at least one keyword from each category, and the proportion of these articles relative to the total subset.

4.2 Spatial Differences and Category Co-occurrence

The findings from the analysis were visualised using PowerBI, resulting in the development of an interactive dashboard (Fig. 1).³ A map highlights all discussed municipalities, whereby the circle size reflects the number of articles published. Various data elements are present, including the 'Count of Articles by Category', the 'Share of Articles by Category' and the 'Articles Over Time by Category'. A slicer allows the user to select all categories, combinations thereof, or just one category, which is then reflected in the analysis options for the whole dashboard.

3 Accessible via https://digitalsustainabilitycenter.nl.


Fig. 1 The initial view of the tool

Though many different patterns can be identified and explored in this tool, this section highlights a selection of findings. The results are divided into municipal findings and categorical findings, which are further evaluated in the discussion.

4.2.1 Municipal Findings

Articles about data centres are spread throughout the country, with a total of 235 municipalities being associated with at least one article. However, there are clear 'hot-spots' of data centre news coverage. The four most commonly mentioned municipalities are listed in Table 3; in total, these four municipalities represent 31.9% of data centre news coverage in the Netherlands. Table 4 ranks the most discussed municipalities by category. Again, Amsterdam, Zeewolde, Groningen and Hollands Kroon rank high for each category; additionally, Haarlem, Eindhoven and Boxtel appear among the highest-ranking municipalities.

Table 3 Four most common municipalities

  Municipality     # associated articles  % of articles
  Amsterdam        1256                   10.6%
  Zeewolde         992                    8.4%
  Groningen        947                    8.2%
  Hollands Kroon   552                    4.7%


Table 4 Most discussed municipalities by category

  Rank  Water            Economy          Heat             Energy
  1     Zeewolde         Groningen        Zeewolde         Amsterdam
  2     Hollands Kroon   Amsterdam        Amsterdam        Zeewolde
  3     Amsterdam        Zeewolde         Hollands Kroon   Groningen
  4     Groningen        Hollands Kroon   Haarlem          Hollands Kroon
  5     Boxtel           Eindhoven        Groningen        Haarlem

  Rank  Spatial          Environment      Political        Technical
  1     Amsterdam        Zeewolde         Zeewolde         Amsterdam
  2     Zeewolde         Amsterdam        Amsterdam        Zeewolde
  3     Groningen        Groningen        Groningen        Groningen
  4     Hollands Kroon   Hollands Kroon   Hollands Kroon   Hollands Kroon
  5     Eindhoven        Eindhoven        Haarlem          Eindhoven

Fig. 2 Relative variance in category occurrence across municipalities

The variance is computed and visualised in Fig. 2 to investigate differences in the pattern of category occurrence among municipalities. Space has the highest variance (51.1%), indicating large differences in how frequently this category is discussed across municipalities: it is reported heavily in some municipalities and far less in others. Conversely, categories like water (0.7%) and environment (2.9%) have a far lower variance and are thus discussed much more consistently across all municipalities in the Netherlands.
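The relative-variance computation behind Fig. 2 can be sketched as follows; this is a minimal illustration with made-up column names and data, not the study's actual code.

```python
# From a boolean article-by-category table with a municipality column,
# compute each category's share of articles per municipality, then the
# variance of that share across municipalities, normalised to 100%.
import pandas as pd

articles = pd.DataFrame({
    "municipality": ["Amsterdam", "Amsterdam", "Zeewolde", "Groningen"],
    "spatial": [True, False, True, True],
    "energy":  [True, True, False, True],
    "water":   [False, False, True, False],
})

shares = articles.groupby("municipality").mean()   # share per municipality
relative_var = 100 * shares.var() / shares.var().sum()
print(relative_var.sort_values(ascending=False))
```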

4.2.2 Categorical Findings

Overall, with a share of nearly 30%, the spatial category emerges as the most frequently discussed category, followed by energy (17%), technical (14%), economy (13%) and politics (13%). The three lowest-ranking, but emerging, topics are environment (6%), heat (4%) and water (2%). The co-occurrence analysis explores the relationship between different categories based on how frequently they are mentioned together across the data set. This analysis answers RQ3; the result is visualised in Fig. 3. The categories are visually represented as circles, interconnected by lines, and the numbers displayed on the lines indicate the frequency of co-occurrence between the categories. The strongest co-occurrence is between space and energy, with a frequency count of 4131 articles; the weakest relationship is between water and heat, with 181 articles. Generally, the category space shows strong co-occurrence with multiple categories, including economy, energy and politics, suggesting potential relationships between spatial issues and these categories. Lesser degrees of association are found for categories like water, heat, environment and technical.

Fig. 3 Network indicating the co-occurrence of the identified categories
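The co-occurrence counts behind a network like Fig. 3 reduce to a single matrix product; a minimal sketch with made-up data follows (illustrative only, not the study's code).

```python
# With a boolean article-by-category matrix X, the pairwise co-occurrence
# counts are X^T X: the diagonal holds per-category totals, off-diagonal
# entries the number of articles mentioning both categories.
import pandas as pd

X = pd.DataFrame({
    "spatial": [True, True, False],
    "energy":  [True, False, True],
    "water":   [False, True, False],
}).astype(int)

cooc = X.T @ X
for i, a in enumerate(cooc.columns):
    for b in cooc.columns[i + 1:]:
        print(f"{a} - {b}: {cooc.loc[a, b]} articles")
```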


5 Discussion

The study and the resulting tool can support applications such as identifying geographical variations, changes in media attention over time, and potentially identifying correlations between news coverage and specific events or developments in municipalities. The insights presented in the results section illustrate how the tool can be used to compare data across different locations. Note that the PowerBI tool contains additional findings and details that are not explicitly mentioned in this paper, so there is a wealth of additional information available for further exploration and analysis. It can be used by researchers, policymakers, and other stakeholders to enhance their understanding of data centres and to inform decision-making related to data centre sustainability and the public debates surrounding them.

We identified eight thematic categories discussed in Dutch data centre news coverage. These categories can serve as a means to identify possible future directions for research into data centre sustainability, informed by the public discourse. Further defining these research directions requires comparing the findings of this study with a literature review of studies investigating data centre sustainability. At the time of writing, such a study has not yet been published but is currently being undertaken. Initial findings suggest that most scientific literature on data centres focuses on energy (efficiency). In contrast, the outcomes of this study highlight the prominence of spatial issues in the Dutch context, suggesting that they are discussed more frequently than other themes. This sheds light on a research gap pertaining to these issues, warranting further exploration of the impact of data centres on spatial planning. Generally speaking, all identified categories except energy appear to be under-appreciated in scientific literature, and thus might provide a direction for future research.

An initial exploration of the tool suggests that the number of news articles per municipality follows a long-tail distribution, i.e. a minority of municipalities are associated with the majority of newspaper articles. There are also significant differences between municipalities regarding the occurrence of themes, as the variance analysis shows. Certain categories receive more attention in specific municipalities, reflecting the unique circumstances and dynamics of each location. This can provide valuable insights for future planning: highly discussed themes should receive greater attention from policymakers, urban planners, data centre operators and other stakeholders, so that they can tailor their strategies accordingly. By accounting for these differences, the tool can guide sustainable data centre development that aligns with the interests and priorities of the public domain.

Below, the four highest-ranking municipalities are briefly discussed, in order to compare the differences between them and to provide additional insight into why they were among the most commonly mentioned.


5.1 Amsterdam

Amsterdam hosts AMS-IX, one of the world's largest internet exchange points, which has made it a desirable location to operate a data centre [1]. News articles associated with Amsterdam most often focused on spatial matters, followed by energy considerations, and finally technology. This is in line with the general trend observed over the entire data set.

5.2 Zeewolde

The second most commonly identified municipality was Zeewolde, with 992 (8.4%) of all articles associated with this location. Notably, no data centre is located in Zeewolde as of May 2023. These findings indicate that the news coverage pertaining to data centres in Zeewolde predominantly revolved around Meta's proposal to construct a hyper-scale data centre. Despite the absence of any operational data centres in the municipality, this single unbuilt facility accounted for 8.4% of the overall news coverage on data centres in the Netherlands since 1990. In addition to spatial matters, political topics were frequently discussed in news articles related to Zeewolde. The political category emerged as the second-most discussed topic within the municipality, whereas politics in general ranked as the fifth-most discussed category across the entire dataset. As Table 4 shows, Zeewolde is the municipality in which the political category is discussed the most.

5.3 Groningen

Groningen is both the name of a Dutch municipality and of the province it is located in, which might lead to articles that reference the province rather than the municipality specifically. Therefore, the analysis results for this municipality should be interpreted as representative of the entire province of Groningen. In 2022, the Dutch government designated the municipality of Het Hogeland, in the province of Groningen, as one of two locations where hyperscale data centres could still be built [17]. This could, in part, explain why Groningen is the third most commonly found municipality. In Groningen, the economy category is discussed relatively often compared to the national average, being the third most-discussed category instead of the fourth.


5.4 Hollands Kroon

The municipality of Hollands Kroon hosts Agriport, a business park housing multiple hyper-scale data centres.4 In addition, this was the second location that the Dutch government designated as a possible site for an additional hyper-scale data centre. These two factors might have contributed to the disproportionate amount of news coverage associated with this municipality relative to its population size. Similar to Zeewolde, politics was the second-most discussed category in this municipality, which differs from the national trend. Hollands Kroon also ranks high on the water category, as can be seen in Table 4.

6 Limitations

Beyond answering the research questions posed, this study also serves as a proof of concept for exploring news media discussions related to data centres. It aims to provide insights into where data centre-related topics are discussed, the intensity (i.e. quantity of publications) of these discussions, and the time frame in which they have occurred. By analysing and mapping these aspects, this study aims to shed light on the presence and evolution of data centre discussions, offering a foundation for further investigation and understanding of this topic. The main limitations fall into three categories: (i) the original data used, (ii) the assumptions made when assigning a news article to a certain municipality, and (iii) the reproducibility, scalability, and generalisability of this study.

Firstly, the original data was extracted from NexisUni, which only allows newspaper articles to be exported in .pdf, .rtf, or .docx formats. This caused numerous issues in Python-assisted data analysis, such as inconsistencies in formatting and the absence of metadata. The decision was ultimately made to use .docx files, due to the authors' familiarity with the python-docx package.5 In addition, newspapers could only be stored in separate .docx files containing at most one hundred articles, causing further issues when attempting to analyse the entire data set. For these reasons, the authors of this study advise against using NexisUni for the collection of news articles intended to be analysed automatically. Possible alternatives to NexisUni could be ProQuest6 or Access World News.7
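A hedged sketch of this ingestion step follows; the folder layout and helper names are our own assumptions, since the authors' parsing code is not published in this paper.

```python
# Read NexisUni .docx exports (at most one hundred articles per file)
# with python-docx and collect their non-empty paragraphs.
from pathlib import Path
from docx import Document

def read_export(path: Path) -> list[str]:
    """Return the non-empty paragraphs of one .docx export."""
    doc = Document(str(path))
    return [p.text.strip() for p in doc.paragraphs if p.text.strip()]

paragraphs: list[str] = []
for f in sorted(Path("exports").glob("*.docx")):   # hypothetical folder
    paragraphs.extend(read_export(f))
print(f"{len(paragraphs)} paragraphs read from NexisUni exports")
```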

4 https://www.agriporta7.nl/datacenters/.
5 https://pypi.org/project/python-docx/.
6 https://about.proquest.com/en/content-solutions/news/.
7 https://www.newsbank.com/libraries/military/solutions/access-world-news.


Second, the mapping of news articles to a municipality rested on the assumption that the most frequently mentioned Dutch municipality reflects the location being discussed in the article. No assessment was made of the accuracy of this method. This is particularly troublesome when an article discusses a location which itself is not a municipality. One instance of such a location is Schiphol Airport: data centres located near Schiphol are officially part of the municipality of Haarlemmermeer, but many articles simply referred to Schiphol to describe a data centre's location. Though this specific example was accounted for, other instances of the same issue could exist. This means that some articles were either assigned to the wrong municipality or not assigned to any at all; the method of article localisation presented in this study is thus possibly inaccurate.

Lastly, due to the initial data limitation, the authors chose a non-automated content analysis process and manually verified the categories and keywords, which might affect the reproducibility of the results. Moreover, this study focused solely on news coverage in the Netherlands, meaning that it discovered the themes that occur most often in Dutch news coverage. The findings can therefore not be generalised internationally, as that would require an entirely new analysis, which also implies the study is not easily scalable to regions outside of the Netherlands. While energy-related matters are likely a universal issue, as data centres are inherently energy-intensive structures, the emphasis on spatial issues in the Netherlands could be due to its relatively small size. In addition, it is possible that some categories identified in this study can be observed in other countries' news coverage on data centres as well, but that these categories might be much more or less relevant depending on the country or location in question. The water usage of data centres, for instance, might be less of a problem in the Netherlands compared to a data centre operated in a region with less access to drinking water.
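The localisation heuristic described above can be sketched as follows; this is an illustration under our own assumptions, not the authors' code.

```python
# Assign each article to the Dutch municipality it mentions most often;
# the municipality list here is truncated for illustration.
import re
from collections import Counter

MUNICIPALITIES = ["Amsterdam", "Zeewolde", "Groningen", "Hollands Kroon",
                  "Haarlemmermeer"]        # in practice all ~340 municipalities
ALIASES = {"Schiphol": "Haarlemmermeer"}   # the special case noted above

def assign_municipality(text: str) -> str | None:
    counts: Counter[str] = Counter()
    for name in MUNICIPALITIES:
        counts[name] += len(re.findall(rf"\b{re.escape(name)}\b", text))
    for alias, target in ALIASES.items():
        counts[target] += len(re.findall(rf"\b{re.escape(alias)}\b", text))
    best, n = counts.most_common(1)[0]
    return best if n > 0 else None

print(assign_municipality("Near Schiphol a new data centre was proposed."))
# -> Haarlemmermeer
```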

7 Future Work

The tool aimed to give a quantified overview of themes discussed about data centres in the Netherlands and should be extended. In particular, future work could expand the scope of this research by analysing data centre news coverage in other countries. This additional analysis could then be added to the tool, allowing for examination of differences in newspaper coverage between countries. Further analyses could focus on countries which also host a relatively large number of data centres, such as Ireland or Denmark. Expanding the scope could reveal broader trends in the reception of data centres by news media internationally. This could also provide valuable insights into public debates surrounding data centres and their spatial differences. Moreover, such an expanded analysis could contribute to identifying research gaps and exploring the sustainability aspects of data centres in a more comprehensive manner.

Apart from expanding the tool's geographical scope, there are additional features which could be added to increase the tool's utility. One concrete addition could be the visualisation of data centre locations, which could help immediately show which data centres appear to attract media coverage. Another possible addition is the ability to visualise the occurrence of individual identified keywords.


This could provide the user with additional insights into the concrete discussion points in a given municipality. Lastly, the data set of 11,842 news articles collected for the purposes of this study can be used for modes of analysis that were not performed here. For instance, the data set might be used for a more detailed, qualitative study of the way data centres are discussed in a specific municipality. The data set also contains article metadata such as article subjectivity and polarity, which were displayed in the PowerBI tool but were not used for any further data analysis.

References
1. AMS IX, https://www.ams-ix.net/ams (n.d.). Accessed 24 May 2023
2. Bhattacherjee, A.: Social Science Research: Principles, Methods, and Practices, p. 115 (2012). Open Access Textbooks, Book 3. http://scholarcommons.usf.edu/oa_textbooks/3
3. Callou, G., Ferreira, J., Maciel, P., Tutsch, D., Souza, R.: An integrated modeling approach to evaluate and optimize data center sustainability, dependability and cost. Energies 7(1), 238–277 (2014)
4. Carrega, A., Repetto, M.: Exploiting novel software development paradigms to increase the sustainability of data centers. In: Proceedings of the 9th International Conference on Utility and Cloud Computing, pp. 310–315 (2016)
5. Daigle, B.: Data Centers around the World: A Quick Look. United States International Trade Commission, Washington (2021)
6. Digital Gateway to Europe, https://www.digitalgateway.eu/data-centers.html (n.d.). Accessed 24 May 2023
7. Dutch Data Center Association: Wat is een datacenter? (n.d.). https://www.dutchdatacenters.nl/datacenters/wat-is-een-datacenter/
8. Elo, S., Kyngäs, H.: The qualitative content analysis process. J. Adv. Nurs. 62(1), 107–115 (2008)
9. Freitag, C., Berners-Lee, M., Widdicks, K., Knowles, B., Blair, G.S., Friday, A.: The real climate and transformative impact of ICT: a critique of estimates, trends, and regulations. Patterns 2(9), 100340 (2021)
10. Gheyle, N., Jacobs, T.: Content analysis: a short overview. Internal Research Note, pp. 1–17 (2017)
11. Lykou, G., Mentzelioti, D., Gritzalis, D.: A new methodology toward effectively assessing data center sustainability. Comput. Secur. 76, 327–340 (2018)
12. Marwah, M., Maciel, P., Shah, A., Sharma, R., Christian, T., Almeida, V., Araújo, C., Souza, E., Callou, G., Silva, B., et al.: Quantifying the sustainability impact of data center availability. ACM SIGMETRICS Perform. Eval. Rev. 37(4), 64–68 (2010)
13. Moghaddam, F.A., Lago, P., Grosso, P.: Energy-efficient networking solutions in cloud-based environments: a systematic literature review. ACM Comput. Surv. (CSUR) 47(4), 1–32 (2015)
14. Morley, J., Widdicks, K., Hazas, M.: Digitalisation, energy and data demand: the impact of Internet traffic on overall and peak electricity consumption. Energy Res. Soc. Sci. 38, 128–137 (2018)
15. NOS Nieuws: Klimaatactivisten voeren in gemeentehuis actie tegen komst datacenter in Zeewolde (12 2021). https://nos.nl/artikel/2409322-klimaatactivisten-voeren-in-gemeentehuis-actie-tegen-komst-datacenter-in-zeewolde


16. NOS Nieuws: Meta ziet definitief af van datacenter in Zeewolde (06 2022). https://nos.nl/artikel/2434758-meta-ziet-definitief-af-van-datacenter-in-zeewolde
17. Rijksoverheid: Kabinet beperkt mogelijkheid tot vestiging hyperscale datacentra (06 2022). https://www.rijksoverheid.nl/actueel/nieuws/2022/06/10/kabinet-beperkt-mogelijkheid-tot-vestiging-hyperscale-datacentra
18. Shuja, J., Gani, A., Shamshirband, S., Ahmad, R.W., Bilal, K.: Sustainable cloud data centers: a survey of enabling techniques and technologies. Renew. Sustain. Energy Rev. 62, 195–214 (2016)
19. Teunissen, C.: Gewijzigde motie van het lid Teunissen over bevoegdheden zodanig gebruiken dat de vestiging van het datacenter in Zeewolde niet mogelijk zal zijn (t.v.v. 35925-XIII-93) (03 2022). https://www.tweedekamer.nl/kamerstukken/detail?id=2022Z05949&did=2022D12191
20. Toczé, K., Madon, M., Garcia, M., Lago, P.: The dark side of cloud and edge computing: an exploratory study. In: 8th Workshop on Computing within Limits (LIMITS 2022) (2022)

GAEA: A Country-Scale Geospatial Environmental Modelling Tool: Towards a Digital Twin for Real Estate Asfa Jamil, Chirag Padubidri, Savvas Karatsiolis, Indrajit Kalita, Aytac Guley, and Andreas Kamilaris

Abstract Monitoring the physical and artificial environment at large scale is crucial for approaching significant problems such as climate change, biodiversity loss, and sustainable urban growth. Towards this direction, GAEA is a novel AI-empowered geospatial online tool, designed to facilitate country-scale environmental monitoring, modelling, analytics, and geo-visualizations, providing valuable insights into the geographical region of the country of Cyprus, with some focus on the real estate application domain. This paper presents the design and development of GAEA, the needs and requirements it addresses, the environmental services it offers, its implementation details and main features, and an evaluation and discussion of its perspectives and overall potential. GAEA offers a user-friendly web interface that allows users to interact with a wide range of services, including land use monitoring, climate information, geohazard, and proximity analysis. GAEA is an important milestone and real-world demonstration of the vision of creating a country-scale environmental digital twin that allows informed decisions in land use assessment, climate analysis, and disaster management. Keywords Environmental digital twins · AI-empowered geospatial tool · Sustainable urban growth · Country-scale modelling · Environmental monitoring · Geo-visualizations · GAEA · Geoinformatics · Climate change

A. Jamil () · S. Karatsiolis · I. Kalita · A. Guley CYENS Center of Excellence, Nicosia, Cyprus e-mail: [email protected] C. Padubidri CYENS Center of Excellence, Nicosia, Cyprus Cyprus University of Technology, Limassol, Cyprus A. Kamilaris CYENS Center of Excellence, Nicosia, Cyprus University of Twente, Enschede, The Netherlands © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 V. Wohlgemuth et al. (eds.), Advances and New Trends in Environmental Informatics 2023, Progress in IS, https://doi.org/10.1007/978-3-031-46902-2_10



1 Introduction

In recent years, innovative sensing methods combined with analytical techniques have led to a significant shift and an exponential increase in the amount of digital data. This transformation has had a considerable impact on how we perceive and monitor our environment, making it crucial to analyze environmental components at scale to address challenges such as sustainable urbanization practices and biodiversity loss in the face of climate-driven crises. To effectively tackle these challenges, geospatial analytics tools including Application Programming Interfaces (APIs), Internet of Things (IoT) devices, and Artificial Intelligence (AI) technologies play a critical role in monitoring and evaluating the environment on a large scale. Modern tools based on Geographic Information Systems (GIS) offer powerful computer mapping and analytical capabilities, enabling us to approach complex environmental phenomena and disasters such as climate change, forest fires, flooding, and landslide risks [1].

This paper introduces GAEA, a country-scale environmental modelling tool based on geospatial informatics. GAEA serves as a digital twin for the real estate market of Cyprus, providing highly accurate geospatial analytics through its user-friendly interface and a wide range of environmental services. GAEA offers diverse environmental services designed to model, analyze, and visualize various geospatial challenges. For instance, GAEA enables the detection of swimming pools, construction changes, and vegetation around houses, which is essential for land use planning and urban development. Furthermore, GAEA offers insights regarding environmental risks such as land subsidence, landslides, wildfires, and flooding, enabling policymakers to estimate potential hazards and conduct vulnerability studies, while real estate agents can better evaluate properties based on these insights. Additionally, GAEA provides information on the distances to the nearest Points of Interest (POI), such as amenities, assisting individuals and organizations in making informed decisions about property purchases or infrastructure development. By integrating these environmental geospatial services, GAEA enhances decision-making and policy development across a wide range of environmental problems and challenges.

In the following sections, we first discuss state-of-the-art related work in this field, then present GAEA, including its technical description, software architecture, and the features and services it offers. Subsequently, we assess GAEA, exploring its potential and highlighting its contributions to the fields of geospatial informatics and digital twins.

2 Literature Review

In this section, we review relevant work on large-scale geospatial tools and digital twins, which are closely related to the services and design of the GAEA tool.


These areas of research are centred around environmental modelling and digital twins and employ geospatial analytics with big geospatial data.

Starting with large-scale geospatial tools, Palaiologou et al. [2] proposed a framework for fire prediction and risk assessment that integrates open data, geospatial analysis, and Monte Carlo fire simulations. This framework serves as an efficient geospatial tool for land monitoring and can function as a digital twin for strategic wildfire management. Similarly, Verma et al. [3] introduced GeoEngine, a comprehensive tool that addresses the challenges of geospatial machine learning in large-scale land monitoring applications, mimicking the functionalities of a digital twin. Further, Zhang et al. [4] presented a City Appearance Environment Management System based on WebGIS, leveraging data visualization technology to analyse the appearance and environmental information of urban areas. Morales et al. [5] introduced Earth Map, offering free access to satellite and geospatial data and utilizing Google Earth Engine's capabilities for seamless visualization, processing, and analysis of land and climate data.

Shifting the focus to large-scale digital twins, Ignatius et al. [6] introduced BESCAM, a simulation platform that assesses microclimate conditions and district energy demand using GIS data, urban canopy modelling, and building energy simulation. This platform provides architects, engineers, and scientists with a digital twin-like solution to analyse CityGML and Building Information Modelling (BIM) data. Lee et al. [7] presented a Unity3D-based geospatial platform for an Urban Digital Twin (UDT) that manages large-scale individual mobility data and generates long-term route information based on vehicle license plates. In a different context, Ghita et al. [8] proposed a framework that integrates location intelligence, digital twins, and business intelligence to effectively manage crisis situations, leveraging GIS-based data for multi-perspective modelling, optimization, and learning.

The reviewed papers collectively demonstrate the role of GIS in processing and analyzing large datasets, leading to significant advancements in environmental monitoring, urban management, agricultural assessments, and climatic research. The combination of GIS and digital twin technologies presented in the literature serves as the foundation for the development of the GAEA tool, with its extensive range of services encompassing land cover monitoring, geo-hazard assessment, climate monitoring, proximity analysis, and geomorphological characteristics. We hypothesize that through the integration of satellite imagery, geospatial analytics, and artificial intelligence within a unified framework, a solution can be crafted to effectively address the specific needs of Cyprus. This integration is intended to provide a distinct advantage over existing solutions by enabling a more comprehensive understanding of Cyprus' environmental dynamics through one platform, thereby facilitating more informed decision-making and policy formulation.


3 The GAEA Tool

This section provides an overview of the GAEA tool, including its software architecture, key features, environmental services, service generation methodology, and user interface.

3.1 GAEA User Interface

The main user interface (UI) allows users to make spatial queries through the map, retrieve related information, and visualize the data for the 26 environmental services provided by GAEA, supporting better understanding and interpretation. The UI of GAEA is presented in Fig. 1.

GAEA enables users to interact with geospatial data by offering various query options. Users can select locations on the map and extract relevant data related to the selected points or areas through the UI. They can select specific points of interest on the map by simply clicking on them, and they can also draw polygons to define custom areas of interest (AOI). The polygon selection feature is particularly useful for analyzing geospatial patterns within specific regions. Furthermore, GAEA's front-end component allows users to search for locations using addresses, facilitating quick data retrieval. The query options on the user interface of GAEA are demonstrated in Fig. 2a.

Once a user makes a point or polygon-based request, GAEA visualizes the response. Example responses of GAEA to user queries are shown in Fig. 3. Figure 3a shows the response to the "flooding risk" query, presenting the risk score and associated values for the specified AOI. The "swimming pool detection" query response, shown in Fig. 3b, indicates whether a swimming pool is present at the specified location. The "proximity to the beach" query response, shown in Fig. 3c, returns the distance to the three nearest beaches with their exact locations.

GAEA utilizes interactive plots, maps, and visual representations to aid users in comprehending complex geospatial data. By leveraging modern web technologies and data visualization libraries, GAEA generates visualizations according to each service and request type. For instance, for the "swimming pool detection" query, users are presented with a pie chart comparing properties with and without swimming pools. Users can explore different layers of information and toggle between various visual representations; for instance, the UI allows users to visualize environmental risk-based services through pie charts and heat maps. These visualizations empower users to identify trends, patterns, and correlations, facilitating better decision-making. Figure 4 showcases different types of visualization for three different services. For the "landslide risk" polygon-based query, Fig. 4a presents a pie chart displaying risk scores for landslides within the specified polygon. The "humidity" service, which supports both point and polygon-based queries, is visualized in Fig. 4b using a bar graph showing monthly average, maximum, and minimum humidity percentages.


Fig. 1 An overview of GAEA user interface and functionalities


Fig. 2 Snapshots of the GAEA user interface: (a) the query options interface allows users to select polygons or points, customize the map type, and view results, (b) mobile view of the GAEA interface, (c) a description of the selected service along with input and output parameters


Fig. 3 Exemplary environmental service responses demonstrated by GAEA. (a) Flooding risk. (b) Swimming pool. (c) Distance to beach


Fig. 4 GAEA example responses as visualizations. (a) Landslides risk. (b) Humidity. (c) Geology



Fig. 4c illustrates the "geology" service for a point-based query, displaying the soil type and soil depth.

Furthermore, the GAEA platform offers a range of functions to enhance spatial data manipulation, including:

– The ability to modify polygons by removing individual points or deleting them entirely, as shown in Fig. 1-Label:[A].
– The use of specific polygon shapes for area calculations, as depicted in Fig. 1-Label:[B].
– Notifications when polygons exceed the acceptable area of 1 km², as shown in Fig. 1-Label:[C]; a minimal sketch of such an area check follows this list.
– Detailed service descriptions containing information on data sourcing methods and outputs, as shown in Fig. 1-Label:[D] and Fig. 2c.
– Switching between street view and satellite view on the map, as illustrated in Fig. 1-Label:[E].
– The ability to store query results, allowing future assessment or comparison with previous data.

Overall, GAEA provides a comprehensive and user-friendly platform with extensive functionalities for geospatial data analysis and manipulation.
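The area check mentioned in the list above could be implemented as follows; this is an assumed implementation with shapely and pyproj, not GAEA's published code.

```python
# Reproject a user-drawn WGS84 polygon to a metric CRS (UTM zone 36N,
# which covers Cyprus) and reject it if it exceeds the 1 km^2 limit.
from shapely.geometry import Polygon
from shapely.ops import transform
from pyproj import Transformer

MAX_AREA_M2 = 1_000_000  # 1 km^2

to_metric = Transformer.from_crs("EPSG:4326", "EPSG:32636",
                                 always_xy=True).transform

def area_m2(polygon_wgs84: Polygon) -> float:
    return transform(to_metric, polygon_wgs84).area

aoi = Polygon([(33.36, 35.16), (33.37, 35.16), (33.37, 35.17), (33.36, 35.17)])
if area_m2(aoi) > MAX_AREA_M2:
    print("Polygon exceeds the 1 km^2 limit")
```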

3.2 Software Architecture

GAEA offers a comprehensive suite of geospatial features, geo-visualizations, and environmental services for a variety of application domains, including land cover monitoring, geohazard assessment, climate monitoring, proximity services, and geomorphological land characteristics. GAEA provides this extensive set of services through GIS technology, satellite imagery, AI, machine learning, and deep learning models, as well as big data geospatial analysis and processing. With its intuitive user interface, GAEA enables users to harness and visualize environmental data effectively. By visualizing data, users can uncover hidden patterns, trends, and correlations within geospatial information, leading to informed decision-making and a deeper understanding of the spatial world. GAEA's graphical interface allows users to interact with the tool and retrieve information through points or polygons drawn on the map of Cyprus, or by typing in specific addresses of interest. GAEA follows a modular software architecture (see Sect. 3.3) and supports 26 environmental services, which are described in detail in Sect. 3.4. These services offer users a wide range of geospatial information and analytics. Whether users access GAEA through desktop or mobile devices, they can conveniently utilize the tool's functionalities. The mobile view of GAEA's user interface is shown in Fig. 2b, highlighting the accessibility and adaptability of the tool across different devices.


Fig. 5 Block diagram illustrating the components of GAEA's architecture: the front-end user interface (map, query options, response visualization), the back-end application logic, the SuPerWorld Geo API with its geospatial data (feature layers and GeoTIFF files), and the MongoDB database

3.3 Implementation Details

GAEA is driven by the MERN (MongoDB, Express, React, Node) architecture, which consists of React for the frontend, the React Leaflet package and OpenStreetMap for maps, Node.js/Express for the backend, and MongoDB for the database. The implementation of a RESTful API for frontend-to-backend communication ensures that data is transferred efficiently. GAEA uses the SuPerWorld Geospatial API1 for direct access to the 26 environmental services. The SuPerWorld API has been developed as a Flask-based application and is incorporated into GAEA to expand its capabilities. It employs a variety of Python geospatial libraries, including GDAL, Fiona, Shapely, Rasterio, and Rtree, thereby enhancing the system's functionality.

GAEA combines a number of interrelated components into a comprehensive geospatial platform: a front-end interface, back-end infrastructure, the SuPerWorld API connection, and a geospatial data store, as shown in Fig. 5. These components work together to enable users to maximize the value of geospatial data, conduct efficient spatial queries, and visualize information in a meaningful and intuitive manner. GAEA's design ensures a robust and scalable base for providing users with rich geospatial experiences.

Front-End Component The front-end of GAEA provides an intuitive and interactive user interface for querying and visualizing geospatial data. For efficient exploration and analysis of geospatial data, the front-end provides different query options, response visualization, and other features.

1 SuPerWorld Geospatial API: https://superworld.cyens.org.cy/product1.html.
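As an illustration of this stack, the following sketch shows what a point-based service endpoint in the style of the SuPerWorld API could look like; the route name, parameter names, and data layout are our own assumptions, not the published API.

```python
# Hedged sketch of a Flask service endpoint: load a vector layer with
# Fiona once at start-up, then answer point queries with Shapely.
# "flood_zones.shp" and "risk_score" are hypothetical names.
from flask import Flask, jsonify, request
from shapely.geometry import Point, shape
import fiona

app = Flask(__name__)

with fiona.open("data/flood_zones.shp") as src:
    ZONES = [(shape(feat["geometry"]), dict(feat["properties"])) for feat in src]

@app.get("/services/flooding-risk")
def flooding_risk():
    # e.g. GET /services/flooding-risk?lon=33.36&lat=35.17
    pt = Point(float(request.args["lon"]), float(request.args["lat"]))
    for geom, props in ZONES:
        if geom.contains(pt):
            return jsonify(props)          # e.g. {"risk_score": "high"}
    return jsonify({"risk_score": "none"})

if __name__ == "__main__":
    app.run(port=5000)
```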


SuPerWorld API The SuPerWorld API is an important component that connects the back end and front end of GAEA. It receives user queries from the back-end component and conducts geospatial analysis on data contained in vector (shapefile) and raster (GeoTIFF) files. Utilizing spatial indices and spatial relationships, the SuPerWorld API efficiently extracts meaningful information through algorithms and techniques intended specifically for geospatial analysis. It computes and returns the results to the back-end component for further processing and delivery to the front end. This integration between the SuPerWorld API and the back end ensures accurate geospatial analysis results.

Back-End Component The back-end component is the engine of GAEA, handling user queries and administering data retrieval from the SuPerWorld API. It serves as an intermediary between the front end and the SuPerWorld API. When a user submits a query through the front end, the back-end component receives the request and prepares it for processing. It communicates with the SuPerWorld API, which executes the geospatial analysis, to retrieve the corresponding results. Once the data has been obtained from the SuPerWorld API, the back-end component prepares the response and returns it to the front end for presentation to the user.

Geospatial Database Component GAEA supports different formats of geospatial data, including shapefiles and GeoTIFF files, that can represent point, polygon, or polyline-based features; using these formats, GAEA accommodates diverse applications and use cases. In addition, GAEA stores the shapefiles and GeoTIFF files on the same server as the SuPerWorld API, so external geospatial hosting services such as ArcGIS are not needed. Co-locating data and API on a single server improves performance, reduces latency, and facilitates data retrieval and processing. With these enhancements, users can access and analyze geospatial data without added complications.

Geospatial Search and Relationships GAEA uses the R-tree [9] method for efficient spatial search and processing of geospatial queries; indexing and spatial search are optimized using this technique. Through the splitting and indexing of spatial objects, the R-tree algorithm arranges the data into a hierarchical structure, which speeds up response times and makes it easier to process input requests effectively. In addition to fast spatial searches, GAEA is capable of handling geospatial relationships such as a polygon within a polygon, a polygon overlapping another polygon, or a line touching a polygon. GAEA also determines the proximity between features using the distances to the closest features. By taking spatial interactions like proximity, containment, and intersection into account, GAEA offers users accurate responses regarding various geographical associations. Thus, GAEA enhances the overall efficiency and effectiveness of geospatial analysis by combining effective spatial search through the R-tree index with the handling of multiple geospatial relationships between query points or polygons and the underlying geospatial data for all services. A sketch of this search-and-refine pattern follows.
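A minimal sketch of this pattern, using the rtree and shapely packages named in Sect. 3.3 (the polygon data is made up for illustration):

```python
# R-tree candidate search plus exact refinement: index the polygons'
# minimum bounding rectangles, then verify candidates with an exact
# geometric predicate.
from rtree import index
from shapely.geometry import Point, Polygon

polygons = [
    Polygon([(0, 0), (2, 0), (2, 2), (0, 2)]),
    Polygon([(3, 3), (5, 3), (5, 5), (3, 5)]),
]

idx = index.Index()
for i, poly in enumerate(polygons):
    idx.insert(i, poly.bounds)            # bounds = (minx, miny, maxx, maxy)

query = Point(1.0, 1.5)
candidates = idx.intersection(query.bounds)   # cheap MBR filter
hits = [i for i in candidates if polygons[i].contains(query)]
print(hits)  # -> [0]

# The same refinement step supports the other relationships mentioned
# above, e.g. polygons[i].overlaps(q), .within(q), .touches(q)
```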


Data Visualizations GAEA acknowledges the importance of data visualization for effectively communicating geospatial insights to end users. Using modern data visualization libraries and techniques, GAEA converts complex geospatial data into interactive charts and maps, facilitating a clearer comprehension of the information at hand. This visual representation enables users to investigate and interpret data more effectively, supporting well-informed decisions and deeper analysis.

In summary, GAEA integrates its front-end, back-end, and API components for efficient geospatial analysis. Using the SuPerWorld API, geospatial data is processed for optimization and seamless integration. Incorporating the R-tree algorithm allows for efficient indexing and searching, while geospatial relationships improve data retrieval by capturing spatial associations. Together, these components enable users to leverage optimized data, enhanced retrieval, and interactive visualization for valuable insights and informed decision-making.

3.4 GAEA Environmental Services Overview

The development of GAEA followed an innovative and comprehensive methodology that utilized the SuPerWorld API, a geospatial API customized for diverse environmental applications in Cyprus. This methodology involved the systematic acquisition and preprocessing of numerous datasets containing a wide variety of environmental indicators for the region of Cyprus. Our strategy for creating GAEA centres on integrating 26 essential environmental services. Figure 6 shows the broader categories of environmental services offered by GAEA and lists the services that fall under each category.

GAEA Services Detailed Overview Each service is designed using purpose-specific methodologies and data sources. These services provide valuable insights and analyses for a wide array of environmental applications in Cyprus, empowering decision-making processes and facilitating the effective management of land, climate, and geohazard-related challenges. A detailed overview of GAEA services, with regard to their data sources and the methodology used for their development (i.e., data pre-processing steps, spatial resolution, modelling, etc.), is provided in Table 1.

3.5 Evaluation and Results

The performance of the GAEA tool has been evaluated based on multiple metrics, and the results of this experimentation were used to improve GAEA in terms of accuracy, usability, and efficiency.

Fig. 6 An overview of GAEA environmental services categories: (I) The Land Cover Monitoring category includes services designed to analyze and comprehend the land cover characteristics of the region of Cyprus, (II) Geohazard-related services concentrate on environmental risks for the main natural disasters occurring in Cyprus, (III) The Proximity category is mainly about the proximity of a location to different POI in the region of Cyprus, (IV) Climate monitoring services offer insights into various climatic factors for specific locations in Cyprus, and (V) The geomorphological-related services provide valuable information about various geomorphological features and characteristics in Cyprus

The services shown in Fig. 6, grouped by category, are:
I. Land Cover Monitoring: Detection of Swimming Pool, Detection of Land Use Change, Detection of Vegetation, Land Use Classification, Building Area Estimation, Natura regions
II. Geohazards: Subsidence Risk, Landslide Risk, Wildfire Risk, Detection of Burnt Areas, Flooding Risk, Building Earthquake Damage Probability, Seismic Zones and Distance to fault lines, Geo-Suitability Zones
III. Proximity: Roads, Sea, Blue Flag Beaches, Electric Network, Amenities
IV. Climate Monitoring: Wind, Temperature, Humidity, Precipitation
V. Geomorphological Characteristics: Slope & Aspect, Elevation, Geology

Table 1 GAEA geospatial environmental services

Land monitoring
– Detection of swimming pools. Data source: Google Earth images at a resolution of 50 cm [10]. Methodology: the images were preprocessed (divided into 256x256x3 patches) and annotated with the Label Studio tool [11]; data augmentation techniques were applied to the annotated data; the data was split 70:10:20 (training:validation:test); a U-Net [12] with ResNet50 [13] as feature-extracting layer was trained on the swimming pool data.
– Land use change. Data source: a pair of PlanetScope [14] images (one-year temporal gap) at a resolution of 3.7 m. Methodology: the images were preprocessed (split into 256x256x3 patches) and labelled by visually comparing photographs from two separate dates (weakly supervised annotation); the annotated data was augmented and split 70:10:20. Change detection was achieved in three steps: first, a Siamese neural network [15] produced an initial change map; next, a semi-automated OTSU [16] threshold-based algorithm created a second change map; finally, common changes were identified from the two change maps obtained in steps 1 and 2.
– Detection of vegetation. Data source: 50 cm RGB aerial images from the Department of Land Registry Cyprus; global canopy top height for the year 2020 at 10 m ground sampling distance [17]. Methodology: VARI index calculation based on the formula VARI = (G - R) / (G + R - B); the canopy height data was generated by a probabilistic deep learning model trained on Sentinel-2 images and GEDI-derived canopy top height data.
– Detection of land use classes. Data source: 50 cm RGB aerial images from the Department of Land Registry Cyprus. Methodology: preprocessing (256x256x3 patches), Label Studio annotation [11], data augmentation, 70:10:20 split; a U-Net [12] with ResNet50 [13] as feature-extracting layer was trained on the land cover data.
– Building area estimation. Data source: 50 cm RGB aerial images from the Department of Land Registry Cyprus. Methodology: same pipeline as above (patching, annotation, augmentation, 70:10:20 split); a U-Net [12] with ResNet50 [13] feature extractor was trained on the land cover data.
– Natura regions. Data source: Land Registry of Cyprus. Methodology: –

Geohazards
– Subsidence risk detection. Data source: Sentinel-1 InSAR satellite data. Methodology: satellite imagery analysis, using the ESA RUS-Copernicus proposed methodology to pre-process and analyze the data [18].
– Landslide risk assessment. Data source: lithology, slope angle, climate-physiographic regions, and land cover information from the European Soil Data Centre [19]. Methodology: random forest classifier.
– Wildfire risk. Data source: geomorphological and climate-physiographical data, vegetation data, distance to populated areas and electric grids. Methodology: random forest classifier.
– Detection of burnt areas. Data source: Landsat imagery at a resolution of 30 m [20]. Methodology: Normalized Burn Ratio (NBR) calculation based on the formula NBR = (NIR - SWIR) / (NIR + SWIR).
– Flooding risk. Data source: distance to rivers, catchment area, historical floods, elevation data. Methodology: random forest classifier.
– Estimation of building earthquake damage probability. Data source: specific datasets for 40,000 buildings. Methodology: computational mathematical model.
– Determining seismic zone and proximity to fault lines. Data source: seismic hazard map of Cyprus, Land Registry Cyprus. Methodology: GIS analysis, proximity calculation.
– Geo-suitability and geohazard assessment. Data source: geotechnical drilling investigations, geological mapping activities, and geophysical excavations conducted by the Department of Geology; Land Registry Cyprus. Methodology: –

Proximity
– Proximity to roads. Data source: Land Registry Cyprus. Methodology: GIS analysis; for dirt roads, data was gathered from the land use service developed.
– Proximity to sea. Data source: Land Registry Cyprus. Methodology: –
– Proximity to blue-flag beaches. Data source: Blue Flag official website [21]. Methodology: –
– Proximity to the electricity network. Data source: Land Registry Cyprus. Methodology: –
– Proximity to the nearest amenity. Data source: Land Registry Cyprus, Google Places API [22]. Methodology: –

Climate monitoring
– Wind speed monitoring. Data source: data collected from weather stations in Cyprus, provided by the Department of Meteorology Cyprus. Methodology: data retrieval from weather stations.
– Temperature. Data source: data collected from weather stations in Cyprus, provided by the Department of Meteorology Cyprus. Methodology: approximation to the nearest weather stations.
– Humidity. Data source: weather stations in Cyprus. Methodology: approximation to the nearest weather station.
– Precipitation. Data source: CHIRPS [23] data. Methodology: interpolation-based geospatial analysis and categorization.

Geomorphological characteristics
– Slope and aspect. Data source: Land Registry digital elevation model (DEM). Methodology: calculation of slope and aspect using GIS techniques.
– Elevation. Data source: Land Registry digital elevation model (DEM). Methodology: –
– Geology and soil. Data source: Land Registry of Cyprus. Methodology: geospatial analysis.
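The two spectral indices in Table 1 translate directly into code; a small numpy sketch follows (illustrative only, with band arrays assumed to be same-shaped reflectance rasters).

```python
import numpy as np

def vari(r, g, b):
    """Visible Atmospherically Resistant Index: (G - R) / (G + R - B)."""
    return (g - r) / (g + r - b + 1e-9)   # epsilon guards against /0

def nbr(nir, swir):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir + 1e-9)

rng = np.random.default_rng(0)
r, g, b, nir, swir = rng.random((5, 4, 4))   # toy 4x4 "bands"
print(vari(r, g, b).shape, nbr(nir, swir).shape)
```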


Correctness of Point and Polygon-Based Queries To verify the correctness of GAEA's responses to point and polygon-based queries, alpha testing was performed. Five scenarios each were considered for point-based and polygon-based queries. The testing aimed to correlate the output of GAEA with the ground truth and determine whether the correct results were returned. The user interface of GAEA simplifies testing the correctness of its underlying services thanks to its visualization aspect. For the "proximity to amenities" service, it was observed that some recently established places, especially restaurants and pharmacies, were not retrieved; we therefore enriched the service with Google Places API [22] data. For the "humidity" service, we found some data discrepancies in the city of Paphos (the third largest city of Cyprus) due to the unavailability of data from the nearest weather station. We resolved this issue by falling back to the monthly aggregate of the humidity percentage over all weather stations in case of data unavailability (a sketch of this fallback is given below).

Accuracy of Environmental Services For each service, specific evaluation metrics were selected, and the performance of the services was assessed using these metrics. Table 2 provides detailed information about the evaluation method, the evaluation metrics, and the results for each service. After conducting the evaluation for each service, we found that for risk-based services, including "wildfire risk" and "flooding risk", the model performed well in binary classification, i.e. identifying the occurrence or non-occurrence of risk. However, on a 5-point scale (no risk to very high risk), the services performed worse in areas where no risk occurred, as the model tries to match the ground truth label while the available historical data only records whether a wildfire or flood occurred in the past. To address this issue for the "wildfire risk" service, we are now developing a model that learns from the burn intensity ratio and classifies risk based on the input parameters rather than the ground truth label; a similar approach will be used for the "flooding risk" service. For the "building area estimation" service, the evaluation revealed that precision and recall are around 90% for urban areas, but accuracy drops for forest regions, as the model is trained on satellite imagery containing urban areas only.

Handling of Different Spatial Relationships Testing was conducted to assess GAEA's effectiveness in handling different spatial relationships. Various experiments were conducted, such as a point inside a polygon, a polygon inside (overlapping, within, or touching) another polygon, and point-to-point relationships between the input point or polygon and the spatial data related to the service under consideration. The results showed that GAEA correctly identifies the spatial relationship between the input and the underlying geospatial layer or GeoTIFF.

Distance Calculation and Area Estimation of Polygons Based on user interaction with GAEA, it was found that GAEA provides accurate distances and polygon-area estimations when compared to Google Maps. For some services, when a polygon larger than 1 km² is selected, no results are obtained due to a response timeout issue in the SuPerWorld API; therefore, a limit of 1 km² has been applied to the size of the polygon.
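The humidity fallback mentioned above could look as follows; this is a hedged sketch under assumed station-data layout, not GAEA's published code.

```python
# Use the nearest weather station's reading when available, otherwise
# fall back to the monthly aggregate over all stations.
import pandas as pd

readings = pd.DataFrame({            # monthly humidity per station (%)
    "station": ["Nicosia", "Limassol", "Paphos"],
    "month":   [7, 7, 7],
    "humidity": [52.0, 61.0, None],  # Paphos: missing, as observed
})

def monthly_humidity(station: str, month: int) -> float:
    rows = readings[(readings.station == station) & (readings.month == month)]
    value = rows.humidity.iloc[0] if not rows.empty else None
    if value is None or pd.isna(value):
        # fallback: aggregate over all stations for that month
        return readings[readings.month == month].humidity.mean()
    return value

print(monthly_humidity("Paphos", 7))   # -> 56.5 (mean of available stations)
```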

Table 2 Evaluation results for web services

Land cover monitoring
– Detection of swimming pools inside the property. Metrics: precision, recall. Results: 72%, 77%. Method: testing with the 20% test dataset.
– Land use change. Metrics: precision, recall. Results: 98%, 91%. Method: a confusion matrix is used to assess performance (precision and recall); the results are based only on the testing data (20%).
– Detection of vegetation around the property. Metric: precision. Result: 73.4%. Method: manual inspection; precision and recall are calculated for 428 sample points randomly chosen across Cyprus, by manually checking whether they fit into the defined vegetation scores.
– Detection of land use classes. Metrics: precision, IOU (Phase 1); accuracy (Phase 2). Results: 83.64%, 74.12% (Phase 1); 89.18% (Phase 2). Method: testing with the 20% test dataset.
– Building area estimation. Metrics: precision, recall. Results: 98.92%, 93.86% on urban areas. Method: manual inspection and testing with the 20% test dataset.
– Natura. Metric: accuracy. Result: close to 100%. Method: contingent on data from the Land Registry Cyprus.

Geohazards
– Subsidence risk detection. Metric: accuracy. Result: high cross-correlation, between 90–95%. Method: manual inspection and comparison with ground truth data.
– Landslide risk assessment. Metric: precision. Result: 70%. Method: comparison with ground truth data.
– Wildfire risk. Metric: precision on binary classification. Result: 96%. Method: manual inspection.
– Detection of burnt areas. Metric: accuracy. Result: close to 100%. Method: manual inspection.
– Flooding risk. Metric: precision based on binary classification. Result: 84%. Method: comparison with ground truth data.
– Estimation of building earthquake damage probability. Metric: –. Result: –. Method: manual inspection.
– Determining seismic zone and proximity to fault lines. Metric: –. Result: contingent on data from the Land Registry Cyprus. Method: –
– Geo-suitability and geohazard assessment. Metric: accuracy. Result: contingent on data from the Land Registry Cyprus. Method: manual inspection.

Proximity
– Proximity to roads. Metric: accuracy. Result: close to 100% on inspected locations. Method: manual inspection.
– Proximity to the sea. Metric: accuracy. Result: close to 100% on inspected locations. Method: contingent on INSPIRE mapping service performance.
– Proximity to blue-flag beaches. Metric: precision. Result: 100%. Method: manual inspection.
– Proximity to the electricity network. Metric: accuracy. Result: close to 100% on inspected locations. Method: manual inspection.
– Proximity of the property to the nearest amenity. Metric: precision. Result: 90–100%. Method: manual inspection.

Climate monitoring
– Wind speed monitoring. Metric: –. Result: –. Method: data collected from weather stations.
– Temperature. Metric: –. Result: –. Method: data collected from weather stations.
– Humidity. Metric: –. Result: –. Method: data collected from weather stations.
– Precipitation. Metric: –. Result: –. Method: manual inspection and predefined thresholds.

Geomorphological characteristics
– Slope and aspect. Metric: accuracy. Result: close to 100%. Method: manual inspection.
– Elevation. Metric: accuracy. Result: contingent on the DEM from the Land Registry Cyprus. Method: data collected from the DEM.
– Geology and soil. Metric: accuracy. Result: close to 100% on inspected locations. Method: manual inspection.


The Efficiency of Query Search To evaluate the effectiveness of the geospatial index (R-tree) used for spatial search, we performed experiments both with and without spatial indexing. For point-based and polygon-based queries alike, the experimentation showed that the response time was consistently lower when using an R-tree index than when searching without spatial indexing. From our experiments, we also identified optimization strategies for polygon checking: by ordering candidate polygons by the proximity of their centroids to the input polygon's centroid and terminating the checks early, we efficiently discarded distant, non-intersecting polygons (sketched below). Using each polygon's minimum bounding rectangle (MBR) for the R-tree index construction further reduced the index size, accelerated searches, and proved beneficial for large, complex polygons.
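A sketch of our reading of this early-termination strategy follows; the paper does not include code, so the distance bound used here is our own conservative construction.

```python
# Order candidates by centroid distance to the query polygon and stop
# scanning once no remaining candidate can possibly intersect it.
from shapely.geometry import Polygon

def intersecting(query: Polygon, candidates: list[Polygon]) -> list[Polygon]:
    qc = query.centroid
    qr = query.hausdorff_distance(qc)               # query "radius"
    max_r = max((c.hausdorff_distance(c.centroid) for c in candidates),
                default=0.0)                        # largest candidate radius

    order = sorted(candidates, key=lambda c: qc.distance(c.centroid))
    hits = []
    for cand in order:
        if qc.distance(cand.centroid) > qr + max_r:
            break                                   # early termination
        if query.intersects(cand):
            hits.append(cand)
    return hits
```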

4 Discussion

Our geospatial tool, GAEA, has the potential to significantly impact various stakeholders in Cyprus, including urban planners, environmentalists, real estate agents, and policymakers. By providing comprehensive geospatial data, GAEA enables informed decision-making, risk assessment, and environmental monitoring.

Potential Applications and Benefits GAEA offers a wide range of potential applications and benefits for different stakeholders in Cyprus. Potential users of this service include government agencies and NGOs, urban planners, environmentalists and ecologists, real estate companies, insurance companies, and risk management companies. For example, urban planners can leverage the tool to track land cover changes, identify suitable areas for construction, and assess proximity to amenities and infrastructure; this information is crucial for sustainable urban development and efficient resource allocation. In the real estate industry, developers and evaluators can utilize GAEA's services to assess land use changes, accurately determine property values, and identify a marketing edge based on proximity to amenities, the sea, or blue-flag beaches. The tool's ability to detect building areas and swimming pools further assists in property valuation and development decisions. Thus, GAEA forms the foundational layer of a digital twin for the real estate industry of Cyprus, giving an accurate representation of the current state of land use, properties, and surrounding environmental features. Policymakers can utilize GAEA to monitor environmental changes, evaluate the impact of geohazards, and develop policies and regulations related to land use planning, disaster management, and environmental protection.


Finally, GAEA serves as an essential tool for environmentalists and ecologists. Its capability to identify geohazards, including subsidence risk, landslide risk, wildfire risk, and flooding risk, empowers them to assess and mitigate potential environmental hazards, conserve natural resources, and protect biodiversity. One valuable application of GAEA is in the development of Species Distribution Models (SDMs) [24]. SDMs are used by ecologists and conservationists to understand the environmental factors influencing species distributions and to predict habitat suitability. Crucially, covariates such as environmental data, climate data, and land cover maps play a pivotal role in developing accurate SDMs, and GAEA, with its extensive range of services, provides these essential covariates. By integrating GAEA's data and capabilities into the SDM process, researchers and conservationists can enhance the accuracy and reliability of their models, leading to informed decisions for biodiversity conservation and the formulation of effective conservation policies.

Contributions to Environmental Informatics and Sustainability GAEA's development aligns with the broader goals of environmental informatics, sustainability, and the digital transformation of society. GAEA contributes towards meeting key objectives outlined in the United Nations Sustainable Development Goals (SDGs), particularly those concerning climate action and life on land. By aligning with SDG 11 (Sustainable Cities and Communities), SDG 13 (Climate Action), and SDG 15 (Life on Land), GAEA serves as a catalyst in promoting environmentally responsible practices that create a lasting social impact.

Future Work In the future, there is scope for developments and improvements in GAEA to enhance its functionalities and services. One potential improvement is the enrichment of the services provided by GAEA with more indicators, such as building quality and land use similarity. To make it a true 3D twin of Cyprus, we will include 3D visualizations. Refining the models and algorithms employed by GAEA can result in greater accuracy and precision in the services delivered; we are striving toward this goal through continual research and collaborations with field experts. We believe that GAEA will present an opportunity to collaborate with other scholars, organizations, and institutions in order to benefit from and improve its potential. Through these collaborations and knowledge-sharing efforts, we can not only improve GAEA's capabilities but also cover other parts of the world with geospatial analysis and visualizations.

In conclusion, GAEA is an important tool offering valuable geospatial insights for decision-making, risk assessment, and environmental monitoring to various stakeholders in Cyprus, with the potential to open new avenues of research, partnerships, knowledge utilization, and sharing.


5 Conclusion

This paper presents the design and development of GAEA, a novel AI-empowered geospatial online tool designed to facilitate country-scale environmental monitoring, modelling, analytics, and geo-visualizations, providing valuable insights into the geographical region of the country of Cyprus, with some focus on the real estate application domain. The paper describes the needs and requirements addressed by the tool, the 26 environmental services it offers, its implementation details and main features, as well as an evaluation and discussion of its perspectives and overall potential. Since monitoring the physical and artificial environment at a large scale is crucial for addressing significant societal problems such as climate change, biodiversity loss, and sustainable urban growth, GAEA serves as an important milestone and real-world demonstration towards the vision of creating country-scale environmental digital twins, which allow informed decisions in land use assessment, climate analysis, and disaster management, among others.

References

1. Yang, L., Driscol, J., Sarigai, S., Wu, Q., Chen, H., Lippitt, C.D.: Google earth engine and artificial intelligence (ai): a comprehensive review. Remote Sens. 14(14), 3253 (2022)
2. Palaiologou, P., Kalabokidis, K., Day, M.A., Ager, A.A., Galatsidas, S., Papalampros, L.: Modelling fire behavior to assess community exposure in europe: combining open data and geospatial analysis. ISPRS Int. J. Geo-Inf. 11(3), 198 (2022)
3. Verma, S., Gupta, S., Shin, H., Panigrahi, A., Goswami, S., Pardeshi, S., Exe, N., Dutta, U., Joshi, T.R., Bhojwani, N.: Geoengine: a platform for production-ready geospatial research. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 21,416–21,424 (2022)
4. Zhang, X., Ma, C., Yang, G.: City appearance environment management system based on webgis. In: Third International Conference on Computer Science and Communication Technology (ICCSCT 2022), vol. 12506, pp. 1389–1398. SPIE (2022)
5. Morales, C., Díaz, A.S.-P., Dionisio, D., Guarnieri, L., Marchi, G., Maniatis, D., Mollicone, D.: Earth map: a novel tool for fast performance of advanced land monitoring and climate assessment. J. Remote Sens. 3, 0003 (2023)
6. Ignatius, M., Wong, N., Martin, M., Chen, S.: Virtual singapore integration with energy simulation and canopy modelling for climate assessment. In: IOP Conference Series: Earth and Environmental Science, vol. 294, no. 1, p. 012018. IOP Publishing (2019)
7. Lee, A., Lee, K.-W., Kim, K.-H., Shin, S.-W.: A geospatial platform to manage large-scale individual mobility for an urban digital twin platform. Remote Sens. 14(3), 723 (2022)
8. Ghita, M., Siham, B., Hicham, M., Hafid, G.: Artificial and geospatial intelligence driven digital twins' architecture development against the worldwide twin crisis caused by covid-19. In: Geospatial Intelligence: Applications and Future Trends, pp. 79–104. Springer, Cham (2022)
9. Rtree: https://pypi.org/project/Rtree/. Last accessed 10 June 2023
10. Google Earth: https://earth.google.com/. Last accessed 10 Sept 2022
11. Label Studio - Annotation Tool: https://github.com/heartexlabs/label-studio. Last accessed 10 June 2023


12. Ronneberger, O., Fischer, P., Brox, T.: U-net: convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015: 18th International Conference, Munich, Germany, October 5–9, 2015, Proceedings, Part III 18, pp. 234–241. Springer (2015)
13. Shah, A., Kadam, E., Shah, H., Shinde, S., Shingade, S.: Deep residual networks with exponential linear unit. In: Proceedings of the Third International Symposium on Computer Vision and the Internet (VisionNet'16), pp. 59–65. Association for Computing Machinery, New York, NY, USA (2016). https://doi.org/10.1145/2983402.2983406
14. PlanetScope: https://earth.esa.int/eogateway/missions/planetscope. Last accessed 10 Sept 2022
15. Chicco, D.: Siamese neural networks: an overview. In: Artificial Neural Networks, pp. 73–94. Humana, New York (2021)
16. Lv, Z., Shi, W., Zhou, X., Benediktsson, J.A.: Semi-automatic system for land cover change detection using bi-temporal remote sensing images. Remote Sens. 9(11), 1112 (2017)
17. Lang, N., Jetz, W., Schindler, K., Wegner, J.D.: A high-resolution canopy height model of the earth. Nat. Ecol. Evol. 7(11), 1778–1789 (2023). https://doi.org/10.1038/s41559-023-02206-6. Epub 2023 Sep 28. PMID: 37770546; PMCID: PMC10627820
18. Palazzo, F., Šmejkalová, T., Castro-Gomez, M., Rémondière, S., Scarda, B., Bonneval, B., Gilles, C., Guzzonato, E., Mora, B.: Rus: a new expert service for sentinel users. In: Proceedings, vol. 2, no. 7 (2018). https://www.mdpi.com/2504-3900/2/7/369
19. ESDAC - European Commission: https://esdac.jrc.ec.europa.eu/. Last accessed 10 June 2023
20. Landsat Collection: 25-08-2021, Landsat7, 08 2021
21. Blue Flag: https://www.blueflag.global. Last accessed 10 June 2022
22. Google for Developers: https://developers.google.com/maps/documentation/places/web-service/overview. Last accessed 10 June 2023
23. CHIRPS Rainfall Dataset: https://developers.google.com/earth-engine/datasets/catalog/UCSB-CHG_CHIRPS_DAILY. Last accessed 10 Jan 2023
24. Guillera-Arroita, G., Lahoz-Monfort, J.J., Elith, J., Gordon, A., Kujala, H., Lentini, P.E., McCarthy, M.A., Tingley, R., Wintle, B.A.: Is my species distribution model fit for purpose? matching data and models to applications. Glob. Ecol. Biogeogr. 24(3), 276–292 (2015)

Detecting Effects on Soil Moisture with Guerilla Sensing

Johannes Hartkens, Florian Schmalriede, Marvin Banse, Dirk C. Albach, Regine Albers, Oliver Theel, and Andreas Winter

Abstract A soil moisture observation system (SMOS) is presented that supports the microclover project in determining the effects of soil cover on soil moisture. It is built as a project-specific adaptation of the environmental information system (EIS) Guerilla Sensing. We describe the adaptation process step by step to provide a blueprint for the easy use of Guerilla Sensing in similar future projects.

Keywords Environmental information system · Soil moisture measurement · Soil cover · Plant cultivation

1 Introduction

Climate change and the resulting increase in temperature and drought periods challenge mankind with rising frequency. Routine economic processes must be put on trial in the search for more resource-conserving and efficient alternatives. The nursery industry is challenged from two sides. First, the lack of predictable precipitation and water availability creates pressure to reduce water consumption. Second, increasing energy prices, and consequently more expensive fertilizers, have prompted studies on how nitrogen-fixing legumes may reduce the dependency on artificial fertilizers. Legumes have consequently been used in a number of crops as living mulch [1]. The advantages are crop-specific and depend on cultivation practices [2]. The effect of living mulch, therefore, needs to be investigated separately for any cropping system.

J. Hartkens · F. Schmalriede () · M. Banse · O. Theel · A. Winter Department for Computer Science, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected] D. C. Albach · R. Albers Department Plant Biodiversity and Evolution, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany e-mail: [email protected]; [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 V. Wohlgemuth et al. (eds.), Advances and New Trends in Environmental Informatics 2023, Progress in IS, https://doi.org/10.1007/978-3-031-46902-2_11


So far, experience with pot-based cultivation in horticultural nurseries is lacking. The project "microclover", funded by the Agricultural European Innovation Partnership "Productivity and Sustainability in agriculture", investigates how soil cover with different clover types on nursery pots may help to improve soil moisture, provide nitrogen to the soil, and even suppress weeds. Legumes are particularly suited as living mulch since they are able to fix nitrogen from the air through their symbiosis with rhizobial bacteria in root nodules. Low-growing types of clover, such as micro-varieties of white clover (Trifolium repens L.), should therefore not compete with nursery plants: they do not shade them, but complement them by providing extra nitrogen and suppressing weeds through shading. While soil nitrogen content and the biomass of the ornamentals and weeds are easily measurable, the expected effect on soil moisture is not. As detailed later in Sect. 2, multiple soil moisture measurements must be taken and documented daily over a one-year period at two different measuring points on multiple outdoor plant pots. Furthermore, measurements must not influence follow-up measurements, e.g. through changes in the conditions at a measuring point caused by the measurement methodology. In addition, important parameters such as the sampling rate of soil moisture and the required measurement accuracy have not been fixed so far. Measurement methodologies that determine soil moisture by weight, by sticking instruments in and pulling them out, or by drying samples cannot be used. They would not measure soil moisture at defined points, or they would affect subsequent measurements by changing the soil structure through compression and the incision of irrigation canals. In addition, the number of measurements throughout the day over a period of a year would result in a large, repetitive, and monotonous effort if done manually. Thus, a soil moisture observation system (SMOS) is required that automatically performs and documents soil moisture measurements at defined measuring points in pots without changing the soil structure. When developing the SMOS, it must be ensured that it can be operated in the measurement environment for a period of 1 year and that sufficient measurements are recorded to evaluate the stated hypothesis. Components of the system must be designed to be resistant to environmental influences, and mechanisms to monitor their functionality must be integrated. The former prevents downtimes and thus measurement loss, while the latter ensures that downtimes which still occur will be detected and resolved. In addition, undetermined parameters such as the sampling rate of soil moisture and the required measurement accuracy must be determined in order to ensure that measurements are performed adequately. Using an automated SMOS ensures that the microclover experiment can be performed with limited effort. Beyond the microclover project, such a system could also be reused for experiments with similar conditions: further experiments looking at effects on soil moisture could be automated with such a SMOS. Thus, the system's transferability should be taken into account as an important requirement. After introducing the objective of the microclover project in Sect. 1, the project challenges are presented in detail in Sect. 2. Related work that provides knowledge on sensor selection and the EIS Guerilla Sensing is presented in Sect. 3.


The concept for an adaptation of Guerilla Sensing to a SMOS is derived in Sect. 4. To select appropriate sensors, various sensor types are evaluated in Sect. 5, and their embedding in sensor nodes is presented in Sect. 6. In a formative evaluation in Sect. 7, outstanding parameters like the sampling rate of soil moisture and the required measurement accuracy, as well as further system parameters that offer scaling possibilities, are determined. Finally, Sect. 8 summarizes and discusses the resulting SMOS. Note that only results concerning the conceptualization and realization of the SMOS are presented here. Biological findings will be presented in a future paper, once an appropriate amount of data has been collected.

2 The Microclover Challenge

To evaluate the microclover project's hypothesis that soil cover with clover has a positive effect on the soil moisture of plants in nursery pots, the change of soil moisture in pots must be monitored and compared. A positive effect on soil moisture is defined as a slower decrease of soil moisture in covered pots. Different combinations of clover species and ornamental plant species could lead to effects of different magnitudes. Interfering factors, such as differences between individual plants, local soil properties, environmental situations, and measuring errors, must be taken into account in order to relate the effect to soil cover with clover. Detecting changes of soil moisture is achieved by repetitive, time-shifted measurements. Measurements must be taken at least twice between two water inflow phases to determine falling soil moisture without influences of water inflow. To investigate variations in the magnitude of effects likely to occur depending on the combination of plant species and clover species, microclover considers two different ornamental plant species in combination with two different clover species. Additionally, as a reference, each plant species is monitored without soil cover and with non-living mulch as cover. Taken together, eight different treatments are considered. In order to exclude effects of individual plants as far as possible, each treatment is tested on six individuals, leading to a setup with 48 pots. Local properties of soil affect measurements due to different positions and compression. Just below the surface, soil is more likely to be warmed by sunlight, so water is more likely to evaporate. In addition, gravity acts towards the bottom of pots, which causes more water to accumulate there. To account for local effects, two measurement points are selected, one near the top and one near the bottom of a pot, resulting in 96 measuring points. Fixed measuring points minimize influences due to compression, as only local changes have to be considered and changes due to variation in locality can be excluded. To ensure that the project's results can be transferred to economically managed nurseries, measurements must be carried out under real conditions. Changing temperatures, solar activity, rainfall, day and night cycles, and seasonal changes affect soil moisture. Therefore, soil moisture measurements have to be performed multiple times a day over a period of 1 year on ornamental plants cared for outdoors at a nursery.


Due to the project duration and the multiple measurements a day, it is likely that all relevant environmental factors will occur multiple times and can be considered accordingly. The care of the ornamental plants by a nursery allows the results to be applied to real situations. However, it must be noted that in this way uncontrolled water inflow may occur, for example due to rain. Depending on the selected measuring equipment, errors occur during each measurement. Therefore, suitable measuring devices must be selected according to the requirements of the measurement project, and potential errors must be taken into account during evaluation. In order to measure the desired effect in the microclover project, the measurement error must be smaller than the smallest difference of soil moisture measurements to be considered. Only then can effects caused by measurement errors be excluded. However, if repeated measurements of adequate quantity show that soil moisture decreases more slowly over time in clover-covered plants compared to control groups, the hypothesized effect can still be concluded based on a large number of observations: it is unlikely that measurement errors will occur repeatedly in sufficient quantity to support the hypothesis. In summary, soil moisture must be measured several times a day over a period of 1 year in a realistic environment, without affecting follow-up measurements, at 96 measuring points, and documented for evaluation in a manner separable by measuring point and time. Thereby, the exact sampling rate of soil moisture and the required measurement accuracy are undetermined so far.
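The design arithmetic above can be reproduced in a few lines; the sketch below enumerates the treatments, pots, and measuring points. The treatment labels are placeholders, since the concrete species are not named here.

# Sketch of the experimental design: 2 ornamental species x 4 covers,
# 6 replicates per treatment, 2 sensor depths per pot. Labels are invented.
from itertools import product

plants = ["ornamental A", "ornamental B"]
covers = ["microclover 1", "microclover 2", "no cover", "non-living mulch"]
replicates = 6
depths = ["upper", "lower"]

treatments = list(product(plants, covers))
pots = [(p, c, r) for (p, c) in treatments for r in range(replicates)]
points = [(p, c, r, d) for (p, c, r) in pots for d in depths]

print(len(treatments), len(pots), len(points))  # 8 48 96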

3 Related Work

As the effects of soil cover with clover on soil moisture are to be determined, it is necessary to measure the changes in soil moisture over time. Other related projects are or have been dedicated to similar challenges.

Predictive Plant Production The project Predictive Plant Production tries to optimize environmental parameters like temperature, soil moisture, and soil conductivity to match plants' needs [3]. In this way, the project aims to increase plant growth while using only as many resources as necessary. A set of sensors is used to learn a model of plant growth and weather conditions. The model tries to predict the process of plant development and to ease production scheduling. As part of the Predictive Plant Production project, a wide variety of sensors from different manufacturers were compared and evaluated using a standardized procedure. Uniform soil is completely dried and then mixed with an amount of water that is tailored to the volume of the soil. The moistened soil is then filled into containers and compressed using a standardized process. Accordingly, the soil moisture of the prepared soil is approximately known. The soil moisture of the soil in the containers is then measured with different sensors, and the difference between the approximately known and the measured soil moisture is used to evaluate the sensors.


The results vary across sensors, as some delivered more accurate and repeatable results than others; the sensors also differ in cost and durability. This suggests that sensors must be chosen project-specifically. Therefore, a study is performed in Sect. 5 to make a microclover-specific sensor selection.

Guerilla Sensing In order to tackle the microclover challenge and, at the same time, support the intended sensor study with project-specific experiments, an adaptable EIS is needed. Here, the Guerilla Sensing system [4] offers a solution. It is developed in an extensible way and allows automatic sensing and documentation of different environmental factors, which can be accessed via a web-based application. Sensor drivers can be defined in the Guerilla Sensing platform, and firmware can be configured for situation-specific assembled sensor nodes, called G-Boxes. Various measurement projects, such as monitoring air quality near schools, radioactivity near Castor transports, and soil moisture in forests,1 have already been realized with Guerilla Sensing. Due to the high degree of adaptability, a universal G-Box to which different soil moisture sensors can be connected is realized for the intended sensor study in Sect. 5.2. Sensor drivers and parts of the universal G-Box are adapted to a microclover G-Box in Sect. 6. With the web-based application, the measurements collected in the sensor experiments as well as in the microclover experiment can be remotely accessed at any time. Accordingly, missing measurements are noticed without traveling on site. Due to the reuse of components and the generic approach of Guerilla Sensing, the effort to realize the required SMOS can be kept low. The instantiation of Guerilla Sensing presented here shows its applicability to the problem at hand. For a detailed, general view on the system's architecture, please refer to the paper "Environmental wellbeing through guerilla sensing" [4].
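The extensibility mentioned above, where sensor drivers are defined in the platform (and, as Sect. 4 details, implemented by extending a base class), can be pictured with the following hedged Python sketch. Class, method, and registry names are illustrative assumptions, not Guerilla Sensing's actual API.

# Hedged sketch of the driver pattern: extend a base class, register the driver.
from abc import ABC, abstractmethod

class SensorDriver(ABC):
    """Minimal shape a project-specific sensor driver could have."""

    @abstractmethod
    def read(self) -> float:
        """Request one measurement from the attached sensor."""

class SoilMoistureDriver(SensorDriver):
    unit = "% volumetric water content"

    def read(self) -> float:
        # A real driver would talk to the hardware (e.g. via RS485);
        # a fixed value stands in for that here.
        return 23.4

DRIVER_REGISTRY = {}

def register(name, driver_cls):
    DRIVER_REGISTRY[name] = driver_cls  # analogous to registering in the G-Platform

register("smt100", SoilMoistureDriver)
print(DRIVER_REGISTRY["smt100"]().read())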

4 Guerilla Sensing as Soil Moisture Observation System

With its three types of subsystems, G-Client, G-Platform, and G-Box, Guerilla Sensing provides the functionalities and features required to realize a SMOS for detecting effects of soil cover on soil moisture. With the web-based G-Client, persisted measurements from multiple sensors can be screened simultaneously, visualized in line graphs, and exported to a file of comma-separated values. Measurements can thus be viewed at any time and further processed with a variety of tools. The G-Platform persists measurements centrally and provides interfaces through which G-Clients can query measurements and G-Boxes can persist measurements. G-Boxes periodically request measurements from connected sensors and transfer them to the G-Platform. They can be designed project-specifically, and the firmware for individual G-Boxes can be generated using the configurator in the G-Client via the G-Platform. The only requirement for a G-Box is that it supports the Arduino framework.

1. https://www.guerilla-sensing.de/campaign/7.

Fig. 1 Instances of Guerilla Sensing subsystems in order to observe effects in soil moisture (k G-Client instances connect to the central G-Platform, which is connected to n G-Box instances with m_1, ..., m_n attached sensor instances)

In order to make project-specific sensors available via the configurator, their drivers must be implemented by extending a base class and registered in the G-Platform. As shown in Fig. 1, k instances of the G-Client can access the G-Platform. This allows stakeholders to access measurements simultaneously. Here, k can remain indeterminate, as the number of instances automatically adjusts to the requests of stakeholders. Stakeholders retrieve G-Client instances from the web server of the Guerilla Sensing system via a web browser. The number of G-Box instances is also scalable with the Guerilla Sensing system; Guerilla Sensing does not require that the number of G-Box instances n is predefined. Moreover, m_i sensor instances can be attached to a G-Box instance. The G-Box-specific number of sensor instances m_i must be known before the firmware is configured, but can easily be changed by configuring a new firmware. By adjusting n and m_i, it is possible to scale between the cost of G-Boxes and the handleability of G-Boxes. When there are many G-Box instances, each connected to only a few sensor instances, the cost increases due to the large number of G-Box instances, while the handleability also increases, since only a few sensor instances are connected to each G-Box instance. If, however, few G-Box instances are used, each connected to many sensor instances, the cost decreases due to the small number of G-Box instances, but the handleability also decreases, since many sensor instances are connected to one G-Box instance. For the microclover project, 96 soil moisture sensor instances are needed; the number of G-Box instances n and the numbers of sensor instances m_i need to be scaled accordingly. The design of sensors and G-Boxes, as well as the number of G-Box instances n and the number of sensor instances per G-Box m_i, have to be determined project-specifically, whereby the decisions affect each other. In order to connect G-Boxes and sensors, G-Boxes must provide physical interfaces for sensors in an adequate number m_i. Which interfaces exactly are needed depends on the choice of sensors. Only when sensors and G-Boxes are connected can experience with the handleability be gained. Thus, m_i and n can be estimated in advance but have to be evaluated. To solve this problem, suitable sensors are first selected in Sect. 5, and in Sect. 6 G-Boxes are designed to handle more than the estimated number of sensor instances. In Sect. 7 the handleability is evaluated.
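The trade-off between n and m_i described above can be made concrete with a short calculation. Only the requirement of 96 sensors is taken from the text; the per-unit costs below are invented placeholders.

# Sketch of the cost/handleability trade-off for 96 required sensors.
from math import ceil

SENSORS_NEEDED = 96
GBOX_COST, SENSOR_COST = 60.0, 120.0  # assumed unit costs in EUR

for m_i in (2, 4, 6, 12):
    n = ceil(SENSORS_NEEDED / m_i)
    total = n * GBOX_COST + SENSORS_NEEDED * SENSOR_COST
    print(f"m_i={m_i:2d}: n={n:2d} G-Boxes, hardware cost ~{total:.0f} EUR")

With fewer sensors per box, n (and the G-Box share of the cost) grows, while each individual box stays easier to assemble and install; Sect. 7 settles on m_i = 6.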


5 Soil Moisture Sensor Selection

In order to find suitable sensors to measure soil moisture in the microclover project, a project-specific sensor study was conducted, as highlighted in Sect. 3. This study is divided into two phases: the first phase is led by documentation, to explore and select potential sensors for the second phase, while the second phase is led by experimental results, to select a specific sensor. By reducing the number of sensors in the first phase, the effort for the experiments in the second phase was reduced.

5.1 Sensor Exploration

According to the requirements discussed in Sect. 2, the sensors' accuracy and durability have to be considered. The required accuracy is unknown, but the accuracy of the sensors used must be known so that effects resulting from measurement errors can be taken into account. Regarding durability, the sensors must be able to operate outdoors for at least 1 year, or it must be possible to replace sensors with reasonable effort. Additionally, the documentation and connectivity of the sensors are considered in the first phase in order to use this information for the experiments in the second phase. Based on the findings from the Predictive Plant Production project and the experience with soil moisture sensors in previous Guerilla Sensing projects, a relationship between sensor cost and sensor characteristics like accuracy and durability can be seen. Therefore, sensors of different price categories are considered. Here, the selection is restricted to five sensors in a range from 20 € to 120 €. Less expensive sensors are excluded based on the experience with soil moisture sensors in previous Guerilla Sensing projects regarding their durability; more expensive sensors are excluded by the limited project funds. The considered range should be sufficiently covered by the five different sensors: Xiaomi Mi Flower Care Plant Sensor (~20 €), 3-prong sensor (~25 €), Tinovi Capacitive Soil Moisture Sensor (~30 €), as well as the Trübner SMT50 (~70 €) and SMT100 (~120 €).

Xiaomi Mi Flower Care Plant Sensor This sensor is interesting for the project because of its low cost of 20 € and its communication via Bluetooth Low Energy (BLE). This would enable maximum flexibility for the setup, as no cables are needed and a single sensor node suffices. However, this comes with severe drawbacks. Firstly, the sensor is designed for indoor use, for monitoring soil moisture, light irradiation, and fertilizer levels in the soil of house plants [5]. As a result, every single sensor would have to be adapted with epoxy or similar materials to allow outdoor use. Also, the documentation is incomplete, so no data on the technology used or the accuracy can be found, which makes intensive testing necessary.

3-Prong Sensor At about 25 €, the 3-prong sensor is a relatively inexpensive sensor that measures soil moisture via the conductivity between the prongs.


The accuracy of this sensor has a deviation of about ±3–5% (with higher deviation at higher moisture levels) [6]. The body of the sensor is made of epoxy, which should withstand the conditions. The prongs, however, are made of bare metal, which calls the long-term durability into question. This sensor supports analog and RS485 communication. Unfortunately, the documentation of the sensor is limited to product descriptions.

Tinovi Capacitive Soil Moisture Sensor The Tinovi Capacitive Soil Moisture Sensor is a dust- and waterproof sensor that measures soil moisture via capacitance. No bare metal needs to be exposed for this measuring principle. It costs about 30 € and has an accuracy deviation of around ±5% [7]. The sensor supports I2C communication, but the cable length is very limited for I2C, which can be a problem depending on the distance between the pots. Documentation is provided by the manufacturer.

Trübner SMT50 The Trübner SMT50 is a more expensive option, priced at around 70 €, but according to its documentation it provides better quality data than the previous sensors, with a deviation of ±3% [8]. Its measurement range, however, is limited to a volumetric water content of 50% or less. The capacitive sensor is dust- and waterproof, as it was made for outdoor use. The sensor is compatible with an analog communication protocol, and documentation is provided by the manufacturer.

Trübner SMT100 The Trübner SMT100 is the most expensive option, priced at around 120 €, but it provides good accuracy with a deviation of ±2% (up to 50% volumetric water content) and ±3% (up to 100% volumetric water content) [9]. The capacitive sensor is dust- and waterproof and is compatible with the analog, RS485, and SDI-12 communication protocols. Documentation is provided by the manufacturer. Furthermore, the SMT100 has already been used successfully in comparable scientific projects [10, 11].

The 3-prong sensor and the Xiaomi Mi Flower Care Plant Sensor are inadequately documented, and their long-term performance under the conditions of the measurement environment is questionable. Extensive arrangements would have to be made to operate these sensors as required. Alternatively, replacing sensors in case of misbehavior would be imaginable due to the low prices, but the effect of operation in unsuitable environments on accuracy is unknown. This would result in time- and money-consuming pre-experiments which the project budget does not cover. Consequently, these two sensors are not considered any further. In contrast, the Tinovi Capacitive Soil Moisture Sensor as well as the Trübner sensors SMT50 and SMT100 are well documented and built for outdoor use. It should be possible to operate all three sensors in the measurement environment without adjustments. The accuracy of all three sensors is well documented; accordingly, it can be considered when analyzing measurements, without determining the accuracy through our own experiments. Based on these findings, these three sensors are candidates for further investigation in the subsequent second phase.


5.2 Sensor Experiments

The remaining sensors were evaluated in more detail in three experiments to determine their fitness for the microclover project. In the first experiment, it is determined whether the sensors work correctly and provide plausible values. The second experiment considers the continuity of the sensors' measurements over an extended period of time. Experiment three compares the sensors to each other regarding their response to changes in soil moisture. In order to perform the experiments with the sensors efficiently, a universal G-Box was realized to which all sensors in question can be connected. The G-Box requests measurements from all sensors and sends them to the microclover G-Platform for later analysis. The developed drivers are accessible via the G-Platform and therefore usable via the configurator in the further course of the project. Two samples of each sensor in question were used in each experiment to detect possible issues with individual samples. Here, the RS485 communication option was chosen for the SMT100: with an analog connection, measurements change depending on cable length due to different resistance values, and while communication via SDI-12 would be suitable in principle as well, RS485 has already been used in combination with G-Boxes, so corresponding experience is available. Together, the universal G-Box requires at least two analog connections (Trübner SMT50), two I2C connections (Tinovi Capacitive Soil Moisture Sensor), and two RS485 connections (Trübner SMT100). The universal G-Box is composed of an individual circuit board and off-the-shelf components, which can also be used to build the microclover G-Box. In order to avoid influences caused by faulty contacts between the used components, which can easily occur with breadboards, for example, a circuit board was designed and produced for the universal G-Box. The microclover measurements should also be protected against influences caused by faulty contacts. Therefore, during the development of the universal G-Box circuit board, care was taken to ensure that parts can be adapted to other circuit boards. Thus, a microclover-specific circuit board can easily be realized by reusing parts of the universal G-Box circuit board. In addition, the board was designed to allow easy replacement of damaged components. For this purpose, pin header sockets have been placed on the board; accordingly, components were chosen that offer pin header connections. Furthermore, JST sockets were placed on the board to expose the communication interfaces for the sensors externally. This way, sensors can be connected simply and stably. The firmware for G-Boxes requires support of the Arduino framework, which is given with ESP32 microcontrollers. ESP32 microcontrollers can establish WiFi connections, providing a wireless way to connect to systems such as the G-Platform via an appropriate network structure. ESP32 developer boards, like the Firebeetle, offer the possibility to program the integrated ESP32 microcontroller easily via a USB connection and are mountable via pin headers. Additionally, the 3.3 V output and the 5 V output of the developer boards can be used to power the remaining components, while the developer boards are powered via USB.


The Firebeetle has already been used successfully in several Guerilla Sensing projects and does not differ much from other ESP32 developer boards in terms of cost. Therefore, the Firebeetle ESP32 board is used here. The Firebeetle has an analog-to-digital converter with 8-bit resolution. However, in order to capture soil moisture readings of the Trübner SMT50, which provides them as an analog signal with 8-bit resolved voltages, the resolution should be doubled to 16 bit due to synchronization issues. Accordingly, an external analog-to-digital converter with 16-bit resolution is used. An ADS1115 board meets this criterion and, with four analog inputs, can operate at least two SMT50 sensors simultaneously. The board can be connected by pin headers, offers I2C communication, which is natively supported by the Firebeetle, and has been used successfully in Guerilla Sensing projects several times. Accordingly, an ADS1115 board was chosen to connect the Trübner SMT50 sensors. Connections via RS485 are not natively supported by the Firebeetle, but with a UART TTL to RS485 board the UART interface of the Firebeetle can be used to connect RS485 components. Such a board also offers the possibility to be connected via pin headers. Since several components can be connected to one bus at the same time via RS485, one UART TTL to RS485 board is sufficient here to connect at least two Trübner SMT100 sensors. As mentioned, the Firebeetle supports I2C communication. I2C is designed as a bus to which multiple components can be connected. Thus, at least two Tinovi Capacitive Soil Moisture sensors can be connected directly to the Firebeetle. However, I2C is only designed for short-distance communication. Thus, according to the manufacturer's recommendation [7], pull-up resistors have been placed on the board at the appropriate lines to reduce interference at longer distances. Two samples of each selected sensor can be connected to the universal G-Box via plugs. Also, the firmware of the G-Box can easily be transferred via USB, enabling communication with the sensors and thus making the following experiments possible.
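A hedged, host-side sketch of the SMT50 read-out chain just described: a raw ADS1115 sample is converted to a voltage and then to a volumetric water content. The linear mapping of 0-3 V onto 0-50% VWC is our reading of the SMT50's documented range [8] and should be checked against the manual; the raw value and the full-scale setting are assumptions.

# Raw ADS1115 reading -> voltage -> volumetric water content (SMT50).
ADS1115_LSB_VOLTS = 4.096 / 32768  # at the +/-4.096 V full-scale setting, 16 bit

def smt50_moisture(raw_adc):
    volts = raw_adc * ADS1115_LSB_VOLTS
    return volts * 50.0 / 3.0  # assumed: 3 V corresponds to 50 % VWC [8]

print(f"{smt50_moisture(14000):.1f} % VWC")  # ~29.2 % for a 1.75 V reading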

Experiment 1 The functionality of the sensors with the universal G-Box was verified under known soil moisture conditions. Measurements were taken with the sensors either in air or fully submerged in water. A soil moisture of 0% should correspond to the former and a soil moisture of 100% (or 50% for the SMT50) to the latter. All six sensor samples provided corresponding results and thus showed plausible values for the edge cases. Additionally, the sensors work with the universal G-Box.

Experiment 2 In the microclover project, soil moisture must be monitored over a period of 1 year. If possible, measurements from the sensors should be provided whenever they are requested; otherwise, it may not be possible to collect sufficient measurements to confirm or reject the microclover hypothesis. Therefore, the continuity of the sensors was verified by taking measurements with the universal G-Box every 5 minutes over a week in a pot with soil. The measurement frequency of 5 minutes, together with the measurement period of 1 week, should be sufficient to draw conclusions about longer operation. The SMT50 as well as the SMT100 sensors delivered a measurement for every measurement request. However, the Tinovi sensors proved fairly unreliable. Multiple times, after previously establishing a functioning communication, the sensors failed to transmit their measurements. This was attributed to I2C, which is optimized for shorter distances than what was intended for this component, which is delivered with a four-meter cable. Adjustment of the pull-up resistor improved this behavior, but did not completely eliminate the interference. The Tinovi sensor cable lengths could have been shortened at the expense of sensor deployment flexibility, but interference in the project environment will be greater than in this experiment. Based on this behavior and the limited time for further stabilization actions, the Tinovi sensor was not considered any further.

Experiment 3 Since changes in soil moisture over time must be considered, the behavior of the SMT50 and the SMT100 sensors was observed in this regard. The sensors were placed in pots with soil, and measurements were performed every minute. A comparatively high frequency of one measurement per minute was chosen to capture occurring effects. The pots were irrigated, until water drained from them, at varying time points. Soil moisture was expected to increase shortly after irrigation and then decrease until the next irrigation. Different intervals were chosen to look at the effect over different time spans: the soil was irrigated on the first, second, fourth, fifth, sixth, seventh, 11th, and 14th day. The pots were located indoors and thus exposed to only minor influences such as small temperature variations. Since there was no water consumer and the temperature was moderate, rather slow decreases in soil moisture were expected. If such decreases can be detected, it is likely that decreases with consumers and higher temperatures can also be detected. From the Predictive Plant Production project it is known that the compression of soil influences measurements. Accordingly, the soil for all pots was initially moistened and mixed together in a container. Then, the soil was evenly distributed across the pots, with the sensors placed in the center. The soil was loosely layered and gently pressed down after filling. However, irrigation of the soil was expected to change its compression and thus influence measurements, for example by increasing soil compression and thereby changing the contact surface between sensor and soil. In weekly interpretations of the recorded measurements by biologists, the running time of the experiment was adjusted; finally, the experiment was run for 22 days. After each irrigation, there is a spike in soil moisture (Fig. 2), which subsequently drops back. At the beginning of the experiment, the spikes are more intense, and the drop to a lower level occurs more quickly. Measurements from the SMT50 show greater spikes and quicker drops compared to the SMT100. At the beginning of the experiment, there was in some cases an increase of the measurements between two irrigations after the lowest level had been reached: in early measurements, small increases in soil moisture occurred just before the next irrigation. This effect was stronger for the SMT50 and disappeared with the irrigation on 11/01, as shown in detail in Fig. 3. The measurements of all sensors remain stable or decrease over the remaining runtime.

Fig. 2 Experiment 3 measurements overview (soil moisture over time for two SMT50 and two SMT100 samples, 10/19–11/10)

Fig. 3 Experiment 3 measurements after compression (detail view of the same sensors, 10/29–11/10)

Compared to the measurements of the SMT100, the strong spikes in the measurements of the SMT50 show that it is more sensitive to irrigation. In addition, the measurements of the SMT50 drop very quickly to a low level, which can hardly be explained by evaporation. In contrast, the measurements of the SMT100 increasingly behaved as expected as the experiment went on. Between irrigations, the SMT50 and SMT100 show fluctuations, as expected with measurement errors. The measurements from the SMT100 are more stable than the measurements from the SMT50. This can be attributed, at least in part, to the fact that the SMT100 takes multiple measurements per measurement request and fuses them to provide one stabilized measurement. The increased measurements between two irrigations up to the irrigation on 11/01 are interpreted as compression of the soil through irrigation. Accordingly, only measurements after 11/01 are considered in the selection of sensors.
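The stabilization mechanism attributed to the SMT100 above (fusing several readings per measurement request into one value) can be sketched as follows. The median is one plausible fusion rule; the actual sensor firmware may use a different one.

# Sketch: fuse k raw readings into one stabilized measurement.
import random
from statistics import median

def fused_measurement(read_raw, k=5):
    """Take k raw readings and return their median as one stabilized value."""
    return median(read_raw() for _ in range(k))

# Example with noisy fake readings around 24 % VWC:
print(fused_measurement(lambda: 24.0 + random.gauss(0, 0.8)))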


The SMT50 is not suitable for the microclover project due to the high spikes in its measurements, their rapid drop, and the hardly noticeable changes between irrigations: evaporation effects are barely reflected in its measurements. In contrast, the SMT100 provides traceable measurements that show evaporation. Although these sensors are more expensive, they are the only ones considered here that are suitable for the microclover project. Accordingly, SMT100 sensors are used to monitor soil moisture over time.

6 Microclover G-Boxes Setup

As described in detail in Sect. 2, the ornamental plants to be observed are placed and maintained outdoors in a nursery. Accordingly, the G-Boxes must be protected against weather conditions and irrigation. Additionally, only one fixed power source and no Internet connection are present in the intended area. The former can be used as a power source for the G-Boxes, but must be instrumented so that multiple G-Boxes can be operated; the latter must be established to allow connections to the G-Platform. The G-Boxes and the infrastructure needed for them must be developed to match the conditions of the microclover project. The G-Boxes are built to accommodate SMT100 sensors, according to the experience gained in the preliminary experiments. A microclover G-Box consists of an individual circuit board, a screw terminal, several JST sockets, several pin headers, a Firebeetle ESP32, and a UART TTL to RS485 board. As with the universal G-Box, the individual circuit board offers advantages in terms of a stable interconnection of components and sensors. SMT100 sensors are connected via JST sockets, which gives flexibility and stable connections. The microclover G-Box offers 12 sockets for SMT100 sensors; 12 sockets should be enough to cover the eligible numbers of sensor instances m_i. The cost increased only negligibly for the high number of connections: connections that are not used do not have to be equipped with sockets, so only the low costs for a possibly larger circuit board are incurred. Pin headers are used to connect the Firebeetle and the UART TTL to RS485 board to the circuit board in a stable but still replaceable way. Screw terminals on the circuit board provide a central connection for power. In order to ensure that the microclover G-Box has the required weather resistance and is protected against irrigation, the circuit boards are installed in IP66-rated housings. To connect sensors and power sources, IP66 cable glands are mounted to the housings. Thus, all components are protected against dust and strong water streams. The exact number of cable glands for sensor instances m_i is determined in the context of the parameterization in Sect. 7. While setting up the infrastructure, attention must be paid to the conditions of the measurement environment so that the G-Boxes can be operated at least over the duration of the project. It should also be noted that the number of G-Box instances n is designed to be variable, allowing to scale between cost and handleability in the following. For this reason, the power supply and the Internet connection are set up in a tree-like structure starting from a central distribution box with IP66 protection class.


Fig. 4 Microclover SMOS evaluation and parameterization experiment setup

Components, like 5 V power adapters and a mobile data router, are placed in the distribution box in a weatherproof and irrigation-proof manner. IP66 cable glands are used to lay cable lines to the G-Boxes. Via WiFi, the G-Boxes can establish connections to the router and thus to the Internet.

7 Microclover SMOS Evaluation and Parameterization

In order to evaluate the ability of the SMOS to support the microclover experiment and to determine the outstanding parameters, a realistic experiment was performed. For this purpose, a microclover G-Box with six SMT100 sensors was used to detect changes in soil moisture at the surface between the roots of the plants and at the bottom of the pots, between 29th January 2023 and 28th February 2023, for three pots planted with rhododendrons (Fig. 4). Compared to experiment 3 in Sect. 5.2, there are water consumers and two SMT100 sensors in each pot, as in the microclover project. Observing changes across three individual plants gives a first impression of the influence of individuals. The duration of the experiment was set at more than a month so that drops in soil moisture can be observed several times and various influences, such as compression due to irrigation, can be registered. This should allow conclusions about the required measurement accuracy and the sampling rate of soil moisture. Additionally, the setup allows the handleability to be evaluated for a fixed chosen number of sensor instances m_i. The initial number of sensor instances m_i was set to six, because the handling appeared subjectively acceptable. The plants were treated as normal plants by gardeners in the greenhouse of the botanical garden of the University of Oldenburg. The care of the plants indoors differs from the microclover project environment, but the G-Boxes and sensors still need to be protected against irrigation; accordingly, initial conclusions regarding irrigation resistance can be drawn.

Fig. 5 Decrease between first and second irrigation (soil moisture over time for the upper and lower sensors of pots 1, 9, and 10, 02/02–03/09)

Before starting the experiment, the plants were irrigated in order to achieve the compression by irrigation known from experiment 3 and the Predictive Plant Production project. This preliminary compression irrigation was monitored with a microclover G-Box and showed no noticeable effect after seven irrigations. Based on this preparation, soil moisture is expected to increase rapidly after irrigation and then decrease slowly over time due to evaporation and water consumption. In the experiment, the five irrigations performed during the period were recorded on 30th January, 9th February, 14th February, 22nd February, and 28th February. The soil moisture measurements for these are shown in Fig. 5 and can be accessed via Guerilla Sensing.2 After most irrigations, measurements rise rapidly and then slowly decrease to a lower level over time. Different local maxima and minima are reached between pots. The effect is more pronounced for the sensors at the bottom of the pots. Recorded decreases in soil moisture ranged from about 0.5% to 16% in the lower region of the pots. Measurements from the sensors in the upper area of the pots showed a slight increase between the second irrigation on 9th February and the fourth irrigation on 22nd February; the maximum drop in measurements at the top of the pots is about 1%. In some cases, such as after the second irrigation on 10th February, after the fourth irrigation on 24th February, and after the fifth irrigation on 6th March, the measurements of the sensors in the lower part of the pots drop abruptly by a maximum of 2.5%. The sampling rate of 1 minute was not met in all cases. The different maxima and minima are explained by the gardeners irrigating according to their own judgment; therefore, there are also different intervals between irrigations. This irrigation behavior will not affect the microclover project, because the focus is on the decrease of soil moisture and not on absolute soil moisture levels.

2. https://www.guerilla-sensing.de/measured-values?gboxes=[588]&from=1670949919000&to=1682696719000.


The more pronounced effect in the measurements of the sensors in the lower area of the pots is contrary to the hypothesis made in Sect. 2 that water evaporates more strongly in the upper area of the pots. The positioning of the sensors between the roots probably has an effect here. Plants absorb water from the soil via their roots; accordingly, sensors between roots are continuously surrounded by water in the roots and show relatively constant or even increasing soil moisture levels. In contrast, the water in the lower part of the pots is removed by runoff, water consumption, and evaporation. Root growth over the microclover project period may cause the sensors in the lower area of the pots to become surrounded by roots as well. Further experiments need to be considered in which biologists can examine the measurements in more detail and draw conclusions. In most cases, the decrease in soil moisture at the bottom of the pots, given the difference in levels, can be attributed to drainage, evaporation, and water consumption. The differences are more than double the measurement accuracy stated by the manufacturer for the SMT100. Other cases show smaller differences and are plausible, but would need to be observed more frequently to make measurement errors an unlikely cause. This experiment was performed at low temperatures; accordingly, evaporation effects should be lower than in summer. With enough observations, the measurement accuracy of the SMT100 sensors should be sufficient to detect the considered effect. Thereby, the effect can be made more pronounced by adjusting the delay between irrigations. The suddenly occurring drops stay within the measurement accuracy of the SMT100 sensors and could be explained by measurement errors. However, it is noticeable that the drops occur at the same time as missing measurements. The missing measurements could be related to interference in the Internet connection or to unreachable sensors. Between individual measurements, the microclover G-Box switches to an idle mode in which the power supply of all sensors is interrupted to save energy. During previous experiments this did not lead to any disturbances, but in a few cases issues could occur as a result. It is therefore necessary to investigate whether the drops are related to the missing measurements and to clarify the cause of both the drops and the missing measurements. A positive aspect is that the remotely accessible measurements made it possible to detect missing measurements without traveling to the site. The recorded soil moisture decreases are very slow, and therefore a lower sampling rate can be assumed to be sufficient for the microclover project; thus, costs and energy can be saved during data transmission and measurements. It should be noted, however, that rain in the outdoor area can cause sudden water inflow. With a sampling rate of one measurement per hour, enough of the decrease in soil moisture should be observable, even if sudden water inflows occur. The initial decision to set the number of sensor instances m_i to six proved to be practical. Installing the G-Box with the sensors went smoothly. Therefore, the number of sensors does not need to be reduced; but it should also not be increased, because the assembly and installation of the G-Boxes would subsequently become disproportionately more difficult. Therefore, all m_i are set to six. Corresponding to the 96 required sensors, this results in 16 G-Boxes.
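The "with enough observations" argument above can be formalized as a one-sided sign test: if covered pots dry more slowly than their controls in k out of n observed drying intervals, the probability of such a result under pure measurement noise is binomial. The counts in this sketch are invented for illustration; it requires SciPy 1.7 or newer.

# Sign test: how unlikely is the observed count under 'no effect'?
from scipy.stats import binomtest

n_intervals = 40  # drying intervals observed over the project (assumed)
k_slower = 31     # intervals in which covered pots dried more slowly (assumed)

result = binomtest(k_slower, n_intervals, p=0.5, alternative="greater")
print(f"p-value under 'no effect': {result.pvalue:.4f}")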


8 Conclusion

The developed SMOS fulfills the requirements almost completely. Soil moisture measurements can be performed and documented automatically several times a day over a long period of time. The documented measurements can be accessed remotely; evaluation and monitoring of the measurements can thus be done without traveling to the site. Furthermore, reasonable parameter assignments for the required measurement accuracy and the sampling rate of soil moisture were found. By scaling the experiment period to the intended period of the microclover project, more measurements than needed were successfully obtained, so the occurrences of missing measurements in the experiments do not carry weight. The occurrences of missing measurements can be attributed to the idle mode of the G-Boxes or to unstable Internet connections; both can be addressed in future projects by slight adjustments and extensions of the G-Box firmware. Guerilla Sensing was used across all phases due to its adaptability and expandability. Functionalities such as the remotely accessible documentation, visualisation, and export of measurements were used without adjustments. Components developed for the initial experiments were reused to a large extent in the further course and thus reduced the development effort. Future projects will also benefit from the components that have been implemented. Further activities are planned to extend the visualization capabilities and to implement notifications in case of missing measurements. The line charts in Guerilla Sensing fully visualize all required information, but an aggregated display of project-relevant information would be clearer; for example, the average drop in soil moisture could be visualized grouped by treatment. This would allow the biologists to see the current state of their experiment at a glance. Notifications about missing measurements will further reduce the effort required to detect them; active monitoring will no longer be necessary. In summary, based on Guerilla Sensing, a SMOS that allows detecting the effects of soil cover with clover on soil moisture was developed. The described adaptation activities can be used as a blueprint for projects like the microclover project. Whether the SMOS will prove as successful as expected remains to be seen: in approximately 1 year, the microclover project will be completed and final conclusions can be made.
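The planned notification mechanism could look like the following hedged sketch, which flags sensors whose latest persisted measurement is older than a threshold. The identifiers, timestamps, and the notify() stub are placeholders rather than Guerilla Sensing's actual API.

# Flag sensors that have been silent for too long.
from datetime import datetime, timedelta

last_seen = {  # sensor id -> timestamp of the last persisted measurement
    "pot01-upper": datetime(2023, 2, 28, 11, 0),
    "pot01-lower": datetime(2023, 2, 28, 6, 0),
}

def notify(sensor_id, age):
    print(f"ALERT: {sensor_id} silent for {age}")

def check(now, max_age=timedelta(hours=2)):
    for sensor_id, ts in last_seen.items():
        if now - ts > max_age:
            notify(sensor_id, now - ts)

check(datetime(2023, 2, 28, 12, 0))  # flags pot01-lower only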

References

1. Hartwig, N.L., Ammon, H.U.: Cover crops and living mulches. Weed Sci. 50(6), 688–699 (2002)
2. Feil, B., Liedgens, M.: Pflanzenproduktion in lebenden Mulchen – eine Übersicht. Pflanzenbauwissenschaften 5(1), 15–23 (2001)
3. OFFIS: Predictive plant production – Projektbeschreibung (2023). https://predictive-plant-production.de/projektbeschreibung/. Accessed 10 May 2023


4. Banse, M., Schmalriede, F., Theel, O., Winter, A.: Environmental wellbeing through guerilla sensing. In: INFORMATIK 2021, pp. 57–66. Gesellschaft für Informatik e.V. (GI), Bonn (2021)
5. Beijing HuaHuaCaoCao Technology: Xiaomi huahuacaocao flower plants smart monitor. https://files.miot-global.com/files/plants_monitor/Plants_monitor-EN.pdf. Accessed 17 May 2023
6. JXCT-IOT: Soil moisture measurement sensor temperature and humidity detector with 3 pin (2023). http://www.jxct-iot.com/product/showproduct.php?id=189. Accessed 17 May 2023
7. Tinovi: I2c capacitive soil moisture, temperature sensor (2019). https://tinovi.com/wp-content/uploads/2022/08/PM-WCS-3-I2C.pdf. Accessed 17 May 2023
8. Trübner: SMT50 soil moisture sensor - instruction manual (2018). https://www.truebner.de/assets/download/Anleitung_SMT50.pdf. Accessed 17 May 2023
9. Trübner: SMT100 soil moisture sensor - instruction manual (2021). https://www.truebner.de/assets/download/Anleitung_SMT100_V1.1.pdf. Accessed 17 May 2023
10. Schaffitel, A., Schuetz, T., Weiler, M.: A distributed soil moisture, temperature and infiltrometer dataset for permeable pavements and green spaces. Earth Syst. Sci. Data 12(1), 501–517 (2020)
11. Berthelin, R., Rinderer, M., Andreo, B., Baker, A., Kilian, D., Leonhardt, G., Lotz, A., Lichtenwoehrer, K., Mudarra, M., Padilla, I.Y., Pantoja Agreda, F., Rosolem, R., Vale, A., Hartmann, A.: A soil moisture monitoring network to characterize karstic recharge and evapotranspiration at five representative sites across the globe. Geosci. Instrum. Methods Data Syst. 9(1), 11–23 (2020)

Data Management of Heterogeneous Bicycle Infrastructure Data

Johannes Schering, Pascal Säfken, Jorge Marx Gómez, Kathrin Krienke, and Peter Gwiasda

Abstract Data related to traffic, and especially to cycling, is already commonly used in bicycle infrastructure planning processes. Data supports the understanding of bicycle use. What is becoming more relevant is data about the state of the bike infrastructure. In general, cycling data sources have become increasingly heterogeneous, which increases the need for suitable data management. This contribution presents the data management solution of the INFRASense research project, which aims at the quality assessment of bicycle infrastructure. As a first step, the state of the art of data applications in cycling planning is presented. The data pipeline of the research project, which considers many of these data sources, is based on a Data Lake approach in which the raw data sets are stored before being transformed individually for further data processing. The available data sources can be divided into time series and non-time series data. The related data models that allow the combination of different tables inside the database will be presented. As a last step, the contribution gives an outlook on forthcoming applications that will build on the presented data management solution (an interactive dashboard for data analysis).

Keywords Cycling infrastructure data · Time series data · Non time series data · Snowflake schema · Object-relational data model · ETL · Shapefiles · Data Lake

J. Schering () · P. Säfken · J. Marx Gómez Department of Business Informatics VLBA, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany e-mail: [email protected] K. Krienke · P. Gwiasda Planungsbüro VIA eG, Cologne, Germany © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 V. Wohlgemuth et al. (eds.), Advances and New Trends in Environmental Informatics 2023, Progress in IS, https://doi.org/10.1007/978-3-031-46902-2_12


1 Introduction More and more cycling data has become available in the past few years. Many cities are already using cycling data as part of their practical work. Many cities have conducted Bike to Work campaigns supported by smartphone applications to create an incentive to switch from car to bike more often, which should have a positive impact not only on environmental but also on health aspects [1]. As part of these campaigns quite a lot of bicycle data was gathered [2] to get more insights into bicycle use. Induction loops in the asphalt are very common to measure the number of bicycles on important roads [3]. Because the expectations of cyclists regarding bicycle infrastructure (space, increasing number of conflicts with motorized traffic) are rising [4], municipalities have started to gather more information about the bike infrastructure itself. To represent different aspects of the quality of infrastructure, such as the alignment of the main bicycle routes, surface types or the length of roads, Shapefiles are often applied in municipal administrations. These Shapefiles allow spatial data such as geographical points, lines and polygons to be visualized on a map (topography, bike path network etc.). However, as the availability of data increases, the heterogeneity of data sources also increases noticeably. Thus, the complexity of data processing and storage solutions has increased, which is reflected in the evolving structure of municipal data management. Administrations must adapt to the changing data environment by improving and implementing flexible and scalable systems that can efficiently process the growing range of data formats and sources. In this way, they can achieve the best results from the analysis of the increasingly available data, which may be the basis for a more efficient decision making and planning process. In this contribution we discuss which data sources are the state of the art in cycling planning and how they are applied in the current decision making process. The second part will provide a short overview of INFRASense and how the project contributes to a better understanding of infrastructure quality. As we will see, quite a lot of data sources in different formats are available. The availability of cycling related data sources in two exemplary cities, Oldenburg and Osnabrück, which are both well known for bicycle friendliness [5], will be presented. The data management of these data sources is based on a Data Lake approach to support the ETL process. It will be discussed which data models are suitable for the storage and processing of these heterogeneous cycling and bike infrastructure data sources. In general, we can differentiate between time series and non-time series data; different models for data processing are required. In the conclusion chapter the results will be summarized. The final paragraph provides an outlook on the applications that may become available in further working steps based on the created cycling databases. The data sources should also become available for (external) applications and users. An Application Programming Interface (API) will be the basis for a dashboard that provides Key Performance Indicators (KPIs) and dynamic maps.


2 Data Sources 2.1 Data Use in Cycling Planning The use of digital data has become increasingly important in cycling infrastructure and traffic planning. In Sects. 2.3 and 2.4 a tabular overview of the specific data sources that are used in this project will be presented. In this subchapter we provide a general description of relevant data sources that have been applied in the practical cycling planning process so far. A data source that has been used for a very long time are the accident statistics which are gathered and provided by the police departments in Germany. For a long time, this data was only available on physical maps with colored pins at the offices of the police departments, which showed the accident data less accurately than today but already classified into different types [6]. Now that this data source is available digitally with a geographical reference, the evaluation options for experts have improved significantly. Accident hotspots and accumulation lines can be recognized and evaluated with regard to the causes of the accidents. EUSKa (Elektronische Unfalltypensteckkarte) [7] is an electronic map of accident types which is used by police departments. The results can be evaluated according to the regulations of the "M Uko" (Merkblatt zur Örtlichen Unfalluntersuchung in Unfallkommissionen, Fact Sheet for the Local Investigation of Accidents in Accident Commissions), where all types of accidents are defined [8]. In total, there are seven types of bicycle accidents defined (crossing, turning, longitudinal traffic etc.) [9]. Information about how serious the injuries of the accident victims are, the involved means of transport, the time of day, brightness and weather conditions is also included in EUSKa. By examining these accident causes, potential risk factors can be identified. This working step is an integral part of every qualified bicycle infrastructure planning concept and enables planning experts to develop targeted measures to eliminate identified infrastructure problems [10]. The accident data is also used for monitoring as well as evaluating implemented infrastructure measures. By analyzing the data, it is possible to evaluate whether certain infrastructure changes or safety measures are having their intended effect. This can also be done from the opposite perspective: when an infrastructure measure is implemented, it can be useful to review its impact on road safety. Accident data can be applied in planning projects at different scales. For example, city-wide analyses can provide information on the road network, while accident data on individual intersections or routes can be used for small-scale evaluations. Traffic volume is another important input for the bicycle infrastructure planning process. In contrast to cycling data, data about motorized vehicle traffic is largely available, since many municipalities gather it as part of their traffic models. For this purpose, counting systems can be installed at junctions or traffic lights to count the number of different types of vehicles (heavy vehicles, cars). This data source plays an important role in determining the appropriate alignment of a bike path. Especially in the case of road reconstruction measures, the volume of motorized traffic is relevant to decide about feasibility.


Fig. 1 Traffic volume thresholds (congestion sectors) for the selection of bike lane alignments at inner-city two-lane roads [11] (the transitions between the congestion sectors are not strict separation lines)

Figure 1 shows different load ranges which are defined depending on the traffic volume (Kfz/h, motor vehicles per hour) and speed (km/h). The ranges I–IV do not mark hard separations of the values but indicate, depending on the extent of other decision criteria, the most suitable form of bike path alignment. For example, in ranges I and II it is acceptable to guide cycling traffic on the road. In range IV (high speed level and traffic volume), it is safer to separate bikes and cars on the street. Data on the amount of bicycle traffic is not available for the whole road network. Although many municipalities have set up bicycle counters in recent years, this network is still incomplete. An alternative approach to measure the bicycle volume is used by some cities: short-term countings can be applied, which are extrapolated afterwards and converted into an annual average. For example, the City of Munich installed permanent and short-term counting stations [12]. In general, only the most frequently used bicycle connections are equipped with a bicycle counting station. However, the existing bicycle traffic counters can also be used for monitoring (e.g. to prove an increase in bicycle traffic). Data about cycling infrastructure may provide relevant insights for the planning process. Unfortunately, these data sources are often not available. Some administrations have access to databases that provide information about the type of cycling facility. As an example, the "Mobility Dashboard" of the city of Aachen provides different graphs and tables related to mobility data; bicycle traffic is also considered [13]. But there is a lack of data that allows conclusions to be drawn about the quality of a bike path (e.g. surface type, width). Data about time losses for cyclists due to waiting times at intersections or due to a bad construction quality of bike lanes is usually missing. These quality criteria are frequently


assessed as part of cycling concepts. This information from real-life bike trips has not been available so far. An exception is the increasingly important measurement of surface quality, which, however, is mainly limited to the network of out-of-town roads. As an example, cycling infrastructure data from different municipalities is stored in a central database for the German federal state of Baden-Wuerttemberg, RadVIS (Statewide Bicycle Infrastructure System, Landesweites Radverkehrs-Infrastruktur-System) [14], in order to gain an overview of the entire cycling infrastructure in southwestern Germany. In general, information on cycling infrastructure can be recorded, processed and evaluated digitally. The data can be used for documentation and planning of bicycle networks. In the future, data on cycling traffic volumes, digital terrain models or source-destination matrices may be added for the determination, planning and coordination of bicycle connections [14]. Smartphone applications for cyclists provide new potential to get an overview of route choice in the cycling network, in particular by displaying heatmaps for the administration. There are currently a number of providers who collect such data based on trips of real cyclists. However, app usage is often limited in time or at least to specific target groups (e.g. sport-oriented cyclists) [15]. Therefore, smartphone app data is not sufficient to replace the bicycle countings that were already mentioned above [16]. The smartphone based bike trips are currently used in the planning process more as additional and even corrective information to get an impression of the bicycle volume on different bicycle routes. For example, it becomes possible to identify routes used by cyclists that planners have not yet noticed when defining a cycling network. However, smartphone app data may not be representative as it does not reflect all relevant bicycle traffic flows. One relevant example is bicycle school traffic, because schoolchildren do not usually use bicycle apps. Approaches to process cycling data have become more diverse. It is already possible to estimate the riding speed of cyclists. However, this information has not yet been used systematically and comprehensively in the planning process, especially since it is still incomplete and not reliable. Regarding specific user groups such as schoolchildren, surveys are a very frequently used instrument to gather fundamental data in the planning process. Surveys are particularly widespread in school mobility. These surveys can be integrated into projects related to school mobility or conducted as online questionnaires. Usually the children are asked how they get to school, which routes they take by bike, and how they perceive the traffic situation in the school environment. On the one hand, the information is relevant to gain an overview of the distribution of school transport in general. On the other hand, pupils provide assessments about dangerous or uncomfortable locations in the cycling network. In the case of the city of Euskirchen, a schoolchildren survey was an integral part of the mobility concept [17]. The survey data is also often used in cycling concepts in the decision process about the right bicycle infrastructure measures. In Euskirchen more than 300 pupils were asked about their mobility behavior in the context of their personal route to school. The gathered data was very useful for developing specific measures for improvement. In this context, infrastructure shortcomings reported by citizens are also an important data source that is often applied in the planning process by


different types of physical and digital participation. In addition to the data collected by planning experts and decision makers themselves, the reportings are a second important data source to identify differences between the perspective of specialists and the personal impressions of the cyclists. The frequency of a reported topic or a reported location can be an indicator of its importance for the users and can thus be incorporated into the prioritization of bicycle infrastructure measures.

2.2 The INFRASense Project The data driven research project INFRASense [18] aims to tackle some of the challenges mentioned above. As already mentioned, cycling data is becoming more and more diverse, and bike infrastructure data is increasingly often gathered and stored in municipal databases. The research project has to deal with heterogeneous cycling data and is based on the cycling planning guideline HEBRA (Hinweise zur einheitlichen Bewertung von Radverkehrsanlagen, Suggestions for the Consistent Assessment of Bike Paths) that suggests different criteria for the assessment of (a) bike paths and (b) intersections. According to the guideline, the assessment of bike paths should consider criteria such as the average travel speed, the surface quality, the number of accidents, citizen feedback or the traffic volume. Waiting and stopping times are relevant for a decision about the quality of intersections and junctions [19]. As part of the project, a data driven quality assessment of the bike infrastructure is being developed that is based on the HEBRA guideline. The software tool Bicycle Lane Quality Evaluation (BIQEmonitor) [20], which considers crowdsourcing data based on bike sensor and smartphone app data of citizens, visualizes the assessment in quality levels (A = very good to F = very bad) based on real bike trips. The involvement of real cyclists is a key aspect to create an assessment tool that reflects the user perspective and real life cycling problems. At least 1000 citizens will participate in the data collection using a specialized sensor box that measures GPS, acceleration (which allows the analysis of vibrations on the bicycle) and environmental data directly on the bike. Further information about the bike sensors can be found in Schering et al. [21]. In addition to that, different types of data analysis are conducted to support the quality assessment. Key Performance Indicators (KPIs), graphs or maps based on additional data sources (accidents, reportings, traffic volume etc.) should provide more detailed insights into the quality of the bike path network. The data analysis requires a comprehensive data management system in the background. The data management is built up based on a Data Lake architecture for the processing of bicycle infrastructure data [22]. This step requires the selection of suitable data models that allow the combination of similar data sources that are part of the heterogeneous cycling data described earlier. Depending on the data sources, these can be connected to the data management system manually or automatically by an Application Programming Interface (API). In the following


sections we will describe the available data sources, the ETL process and the suitable data models in more detail.

2.3 Time Series Data As we will see, many of the potential data sources that are mentioned in the first chapter are considered in the data management of the project. In general, all the data sources that contain a time stamp can be divided into bike sensor data, bicycle accidents, citizen reportings and countings of the traffic volume (bicycles, cars, heavy vehicles). All these time series data describe specific events related to the bike infrastructure. Data sets from the two Lower Saxony cities Oldenburg and Osnabrück are available. Most of the data sets were provided as Excel sheets or CSV files. The sensor data is part of the project internal data collection and includes bike trips from inhabitants of the two cities Oldenburg and Osnabrück. The accident data was provided by the police department. The reportings are provided by the municipal reporting systems Stadtverbesserer (Oldenburg) [23] and EMSOS (Osnabrück) [24]. The traffic data was provided by different traffic related departments of the municipalities (Table 1).

Table 1 Data sources for time series data

Data source | Format | Description
Bicycle sensor and smartphone app data | API | The bike trips include information about the trip itself (e.g. speed) but also about the state of the infrastructure (acceleration)
Bicycle accidents | Excel | The accidents include detailed information about the circumstances, participants, accident types, lighting conditions etc. of police-registered cycling accidents
Citizen reportings | API | Citizen-generated data from municipal reporting systems (infrastructure shortcomings etc.)
Bicycle volume (countings) | Excel | Based on stationary sensor systems that count the number of bicycles on a road per 15 min
Traffic volume | Excel | The traffic volume is measured temporarily and permanently at different locations in the city
Heavy vehicle volume (truck) | Excel | Differentiation of the traffic volume into cars and heavy vehicles


Table 2 Data sources for non-time series data

Data source | Format | Description
Road cadastre | Excel | Information about the roads (e.g. name, length, network hierarchy)
Cycling infrastructure | Excel | Features of the cycling infrastructure (e.g. bike path type, width, surface type)
Cycle and road network | Shapefile | Geographic lines and topographical data (slope) of the cycle and road network
Cycle routes | Shapefile | Geographic lines of cycle routes
Bus network | Shapefile/GeoJSON | Geographic lines of the bus network and additional information (e.g. bus line name, bus type)
Bus stops | Shapefile/GeoJSON | Geographic locations of the bus stops and additional information (e.g. bus stop name)
Segmentation | GeoJSON | Road segmentation (approximately 50 m segments): geographic lines of the segments and additional information (e.g. length, surface type)

2.4 Non-time Series Data The non-time series data sources (Table 2) consist of files with different information about cycling and transport infrastructure in general. The data management of the INFRASense project considers infrastructure data sources from the cities of Oldenburg and Osnabrück and the municipality of Morbach. The file formats in which the data sources are available are Excel, Shapefile and GeoJSON, depending on the kind of data provided. Each city and municipality has a different set of data regarding the existing cycling infrastructure. This finding is a result of expert interviews with several city, municipal and federal state administrations. Among the interviewees were the German cities of Bochum, Essen and Frankfurt, the provinces of Utrecht in the Netherlands and Antwerp in Belgium and the federal state of Baden-Württemberg [25]. The interview partners were chosen to gain insights into the data availability, collection and management of cycling infrastructure data in administrations with a different status in this area. The administrations of Baden-Wuerttemberg (RadVIS) [14], Utrecht [26] and Antwerp [27] have developed their own platforms for cycling infrastructure data. The two foreign administrations each have a Fietsbarometer that visualizes their data on a map. According to the involved experts from the traffic planning domain, the data collection is sometimes carried out by several (building) authorities or, as in the Netherlands and Belgium, by bicycle associations. The focus is different in each case and therefore no uniform data is collected. There is no specific standard in the area of cycling infrastructure data, since the administrations all have different databases and pursue different procedures for processing them. Sometimes they just store the data in databases or files, or are just beginning to develop systems to


process them. Cycling infrastructure data sources are generally available as Excel, CSV or Shapefiles or can be made available in one of these formats. In general, data on structural features (e.g. type of bike path, width and type of surface) and on the condition and quality of the cycling facility is collected as well as geographical data (e.g. cycling network) [25]. The cities of Oldenburg and Osnabrück each provide their own street cadastre containing information about their road network. Both street cadastres have a different scope: the Osnabrück data only contains the road names, while the Oldenburg data also includes information about length, district, postal code and cleaning of the roads etc. The cadastre of the municipality of Morbach, which is located in the German federal state of Rhineland-Palatinate, provides more detailed data about local cycling infrastructure, such as the width and type of bike paths. This data is collected according to the guidance on the standardized evaluation of cycling facilities [19]. It is not available for Oldenburg and Osnabrück. Geographical data represents another important aspect in the management of cycling infrastructure data. It describes geographic features of the cycling infrastructure and is widely used in municipal administrations. These data sources are partially available for the cities of Oldenburg and Osnabrück and consist of information about cycle and road networks, cycle routes, bus lines and bus stops. Data on bus lines and bus stops can point to obstacles for bicycle traffic and to a higher amount of traffic in general. Therefore, they may be relevant for later analyses. Many of the data sources were provided by city administrations. A smaller number of data sources which were not available in the administration were extracted from OpenStreetMap (OSM). It can be expected that the data sources from the administrations are more reliable and of higher quality. A road segmentation for both cities is also included in the project. All roads in the road network are divided into sections of around 50 m in order to locate sections of lower quality more precisely and to create better comparability. This is an important requirement for cycle traffic planning and the quality assessment [19].
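As an illustration of this segmentation step, the following minimal Python sketch splits a road geometry into consecutive sections of roughly 50 m using Shapely. The function name and the assumption of a metric coordinate reference system are ours for illustration; the chapter does not describe the project's actual segmentation implementation.

from shapely.geometry import LineString
from shapely.ops import substring

def segment_road(line: LineString, length: float = 50.0) -> list[LineString]:
    """Split a road geometry into consecutive segments of roughly
    `length` meters. Assumes a metric coordinate reference system."""
    segments = []
    start = 0.0
    while start < line.length:
        end = min(start + length, line.length)
        segments.append(substring(line, start, end))
        start = end
    return segments

# Illustrative 120 m road -> three segments of 50 m, 50 m and 20 m.
road = LineString([(0, 0), (120, 0)])
for seg in segment_road(road):
    print(round(seg.length, 1), "m")

Such fixed-length segments make quality indicators (surface, accidents, traffic volume) comparable across the whole network, as required by the guideline cited above.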

3 Data Lake and Data Pipeline This section provides an overview of how the data management of the project was built up. It is based on a Data Lake approach; as already discussed in the literature, Data Lake architectures are suitable for the processing of bicycle infrastructure data [22]. The requirements of the project partners were collected and considered in the development of the data management. In the following sections the whole data pipeline from data collection, storage and processing to visualization will be discussed and presented (Fig. 2). The first step is the collection of the relevant data sources. According to the Data Lake approach, all the data sets are stored in raw form on the project server at the University of Oldenburg. Raw data sets in this context mean that the data sources are stored as provided, without any adjustments or changes. To increase data


Fig. 2 Data pipeline based on Data Lake approach

security, the data sets are not stored in external clouds but are completely hosted on the IT infrastructure of the University. For this purpose, the open source S3-compatible (simple storage service) object storage MinIO [28] was deployed. It functions as the Data Lake that holds the raw data sets. Besides allowing data storage at the University, the main advantage of this solution is that the data sources can be accessed by the software project for further processing (e.g. as part of the Extract-Transform-Load process). As described earlier, nearly all of these data sources were provided by different departments of the city administration or the police. Besides time series and non-time series data, images of bike paths are also stored to enable surface analysis based on AI approaches. The image data was gathered by the project itself. The results of this analysis will be presented in further publications. The next step is the integration of relevant parts of the data into the data management system. For the project we implemented PostgreSQL [29] databases which are hosted by the IT service administration of the University. The GIS data has specific requirements for data storage and processing. For this use case we implemented the PostGIS [30] extension that supports geographical data formats. Figure 3 shows an example of GIS data that is stored in the database and may be visualized with the geometry viewer. The relational databases enable the combined processing of time series and non-time series data and the querying of huge data sets. The aggregation of the data sources requires several steps. At the beginning, the raw data sets that are stored in the object storage are preprocessed. Because of the heterogeneity of the data sources, each data set has to be checked for its format, type and the attributes that are important for the later work. The result of this step is the selection of relevant information that will be used in the data management. Designing a suitable schema for the data management system is also required; the data models for time series and non-time series data are presented in Sect. 4.
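The following minimal Python sketch illustrates how raw files could be staged in and later retrieved from an S3-compatible object storage such as MinIO. Endpoint, credentials, bucket and object names are placeholders, not the project's actual configuration.

from minio import Minio

# Connect to a MinIO instance (endpoint and credentials are placeholders).
client = Minio("minio.example.org:9000",
               access_key="ACCESS_KEY",
               secret_key="SECRET_KEY",
               secure=True)

BUCKET = "raw-data"  # hypothetical bucket holding the unmodified source files

if not client.bucket_exists(BUCKET):
    client.make_bucket(BUCKET)

# Store a raw data set exactly as delivered (no adjustments or changes).
client.fput_object(BUCKET, "osnabrueck/accidents_2022.xlsx",
                   "downloads/accidents_2022.xlsx")

# Later, an ETL job pulls the raw file again for transformation.
client.fget_object(BUCKET, "osnabrueck/accidents_2022.xlsx",
                   "/tmp/accidents_2022.xlsx")

Keeping the raw files immutable in the object storage allows each transformation to be repeated or revised without re-requesting data from the administrations.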


Fig. 3 Cycling network of Oldenburg (Geometry Viewer, PostGIS)

The next step is the transformation of the data sources according to the selected schema as part of the ETL process (Extract-Transform-Load) [31]. This and all other data processing steps were realized with the programming language Python [32]. The heterogeneous data sources are transformed to the structure of the schema(s). Each data source is extracted from the Data Lake and transformed one by one to make it suitable for the schemas. As soon as the transformation is complete, the data sources are loaded into the desired tables for further processing. The extract, transform and load processes have to be automated to make the data warehouse expandable for further data and potential new data sources in the future. When the database is ready, the data analysis may follow. As we will see in the next section, a lot of different tables are available, and many potential approaches for analysis related to bike infrastructure are imaginable. The goal is to create knowledge out of the different data sources. The project internal sensor data that is connected to the database by an API has the highest data volume compared to the other tables. Different methods have to be implemented to find relevant events in the bike trips, such as increased speed levels, stopping times or strong vibrations on the bike as a consequence of the surface quality. AI methods are implemented to detect damages on bike infrastructure images. All the data sources that are stored in the data warehouse according to the schemas will be further analyzed according to the goals of the research. The requirements on the analysis are provided by the bike path planning experts (Planungsbüro VIA) as part of expert interviews. CRISP-DM (Cross Industry Standard Process for Data Mining) according to Wirth and Hipp [33] is used as the process model. Query based data analysis techniques and Machine Learning algorithms are part of this work. Whenever possible (e.g. with regard to privacy), data will be published as open data to make it available for cycling experts and their practical work. For this purpose, a CKAN [34] platform was implemented.
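As an illustration of the ETL step described above, the following Python sketch loads a raw Shapefile into a PostGIS-enabled table using GeoPandas. The connection string, file, column and table names are assumptions for illustration, not the project's actual code.

import geopandas as gpd
from sqlalchemy import create_engine

# Connection string is a placeholder, not the project's real database.
engine = create_engine("postgresql://user:password@localhost:5432/infrasense")

# Extract: read a raw Shapefile (e.g. a cycle network) fetched from the Data Lake.
network = gpd.read_file("/tmp/cycle_network_oldenburg.shp")

# Transform: keep only the attributes the target schema expects and
# reproject to a common coordinate reference system (here WGS 84).
network = network[["name", "geometry"]].rename(columns={"name": "road_name"})
network = network.to_crs(epsg=4326)

# Load: write the geometries into a PostGIS-enabled table
# (requires the geoalchemy2 package and the PostGIS extension).
network.to_postgis("cycle_network", engine, if_exists="replace", index=False)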


4 Data Models 4.1 Time Series Data The chosen schema for time series data (Fig. 4) is inspired by a snowflake schema [35]. Instead of a fact table, all tables in the schema contain a standardized timestamp that can be used as a combining attribute to connect all tables with each other. The schema design considers time as the central dimensional parameter in the data warehouse for time series data. Some dimension tables are normalized to avoid the redundant storage of data. To unify the time stamps, a standardized time stamp with the same measurement frequency has to be created. Most of the data sources were collected in different time resolutions (90 s, 5 min etc.). All data sources can be transformed to a 15 min interval (12:00, 12:15, 12:30 etc.). Data sources with a different measurement frequency were resampled to this interval. The data sets that describe a single event

Fig. 4 Data model for time series data


(e.g. a bicycle accident) received a second timestamp that is rounded down to the past 15 min. As an example, if an accident happened at 09:53 the rounded time is 09:45. With this approach, all time series data sources may be combined via a standardized time stamp. The time series data includes two tables for the accident data of the police department. EUSKa was already mentioned earlier. The table includes information about the accident type, the severity of injury of the involved cyclists, the participants or lighting and road conditions. Additional information can be found in the Cognos table which is based on a second database of the police department. Cognos includes some more detailed information such as a text description of how the accident occurred and some personal information about the victims (sex, age). Infrastructure shortcomings reported by citizens are also considered in the data management. The reportings are connected to the object storage by an API to two municipal reporting systems in the cities of Oldenburg and Osnabrück (Fig. 2). Further details on the reportings can be found in the publication of Schering et al. [36]. Similar to the Cognos data, the reportings contain a free text description. The reportings can be categorized according to different topics (e.g. road infrastructure). The traffic volume plays an important role in deciding about the alignment of a bike path [11]. The database contains different types of traffic measurements that can be differentiated according to vehicle types. It includes the measurement of bicycle volume (bicycle_volume), car volume (car_volume_osnabrueck, car_volume_rissmueller_osnabrueck), truck volume (car_truck_volume_osnabrueck) and the whole traffic volume without differentiation of means of transport (traffic_volume_oldenburg, traffic_volume_brookweg_ol). The truck volume and parts of the traffic data (traffic_volume_brookweg_ol, car_volume_rissmueller_osnabrueck) were gathered on a temporary basis. The geo locations of the counting stations are important to identify roads or segments with high traffic volume. To avoid redundancies in the data that could have an impact on the performance, the coordinates, road name, city, the names of the location and the road_segment_id are saved in a dimensional table (locations). The station_id is used as a Foreign Key to connect the traffic measurements with the locations. To connect the geo coordinates of the accidents, the shortcomings and the traffic counting systems to a specific road segment, all coordinates received a road segment id to support the quality assessment of specific parts of the road.
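A minimal sketch of the described timestamp standardization, using pandas; the column names and sample values are illustrative, not the project's actual schema.

import pandas as pd

# Illustrative event data.
accidents = pd.DataFrame({
    "accident_time": pd.to_datetime(["2022-05-03 09:53", "2022-05-03 17:12"]),
    "severity": ["slight", "severe"],
})

# Round each event timestamp down to the previous quarter hour,
# e.g. 09:53 -> 09:45, as described above.
accidents["timestamp_15min"] = accidents["accident_time"].dt.floor("15min")

# The standardized timestamp now serves as the join key across tables.
volume = pd.DataFrame({
    "timestamp_15min": pd.to_datetime(["2022-05-03 09:45"]),
    "bicycle_count": [42],
})
print(accidents.merge(volume, on="timestamp_15min", how="left"))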

4.2 Non-time Series Data The database for non-time series data (characteristics of the cycling infrastructure) is based on an object-relational data model. This model type was chosen because cycling infrastructure data contains both atomic and complex data (geospatial data). The data model has to store atomic data such as the width or length of the cycle lanes (numerical values) as well as their road names and their surface type (textual


Fig. 5 Data model for non-time series data

values). In addition, geographic data such as the cycling network of a city has to be included in this database. An object-relational data model is suitable to contain all these data types [37]. Characteristics of the cycling infrastructure for roads (roads) and junctions (junctions) are stored in the model for non-time series data (Fig. 5). Both traffic components are linked to a specific city (city). Their characteristics need to be divided into two groups. The first one contains features that do not change over time or change only very rarely (e.g. road name); these are stored directly in the same table as the road or junction. The second group contains characteristics that may change more frequently (e.g. surface type). These features have to be stored in separate tables (road_features and junction_features) and have to be versioned. Storing these data sources in combination with the year of collection enables the versioning. This can be relevant in case of road rehabilitation.
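The versioning idea can be sketched as follows. The table and column definitions are a simplified illustration derived from the description above, not the project's actual schema.

from sqlalchemy import create_engine, text

# Connection string is a placeholder, not the project's real database.
engine = create_engine("postgresql://user:password@localhost:5432/infrasense")

# Static attributes live in roads; attributes that may change over time
# (e.g. surface type) are versioned per collection year in road_features.
ROADS_DDL = """
CREATE TABLE IF NOT EXISTS roads (
    road_id   SERIAL PRIMARY KEY,
    city_id   INTEGER NOT NULL,
    road_name TEXT,
    geom      geometry(LineString, 4326)  -- PostGIS spatial type
)
"""
FEATURES_DDL = """
CREATE TABLE IF NOT EXISTS road_features (
    feature_id      SERIAL PRIMARY KEY,
    road_id         INTEGER REFERENCES roads (road_id),
    collection_year INTEGER NOT NULL,  -- enables versioning
    surface_type    TEXT,
    width_m         NUMERIC
)
"""
with engine.begin() as conn:  # assumes the PostGIS extension is enabled
    conn.execute(text(ROADS_DDL))
    conn.execute(text(FEATURES_DDL))

Combining atomic columns with a spatial geometry column in one table is exactly what the object-relational model permits; a road rehabilitation would simply add a new road_features row with a newer collection_year.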


5 Conclusion and Outlook As we have seen, decision making in cycling planning is already supported by data. Some data sources, such as the accidents or the traffic volume, are properly used. The case of the crowdsourcing data shows that not all of these data sources are used systematically, because data is not available or not representative enough. It is also not clear what the different measurements mean for quality assessment. The INFRASense project should fill this research gap. The research project deals with heterogeneous data sources. A Data Lake approach was chosen to support the ETL process. In general, we can differentiate between time series and non-time series data. The time series data includes different types of time based data sets such as bike trips, stationary traffic measurements, citizen reportings or accidents. The non-time series data sets describe the state and shape of the bike infrastructure (lengths of roads, important bike routes etc.). To enable the further processing and combination of different tables in the databases, the implementation of suitable data models is required. The time series data sets can be combined via standardized time stamps, which means that all time stamps are rounded to quarter hours. The object-relational data model was chosen for bike infrastructure data to enable the storage of atomic data in combination with spatial data. The data models and management systems will be the basis for further applications. As part of the project, different types of visualizations and data analysis will be realized. Many of the data sets are relevant for experts. The external data sources were collected and processed based on the requirements of the planning practice. One of the goals is to support the daily work of cycling planning experts by meaningful analysis including maps, graphs and charts to display the relevant KPIs. The Radweg Radar (Bike Path Radar) [38] is currently in development as a dynamic visualization and analysis tool. A question that should be discussed in the future is which KPIs are the most important for the quality assessment of a bike path. The cycling databases that were presented in this contribution are connected to the dashboard as a backend. The goal of the data analysis is the assessment of the bike path network. Bike paths with shortcomings (bad surface, high accident rate, high traffic volume etc.) should be identified. The data analysis refers to the statements of the HEBRA guideline that were mentioned in Sect. 2.2. The results of the data analysis will be presented in further contributions (e.g. [39]). One problem that is not solved so far in cycling planning is that the relevant data is hard to access or cannot be accessed at all, although it is available in principle. Different departments in the city administration are using different databases that are not centralized. To improve the accessibility of the data sources discussed in this contribution, a standardized API was developed. The API allows the export of relevant parts of the data sources of the databases, so the data can be integrated into other applications. The processed raw data will also be made available on an open data portal in the future. This will support decision makers and stakeholders in traffic planning in their future working processes.
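A minimal sketch of how such an API export could be consumed from Python; the endpoint, parameters and field names are purely hypothetical, as the chapter does not specify the API.

import requests

# Hypothetical base URL and query parameters for illustration only.
BASE_URL = "https://api.example.org/infrasense"

resp = requests.get(f"{BASE_URL}/segments",
                    params={"city": "Oldenburg", "quality_level": "F"},
                    timeout=10)
resp.raise_for_status()
for segment in resp.json():
    print(segment["road_name"], segment["quality_level"])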


Acknowledgements INFRASense is funded by the Bundesministerium für Digitales und Verkehr (BMDV, German Federal Ministry for Digital and Transport) as part of the mFUND program (project number 19F2186E) with a funding amount of around 1.2 million euros. As part of mFUND, the BMDV supports research and development projects in the field of data based and digital mobility innovations. Part of the project funding is the promotion of networking between the stakeholders in politics, business, administration and research as well as the publication of open data on the Mobilithek portal.

References
1. Anderson, C., Clarkson, D., Howie, V., Withyman, C., Vandelanotte, C.: Health and wellbeing benefits of e-bike commuting for inactive, overweight people living in regional Australia. Health Promot. J. Austr. 33(S1), 349–357 (2022)
2. Hebbal, F., Schering, J., Marx Gómez, J., Klemp, J.: Analysis of bicycle use in German cities based on smartphone app-generated data. In: 1st International Conference on Technological Advancement in Embedded and Mobile Systems (ICTA-EMoS) (in publication process)
3. Kędziorek, P., Kasprzyk, Z., Rychlicki, M., Rosiński, A.: Analysis and evaluation of methods used in measuring the intensity of bicycle traffic. Energies 16(2), 752 (2023)
4. Lücke, B.: ADFC-Fahrradklima-Test 2022 - Schon wieder eine Vier! Radwelt - Das ADFC-Magazin 2, 6–12 (2023)
5. Niedersächsisches Ministerium für Wirtschaft, Verkehr, Bauen und Digitalisierung: Fahrradfreundliche Kommune Niedersachsen. https://www.mw.niedersachsen.de/startseite/themen/verkehr/radverkehr/fahrradfreundliche_kommune_niedersachsen/fahrradfreundliche-kommune-niedersachsen-98580.html. Accessed 2023/06/02
6. Huth, C., Lesch, P., Steinberg, G., Fromberg, A., Gwiasda, P., Hoffmann, C., Reuter, F., Tönnes, D.: Endbericht - Nahmobilitätskonzept für die Stadt Castrop-Rauxel. Dortmund/Cologne (2021)
7. PTV Group: PTV Euska. https://www.myptv.com/de/mobilitaetssoftware/ptv-euska. Accessed 2023/05/25
8. Forschungsgesellschaft für Straßen- und Verkehrswesen (FGSV): Merkblatt zur Örtlichen Unfalluntersuchung in Unfallkommissionen (M Uko). FGSV Verlag, Cologne (2012)
9. Ortlepp, J., Butterwegge, P.: Unfalltypen-Katalog - Leitfaden zur Bestimmung des Unfalltyps. Gesamtverband der Deutschen Versicherungswirtschaft e.V., Unfallforschung der Versicherer, Berlin (2016)
10. Holthaus, T., Adenstedt, F.: GIS-gestützte Identifikation von Unfallhäufungen und Sicherheitspotentialen im Straßennetz - Ein Beitrag zur präventiven Verkehrssicherheitsarbeit. Straßenverkehrstechnik 3, 207–215 (2021)
11. Forschungsgesellschaft für Straßen- und Verkehrswesen (FGSV): Empfehlungen für Radverkehrsanlagen (ERA). FGSV Verlag, Cologne (2012)
12. Zorn, E., Hager, G., Wöppel, H.-D.: Radverkehr: Datenerhebungen zum Radverkehr in München (Bicycle Data in Munich). Straßenverkehrstechnik 1, 32–39 (2011)
13. City of Aachen: Mobilitätsdashboard der Stadt Aachen. https://verkehr.aachen.de/. Accessed 2023/05/23
14. Nahverkehrsgesellschaft Baden-Württemberg: RadVIS BW: Radverkehrs-Infrastruktur digital verwalten. https://www.aktivmobil-bw.de/radverkehr/raddaten/radvis-bw/. Accessed 2023/05/23
15. Lißner, S., Huber, S., Lindemann, P., Anke, J., Francke, A.: GPS-data in bicycle planning: "Which cyclist leaves what kind of traces?" Results of a representative user study in Germany. Transp. Res. Interdiscip. Perspect. 7, 100192 (2020)


16. Lißner, S., von Harten, M., Huber, S.: Mobilität von Radfahrenden in Deutschland – Nutzerbefragung im Rahmen der Kampagne STADTRADELN. Verkehrsökologische Schriftenreihe 15 (2021), ISSN 2367-315X. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-201073
17. City of Euskirchen: Onlinebefragung zur nachhaltigen Mobilität an Schulen. https://euskirchen.ratsinfomanagement.net/sdnetrim/UGhVM0hpd2NXNFdFcExjZSP6p8hSa2uNA4BeWP6EysaHzFFgOIl74palCEltChFM/Auswertung_Schuelerbefragung.pdf. Accessed 2023/08/15
18. Entwicklung einer Softwareanwendung zur Qualitätsbestimmung kommunaler Radverkehrsanlagen auf Basis von Crowdsourcing-Daten – INFRASense. https://bmdv.bund.de/SharedDocs/DE/Artikel/DG/mfund-projekte/infrasense.html. Accessed 2023/05/25
19. Forschungsgesellschaft für Straßen- und Verkehrswesen (FGSV): Hinweise zur einheitlichen Bewertung von Radverkehrsanlagen (Ausgabe 2021). FGSV Verlag, Cologne (2021)
20. Worldiety: BIQEmonitor. www.biqemonitor.de. Accessed 2023/05/23
21. Schering, J., Janßen, C., Kessler, R., Dmitriyev, V., Marx Gómez, J., Stehno, C., Pelzner, K., Bankowsky, R., Hentschel, R.: ECOSense and its preliminary findings: Collection and analysis of bicycle sensor data. In: Kamilaris, A., Wohlgemuth, V., Karatzas, K., Athanasiadis, I. (eds.) Enviroinfo 2020 Environmental Informatics - New Perspectives in Environmental Information Systems: Transport, Sensors, Recycling. Adjunct Proceedings of the 34th EnviroInfo Conference, pp. 145–153. Shaker Verlag, Düren (2021)
22. Schering, J., Marx Gómez, J., Büsselmann, L., Alfaro, F., Stüven, J.: Potentials of bicycle infrastructure data lakes to support cycling quality assessment. In: Demmler, D., Krupka, D., Federrath, H. (eds.) Informatik 2022, Informatik in den Naturwissenschaften, 26–30 September 2022, pp. 783–794. Hamburg (2022)
23. City of Oldenburg: Stadtverbesserer. https://gemeinsam.oldenburg.de/oldenburg/de/flawRep/54305. Accessed 2023/06/10
24. City of Osnabrück: EMSOS (interaktives EreignisMeldeSystem der Stadt Osnabrück). https://geo.osnabrueck.de/emsos/?i=start. Accessed 2023/06/10
25. Säfken, P.: Entwicklung eines Datenmanagementsystems für die Verarbeitung von Fahrradinfrastrukturdaten in einer Data Fabric. University of Oldenburg (unpublished master thesis), Oldenburg (2023)
26. Safety Performance Index Fiets. https://planner1.fietsersbond.nl/editor/spi.html. Accessed 2023/05/26
27. Province of Antwerp: Fietsbarometer. https://fietsbarometer.provincieantwerpen.be/geoloketten/?viewer=fietsbarometer. Accessed 2023/05/26
28. MinIO. https://min.io/. Accessed 2023/05/25
29. PostgreSQL: The World's Most Advanced Open Source Relational Database. https://www.postgresql.org/. Accessed 2023/08/08
30. About PostGIS. https://postgis.net/. Accessed 2023/08/08
31. Welker, P., Wehner, J., Schnider, D., Jordan, C.: Data Warehouse Blueprints - Business Intelligence in der Praxis. Carl Hanser, Munich (2016)
32. Python. https://www.python.org/. Accessed 2023/08/08
33. Wirth, R., Hipp, J.: CRISP-DM: Towards a standard process model for data mining. In: Proceedings of the 4th International Conference on the Practical Applications of Knowledge Discovery and Data Mining, vol. 1, pp. 29–39. Springer, London (2000)
34. CKAN: The world's leading open source data management system. https://ckan.org/. Accessed 2023/08/21
35. Mowade, S., Mowade, Y.: Data warehousing and data mining. J. New Zealand Herpetol. 12, 447–454 (2023). ISSN: 2230-5807
36. Schering, J., Marx Gómez, J., Gerdes, L., Alfaro, F.: Citizen reportings and its application to bike infrastructure quality assessment. In: Proceedings of the 3rd International Conference on Next Generation Computing Applications, NextComp 2022. IEEE Xplore. https://doi.org/10.1109/NextComp55567.2022.9932249


37. Riemer, P., Bauer, E., Faeskorn-Woyke, H., Bertelsmeier, B.: Datenbanksysteme - Theorie und Praxis mit SQL2003, Oracle und MySQL. Pearson Studium, Munich (2007)
38. University of Oldenburg: Radweg Radar. www.radweg-radar.de. Accessed 2023/05/29
39. Birke, M., Dyck, F., Kamashidze, M., Kuhlmann, M., Schott, M., Schulte, R., Tesch, A., Schering, J., Säfken, P., Marx Gómez, J., Krienke, K., Gwiasda, P.: The bike path radar: a dashboard to provide new information about bicycle infrastructure quality. EnviroInfo (2023) (in publication process)

Part IV

Sustainable Planning and Infrastructure

Evaluation of Incentive Systems in the Context of SusCRM in a Local Online Retail Platform Benjamin Buchwald, Mattes Leibenath, Richard Schulte, Sascha Heß, Linus Krohn, and Benjamin Wagner vom Berg

Abstract The current consumption patterns observed in both offline and online retail present sustainability challenges related to environmental impacts associated with products, last-mile delivery, personal mobility, and packaging. Different choices in delivery methods and in individual mobility to overcome the last mile lead to varying levels of emissions. To address these environmental impacts, this study examines the application of a Sustainability Customer Relationship Management (SusCRM) approach to a local retail platform, aiming to meet customer expectations while promoting sustainable and conscious consumption within the various stages of the e-commerce customer journey. Consumers nowadays tend to act in a more sustainable way; at the same time, they hold retailers and logistics companies responsible for creating corresponding offers. Designing such a platform that addresses the demand for sustainable supply is the key goal of the research project "R3—Resilient, Regional, Retail in the Metropolitan Region Northwest". To explore potential incentive systems that can be implemented within the platform, a survey was conducted. The survey results indicate that most respondents demonstrated a high level of acceptance towards the proposed incentive systems. Keywords SusCRM · Incentive system · Sustainability · Retail platform · Customer journey · E-commerce

B. Buchwald · M. Leibenath · R. Schulte () · S. Heß · L. Krohn · B. W. vom Berg Hochschule Bremerhaven, Bremerhaven, Germany e-mail: [email protected]; [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 V. Wohlgemuth et al. (eds.), Advances and New Trends in Environmental Informatics 2023, Progress in IS, https://doi.org/10.1007/978-3-031-46902-2_13


1 Introduction 1.1 Motivation The rapid advancement of technology and digitalization has provided consumers with numerous options for purchasing goods and services. This has led to intense competition between different channels, prompting the need to understand the factors influencing customer preferences [1]. Research has focused on evaluating the sustainability of products and examining the impact of consumer choices on carbon consumption, greenhouse gas emissions, and climate change [2]. It is recognized that consumer decisions, as well as decisions made by logistics service providers, manufacturers, retailers, and the technology industry, can contribute to increased greenhouse gas emissions. Understanding the environmental footprint of consumer retail behavior has become crucial, especially with the expansion of multiple purchasing channels [3]. Efforts are being made to minimize the carbon footprint in retail, transport, and energy consumption, aiming to assist stakeholders involved in the delivery and receipt of goods. A comparison between traditional and online shopping reveals varying CO2 emissions [4], depending on storage space utilization [5], the mode of transport as well as personal mobility [6]. Energy efficiency in commercial buildings is also gaining importance as their number increases annually. Furthermore, challenges such as environmental impacts from packaging waste and product manufacturing remain in both shopping models. Online shopping tends to be more sustainable when it comes to the storage of products, as less energy is used relative to the number of products stored [7]. Offline retail is considered more socially and economically sustainable because of the advantages of the "local multiplier effect". The local multiplier effect [8] refers to the economic and financial impacts that arise from local spending within the region or community. These effects extend beyond the immediate transaction and have a diverse influence on the local economy. When money remains within a region and is spent by its residents, a positive chain reaction occurs. These expenditures lead to long-term development as new jobs and business opportunities emerge, expanding local capacities. This, in turn, results in increased employment and income for both residents and the region. The local multiplier effect strengthens the local economic structure by prompting support for local businesses and reducing dependency on external markets, thus strengthening the local economy, fostering a greater diversity of offerings and enhancing the resilience of the economy. The increase in economic activity and the enhancement of residents' quality of life play a crucial role in fostering social stability and a strong sense of community. Ultimately, the local multiplier effect significantly contributes to the region's positive long-term development, acting as a catalyst for growth and self-sufficiency [9]. The transformation of the retail sector driven by technological advancements and changing consumer behaviors, coupled with the global impact of climate change, raises the question of which consumer behaviors can lead to lower carbon emissions


and a more sustainable world, and how the decision process can be addressed within a digitally enabled customer buying process. Sustainable approaches in the field of customer relationship management seek to expand the traditional economic approach by incorporating ecological and social dimensions, providing customers with alternative options for sustainable consumption. Overall, there is a growing recognition of the need to align economic activity, consumer behavior, and sustainability to address the environmental challenges we face.

1.2 State of Art The Ecological Effects of Online Retailing A literature review conducted by the German Federal Environment Agency, summarized in a 2020 report titled "Ökologisierung des Onlinehandels", classifies the environmental impact of e-commerce into the following areas, which are explained in more detail below [10]. This report shows that the greatest environmental impact results from the transport processes, especially last-mile transports, and from the shipping packaging that is used. The resulting environmental impacts vary significantly depending on the choice of transportation and the distance covered. According to a study by DCTI [11], the CO2 emissions attributable to the last mile are estimated at 390 g. There is great potential for optimization using alternative means of transportation (electric vehicles, cargo bikes, etc.) or alternative delivery concepts (such as delivery to alternative locations, use of micro-hubs, etc.) [10]. The environmental impacts that occur in the warehouses of online retailers or in distribution centers are very low and therefore rarely quantified in detail. A calculation by the German Federal Environment Agency, based on various business reports, found that approximately 20–80 g of CO2 emissions are generated when purchasing a product worth 50 € [10]. Compared to a local retailer, the corresponding emissions in e-commerce are 16 times lower [12]. Without a pre-existing and well-established infrastructure of information and communication technology, e-commerce would not be possible. However, the utilization of this infrastructure also leads to environmental impacts. This applies to both the IT infrastructure of online retailers and that of consumers. Quantifying these impacts in terms of resulting emissions is methodologically challenging and therefore only possible to a limited extent [10]. The environmental impacts of e-commerce are particularly significant due to the use of additional shipping packaging, which is almost universally employed for every shipment. According to the German Federal Environment Agency, significant reductions in packaging waste, up to 45% [13], can be achieved through the reduction or elimination of packaging or the utilization of reusable packaging materials. In the case of pick-up at stationary retail outlets, the customer's shopping trip is of particular importance. While no CO2 emissions occur when the shopping


route is covered on foot or by bicycle, a shopping trip by car over a distance of 15 km can emit 1800–3300 g of CO2 equivalents. Other scenarios involve the use of public transport, which has a positive impact on the environmental sustainability of shopping compared to shopping trips by car [10]. To address these environmental impacts, the Federal Environment Agency has issued a guide for sustainable online stores. This guide covers not only optimal website design and structure but also incentives for utilizing sustainable alternatives. Customer Journey The possibilities offered by the Internet have transformed the way consumers search for and select products and suppliers. The constant availability of information as well as the amount of information available have increased the consumer's market power. This also has an impact on the buying process, which has changed because of the Internet, according to Heinemann [14]. This changed buying process is directly reflected in the customer journey. The customer journey describes a customer's path to the purchase decision as well as the subsequent processes of customer loyalty and recommendation by satisfied customers. Different versions of the customer journey are in circulation, meaning that there is no generally applicable model in the current scientific context. What they all have in common, however, is the division into different phases before, during and after the purchase. Kreutzer [15], for example, divides the customer journey into five different phases: awareness, consideration, purchase, retention, and recommendation. During the "awareness" phase, customers discover the company or product through marketing efforts, cultivating their interest. Moving into the "consideration" phase, customers collect information and compare the offering with alternatives. The pivotal point arrives in the "purchase" phase, where the customer finalizes the buying decision, whether online or offline. Following this, the "retention" phase comes into play, emphasizing customer satisfaction and loyalty by delivering top-notch service, tailored communication, and pertinent offers. Positive experiences set the stage for the "recommendation" phase, potentially igniting another customer's "awareness" journey. In all phases of the customer journey, consumers are making decisions regarding retailer and product choices, logistical processes such as packaging and delivery, as well as which mobility mode to choose to get to the retail store. Companies can influence customers in all these phases at different touchpoints, for example through an online presence. SusCRM SusCRM refers to customer relationship management (CRM) which is designed for sustainability: "Sustainability Customer Relationship Management". CRM is the focus of a company on its customers and the management of its relationships with them. CRM methods that promote sustainable behavior along the ecological, economic, and social dimensions can be categorized as SusCRM. Since successful customer retention and customer loyalty are already closely linked to sustainable business models, economic sustainability can already be understood as an instrument of CRM. The target group of SusCRM can be any business partner, both consumers and other businesses. It can help to communicate sustainability


Table 1 Approaches and examples for addressing sustainable buying behavior

| Approach | Description | Example |
| Measures for sustainable impacts | Measures for sustainability are taken, but customers are not involved in the decision process | Emission-free delivery without alternatives (a) |
| Offering of sustainable choices | Customers can choose sustainable options | Sustainable product alternatives (b) |
| Incentivation of customers to sustainable choices | Education and promotion of sustainable options as well as incentives and rewards for selection | Sustainability report of buying behavior (c) |
| Holistic sustainable orientation and integration of customers | Addressing and incentivation of customers’ decision-making within the whole customer journey | No leading examples available at present |

(a) https://www.boxbote.de/ (b) https://www.amazon.de/ (c) https://www.memo.de/

It can help to communicate sustainability values and goals more precisely to consumers and to let these values influence purchasing decisions in favor of more sustainable products and providers. SusCRM can also incentivize sustainable business practices among business partners.

A central aspect of its realization are incentive systems. Incentive systems are originally a concept from organizational theory and represent the measures taken by an employer to influence the motivation and behavior of employees. A basic distinction is made between tangible and intangible incentives, as well as by recipient and incentive source (intrinsic or extrinsic) [16]. However, incentive systems can also be used as a marketing tool within CRM, so that the extrinsic or intrinsic motivation of the customer to behave in certain ways can be increased. In the sense of SusCRM, the focus is on the motivation to act in a more sustainable way [17].

At present, only stand-alone approaches with the aim of addressing sustainability within the buying process can be found. Table 1 lists current approaches and gives examples. It illustrates that these approaches have so far been used as individual solutions and that no holistic solutions exist yet. SusCRM addresses this circumstance and proposes an extensive application that involves all stakeholders.

R3: Resilient, Regional, Retail in the Metropolitan Region Northwest
Within the project “R3—Resilient, Regional, Retail in the Metropolitan Region Northwest” (https://handel-nachhaltig.de/), a platform for regional retail and a sustainable, competitive supply and logistics structure is being conceptually developed to strengthen regional retail
against large online platforms. The project started on June 1st, 2021, and is planned to run for two years. It is conducted by the University of Applied Sciences Bremerhaven, while the applicant for funding by the Metropolitan Region Northwest was Erlebnis Bremerhaven GmbH, the society for marketing and tourism for the strengthening of the city of Bremerhaven, which also acts as a marketing partner. The project is supported by many partners, ranging from companies, associations, and societies to cities, municipalities, districts, and the state of Bremen.

In the project, a platform is first designed together with the various stakeholders and then implemented as software. Competitive advantages such as sustainability and regionality, competent consulting, and delivery and pick-up services are to be integrated into the platform. Within this platform, a SusCRM approach is implemented in which stakeholders such as consumers, retailers, institutions, and logistics providers are incentivized to act more sustainably.

The platform approach aims to generate more orders for the local retailers participating in the local marketplace. An increase in traffic resulting from retail last-mile logistics and individual mobility is therefore expected. The sustainability of the last mile has to be evaluated with regard to the different delivery options as well as the individual mobility of customers travelling to local retailers.

2 SusCRM in a Local Online Retail Platform

2.1 Alignment of the Online Retail Platform

The concept of the local retail platform is based on current scientific findings regarding existing and failed platforms and thus their success and failure factors. The typology of local trading platforms ranges from pure company information platforms, through platforms for requesting products, to transaction platforms for the complete processing of purchases [18]. The platform strives for a horizontal orientation, meaning a wide range of product categories covering a broad variety of goods and services. As a transactional platform, it establishes the connection between supplier and customer, while customers can pay for and receive orders from different suppliers bundled in one shopping cart.

The geographical and sustainable orientation of the retail platform has a major influence on the associated logistics structures and processes. Whether the distribution logistics allow regional, national, or even international orders and deliveries, and whether sustainable products are highlighted within the platform, has an impact on all three dimensions of sustainability. The regional orientation, i.e. the limitation of a retail platform to a specific region, can have positive effects on the dimensions of sustainability, among other things by keeping the added value within the region of the stationary retail business and the consumer in the sense of the local multiplier effect. In addition, this geographical limitation reduces competitive pressure for retailers, especially if
assortments or products are not offered in multiple versions by different companies on the retail platform. Furthermore, this approach promotes proximity to the customer and reduces the number of trips required to cover the last mile, for both delivery and collection. This makes an important contribution to driving the use of sustainable and regional logistics systems. Negative environmental impacts resulting from delivery to the consumer or from pick-up by the consumer can thus be reduced.

On the social dimension, the regional community is strengthened, as less value creation flows out of the region, which contributes to greater social and environmental fairness. Finally, the social fabric benefits from a sense of community and the promotion of regional communication between retailers and consumers. Pick-up options such as Click and Collect and Click and Reserve can contribute to city center revitalization by encouraging the public to visit the city center to pick up products. They can also result in more personal contact and customer loyalty, promoting the city center and the local retail stores as places of social interaction.

The alignment of the local retail platform can further emphasize customer proximity through the integrated localization of products, suppliers, and customers. For example, when searching for a product, customers can preferentially be shown the product offered by a retailer in their proximity. This favors the reduction of delivery routes as well as of the customer’s routes to the store. It also fosters community at a microregional level, for example in neighborhoods and districts, as customers prefer nearby stores.

Implementing SusCRM in a local online commerce platform offers the opportunity to influence both the supply and the demand side in terms of sustainable behavior. In particular, local online retail platforms make it possible to influence the consumer’s choice of individual mobility on the way to the store by implementing incentive models that lead the consumer to a more sustainable choice. Retailers offering sustainable products and services can be promoted within the platform to give them a competitive advantage and to incentivize sustainable actions by customers.

2.2 Incentive Systems in the Context of a Local Online Retail Platform

An important tool for the application of SusCRM in a local online retail platform are the incentive systems already mentioned in Sect. 1.2, as a method of specifically influencing the extrinsic or intrinsic motivation and behavior of the consumer. The goal of the developed incentive systems is to influence the consumer’s purchasing behavior directly or indirectly with regard to sustainability. Throughout the customer journey, the consumer is motivated by various functional and information-driven incentives to make the purchase decision as sustainable and environmentally friendly as possible. This is also intended to increase their awareness for future purchases. The incentive systems developed for SusCRM are based on
the following approaches [17], depending on their type and function: information-based, game- and competition-based, reward-based, and social incentives.

Information-based Incentives
This approach involves the provision of diverse sources of information with detailed sustainability information about the local online retail platform and all participants, as well as the featured products. Information-based incentives thus include all incentives that influence the buying behavior of the demander by providing information. The following information-based incentives described for the development of a sustainable online platform are intrinsically motivated [17].

For example, the consumer can use a filter function to select the category “sustainable products” in addition to the common shopping categories on the product overview page. Only products that fulfill sustainability criteria are then displayed. Also of great importance is a comprehensive product description with as many product features as possible on the product page. The more information is disclosed, the less likely it is that a return will be made due to missing information, which can save CO2.

Using a map of the respective region, the locations of all retailers and their respective distance to the delivery address can be displayed. This is primarily intended to encourage the use of low-emission mobility and delivery options. For short distances, deliveries could be made by cargo bike. Alternatively, the consumer could consider picking up the order personally on site via “click & collect”, walking the distance or using public transport. Both scenarios would help to significantly reduce CO2 emissions on the last mile.

Before a purchase is made, the shopping cart should indicate that individual shipments are to be avoided and that reusable shipping packaging should be used. By bundling several shipments, the route can be optimized, and packaging waste can be avoided by using reusable packaging. In addition, the shopping cart can mark duplicate items and point out past orders and any returns as a reference value. Finally, the high value of a rating system should be mentioned, through which reviews can be written as experience ratings, as well as product Q&As, and which meaningfully depicts the relevance of the displayed reviews. Both positive and negative product reviews have a strong influence on the purchase decision nowadays [19].

Lastly, it must be noted that the availability and dissemination of sustainability information alone is rarely sufficient to achieve a change in buyer behavior. This must be stimulated, for example, by attractive alternative incentives to act [17].

Game- and Competition-based Incentives
In this approach, the consumer is placed in a competitive situation with himself or other consumers. One’s own performance in comparison to others should intrinsically motivate the consumer to achieve an increase in performance in certain areas. In the context of an online marketplace, the performance to be achieved is to develop or strengthen a personal sustainable purchasing awareness by surpassing, or achieving, self-imposed performance limits.


An example would be the introduction of a scoring system through which the consumer can collect points for sustainable actions during the shopping process. If the consumer chooses a product that meets sustainability criteria, uses reusable packaging for delivery, or selects a low-emission mobility option for delivery, points are credited to the consumer’s account. Once the customer has collected enough points, they can be exchanged for attractive rewards. These rewards could be, for example, discount vouchers for future orders, tickets for public transportation, or tickets or discounts for local events.

Reward-based Incentives
As the name implies, the purpose is to provide consumers with performance incentives in the form of tangible and intangible rewards that reward their behavior and thus influence their purchasing behavior [17]. This incentive type is often related to game- and competition-based incentives. Thus, for the points achieved in the example of the scoring system, rewards are issued to the consumer in the form of bonuses, and the consumer is rewarded for sustainable behavior.

However, not only the consumer but also the supplier, i.e., a retailer, can benefit from this approach. For example, a product offered by the retailer can be given a higher relevance in search queries if it is marked with sustainability labels or provided with particularly extensive product details. The more detailed the product description, the more likely it is that the right product will be selected, which can avoid returns. Another approach would be to recommend that the supplier use or switch to green electricity. Subsequently, the supplier could provide their product pages with a certificate or label showing the use of green electricity. This approach has an image-enhancing effect for the provider.

Social Incentives
The last approach is the social incentive. Here, the demander is motivated through the concept of “expected reciprocity”, which describes the desire to increase one’s status within society [17]. Such an incentive is created through the regular evaluation of consumer purchasing behavior and decisions. This could be implemented in the form of a customer-specific sustainability report. This report would be placed in the customer’s profile, for example monthly, providing individuals with information about the sustainability of their shopping behavior. It would include key metrics such as CO2 emissions per order and CO2 emissions saved by using sustainable options. Among these options are delivery with reusable or recyclable shipping packaging, delivery with low-emission modes of transportation, and the selection of environmentally friendly delivery methods. The share of orders in which these options were chosen could also be a relevant metric. The aim is that customers receive a meaningful overview of how sustainable their shopping process is and are clearly shown where there is potential for improvement within the customer journey.
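To make the mechanics of such a point-based system concrete, the following minimal sketch shows how order actions could be translated into points and redeemable rewards. The action names, point values, and reward costs are illustrative assumptions and not part of the platform specification.

```python
# Minimal sketch of the scoring mechanic described above; all names and
# values are illustrative assumptions, not the platform's actual design.

SUSTAINABLE_ACTIONS = {
    "sustainable_product": 10,   # product carries a sustainability label
    "reusable_packaging": 5,     # reusable shipping packaging chosen
    "low_emission_delivery": 5,  # e.g. cargo bike or bundled delivery
    "click_and_collect": 5,      # pick-up on foot or by public transport
}

REWARDS = {  # hypothetical redemption options and their point costs
    "discount_voucher": 50,
    "public_transport_ticket": 100,
    "local_event_ticket": 150,
}

def score_order(actions: list[str]) -> int:
    """Points earned for the sustainable choices made in one order."""
    return sum(SUSTAINABLE_ACTIONS.get(action, 0) for action in actions)

def redeemable(balance: int) -> list[str]:
    """Rewards the customer can currently afford with their balance."""
    return [name for name, cost in REWARDS.items() if balance >= cost]

balance = 0
for order in (["sustainable_product", "reusable_packaging"],
              ["low_emission_delivery", "click_and_collect"]):
    balance += score_order(order)

print(balance)              # 25
print(redeemable(balance))  # [] -- not enough points yet
```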


2.3 SusCRM in the Online Customer Journey

The customer journey of a company can consist of various contact points, such as social media, a website, or an online store, which interact with each other. Some of these contact points can be found in many or even all phases of the customer journey. These also include the online presence of a (local) online retail platform. Mangiaracina et al. list the following phases for the e-commerce customer journey [20]: site landing, catalogue browsing and product discovery, product presentation, cart management, and check-out.

The first step of the e-commerce customer journey concerns how users access the website, which corresponds to the awareness phase of Kreutzer’s customer journey (see Sect. 1.2). The focus is on the various entry points, such as search engines, newsletters, communication campaigns, and direct URL entry. Search engine positioning is critical to drive visitors to the website. The landing page, usually the home page, plays a crucial role in the first impression. One example of an incentive system in this step is sustainability-related content on the landing page of the platform, or sustainability- and regionality-focused SEO (search engine optimization).

The next phase of catalogue and product search concerns how users explicitly search for and implicitly discover products within the website; it takes place within the consideration phase of the customer journey. Often there is a specific need, and a user already knows which product is required. The search engine or the category overview are then used as entry points for the product search. If a user’s need is not yet concrete, the choice must be narrowed down further with the help of website functions. The following functions are typically used in this phase: search and filter, category search, sorting, product preview, and recommendations.

Another step in the consideration phase is the presentation of products. The aim is to provide the user with the most comprehensive explanation and presentation of the product. Typical elements in this phase are product descriptions, product images or media, reviews and recommendations, product configurations, price, etc. Within the online consideration phase, information-based incentives like sustainability certificates could be shown on the product overview and detail pages.

The next phase is represented by shopping cart management, with elements such as the display of delivery options, wish lists, and recommendations. The shopping cart is used by the user to (pre)select various products before the actual purchase is made and is therefore utilized in both the consideration and the purchase phase of the customer journey. In the shopping cart, information regarding the costs and discounts of the order is revealed to the user for individual items, delivery, taxes, and the order as a whole. Products can be removed from the cart, or the quantity of items can be increased. For example, the shopping cart can inform the buyer about duplicate items (of different sizes) or point to already bought and returned items (Fig. 1).

In the check-out phase (the purchase phase of the customer journey), the shopping cart is finalized as a purchase.


Fig. 1 Example of incentive systems in a customer journey in general and within the online retail platform

The consumer is provided here with the final information needed to complete the purchase; primarily, from the consumer’s perspective, the final decisions are made before the purchase is completed. Important elements include the login or guest order option, delivery selection, and payment processing. In this part of the customer journey, information-based incentives such as the bundling of product shipping and the use of reusable packaging can be promoted. Also, game- and competition-based incentives, like earning points for sustainable choices of product, delivery, or packaging options, can be implemented.

In the post-purchase phase, the aim is customer loyalty as well as recommendation by the customer. Essentially, this is the result of customer satisfaction with the previous processes [21]. Elements of a sales platform for customer retention are, for example, newsletters, bonus systems, community interactions, blog and information pages, and customer support. The recommendation phase can be implemented within a retail platform through rating and review systems, product Q&As, and ‘refer a friend’ programs. In the retention phase, the customer can be motivated by reward-based incentives, e.g., exchanging the collected points for discounts on sustainable products and deliveries, local events, or public transportation. A social incentive here is the provision of individual sustainability reports for the user, based on consumer behavior on the retail platform.

3 Evaluation of Different Incentive Systems

3.1 Research Hypothesis and Method

Consumers’ purchasing decisions have different effects on the various dimensions of sustainability. A local retail platform offers the opportunity to positively influence purchasing behavior in this regard. The incentive systems in the context of SusCRM are instruments for incentivizing consumers towards more sustainable and conscious actions. By designing the platform as a local ecosystem, consumers can
be influenced in their choice of products, delivery options, returns, and individual mobility. This raises the question of which incentive systems are relevant for consumers and in which phase of the customer journey they should be integrated. To answer this question, an online survey was created and distributed via the online survey platform SurveyCircle and via social media. Before taking part, participants knew the topic of the survey, which may have biased who participated.

The survey was created in German and designed in three sections. The first section contains questions on sustainability-related topics to determine the basic attitude of the participants. This section includes widely validated statements from environmental awareness research, based on a 2016 feasibility analysis commissioned by the German Federal Environment Agency [22], in which metrics for measuring environmental awareness in Germany were reviewed. Attitudinal statements were presented, and respondents were asked to indicate the extent to which they agreed with each statement. The rating scale ranges from “fully agree” with a maximum value of 5, through “tend to agree”, “undecided” with a value of 3, and “tend to disagree”, to “fully disagree” with a minimum value of 1.

The main part of the survey presented the incentive systems developed as part of the research project and queried their influence on the purchasing behavior of the participants. This section was structured according to the different stages of the customer journey. The questions mainly consist of five-item Likert scales, as well as some closed questions with single- or multiple-choice response options. The Likert response options range from “very likely” with a maximum value of 5, through “likely”, “undecided” with a value of 3, and “not likely”, to “not likely at all” with a minimum value of 1. The last section contains questions on the sociodemographics of the participants with single-choice response options.

3.2 Results

Of 135 surveys started, 125 were completed; 15 of these had to be discarded due to incorrect responses to an attention check, resulting in 110 valid response forms (71% female, 28% male, 1% not responding). A share of 45% of the respondents are between 25 and 29 years old, 32% between 20 and 24, 15% between 30 and 34, and the rest older than 35. All participants are from German-speaking countries. 69% of the respondents have a university degree, 23% have a high school diploma, 7% have a technical college entrance qualification, and the rest have a secondary school diploma. Just under 45% earn between 1000 € and 2000 €, 22% earn less, and 19% earn more. Just under 14% have no income of their own or did not want to answer the question.

The answers on the sustainability attitudes of the individual participants were summarized in a mean value, and the participants were divided into three clusters. The clusters are based on the five-point Likert scale of the response options, with
the sustainability average between 2 and 3 points, between 3 and 4 points, and between 4 and 5 points each representing one cluster. Six people, with an average score of 2.9 out of a possible 5 points, fall into the first and thus least sustainability-oriented cluster; 84 people, with an average score of 3.7, into the second; and 20 people, with an average score of 4.2, into the third and thus most sustainability-oriented cluster. Across all response forms, the mean score for environmental attitudes was 3.72. This means that respondents recognize environment-related issues and the challenges that come with them.

Analysis of the survey results indicates that participants’ responses to incentive systems were more positive if they already had a more sustainability-oriented mindset. Respondents’ sustainability orientation correlated strongly with their ratings of information-based and reward-based incentive systems in the consideration phase, information-based incentive systems in the purchase phase, and game- and competition-based incentive systems in the retention phase. A weaker correlation between sustainability attitude and the evaluation of the incentive systems was found for information-based incentive systems in the awareness phase and for reward-based incentive systems in the recommendation, retention, and purchase phases.

Table 2 shows the mean rating of the respondents with regard to the types of incentive systems within the customer journey. Information-based incentive systems are divided into active and passive systems in this study. Active incentives are those that involve the consumer performing an action, such as filtering products according to sustainability features; passive incentives are those that do not require any action by the consumer and only serve to pass on information, such as a sustainability label on a product page.

Across the individual incentive systems, there was an overall mean score of around 3.75 out of a possible 5 points and thus a rather positive evaluation. Twelve of the surveyed incentive systems from the various phases of the customer journey and different incentive types have a mean score of over 4 points, while only three are below 3 points. Table 3 shows the five best-rated incentive systems, while Table 4 shows the five worst-rated ones.

Table 2 Average points on the five-point Likert scale for each type of incentive system

| Incentive type | Average |
| Purchase information-based (active) | 4.01 |
| Consideration reward-based | 3.94 |
| Consideration information-based (passive) | 3.84 |
| Purchase reward-based | 3.78 |
| Retention game- and competition-based | 3.75 |
| Consideration information-based (active) | 3.75 |
| Retention reward-based | 3.68 |
| Purchase information-based (passive) | 3.38 |
| Recommendation reward-based | 3.23 |
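As an illustration of the cluster assignment used above, the following sketch bins participants’ mean Likert scores into the three attitude clusters. The toy data, column names, and the handling of the interval boundaries are assumptions.

```python
import pandas as pd

# Toy responses: one row per participant, one column per attitudinal
# statement, values on the five-point Likert scale (1-5).
responses = pd.DataFrame({
    "statement_1": [5, 3, 4],
    "statement_2": [4, 2, 5],
    "statement_3": [4, 3, 4],
})

# Mean attitude score per participant, binned into the clusters 2-3,
# 3-4, and 4-5 points as described above; how exact boundary values
# are assigned is an assumption here.
mean_score = responses.mean(axis=1)
cluster = pd.cut(mean_score, bins=[2, 3, 4, 5],
                 labels=["cluster 1", "cluster 2", "cluster 3"],
                 include_lowest=True)
print(pd.DataFrame({"mean": mean_score.round(2), "cluster": cluster}))
```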


Table 3 Average points on the five-point Likert scale and description of the five best-rated incentive systems

| Incentive type | Incentive description | Average |
| Purchase information-based (active) | Avoidance of individual shipments (instead, bundling of different products into one delivery) | 4.49 |
| Consideration reward-based | (Temporary) discounts are given on products with a sustainability label | 4.47 |
| Consideration information-based (passive) | The product page contains detailed reviews of the product | 4.39 |
| Retention game- and competition-based | Sustainability points for not using individual shipments (instead, bundling of different products into one delivery) | 4.36 |
| Retention reward-based | Free deliveries for redemption of sustainability points | 4.35 |

Table 4 Average points on the five-point Likert scale and description of the five worst-rated incentive systems

| Incentive type | Incentive description | Average |
| Retention reward-based | Vouchers or tickets for public transport for redemption of sustainability points | 3.16 |
| Purchase information-based (passive) | Note on identical items in the shopping cart (e.g., “Do you really need the shoe in two sizes?”) | 3.14 |
| Retention reward-based | Vouchers/tickets for local events for redemption of sustainability points | 2.94 |
| Retention game- and competition-based | Sustainability points for writing reviews | 2.93 |
| Awareness information-based | Place of residence/region (e.g., Bremerhaven) as a search term | 2.36 |

Another interesting topic within the study are the shipping-packaging-related incentive systems. Given the significant environmental impact of shipping packaging, the respondents’ approval is of particular importance. According to the survey results, the two information-based approaches “waiver of the shipping packaging” and “use of reusable packaging” produced above-average mean scores of 3.74 and 4.25 points, respectively.

4 Discussion

The evaluation and analysis of the online survey “Sustainability in Online Retailing” first of all shows that, overall, there is a high awareness of sustainable behavior among the participants. This is evident from the clusters of sustainability-oriented awareness formed in the evaluation: the least sustainability-oriented cluster has a mean value of 2.9 out of a possible 5 points on the Likert scale, and only six out of 110 survey participants were assigned to it.


Overall, this is an important basic prerequisite for the development of an online marketplace with sustainability impact and for the use of corresponding incentive systems. The willingness to use the incentive systems and their assessed influence on consumer behavior were generally rated positively. Only three of the incentive systems presented received an approval rating of 3 or fewer points on the Likert scale, while 25% (12) of the incentive systems received an approval rating of more than 4 points. It should be noted, however, that the 110 respondents represent only a sample whose representativeness was not tested. Particularly because 92% of the participants are under 35 years of age, it is difficult to assess the extent to which the structure and composition of this sample corresponds to the overall population.

The difference in the acceptance of active and passive information-based incentive systems is particularly striking. Active information-based incentive systems are among the best-rated incentive systems overall. This is particularly evident in the purchase phase, where several active incentive systems were rated significantly better than several passive ones. The active incentive system pointing out the avoidance of individual shipments (in favor of bundling goods) received the highest approval rating across all groups, while the group of passive information-based incentive systems received the second-lowest average of all groups in the purchase phase, with the indication of duplicate products in the shopping cart receiving the fourth-lowest approval rating of all incentive systems.

The thesis formulated in Sect. 2.2, that merely passing on and providing information is in most cases not sufficient to induce a sustainable change in the consumer’s purchase decision, thus tends to be supported by the survey results for the purchase phase. Given the exclusively intrinsic motivation of passive information-based incentive systems, one optimization potential would be the combination of incentive systems of different types. For example, a passive information-based incentive could be followed by a reward-based incentive that is more extrinsic in nature. It can thus be assumed that the consumer is more likely to act sustainably within the purchase decision if there is a chance of being rewarded for following an information-based hint.

In addition to their integration into a regional retail platform with a special focus on sustainability, the incentive systems also serve as possible solutions for counteracting the existing negative environmental impacts caused by online trade, which are explained in more detail in Sect. 1.2 and classified according to their relevance. According to the studies mentioned in Sect. 1.2, the environmental impacts caused by shipping packaging are not negligible and can be reduced by avoiding and reducing packaging. Considering that, in addition to a protective function, shipping packaging also allows the contents of a shipment to be handled anonymously, the high level of acceptance of the two information-based incentive systems “waiver of the shipping packaging” and “use of reusable packaging” must be rated as particularly important, and both should be incorporated into the concept of the online marketplace. The survey results regarding the incentive systems are confirmed, among other things,
by a previous study on sustainability in online retailing conducted by ECC Cologne in 2015 [23]. Its evaluation showed that 87% of the participants consider the incentive “reuse of packaging” their favorite. By applying the findings presented here, negative environmental impacts can be reduced.

5 Future Outlook

Consumer purchasing decisions have implications across the three dimensions of sustainability: ecological, economic, and social. In particular, the decision whether a customer buys from a bricks-and-mortar retailer or from an (international) online retailer has a major impact. The choice of product, retailer, means of transport, and personal mobility also affects emissions and the local multiplier effect.

While this paper investigated the acceptance of various incentive systems on the part of consumers, the extent to which this acceptance also exists on the part of the retailers is equally decisive for their implementation. Ideally, an incentive system should result in an economic benefit for the retailer. Incentives that are less attractive to consumers according to the survey can be evaluated differently from the retailers’ perspective. For example, the note on duplicate products of the same type is technically easy to implement, and the economic benefit for the retailer can be positive if it reduces the number of returns and thus the costs of processing them.

Based on this, a bachelor thesis is planned within the research project “R3—Resilient, Regional, Retail in the Metropolitan Region Northwest” at the University of Applied Sciences Bremerhaven to examine in more detail the development and acceptance of incentive systems for the use of a retail platform from both the supplier and the buyer perspective. Synergies between incentive systems on the supply and demand sides will also be investigated. For example, the incentive of comprehensive product descriptions for more conscious shopping was rated positively within the survey; suppliers who provide such extensive product descriptions could be rewarded with higher visibility within the retail platform.

Furthermore, the feasibility of the developed incentive systems must be examined. To create a meaningful sustainability report, the operator of the platform depends on the retailers being able to provide emission details for their products. The retailers, in turn, depend on the cooperation of the respective product manufacturers. However, it is questionable whether this information is available in every case and can be presented in a way that is comprehensible to consumers.

To ultimately verify whether the incentive systems described are suitable for initiating a behavioral change in purchasing decisions in favor of sustainability, they must be tested within a real application. For this purpose, it would be conceivable to evaluate them in connection with an already existing online marketplace.


References

1. Kreutzer, R.T.: Führungs- und Organisationskonzepte im digitalen Zeitalter. In: Dialogmarketing Perspektiven 2017/2018: Tagungsband 12. wissenschaftlicher interdisziplinärer Kongress für Dialogmarketing, pp. 21–87. Springer Fachmedien, Wiesbaden (2018)
2. Balan, C.: How does retail engage consumers in sustainable consumption? A systematic literature review. Sustainability 13, 96 (2020). https://doi.org/10.3390/su13010096
3. Stallmann, M.: Die Ökologisierung des Onlinehandels. Umweltbundesamt (2021). https://www.umweltbundesamt.de/en/publikationen/die-oekologisierung-des-onlinehandels-0. Accessed 28 Feb 2023
4. Postpischil, R., Jacob, K.: E-Commerce vs. stationärer Handel: Die Umwelt- und Ressourcenwirkungen im Vergleich (2019). https://doi.org/10.17169/refubium-2557
5. Fernandez, D., Chegut, A., Scott, J., Glennon, E., Yang, J.: Retail carbon footprints: measuring impacts from real estate and technology. Real Estate Innovation Lab (2021). https://realestateinnovationlab.mit.edu/research_article/retail-carbon-footprints-measuring-impacts-from-real-estate-and-technology/. Accessed 5 Jun 2023
6. Wiese, A., Toporowski, W., Zielke, S.: Transport-related CO2 effects of online and brick-and-mortar shopping: a comparison and sensitivity analysis of clothing retailing. Transp. Res. Part D: Transp. Environ. 17(6), 473–477 (2012). https://doi.org/10.1016/j.trd.2012.05.007
7. Schumann, G.: Klimawirkungen auf dem Prüfstand: Wie umwelt(un)freundlich ist der E-Commerce wirklich? (2021). https://www.gambio.de/files/images/presse/Studie_Ecommerce_Klimawirkungen-auf-dem-Pruefstand_Gambio.pdf. Accessed 5 Jun 2023
8. Moretti, E.: Local multipliers. Am. Econ. Rev. 100(2), 373–377 (2010). https://doi.org/10.1257/aer.100.2.373
9. Domański, B., Gwosdz, K.: Multiplier effects in local and regional development. Quaestiones Geographicae 29(2), 27–37 (2010)
10. Eisemann, L.: Die Ökologisierung des Onlinehandels. Umweltbundesamt (2020). https://www.umweltbundesamt.de/publikationen/die-oekologisierung-des-onlinehandels. Accessed 5 Jun 2023
11. Deutsches CleanTech Institut (ed.): Klimafreundlich einkaufen – Eine vergleichende Betrachtung von Onlinehandel und stationärem Einzelhandel (2015). http://www.pressebereich20.de/download/DCTI_Studie/Studie_Klimafreundlich_Einkaufen.pdf. Accessed 5 Jun 2023
12. Edwards, J., McKinnon, A., Cullinane, S.: Comparative carbon auditing of conventional and online retail supply chains: a review of methodological issues. Supply Chain Manag. Int. J. 16, 57–63 (2011). https://doi.org/10.1108/13598541111103502
13. Stallmann, M.: Klimabilanz von Online- und Ladenkauf: Das Produkt entscheidet. Umweltbundesamt (2020). https://www.umweltbundesamt.de/presse/pressemitteilungen/klimabilanz-von-online-ladenkauf-das-produkt. Accessed 5 Jun 2023
14. Heinemann, G., Gaiser, C.W.: SoLoMo – Always-on im Handel. Springer Fachmedien, Wiesbaden (2016). https://doi.org/10.1007/978-3-658-13545-4
15. Kreutzer, R.T., Land, K.-H.: Digitaler Darwinismus. Springer Fachmedien, Wiesbaden (2016). https://doi.org/10.1007/978-3-658-11306-3
16. Bartscher, T.: Definition: Anreizsystem. https://wirtschaftslexikon.gabler.de/definition/anreizsystem-30143/version-253734. Accessed 5 Jun 2023
17. Wagner vom Berg, B.: Konzeption eines Sustainability Customer Relationship Managements (SusCRM) für Anbieter nachhaltiger Mobilität. Shaker, Aachen (2015). http://katalog.suub.uni-bremen.de/DB=1/LNG=DU/CMD?ACT=SRCHA&IKT=8000&TRM=1830938924*
18. CCEC Research: Lokale Shoppingplattformen 2016. Soest (2016)
19. Umweltbundesamt (ed.): Leitfaden für mehr Umweltfreundlichkeit im Onlineshop (2022)
20. Mangiaracina, R., Brugnoli, G., Perego, A.: The eCommerce customer journey: a model to assess and compare the user experience of the eCommerce websites, vol. 14 (2009)
21. Pink, M., Djohan, N.: Effect of ecommerce post-purchase activities on customer retention in Shopee Indonesia. 12(1), Art. no. 1 (2021)
22. Scholl, G., Gossen, M., Holzhauer, B., Schipperges, M.: Mit welchen Kenngrößen kann Umweltbewusstsein heute erfasst werden? Umweltbundesamt (2016). https://www.umweltbundesamt.de/publikationen/welchen-kenngrossen-kann-umweltbewusstsein-heute. Accessed 5 Jun 2023
23. IFH Köln: Nachhaltigkeit im Online-Handel – Was sagt der Kunde? IFH Köln, 09 Nov 2015. https://www.ifhkoeln.de/nachhaltigkeit-im-online-handel-was-sagt-der-kunde/. Accessed 5 Jun 2023

Geospatial Data Processing and Analysis of Cross-Border Rail Infrastructures in Europe

Peter Paul Theisen and Benjamin Wagner vom Berg

Abstract The European Union has established two major goals: the interconnection of Europe and achieving climate neutrality by 2050. Rail transport, known as the most environmentally friendly mode of transportation, has the potential to bridge these goals if the right strategies are implemented. However, Europe still faces significant challenges in developing a unified rail transport network, particularly evident in the inadequate infrastructure in border areas. This study aims to address these challenges by first identifying all locations in Europe where cross-border rail traffic occurs and then exploring potential factors that influence the development of cross-border rail connections. To achieve this, a comprehensive literature review was conducted to identify potential influencing factors. Subsequently, a quantitative data analysis was performed using geographic data to identify relationships and confirm potential influences. Geographical Information Systems were utilized to create comprehensive datasets, providing detailed information on all cross-border rail connections in Europe and their corresponding border regions. The analysis confirmed a strong economy and a common language as the most significant factors influencing the emergence of cross-border rail links. Surprisingly, no correlation was observed between population size and the presence of cross-border rail infrastructure in border regions. In this context, it was discovered that many populous regions lack a direct rail connection to their neighbouring region when separated by a national border. This shows the persistent divisive nature of national borders in Europe, despite the existence of the single market and freedom of movement. The datasets generated in this study offer highly accurate geospatial data on European cross-border rail infrastructure. These datasets hold great potential for future research endeavours across multiple domains, providing fresh perspectives on the infrastructure of border areas.

Keywords Cross-border infrastructure · GIS · Rail infrastructure · European border areas · Geospatial data

P. P. Theisen (✉) · B. W. vom Berg
University of Applied Sciences Bremerhaven, Bremerhaven, Germany
e-mail: [email protected]; [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
V. Wohlgemuth et al. (eds.), Advances and New Trends in Environmental Informatics 2023, Progress in IS, https://doi.org/10.1007/978-3-031-46902-2_14


1 Introduction

1.1 Background

The European Union has committed itself to becoming climate neutral by 2050. The framework conditions for this were laid down in the so-called European Climate Law [1]. In order to achieve this ambitious goal, however, a wide range of measures must be taken. Special attention is paid to the transport sector: it currently accounts for about a quarter of the EU’s total greenhouse gas emissions [2] and is therefore of particular importance in the fight against climate change [3]. The inclusion of sustainability in the list of six key EU priorities for the period from 2019 to 2024 [4], together with the goal of being climate neutral by 2050, means that the EU’s transport infrastructure is increasingly coming into the focus of political decision-makers and the public. Railways in particular should play a major role in achieving climate neutrality, as they are the most sustainable and environmentally friendly mode of transport [5].

Cross-border rail infrastructure plays an important, but often underestimated, role in mobility within the EU. Despite its relatively small area of approximately 4,235,000 km², the European Union is made up of 27 states. The density of national borders is therefore very high, which means that border regions occupy a large part of the total area. In fact, these border regions account for 40% of the EU territory, and about 30% of the population lives there [6]. Nevertheless, due to their peripheral location, the border regions are less well connected than the core regions and often have structural problems such as low economic strength or inadequate employment development and infrastructure [7]. For example, only about 44% of border residents in the EU have access to rail transport [8].

Although the European single market, including the free movement of persons and the quasi-abolition of borders, is one of the EU’s greatest achievements, international train journeys within the EU are still often difficult. This is due to various reasons, such as the lack of a uniform ticketing and signalling system, the often high prices, the number of changes and thus the increased travel time, the sparse number of night train connections, and many more. A more fundamental problem, however, is missing or unused infrastructure. This is particularly noticeable at national borders with the so-called missing links, where railway lines often end a few metres before the border. Furthermore, there are many abandoned cross-border railway connections that are no longer in operation [9]. In fact, only 0.65% of all rail journeys in the EU cross a national border [10].

1.2 Aim of the Work

Why cross-border rail transport is nevertheless successful in some regions, while in others the national border separates the track infrastructure of two countries like
an imaginary impenetrable wall, is examined more closely in this study. The aim is to analyse the already existing and operating rail infrastructure in connection with potential external influencing factors. The major difficulty here is that there is no uniform EU-wide rail infrastructure network and thus no official record of the entire intra-European rail network. The first challenge is therefore to record and map the entire European rail network with the help of available current data, in order to identify all cross-border rail connections (CBRCs) in the EU in a further step. The following research question will be addressed: Which factors have an influence on the emergence of cross-border rail connections?

This work first assumes that cross-border rail links are found only sporadically at many national borders, and not at all in some regions of the EU. Furthermore, it is assumed that there are external factors that favour or impede the emergence of connections. These factors are examined in more detail in this paper.

1.3 Structure

To address the research question, the utilization of geographic information systems, commonly known as GIS, is essential. GIS refers to computer-based systems employed for processing spatial data. In this study, spatial data is processed geographically with various software tools to conduct the analysis. The primary objective is to compile cross-border connections and the corresponding border regions, while augmenting them with comprehensive metadata to facilitate an in-depth examination of potential influencing factors. As a result, the data collection and processing produce up-to-date and comprehensive datasets on all intra-European cross-border rail connections.

For future analyses and research, the procedures described in this work can be repeated with the utilized datasets, some of which are updated on a daily basis, in order to obtain an up-to-date data basis at any time. This allows for the establishment of a constantly updated and reliable database, facilitating ongoing investigations and research endeavours. Subsequently, the data are interpreted in a quantitative data analysis to make statements about possible influencing factors that favour or impair the emergence of CBRCs.

The possible applications of the geographic data generated in the course of this work are manifold, as they can be combined with various metadata from research areas such as mobility research, border area research, economics, infrastructure planning, or social science, and can thus contribute to research in the corresponding disciplines. The data can also be of great value for political decision-makers (especially in the field of transport policy).


2 The Search for Influencing Factors

To address the research question, a thorough analysis of external influences on cross-border rail connections (CBRCs) is necessary. However, it is crucial to first identify the potentially relevant influences. Since the work is hypothesis-driven, the most probable influencing factors must be derived from theoretical foundations. The choice of influencing factors therefore depends on careful theoretical research. For this purpose, three topics were selected which are highly pertinent to cross-border transportation. Firstly, the topic of cross-border cooperation was examined more closely: effective operation of infrastructure spanning both sides of a border necessitates cooperation between at least two countries or regions. Secondly, a closer examination of the motives for border crossings was conducted, focusing on understanding what motivates individuals to cross borders. Lastly, the Trans-European Transport Network was explored as another relevant issue. Given that the planning of this transport network involves a comprehensive consideration of multiple criteria, potential influencing factors can be derived directly from these criteria.

After this theoretical research, potential influencing factors were explored, also taking into consideration the availability of data. As a result, five specific potential influencing factors were selected for in-depth analysis: language, tourism, economy, population, and natural borders.

3 Collection and Processing of the Geographic Data

The analysis of the CBRCs presupposes that analysable data on them are available. So far, however, no usable dataset with geospatial information on these connections exists. An important prerequisite for the analysis is therefore the identification of all CBRCs in the European Union. Since rail infrastructure often exists in border areas but only extends to just before the border (missing links), it is important to determine which parts of this infrastructure actually cross the national borders. To address this, a process has been developed that will now be examined in more detail. Figure 1 illustrates the complete (streamlined) procedure of this study.

Fig. 1 Simplified flow chart of the process

3.1 Creation of the Base Map

Given the limited detail of the boundary lines in the EU’s GISCO map, more precise geographical data from OpenStreetMap (OSM) is utilized. The goal is to extract and subsequently compile geographic data from the individual countries, creating a comprehensive and accurate map specifically focusing on the course of the borders. Despite the complexity and time required, this process proves valuable, as the enhanced accuracy mitigates significant data misregistration issues.

3.2 Data on Railway Infrastructure

OpenStreetMap provides geospatial data that can be freely utilized within a vast and openly accessible database. The original files receive regular updates and are available for download. Nevertheless, handling these raw files presents challenges due to their immense size. Additionally, a raw dataset encompassing the European region contains much extraneous information. To address this issue and acquire only pertinent data, the following approach was taken.

First, the smallest possible dataset covering the study area was selected; in this case, the dataset “Europe” [11]. The dataset was downloaded in the heavily compressed osm.pbf format, which still resulted in a size of approximately 25 GB. Therefore, efforts were made to minimize the amount of data without compromising the quality of the information. For the identification of the CBRCs, only the border regions of countries were examined, disregarding the infrastructure in the interior of each country. The approach taken was therefore not to filter the European dataset in its entirety, but to consider only the border regions.

To filter data exclusively in the border regions, those regions had to be extracted from the European dataset. This involved creating polygons along the border courses. Utilizing these poly-files, an Osmosis command was written that individually extracted the border areas, together with all their geographical data, from the European dataset, creating new datasets for each border area. Osmosis, a Java-based command-line application specifically designed for processing OSM data, enables the creation of database extracts from diverse databases. This forms the basis for effectively filtering out specific tags from the border regions.

Each tag comprises a key and a corresponding value. The key defines the category of property (e.g., ‘railway’), while the value specifies the specific property within that category (e.g., ‘subway’). In order to identify the rail infrastructure, relevant tags were determined, and commands were written to filter the relevant rail infrastructure data out of the datasets of all the individual border regions.
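For illustration, the following sketch drives Osmosis from Python in the two steps just described: first cutting each border region out of the Europe extract using its poly-file, then keeping only railway-tagged ways. The file names and the exact set of railway tag values are assumptions; the study’s own Osmosis commands are not reproduced in the text.

```python
import subprocess
from pathlib import Path

EUROPE_PBF = "europe-latest.osm.pbf"   # full OSM extract (~25 GB)
POLY_DIR = Path("border_polygons")     # one .poly file per border region
OUT_DIR = Path("border_extracts")
OUT_DIR.mkdir(exist_ok=True)

# Railway tag values to keep; the exact selection used in the study is
# an assumption here.
RAIL_VALUES = "rail,light_rail,narrow_gauge"

for poly in sorted(POLY_DIR.glob("*.poly")):
    region_pbf = OUT_DIR / f"{poly.stem}.osm.pbf"
    rail_pbf = OUT_DIR / f"{poly.stem}_rail.osm.pbf"

    # Step 1: extract the border region from the Europe dataset.
    subprocess.run(
        ["osmosis",
         "--read-pbf", f"file={EUROPE_PBF}",
         "--bounding-polygon", f"file={poly}",
         "--write-pbf", f"file={region_pbf}"],
        check=True,
    )

    # Step 2: keep only ways tagged as railway infrastructure,
    # together with the nodes they reference.
    subprocess.run(
        ["osmosis",
         "--read-pbf", f"file={region_pbf}",
         "--tag-filter", "accept-ways", f"railway={RAIL_VALUES}",
         "--used-node",
         "--write-pbf", f"file={rail_pbf}"],
        check=True,
    )
```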

3.3 Identification of the Intersections

In the preceding chapter, all pertinent geospatial data on the European cross-border rail infrastructure were filtered, resulting in the availability of raw data. To interpret, visualize, and analyse this data, the software QGIS is utilized. First of all, all positions are to be determined where a rail connection intersects a
national border and can thus be counted as a cross-border rail connection. By finding these intersections, the requirements for addressing the research question are met, enabling the determination of the locations of cross-border rail infrastructure throughout Europe. An algorithm was employed to generate point objects at the intersections between the input layer, which represents the national borders in Europe, and the intersect layer, which contains the relevant infrastructure filtered by tags. These point objects represent all positions in Europe where rail infrastructure crosses the border of a country.

The intersections calculated in this way are overlaps between national borders and tracks. However, railway lines, not the number of tracks, are to be used as the basis for the analysis. In this study, a railway line counts as one cross-border connection, regardless of whether it is a single- or multi-track line. The number of tracks, shunting tracks, sidings, etc. is deemed irrelevant, as their inclusion could potentially distort the results. To address this issue and treat multi-track railway lines as a single connection, “buffers” (polygons around these intersections) were created and merged. Centroids, representing the polygon centres, were then calculated to count the rail connections accurately. The count of these centroids thus reflects the number of cross-border rail connections.
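The study performs these steps in QGIS; the following GeoPandas sketch reproduces the same intersection-buffer-centroid logic programmatically. The file names, the coordinate reference system, and the 500 m buffer radius are assumptions chosen for illustration.

```python
import geopandas as gpd

# Hypothetical inputs: national border lines and the filtered railway
# ways, reprojected to a metric CRS so buffer distances are in metres.
borders = gpd.read_file("borders.gpkg").to_crs(epsg=3035)
rails = gpd.read_file("border_rail.gpkg").to_crs(epsg=3035)

# Point objects wherever a track geometry crosses a national border
# (the equivalent of the QGIS "line intersections" algorithm).
crossings = borders.unary_union.intersection(rails.unary_union)
parts = gpd.GeoSeries([crossings], crs="EPSG:3035").explode(index_parts=False)
points = parts[parts.geom_type == "Point"]

# Parallel tracks of one line yield several nearby intersection points;
# merge them via buffer -> dissolve -> centroid. The 500 m radius is an
# illustrative assumption.
merged = gpd.GeoSeries([points.buffer(500).unary_union],
                       crs="EPSG:3035").explode(index_parts=False)
centroids = merged.centroid
print(f"{len(centroids)} cross-border rail connections identified")
```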

3.4 Preliminary Result

A total of 242 connections were identified, which are shown in the map below. Each point represents a position where a rail link crosses the national border of an EU Member State, Switzerland, the United Kingdom, or Norway (Fig. 2).

Fig. 2 All cross-border rail connections across Europe

3.5 Filtering Border Regions with CBRCs

The NUTS classification is a uniform system for dividing the EU territory into comparable territorial units (regions). Its main purpose is to simplify and standardize data collection, making it primarily a statistical framework [12]. The regions are assigned a numerical code, with lower numbers indicating broader divisions (e.g., federal states) and higher numbers indicating finer subdivisions (e.g., counties). The NUTS classification therefore functions as a hierarchical system.

In order to find out in which European border regions cross-border rail transport exists, a layer with all European NUTS-2 and NUTS-3 regions is needed, as well as the intersections calculated in the previous chapter. By merging these layers, the number of CBRCs can be determined for each NUTS region. This process was carried out at both the NUTS-2 and the smaller NUTS-3 level, in order to find out at both hierarchical levels which regions have CBRCs between them.

264

P. P. Theisen and B. W. vom Berg

Fig. 2 All cross-border rail connections across Europe

regions at both NUTS-2 and NUTS-3 level, will in future be referred to as the CBRC-dataset.
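A hedged sketch of this merging step as a spatial join; the file names and the NUTS_ID attribute are assumptions about the layer layout.

    import geopandas as gpd

    nuts3 = gpd.read_file("nuts3_regions.gpkg").to_crs(epsg=3035)  # NUTS-3 polygons
    cbrc = gpd.read_file("cbrc_points.gpkg").to_crs(epsg=3035)     # connection centroids

    joined = gpd.sjoin(nuts3, cbrc, how="left", predicate="intersects")
    # count() skips the NaN rows of regions without any crossing point,
    # so regions without a CBRC correctly receive a count of zero.
    counts = joined.groupby("NUTS_ID")["index_right"].count().rename("n_cbrc")
    nuts3 = nuts3.merge(counts, on="NUTS_ID")

The same join, run once with NUTS-2 and once with NUTS-3 polygons, yields the counts at both hierarchical levels.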

3.6 Identification of All European Border Regions

To compare border regions with and without cross-border links in the analysis, all European border regions were identified at NUTS-2 and NUTS-3 level. All identified NUTS-2 and NUTS-3 border regions were exported so that they could be processed in the analysis and supplemented with metadata. These data served as a comparison basis for identifying discrepancies between regions with and without CBRCs.

4 Analysis of Potential Influencing Factors

To examine the potential influencing factors on the emergence of CBRCs, a way must first be found to supplement the connections with metadata for the factors to be examined. This is done by assigning each CBRC to its two associated NUTS regions. All regions can then be listed and their associated CBRCs summarised, resulting in a list of all European border regions together with the number of their CBRCs. This dataset will hereafter be referred to as the regions dataset. Depending on the type of analysis, this information is relevant at the NUTS-2 or NUTS-3 level and is therefore prepared for both levels where possible. The border regions can now be expanded with statistical data on the respective potential influencing factors, which makes it possible to link the number of CBRCs in a region with the respective metadata. Different statistical methods were chosen for the different influencing factors.

After determining the number of CBRCs per region, some regions exhibited disproportionately high connection counts. This results from rail links that cross the same national border multiple times. By resolving these overlaps (one plausible implementation is sketched below), the refined regions dataset encompasses all European border regions, each with a corrected CBRC count. In conjunction with metadata on the respective influencing factors, the potential influencing factors are now analysed individually.
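The paper does not detail how the overlaps are resolved; one plausible implementation, assuming each crossing record carries a line identifier and the two adjacent region codes (hypothetical columns), would be to drop duplicate line/region-pair combinations:

    import pandas as pd

    links = pd.read_csv("cbrc_links.csv")  # columns: line_id, region_a, region_b (assumed)
    deduped = links.drop_duplicates(subset=["line_id", "region_a", "region_b"])
    counts = deduped.groupby("region_a").size().rename("n_cbrc")  # corrected counts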

4.1 Analysis of Language as an Influencing Factor

In the CBRC-dataset, each region has a country code (according to ISO 3166-1), which can be supplemented with data on the official national language(s) (according to ISO 639-1). For each connection it can then be checked whether the two border regions have a common official language or, in the case of multiple official languages (e.g., in Switzerland or Spain), whether at least one language matches. If the two regions belonging to a border crossing have at least one official language in common, this is recorded for the respective regions in the regions dataset. Each border region thus receives the additional information of how many of its CBRCs lead to a neighbouring region with the same language. Statements can then be made about whether there is a correlation between the number of CBRCs in a region and the number of connections with a common language.
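A small sketch of the language-match check; the mapping from ISO 3166-1 country codes to ISO 639-1 official languages is abbreviated and illustrative.

    # Abbreviated, illustrative mapping; the full table covers all countries studied.
    OFFICIAL_LANGUAGES = {
        "DE": {"de"}, "AT": {"de"}, "FR": {"fr"},
        "CH": {"de", "fr", "it", "rm"},
        "ES": {"es", "ca", "gl", "eu"},
    }

    def shares_language(country_a: str, country_b: str) -> bool:
        """True if the two countries share at least one official language."""
        return bool(OFFICIAL_LANGUAGES[country_a] & OFFICIAL_LANGUAGES[country_b])

    assert shares_language("DE", "CH")       # German on both sides
    assert not shares_language("FR", "ES")   # no common official language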

4.2 Analysis of the Economy as an Influencing Factor

To analyse the economy as an influencing factor, the regional GDP of each NUTS-2 [13] and NUTS-3 [14] border region is required. The respective 2020 datasets were obtained in tabular form from Eurostat and assigned as metadata to the border regions identified above in the regions dataset. With this information, it could be determined whether a correlation exists between the number of CBRCs and the regional GDP of the NUTS regions. Since the literature research suggested that GDP can have a direct influence on the number of rail connections between countries, GDP is treated as the independent variable (predictor) and the number of connections as the dependent target variable. In addition, a regression analysis was performed.
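The correlation and regression step could look as follows; the file and column names are assumed placeholders, and the use of Pearson's r with a simple linear regression is an assumption consistent with the linear analyses reported in the results.

    import pandas as pd
    from scipy import stats

    df = pd.read_csv("regions_nuts2.csv")  # columns: nuts_id, gdp_meur, n_cbrc (assumed)

    r, p = stats.pearsonr(df["gdp_meur"], df["n_cbrc"])   # correlation coefficient
    reg = stats.linregress(df["gdp_meur"], df["n_cbrc"])  # GDP as predictor
    print(f"r = {r:.2f} (p = {p:.3f}), R^2 = {reg.rvalue ** 2:.4f}")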

4.3 Analysis of Population Size as an Influencing Factor

In analysing the relationship between population size and the number of CBRCs, a correlation coefficient is determined, as for the economic factor. A dataset from Eurostat from the year 2021 with the population size at both NUTS-2 [15] and NUTS-3 [16] level was used as the basis. All border regions at NUTS-2 and NUTS-3 level were included, which means that border regions without a rail connection were also part of the analysis. Subsequently, correlations were examined and a regression analysis was conducted.

4.4 Analysis of Tourism as an Influencing Factor

To analyse tourism as an influencing factor, both the number of arrivals at tourist accommodation establishments [17] and the number of nights spent at tourist accommodation establishments [18] at NUTS-2 level were used as the data basis. To exclude the influence of the COVID-19 pandemic, datasets from 2019 were used. Both datasets also originate from Eurostat and were allocated to the respective border regions using the NUTS-2 code. No comparable data could be found at NUTS-3 level. As with GDP and population size, correlations with the number of CBRCs were then sought.

4.5 Analysis of Natural Borders as an Influencing Factor

For the natural borders, data from the European Environment Agency were retrieved to be linked and processed with the map produced for this work. Three datasets were used to include mountains [19], large water bodies (rivers or lakes) [20] and nature reserves [21] in the analysis. For each dataset, all intersections between the polygons of the natural borders and the CBRCs were identified. The results for all three types of natural borders were exported and analysed together with the regions in the regions dataset. To test whether natural borders affect the number of CBRCs, a chi-square test was performed.
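A sketch of this intersection step; the three input layers mirror the EEA datasets [19-21], but the file names and CRS choice are assumptions.

    import geopandas as gpd
    from shapely.ops import unary_union

    cbrc = gpd.read_file("cbrc_points.gpkg").to_crs(epsg=3035)
    for name in ("mountains", "large_waters", "nature_reserves"):
        layer = gpd.read_file(f"{name}.gpkg").to_crs(epsg=3035)
        area = unary_union(list(layer.geometry))
        cbrc[f"crosses_{name}"] = cbrc.geometry.intersects(area)  # point-in-polygon test
    cbrc.to_file("cbrc_with_natural_borders.gpkg", driver="GPKG")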

5 Results

5.1 Language

The same language is spoken on both sides of the border in 82 of the 242 CBRCs; the proportion of connections between regions speaking the same language is thus 34%. A closer look at the heat map of all CBRCs reveals that a large proportion of the connections are located in the German-speaking DACH region and in the French-speaking regions of Belgium, Switzerland, Luxembourg and France. Figure 3 shows all CBRCs, while Fig. 4 shows only those CBRCs that run between regions of the same language. Looking at Eastern Europe, it is noticeable that many CBRCs exist despite many border regions with different languages; the language differences there do not seem to have a negative effect on the spread of cross-border rail infrastructure. To establish a correlation between the number of CBRCs in a region and the number of these connections to a region with the same language, the Spearman correlation coefficient was calculated. The result was a correlation of 0.335, a moderate positive correlation. For the border regions, this means that bordering a region with the same language is often associated with an increased number of CBRCs.

Fig. 3 Heatmap of all CBRCs (own representation)

Fig. 4 Heatmap of all CBRCs with neighbouring regions of the same language (own representation)

5.2 Tourism

Data on the number of nights spent at tourist accommodation establishments were available for 131 of the 141 border regions. With a correlation coefficient of −0.03, no correlation was found. Similarly, data on arrivals at tourist accommodation establishments were available for 133 of the 141 border regions; with a correlation coefficient of 0.01, again no correlation was found. No official dataset was available for the smaller NUTS-3 regions.

5.3 Economy

At NUTS-2 level, data were found and analysed for 128 of the 141 border regions. Regions in Switzerland were not included in the dataset and were therefore not part of the analysis. The correlation between the number of rail links of a NUTS-2 region and its GDP is very low at 0.02, meaning that no correlation could be proven. The regression line, with a coefficient of determination R² of 0.0002, indicates the absence of a linear relationship, or that the predictive capacity of the model is limited. A scatter diagram also shows a high number of outliers, which is not an optimal condition for a linear regression; however, even without outliers, no significant linear correlation could be found (correlation coefficient −0.05). It is noticeable that a correlation is most likely to be discernible in regions with a high GDP: the correlation coefficient between the number of CBRCs and the GDP of the 20 economically strongest regions is 0.545, which indicates a moderately strong positive correlation.

In the analysis of the NUTS-3 regions, a larger correlation coefficient of 0.42 was found; the coefficient of determination of 0.18 likewise indicates a stronger linear relationship. At NUTS-3 level, the median GDP in regions with CBRCs (6273.36 million euros) is significantly larger than in regions without CBRCs (3726.33 million euros). However, for the analysis of the NUTS-3 regions, data were only found for 115 of the 332 border regions, so only about 35% of the regions could be analysed.

5.4 Population

At NUTS-2 level, metadata on population were found for 139 of the 141 border regions and analysed accordingly. No statistical correlation between population size and the number of CBRCs in a border region could be found; the correlation coefficient is 0 and thus does not indicate a linear relationship. At the smaller NUTS-3 level, data on population size were found for 326 of the 332 identified border regions. With a correlation coefficient of 0.28, a weak positive correlation was found. Among the ten most populous NUTS-3 border regions, five have only one rail connection and two do not have a single connection to a neighbouring country. Regardless of the number of connections, however, far more people live on average in border regions with at least one cross-border rail connection (391,980) than in regions without any CBRC (241,466). The median of the border regions with CBRCs is also significantly larger (282,885) than that of the regions without CBRCs (187,343). The analysis of the residuals shows that regions with few or no CBRCs despite a very high population are often regions with a very small land border length relative to their area. The visual comparison of particularly populated areas in NUTS-2 border regions and the presence of CBRCs shows similar structures, especially in the Central European region, while significant differences are noticeable in Southern Europe (Figs. 5 and 6).
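The group comparison can be reproduced with a few lines of pandas; file and column names are assumed placeholders, and the commented values are those reported above.

    import pandas as pd

    df = pd.read_csv("regions_nuts3.csv")  # columns: population, n_cbrc (assumed)
    groups = df.groupby(df["n_cbrc"] > 0)["population"]
    print(groups.mean())    # ≈ 241,466 without vs. ≈ 391,980 with CBRCs (as reported)
    print(groups.median())  # ≈ 187,343 without vs. ≈ 282,885 with CBRCs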

Fig. 5 Populated areas in Europe (own representation)

Fig. 6 Presence of CBRCs (own representation)

5.5 Natural Borders

To perform a chi-square test on the natural borders, the following contingency table was generated (Table 1). Since almost all border regions contain at least one nature reserve, the table was generated without including nature reserves as natural borders.

Table 1 Distribution of border regions for chi-square test (without nature reserves)

                            Natural borders   No natural borders   Total
    Regions with CBRCs      162               61                   223
    Regions without CBRCs   73                35                   108
    Total                   235               96                   331

Performing the chi-square test on this table yields χ² = 0.90228. For df = 1, the chi-square table gives a critical value of 3.841 at a significance level of 0.05. Since this critical value is higher than the χ² value, the null hypothesis is not rejected; accordingly, no statistical correlation is proven, as no dependence between the occurrence of natural borders and CBRCs could be established. Furthermore, the p-value of 0.3422 is well above the significance level of 0.05, which likewise speaks against a dependence of the variables and is a further indication that the null hypothesis should not be rejected.
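The reported statistic can be reproduced from Table 1 with SciPy; correction=False (no Yates continuity correction) matches the uncorrected χ² value given above.

    from scipy.stats import chi2_contingency

    table = [[162, 61],  # regions with CBRCs: natural borders / no natural borders
             [73, 35]]   # regions without CBRCs
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(f"chi2 = {chi2:.5f}, dof = {dof}, p = {p:.4f}")  # chi2 ≈ 0.90228, p ≈ 0.3422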

6 Discussion and Outlook

The aim of this work was to identify factors that might influence the emergence of cross-border rail links in Europe. Using geographical information systems, the approach was to create own geographical data and to combine existing tabular datasets with geographical data in order to identify relationships between the European cross-border rail infrastructure and potential influencing factors. Since the European rail network has no central administration but is composed of many different railway undertakings of the member countries, no geographical data on cross-border connections were available. An important first step and basis of this work was therefore the identification of the European cross-border rail connections, resulting in a dataset comprising the geospatial information of all cross-border rail connections in Europe.

The analysis of these connections revealed that approximately half of the CBRCs are concentrated in five Central European countries: Germany, France, the Czech Republic, Austria and Switzerland. In contrast, Northern and Southern Europe have only sporadic rail connections between countries. One explanation could be the central location of these five countries, which is associated with a higher number of neighbouring countries. Germany, for example, has nine neighbouring countries due to its central location in Europe, which makes a high number of CBRCs more likely. In general, a country with a long national border has more space for connections to its neighbouring country or countries. The Czech Republic, Austria and Switzerland have no access to the sea, which means that they are completely surrounded by neighbouring countries and have relatively long national borders in proportion to their size. In contrast, Denmark has a relatively short land border, which may explain the limited number of rail connections to neighbouring countries despite its well-developed rail network. Another reason for the increased occurrence of CBRCs in Central Europe could be that the rather small countries there have more national borders in relation to their area.

It was found that 124 of the 141 NUTS-2 border regions have an operational cross-border railway connection; crossing the border is therefore possible in most border regions at NUTS-2 level. Nevertheless, in some parts of Europe there are pronounced gaps between the CBRCs, and in large regions such as Andalusia border crossings are only possible by road. These gaps become evident at the NUTS-3 level, where only 223 of the 331 border regions have a rail link to a neighbouring country. The assumption that CBRCs in Europe are sporadic and insufficient could thus be confirmed, as a total of 108 NUTS-3 border regions have no rail infrastructure connecting them to a neighbouring country. Given that 30% of the European population lives in border regions, that the European Union aspires to achieve climate neutrality by 2050, and that it aims to connect all European regions to the trans-European transport network, the fact that more than 30% of NUTS-3 border regions still lack direct rail connections to their neighbouring countries is alarming.

To answer the research question on the basis of the geographical data of all European CBRCs, it was further investigated which factors influence the emergence of cross-border rail connections. When analysing language as a potential influencing factor, it was noticed that, despite the great diversity of languages in the EU, the same language is spoken in 34% of the border regions with at least one CBRC. Especially considering that the EU has 24 official languages, it is remarkable that over one third of the links exist between regions speaking the same language. With regard to the number of CBRCs, a moderate positive correlation was found: there tend to be more CBRCs between regions where the same language is spoken than between regions with different languages. Border regions speaking the same language have a great advantage in planning large projects, which undoubtedly include infrastructure projects, and the data collected and the analysis support this effect. The results therefore confirm the hypothesis that a common language has a positive effect on the development of cross-border rail infrastructure.

Tourism does not seem to have any impact on the rail infrastructure; at least, no correlation could be proven on the basis of the datasets used. One reason could be that both datasets assume that visitors spend at least one night in the respective region, so day trips are not included in the measurements. Since day trips play a major role especially in border areas, the choice of datasets is an important limitation [22].

No linear relationship could be found for economic power at NUTS-2 level. Nevertheless, it can be deduced from the results that economically strong regions generally have more CBRCs than weaker regions: when considering only the economically strong NUTS-2 regions, a clear positive linear relationship could be found. The presence of a linear correlation only among economically prosperous NUTS-2 regions could suggest that the number of connections increases with growing economic strength, but only beyond a certain GDP threshold. Another explanation could be that economically strong regions can draw on greater financial resources to build and maintain the often costly rail infrastructure. The analysis of the economy at the smaller NUTS-3 level concluded that there is a moderately strong positive correlation between economic performance and the number of CBRCs, i.e. the number of CBRCs increases with economic performance. However, the results concerning the NUTS-3 regions should be treated with caution, as insufficient data were available. The median GDP at NUTS-3 level is almost twice as high in regions with CBRCs as in regions without any CBRC. It can be deduced from this that, regardless of the number of CBRCs, regions without CBRCs tend to be economically weaker than regions with CBRCs. This also suggests that the number of CBRCs may play a smaller role than the fact that there is at least one cross-border connection to a neighbouring country. To ultimately establish a causal relationship, future research could explore the temporal dynamics of economic development in relation to the emergence of rail connections; this would shed further light on the connection between economic progress and the development of rail infrastructure.

At NUTS-2 level, no correlation between population size and the number of CBRCs in a region could be found; the number of CBRCs does not seem to be linearly related to the size of the population. At the NUTS-3 level, however, a weak positive correlation was observed. By considering the presence or absence of at least one CBRC (on a nominal scale, e.g., 0 for no CBRC and 1 for at least one CBRC), further analysis could investigate whether the presence of at least one CBRC is associated with a higher population. This is supported by the fact that, at NUTS-3 level, regions with a CBRC have, on average, a much larger population than regions without one. Logistic regression could be the appropriate statistical method to support the assumption that it is not the number of CBRCs but the existence of at least one link that is decisive; a sketch of such a model follows below.

The graphical comparison of the heat maps reveals notable similarities between the distribution of CBRCs and the distribution of densely populated areas, suggesting a potential correlation between population size and the presence of CBRCs. However, a different pattern emerges in the southern European countries, where no clear correlation between population size and rail infrastructure can be observed. In particular, the high population in the south of Spain and Portugal as well as between Romania, Bulgaria and Greece does not seem to translate into better rail services. A detailed analysis of the regions dataset highlights the absence of any cross-border connection between Andalusia, the second most populous NUTS-2 region in Europe, and its neighbouring country, Portugal. In Central and Northern Europe, on the other hand, a connection can certainly be seen, at least on the heat map: in Northern Europe, in contrast to Southern Europe, the small number of CBRCs corresponds to the small population, and the existing CBRCs are mostly located in the vicinity of densely populated places.
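A minimal sketch of the logistic regression suggested above, assuming a hypothetical regions file with population and CBRC counts; the column names and the choice of statsmodels are illustrative.

    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("regions_nuts3.csv")        # columns: population, n_cbrc (assumed)
    df["has_cbrc"] = (df["n_cbrc"] > 0).astype(int)

    X = sm.add_constant(df[["population"]])      # population size as sole predictor
    model = sm.Logit(df["has_cbrc"], X).fit()
    print(model.summary())                       # odds of having >= 1 CBRC vs. population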

The research revealed that one of the EU's aims with the trans-European transport network is to guarantee the accessibility and connectivity of all regions in the Union as well as the seamless mobility of people and goods. However, the data collected show that large parts of the population have no direct connection to their neighbouring countries, at least by rail, and are thus at a disadvantage compared to the well-connected regions. This results in a recommendation for action for policymakers and researchers: to identify the places in Europe where cross-border connections are most urgently needed to connect as many people as possible, one approach for further analysis could be to use the available regions dataset to investigate where in Europe the correlation between population size and the number of CBRCs is lowest. In border regions where this correlation is particularly low, there is a significant opportunity to efficiently connect a large number of people who currently lack rail connections to their neighbouring countries.

Also, no correlation between natural borders and rail infrastructure could be established. The results can be interpreted as showing that NUTS-3 regions with rivers or mountains have a similar probability of having a CBRC as regions without such natural borders. The expected values under stochastic independence even show that more connections exist in regions with natural borders, and that fewer regions without CBRCs have natural borders, than expected. The fact that the presence of mountains and large bodies of water does not preclude the existence of CBRCs becomes particularly clear in the Alpine region, especially in Switzerland. This observation could be attributed to the fact that Switzerland is known for its significant investments in infrastructure compared to other countries [23]. Furthermore, the emphasis on rail transport in Switzerland, with 37% of freight transport handled by rail in 2021, highlights the need for efficient and reliable cross-border connections in the region [24]. In the Pyrenees region, by contrast, only four CBRCs can be observed, and it is noteworthy that three of these CBRCs, which connect France and Spain, are situated on the outskirts of the Pyrenees, where the terrain is relatively flat.

The dataset used for European mountains does not allow for a differentiation by altitude, which is why such a differentiation was not possible. In the measurements, regions with steep mountain massifs and regions with flatter hilly landscapes are therefore treated equally as long as they are part of a mountain range. A differentiation, for example by altitude difference per square kilometre, or the inclusion of the number of CBRCs per region, could provide additional interesting results. Overcoming natural borders requires substantial time and financial resources and makes it difficult to establish cross-border connections; a negative effect on the emergence of CBRCs was therefore suspected. This effect cannot be confirmed by the data collected. Even if, as in the case of the Pyrenees, natural borders may indeed appear to have a negative effect in some places, no general statement can be made for the entire European area.

In total, five influencing factors were considered, and it turned out that an influence of tourism, natural borders and population size on rail infrastructure could not be confirmed.
The clearest correlations were found between CBRCs and the economic power of border regions, as well as a common language. Based on the analysis, it can therefore be assumed that a common language between neighbouring regions, as well as a strong economy in at least one neighbouring border region, has a positive effect on the development of cross-border rail infrastructure. It is important to note that the analyses were carried out for the entire European area; regional specifics were not taken into account. The high number of outliers indicates that there are likely no universal correlations, but rather that each country and each region has its own infrastructural approaches and challenges. For future research, it would therefore be interesting to check which correlations arise on a smaller scale. It could be instructive to compare economically strong or weak regions with each other, or to make a geographical separation and compare, for example, Northern Europe with Southern Europe or Eastern Europe with Western Europe.

Although the hypothesis that a higher population in a border region has a positive effect on the emergence of cross-border infrastructure could not be confirmed, a recommendation for action can be derived from this surprising result. The lack of correlation suggests that large parts of the European population are only very poorly connected to the rest of the continent. Therefore, when building new rail infrastructure, greater focus should be placed on regions which, despite a large population, have no cross-border infrastructure and are thus poorly connected to their neighbouring countries. In this context, support must be given above all to EU countries whose rail networks have generally been poorly developed to date, as these are also the countries with the worst connections to their neighbours. In the future, the trans-European transport network should guarantee connections to all regions and also place great emphasis on sustainability. However, the results of this work show that many of the densely populated border regions are not connected to their neighbouring countries by rail, which means that for a large part of the EU population, rail is not an alternative for crossing national borders within Europe. An analysis especially of the densely populated regions without CBRCs should therefore be part of further research on the problem factors of the cross-border rail transport network.

The datasets generated in this work are suitable for further investigation in many different research areas. In border region research, there is an opportunity to conduct more detailed analyses of the social dynamics in relation to the infrastructure and to place greater emphasis on regional characteristics than was done in this study. For trade research, the separate consideration of freight transport in particular would be an opportunity to gain new insights. The geographic data created can also be of value to political decision-makers and railway companies seeking a better overview of the European rail infrastructure.

For the European Union, the cohesion of Europe is an important concern, with the aim of growing even closer together in the future. To attain this objective without losing sight of the ambitious target of achieving climate neutrality by 2050, the transportation sector must make a significant contribution. As the most environmentally friendly mode of transport, the railway has the potential to combine these two goals. However, concerted efforts are required to establish railways as a viable alternative in international transportation. The issue of cross-border rail infrastructure as a prerequisite for a unified European rail network that connects everyone is therefore more important today than ever before.

References

1. European Parliament and Council of the European Union: Regulation (EU) 2021/1119 of the European Parliament and of the Council of 30 June 2021 establishing the framework for achieving climate neutrality and amending Regulations (EC) No 401/2009 and (EU) 2018/1999 ('European Climate Law'). Official Journal of the European Union L243/1 (2021)
2. Eurostat. https://ec.europa.eu/eurostat/web/nuts/statistical-regions-outside-eu. Accessed 2022/04/19
3. European Commission, Directorate-General for Mobility and Transport: EU Transport in Figures: Statistical Pocketbook 2022. Publications Office, Luxembourg (2022). https://doi.org/10.2832/928929
4. European Commission: The European Green Deal. COM(2019) 640 final (2019)
5. European Environment Agency: Transport and Environment Report 2020: Train or Plane? Publications Office of the European Union, Luxembourg (2021). https://doi.org/10.2800/43379
6. European Commission: Communication from the Commission to the Council and the European Parliament: Boosting growth and cohesion in EU border regions. COM(2017) 534 final (2017)
7. Pallagst, K., Hartz, A., Caesar, B., Akademie für Raumforschung und Landesplanung (eds.): Border futures - Zukunft Grenze - Avenir frontière: Zukunftsfähigkeit grenzüberschreitender Zusammenarbeit. Akademie für Raumforschung und Landesplanung, Leibniz-Forum für Raumwissenschaften, Hannover (2018)
8. Medeiros, E., Ferreira, R., Boijmans, P., Verschelde, N., Spisiak, R., Skoniezki, P., Dietachmair, J., Hurnaus, K., Ebster, M., Madsen, S., Ballaguy, R.-L., Volponi, E., Isinger, E., Voiry, P., Markl-Hummel, L., Harster, P., Sippel, L., Nolte, J., Maarfield, S., Lehnert, C., Perchel, A., Sodini, S., Mickova, B., Berzi, M.: Boosting cross-border regions through better cross-border transport services: the European case. Case Studies on Transport Policy 9(1), 291-301 (2021). https://doi.org/10.1016/j.cstp.2021.01.006
9. European Commission, Directorate-General for Regional and Urban Policy, Roux, L., Wolff, D., Nolte, J., Maarfield, S., Sippel, L.: Comprehensive Analysis of the Existing Cross-Border Rail Transport Connections and Missing Links on the Internal EU Borders: Final Report. Publications Office, Luxembourg (2018). https://doi.org/10.2776/69337
10. European Commission, Directorate-General for Education, Youth, Sport and Culture: Erasmus+ Annual Report 2020. Publications Office, Luxembourg (2021). https://doi.org/10.2766/36418
11. Geofabrik: https://download.geofabrik.de/europe.html. Accessed 2022/07/19
12. Statistical Office of the European Communities, European Commission: Regions in the European Union: Nomenclature of Territorial Units for Statistics. Office for Official Publications of the European Communities, Luxembourg (2007)
13. Eurostat: Gross domestic product (GDP) at current market prices by NUTS 2 regions. In: Eurostat. https://ec.europa.eu/eurostat/databrowser/view/NAMA_10R_2GDP/default/table?category=na10.nama10.nama_10reg.nama_10r_. Accessed 2022/08/13
14. Eurostat: Gross domestic product (GDP) at current market prices by NUTS 3 regions. In: Eurostat. https://ec.europa.eu/eurostat/databrowser/view/NAMA_10R_3GDP/default/table?category=na10.nama10.nama_10reg.nama_10r_gdp. Accessed 2022/10/28
15. Eurostat: Population on 1 January by age, sex and NUTS 2 region. In: Eurostat. https://ec.europa.eu/eurostat/databrowser/view/DEMO_R_D2JAN/default/table?category=demo.demopreg. Accessed 2022/08/12
16. Eurostat: Population on 1 January by age group, sex and NUTS 3 region. In: Eurostat. https://ec.europa.eu/eurostat/databrowser/view/DEMO_R_PJANGRP3/default/table?category=demo.demopreg. Accessed 2022/10/28
17. Eurostat: Arrivals at tourist accommodation establishments by NUTS 2 regions. In: Eurostat. https://ec.europa.eu/eurostat/databrowser/view/TOUR_OCC_ARN2/default/table?lang=en&category=tour.tour_inda.tour_occ.tour_occ_a. Accessed 2022/11/03
18. Eurostat: Nights spent at tourist accommodation establishments by NUTS 2 regions. In: Eurostat. https://ec.europa.eu/eurostat/databrowser/view/TOUR_OCC_NIN2/default/table?lang=en&category=tour.tour_inda.tour_occ.tour_occ_n. Accessed 2022/08/13
19. European Environment Agency (EEA): European mountain areas. In: EEA. https://www.eea.europa.eu/data-and-maps/data/european-mountain-areas. Accessed 2022/10/10
20. European Environment Agency (EEA): WISE Large rivers and large lakes. In: EEA. https://www.eea.europa.eu/data-and-maps/figures/wise-large-rivers-and-large-lakes. Accessed 2022/10/10
21. European Environment Agency (EEA): Nationally designated areas (CDDA). In: EEA. https://www.eea.europa.eu/data-and-maps/data/nationally-designated-areas-national-cdda-17. Accessed 2022/10/10
22. Deutsches Wirtschaftswissenschaftliches Institut für Fremdenverkehr e.V.: Tagesreisen der Deutschen: Grundlagenuntersuchung. München (2013)
23. Statista: Investitionen in Schieneninfrastruktur pro Kopf in Europa in 2021. In: Statista. https://de.statista.com/statistik/daten/studie/70006/umfrage/investitionen-in-schieneninfrastruktur-pro-kopf/. Accessed 2022/07/07
24. Bundesamt für Statistik (BFS): Güterverkehr in der Schweiz 2021. Statistik der Schweiz (2022)