New Developments and Environmental Applications of Drones: Proceedings of FinDrones 2023 [1st ed. 2024] 3031446062, 9783031446061

This volume presents the conference proceedings from FinDrones 2023. The book highlights recent developments in drone technology.


Language: English. Pages: 156 [152]. Year: 2024.


Table of contents :
Contents
Wild Swarms: Autonomous Drones for Environmental Monitoring and Protection
Abbreviations
1 Introduction
1.1 Technology for Climate Change
1.2 Environmental Protection
1.3 Robotics and Automation to Protect the Environment
1.4 Contributions of This Article
2 Drone Swarms in the Wild
3 WildSwarm for Area Surveillance and Asset Protection
3.1 The Hypothetical Scenario
Wild Swarms for Park Monitoring and Entity Protection
3.2 The Mission for the Wild Swarm in This Scenario
Mission Goals
Swarm Capabilities
3.3 The Swarm Algorithms Executed by the Drones
Resource Management
Operating Cooperatively
Flight Plans
Adapting to the World/Changes in the World
4 Materials and Method
4.1 Parameters and Performance Measures
Parameters
Performance with Regard to Park Monitoring
Performance with Regard to Actor Monitoring
4.2 The Swarm in a Park (SiP) Simulation
Visualization
User Interface
Implementation
4.3 The Swarm in a Park (SiP) Modelling
The Park Area
Assets (Drones, Drone Bases) and Entities
Entity Movement
Drone Departures and Movement (Flight Plans)
Operational Cost for (Drone) Movement in the Park
4.4 Data Collection
5 Results and Discussion
5.1 Cooperation versus Non-cooperation
Results for Enabling Cooperation
Discussion: The Impact of Cooperation on the Fog-of-War
5.2 Number of Bases and Number of Drones
Results for Varying Numbers of Bases and Stationed Drones
5.3 Park Monitoring and Target Surveillance
5.4 Discussion
6 Conclusion and Future Work
References
Connecting Different Drone Operations with the Farm Robotic Management
1 Introduction
2 Material and Methods
2.1 Use Cases with Drones
2.2 Multirobot Mission Management and Connectivity
3 Results
4 Discussion and Conclusions
References
Is Alice Really in Wonderland? UWB-Based Proof of Location for UAVs with Hyperledger Fabric Blockchain
1 Introduction
2 Background
3 Related Work
4 Proposed System Architecture
5 Methods and Experiments
5.1 UWB Ranging
5.2 Hyperledger Fabric Network
5.3 Proof of Location
5.4 Experimental Setup
5.5 Experimental Results
6 Conclusion and Future Work
References
Accuracy Assessment of UAS Photogrammetry with GCP and PPK-Assisted Georeferencing
1 Introduction
2 Data and Methods
3 Results
4 Discussion
5 Conclusions
References
Simulation Analysis of Exploration Strategies and UAV Planning for Search and Rescue
1 Introduction
2 Background
2.1 Robot Operating System (ROS)
2.2 PX4
2.3 Robot Simulator
3 Initial Implementation
3.1 Object Detector
3.2 Flight Controller
4 Initial Results
5 Conclusion
References
Evaluating the Performance of Multi-scan Integration for UAV LiDAR-Based Tracking
1 Introduction
2 Related Work
3 Methodology
4 Experimental Results
5 Conclusion and Future Work
References
Applications and Challenges Related to the Use of Unmanned Aircraft Systems in Environment Monitoring
1 Introduction
2 Wildfire Monitoring with Drone Swarms
3 Monitoring of Floating Waste
4 Monitoring of Tailing Ponds of Mines
5 Hyperspectral Imaging of Flora from Drone
6 Challenges Related to the Use of UAS in Environmental Monitoring
7 Conclusions and Outlook for Further Research Needs
References
UAV-Borne Measurements of Solar-Induced Chlorophyll Fluorescence (SIF) at a Boreal Site
1 Introduction
2 Material and Methods
2.1 Site Description
2.2 Instrument Description
2.3 Flight Operations
2.4 Computational Methods
Uncertainty
3 Results
3.1 Spatial Variability
3.2 Flight Height Variability
3.3 SIF Samples Over the Growing Season
4 Discussion
5 Conclusions
References
Thermal Drone Images as a Predictor of Soil Moisture Values
1 Introduction
1.1 Tools
2 Methods
2.1 Material
2.2 Processing
3 Results
3.1 Categorical Variables
3.2 Continuous Variables
4 Discussion
4.1 Conclusion
References
Index


Tomi Westerlund · Jorge Peña Queralta (Editors)

New Developments and Environmental Applications of Drones
Proceedings of FinDrones 2023
International conference on FinDrones, Salo, Finland, 15–16 February 2023

Editors:
Tomi Westerlund, University of Turku, Turku, Finland
Jorge Peña Queralta, University of Turku, Turku, Finland

ISBN 978-3-031-44606-1
ISBN 978-3-031-44607-8 (eBook)
https://doi.org/10.1007/978-3-031-44607-8

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2024

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Paper in this product is recyclable.

Contents

Wild Swarms: Autonomous Drones for Environmental Monitoring and Protection
Fabrice Saffre, Hannu Karvonen, and Hanno Hildmann (p. 1)

Connecting Different Drone Operations with the Farm Robotic Management
Jere Kaivosoja, Kari Kolehmainen, Oskar Marko, Ari Ronkainen, Nina Pajević, Marko Panić, Sergio Vélez, Mar Ariza-Sentis, João Valente, and Juha-Pekka Soininen (p. 33)

Is Alice Really in Wonderland? UWB-Based Proof of Location for UAVs with Hyperledger Fabric Blockchain
Lei Fu, Paola Torrico Morón, Jorge Peña Queralta, David Hästbacka, Harry Edelman, and Tomi Westerlund (p. 43)

Accuracy Assessment of UAS Photogrammetry with GCP and PPK-Assisted Georeferencing
Anssi Rauhala (p. 57)

Simulation Analysis of Exploration Strategies and UAV Planning for Search and Rescue
Phuoc Nguyen Thuan, Jorge Peña Queralta, and Tomi Westerlund (p. 75)

Evaluating the Performance of Multi-scan Integration for UAV LiDAR-Based Tracking
Iacopo Catalano, Jorge Peña Queralta, and Tomi Westerlund (p. 85)

Applications and Challenges Related to the Use of Unmanned Aircraft Systems in Environment Monitoring
Jukka Sassi, Vadim Kramar, Matti Mõttus, Olli Ihalainen, and Sami Siikanen (p. 97)

UAV-Borne Measurements of Solar-Induced Chlorophyll Fluorescence (SIF) at a Boreal Site
Marika Honkanen, Pauli Heikkinen, Alasdair MacArthur, Tea Thum, Rigel Kivi, and Hannakaisa Lindqvist (p. 115)

Thermal Drone Images as a Predictor of Soil Moisture Values
Janne Kalmari, Iita Appelgren, Gilbert Ludwig, and Hannu Haapala (p. 137)

Index (p. 149)

Wild Swarms: Autonomous Drones for Environmental Monitoring and Protection

Fabrice Saffre, Hannu Karvonen, and Hanno Hildmann

Abbreviations

VTT     Technical Research Centre of Finland
TNO     Netherlands Organisation for Applied Scientific Research
AI      Artificial Intelligence
CI      Collective Intelligence
ConOps  Concept of Operation
EEA     European Environment Agency
GHG     Greenhouse Gas
HMI     Human-machine interface
ISR     Intelligence, Surveillance, and Reconnaissance
MAS     Multi-Agent System
ML      Machine Learning
MRS     Multi-Robot System
MSP     Mobile Sensing Platform
RAS     Robotics and Autonomous Systems
RGB     Red/green/blue
SA      Situational Awareness
SAR     Search and Rescue
SDG     Sustainable Development Goal
SI      Swarm Intelligence
SiP     Swarm in a Park
UI      User Interface
UAS     Unmanned Aerial System
UAV     Unmanned Aerial Vehicle
UN      United Nations
UNHCR   The UN Refugee Agency
VTOL    Vertical Take-off and Landing
WS      Wild Swarm

F. Saffre (✉), VTT Technical Research Centre of Finland Ltd, MIKES bldg, Espoo, Finland; e-mail: [email protected]
H. Karvonen, VTT Technical Research Centre of Finland Ltd, FutureHub bldg, Espoo, Finland; e-mail: [email protected]
H. Hildmann, TNO Netherlands Organisation for Applied Scientific Research, Den Haag, The Netherlands; The Hague University of Applied Sciences, Smart Sensor Systems Research Group, Delft, The Netherlands; e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
T. Westerlund, J. Peña Queralta (eds.), New Developments and Environmental Applications of Drones, https://doi.org/10.1007/978-3-031-44607-8_1
1 Introduction

Climate change mitigation and environmental protection are likely to remain among the most urgent challenges facing humanity in the coming decades. The European Environment Agency (EEA) has identified climate change as "one of the biggest challenges of our times"1 while the United Nations (UN) considers it to be "the defining crisis of our time"2 [11], with the UN Refugee Agency (UNHCR) projecting disaster displacement to be among "its most devastating consequences".3

It is self-evident that reducing the environmental footprint of our technology-based civilization is not optional (i.e., "There is no Planet B"; [8]), but could technology itself hold one of the solutions to this necessary transformation? This question is a double-edged sword: on the one hand, we have clear and undeniable evidence that "[t]echnology is transforming societies worldwide" [14], and we expect that digital technologies and autonomous systems can offer substantial contributions to achieving the sustainable development goals (SDGs) of the UN [37]. However, this very same technology is itself part of the problem: the cloud computing infrastructure on our planet already eclipses the airline industry's footprint [33]. Artificial Intelligence (AI), as a technology, has a significant carbon impact [11]; for a recent study on how AI and Machine Learning (ML) affect global Greenhouse Gas (GHG) emissions, the reader is referred to [25].

1 https://www.eea.europa.eu/themes/climate/climate-change-is-one-of
2 https://www.un.org/en/un75/climate-crisis-race-we-can-win
3 https://www.unhcr.org/climate-change-and-disasters.html (accessed March 2023)


1.1 Technology for Climate Change

The world we live in is constantly changing, but with regard to climate change this change comes at an accelerating pace, already exceeding our capability to assess and process it [27]. In many ways, technology is trying to keep up with this pace and is "revolutionizing the study of organisms in their natural environment" [27]. Engineers and computer scientists have long contributed to climate-change-related investigations, though initially the focus was primarily on studying and modelling the phenomena (e.g., [49]). On the engineering side, there have been many innovative approaches [16] to support, for example, automated data collection (e.g., [40]), while on the computing side, recent approaches to processing the increasingly large amounts (and sources) of data have allowed insights that were previously impossible (cf. [29, 45]).

In recent years, the emergence of Robotics and Autonomous Systems (RAS) has become a major driver of innovation, disrupting businesses and transforming societies worldwide [37]. Due to significant advances in device autonomy, as well as a drop in production costs for consumer-grade devices, we are seeing a rapid increase in the use of semi-autonomous devices as Mobile Sensing Platforms (MSPs). Furthermore, we are witnessing the first deployments of Multi-Robot Systems (MRSs), often referred to as swarms, in the wild. While the field of Multi-Agent Systems (MASs) has been around for a long time, the actual deployment of groups of physical agents is relatively new (and still hampered by, e.g., legal and regulatory issues). While, in our view, not every group of devices constitutes a swarm, we are starting to see more and more examples of groups that increasingly deserve this label. From the very specialized field of swarm robotics come some thought-provoking, yet controversial, ideas. But before we discuss our own take on them, we first consider the concept of environmental protection in general. As with most things, it is complicated: there is no simple answer or silver bullet.

1.2 Environmental Protection

Environmental protection includes, among other things, the prevention of different types of nature crimes, such as illegal polluting, fishing, poaching, logging/harvesting, mining, and land conversion [6], to help conserve nature's flora and fauna. From the technology perspective, an open key question related to environmental protection is the following: is promoting robots for that purpose even a good idea in general? One reason why there is no universal answer to this question is that the economics of conservation biology are complex and very challenging (see, e.g., [21]). However, there are some relevant points to consider.

One aspect of environmental protection is that many of the natural ecosystems worth protecting are in developing countries with high poverty rates. In these countries, using an expensive, high-tech solution for environmental protection makes little sense. Most damage done to the environment in the poorest countries is done by people who have no other choice, due to their dire economic circumstances. For example, there is poaching in the natural parks of Africa (e.g., [10]) because the people living there have to do it in order to stay alive and sustain their families. Therefore, there is some thinking in the field of conservation biology (see, e.g., [7]) which suggests that high-tech solutions are not needed; instead, another way should be provided for people to move away from environmentally destructive activities. This could be done, for instance, by giving people other opportunities (e.g., a decent job as a warden of the natural park) and by providing them with better infrastructure that would allow them to move away from this destructive behaviour towards the environment.

At first sight, this seems a very sensible and ethical standpoint. Especially in the short term, it is something that needs to be done to provide an alternative to environmentally damaging activities. However, in our view, this kind of thinking may be short-sighted. One cannot assume that a short-term solution will remain a good solution from the environment's perspective if development follows its normal course. When economic development starts to advance in developing countries, one might end up in a situation where pollution and degradation of the environment are still a problem, as these seem to be natural "byproducts" of socioeconomic development in all civilizations. The Western world has gone through this, it is currently happening (for example, in Southeast Asia), and it is probable that it will happen in other regions as well when their economic circumstances improve. There is clearly a trend of polluting more (see, e.g., [2, 23]), not less, as socioeconomic development happens and poor countries start to get richer. In other words, "normal" socioeconomic development typically happens at the expense of the natural world.

1.3 Robotics and Automation to Protect the Environment

As mentioned above, technology is revolutionizing how we study our environment [27], and innovations and technological advancements in the field of RAS have already had a fundamental impact on how economies work and how our societies function [16]. RAS have the potential to change our interaction with the environment and how we travel (e.g., [14]), as well as the manner in which we can achieve the UN's SDGs.

Like any other species, humans are invasive. This is not a moral judgement; when people expand into a new area, they become part of the ecosystem and challenge the natural equilibrium that existed before humans moved into that space. Therefore, one can argue that the best way to protect wilderness is to stay away from it as much as possible. [16] suggests that RAS are ideally positioned to help us replace human presence and activities in areas where this presence is not desired. A popular example of this is the use of drones to monitor penguins [34, 43], which may not perceive these machines as the same type of threat as humans. The question then becomes the following: how can this realization be reconciled with the legitimate aspiration of people to (re)connect with nature? This is where robots could be part of the solution. For example, they could be our "eyes and ears" in places where our actual physical presence would be disruptive. Robots are not biochemical beings like people (or animals). If designed appropriately, they do not interact with the environment, consume resources, or generate waste in the same manner as humans do. In other words, robots can be designed to minimize their harmful interaction with the environment while observing it. In this sense, robotics could be a feasible approach to environmental conservation and natural observation, as it is not as invasive as sending people to inhabit the area.

Aside from this more philosophical long-term view of why robots could be a good idea for environmental monitoring, in the short term there are also many direct questions to answer. Even if the principle of replacing humans with robots as observers and protectors of the environment remains controversial, there are circumstances in which the case is easier to make in the here and now. For example, there are wilderness areas that are harsh or even dangerous for humans (e.g., the Arctic, deserts, and rainforests). Monitoring them for potential illegal activities while keeping those areas pristine is something where robotic solutions could excel. Some environmental protection tasks are also exhausting (e.g., monitoring a disaster area), stressful (e.g., policing), or repetitive (e.g., routine patrols). For instance, being a warden in the Serengeti national park can be a dangerous job: one can even be murdered by poachers.4 Similarly, people interfering with environmental crime operations, such as illegal logging or mining in the Amazon, can get killed.5 Obviously, there are also natural reserves in the developed world where the reasoning that labour and infrastructure are cheap, and therefore a better option than a technologically advanced solution, is no longer plausible. For example, in Europe, the cost of stationing someone in the wilderness under the right working conditions is far from negligible compared to what it would be in some other parts of the world.

4 https://www.nationalgeographic.com/animals/article/160204-Tanzania-Emily-Kisamo-wildlife-crime-poaching-conservation
5 https://www.hrw.org/report/2019/09/17/rainforest-mafias/how-violence-and-impunity-fuel-deforestation-brazils-amazon

1.4 Contributions of This Article

Considering the above wide-ranging points, we have developed an example proof-of-concept scenario to illustrate the potential of drone swarms by means of simulation. The proof-of-concept features an autonomous drone force in a national park where people and wildlife coexist in a recreational context (instead of an exploitation context, as above) and the stakes are relatively low: there is no criminal activity, such as poaching or illegal mining/logging; finding and assisting injured or lost visitors is the primary mission of the drone force; and the most harmful behaviour of people is simple negligence, such as lighting a fire or littering. Therefore, there is no antagonism between the actors in the environment. Our aim with this proof-of-concept simulation is to explore the advantages of the proposed concept in a natural park context, and potentially of robotics in environmental protection in general.

There is some relevant previous literature related to the use of individual drones for conservation efforts [24] and in environmental protection tasks, such as the detection of wildfires (e.g., [41]) or humans [30], disaster management (e.g., [22]), shark detection and protection [44], detecting poachers [12] or vehicles [36], farm monitoring [47], and air pollution monitoring (e.g., [48]). Not only do RASs have the potential to contribute towards achieving the SDGs of the UN [37], the expectation is that they will also fundamentally transform how some of these goals are achieved [16]. There are, in fact, not many true swarms deployed in the world yet, but this is now changing. For example, [38] have previously conducted research on wildfire detection and monitoring with drone swarms. Furthermore, [9] have studied drone swarms in the context of air pollution monitoring. Legal and regulatory obstacles notwithstanding, more and more reports emerge about cooperative groups of cyber-physical devices (as opposed to multitudes of physically collocated devices operating independently and in ignorance of one another) being deployed in the wild. We consider the lack of applications that require a true swarm to be more of a reason for this than the lack of regulations. The current state of the art merely exploits the autonomous nature of these devices; once we start unlocking the potential of such devices operating collectively and cooperatively, we will see capabilities that far exceed what we currently have.

2 Drone Swarms in the Wild

As autonomous and intelligent unmanned robotic systems become an increasingly mainstream technology and their presence more ubiquitous, the opportunities afforded by them multiply. In particular, Unmanned Aerial Systems (UASs) or Unmanned Aerial Vehicles (UAVs) are fast becoming an everyday tool in many professional and leisure contexts, and there is a wide variety of applications in which the ability of drones to operate semi-independently (without direct control from a human pilot) could bring substantial advantages. Autonomous decision-making based on information collected in real time and the ability of several drones to communicate and cooperate in the execution of complex missions can open many new possibilities for their usage.


Drone swarms are currently a popular topic in the military domain,6 as that area offers many potential applications for this technology, such as Intelligence, Surveillance, and Reconnaissance (ISR). However, the use of the word "swarm" in this context can be somewhat misleading. Collective Intelligence (CI) or Swarm Intelligence (SI) is rarely involved, and even when it is, the operations typically include only the pre-coordinated flight of many drones (e.g., formation flying) over a short period to complete a single one-off mission. The drones in this type of mission might not know each other, interact with each other, or exchange information relevant to their coordination. Instead, they typically follow a pre-defined choreography by executing a set of instructions without awareness of each other.

In contrast, biological swarms, such as ant colonies or beehives, work rather differently. They are completely decentralized systems, their adaptive behaviour results from self-organization (not pre-planning), and, most importantly, they typically cooperate over a very long timescale [17, 35]. In addition, individual members of the swarm perform many different types of tasks from one day to the next, or from one part of their life to another (in the case of age polyethism; [42]). Even though these behaviours emerge from the same types of positive and negative feedback as in some robotic swarms, biological swarms need to adapt over a very long period in order to sustain the colony. This is a very different proposition from what it takes to perform one single robotic swarming mission.

Considering these points, in this paper we suggest the idea of Wild Swarms (WSs). With this idea, we refer to a nature-inspired autonomous drone swarm in charge of environmental protection in a natural park, which would behave more like a real natural hive than like a robotic swarm executing one mission in a specific application area. In detail, we envision a scenario in which a population of drones "lives" in a national park, keeps it under surveillance for a long period of time, and manages all the aspects of the life of the swarm that are necessary for this drone force to operate over an extended period in that area. Practically speaking, the drone force's mission would include thousands of individual flights, with objectives ranging from patrolling the park to more closely monitoring relevant events.

6 E.g., https://www.newscientist.com/article/2357548-us-military-plan-to-create-huge-autonomous-drone-swarms-sparks-concern/ (https://tinyurl.com/4srx7rth)

3 WS for Area Surveillance and Asset Protection

We acknowledge that the concept of Wild Swarms, as described in Sect. 2, is broadly applicable, with use cases in the societal (Situational Awareness (SA) [20], monitoring [5], minority protection, Search and Rescue (SAR) [4, 15], etc.), industrial (data collection [3], infrastructure maintenance, monitoring [1], etc.), and military (ISR [46], SA [50], area protection [51]) domains immediately coming to mind. To remain domain-agnostic, we present an imaginary future scenario to demonstrate our ideas of using a drone force to support both the protection of pristine nature and the people hiking in a wilderness area. Next, we present this WS scenario and the related basic Concept of Operation (ConOps) [13] that will be used in our proof-of-concept simulations later.

3.1 The Hypothetical Scenario

In the near future, a new International Park (cf. Fig. 1) is created on the basis of multi-national agreements and treaties. This park may straddle the former territories of multiple countries, perhaps somewhere in Northern/Eastern Europe. The nature preserve within the park is a protected area which may have conservation goals and which contains a vast expanse of forests, lakes, and wetlands. There is no settlement in the park and no road access to the area; for all intents and purposes of our hypothetical scenario, it is unspoilt wilderness. Due to the lack of industrial intervention, and because the area is only loosely governed by a multinational body, the park is home to a growing abundance of flora and fauna.

Fig. 1 A simulated instance of the International Park. Shown are the individual locations within the park (the smallest sub-area in the park, which is observed as a whole), which can be water (blue) or covered with vegetation (to a varying degree, as represented by the shade of green). Drones are indicated by their symbols, coloured yellow, and their bases are shown on the map as an H inside a yellow circle or a drone symbol. Planned flight paths are represented by a yellow line


These elements of the park are considered integral parts of it and should not be removed or restrained unless absolutely necessary. Given their now protected status, they may thrive and prosper. Naturally, humans (and maybe governments) are curious, and therefore the park gradually (as its reputation grows) becomes popular among extreme hikers (whom we later refer to as blue entities), who are willing to camp in the park, possibly for days or even weeks. However, this greatly increases the potential interactions with other entities in the park. Specifically, the wildlife within the park (later referred to as red entities) may be of such a nature that it is harmful to the hikers.

As there is no existing infrastructure in the area, and since it is not under the jurisdiction of one single country, creating a human oversight body is challenging. Furthermore, given the vast expanse of the park as well as its nature, it is very difficult to station human wardens there to monitor the enormous park area. In addition, human guardians may become predictable, lazy, or partial to financial incentives that could allow one country to affect their service in some subjective way. Realizing that establishing a permanent human presence in the park would be costly and might negate the park's raison d'être, the multi-government authorities overseeing the park opt for an ambitious high-tech solution that has never been tried before in a similar context: entrusting an autonomous swarm with the long-term protection of the park by means of continuous surveillance flights by the drones of the swarm.

Wild Swarms for Park Monitoring and Entity Protection

In practice, the governments decide to place quadcopter drone swarm "hives" in the park, in an effort to see how such a solution could work from different (e.g., economic, environmental, political, and social) perspectives. Therefore, instead of on-site human warden supervision, the nature reserve is placed in the care of wild autonomous drone swarms, which continuously monitor the area over a long period of time. The occasional maintenance of the hives and the drones happens almost exclusively at the hives, which are located such that service teams can access them when needed. A hive will report to the oversight body and include required maintenance and repairs in these reports.

3.2 The Mission for the Wild Swarm in This Scenario

To describe the purpose of the swarm, we can list its high-level goals. To identify the challenges in reaching these goals, we can identify the corresponding swarm capabilities required to bring them about.


Mission Goals

The mission of the swarms is, at a high level, straightforward: their main purpose is to patrol the park territory over an indefinite period. Independent of the entities within the park, the area observed by the swarm over time serves as a measure of performance, with seeing everything all the time as the ideal upper bound. The swarm mission can be seen as serving a small number of more specific goals: the patrolling includes detecting and discreetly monitoring hikers and animals that are of interest (such as bears) and ensuring that

1. Both are kept safe from each other.
2. Their wellbeing is preserved.
3. The park is protected from potential harm caused by humans.

Since the scenario is hypothetical, we refrain from defining in more detail what it would mean, for example, to protect an actor's wellbeing. Realizing the goals will involve human intervention or scenario-specific actions by the drones (e.g., providing instructions and assistance to some actors when the situation requires it). In our scenario, we do not consider the drones to have the ability to physically interact with the environment (other than using, e.g., a speaker to warn a hiker). We consider the swarm members to be essentially MSPs and, consequently, restrict our evaluation of the swarm's performance to the domain of sensing. Specifically, we evaluate two aspects:

1. The percentage of the entire park that is monitored.
2. The amount of time that specific actors (blue or red) are monitored (seen by a drone).

In summary, the swarm's aim is to autonomously maintain the best possible situational awareness by discovering and monitoring relevant actors (humans, wildlife, etc.) and events (fire, littering, flood, etc.). However, the swarms obviously do not know a priori when these events will happen during their lifetime. Therefore, the swarms need to be prepared for these events to happen at some point (which can be in days, months, or years) and to react appropriately (where appropriately means acting such that the performance of the swarm is increased or maximized).

Swarm Capabilities

The high-level goal stated above implies at least the following requirements from the swarm operation perspective:

1. To manage the available resources (drone units and landing pads/charging stations) to ensure the swarm's long-term survival
2. To generate flight plans that are compatible with the range limitations of the drones of the swarm
3. To leverage cooperative efforts (e.g., division of labour to optimize the swarm's coverage, so that two drones do not patrol the same area while leaving another area completely unvisited)
4. To adapt to changing circumstances: when one drone of a swarm detects something relevant (e.g., a hiker, smoke, or a bear), the swarm must be able to change its behaviour and take appropriate action. These actions might include, for example, moving from the swarm's exploratory behaviour to monitoring behaviour, sounding an alarm, or giving instructions to a hiker through a loudspeaker on one drone

Note that, other than offering information to entities in the park, the drones are not considered to be acting agents in the scenario. They are MSPs [18], with an emphasis on sensing. They are tasked primarily with surveillance and are not assumed to be able to deploy countermeasures (such as placing fire-retarding materials if a fire is confirmed). For more specific scenarios and use cases, such as using drone swarms to fight fire more actively, see, for example, [5, 38]. Furthermore, the drones in the swarm operate individually and not as pairs or larger groups (cf. [19] for an approach that facilitates a swarm performing aerial data collection in real time).

We believe that this hypothetical scenario allows us to investigate swarm behaviour and performance without including any real-world use case specifics in our model. The suggested scenario could clearly be cast as a societal application, an industrial challenge, or even a military use case. To demonstrate these and other features of the Wild Swarms concept, we have developed a simulation demonstrator called Swarm in a Park (SiP), which is presented next.

3.3 The Swarm Algorithms Executed by the Drones

In Sect. 3.2 we listed the required/desired swarm abilities. Here, we describe how each is addressed in our implementation.

Resource Management

In the broadest sense, the resources of the swarm are the drones themselves. In this section, we do not discuss the management of the entire drone swarm but rather how the individual drones manage their own resources, such as battery power. Through the decisions taken independently by the drones, resource management over the entire swarm (i.e., the trade-off between conserving resources and exploring the park to increase SA) is a property emerging from these individual decisions and interactions.

The decision each drone can take is whether to take off and deploy on a mission. The park has a fixed number of drone bases, and these include a reservation system whereby a drone is guaranteed that it can land and find a place to recharge when it reaches that particular base. A minimal sketch of such a reservation mechanism is given below.
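The reservation system is only described at the concept level in this chapter; the following is a minimal sketch of how it could look, assuming a simple synchronized pad counter (the class and method names are our own, not taken from the SiP source).

    /** Illustrative landing-pad reservation at a base. A drone reserves a
     *  pad before filing a flight plan that ends at this base, so it is
     *  guaranteed a place to land and recharge on arrival. */
    class DroneBase {
        private final int pads;     // simultaneous landing/charging spots
        private int reserved = 0;   // pads currently promised to drones

        DroneBase(int pads) { this.pads = pads; }

        /** Returns true and holds a pad if one is free. */
        synchronized boolean tryReserve() {
            if (reserved < pads) { reserved++; return true; }
            return false;
        }

        /** Called when the drone takes off again, freeing its pad. */
        synchronized void release() { reserved = Math.max(0, reserved - 1); }
    }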

Operating Cooperatively

With regard to working as a swarm, the drones can be in one of two states: cooperative or non-cooperative (this is one of the parameters discussed in Sect. 4.1). The difference is rather subtle: cooperative flight plan creation involves considering existing flight plans (and operating on the assumption that they have been performed and that the fog-of-war heatmap has been updated accordingly), while non-cooperative planning does not.

Flight Plans

The drones are expected to monitor the park in general and, in particular, certain entities they might encounter within it. This is reflected in the two performance measures for park monitoring and entity monitoring (both discussed in Sect. 4.1). Flight plans are created in a straightforward manner: with the cooperation feature off, the drones conduct only persistent random walks and do not really try to optimize their behaviour. With the cooperation feature enabled, the swarm units use a navigation process that tries to maximize the amount of fog-of-war they can clear with respect to each other. In practice, they take into account, for example, that one drone has been exploring a certain region, so the other drones go in another direction. This division of labour allows the drones of the swarm to share the area so that they do not overlap each other's exploration areas (i.e., duplicate information). Both modes are sketched below.
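As an illustration of the two planning modes just described, here is a minimal sketch in Java (the language the SiP engine is written in, see Sect. 4.2). The class structure, the method names, and the greedy neighbour selection are our own assumptions, not the authors' implementation.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    /** Illustrative flight-plan construction. Cooperative mode greedily
     *  clears the shared fog-of-war heatmap; non-cooperative mode is a
     *  persistent random walk, as described in the text. */
    class FlightPlanner {
        static final Random RNG = new Random();

        /** Cooperative plan: head for the foggiest neighbour and mark it
         *  cleared on the SHARED heatmap before take-off, so the next
         *  drone plans around this route (implicit division of labour).
         *  A real plan would also reserve charge for the return leg. */
        static List<Integer> cooperativePlan(double[] sharedHeatmap,
                                             int[][] neighbours,
                                             int start, int rangeInHops) {
            List<Integer> plan = new ArrayList<>();
            int current = start;
            for (int hop = 0; hop < rangeInHops; hop++) {
                int best = -1;
                for (int n : neighbours[current])      // candidate way points
                    if (best < 0 || sharedHeatmap[n] > sharedHeatmap[best]) best = n;
                if (best < 0) break;                   // isolated cell
                sharedHeatmap[best] = 0.0;  // assume the plan is flown as filed
                plan.add(best);
                current = best;
            }
            return plan;
        }

        /** Non-cooperative plan: persistent random walk (keep the previous
         *  heading with probability keepHeading, otherwise pick a new one). */
        static List<Integer> randomWalk(int[][] neighbours, int start,
                                        int rangeInHops, double keepHeading) {
            List<Integer> plan = new ArrayList<>();
            int current = start, heading = RNG.nextInt(neighbours[start].length);
            for (int hop = 0; hop < rangeInHops; hop++) {
                if (RNG.nextDouble() > keepHeading)
                    heading = RNG.nextInt(neighbours[current].length);
                heading %= neighbours[current].length; // guard stale index at borders
                current = neighbours[current][heading];
                plan.add(current);
            }
            return plan;
        }
    }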

Adapting to the World/Changes in the World

To a first approximation, adapting to the world is the result of the three aforementioned capabilities being realized. However, there are additional cases to consider in which the swarm needs to adapt to changing operational parameters. For instance, a new type of drone with new capabilities (range, sensor equipment, etc.) may be introduced, a new base may enter service providing additional landing/charging facilities, a region that was previously not included in the patrolling area may be added (or a new "no-fly zone" introduced within the perimeter), etc. The flight planning and decision-making algorithms are designed so that these modifications are taken into account seamlessly, without any need for redesign.


4 Materials and Method

In the previous section, we described a hypothetical scenario for a WS (Sect. 3.1) and listed the goals and required/desired swarm capabilities (Sect. 3.2). We furthermore proposed an approach to meet these goals (Sect. 3.3). In Sect. 5, we will compare the performance of the approaches across a number of key parameters. In this section, we first discuss the parameters across which we compare performance and introduce the performance measures (Sect. 4.1). We then present the SiP simulation (Sect. 4.2), specifically how the hypothetical scenario is visualized and how a user can interact with the simulation. In Sect. 4.3, we state and motivate all modelling decisions (e.g., how the park area and topology were modelled and how the flight plans are formalized). Finally, in Sect. 4.4, we provide all relevant information with regard to the data collection for the results.

4.1 Parameters and Performance Measures

Numerical experiments, in which variations of the proposed solution were tested, demonstrate our system's ability to support the relevant swarming behaviour, such as efficient division of labour and recruitment, in the pursuit of this mission. In Sect. 3.2 we stated the goals for the swarm. From these two goals we derive our performance measures:

1. The percentage of the entire park that is monitored.
2. The amount of time that specific actors (blue or red) are monitored (seen by a drone).

Parameters

Identifying suitable variables to assess the performance of a distributed decision-making framework in a complex scenario featuring substantial random variability in key parameter values (since many environmental characteristics that translate into operational constraints are procedurally generated) is problematic. Among the possible parameters for our scenario are: the position and number of drone bases, the number of units, and the topographical (in our scenario: location, shape, and size of "lakes") and topological features (their effect on the possible trajectories of relevant agents, i.e., "hikers" and "bears") affecting the simulated environment. We chose to focus on the following for our parameter space exploration:

– Cooperation: when this is turned on, drones plan their flight path to remove the maximum amount of uncertainty from the fog-of-war.
– Number of bases: the number of drone charging stations that are available. This includes a parameter determining the number of drones a base can charge at the same time.
– Number of drones: the number of drones in the swarm.
– Bears/hikers present.

We can distinguish between internal parameters, such as the number of bases and drones, and external parameters, such as the presence of entities (hikers and bears, i.e., the blue and red teams). It should be noted that the park did not change between simulation runs (the same map was used for all simulations); only the location of the bases (and, depending on the experiment, their number or drone charging capacity) changed.

Performance with Regard to Park Monitoring

Values were measured using the average fog-of-war density. The fog-of-war is a very simple representation of SA. Clearly, the term covers a host of other aspects when applied operationally, but for our study (and in the context of a WS operating continuously and autonomously) we accept the heatmap (see Fig. 2), which represents the age of information for any one location, as an adequate representation of SA. This is implemented as follows: each location has a fog-of-war density value (between 0 and 1) which increases linearly over time until the value 1 is reached or the location is visited again by a drone. The overall ability of the swarm to monitor the entire park is represented by the average of all values at any given moment in time. The results discussed in Sect. 5 report this value, for example, as an average over a fixed number of simulation runs. A minimal sketch of this bookkeeping follows.
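The field and method names below are our own choices, and the initial density value is likewise an assumption; the mechanism itself (linear growth, reset on visit, global average as the measure) is as described above.

    import java.util.Arrays;

    /** Illustrative fog-of-war bookkeeping: density grows linearly with
     *  the age of information, is reset when a drone visits a location,
     *  and the average over all locations is the park-monitoring measure. */
    class FogOfWar {
        private final double[] density;      // one value in [0,1] per location
        private final double growthPerStep;  // linear growth rate of the fog

        FogOfWar(int locations, double growthPerStep) {
            this.density = new double[locations];
            this.growthPerStep = growthPerStep;
            Arrays.fill(density, 1.0);       // assumption: start fully "fogged"
        }

        /** Called once per simulation time step. */
        void tick() {
            for (int i = 0; i < density.length; i++)
                density[i] = Math.min(1.0, density[i] + growthPerStep);
        }

        /** Called when a drone's flight brings a location into view. */
        void visit(int location) { density[location] = 0.0; }

        /** Global SA measure: the average density reported in Sect. 5. */
        double averageDensity() {
            double sum = 0.0;
            for (double d : density) sum += d;
            return sum / density.length;
        }
    }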

Performance with Regard to Actor Monitoring

This measure is entity-focused: we simply record for how many time steps a hiker or bear was under observation. We avoid double counting when more than one drone is within sight of an entity by maintaining the corresponding counter in the entity itself, which increments independently of the number of observing drones (see the sketch below). Since bears (red team) will be monitored by a drone until it runs out of battery and has to return to a base (as opposed to hikers, which the drone will report before continuing on its path), the surveillance coverage for bears should, over time, be significantly higher.
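A minimal sketch of this counter (names are ours, not from the SiP source):

    /** Illustrative entity-side observation counter: the counter lives in
     *  the entity and increments at most once per time step, no matter how
     *  many drones currently have the entity in sight. */
    class Entity {
        int observedSteps = 0;

        /** Called once per simulation step with the number of drones
         *  currently within sensing range of this entity. */
        void tick(int dronesInSight) {
            if (dronesInSight > 0) observedSteps++;  // no double counting
        }
    }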

4.2 The Swarm in a Park (SiP) Simulation

In the SiP simulator, a key aim of the swarm is to monitor relevant actors (hikers and bears). Detected hikers need to be checked only from time to time to ensure that they are safe and following the rules (e.g., not lighting a fire or littering). Detected bears need to be monitored constantly while they cross the park, in order to inform any hikers nearby about an animal that is dangerous to humans.


Visualization

The basic elements of the SiP simulator include the covered area, represented as a map of the wilderness area with lakes (small blue hexagons) and forests (small green hexagons), shown in Fig. 1 as a large hexagonal mesh. In this environment, there can be drone bases (a circle with a large letter H in the middle, used for the landing, recharging, and take-off of the drones, see Sect. 3.3), quadcopter drones (indicated by a quadcopter symbol), and their monitoring coverage (represented by a surrounding circle) within which they can detect relevant objects in the environment (hikers or bears). The drones' flight plans are represented by yellow trajectory lines.

The performance of the drones, that is, their monitoring capability, would in a real scenario be the output of their sensing payloads (such as cameras or infrared sensors). We simplify this aspect by abstracting away from the sensors entirely and simply using a heatmap: in the SiP demonstrator, a fog-of-war approach is used (see Fig. 2). A place that has not been visited (or that was visited a long time ago) is covered with a "grey fog," meaning that there is no accurate or up-to-date information (i.e., no SA for the swarm) about that particular area. In the SiP demonstrator, this fog-of-war is removed when the flight pattern of a drone brings it close to the location.

As discussed, the SiP simulation includes actors such as hikers and bears (see Fig. 3). Their representation distinguishes whether the entity has already been detected (such as the bear in Fig. 3) or not (such as the hiker in the same figure).

Fig. 2 An example of a SiP park scenario with one base and two drones currently on patrol according to their pre-planned flight path. The amount of time that has passed since the last time a drone has visited a location is indicated by increasingly opaque “fog,” referred to as fog-of-war


Fig. 3 Entities may move freely in the forest areas (green hexagons), but lakes (blue hexagons) are considered impassable for humans and animals alike. Naturally, the drones can hover above the forest or the lakes and are not restricted in this manner. The outline of a hiker or a bear means that they have not yet been detected by the swarm, while a full symbol means that the actor has been detected

User Interface

The SiP simulator includes the main swarm simulation window (left side of Fig. 4) and a user interface for controlling the different swarm parameters (right side of Fig. 4). The shape of the park area in the main simulation window can be defined manually by the user (by drawing the borders of the park) before starting the simulation. The lakes and forest areas are generated automatically at random, but they could also be determined manually. The user interface includes controls for confirming the manually drawn park area, resetting the simulator, starting the simulation, enabling or disabling the fog-of-war, choosing the speed of the simulation, adding a hiker or a bear, saving a screenshot, adding a drone, adding the maximum number of drones (depending on the number of bases), and enabling or disabling the swarm's cooperation and recruitment features.

Implementation

The Monte Carlo simulation engine was developed in native Java. The interactive version, which adds a user interface (see Fig. 4, right) to control various aspects of the swarm operations and to trigger basic events (injection of a blue or red agent), was built using standard javax.swing components and a mouse listener (for designating the boundaries of the park and placing bases on the map). This version was used to produce the various screenshots in Figs. 1, 2, 3 and 4.


Fig. 4 A screenshot of the SiP simulator (left) and the UI (right)

The latter shows a user-generated perimeter, as opposed to the hexagonal area used for performance evaluation (see Figs. 1, 2 and 3 and Sect. 5). The basic GUI (Fig. 4) allows the user to add more drones, up to the number that can be serviced simultaneously, which is determined by the number of bases (each has 4 landing pads by default in the demo version), and to change their behaviour (tick boxes). It can be regarded as an early prototype of what a future control console for a real swarm might look like (minus some of the simulation controls in the central panel). A minimal skeleton of such a Swing front end is sketched below.
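The following is a minimal, hypothetical skeleton of the arrangement described above (standard javax.swing components plus a mouse listener for designating the park boundary); all names and layout details are our own, not taken from the SiP source.

    import java.awt.Graphics;
    import java.awt.Point;
    import java.awt.event.MouseAdapter;
    import java.awt.event.MouseEvent;
    import java.util.ArrayList;
    import java.util.List;
    import javax.swing.JFrame;
    import javax.swing.JPanel;
    import javax.swing.SwingUtilities;

    /** Minimal skeleton of a Swing front end with a mouse listener for
     *  designating the park boundary, in the spirit of the SiP UI. */
    public class SiPWindow {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                List<Point> boundary = new ArrayList<>();
                JPanel mapPanel = new JPanel() {
                    @Override protected void paintComponent(Graphics g) {
                        super.paintComponent(g);
                        for (int i = 1; i < boundary.size(); i++) { // draw border
                            Point a = boundary.get(i - 1), b = boundary.get(i);
                            g.drawLine(a.x, a.y, b.x, b.y);
                        }
                    }
                };
                mapPanel.addMouseListener(new MouseAdapter() {
                    @Override public void mouseClicked(MouseEvent e) {
                        boundary.add(e.getPoint()); // next boundary vertex
                        mapPanel.repaint();
                    }
                });
                JFrame frame = new JFrame("Swarm in a Park (sketch)");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.add(mapPanel);
                frame.setSize(800, 600);
                frame.setVisible(true);
            });
        }
    }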

4.3 The Swarm in a Park (SiP) Modelling

The Park Area

As previously published in [39], the map is discretised using a hexagonal mesh. This choice ensures that the distance between any two neighbours (candidate way points) has the same value for all connections (in our simulations, this value is set to 1). A distance of 1 can be understood as 100 m, to allow comparison to the real world.

Assets (Drones, Drone Bases) and Entities

In any instance of the simulation, there is a fixed number of drone bases, and the available bases suffice to land all drones. Bases have a maximum number of landing spots where drones can recharge their batteries; bases are also assumed to always have sufficient power to offer charging as needed. The number of assets is set by the user (sufficient charging locations for all drones across all bases are ensured). With regard to the placement of the bases, a minimum separation distance of 2 km was imposed.

In addition to the drones, there are entities in the park which the drones are supposed to monitor. We have two types of autonomous agents: the "hikers" and the "bears" (which we also refer to as the "blue team" and the "red team," respectively). We simulated the park and all actors in it. Since drones behave differently from the other entities (drones are under our control; entities are assumed to follow some habitual behaviour), these are described separately. The difference between the two entity types lies exclusively in how the drones react to them, as the movement of both is probabilistic.

Entity Movement

In the park, we have two types of autonomous entities, the "hikers" and the "bears." Their "autonomous" movement through the park is stochastic: their entry points into the park are chosen at random, as is the exit point towards which they move, avoiding impassable locations such as the lakes along the way. In addition, their movement speed is non-uniform: to make their progression more realistic, a random test against a fixed probability is performed at every step to determine whether a move occurs. It should be noted that, although there is room for predicting the future movements of an entity (after a few observations, the general direction of travel could be inferred), the drones neither make such predictions nor attempt to tailor their flight paths using any such insight. A minimal sketch of the hexagonal mesh and this stochastic movement is given below.
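The sketch below rests on our own assumptions: axial coordinates for the hexagonal grid (which gives all six neighbours unit distance, matching the discretisation above), an illustrative move probability of 0.5, and a greedy step towards the exit point; the SiP source may differ in all of these details.

    import java.util.Random;
    import java.util.function.BiPredicate;

    /** Illustrative entity walk on the hexagonal mesh. */
    class EntityWalk {
        // The six axial-coordinate offsets of a hexagonal grid.
        static final int[][] HEX_DIRS = {
            {+1, 0}, {+1, -1}, {0, -1}, {-1, 0}, {-1, +1}, {0, +1}
        };
        static final double MOVE_PROBABILITY = 0.5; // assumed: non-uniform speed

        final Random rng = new Random();
        int q, r;            // current axial position
        int exitQ, exitR;    // randomly chosen exit point

        /** One simulation step: maybe move one cell roughly towards the
         *  exit, never onto an impassable (lake) cell. */
        void step(BiPredicate<Integer, Integer> isLake) {
            if (rng.nextDouble() >= MOVE_PROBABILITY) return; // no move this step
            int bestQ = q, bestR = r, bestDist = hexDistance(q, r, exitQ, exitR);
            for (int[] d : HEX_DIRS) {
                int nq = q + d[0], nr = r + d[1];
                if (isLake.test(nq, nr)) continue;            // lakes are impassable
                int dist = hexDistance(nq, nr, exitQ, exitR);
                if (dist < bestDist) { bestDist = dist; bestQ = nq; bestR = nr; }
            }
            q = bestQ; r = bestR;
        }

        /** Hex distance in axial coordinates (unit cost per hop). */
        static int hexDistance(int q1, int r1, int q2, int r2) {
            int dq = q1 - q2, dr = r1 - r2;
            return (Math.abs(dq) + Math.abs(dr) + Math.abs(dq + dr)) / 2;
        }
    }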

Drone Departures and Movement (Flight Plans)

Drones are subject to a simple two-state process model (shown in Fig. 5): they are either deployed to the park or resting at base.

Fig. 5 A simple overview of the process that the drones execute. Drones can either be deployed to the park or at base; their behaviour is partly driven by their battery status (indicated by the shade of blue). The specifics of deployment and of their time at base are illustrated in Figs. 6 and 7, respectively


Fig. 6 An illustration of the processes executed by deployed drones: once deployed, the drones will follow their pre-planned flight path and continue to fly until their battery is low and they have to return to base. If either a hiker or a bear is spotted, the drone informs the base and then (a) in the case of a hiker, returns to its flight path, or (b) in the case of a bear, proceeds to hover over the bear for as long as its battery permits

Resting always consists of charging the drone first. Charged drones may idle at the base, as their deployment is controlled by a stochastic process that aims to ensure that a certain ratio of drones is always deployed while the rest are resting at base.

Once a drone is deployed (independent of what triggered its deployment), it follows the steps shown in Fig. 6. Primarily, the drone follows its pre-planned flight path, which was constructed such that the battery charge suffices to execute the planned mission with a reserve charge to spare. If at any time during this normal operation the drone comes within sight of an entity, it reacts autonomously as follows. If a hiker is spotted (blue team, a friendly entity which the swarm is tasked with protecting), the drone informs the base of this event and transmits the exact location of the hiker at this time. This information will be used by the next drone to deploy (which will include this last known location in its flight path). After this, the drone continues on its pre-planned path. However, if the drone sights a bear (red team, potentially dangerous to hikers), the drone abandons its planned flight path, hovers over/near the bear at a safe distance (but close enough to ensure continuous surveillance), and proceeds to shadow the bear. The drone continues to do this until its battery reaches a low state, at which point it informs the base of the location of the bear and then proceeds to its assigned resting base. A minimal sketch of this per-step logic is given below.

When a drone arrives at a resting place, it will always first recharge its battery and undergo any maintenance scheduled for it. It will then move to a resting state, which is illustrated in Fig. 7. In this state, the drone will wait either until a stochastic process triggers its deployment or until an entity is sighted in the park (this is filtered by the drone's maximum range, i.e., drones will not be dispatched to check up on entities that are outside their maximum reach).
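The sketch below renders the per-step decision logic of Fig. 6; the enum, the method names, and the battery bookkeeping are our own simplifications, not the SiP source.

    /** Illustrative per-step logic of a deployed drone (cf. Fig. 6). */
    class DeployedDrone {
        enum Mode { PATROL, SHADOW_BEAR, RETURN_TO_BASE }

        Mode mode = Mode.PATROL;
        int battery;                 // remaining hops (1 hop costs 1 unit)
        final int reserve;           // charge kept for the return leg

        DeployedDrone(int battery, int reserve) {
            this.battery = battery;
            this.reserve = reserve;
        }

        void step(boolean hikerInSight, boolean bearInSight,
                  Runnable reportToBase) {
            battery--;                               // each hop costs 1
            if (battery <= reserve) {                // low battery: head home
                if (mode == Mode.SHADOW_BEAR) reportToBase.run(); // last bear fix
                mode = Mode.RETURN_TO_BASE;
                return;
            }
            if (bearInSight) {                       // abandon plan, shadow bear
                reportToBase.run();
                mode = Mode.SHADOW_BEAR;
            } else if (hikerInSight && mode == Mode.PATROL) {
                reportToBase.run();                  // report, then keep patrolling
            }
        }
    }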


Fig. 7 Upon their return to base, drones first charge until they are fully charged. They then idle until their departure is triggered by a stochastic process or until an entity is spotted in the park. If the entity was spotted while the drone was still charging, the drone first finishes charging and then immediately reacts to the sighting. In any case, once deployment is triggered, the drone creates a flight plan, files it, and then deploys to the park

In the case of an entity sighting, the next available drone is deployed immediately. If no drone is available, the next drone that completes its charging/maintenance cycle is allocated the task and will depart immediately. In any case, when a drone is deployed, it first creates a flight plan. If an entity was sighted, the first way point on this plan is the last known location of the entity. If the deployment is a standard patrol, the flight plan is built without any specific first way point.

When creating their flight plans, drones can be in one of two modes: cooperative or non-cooperative. In our simulations, this setting did not change during a simulation and was always the same for all drones; in other words, either all our drones were cooperating or none was. Cooperation has a simple meaning in our implementation: a cooperating drone will update the fog-of-war heatmap (see Fig. 2) for everyone, effectively communicating its intention to follow the planned path, so that the next drone can make its plan avoid visiting the same part of the park. In the non-cooperative setting, each drone maintains its own heatmap. This means that drones can avoid locations they themselves have previously visited, but they are uncertain about the actions of other drones. The updating happens before the drone leaves, because this removes the need for communication during a patrol, which may be costly, problematic, or denied due to environmental conditions outside the control of the drone.

Operational Cost for (Drone) Movement in the Park

With regard to power consumption and battery charging, each hop incurs a cost of 1, while each time step spent at a base recharges the battery by 1.


The range of each drone (at full charge) was set to 60, meaning it is capable of flying for 60 time steps and/or travelling 60 space discretisation units before needing to land and recharge. If we take the space discretisation unit to be 100 m, this corresponds to a total distance travelled per trip of 6 km. If the drones' flight time is a realistic 30 min, this in turn implies a time unit of 30 s and a flight speed of 12 km/h. NB: these values are approximations and are only intended to give the reader a rough idea of the scale at which the proposed system could operate.

4.4 Data Collection

For the performance measure, we need to reiterate the intended application domain, namely the continuous scheduling of observations over locations such that the age of the information is minimized as much as possible. This value is already recorded for all locations in the heatmap (see Fig. 2), and the aggregated value serves as the indicator of how well the park monitoring was done. Therefore, as discussed in Sect. 4.1, we selected two variables (time series) which we find representative:

1. Density of the fog-of-war, used as a global measure of the overall SA of the swarm
2. Duration of the actual observation window for each type of agent (relative to the time spent within the monitored area, itself a function of the randomly selected points of entry/exit)

These variables were recorded across a sufficient number of simulation runs to allow for a systematic analysis of the necessarily noisy data. A combination of statistical tools such as frequency distributions, averages, standard deviations, and extreme values (observed maximum and minimum) was used. The results are summarized in Sect. 5. A representative park was created and used for all investigations reported in this paper (this particular park is shown in Fig. 1). For each run of the simulation, the bases were positioned anew (subject to the constraint of a minimum distance of 20 locations between them, which at 100 m per location means a minimum distance between bases of 2 km).
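As an illustration of how these variables can be computed, the sketch below derives a fog-of-war density from an age-of-information heatmap and aggregates a recorded time series over runs; the normalisation by a saturation age is our own assumption, as the exact definition used in SiP is not spelled out here:

```python
import numpy as np

def fog_of_war_density(age_map, age_cap=100):
    """One plausible reading of the fog-of-war density: the mean age of
    information over all park locations, normalised to [0, 1] by a
    saturation age (illustrative choice)."""
    return float(np.clip(age_map, 0, age_cap).mean() / age_cap)

def summarise_runs(series_per_run):
    """Aggregate a recorded time series across independent simulation runs,
    yielding the kind of statistics reported in Sect. 5."""
    arr = np.asarray(series_per_run)        # shape: (runs, time_steps)
    return {"mean": arr.mean(axis=0), "std": arr.std(axis=0),
            "min": arr.min(axis=0), "max": arr.max(axis=0)}
```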

5 Results and Discussion

5.1 Cooperation versus Non-cooperation

In Sect. 3.2 we discussed the mission's swarm capabilities, which included cooperation. In Sect. 4.1 we suggested making the swarm's cooperative behaviour a Boolean parameter. In Sect. 4.3 we discussed how drones create their flight plans.
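A minimal sketch of that cooperative planning step follows (the array-based heatmaps and the greedy path score are our own simplifications, not the SiP code):

```python
import numpy as np

def file_flight_plan(path, shared_map, private_maps, drone_id, cooperative):
    """Before take-off, mark the planned path as 'about to be visited'.
    A cooperative drone writes to the shared fog-of-war heatmap so that
    subsequent planners avoid the same cells; otherwise only the drone's
    own copy is updated and other drones' intentions remain unknown."""
    fog = shared_map if cooperative else private_maps[drone_id]
    for (row, col) in path:
        fog[row, col] = 0.0       # reset age of information along the plan

def path_score(path, fog):
    """Greedy value of a candidate path: the total staleness it would clear."""
    return float(sum(fog[r, c] for (r, c) in path))
```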

[Two panels plotting “fog-of-war” density (0–1) against time (arbitrary units, 0–2000): 4 bases (4 landing/charging pads each) with 16 UAVs, with and without cooperation]
Fig. 8 Evolution of fog-of-war density. Average over 100 independent realizations. Dashed lines: extreme values (highest/lowest average density observed over all runs)

Results for Enabling Cooperation

Figure 8 shows the evolution of the fog-of-war for a park with 16 UAVs and 4 bases, each of which can charge 4 drones. The results show the averages over 100 runs, with the maximal and minimal performance values included as dashed lines.

Discussion: The Impact of Cooperation on the fog-of-war

Figure 8 shows the beneficial influence of cooperation on the swarm's ability to clear the fog-of-war efficiently. As discussed in Sect. 4.1, drones cooperate by updating the fog-of-war map along their intended flight plan before take-off, which reduces the chance of overlap with the patrol routes subsequently chosen by other drones (an implicit division of labour through emergent area specialization). For the chosen parameter values (4 bases, 16 drones), the effect is small but consistent (the absence of cooperation increases both the average and the extreme values).

5.2 Number of Bases and Number of Drones

Figure 9 summarizes the influence of the number of bases and drones on the clearing of the fog-of-war. The number of landing/charging pads is such that, independently of the number of bases, there is always a place to land for any of the up to 16 drones. In all cases, the swarm cooperates. The drones have a maximum reach, dictated by their battery and the location of their base. We would like to compare the performance of swarms of equal size but with different numbers of bases.

[Four panels plotting “fog-of-war” density (0–1) against time (arbitrary units, 0–2000): 4 bases (4 landing/charging pads each) with 8 UAVs, 1 base (16 landing/charging pads) with 8 UAVs, 4 bases (4 pads each) with 16 UAVs, and 1 base (16 pads) with 16 UAVs]
Fig. 9 Evolution of fog-of-war density. Average over 100 independent realizations. Dashed lines are extreme values (highest/lowest average density observed in any simulation run). Left: 4 bases; right: 1 base; top: 8 drones; bottom: 16 drones

Results for Varying Numbers of Bases and Stationed Drones

The results indicate that the number of bases has only a minor effect (for the tested parameter values) on the fog-of-war. The number of drones and their allocation over the bases has a bigger impact. This can be seen in Fig. 9, where there is hardly any difference between the left and right plots (which have the same number of drones but differ in the number of bases), while there is a noticeable difference between the top and bottom plots, which differ in the number of drones.

5.3 Park Monitoring and Target Surveillance

The evaluation of the swarm's performance with respect to monitoring relevant agents ("hikers" and "bears") is complicated by their stochastic behaviour (see Sect. 4.3). The results illustrate the complexity of the dynamics resulting from the swarm's collective decision-making. In the baseline scenario, in which neither "hikers" nor "bears" are present, no active recruitment takes place, which translates into fewer drones patrolling simultaneously (statistically) than would be feasible if every unit


took off again as soon as its battery was fully recharged (this "idling" can be interpreted as operating in a "resource-saving" mode in the absence of any remarkable event). This results in an average fog-of-war density of ≈0.24. In the presence of hikers only, this value drops to ≈0.19, which can be explained by the fact that, when a hiker is discovered, the swarm assigns itself a future mission to "check the presence of hiker X at their last known location," which is executed as soon as possible (i.e., when the first available unit capable of reaching the corresponding geographical coordinates is fully charged). The ensuing reduction in idling time at base (statistically) appears to be sufficient to improve the clearing rate of the fog-of-war. When only bears are present, the effect is intermediate, which is attributable to the difference in policy for dealing with each type of agent. When detecting a bear, and unless it is already being monitored by another swarm member, a drone adopts a loitering behaviour to keep it in sight, until such time as it must leave to recharge at the nearest available landing pad. At this point, a request is sent for a replacement to continue monitoring the animal (whether this "handover" is successful depends on when another drone becomes available, how long it takes to reach the last known location of the bear, and whether the target has moved beyond detection range in the interval). The result is that, on the one hand, idling is also reduced (as with hikers) but, on the other, some drones spend a substantial amount of airborne time "pinned" to one slowly moving location (hovering in the vicinity of the bear) and not on patrol, resulting in an average fog-of-war density of ≈0.22. The scenario in which both types of agents are involved is the most interesting and puzzling. Simulation results unequivocally indicate that the average fog-of-war density drops somewhat below that observed when only hikers are involved, to a value of ≈0.18 (see Fig. 10). This seems counter-intuitive as the

Fig. 10 Frequency distribution of the average “fog-of-war” density. The fact that every peak is clearly distinct indicates that each of the four scenarios leads to a different performance (as measured by this variable). NB: lower values are better. 500 realizations per scenario. Zero values are omitted


Fig. 11 Frequency distribution of the relative observation time (monitoring divided by total presence). Once detected, hikers (unlike bears) are not kept under surveillance but only “checked upon” at irregular intervals, so lower values are expected. The shift in the distribution for bears suggests that the swarm is struggling with this task when otherwise solicited by the monitoring of hikers. 1000 realizations per scenario. Zero values are omitted

protocol of the numerical experiment always ensures that the numbers of hikers (4) and bears (1) concurrently present in the park are identical to those in the two exclusive scenarios (involving only one type). Accordingly, the overall "workload" on the swarm is increased, as it must check on hikers and monitor bears at the same time, which may result in heightened strain on available resources, as confirmed by the slightly degraded performance for the latter task (see Fig. 11). We speculate that the full mobilization of the swarm caused by having to perform two types of mission simultaneously results in a further reduction of idling time, and that this is sufficient to compensate for the "pinning" effect of keeping the bear under constant surveillance. Verifying this hypothesis will be the subject of future work.

5.4 Discussion

The basic ConOps of the presented WS approach was described in Sect. 3 in the proposed scenario description, which we assess could be possible in a real-world implementation in perhaps 15 years. However, there are a few additional sociotechnical factors that should be considered in a real implementation of this type of ConOps. Firstly, as the swarm system is planned to function autonomously, minimal human input is needed in its operation. Nevertheless, humans would be needed at least for the maintenance of the system, as, over time, drones may start to experience technical issues or be lost during operations (e.g., due to extreme weather). To counter this problem, the drones should be designed to be as technically


robust and weather-resistant as possible. Secondly, the drone bases would need to be designed to be as durable as possible and to provide sufficient shelter for the drones to safely stay in this "nest" even for long periods. Major drone manufacturers have already taken steps towards similar drone-in-a-box solutions, such as DJI with their DJI Dock system (https://www.dji.com/fi/dock). Thirdly, the drones in the WS should include at least red/green/blue (RGB) and thermal imaging cameras as payloads. The RGB camera feed would be automatically analysed with AI to detect the relevant actors (e.g., humans and bears) in the wilderness. The thermal imaging could be used to detect a heat source (e.g., a campfire) and automatically issue the necessary alerts based on it. Furthermore, some drones in the swarm could be equipped with loudspeakers as payloads. The loudspeakers would enable the drones to automatically issue a loud warning to humans if a bear is close by, or corrective instructions if humans are seen, for example, littering. Alternatively, the loudspeaker could be used to scare the bear away from the direction where the humans are camping, or to instruct humans to put out a campfire right away. Fourthly, a human-machine interface (HMI) would be needed to monitor the functioning of the drones and the sensor feeds from them. A remote human user would need to see from the sensor feeds, for example, detected potential fires, to assess whether they are controlled campfires or uncontrolled starts of wildfires that require alerting a fire brigade. In addition to this type of system output monitoring from the HMI, some input from human users can be envisioned to be useful in future versions of the system. Such cases include, for example, the human user being able to assign special points of interest which the drone swarm should monitor more carefully. These points of interest may include typical camping sites of the hikers, or the place where a certain hiker was last seen if he/she has been reported missing (with no other connection to that hiker). Fifthly, the current implementation of the swarm in the SiP simulator includes only hovering quadcopter drones. In a scenario where the monitored area is very large, it would probably make more sense to have fixed-wing drones conducting the large-area surveillance due to their longer range and typically larger payload capacity. Therefore, a hybrid approach combining both vertical take-off and landing (VTOL) fixed-wing drones and quadcopters could be one avenue for an optimal implementation. The fixed-wing VTOL drones would be used to spot relevant actors/events in the environment, and the quadcopters would then be launched from the nearest drone base to monitor the relevant actor/event more closely. Sixthly, the drones of the swarm would need a connectivity solution to communicate their relevant findings between each other and to the human monitoring the system. In the future, this connectivity would most probably rely on 6G, combining both terrestrial and non-terrestrial (satellite-based) communications. The swarm's units would also need to communicate with each other in order to, for example, avoid duplicate efforts in exploration and keep a safe distance between the drone units when airborne. Related to the latter, a separate drone traffic and airspace management system would not necessarily need to be implemented, as the


swarm system itself would keep the drones at a safe distance from each other, and there are no other airspace users (e.g., helicopters or small planes) in the national park area. At the moment, however, the system is designed so that it does not require any communication for the exploration behaviour of the drones after take-off. The flight plan is generated in the base before take-off, and the drone executes the flight plan and comes back to land. Only at that point does the drone upload the collected information to the server. However, communicating the spotting of hikers and bears requires a data link to send that information at least back to the base. Finally, a ConOps similar to the one presented in our case study could be utilized in other environmental protection activities with drone swarms. As discussed in Sect. 1, the prevention of illegal fishing, poaching, logging/harvesting, mining, conversion of forests for agricultural use, and other environmental crimes is among the aims of environmental protection. A drone swarm similar to the one in our Wild Swarms example could be used for the detection of these types of illegal activities to advance environmental protection. From a technical perspective, it is only a matter of teaching the AI used to analyse the drone sensor feeds to detect the relevant things in the environment (e.g., fishers, poachers, equipment, or anomalies in the environment). Other similar drone swarm ConOpses have already been presented in the military [28] and wildfire management [26] domains.

6 Conclusion and Future Work

Our paper aimed to demonstrate a WS as a viable option for environmental protection, at least for medium-term operations. Some of the robust autonomous features (e.g., flights are planned before take-off, so each drone can operate without a connection to the base) allow the swarms to operate efficiently without human guidance for periods that are orders of magnitude longer than a single mission. However, over a long-term period, there are aspects of the system that an autonomous swarm would not be able to handle. For example, the drones still need maintenance. When a drone breaks down, the system is not able to repair it by itself with the currently foreseeable state of technology. To some extent, the collective intelligence can mitigate this problem: the SiP simulations showed that if drones are added or removed, the system still aims to do its best with the number of drones available. However, eventually critical mass will be lost and the swarm will no longer function. Nevertheless, functions such as the automatic take-off, landing, navigation, recharging, and autonomous division of labour of the swarm could already be conducted autonomously at the time of writing this paper. Several open questions still remain, for example the following: Could the swarms' mission be extended beyond maintaining situational awareness? So far, we have mostly discussed observation by the drone swarms, and less the possible actions. In the simulation, there is no actuation in the system:


when the system detects a bear, it only monitors it but does not, for example, release bear spray to chase it away from humans. In the future, there is the possibility to add some form of actuation to the system, so that it can do something about an event that is detected (e.g., suppressing a starting fire). A related question is whether heterogeneous swarms combining ground and aerial units would make more sense in that respect. In many cases, a drone that is meant to patrol and collect information about the environment might not be the best equipped to take care of the event it has just noticed. For example, in managing a starting fire, flying drones cannot carry much water or fire-retardant powder to extinguish the fire. However, they might still raise the alarm to a ground robot with a big tank of water that could be dispatched to the area of the fire. One can also ask whether the robotic swarm would affect the wildlife in the area where it is dispatched. There are some studies (e.g., [31, 32]) on the effect of drone noise on the fauna in certain environments. However, some future designs of drone propellers (such as toroidal propellers7) can help in minimizing the noise emitted by drones. Even though the bears might not be affected by the sound of the drones hovering above them, the humans in the wilderness area would probably react to a drone hovering nearby. For example, they could change their behaviour as they are aware of being monitored. On the other hand, a human might interpret a drone hovering some distance away as an indication that a bear is there and start moving away from the direction of the drone. The Wild Swarm approach presented in this paper aimed to demonstrate how an autonomous swarm-based cyber-physical system could contribute to environmental protection. The proposed ambitious and futuristic ConOps included a self-sufficient and self-organizing colony of "drone wardens" left to autonomously plan and execute a large number of individual reconnaissance missions. Our Swarm in a Park simulator demonstrated some of the key features of swarming in this context. Numerical experiments, in which a variety of simple but robust approaches were tested, demonstrated our SiP system's ability to support the relevant swarming behaviours, such as efficient division of labour and recruitment. Our future work will focus on the refinement and implementation of the presented WS ConOps and the SiP simulator. The swarming technology would benefit from an improved ConOps, further algorithm development, and additional numerical experiments (and analysis), allowing us to continue exploring the dynamics of the SiP system and to optimize its behaviour.

Acknowledgments This research is funded by the Academy of Finland (project grant number 348010) and conducted as part of the Unmanned Aerial Systems based solutions for real-time management of wildfires (FireMan) project. This research was also partly supported by the Academy of Finland project "Finnish UAV Ecosystem" (FUAVE, project grant number 337878).

7 https://www.ll.mit.edu/sites/default/files/other/doc/2022-09/TVO_Technology_Highlight_41_ Toroidal_Propeller.pdf.


References 1. AL-Dosari, K., Hunaiti, Z., Balachandran, W.: Systematic review on civilian drones in safety and security applications. Drones 7(3) (2023). https://doi.org/10.3390/drones7030210. https:// www.mdpi.com/2504-446X/7/3/210 2. Ali, S.H., de Oliveira, J.A.P.: Pollution and economic development: an empirical research review. Environ. Rese. Lett. 13(12), 123003 (2018). https://doi.org/10.1088/1748-9326/aaeea7. https://dx.doi.org/10.1088/1748-9326/aaeea7 3. Antenucci, A., Mazzaro, S., Fiorilla, A.E., Messina, L., Massa, A., Matta, W.: A ROS based automatic control implementation for precision landing on slow moving platforms using a cooperative fleet of rotary-wing UAVs. In: 2020 5th International Conference on Robotics and Automation Engineering (ICRAE). pp. 139–144 (2020). https://doi.org/10.1109/ICRAE50850. 2020.9310899 4. Arnold, R., Jablonski, J., Abruzzo, B., Mezzacappa, E.: Heterogeneous UAV multi-role swarming behaviors for search and rescue. In: 2020 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA). pp. 122–128 (2020). https:// doi.org/10.1109/CogSIMA49017.2020.9215994 5. Ausonio, E., Bagnerini, P., Ghio, M.: Drone swarms in fire suppression activities: a conceptual framework. Drones 5(1) (2021). https://doi.org/10.3390/drones5010017. https://www.mdpi. com/2504-446X/5/1/17 6. Bergenas, J., Knight, A.: Green terror: environmental crime and illicit financing. SAIS Rev. Int. Affairs 35(1), 119–131 (2015), https://www.jstor.org/stable/27000981 7. Berger-Tal, O., Lahoz-Monfort, J.J.: Conservation technology: the next generation. Conser. Lett. 11(6), e12458 (2018). https://doi.org/10.1111/conl.12458. https://conbio.onlinelibrary. wiley.com/doi/abs/10.1111/conl.12458 8. Berners-Lee, M.: There Is No Planet B: A Handbook for the Make or Break Years. Cambridge University Press, Cambridge (2019). https://doi.org/10.1017/9781108545969 9. Bolla, G.M., Casagrande, M., Comazzetto, A., Dal Moro, R., Destro, M., Fantin, E., Colombatti, G., Aboudan, A., Lorenzini, E.C.: Aria: Air pollutants monitoring using UAVs. In: 2018 5th IEEE International Workshop on Metrology for AeroSpace (MetroAeroSpace). pp. 225– 229 (2018). https://doi.org/10.1109/MetroAeroSpace.2018.8453584 10. Büscher, B., Ramutsindela, M.: Green violence: Rhino poaching and the war to save Southern Africa’s peace parks. Afr. Aff. 115(458), 1–22 (12 2015). https://doi.org/10.1093/afraf/adv058 11. Dhar, P.: The carbon impact of artificial intelligence. Nat. Mach. Intell. 2(8), 423–425 (2020). https://doi.org/10.1038/s42256-020-0219-9 12. Doull, K.E., Chalmers, C., Fergus, P., Longmore, S., Piel, A.K., Wich, S.A.: An evaluation of the factors affecting ‘poacher’ detection with drones and the efficacy of machine-learning for detection. Sensors 21(12) (2021). https://doi.org/10.3390/s21124074. https://www.mdpi.com/ 1424-8220/21/12/4074 13. Fairley, R., Thayer, R., Bjorke, P.: The concept of operations: the bridge from operational requirements to technical specifications. In: Proceedings of IEEE International Conference on Requirements Engineering. pp. 40–47 (1994). https://doi.org/10.1109/ICRE.1994.292405 14. 
Goddard, M.A., Davies, Z.G., Guenat, S., Ferguson, M.J., Fisher, J.C., Akanni, A., Ahjokoski, T., Anderson, P.M.L., Angeoletto, F., Antoniou, C., Bates, A.J., Barkwith, A., Berland, A., Bouch, C.J., Rega-Brodsky, C.C., Byrne, L.B., Cameron, D., Canavan, R., Chapman, T., Connop, S., Crossland, S., Dade, M.C., Dawson, D.A., Dobbs, C., Downs, C.T., Ellis, E.C., Escobedo, F.J., Gobster, P., Gulsrud, N.M., Guneralp, B., Hahs, A.K., Hale, J.D., Hassall, C., Hedblom, M., Hochuli, D.F., Inkinen, T., Ioja, I.C., Kendal, D., Knowland, T., Kowarik, I., Langdale, S.J., Lerman, S.B., MacGregor-Fors, I., Manning, P., Massini, P., McLean, S., Mkwambisi, D.D., Ossola, A., Luque, G.P., Pérez-Urrestarazu, L., Perini, K., Perry, G., Pett, T.J., Plummer, K.E., Radji, R.A., Roll, U., Potts, S.G., Rumble, H., Sadler, J.P., de Saille, S., Sautter, S., Scott, C.E., Shwartz, A., Smith, T., Snep, R.P.H., Soulsbury, C.D., Stanley, M.C., Van de Voorde, T., Venn, S.J., Warren, P.H., Washbourne, C.L., Whitling, M., Williams, N.S.G.,


Yang, J., Yeshitela, K., Yocom, K.P., Dallimer, M.: A global horizon scan of the future impacts of robotics and autonomous systems on urban ecosystems. Nat. Ecol. Evolut. 5(2), 219–230 (2021). https://doi.org/10.1038/s41559-020-01358-z 15. Gonçalves, L., Damas, B.: Automatic detection of rescue targets in maritime search and rescue missions using UAVs. In: 2022 International Conference on Unmanned Aircraft Systems (ICUAS). pp. 1638–1643 (2022). https://doi.org/10.1109/ICUAS54217.2022.9836137 16. Guenat, S., Purnell, P., Davies, Z.G., Nawrath, M., Stringer, L.C., Babu, G.R., Balasubramanian, M., Ballantyne, E.E.F., Bylappa, B.K., Chen, B., De Jager, P., Del Prete, A., Di Nuovo, A., Ehi-Eromosele, C.O., Eskandari Torbaghan, M., Evans, K.L., Fraundorfer, M., Haouas, W., Izunobi, J.U., Jauregui-Correa, J.C., Kaddouh, B.Y., Lewycka, S., MacIntosh, A.C., Mady, C., Maple, C., Mhiret, W.N., Mohammed-Amin, R.K., Olawole, O.C., Oluseyi, T., Orfila, C., Ossola, A., Pfeifer, M., Pridmore, T., Rijal, M.L., Rega-Brodsky, C.C., Robertson, I.D., Rogers, C.D.F., Rougé, C., Rumaney, M.B., Seeletso, M.K., Shaqura, M.Z., Suresh, L.M., Sweeting, M.N., Taylor Buck, N., Ukwuru, M.U., Verbeek, T., Voss, H., Wadud, Z., Wang, X., Winn, N., Dallimer, M.: Meeting sustainable development goals via robotics and autonomous systems. Nat. Commun. 13(1), 3559 (2022). https://doi.org/10.1038/s41467-022-31150-5 17. Henderson, G., Wagner, R.O., Jeanne, R.L.: Prairie ant colony longevity and mound growth. Psyche J. Entomol. 96 (1989). https://doi.org/10.1155/1989/51654 18. Hildmann, H., Kovacs, E.: Review: using unmanned aerial vehicles (UAVs) as mobile sensing platforms (MSPs) for disaster response, civil security and public safety. Drones 3(3) (2019). https://doi.org/10.3390/drones3030059. https://www.mdpi.com/2504-446X/3/3/59 19. Hildmann, H., Kovacs, E., Saffre, F., Isakovic, A.F.: Nature-inspired drone swarming for realtime aerial data-collection under dynamic operational constraints. Drones 3(3) (2019). https:// doi.org/10.3390/drones3030071. https://www.mdpi.com/2504-446X/3/3/71 20. Hill, V.W., Thomas, R.W., Larson, J.D.: Autonomous situational awareness for UAS swarms. In: 2021 IEEE Aerospace Conference (50100). pp. 1–6 (2021). https://doi.org/10.1109/ AERO50100.2021.9438461 21. Holden, M.H., Biggs, D., Brink, H., Bal, P., Rhodes, J., McDonald-Madden, E.: Increase antipoaching law-enforcement or reduce demand for wildlife products? A framework to guide strategic conservation investments. Conserv. Lett. 12(3), e12618 (2019). https://doi.org/10. 1111/conl.12618. https://conbio.onlinelibrary.wiley.com/doi/abs/10.1111/conl.12618 22. Janik, P., Zawistowski, M., Fellner, R., Zawistowski, G.: Unmanned aircraft systems risk assessment based on SORA for first responders and disaster management. Appl. Sci. 11(12) (2021). https://doi.org/10.3390/app11125364. https://www.mdpi.com/2076-3417/11/12/5364 23. Jayachandran, S.: How economic development influences the environment. Ann. Rev. Econ. 14(1), 229–252 (2022). https://doi.org/10.1146/annurev-economics-082321-123803 24. Jiménez López, J., Mulero-Pázmány, M.: Drones for conservation in protected areas: Present Future. Drones 3(1) (2019). https://doi.org/10.3390/drones3010010. https://www.mdpi.com/ 2504-446X/3/1/10 25. Kaack, L.H., Donti, P.L., Strubell, E., Kamiya, G., Creutzig, F., Rolnick, D.: Aligning artificial intelligence with climate change mitigation. Nat. Climate Change 12(6), 518–527 (2022). https://doi.org/10.1038/s41558-022-01377-7 26. 
Karvonen, H., Honkavaara, E., Röning, J., Kramar, V., Sassi, J.: Using a semi-autonomous drone swarm to support wildfire management – a concept of operations development study. In: Engineering Psychology and Cognitive Ergonomics (2023) 27. Keitt, T.H., Abelson, E.S.: Ecology in the age of automation. Science 373(6557), 858–859 (2021). https://doi.org/10.1126/science.abi4692. https://www.science.org/doi/abs/10. 1126/science.abi4692 28. Laarni, J., Väätänen, A., Karvonen, H., Lastusilta, T., Saffre, F.: Development of a concept of operations for a counter-swarm scenario. In: Harris, D., Li, W.C. (eds.) Engineering Psychology and Cognitive Ergonomics. pp. 49–63. Springer International Publishing, Cham (2022)


29. Mangal, P., Rajesh, A., Misra, R.: Big data in climate change research: Opportunities and challenges. In: 2020 International Conference on Intelligent Engineering and Management (ICIEM). pp. 321–326 (2020). https://doi.org/10.1109/ICIEM48762.2020.9160174 30. Mantau, A.J., Widayat, I.W., Leu, J.S., Köppen, M.: A human-detection method based on YOLOv5 and transfer learning using thermal image data from UAV perspective for surveillance system. Drones 6(10) (2022). https://doi.org/10.3390/drones6100290. https://www.mdpi.com/2504-446X/6/10/290 31. Mesquita, G.P., Mulero-Pázmány, M., Wich, S.A., Rodríguez-Teijeiro, J.D.: Terrestrial megafauna response to drone noise levels in ex situ areas. Drones 6(11) (2022). https://doi.org/10.3390/drones6110333. https://www.mdpi.com/2504-446X/6/11/333 32. Mo, M., Bonatakis, K.: Approaching wildlife with drones: using scientific literature to identify factors to consider for minimising disturbance. Aust. Zool. 42(1), 1–29 (2021). https://doi.org/10.7882/AZ.2021.015 33. Monserrate, S.G.: The Cloud Is Material: On the Environmental Impacts of Computation and Data Storage. MIT Case Studies in Social and Ethical Responsibilities of Computing (Winter 2022) (2022). https://mit-serc.pubpub.org/pub/the-cloud-is-material 34. Popović, M.: Counting penguins with drones. Sci. Rob. 5(47), eabe7458 (2020). https://doi.org/10.1126/scirobotics.abe7458. https://www.science.org/doi/abs/10.1126/scirobotics.abe7458 35. Porter, S.D., Jorgensen, C.D.: Longevity of harvester ant colonies in southern Idaho. J. Range Manag. 41, 104–107 (1988). https://doi.org/10.2307/3898942. http://hdl.handle.net/10150/645205 36. Qiu, Z., Bai, H., Chen, T.: Special vehicle detection from UAV perspective via YOLO-GNS based deep learning network. Drones 7(2) (2023). https://doi.org/10.3390/drones7020117. https://www.mdpi.com/2504-446X/7/2/117 37. Sachs, J.D., Schmidt-Traub, G., Mazzucato, M., Messner, D., Nakicenovic, N., Rockström, J.: Six transformations to achieve the sustainable development goals. Nat. Sustainab. 2(9), 805–814 (2019). https://doi.org/10.1038/s41893-019-0352-9 38. Saffre, F., Hildmann, H., Karvonen, H., Lind, T.: Monitoring and cordoning wildfires with an autonomous swarm of unmanned aerial vehicles. Drones 6(10) (2022). https://doi.org/10.3390/drones6100301. https://www.mdpi.com/2504-446X/6/10/301 39. Saffre, F., Hildmann, H., Karvonen, H., Lind, T.: Self-swarming for multi-robot systems deployed for situational awareness. In: Lipping, T., Linna, P., Narra, N. (eds.) New Developments and Environmental Applications of Drones. pp. 51–72. Springer International Publishing, Cham (2022) 40. Sanford, A., Lopez, M., Johnson, J., Dennis, A., Meyers, L., Alshammari, H., Garcia, G.: Sailboat-mounted submersible device for ocean and atmospheric data collection. In: 2021 Waste-management Education Research Conference (WERC). vol. 02, pp. 1–5 (2021). https://doi.org/10.1109/WERC52047.2021.9477544 41.
Saraiva, C.M.D.: Autonomous environmental protection drone. Dissertação de mestrado, Iscte - Instituto Universitário de Lisboa (2019) 42. Seeley, T.D.: Adaptive significance of the age polyethism schedule in honeybee colonies. Behav. Ecol. Sociobiol. 11(4), 287–293 (1982). https://doi.org/10.1007/BF00299306 43. Shah, K., Ballard, G., Schmidt, A., Schwager, M.: Multidrone aerial surveys of penguin colonies in Antarctica. Sci. Rob. 5(47), eabc3000 (2020). https://doi.org/10.1126/scirobotics. abc3000. https://www.science.org/doi/abs/10.1126/scirobotics.abc3000 44. Sharma, N., Saqib, M., Scully-Power, P., Blumenstein, M.: SharkSpotter: Shark Detection with Drones for Human Safety and Environmental Protection, pp. 223–237. Springer International Publishing, Cham (2022). https://doi.org/10.1007/978-3-030-72188-6_11


45. Simões, L., Ladeiro, L., Bernardino, J., Pedrosa, I.: The usage of big data and data analytics in the study of climate change. In: 2021 16th Iberian Conference on Information Systems and Technologies (CISTI). pp. 1–6 (2021). https://doi.org/10.23919/CISTI52073.2021.9476492 46. Stahlbuhk, T., Deutsch, P., Kelly, D., Cipolle, D., Wong, T., Bartlett, W., Hood, K.: Robust network protocols for large swarms of small UAVs. In: 2022 IEEE Aerospace Conference (AERO). pp. 1–18 (2022). https://doi.org/10.1109/AERO53065.2022.9843316 47. Ubina, N.A., Cheng, S.C.: A review of unmanned system technologies with its application to aquaculture farm monitoring and management. Drones 6(1) (2022). https://doi.org/10.3390/ drones6010012. https://www.mdpi.com/2504-446X/6/1/12 48. Walendziuk, W., Szatylowicz, E., Oldziej, D., Slowik, M.: Unmanned aerial vehicle as a measurement tool in engineering and environmental protection. In: Romaniuk, R.S., Linczuk, M. (eds.) Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments 2018. vol. 10808, p. 108085X. International Society for Optics and Photonics, SPIE, Bellingham (2018). https://doi.org/10.1117/12.2501378 49. Wang, D., Post, W., Wilson, B.: Climate change modeling: computational opportunities and challenges. Comput. Sci. Eng. 13(5), 36–42 (2011). https://doi.org/10.1109/MCSE.2010.147 50. Wang, Y., Fan, Y., Wang, G., Qiao, S., Feng, R.: Research on cooperative situational awareness of unmanned surface vehicle swarm based on multi-radar joint detection. In: 2022 7th International Conference on Automation, Control and Robotics Engineering (CACRE). pp. 51–57 (2022). https://doi.org/10.1109/CACRE54574.2022.9834192 51. Zhang, X., Bai, Y., He, K.: On countermeasures against cooperative fly of UAV swarms. Drones 7(3) (2023). https://doi.org/10.3390/drones7030172. https://www.mdpi.com/2504-446X/7/3/ 172

Connecting Different Drone Operations with the Farm Robotic Management

Jere Kaivosoja, Kari Kolehmainen, Oskar Marko, Ari Ronkainen, Nina Pajević, Marko Panić, Sergio Vélez, Mar Ariza-Sentis, João Valente, and Juha-Pekka Soininen

1 Introduction

Food production faces major challenges in productivity, cost-efficiency, and sustainability [1]. Automation and digitalization have been proposed to address productivity needs, but at the same time, they are costly solutions. Recently, the concept of smart farming, also called advanced precision agriculture, has emerged as part of the Fourth Industrial Revolution [2, 3]. This includes an increase in automation and robotization. Drones, also called UAVs (unmanned aerial vehicles), can be a crucial part of future farm automation and robotization in agricultural fields. Drone technologies can offer relatively cheap, inclusive, and advanced smart solutions [4]. The approach does not require direct contact with crops or soil, and missions are often fully automated already by the manufacturer. By applying drones, farmers can produce accurate, on-demand measurements of the phenomenon of interest and can perform small-scale operations such as pesticide spraying or the spreading of fertilizers. From the farmer's point of view, drones are separate semiautonomous robots that can perform specific tasks related to farming.

J. Kaivosoja (✉) · A. Ronkainen
Natural Resources Institute Finland (LUKE), Production Technologies, Helsinki, Finland
e-mail: [email protected]

K. Kolehmainen · J.-P. Soininen
VTT Technical Research Centre Finland, Digital Technologies, Oulu, Finland

O. Marko · N. Pajević · M. Panić
BioSens Institut, Center for Information Technologies, Novi Sad, Serbia

S. Vélez · M. Ariza-Sentis · J. Valente
WUR Wageningen University & Research, Information Technology Group, Wageningen, the Netherlands

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
T. Westerlund, J. Peña Queralta (eds.), New Developments and Environmental Applications of Drones, https://doi.org/10.1007/978-3-031-44607-8_2


In a review [5], farming robotics were divided into flying drones and land-based robots. According to another review [6], agricultural multirobot systems (MRSs) include aerial robots, ground robots, and manipulators. In both reviews, drones are seen as an essential part of agricultural robotization. In addition to drone technologies, automation and robotics in arable farming have continuously advanced. There are task-specialized robots [7], general robot platforms that can be adjusted to different tasks, and semiautonomous tractors that can operate with multiple existing tools. As the automation and the number of units increase, the operation management of the robotics becomes an essential part. Farming machinery is developing toward digitalized and autonomous systems. The concepts of cloud robotics and digital twins have added capabilities to monitor and control robotic devices [8]. Furthermore, the Internet-of-Robotic-Things (IoRT) concept combines the cloud robotics and IoT domains and allows managing robots from IoT platforms [9]. In this study, we connect state-of-the-art drone technologies with future farm multirobot mission management and take the first steps toward the integration of the drone technologies. We aim to determine requirements for future drone systems operating in agriculture and to find solutions for integrating existing state-of-the-art systems into the farm robotic management concept.

2 Material and Methods

We developed eight different drone operation use cases at approximately TRL 7 (technology readiness level). We then studied the integration capabilities of the drone technologies with the multirobot mission workflow management included in a mission control center (MCC) approach [10], to enable data flow to the traditional farm management system (FMS) operated by the farmer, farm data storage, other robotics, linked AI enterprises, and IoT solutions provided by third parties [10]. The MCC is a project-specific solution that contains the heterogeneous robotic mission management and fleet management mandatory for operations with heterogeneous machinery and drones. Certain current limitations of drone technologies, such as short operational time due to battery constraints, are not strictly considered in the use cases. We assume that new developments in the drone industry will solve those problems on some timescale, for example, by applying automatic docking stations.

2.1 Use Cases with Drones

These use cases were the detection of rapeseed (Brassica napus) pests, rapeseed pest control by a spraying application, situation awareness of a tractor fleet in grass harvest, weed mapping in grass fields, highbush blueberry (Vaccinium corymbosum)


Table 1 Used drones, their primary payloads, target application, and mission type in the studied use cases

| Drone platform | Payload | Application | Mission type |
|---|---|---|---|
| DJI Mavic Air 2 | Integrated camera | Moving objects | Live video stream |
| Parrot Anafi | Integrated camera | Rapeseed | Imaging |
| DJI Phantom 4 RTK | Integrated camera | Vineyards | Mapping |
| DJI Agras T-16 | Integrated spraying unit | Rapeseed | Spraying |
| DJI Matrice M600 pro | Specim AFX10 (VNIR) | Blueberries | Mapping |
| DJI Matrice 210 RTK | Zenmuse X5S RGB and MicaSense Rededge | Vineyards | Mapping |
| DJI Matrice 300 RTK | Zenmuse L1 LIDAR and MicaSense Altum | Vineyards | Mapping |
| Trinity F90+ | MicaSense Altum | Blueberries | Mapping |

field disease detection, blueberry yield prediction and irrigation management, and grapevine (Vitis vinifera) disease risk mapping. The use cases were carried out in Finland, Spain, and Serbia. By these means, we piloted a versatile set of state-of-the-art drone platforms and tools. The drones we applied are presented in Table 1. The table shows the brand of the drone, the used payloads (typically a camera), the target application, and the mission type for each use case. The imaging mission was a case study where drones take close-range images within 1–2 m of the target field [11, 12] or insect traps. These images are then further analyzed to determine the number of pests within that area. The mission plan and the image transmission can be handled offline. The live video stream use case was an extension of the situation awareness of a robot tractor, where the drone follows the tractor and sees the surrounding areas without being dependent on any other robot unit. This mission requires real-time communication for the flying operation and for the video output. The pest control use case demonstrated pest control with a spraying drone [12], carried out within present aerial regulations, meaning that the drone did not spray anything on its mission. The mapping use cases included Botrytis detection in vineyards [13, 14], row segmentation for ground vehicles, disease detection, and yield prediction in the highbush blueberry fields. In the drone mapping use cases, the connection is all about getting the preplanned mission to the drone system and getting the recorded data to the analysis phase. Most of the drones in the use cases represent brands with closed systems, which is quite typical of commercial drones, partly for safety reasons and due to aviation regulations.

2.2 Multirobot Mission Management and Connectivity

In a multirobot scenario in future farming, several different drones and ground robots can work simultaneously in the same field, performing similar or different tasks.


From the supervision point of view, multirobot missions of this kind, with several parallel actions, might be practical: the transportation and the monitoring of multiple units can be done simultaneously with potentially minor effort. A multirobot agricultural mission workflow management is proposed in [10]. The drone systems need to be connected to the robotic mission management layer. This mission management has a workflow model that is used both for controlling the execution of activities and for collecting and sharing data. The model supports the Robotics-as-a-Service (RaaS) model, where a separate company provides robot fleet services for the farmer [10]. If the drones or their control systems are connected to the Internet, the connection is straightforward to implement with an MQTT (Message Queuing Telemetry Transport) client on the robot. MQTT is a standard for IoT messaging [15]. In the case of the open Robot Operating System (ROS), this is straightforward [10]. However, drones are often closed solutions. With closed systems, one possibility is to use a gateway that reads the drone messages from the control system and creates the respective status messages for the fleet manager using the Micro Air Vehicle Communication Protocol (MAVLink) [16], a common protocol for communicating with drones. This layer is responsible for the coordination of robot operations in the field. It consists of the fleet manager, the robots, and the open-source MQTT broker Mosquitto [17]. The drones themselves can be autonomous, they execute their own missions, and they can be operated through their dedicated controllers [10]. However, the drone collaboration must be built into each drone. In addition, a tool for sending simple control commands to the whole robotic fleet or to a single robot is needed. QGroundControl (QGC) is an open-source software that provides full flight control and mission planning for any MAVLink-enabled drone. Rosetta Drone [18] is an Android app that brings the MAVLink protocol to DJI SDK (Software Development Kit) drones, available for most of the products that were used in the use cases. The Rosetta Drone application is installed on an Android device attached to the SDK controller, and it is used to run the QGC application for the drone mission. QGC can be adopted for monitoring multiple robots using the MAVLink protocol, with Open Drone ID [19] as the identifier for separate drones. However, that protocol is currently at the development stage. The protocol is defined so that it is compliant with EU regulations (EN 4709-002) [20]. In drone solutions where the mission is very specific or the system is completely closed (such as the DJI Agras T-16 or Trinity F90+ in our use cases), an additional onboard Android device running the MAVLink UDP Android Example application [21] can be added to send data to QGC using the MAVLink protocol. This provides real-time information about the status of the drone but does not provide any communication to the drone itself. Our selection of drones was based on the availability of tools and their practicality for the planned missions. The missions were flown during the summers of 2021 and 2022. Since then, DJI has released the DJI Cloud API [22], through which, based on the MQTT protocol, a third-party platform server can be accessed, enabling private protocols. This makes the adaptation feasible for the supported modern drones. The Cloud API feature set gives access to the cloud server, device management, live streaming, media

Connecting Different Drone Operations with the Farm Robotic Management

37

management, and path management in real time [23]. It is designed for vertical applications, where applications are created according to a specification provided by the end user. This fits well with our use cases.
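As a concrete illustration of the gateway approach described above, the sketch below reads MAVLink telemetry (e.g., as exposed by Rosetta Drone over UDP) and republishes it as MQTT status messages for the fleet manager. It relies on the pymavlink and paho-mqtt libraries; the broker address, topic name, and JSON schema are our own illustrative choices, not part of the MCC specification:

```python
# Minimal MAVLink-to-MQTT gateway sketch for fleet status reporting.
import json
import paho.mqtt.client as mqtt
from pymavlink import mavutil

mav = mavutil.mavlink_connection("udp:0.0.0.0:14550")  # incoming telemetry stream
mqttc = mqtt.Client()   # paho-mqtt 1.x style; 2.x also expects a CallbackAPIVersion
mqttc.connect("broker.example.org", 1883)              # hypothetical Mosquitto broker
mqttc.loop_start()

while True:
    msg = mav.recv_match(type="GLOBAL_POSITION_INT", blocking=True)
    status = {
        "drone_id": "uav-1",                           # an Open Drone ID could go here
        "lat": msg.lat / 1e7, "lon": msg.lon / 1e7,    # MAVLink sends degrees * 1e7
        "alt_m": msg.alt / 1000.0,                     # millimetres -> metres
    }
    mqttc.publish("fleet/uav-1/status", json.dumps(status))
```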

3 Results


As our results, we present our data integration methodologies. There are four identified solutions for the integration (Fig. 2): (A) connecting the drone autopilot directly, enabling real-time data flow with third parties; (B) connecting the ground station with the third-party applications, making it the most reliable with respect to the flying operation; (C) connecting the payload to the third-party application directly or via the ground station, a solution that may be incomplete in relation to the data; and (D) connecting the data processing service, which does not enable real-time communication but is more or less the state-of-the-art solution. Figure 1 shows the MQTT connection possibilities and the connectivity within the drone missions. Table 2 presents the mission integration methodology, the enabling tools, and the classified solution (A–D). All the drones ended up with different solutions. In solution A, the DJI drone remote controller is connected to an Android smartphone running the custom controller software. This controller software is connected to the mission management via the MQTT broker and is also able to stream live video from the drone to an RTMP (Real-Time Messaging Protocol) video streaming service. The SDK allows controlling the drone and reports the real-time GNSS (global navigation satellite system) position of the drone. The target position comes via MQTT. In solution B, the QGroundControl software communicates with the drone using the MAVLink protocol. In solution C, an external onboard Android payload manages the real-time connection to QGroundControl using the MAVLink protocol, without having a true connection to the drone itself.

[Figure 1 diagram: the mission planner and drone mission controller (“ground station”), the drone controller (“sticks”), the drone autopilot, the payload (“camera”), and the data processing services (“data + KML”), with MQTT connection points A–D]
Fig. 1 General connectivity within the drone mission and the MQTT options


Table 2 Integration solutions for different drones and use cases

| Drone platform | Tool | Mission integration methodology | Enabling tools | Data output | Solution |
|---|---|---|---|---|---|
| DJI Mavic Air 2 | Integrated camera | Custom software | SDK, custom app | MQTT | A |
| Parrot Anafi | Integrated camera | QGC | Additional gamepad | SD card | B, D |
| DJI Phantom 4 RTK | Integrated camera | QGC | UX SDK, Rosetta Drone | SD card | B, D |
| DJI Agras T-16 | Integrated spraying unit | QGC | Onboard Android, MAVLink UDP Android | QGC data export | C |
| DJI Matrice M600 pro | Specim AFX10 (VNIR) | MAVLink | Android SDK | RJ45 cable, specific software | C |
| DJI Matrice 210 RTK | Zenmuse X5S RGB & MicaSense Rededge | QGroundControl | UX SDK | SD card, orthomosaic assessment tool | B |
| DJI Matrice 300 RTK | Zenmuse L1 LIDAR & MicaSense Altum | DJI Pilot 2 | Cloud API | SD card, orthomosaic assessment tool, Cloud API | B, D |
| Trinity F90+ | MicaSense Altum | QGC, separate camera mission | Cloud API | SD card, orthomosaic assessment tool | B, D |


Fig. 2 Screen captures of different connectivity (a–d) methodologies applied in the use cases

In solution D, collected data is manually transferred to the processing tools and then further delivered to the farm management system or for other purposes. Figure 2 presents screen captures of the different use cases, illustrating the concrete applications: (A) the custom drone controller on an Android device, (B) QGC operating a pest imaging mission, (C) the spraying drone controller screen and QGroundControl operating with the additional Android payload, and (D) a 3D model of a vineyard ready for further analysis.
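For illustration, a status message of the kind exchanged over MQTT in solution A could look like the following sketch (the field names and values are our own illustrative assumptions, not the project's message specification):

```python
from __future__ import annotations
import json
import time
from dataclasses import dataclass, asdict, field

@dataclass
class DroneStatus:
    drone_id: str                  # e.g., an Open Drone ID
    lat: float                     # real-time GNSS position reported by the SDK
    lon: float
    alt_m: float
    battery_pct: float
    mission_state: str             # e.g., "idle", "enroute", "imaging"
    video_url: str | None = None   # RTMP endpoint when live streaming
    timestamp: float = field(default_factory=time.time)

status = DroneStatus("uav-1", 60.4518, 22.2666, 42.0, 81.0, "enroute",
                     "rtmp://media.example.org/live/uav-1")
payload = json.dumps(asdict(status))   # ready to publish on an MQTT topic
print(payload)
```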

4 Discussion and Conclusions

Our aim was to determine the requirements for future drone systems operating in agricultural robotics and to find solutions for integrating them into the farm robotic management concept. For this, we needed the management concept, including the mission control center (MCC) concept for the robotics [10]. We found four ways to integrate the drones: connecting the drone autopilot directly, connecting the ground station with the third-party applications, connecting the payload to the third-party application, or connecting the data processing service. Third-party applications and middleware were required to connect state-of-the-art drone technologies with the future robotic mission management. These solutions made the operation more complicated, which is not desirable in aviation, as more components may fail or be incompatible in exceptional situations. Fortunately, the evolution of drone technologies is heading in a favorable direction, where third-party participation is


enabled without directly impacting the flying mission execution. This is also a good direction from the multirobot integration perspective. When heterogeneous robotic systems are considered in the farming environment, the alternative to the MCC approach is robotic integration at the task level (single-contractor solutions) rather than at the farm level. With that approach, separate closed solutions for drone technologies would be enough. But as with connecting heterogeneous farm machinery (the idea behind ISOBUS standardization), interoperability is constantly needed, as no farm is like another. Safety aspects in drone missions can be highly critical, especially in the prototyping phase. Involving third-party applications in the mission execution itself is not favorable. Although the critical data can be accessed in real time, the execution can be unreliable. For this reason, it is good that manufacturers are starting to provide interfaces for the mission management. On the other hand, drones and their operation are a special case in the world of robotic fleet operations in agriculture and may impose additional requirements owing to the nature of the operation. As the need for enabling real-time third-party access to the drone mission has already emerged, future robotic mission management may take advantage of it. Within this development, it is important to take into account the robotic control needs within heterogeneous robotic fleets. Such needs include real-time connectivity to the drone control, telemetry data, the connection to the payload (such as a video stream, still images, or a work actuator), and the connection to the general mission control coordinating multiple heterogeneous units in agricultural operations. Future work should first focus on defining the interface requirements for the drone industry. Most state-of-the-art drones are designed for certain operations or tasks, which will not be exactly the same when the operation is part of field robotics. For example, there would not be a dedicated supervisor; in addition, online communication with other units and mission management are needed. At present, legislation, safety, and current drone capabilities dominate the related discussions.

Acknowledgments This research was done under the project FlexiGroBots funded by the European Union's Horizon 2020 research and innovation program under grant agreement No 101017111.

References 1. Lezoche, M., Hernandez, J., Díaz, M., Panetto, H., Kacprzyk, J.: Agri-food 4.0: a survey of the supply chains and technologies for the future agriculture. Comput. Ind. 117, 103187 (2020). https://doi.org/10.1016/j.compind.2020.103187 2. Abbasi, R., Martinez, P., Ahmad, R.: The digitization of agricultural industry–a systematic literature review on agriculture 4.0. Smart Agric. Technol. 2, 100042 (2022) 3. da Silveira, F., Lermen, F., Amaral, F.: An overview of agriculture 4.0 development: systematic review of descriptions, technologies, barriers, advantages, and disadvantages. Comput. Electron. Agric. 189, 106405 (2021)


4. Kaivosoja, J.: Future possibilities and challenges for UAV-based imaging development is smart farming. In: New Developments and Environmental Applications of Drones: Proceedings of FinDrones, pp. 109–119 (2020) 5. Botta, A., Cavallone, P., Baglieri, L., Colucci, G., Tagliavini, L., Quaglia, G.: A review of robots, perception, and tasks in precision agriculture. Appl. Mech., 830–854 (2022). https:// doi.org/10.3390/applmech3030049 6. Ju, C., Kim, J., Seol, J., Son, H.: A review on multirobot systems in agriculture. Comput. Electron. Agric. 202, 107336 (2022) 7. Shamshiri, R., Weltzien, C., Hameed, I., Yule, I., Grift, T., Balasundram, S., Pitonakova, L., Ahmad, D., Chowdhary, G.: Research and development in agricultural robotics: a perspective of digital farming. Int. J. Agric. Biol. Eng. 11(4), 1–14 (2018) 8. Vermesan, O., Bahr, R., Ottella, M., Serrano, M., Karlsen, T., Wahlstrom, T., Sand, H., Ashwathnarayan, A., Gamba, M.: Internet of robotic things intelligent connectivity and platforms. Front. Robot. AI. 1, 104 (2020). www.frontiersin.org 9. Ray, P.: Internet of robotic things: concept, technologies, and challenges. IEEE Access. 4 (2016). https://doi.org/10.1109/ACCESS.2017.2647747 10. Soininen, J., Kolehmainen, K., Kalaoja, J., Heikkilä, T., Kaivosoja, J., Backman, J., Pesonen, L.: Approach for Distributed Workflow Modelling for Agricultural Multi-Robot Missions. IEEE, IROS, under review (2023) 11. Kaivosoja, J., Ronkainen, A., Hautsalo, J., Backman, J., Linkolehto, R., San Emeterio, M., Soininen, J.: Automatized rapeseed pest detection and management with drones. In: Robot2022, Fifth Iberian Robotics Conference: Advances in Robotics, vol. 2, pp. 427–437 (2022) 12. Ronkainen, A., Kaivosoja, J., Laitinen, P., Laitila, E.: Overcoming legislative requirements for implementing UAV plant protection operations in Europe. In: The XX CIGR World Congress 2022, Kyoto Japan (2022) 13. Vélez, S., Ariza-Sentís, M., Valente, J.: Mapping the spatial variability of Botrytis bunch rot risk in vineyards using UAV multispectral imagery. Eur. J. Agron. 142, 126691., ISSN 11610301 (2023). https://doi.org/10.1016/j.eja.2022.126691 14. Vélez, S., Ariza-Sentís, M., Valente, J.: Dataset on unmanned aerial vehicle multispectral images acquired over a vineyard affected by Botrytis cinerea in northern Spain. Data Brief. 46, 108876. ISSN 2352-3409 (2023). https://doi.org/10.1016/j.dib.2022.108876 15. MQTT Homepage.: https://mqtt.org/. Last accessed 31 Mar 2023 16. MAVLink Developer Guide.: https://mavlink.io/en/. Last accessed 31 Mar 2023 17. Eclipse Mosquitto - An open source MQTT broke: Eclipse Foundation. https://mosquitto.org/. Last accessed 31 Mar 2023 18. Rosetta Drone: Github, https://github.com/The1only/rosettadrone. Last accessed 31 Mar 2023 19. Open Drone ID Homepage.: https://www.opendroneid.org/. Last accessed 31 Mar 2023 20. ASD-Stan Standardization.: https://asd-stan.org/downloads/din-en-4709-0022021-02/. Last accessed 31 Mar 2023 21. Suma, M.: MAVLink UDP Android Example, Github. https://github.com/mareksuma1985/ mavlink. Last accessed 31 Mar 2023 22. DJI Cloud API: Github, https://github.com/dji-sdk/Cloud-API-Doc. Last accessed 31 Mar 2023 23. Yang, F., Hosticka, G., Peng, F.: Unlocking automated drone operations with DJI dock and cloud API, Nestgen, Webinar 9th March 2023. https://www.flytnow.com/tv. Last accessed 31 Mar 2023

Is Alice Really in Wonderland? UWB-Based Proof of Location for UAVs with Hyperledger Fabric Blockchain

Lei Fu, Paola Torrico Morón, Jorge Peña Queralta, David Hästbacka, Harry Edelman, and Tomi Westerlund

L. Fu (✉) · P. Torrico Morón · J. Peña Queralta · T. Westerlund
Turku Intelligent and Embedded Robotic Systems, University of Turku, Turku, Finland
e-mail: [email protected]; [email protected]; [email protected]; [email protected]; https://tiers.utu.fi

D. Hästbacka
Computing Sciences, Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
e-mail: [email protected]

H. Edelman
Department of Engineering and Business, Turku University of Applied Sciences, Turku, Finland
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
T. Westerlund, J. Peña Queralta (eds.), New Developments and Environmental Applications of Drones, https://doi.org/10.1007/978-3-031-44607-8_3
1 Introduction

Unmanned aerial vehicles (UAVs) are becoming increasingly popular for a wide range of applications, from aerial surveying to UAV delivery services [5, 15, 17]. However, the integration of UAVs into the airspace poses significant challenges, particularly in terms of ensuring the safety and security of other airspace users and the public [10, 18, 29]. One critical challenge is the need for a reliable and trustworthy proof of location system for UAVs. Therefore, direct remote identification, the process of identifying a UAV from a remote location without access to the UAV, has been introduced [4]. It is a crucial aspect of UAV regulation and safety, as it enables authorities to identify the operator and the UAV in real time, which is important for ensuring compliance with regulations and preventing potential security threats [12, 25]. In Europe, U-space is evolving as a regulatory framework for the safe and efficient operation of UAVs, which the regulations refer to by the more standard name unmanned aircraft systems (UAS). U-space includes a range of measures, such as registration, direct remote identification, and real-time tracking of UAS. Similarly, in the USA, the Federal Aviation Administration (FAA) has issued regulations that require UAVs to have remote identification capabilities [3].

However, the risk of spoofing in global navigation satellite systems (GNSS), i.e., the deliberate manipulation of signals to deceive the remote identification system [26], is a potential problem for direct remote identification of UAVs [6, 14, 24]. As for proof of location (PoL), while it could be a useful tool to verify the location of a UAV, it is not always secure. Even with a trusted reporting device, there is a risk of errors or intentional manipulation of location data: GNSS signals can be jammed or spoofed with incorrect location data, and environmental conditions such as interference can affect the accuracy of location data. These limitations call for the development of innovative PoL solutions that can address the security concerns and enhance the reliability of UAV location data.

The primary objective of our research is to investigate the feasibility and effectiveness of combining Ultra-Wideband (UWB) technology with distributed ledger technology (DLT), such as the Hyperledger Fabric blockchain. In this chapter, we propose a novel PoL solution for creating a trustable and secure air-ground interaction ecosystem for short-distance UAV operation. The solution generates highly accurate and reliable location data for UAVs, which can be securely stored and verified on a tamper-proof and decentralized blockchain. Through our study, we aim to provide empirical evidence and insights into the potential benefits and challenges of our proposed PoL solution. Our contribution lies in evaluating the performance of UWB technology and DLT integration, demonstrating their capability to address security concerns. The concept is illustrated in Fig. 1.

The rest of this chapter is structured as follows. In Sect. 2, we introduce background concepts in the areas of proof of location, UWB technology, and the Hyperledger Fabric blockchain framework. Section 3 then describes related work in the area of proof of location and remote identification for UAVs. Section 4 presents the proposed system architecture, Sect. 5 describes the methods and experimental results, and Sect. 6 concludes the chapter and outlines future work.

Fig. 1 Illustration of the proposed remote identification process


2 Background

In this section, we briefly introduce the key technologies behind the UWB-based proof of location approach for UAVs with the Hyperledger Fabric blockchain.

Proof of Location (PoL) is a technology that enables the verification and broadcasting of a device's physical location coordinates to a blockchain network [2, 9, 27, 33]. It enables other devices to rely on the location data without having to trust the broadcasting device. The technology uses cryptographic methods to prove the authenticity and accuracy of the location data, making it a secure and trustworthy source of location information. In the field of UAVs, PoL can be a valuable tool for ensuring accountability and security in critical applications, such as UAV deliveries and autonomous flying. By integrating PoL with blockchain identities, UAVs can be better managed and monitored. This integration can provide enhanced security, reliability, and accountability by ensuring that the UAV is at a specific location when a critical action is performed, such as landing or taking off. It also has potential from the perspective of access control and secure fleet management.

Ultra-Wideband (UWB) technology is one of the most accurate wireless ranging solutions that can enable PoL implementations [23, 28, 34]. UWB uses radio frequency signals to determine the location of a device with high precision and to broadcast the encrypted data and signature to the other nodes, providing a reliable source of real-time location data and enabling UAV verification [19]. By segmenting the data into smaller parts, encrypting, digitally signing, and then transmitting each part as a separate message, the accuracy and trustworthiness of the location data can be increased: even if an attacker intercepts some of the segments, they will be unable to use the information without having access to all the parts, providing an added layer of security. UWB technology has a range of applications in addition to PoL, including indoor positioning, asset tracking, and even remote control of devices [30]. Its high accuracy and low power consumption make it an attractive choice for a wide range of Internet of Things (IoT) and wireless communication applications. Air-ground coordination and localization systems based on UWB have already been proposed in the literature [31].

Hyperledger Fabric is a blockchain platform that offers various features beneficial to distributed robotic systems, including secure identity management and certificate generation for participant identification, permissioned networks for enhanced data security, high-performance capabilities for real-time robotic data processing, low-latency transaction confirmation for real-time consensus, and partitioned data channels enabling transaction privacy and confidentiality [7, 21]. Moreover, these features can integrate with the pub/sub system of the Robot Operating System (ROS), providing a reliable and scalable system for distributed robotic systems and making it a potential candidate for use in PoL mechanisms of UAVs and similar applications where secure data exchange and privacy preservation are critical factors.

In this scenario, the integration of UWB and blockchain technology, such as Hyperledger Fabric, can be an effective solution for PoL mechanisms in the field of UAVs. As we have demonstrated in previous papers integrating aerial and ground robot systems with UWB and Hyperledger Fabric, UWB provides real-time location information and, with the latest standards, also encrypted data transmission with high accuracy and low latency, while the blockchain acts as a secure ledger to store and verify the authenticity of the location information [20]. This combination makes it a potential solution for increasing the accuracy of UAV location data, with wide-ranging applications in industries such as delivery, logistics, and aerospace. By leveraging these technologies, PoL can be extended to improve the efficiency, security, and transparency of location-based applications.
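As a concrete picture of the segment-encrypt-sign scheme described above, the following is a minimal Python sketch, assuming a pre-shared Fernet encryption key and an HMAC signing key; the chunk size and helper names are our own illustrative choices, not the chapter's implementation.

```python
import hashlib
import hmac
from cryptography.fernet import Fernet  # pip install cryptography

CHUNK = 16  # bytes per segment (illustrative choice)

def segment_encrypt_sign(payload: bytes, enc_key: bytes, sig_key: bytes) -> list:
    """Split a location payload into segments; encrypt and sign each one."""
    f = Fernet(enc_key)
    chunks = [payload[i:i + CHUNK] for i in range(0, len(payload), CHUNK)]
    messages = []
    for seq, chunk in enumerate(chunks):
        ciphertext = f.encrypt(chunk)
        tag = hmac.new(sig_key, ciphertext, hashlib.sha256).digest()
        # Each segment travels as a separate UWB message; an interceptor
        # holding only some segments cannot reconstruct the payload.
        messages.append({"seq": seq, "total": len(chunks),
                         "ciphertext": ciphertext, "hmac": tag})
    return messages

# Usage: segment a (hypothetical) location record into signed messages.
enc_key = Fernet.generate_key()
sig_key = b"shared-signing-key"
msgs = segment_encrypt_sign(b"lat=60.4518,lon=22.2666,alt=48.2", enc_key, sig_key)
print(len(msgs), "segments, first HMAC:", msgs[0]["hmac"].hex()[:16])
```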

3 Related Work

The concept of Proof of Location (PoL) has been proposed as a means of validating user locations in various studies. Initially, Brent Waters and Edward Felten introduced the concept of PoL in a centralized setting, where a trusted location manager signs the proof based on round-trip communication measurements with the prover [27]. Recently, new approaches have been proposed for generating PoL to ensure the location of UAVs as well as other mobile assets within the more general domain of the Internet of Things (IoT). For instance, Khan et al. introduced the concept of witness-oriented endorsements and a collusion-resistant protocol for generating asserted location proofs, identifying the vulnerabilities that may arise in non-federated environments [11]. In another work, Javali et al. proposed a new protocol for generating and verifying secure location proofs [9]. The protocol employs an information-theoretically secure fuzzy vault scheme and unique spatial-temporal wireless channel characteristics, and offers cryptographic security. In a more recent work, incorporating distributed ledger technologies, Amoretti et al. presented a novel method of blockchain-based PoL in 2018 [2]. The approach consists of producing digital certificates that attest to presence at a specific geographic location at a certain point in time by using a decentralized peer-to-peer scheme that ensures location trustworthiness and user privacy.

In addition to these, there has been growing interest in the development of remote identification systems for UAVs in recent years as a means of improving safety and security in the airspace. Various approaches have been proposed, including radio frequency (RF) identification and other sensor-based methods. In terms of remote detection and identification of UAVs, Al-Sa'd et al. [1] developed a comprehensive database containing RF signals emitted by UAVs in different flight modes. Based on this database, novel algorithms have been proposed to detect and identify UAVs, as well as determine their type and flight modes. In terms of blockchain integration for remote identification of drones, the authors in [8] introduced a new Drone Remote Identification Protocol (DRIP) using the Hyperledger Iroha framework and its applications to identify UAVs securely. The main idea of this work is to register new UAVs, store the identification in the blockchain network, and validate the received data with the public key and certificates. However, the system deals with identification alone and does not consider the proof of location. Therefore, there is a gap in the literature in the area of direct remote identification with proof of location for UAVs.

4 Proposed System Architecture

We propose the integration of UWB ranging with remote UAV IDs to enable proof of location when UAVs interact with infrastructure, such as landing pads in known locations. Both the UAV and the infrastructure must be part of a common Fabric blockchain. This process has the potential to significantly improve the security and efficiency of UAV landing systems, as it allows for real-time verification of the UAV's identity and status. The decentralized and secure nature of the blockchain can also prevent unauthorized access to, or tampering with, the identity certificates, ensuring that only authorized UAVs are able to land.

In terms of the specific software implementation, the envisioned system architecture is illustrated in Fig. 2. Only part of such a system is actually implemented within the context of this chapter. However, we envision a system architecture where the key components are the cloud backend systems, the UAS, and the ground infrastructure (e.g., the landing pad), which acts as an edge gateway to the cloud. Within this architecture, we show how UWB-based situated communication can be used for both remote identification and proof of location, as well as for aiding the landing process with more accurate localization. Additionally, this interaction can be extended to, for example, obtaining permission to dock or take off. At the cloud end, we see Zenoh as one of the most promising technologies for such distributed systems, allowing for performant and scalable communication and computing, as recent studies demonstrate [13, 32]. On the fleet management and interfacing side, in addition to Hyperledger Fabric, a version of Open-RMF adapted to UAVs has significant potential as an open-source solution to the problem [22, 35].

Fig. 2 Envisioned system software architecture

The process of proof of location we propose, which is illustrated in Fig. 3, would operate as follows (a minimal sketch of this logic follows the list):

1. The UAV issues a timestamped notification as a Fabric blockchain transaction and generates a uniquely identifiable code for both the UAV and the ground platform, to be used for ranging and validation. The UAV then initiates the proof of location procedure.
2. The ground platform receives the request through the blockchain and issues a polling message through UWB, containing the received UAV ID and platform ID, to potential receivers nearby.
3. The UAV receives the message and compares the unique IDs with those in the blockchain. If the identities are paired, the UAV replies to the polling message from the ground platform.
4. The platform receives the reply to the UWB polling message.
5. The platform's UWB node initiates two-way ranging with the UAV. Additionally, it sends its encrypted identity.
6. The UAV receives part of the encrypted information for platform identification. It replies to the polling using UWB if the identity matches.
7. The platform is able to compute the distance to the UAV for localization. The smart contract compares the GPS location data with the UWB location data. If they match, it sends a message to prove the UAV location and verify the authorization for the interaction, for example, landing.
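The sketch below models steps 1–4 and the final check in step 7 as plain Python functions over an in-memory stand-in for the Fabric channel state; the function and field names are our own illustrative choices, not the chapter's NodeJS/Golang implementation.

```python
import math
import secrets
import time

ledger = {}  # illustrative in-memory stand-in for the Fabric channel state

def request_proof(gnss_position):
    """Step 1: the UAV submits a timestamped transaction with unique codes."""
    tx = {"timestamp": time.time(), "gnss": gnss_position,
          "id_uav": secrets.token_hex(8), "id_gcs": secrets.token_hex(8)}
    ledger["pol_request"] = tx
    return tx

def platform_poll():
    """Step 2: the ground platform reads the request and polls over UWB."""
    tx = ledger["pol_request"]
    return {"id_uav": tx["id_uav"], "id_gcs": tx["id_gcs"]}

def uav_accepts(poll):
    """Steps 3-4: the UAV replies only if both polled IDs match the ledger."""
    tx = ledger["pol_request"]
    return (poll["id_uav"], poll["id_gcs"]) == (tx["id_uav"], tx["id_gcs"])

def location_proof(platform_position, uwb_range, max_error=1.0):
    """Step 7: compare the claimed GNSS position against the UWB range
    (the 1 m error buffer matches the experiments in Sect. 5.5)."""
    claimed_range = math.dist(ledger["pol_request"]["gnss"], platform_position)
    return abs(claimed_range - uwb_range) <= max_error

# Usage: a UAV hovering ~32 m from the platform passes the check.
request_proof((10.0, 5.0, 30.0))
assert uav_accepts(platform_poll())
print("location proof:", location_proof((0.0, 0.0, 0.0), 32.0))
```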

5 Methods and Experiments

To demonstrate the viability of the proposed approach, we target a proof-of-concept demonstration where we focus on implementing the UWB-based PoL approach with Hyperledger Fabric, which provides enhanced security, reliability, and accountability to ensure the UAVs are at specific locations during critical actions.

[Fig. 3 sequence diagram: swimlanes for the UAV, the Fabric network, the ground station, and the UWB radios. After network initialization, the CA issues certificates to both parties. Smart Contract #1: the UAV requests a proof and submits its GNSS position; {ID_UAV} and {ID_GCS} are generated, exchanged through event listeners, and validated on both sides. UWB ranging measurements follow (N times). Smart Contract #2: the UWB ranges are submitted and the position error is calculated; proof events are delivered to both parties and the location proof is completed.]
Fig. 3 Illustration of the proof of location process. The vertical axis indicates time (flowing down). The UAV is represented on both sides of the figure to differentiate communication between the UAV and the ground platform (GCS) through the Fabric Network (Fabric column) or through the UWB radios (UWB column)

5.1 UWB Ranging

UWB for localization can be implemented with different ranging modalities; the one we are using is Time of Flight (ToF). It consists of measuring the time it takes to exchange messages between a pair of UWB devices to obtain their distance, and then multilaterating the distances obtained from different pairs to get the position [16]. With a decentralized system, a minimum of two messages is necessary for the distance calculation and a minimum of four nodes for the position. Leveraging the need to exchange messages for the UWB localization, we utilize these messages to also exchange the uniquely identifiable codes of the UAV and the ground platform for authentication. Once an event is triggered in the Hyperledger Fabric network and the codes are generated, the platform UWB nodes share their codes with the UAV UWB node by embedding them in one of the messages used for ToF ranging. If the code received by the UAV matches the one obtained from the Fabric network, the UAV UWB node responds with its code for the platform to verify. This provides verification on the side of both the UAV and the platform, while also providing the necessary exchange of messages for the relative localization to occur. The system can then compare the GPS location and verify whether the computed UAV location matches, in order to authorize the interaction.
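To make the ToF modality concrete, the sketch below shows the two-way-ranging arithmetic and a linearized least-squares multilateration step; the anchor layout and variable names are our own illustrative assumptions, not the DWM1001 firmware implementation.

```python
import numpy as np

C_AIR = 299_702_547.0  # approximate speed of light in air (m/s)

def tof_distance(t_round: float, t_reply: float) -> float:
    """Single-sided two-way ranging: subtracting the responder's known
    reply delay from the round-trip time leaves twice the flight time."""
    return C_AIR * (t_round - t_reply) / 2.0

def multilaterate(anchors: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """Least-squares position from four (or more) anchor-range pairs,
    linearized by subtracting the first anchor's sphere equation."""
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Usage: four platform anchors ranging to a UAV at (1, 2, 3).
anchors = np.array([[0., 0., 0.], [5., 0., 0.], [0., 5., 0.], [0., 0., 5.]])
true_pos = np.array([1.0, 2.0, 3.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)
print(multilaterate(anchors, ranges))  # -> approx. [1. 2. 3.]
```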

5.2 Hyperledger Fabric Network

The Hyperledger Fabric network functions as a secure data transport layer that connects different ROS subsystems. We utilize our recent event-driven architecture, in which a ROS–Fabric bridge application handles callbacks originating from both the ROS and Fabric networks [7]. Whenever new messages are published to specific topics, a ROS event is generated. In accordance with the specified configuration of the bridge application, such messages can be subsequently relayed to the Fabric network in the form of transactions, which may involve the creation or modification of assets located in distinct channel chaincodes. Fabric channel events are triggered by the confirmation of new transactions on the blockchain, representing an integral component of the system's event-driven architecture. In our case, both the UAV and the landing pad each act as a host in the Fabric network, and requests are transmitted on the blockchain as events.
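The bridge behavior can be sketched as follows, assuming ROS 1 (rospy) and a hypothetical submit_transaction() helper standing in for a Fabric gateway client; the topic name, channel name, and transaction type are illustrative.

```python
import json
import rospy
from std_msgs.msg import String

def submit_transaction(channel: str, tx_type: str, payload: dict) -> None:
    """Hypothetical stand-in for a Fabric gateway call that would invoke
    the channel chaincode to create or modify an asset."""
    rospy.loginfo("Fabric tx on %s: %s %s", channel, tx_type, json.dumps(payload))

def on_ros_event(msg: String) -> None:
    # A new message on a configured topic generates a ROS event, which the
    # bridge relays to the Fabric network as a transaction.
    submit_transaction("pol-channel", "CreateAsset", {"data": msg.data})

if __name__ == "__main__":
    rospy.init_node("ros_fabric_bridge")
    rospy.Subscriber("/uav/pol_request", String, on_ros_event)
    rospy.spin()
```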

5.3 Proof of Location

We implement one chaincode (smart contract) and two Fabric applications for the experimental setup. The smart contract has a simple data structure that can support any ROS message type and minimal functions, including the creation, editing, querying, and deletion of assets. The main idea is to register channel events with transaction types and payloads, which enables the Fabric applications to listen to channel events without prior knowledge of their structure or methods. The two applications are specifically designed for interactions between the Hyperledger Fabric network and the ROS computational graph of a given system. To manage the assets, each host is equipped with two applications, called the Fabric publisher and the Fabric subscriber, with the former responsible for transmitting data from ROS to Fabric and the latter performing the reverse operation.

In our case, the UAV initiates the procedure by issuing a timestamped notification as a Fabric blockchain transaction and generating a unique code for both the UAV and the landing platform. The platform then receives the request and issues a polling message with the IDs to potential receivers nearby. If the identities are paired, the UAV responds to the polling message using UWB, allowing the platform to compute the distance to the UAV for localization in the smart contract. If the location data matches, the platform sends a message through the blockchain to verify the authorization for landing and complete the proof of location.
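The generic asset layout can be pictured with the following Python model; it only mirrors the data structure and the create/edit/query/delete surface described above (field names are our guesses, and the actual chaincode is written in Golang and runs inside Fabric).

```python
import json
import time
from dataclasses import dataclass, field

@dataclass
class Asset:
    """Generic asset: a transaction type plus an opaque payload, so any
    serialized ROS message can be stored without a fixed schema."""
    asset_id: str
    tx_type: str   # e.g., "pol_request" or "uwb_ranges"
    payload: str   # JSON-serialized ROS message
    updated: float = field(default_factory=time.time)

class WorldState:
    """In-memory stand-in for the channel state kept by the chaincode."""
    def __init__(self):
        self.assets, self.events = {}, []

    def create(self, asset: Asset):
        self.assets[asset.asset_id] = asset
        self.events.append((asset.tx_type, asset.asset_id))  # channel event

    def edit(self, asset_id: str, payload: str):
        asset = self.assets[asset_id]
        asset.payload, asset.updated = payload, time.time()
        self.events.append((asset.tx_type, asset_id))

    def query(self, asset_id: str) -> Asset:
        return self.assets[asset_id]

    def delete(self, asset_id: str):
        self.assets.pop(asset_id)

# A listener needs only the (tx_type, asset_id) events, not the payload schema.
state = WorldState()
state.create(Asset("a1", "pol_request", json.dumps({"gnss": [10.0, 5.0, 30.0]})))
print(state.events)
```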

5.4 Experimental Setup

The current implementation relies on the PX4 flight control stack and a companion computer running our autonomy stack through the Robot Operating System (ROS). The UWB ranging is implemented with Decawave DWM1001 transceivers and a custom firmware. The UAV is assembled from a TDrone M690B frame, with a flight time of over 50 min and a 2 kg payload capacity (Fig. 4).

Hardware The Fabric network is set up with two different computers for the UAV and the landing platform, with differing hardware capabilities. The computational load of running the Fabric network remains vastly negligible, as demonstrated in our previous research [7, 21]. The two hosts have Intel i7-9750H and Intel i3-1215U processors, and 64 GB and 16 GB of memory, respectively. We use DWM1001 UWB nodes with a custom firmware. Only one UWB node is mounted on the UAV in the current experiments, whereas the platform is equipped with four UWB nodes to obtain more accurate UAV location data.

Fig. 4 Illustration of the UAV and UWB sensors used in our experiments


Software The Fabric applications are implemented in NodeJS and the chaincode in Golang. Both the UAV and the landing platform run ROS Noetic under Ubuntu 20.04. Two Fabric applications are then deployed on each host to interface with the ROS systems. The UWB nodes are programmed to authenticate the identities as well as to locate the position of the UAV.

Location Data We use an external motion capture (MOCAP) system with 16 OptiTrack cameras to determine the precise location data of the UAV and landing platform, which is employed as the ground truth reference system.

5.5 Experimental Results

This subsection reports experimental results for the proof of concept based on the implementation described above. To validate the proof of location system, we perform experiments at both short and long distances. In Fig. 5, we show the positions of the UWB transceivers in the ground station (fixed, in known locations) and the UAV (location to be validated). These positions are obtained from the MOCAP system and emulate GNSS positions in actual outdoor flights. In order to validate the location given by the UAV through the smart contract in the Fabric blockchain, we predefine a maximum error buffer with a 1 m radius, shown in the figure in yellow.

The results in both Figs. 5 and 6 show that the location of the UAV is effectively validated on four different occasions. While the estimated location error obtained from the UWB measurements becomes more comparable to the predefined maximum error buffer in the larger-distance experiment, we can argue that a buffer of 1 m is rather conservative, as GNSS errors in single-constellation receivers are easily in the order of meters, especially in urban areas. In any case, the output of the proof of location smart contract can be altered to instead express the error itself, or the likelihood that the UAV is in fact at the position it claims.

Fig. 5 First experiment for remote identification at short distances with two different measurements while the UAV is moving. Secure short-distance location can aid in precision landing systems

Fig. 6 Second experiment for remote identification at a larger distance with two different measurements while the UAV is moving

6 Conclusion and Future Work

We have proposed a novel approach to proving the location of aerial robots or autonomous aerial systems through situated communication with ground stations. The method proposed in this chapter relies on UWB ranging measurements and a Hyperledger Fabric blockchain network to validate both the identity and the position of a UAV that communicates with fixed ground infrastructure, given that the position of such infrastructure is known a priori. Through a proof-of-concept implementation, we show that the proposed solution is viable, and we provide sample Fabric smart contracts and applications required for such a system. Our experiments also demonstrate that the UWB-Fabric-based proof of location can be executed in real time and has potential for scalability. In future work, we will integrate the proposed solutions with ROS 2 and the PX4 autopilot project to deliver a more generic and power-on-and-ready solution.

Acknowledgments This research work is supported by the Academy of Finland's AeroPolis project (Grant No. 348480, 348481) and by the R3Swarms project funded by the Secure Systems Research Center (SSRC), Technology Innovation Institute (TII).

References 1. Al-Sa’d, M.F., Al-Ali, A., Mohamed, A., Khattab, T., Erbad, A.: RF-based drone detection and identification using deep learning approaches: an initiative towards a large open source drone database. Future Gener. Comput. Syst. 100, 86–97 (2019) 2. Amoretti, M., Brambilla, G., Medioli, F., Zanichelli, F.: Blockchain-based proof of location. In: 2018 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). pp. 146–153. IEEE, Piscataway (2018) 3. Batuwangala, E., Silva, J., Wild, G.: The regulatory framework for safety management systems in airworthiness organisations. Aerospace 5(4), 117 (2018) 4. Belwafi, K., Alkadi, R., Alameri, S.A., Al Hamadi, H., Shoufan, A.: Unmanned aerial vehicles’ remote identification: a tutorial and survey. IEEE Access 10, 87577–87601 (2022) 5. Bithas, P.S., Michailidis, E.T., Nomikos, N., Vouyioukas, D., Kanatas, A.G.: A survey on machine-learning techniques for UAV-based communications. Sensors 19(23), 5170 (2019) 6. Chamola, V., Kotesh, P., Agarwal, A., Gupta, N., Guizani, M., et al.: A comprehensive review of unmanned aerial vehicle attacks and neutralization techniques. Ad hoc Netw. 111, 102324 (2021) 7. Fu, L., Salimi, S., Peña Queralta, J., Westerlund, T.: Event-driven Fabric blockchain - ROS 2 interface: Towards secure and auditable teleoperation of mobile robots. arXiv (2023) 8. Hashem, Y., Zildzic, E., Gurtov, A.: Secure drone identification with hyperledger Iroha. In: Proceedings of the 11th ACM Symposium on Design and Analysis of Intelligent Vehicular Networks and Applications. pp. 11–18 (2021) 9. Javali, C., Revadigar, G., Rasmussen, K.B., Hu, W., Jha, S.: I am alice, i was in wonderland: secure location proof generation and verification protocol. In: 2016 IEEE 41st conference on local computer networks (LCN). pp. 477–485. IEEE, Piscataway (2016) 10. Kakaletsis, E., Symeonidis, C., Tzelepi, M., Mademlis, I., Tefas, A., Nikolaidis, N., Pitas, I.: Computer vision for autonomous UAV flight safety: an overview and a vision-based safe landing pipeline example. ACM Comput. Surveys 54(9), 1–37 (2021) 11. Khan, R., Zawoad, S., Haque, M.M., Hasan, R.: ‘Who, when, and where?’ Location proof assertion for mobile devices. In: Data and Applications Security and Privacy XXVIII: 28th Annual IFIP WG 11.3 Working Conference, DBSec 2014, Vienna, Austria, July 14–16, 2014. Proceedings, vol. 28, pp. 146–162. Springer, Berlin (2014) 12. Khan, M.A., Menouar, H., Eldeeb, A., Abu-Dayya, A., Salim, F.D.: On the detection of unauthorized drones-techniques and future perspectives: a review. IEEE Sens. J. 22, 11439– 11455 (2022) 13. Liang, W.Y., Yuan, Y., Lin, H.J.: A performance study on the throughput and latency of Zenoh, MQTT, Kafka, and DDS (2023). Preprint arXiv:2303.09419 14. Meng, L., Yang, L., Yang, W., Zhang, L.: A survey of GNSS spoofing and anti-spoofing technology. Remote Sens. 14(19), 4826 (2022)


15. Mohsan, S.A.H., Khan, M.A., Noor, F., Ullah, I., Alsharif, M.H.: Towards the unmanned aerial vehicles (UAVs): a comprehensive review. Drones 6(6), 147 (2022) 16. Morón, P.T., Queralta, J.P., Westerlund, T.: Towards large-scale relative localization in multi-robot systems with dynamic UWB role allocation. In: 2022 7th International Conference on Robotics and Automation Engineering (ICRAE), pp. 239–246 (2022). https://doi.org/10.1109/ICRAE56463.2022.10054614 17. Osco, L.P., Junior, J.M., Ramos, A.P.M., de Castro Jorge, L.A., Fatholahi, S.N., de Andrade Silva, J., Matsubara, E.T., Pistori, H., Gonçalves, W.N., Li, J.: A review on deep learning in UAV remote sensing. Int. J. Appl. Earth Obser. Geoinformat. 102, 102456 (2021) 18. Outay, F., Mengash, H.A., Adnan, M.: Applications of unmanned aerial vehicle (UAV) in road safety, traffic and highway infrastructure management: recent advances and challenges. Transpor. Res. Part A Policy Practice 141, 116–129 (2020) 19. Peña Queralta, J., Qingqing, L., Schiano, F., Westerlund, T.: VIO-UWB-based collaborative localization and dense scene reconstruction within heterogeneous multi-robot systems. In: IEEE International Conference on Advanced Robotics and Mechatronics. IEEE, Piscataway (2022) 20. Salimi, S., Morón, P.T., Peña Queralta, J., Westerlund, T.: Secure heterogeneous multi-robot collaboration and docking with Hyperledger Fabric blockchain. In: IEEE 8th World Forum on Internet of Things. IEEE, Piscataway (2022) 21. Salimi, S., Peña Queralta, J., Westerlund, T.: Hyperledger Fabric blockchain and ROS 2 integration for autonomous mobile robots. In: IEEE/SICE SII. IEEE, Piscataway (2023) 22. Sim, S.W., Kwan, B.H., Yap, W.S., Ng, D.W.K.: Development of ROS2-based multi-robot simulation for AGVs in factory-like environment. IEICE Proc. Ser. 69(SS2–5), 85–90 (2022) 23. Singh, M., Leu, P., Abdou, A., Capkun, S.: UWB-ED: distance enlargement attack detection in ultra-wideband. IACR Cryptol. ePrint Arch. 2019, 1349 (2019) 24. Sorbelli, F.B., Conti, M., Pinotti, C.M., Rigoni, G.: UAVs path deviation attacks: survey and research challenges. In: 2020 IEEE International Conference on Sensing, Communication and Networking (SECON Workshops), pp. 1–6. IEEE, Piscataway (2020) 25. Stöcker, C., Bennett, R., Nex, F., Gerke, M., Zevenbergen, J.: Review of the current state of UAV regulations. Remote Sens. 9(5), 459 (2017) 26. Tang, A.C.: A review on cybersecurity vulnerabilities for urban air mobility. In: AIAA Scitech 2021 Forum, p. 0773 (2021) 27. Waters, B., Felten, E.: Secure, private proofs of location. Department of Computer Science, Princeton University, Technical Report TR-667-03 (2003) 28. Wilson, R., Tse, D., Scholtz, R.A.: Channel identification: secret sharing using reciprocity in ultrawideband channels. IEEE Trans. Informat. Forens. Security 2(3), 364–375 (2007) 29. Xiangmin, G., Renli, L., Hongxia, S., Jun, C.: A survey of safety separation management and collision avoidance approaches of civil UAS operating in integration national airspace system. Chinese J. Aeronaut. 33(11), 2851–2863 (2020) 30. Xianjia, Y., Qingqing, L., Peña Queralta, J., Heikkonen, J., Westerlund, T.: Applications of UWB networks and positioning to autonomous robots and industrial systems. In: Cyber-Physical Systems of Systems (CPSoS) and Internet of Things (IoT) Conference. IEEE, Piscataway (2021) 31. Xianjia, Y., Qingqing, L., Peña Queralta, J., Heikkonen, J., Westerlund, T.: Cooperative UWB-based localization for outdoors positioning and navigation of UAVs aided by ground robots. In: IEEE International Conference on Autonomous Systems (IEEE ICAS 2021). IEEE, Piscataway (2021) 32. Yu, X., Morón, P.T., Salimpour, S., Peña Queralta, J., Westerlund, T.: Loosely coupled odometry, UWB ranging, and cooperative spatial detections for relative Monte-Carlo multi-robot localization. arXiv (2023) 33. Zafar, F., Khan, A., Anjum, A., Maple, C., Shah, M.A.: Location proof systems for smart internet of things: requirements, taxonomy, and comparative analysis. Electronics 9(11), 1776 (2020)


34. Zhang, Y., Liu, W., Fang, Y., Wu, D.: Secure localization and authentication in ultra-wideband sensor networks. IEEE J. Sel. Areas Commun. 24(4), 829–835 (2006) 35. Zhang, J., Keramat, F., Yu, X., Hernández, D.M., Peña Queralta, J., Westerlund, T.: Distributed robotic systems in the edge-cloud continuum with ROS 2: A review on novel architectures and technology readiness. In: 2022 Seventh International Conference on Fog and Mobile Edge Computing (FMEC). pp. 1–8. IEEE, Piscataway (2022)

Accuracy Assessment of UAS Photogrammetry with GCP and PPK-Assisted Georeferencing

Anssi Rauhala

A. Rauhala (✉)
University of Oulu, Civil Engineering, Oulu, Finland
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
T. Westerlund, J. Peña Queralta (eds.), New Developments and Environmental Applications of Drones, https://doi.org/10.1007/978-3-031-44607-8_4

1 Introduction

The use of unmanned aircraft systems (UASs), together with Structure from Motion (SfM) photogrammetry, has allowed aerial surveying and mapping in the m² to km² range at a centimeter-level resolution [1, 2]. These technologies have been embraced by various fields, including geomorphometry [3], agriculture [4, 5], forestry [6], agroforestry [7], mining [8, 9], inspections [10], environmental monitoring [11], archaeology [12], etc. SfM processing has been described in detail by several authors [13–15]. Generally, SfM processing can automatically identify and match corresponding features in images and simultaneously determine 3D point coordinates and external camera parameters (i.e., camera station positions and orientations) through least squares optimization, known as bundle block adjustment (BBA). With the addition of external control measurements, the dataset can be georeferenced while simultaneously adding constraints to mitigate systematic errors. If self-calibration is included in the BBA, external controls can also improve the calibration of intrinsic camera parameters.

The resolution and accuracy of UAS-SfM products depend on multiple factors, some relating to the utilized equipment, such as camera sensors and settings [16, 17], and some to target properties and environmental conditions, such as weather and lighting [18]; flight planning and georeferencing also play an important role [15]. As highlighted by Benassi et al. [19], aerial photogrammetry has utilized four primary methods for accurate georeferencing of generated products. The georeferencing of images acquired with low-cost UAS has largely relied on the utilization of ground control points (GCPs), i.e., on so-called indirect georeferencing (IG). GCPs are known to be a reliable and accurate method for georeferencing when done properly
[20]. However, as GCPs need to be placed in a distributed and dense enough network, the placement of GCPs is at best time-consuming and at worst dangerous, or may not even be possible in hard-to-access terrain. Recently, multiple UAS platforms capable of GNSS-assisted aerial triangulation (AAT) have become available. This technique relies on refining the positional accuracy of camera stations to a centimeter-to-decimeter range via either real-time kinematic (RTK) or post-processing kinematic (PPK) correction. As suggested by the name, in RTK, the differential corrections are sent in real time either from a local base station or from a continuously operating reference station (CORS) network, often referred to as network RTK (NRTK) mode. In PPK, GNSS raw data, recorded either by a local base or by a CORS, is utilized in the post-flight refinement of camera station positions.

The accuracy of IG and the suitable number of GCPs have been discussed by several authors [21–25]. Common wisdom states that GCPs should be placed in a uniform distribution, with a higher number usually being beneficial, although increasing the number of GCPs beyond a certain point provides a negligible improvement [21–23]. Increasing the number of GCPs beyond a certain number may even decrease the accuracy if the positional accuracy of the survey equipment, and consequently of the GCPs, is variable [22]. A comprehensive study by Sanz-Ablanedo et al. [25] demonstrated how a horizontal RMSE close to 1 ground sample distance (GSD) was achieved with approx. 2.5–3 GCPs per 100 images, with no major improvement observed when using more than 3 GCPs per 100 images. A vertical accuracy of ~2 GSD was achieved with around 2 GCPs per 100 images, and it improved toward ~1.5 GSD when the number of GCPs was increased to 4 GCPs per 100 images. Martínez-Carricondo et al. [24] have shown that in GCP-based georeferencing, increasing the GCP density from 0.25 GCP per ha to the observed optimal value of ~1.7 GCP per ha increased the vertical accuracy substantially (RMSEZ from 30.8 to 4.3 cm).

In recent years, multiple studies have also compared GCP-based IG to AAT through RTK or PPK solutions [19, 26–31]. Many of the studies report that surveys relying on AAT typically have similar or slightly lower horizontal accuracy compared to the GCP-based approach, whereas vertical accuracy is more likely to exhibit a stronger reduction if no GCPs are utilized alongside AAT [19, 26–29]. Benassi et al. [19] compared 12 GCP, RTK, and RTK+1GCP approaches with three different software packages. They conducted four repeated flights to collect ~150 images per flight with a GSD of ~2.3 cm while mapping a 20 ha area containing parking lots, green areas, buildings, etc. When relying solely on RTK (RMSEZ 1.9–9.5 cm), some of the survey missions had vertical accuracies very close to the 12 GCP approach (RMSEZ 1.6–3.0 cm), whereas some had an apparent vertical bias. With the inclusion of a GCP in the RTK+1GCP approach, the bias was removed or greatly reduced in all cases (RMSEZ 1.4–3.9 cm). Forlani et al. [26] utilized the same UAS platform and the same 12 GCP, RTK, and RTK+1GCP experiments to map the same location as Benassi et al. [19]. However, they included four additional flights, in which the corrections were sent from a CORS network (NRTK) instead of a local base. Similar results to [19] were reported, i.e., RTK had a clear vertical bias on some occasions (RMSEZ 1.9–9.5 cm), which could be largely remedied with the RTK+1GCP approach (RMSEZ 1.8–3.9 cm). Utilizing a CORS instead of a local base resulted in a lower accuracy for both the RTK (RMSEZ 6.7–12.6 cm) and the RTK+1GCP (RMSEZ 2.9–4.7 cm) experiments.

Tomaštík et al. [31] compared the PPK approach to 4 GCP and 9 GCP georeferencing in a mainly forest-covered ~270 ha study area. However, the utilized GCP density was very low, ~0.015–0.033 GCP per ha, compared to the optimal amount described by Martínez-Carricondo et al. [23]. Consequently, due to the suboptimal GCP placement, it is not surprising that on each of the four flights (two during the leaf-off season and two during the leaf-on season), PPK repeatedly provided better vertical accuracy (RMSEZ 10.0–18.0 cm) compared to the 4 GCP experiment (RMSEZ 23.9–62.9 cm) or the 9 GCP experiment (RMSEZ 25.2–58.1 cm). McMahon et al. [29] compared 3–9 GCP indirect georeferencing to PPK on a 1 ha area (GSD ~0.85 cm). Three different CORSs (PPK) and a local base (PPK, PPK+1GCP, and PPK+2GCP) were tested. Five or more GCPs (RMSEZ 2.0–2.4 cm) and the CORS with the fastest 1 Hz sampling frequency (RMSEZ 2.7 cm) provided the best vertical accuracies. The local base had an RMSEZ of 4.7 cm, mostly due to bias, but the addition of 1–2 GCPs reduced the RMSEZ to 3.1 cm [30].

60

A. Rauhala

As the previous literature highlights, GNSS-assisted aerial triangulation with either RTK or PPK workflow can produce nearly equal or sometimes even better accuracy, when compared to indirect georeferencing with GCPs. This is especially true if at least one GCP is utilized alongside AAT to remedy any possible vertical bias. However, much of the research focuses on comparing indirect georeferencing with GCPs to AAT with only 1–2 GCPs. Furthermore, these studies are often done over small areas, ≤20 ha or even ~1 ha. When more GCPs are utilized alongside AAT in larger areas, the results are mixed. This study aims to further assess how the number of GCPs utilized in AAT with PPK affects the accuracy in a large ~100 ha area, where the hypothetical optimal number of GCPs as described in literature would be ≥75 if relying on indirect georeferencing. Also, different types of base stations (low cost vs. prograde vs. virtual) and their accuracy are assessed. The measurements were done as a part of a continued monitoring campaign, and the original rationale for the tests was to find a time-efficient setup for upcoming surveys on the site. However, the results seemed interesting enough to share with the community, and there is still something of a gap in literature exploring the relationship between model accuracy and the number of GCPs, especially in the context of AAT with UAS.

2 Data and Methods The field measurements were performed on June 3, 2021, at the reclaimed Hitura mine tailings storage facility (63◦ 50' 41''N, 25◦ 01' 26''E) in Nivala, Finland. The tailings storage facility has a surface area of ~1 km2 (Fig. 1). The elevation in the area of interest varies from around 80 to 110 m above sea level. A Quantum Systems (QS) Trinity F90+ VTOL fixed-wing UAS with PPK capability and a 20-megapixel Sony UMC-R10C camera with 16 mm Sony SEL16F28 lens (corresponding to a 35 mm equivalent focal length of 24 mm) was used for mapping. A total of 2488 images were collected, with a target flight height of 100 m and forward/side overlaps of 80/80%, resulting in an average GSD of ~3 cm and a total coverage area of 1.75 km2 . Before the UAS flight, 24 GCPs were placed on the survey area. In addition, 21 vertical checkpoints (CPs) were collected. The coordinates of the points were measured using a Topcon HiPer V and a Leica GS08+ GNSS receiver in NRTK mode utilizing HxGN SmartNet NRTK correction service. At least three measurements were taken for each GCP, and the average values were used in georeferencing. The average standard deviations between the measurements were 2.3 mm, 2.5 mm, and 4.4 mm for the x, y, and z coordinates, respectively. Local base station data for PPK georeferencing was collected using the Topcon HiPer V GNSS receiver, and a QS iBase, an entry-level reference station provided with the QS Trinity F90+ UAS (Fig. 2). Topcon HiPer V is a survey-grade solution with a variety of controls and features. It has a multiband receiver with 226 channels and a RINEX logging rate of up to 1 Hz. The main components of the Topcon base

Accuracy Assessment of UAS Photogrammetry with GCP and PPK-Assisted. . .

61

Fig. 1 (a) Aerial image mosaic and (b) digital surface model of the Hitura mine tailings storage facility, June 3, 2021

Fig. 2 Topcon HiPer V base station with tripod and field controller (left), Quantum Systems Trinity F90+ fixed wing with a controller (center), and Quantum Systems iBase station with antenna and power bank (upper right, placed on a white A4-sized paper)

station setup are its sturdy tripod, tribrach, receiver, and field controller. The total weight of the setup is roughly 8.7 kg, and its minimum dimensions are roughly 143 × 18.4 × 18.4 cm while the tripod legs are closed. The iBase is a plug-and-play type solution that starts recording RINEX data when powered on with no additional controls. No technical specifications on the capabilities of the iBase are given by the

62

A. Rauhala

Fig. 3 Locations of the GCPs (1, 4, 6, 8, 10, 12, and 24) and 3D checkpoints utilized in the processing and the locations of the vertical checkpoints and local base stations

manufacturer. The main components of the iBase are the receiver, an antenna with a connection cable, and a power bank. As such, it is very portable, with dimensions of approximately 7 × 7 × 4.5 cm and a total weight of 356 g. During the measurements, the Topcon receiver was set up on the tripod, whereas the iBase was placed on the ground as no tripod is provided by the manufacturer and the idea was to test out-of-the-box accuracy. In addition, virtual base station or virtual reference station (VRS) data was obtained from the FinnRef CORS network, the Finnish first-order CORS network defining the national ETRS89 realization. The network contains a total of 47 stations around the country. The VRS data was obtained for the same coordinates that were measured for the iBase location. The GNSS reference data from each base station was post-processed separately using QS Qbase software, also provided with the QS Trinity F90+. The SfM processing of UAS data was performed using Agisoft Metashape separately for each PPK georeferenced set of images utilizing combinations of 0, 1, 4, 6, 8, 10, 12, and 24 GCPs (Fig. 3). With regard to processing settings in Agisoft Metashape, camera station positional accuracy of 2 cm was chosen per manufacturer recommendation. Furthermore, GCP marker accuracy of 1 mm was chosen instead of the default value of 5 mm, as experience has shown that larger than expected residuals on GCP coordinates are common if the value is kept closer to the true accuracy of ~1 cm [19, 26]. The image alignment was performed using the high accuracy setting. Camera self-calibration with all parameters except rolling shutter compensation was performed in each case, since pre-calibration is known to provide

Accuracy Assessment of UAS Photogrammetry with GCP and PPK-Assisted. . .

63

less reliable results with the nonmetric cameras typically utilized with UAS [19, 26]. The depth map and dense point cloud generation were performed with moderate depth filtering on a medium-quality setting to keep the processing times reasonable. For example, the ultra-high-quality setting resulted in a 21 h 51 min processing time for depth map and dense cloud generation for each model, whereas the medium-quality setting required only 1 h 19 min per model with the utilized workstation. Otherwise, the settings were kept at default values. From each point cloud, a raster digital surface model (DSM) with a cell size of ~12 cm was generated in Metashape.

The horizontal and vertical accuracies of different combinations of up to 12 GCPs were compared, keeping 12 of the remaining GCPs as 3D checkpoints (CPs) (Fig. 3). The residuals of the 3D CPs were exported from Metashape for further statistical analysis in OriginLab Origin 2021b. The vertical accuracies of different combinations of up to 24 GCPs were compared by calculating the pointwise elevation differences, Δz, between the collected 21 vertical checkpoints and the produced DSMs in ArcGIS 10.7, following Eq. (1):

Δz = DSM − z,    (1)
where DSM is the surface elevation from the UAS survey and z is the checkpoint elevation measured with RTK GNSS. The results were also exported into OriginLab Origin 2021b for further statistical analysis.

The accuracy of the generated products was mainly assessed by calculating the mean errors (ME), standard deviations (STD), and root mean square errors (RMSE). RMSE is arguably the most used error metric for UAS surveys and is also used as the main error metric in, for example, the 2015 ASPRS Positional Accuracy Standards for Digital Geospatial Data [34]. However, as stated by ISO 5725, measurements involve a component of systematic error and a component of random error. As such, accuracy comprises trueness and precision. Trueness refers to the absence of bias, which can be described by the ME, whereas precision is a measure of dispersion, which can be described by the STD. When the bias is null, the RMSE coincides with the population STD; otherwise, it can be considered a compound of both ME and STD [35]. The use of these error metrics assumes that the errors approximate a normal distribution, which is generally the case for photogrammetry or non-vegetated areas (as opposed to LiDAR or heavily vegetated areas) [34]. However, boxplots with error means, medians, ranges, and percentiles were also produced to provide more robust statistics.
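As an illustration of Eq. (1) and these metrics, the short sketch below computes Δz and the derived ME, STD, and RMSE for a set of checkpoints; it is a minimal Python example under our own assumptions (synthetic arrays standing in for the study's ArcGIS/Origin workflow).

```python
import numpy as np

def error_metrics(dsm_z: np.ndarray, checkpoint_z: np.ndarray) -> dict:
    """Eq. (1): dz = DSM - z, then the systematic (ME), random (STD),
    and compound (RMSE) error components discussed in the text."""
    dz = dsm_z - checkpoint_z
    me = float(np.mean(dz))                 # trueness: bias
    std = float(np.std(dz, ddof=1))         # precision: dispersion
    rmse = float(np.sqrt(np.mean(dz**2)))   # compound of ME and STD
    return {"ME": me, "STD": std, "RMSE": rmse}

# With zero bias, RMSE is close to STD; adding a bias inflates RMSE only.
rng = np.random.default_rng(0)
z_true = rng.uniform(80.0, 110.0, 21)  # 21 synthetic vertical checkpoints (m)
print(error_metrics(z_true + rng.normal(0.00, 0.03, 21), z_true))
print(error_metrics(z_true + rng.normal(-0.05, 0.03, 21), z_true))
```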

3 Results

Table 1 and Fig. 4 depict the accuracy of the horizontal coordinates on the 3D CPs with respect to the number of GCPs utilized. With all base stations, the RMSEXY values are close to 1 GSD (~3 cm) using no GCPs, and there is only a slight improvement when the number of GCPs is increased.


Table 1 Horizontal root mean square errors (RMSEXY), mean errors (MEXY), and standard deviations (STDXY) on the 3D checkpoints with different base stations and GCP configurations

GCPs               0      1      4      6      8      10     12
iBase XY
  RMSEXY (cm)    3.09   3.03   3.03   2.96   2.78   2.70   2.61
  MEXY (cm)      2.70   2.64   2.60   2.56   2.41   2.34   2.27
  STDXY (cm)     1.52   1.50   1.54   1.49   1.39   1.35   1.28
Topcon XY
  RMSEXY (cm)    3.66   3.57   3.49   3.41   3.23   3.13   3.03
  MEXY (cm)      3.61   3.52   3.42   3.34   3.16   3.05   2.96
  STDXY (cm)     0.61   0.60   0.68   0.66   0.70   0.69   0.63
FinnRef XY
  RMSEXY (cm)    2.54   2.54   2.45   2.32   2.21   2.18   2.16
  MEXY (cm)      2.42   2.40   2.31   2.18   2.08   2.05   2.03
  STDXY (cm)     0.79   0.81   0.79   0.80   0.73   0.73   0.73

Fig. 4 The relationship between the number of GCPs utilized and the horizontal difference between the generated models and 3D checkpoints with each PPK dataset

When looking at the systematic and random components of the errors, i.e., the mean error (MEXY) and standard deviation (STDXY), some differences can be seen. With iBase and FinnRef, some improvement is seen in both MEXY and STDXY. With Topcon, a clear improvement is seen, mostly in MEXY, which remains the highest of the three, while STDXY is the smallest of the three base stations, regardless of the number of GCPs utilized. Levene's test [36] indicates that, at the p < 0.05 level, the population variances with individual base stations are not significantly different (p = 0.95 to 0.99). There is a strong (r = −0.96 to −0.99) and statistically significant (p < 0.001) negative Pearson correlation between the number of GCPs and RMSEXY with each base station. A similar correlation (r = −0.98 to −0.99, p < 0.001) is seen between the number of GCPs and MEXY. However, the Kruskal-Wallis test [37] indicates no statistically significant differences between the populations (p = 0.20 to 0.97).

Regarding the vertical accuracy of the 3D CPs (Table 2 and Fig. 5), the RMSEZ values for the iBase and Topcon datasets are close to 2 GSD without any GCPs, whereas with FinnRef CORS data, the RMSEZ is noticeably below 1 GSD with zero GCPs (Table 2).


Table 2 Vertical root mean square errors (RMSEZ), mean errors (MEZ), and standard deviations (STDZ) on the 3D checkpoints with different base stations and GCP configurations

GCPs              0       1       4       6       8       10      12
iBase Z
  RMSEZ (cm)    6.43    5.83    4.44    4.40    3.81    3.18    2.84
  MEZ (cm)     −6.30   −5.69   −4.25   −4.22   −3.66   −2.99   −2.62
  STDZ (cm)     1.28    1.28    1.29    1.26    1.06    1.08    1.09
Topcon Z
  RMSEZ (cm)    6.32    5.67    4.27    4.30    3.72    3.09    2.79
  MEZ (cm)     −6.20   −5.55   −4.10   −4.15   −3.58   −2.92   −2.59
  STDZ (cm)     1.18    1.18    1.19    1.13    1.00    1.03    1.05
FinnRef Z
  RMSEZ (cm)    2.60    2.02    1.55    1.29    1.23    1.21    1.20
  MEZ (cm)      2.22    1.49    0.89    0.09   −0.18   −0.02   −0.04
  STDZ (cm)     1.34    1.36    1.27    1.29    1.21    1.21    1.20

Fig. 5 The relationship between the number of GCPs and the vertical difference between the generated models and 3D checkpoints with each PPK dataset

With all base stations, the RMSEZ values roughly halve when increasing the number of GCPs to ten. Most of the improvement in vertical accuracy is associated with a reduction in the systematic bias, i.e., the mean error, whereas only a very minor improvement is seen in the standard deviations with an increasing number of GCPs. Correspondingly, Levene's test indicates that the population variances for individual base stations are not significantly different (p = 0.99). There is a strong (r = −0.88 to −0.98) and statistically significant (p < 0.01) negative correlation between the number of GCPs and RMSEZ with each base station. There is also a similar correlation between the number of GCPs and MEZ, although in the case of iBase and Topcon it is positive due to the bias being a negative one. Furthermore, the Kruskal-Wallis test indicates that the populations are significantly different (p < 0.01). With iBase and Topcon, pairwise comparison with Dunn's post hoc test [38] further indicates that the means are significantly different when increasing the number of GCPs from 0 to 8–12 GCPs (p < 0.01) and from 1 to 10–12 GCPs (p < 0.001). With FinnRef, Dunn's test indicates that the means are significantly different with an increase from 0 to 6–12 GCPs (p = 0.03 to 0.004). When comparing the base station data to each other, Dunn's test indicates that there are no statistically significant differences between iBase and Topcon. However, a statistically significant difference is seen when comparing FinnRef to either iBase or Topcon (p < 0.001).


Table 3 Vertical root mean square errors (RMSEZ), mean errors (MEZ), and standard deviations (STDZ) obtained from the vertical checkpoints (CPs) with different base stations and GCP configurations

GCPs              0       1       4       6       8       10      12      24
iBase CPs
  RMSEZ (cm)    5.71    5.53    4.26    4.07    3.86    3.48    3.38    2.96
  MEZ (cm)     −4.94   −4.64   −3.05   −2.82   −2.19   −1.71   −1.28   −0.10
  STDZ (cm)     2.86    3.00    2.98    2.93    3.17    3.04    3.13    2.96
Topcon CPs
  RMSEZ (cm)    5.59    4.92    4.00    4.21    3.50    3.37    3.02    2.97
  MEZ (cm)     −4.73   −4.17   −2.67   −2.70   −2.14   −1.48   −1.19   −0.08
  STDZ (cm)     2.98    2.62    2.98    3.24    2.77    3.03    2.77    2.97
FinnRef CPs
  RMSEZ (cm)    4.98    4.15    3.92    3.42    2.94    3.14    3.13    3.31
  MEZ (cm)      3.90    2.98    2.44    1.60    1.30    1.41    1.38    1.64
  STDZ (cm)     3.10    2.89    3.06    3.02    2.63    2.80    2.81    2.88

Fig. 6 The relationship between the number of GCPs and the vertical difference between the generated DSMs and vertical checkpoints with each PPK dataset

The comparison between the vertical checkpoints and the generated DSMs shows similar results (Table 3 and Fig. 6), although these results are not as straightforward as those obtained from the BBA products and 3D checkpoints. The RMSEZ values are just under 2 GSD without any GCPs for each dataset, and the values improve with the addition of more GCPs. iBase and Topcon reach the ~1 GSD level when 12–24 GCPs are included, whereas FinnRef reaches the same level at 8 GCPs. The improvement is again mainly associated with a reduction in mean error, whereas the standard deviations remain practically the same, regardless of the number of GCPs. In the case of FinnRef, the RMSEZ values become stable at 8 GCPs, and a reduction in accuracy, although negligible, is seen when further increasing the number of GCPs.


With iBase and Topcon, there is a strong (r = −0.86 to −0.83) and statistically significant (p = 0.01 to 0.006) correlation between the number of GCPs and the RMSEZ of the vertical checkpoints. With FinnRef, there is also a correlation (r = −0.63), although it is not statistically significant (p = 0.09). Similar results are obtained with respect to MEZ. The Kruskal-Wallis test indicates that in the case of iBase (p < 0.001) and Topcon (p < 0.001), the populations are significantly different at the p < 0.05 level. Pairwise comparison with Dunn's test further indicates that in the case of iBase, the means are significantly different with an increase from 0–1 to 12–24 GCPs (p = 0.04 to 0.0001), and in the case of Topcon, from 0 to 10–24 GCPs (p = 0.04 to 0.0002) or from 1 to 24 GCPs (p = 0.002). In the case of FinnRef, the Kruskal-Wallis test indicates that the populations are not significantly different (p = 0.13).
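For reference, the statistical tests used above are available in standard Python packages; the following is a minimal sketch, assuming the per-configuration error samples are held as arrays (the group values here are synthetic stand-ins, and Dunn's test comes from the scikit-posthocs package).

```python
import numpy as np
from scipy.stats import kruskal, levene
import scikit_posthocs as sp  # pip install scikit-posthocs

rng = np.random.default_rng(1)
# Illustrative stand-ins for checkpoint errors (m) at different GCP counts
groups = [rng.normal(-0.05, 0.013, 12),   # 0 GCPs: clear bias
          rng.normal(-0.03, 0.013, 12),   # 6 GCPs: reduced bias
          rng.normal(0.00, 0.012, 12)]    # 12 GCPs: bias removed

print("Levene:", levene(*groups))          # equality of variances
print("Kruskal-Wallis:", kruskal(*groups)) # equality of distributions
print(sp.posthoc_dunn(groups, p_adjust="bonferroni"))  # pairwise post hoc
```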

4 Discussion

All the base stations provided RMSEXY values close to the GSD of 3 cm, even without GCPs. There was a strong and statistically significant correlation between the accuracy and the number of GCPs, but the improvements were minor. Furthermore, no statistically significant differences could be found between the populations. This was somewhat expected, since previous studies have highlighted how surveys with AAT generally have horizontal accuracies comparable to indirect georeferencing and, in that regard, benefit little from the addition of GCPs [19, 26–29, 32, 33]. FinnRef provided the best horizontal accuracy, but the difference in RMSEXY compared to iBase was only 0.44–0.64 cm (17.9–24.4%), and to Topcon 0.86–1.12 cm (33.3–38.1%). Generally, there is less room to improve the accuracy in the horizontal direction, as the 0 GCP results are already close to the GSD, and it is challenging to obtain accuracy well below the GSD. The generally very good horizontal accuracy of AAT surveys is the reason why this and other studies tend to focus more on vertical accuracy.

All the base stations provided highly usable (~1–2 GSD) vertical accuracy for various applications even without GCPs. However, previous research has suggested that when flights are repeated, some may occasionally have a noticeable vertical bias even when the equipment and flight parameters are unchanged [19, 26]. Thus, a recommendation to utilize at least one GCP is warranted. Many studies report that the addition of a single GCP to the RTK/PPK approach (e.g., RTK+1GCP) helped to remove or greatly reduce potential vertical bias [19, 26–29]. This, however, was not the case in this study, as only a slight and non-significant improvement in vertical accuracy was observed with the addition of a single GCP. This might be due to the large area, and the consequently large number of images collected in this study, when compared to some of the earlier studies. Also, the bias in this study was at worst only ~2 GSD, whereas some of the earlier studies had biases of up to ~4 GSD without GCPs. Thus, there was also less room for significant improvement with a single GCP.

The obtained results also stand in opposition to those reported by Stott et al. [33], who tested 0–6 GCPs with AAT relying on RTK and found no trend of decreasing errors with an increasing number of GCPs. The results of Stott et al. [33] also differ in that the vertical error in their study is mainly linked to a larger random component of the error (i.e., standard deviation) instead of systematic error (i.e., mean error or bias). It is not obvious what is behind these discrepancies. One possible source is the utilized processing parameters. However, it is difficult to be certain, as detailed parameters are not given and they utilized different software (Pix4D). As described by Benassi et al. [19], the relative weighting of the pixel coordinates of tie points, camera station positions, and GCP coordinates during BBA is of key importance in AAT. They [19] tested how the horizontal and vertical accuracies of BBA are affected by assigning different precisions to camera projection centers and found that the vertical accuracy was particularly affected, with an RMSEZ range of up to 8 cm with both Agisoft and Pix4D.

It should be noted that the aim of this study was to compare how the number of GCPs and different base station data in the PPK workflow affect the "out-of-the-box" accuracy, not to demonstrate the absolute best achievable accuracy. Thus, parameter recommendations from the manufacturer or from the literature were utilized, although experience indicates that the accuracy can sometimes be significantly improved by finding optimal processing parameters [20]. Furthermore, the chosen accuracy and quality settings in the image alignment and depth map generation were not the highest available, in order to keep the processing times moderate. Nevertheless, the results were very satisfactory.
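To make the weighting discussion above concrete, the sketch below shows how such observation precisions could be assigned through Agisoft Metashape's Python API before re-running the adjustment. The numeric values are purely illustrative assumptions, and the attribute names reflect recent Metashape Pro versions, so they should be verified against the installed release.

```python
# Hypothetical sketch: assigning relative weights to camera positions,
# GCP coordinates, and tie points before bundle block adjustment in
# Agisoft Metashape Pro (run from the GUI's Python console).
import Metashape

chunk = Metashape.app.document.chunk

# Precision of PPK-derived camera projection centers, XYZ in meters.
chunk.camera_location_accuracy = Metashape.Vector([0.03, 0.03, 0.06])

# Precision of GCP (marker) coordinates, XYZ in meters.
chunk.marker_location_accuracy = Metashape.Vector([0.02, 0.02, 0.03])

# Image-space precision of marker projections and tie points, in pixels.
chunk.marker_projection_accuracy = 0.5
chunk.tiepoint_accuracy = 1.0

# Re-run the adjustment (camera optimization) with the new weighting.
chunk.optimizeCameras()
```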

When assessing the vertical accuracy of the BBA results directly (i.e., 3D checkpoints), FinnRef clearly provided the best results. The RMSEZ was already below 1 GSD with no GCPs and further decreased with the addition of more GCPs. The accuracy became stable at around 6 GCPs, and no further improvement was observed. It should be noted that, by this point, the systematic error (i.e., bias) is practically zero and the error is fully due to standard deviation (~1.2–1.3 cm). Thus, the vertical accuracy is by this point likely limited by the vertical accuracy of the GNSS receivers utilized in the measurement of the point coordinates. Further improvements would likely require more accurate measurements of the GCP and CP coordinates by, for example, total station or leveling.

Both local base stations provided practically equal results regarding vertical accuracy. The RMSEZ values were close to 2 GSD with no GCPs and gradually decreased with the addition of more GCPs, reaching the ~1 GSD level by 10 GCPs. All the base stations showed similar behavior, i.e., standard deviations remained roughly the same, only slightly above 1 cm, with the improvement mostly linked to a decrease in mean error that was strongly correlated with the number of GCPs.

When assessing the vertical accuracy of the produced DSMs, the results are slightly more variable, and the differences between the local base stations and the CORS-based virtual reference station (FinnRef) are smaller. In the DSM products, the improvement in vertical accuracy is also mostly seen in the reduction of bias. The random component of error (STDZ) remains roughly the same, ~2 cm larger than in the assessment of the BBA products, regardless of the number of GCPs utilized.
iBase and Topcon ended up producing equal, ~1 GSD accuracy when 12–24 GCPs were utilized. FinnRef reached the same level with 8 GCPs, and no further improvements were observed. The differences between the BBA and DSM assessments are not unexpected, as DSM generation is an additional step in the process, and the generated DSMs also inherit dense matching errors and DSM interpolation errors [26]. Also, when assessing the accuracy of DSMs, vertical accuracy can be linked to horizontal accuracy [28].

Based on these results and the previous studies, some recommendations can be made for surveys utilizing AAT. If the aim is simply to acquire a very high-resolution orthophotomosaic of the survey area, GCPs are likely not required. It is possible to achieve ~1–2 GSD vertical accuracy even without GCPs, which might be completely suitable, for example, for applications in agriculture or forest management, where 1 m accuracy for tree height estimates might be tolerable [29, 39]. Generally, the recommendation is still to utilize at least one GCP to avoid occasional cases of high bias. More GCPs are beneficial for surveys requiring higher accuracy, such as snow depth mapping, which must consider error propagation when differencing between models [40], or construction monitoring, where tolerances are low [29, 41]. The analysis of the BBA products indicates a significant improvement in vertical accuracy to the ~1 GSD level by 10 GCPs, independent of the utilized base station. If the results are considered in proportion to the survey area size or the number of collected images, this gives easy-to-remember recommendations, i.e., ~0.1 GCP per ha or 1 GCP per ~250 images, as illustrated in the worked example below. Although the placement of GCPs requires time and effort, these requirements are far below the "optimal" values for indirect georeferencing, i.e., ~1.7 GCP per ha [24] or 4 GCPs per 100 images [25] for a vertical accuracy of ~1–1.5 GSD. However, the results and recommendations might not be generalizable to larger sites or sites with different terrains.
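As a quick worked example of these rules of thumb, the following sketch derives the recommended GCP count for a site of a given size or image count; the site parameters below are hypothetical, and the helper functions are purely illustrative.

```python
# Worked example of the rule-of-thumb GCP density recommendations above:
# ~0.1 GCP per hectare, or 1 GCP per ~250 images, in an AAT/PPK workflow,
# versus ~1.7 GCP per ha [24] or 4 GCPs per 100 images [25] for indirect
# georeferencing. Site parameters are hypothetical.

def gcps_by_area(area_ha: float, density: float = 0.1) -> int:
    """Recommended GCP count from the survey area in hectares."""
    return max(1, round(area_ha * density))  # always keep at least one GCP

def gcps_by_images(n_images: int, images_per_gcp: int = 250) -> int:
    """Recommended GCP count from the number of collected images."""
    return max(1, round(n_images / images_per_gcp))

area_ha = 100     # a ~1 km2 site, as in this study
n_images = 2500   # hypothetical image count for such a site

print(gcps_by_area(area_ha))               # -> 10 GCPs (AAT/PPK)
print(gcps_by_images(n_images))            # -> 10 GCPs (AAT/PPK)
print(gcps_by_area(area_ha, density=1.7))  # -> 170 GCPs (indirect georef.)
```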

According to Benjamin et al. [28], the improvement of vertical accuracy with the addition of GCPs could be associated with a refinement of camera positions during BBA or with a refinement of the interior orientation parameters (IOPs) during self-calibration. In their results, Benjamin et al. [28] linked the improvement solely to the refinement of IOPs, as no differences were seen in camera station elevations with the addition of GCPs. In this study, the improvement can be linked to both, although it seems to be mostly due to the refinement of IOPs. For example, applying the adjusted camera parameters from the 12 GCP Topcon BBA to the 0 GCP case improves the BBA RMSEZ from 6.32 to 2.99 cm. If the improvement in accuracy is in most scenarios linked to the refinement of IOPs, there is likely a case-specific limit beyond which additional GCPs yield diminishing returns from camera calibration. More research is required regarding this issue.
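An IOP transfer of this kind can be scripted; the sketch below outlines one way it could be done with Agisoft Metashape's Python API, exporting the self-calibrated camera model from a GCP-supported project and fixing it in a 0 GCP project. File paths are hypothetical, and the attribute names should be checked against the installed Metashape Pro version.

```python
# Hypothetical sketch: transferring adjusted interior orientation
# parameters (IOPs) from a GCP-supported adjustment to a 0 GCP project
# in Agisoft Metashape Pro (Python API).
import Metashape

# In the GCP-supported project: save the self-calibrated camera model.
calibrated_chunk = Metashape.app.document.chunk
calibrated_chunk.sensors[0].calibration.save("adjusted_iops.xml")

# In the 0 GCP project: load the calibration, fix it, and re-adjust.
doc = Metashape.Document()
doc.open("survey_0gcp.psx")  # hypothetical project path
sensor = doc.chunk.sensors[0]
calib = Metashape.Calibration()
calib.load("adjusted_iops.xml")
sensor.user_calib = calib
sensor.fixed_calibration = True  # keep IOPs fixed during optimization
doc.chunk.optimizeCameras()
```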

There was practically no difference in accuracy between the small QS iBase receiver provided with the UAS and the professional-grade Topcon HiPer V base, suggesting that the choice of physical base station is not critical. However, the measurements were done in an open area, and there might have been a greater difference in an area where, for example, trees could obstruct the visibility of the sky. Nevertheless, these results indicate that, at least in an open area, no additional benefits were obtained by utilizing the much bulkier professional-grade equipment.

One advantage of utilizing FinnRef or a similar CORS-based VRS solution, in addition to the better accuracy that was achieved at least in this case, is the lower possibility of manual measurement errors. Possible errors with a physical base station include errors in the setup, choosing a poor base location, and errors in measuring the base location. Some disadvantages may also be associated with the CORS-based approach, such as the service lacking the capability to provide virtual reference station data, a great distance to the nearest station, or the network being down. It is, however, possible to obtain redundancy by utilizing a small, pocketable receiver, such as the iBase, as a backup, since there is very little extra labor involved in carrying it to the site and setting it up.

In a recent study, Štroner et al. [42] investigated different data acquisition strategies (cross-flight patterns, different flight heights, oblique imaging, and combinations of the aforementioned) for AAT without GCPs. They found that a simple cross-flight pattern brought only minor improvements, while a combination of flights at different altitudes led to a significant improvement. The best results were obtained by combining nadiral image acquisition with oblique image acquisition. Future studies should investigate these strategies using AAT that includes multiple GCPs. Such studies could provide recommendations for a time-efficient total strategy for high-accuracy UAS photogrammetry using either multicopters capable of oblique imaging or fixed-wing platforms that typically rely on nadiral imaging.

5 Conclusions

An empirical study was performed to assess how the number of GCPs and different base station types affect model accuracy when mapping a ~1 km2 site using a UAS capable of PPK correction. The horizontal accuracy was roughly the same, in the ~1 GSD range, regardless of the base station and the number of GCPs. A strong correlation was observed between the horizontal accuracy and the number of GCPs, but the improvements were negligible. Greater improvements were seen in vertical accuracy. When assessing the vertical accuracy from the 3D checkpoints, both tested local base stations provided vertical RMSEZ close to 2 GSD with no GCPs, with statistically significant improvements in accuracy to ~1 GSD when 10–12 GCPs were utilized. The FinnRef virtual reference station provided exceptional